High-dimensional model estimation and model selection
CERN. Geneva
2015-01-01
I will review concepts and algorithms from high-dimensional statistics for linear model estimation and model selection. I will particularly focus on the so-called p>>n setting where the number of variables p is much larger than the number of samples n. I will focus mostly on regularized statistical estimators that produce sparse models. Important examples include the LASSO and its matrix extension, the Graphical LASSO, and more recent non-convex methods such as the TREX. I will show the applicability of these estimators in a diverse range of scientific applications, such as sparse interaction graph recovery and high-dimensional classification and regression problems in genomics.
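The p >> n regime described above can be illustrated with a minimal sketch using scikit-learn's Lasso on synthetic data; all sizes, coefficients, and the penalty value are illustrative assumptions, not taken from the talk:

```python
# Sketch: sparse linear model recovery in the p >> n regime with the LASSO.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, p = 50, 500                                 # far more variables than samples
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:5] = [3.0, -2.0, 1.5, 2.5, -1.0]         # only 5 truly active variables
y = X @ beta + 0.1 * rng.standard_normal(n)

model = Lasso(alpha=0.5).fit(X, y)
selected = np.flatnonzero(model.coef_)         # indices of nonzero coefficients
print(len(selected))                           # a sparse support despite p = 500
```

The L1 penalty is what produces the sparse model: most of the 500 coefficients are driven exactly to zero, and the strongest true signals survive.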
Feature selection for high-dimensional integrated data
Zheng, Charles; Schwartz, Scott; Chapkin, Robert S.; Carroll, Raymond J.; Ivanov, Ivan
2012-01-01
Motivated by the problem of identifying correlations between genes or features of two related biological systems, we propose a model of feature selection in which only a subset of the predictors Xt are dependent on the multidimensional variate Y, and the remainder of the predictors constitute a “noise set” Xu independent of Y. Using Monte Carlo simulations, we investigated the relative performance of two methods, thresholding and singular-value decomposition, in combination with stochastic optimization to determine “empirical bounds” on the small-sample accuracy of an asymptotic approximation. We demonstrate the utility of the thresholding and SVD feature selection methods with respect to a recent infant intestinal gene expression and metagenomics dataset.
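The thresholding idea above can be sketched as follows: keep predictors whose maximal absolute correlation with any component of the multivariate response Y exceeds a cutoff, treating the rest as the independent "noise set". The data, signal strength, and cutoff are illustrative assumptions:

```python
# Sketch: correlation thresholding to separate Y-dependent predictors (Xt)
# from a noise set (Xu) independent of Y.
import numpy as np

rng = np.random.default_rng(1)
n, p, q = 200, 30, 3
Y = rng.standard_normal((n, q))
X = rng.standard_normal((n, p))
X[:, :4] += Y @ (2.0 * rng.standard_normal((q, 4)))   # first 4 predictors depend on Y

# column-standardize, then sample correlations are inner products / n
Xs = (X - X.mean(0)) / X.std(0)
Ys = (Y - Y.mean(0)) / Y.std(0)
corr = Xs.T @ Ys / n                 # p x q matrix of sample correlations
score = np.abs(corr).max(axis=1)     # strongest correlation with any Y component
kept = np.flatnonzero(score > 0.3)   # survivors of the threshold
print(kept)
```

The SVD variant in the paper instead correlates predictors with leading singular directions of Y; the thresholding step itself is the same.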
International Nuclear Information System (INIS)
Zhang, Wuhong; Su, Ming; Wu, Ziwen; Lu, Meng; Huang, Bingwei; Chen, Lixiang
2013-01-01
Twisted photons enable the definition of a Hilbert space beyond two dimensions by orbital angular momentum (OAM) eigenstates. Here we propose a feasible entanglement concentration experiment, to enhance the quality of high-dimensional entanglement shared by twisted photon pairs. Our approach starts from the full characterization of the entangled spiral bandwidth, and is then based on the careful selection of the Laguerre–Gaussian (LG) modes with specific radial and azimuthal indices p and ℓ. In particular, we demonstrate the possibility of high-dimensional entanglement concentration residing in the OAM subspace of up to 21 dimensions. By means of LabVIEW simulations with spatial light modulators, we show that the Shannon dimensionality could be employed to quantify the quality of the present concentration. Our scheme holds promise in quantum information applications defined in high-dimensional Hilbert space. (letter)
McParland, D; Phillips, C M; Brennan, L; Roche, H M; Gormley, I C
2017-12-10
The LIPGENE-SU.VI.MAX study, like many others, recorded high-dimensional continuous phenotypic data and categorical genotypic data. LIPGENE-SU.VI.MAX focuses on the need to account for both phenotypic and genetic factors when studying the metabolic syndrome (MetS), a complex disorder that can lead to higher risk of type 2 diabetes and cardiovascular disease. Interest lies in clustering the LIPGENE-SU.VI.MAX participants into homogeneous groups or sub-phenotypes, by jointly considering their phenotypic and genotypic data, and in determining which variables are discriminatory. A novel latent variable model that elegantly accommodates high dimensional, mixed data is developed to cluster LIPGENE-SU.VI.MAX participants using a Bayesian finite mixture model. A computationally efficient variable selection algorithm is incorporated, estimation is via a Gibbs sampling algorithm and an approximate BIC-MCMC criterion is developed to select the optimal model. Two clusters or sub-phenotypes ('healthy' and 'at risk') are uncovered. A small subset of variables is deemed discriminatory, which notably includes phenotypic and genotypic variables, highlighting the need to jointly consider both factors. Further, 7 years after the LIPGENE-SU.VI.MAX data were collected, participants underwent further analysis to diagnose presence or absence of the MetS. The two uncovered sub-phenotypes strongly correspond to the 7-year follow-up disease classification, highlighting the role of phenotypic and genotypic factors in the MetS and emphasising the potential utility of the clustering approach in early screening. Additionally, the ability of the proposed approach to define the uncertainty in sub-phenotype membership at the participant level is synonymous with the concepts of precision medicine and nutrition. Copyright © 2017 John Wiley & Sons, Ltd.
Bayesian Multiresolution Variable Selection for Ultra-High Dimensional Neuroimaging Data.
Zhao, Yize; Kang, Jian; Long, Qi
2018-01-01
Ultra-high dimensional variable selection has become increasingly important in analysis of neuroimaging data. For example, in the Autism Brain Imaging Data Exchange (ABIDE) study, neuroscientists are interested in identifying important biomarkers for early detection of the autism spectrum disorder (ASD) using high resolution brain images that include hundreds of thousands of voxels. However, most existing methods are not feasible for solving this problem due to their extensive computational costs. In this work, we propose a novel multiresolution variable selection procedure under a Bayesian probit regression framework. It recursively uses posterior samples for coarser-scale variable selection to guide the posterior inference on finer-scale variable selection, leading to very efficient Markov chain Monte Carlo (MCMC) algorithms. The proposed algorithms are computationally feasible for ultra-high dimensional data. Also, our model incorporates two levels of structural information into variable selection using Ising priors: the spatial dependence between voxels and the functional connectivity between anatomical brain regions. Applied to the resting state functional magnetic resonance imaging (R-fMRI) data in the ABIDE study, our methods identify voxel-level imaging biomarkers highly predictive of the ASD, which are biologically meaningful and interpretable. Extensive simulations also show that our methods achieve better performance in variable selection compared to existing methods.
A Feature Subset Selection Method Based On High-Dimensional Mutual Information
Directory of Open Access Journals (Sweden)
Chee Keong Kwoh
2011-04-01
Feature selection is an important step in building accurate classifiers and provides better understanding of the data sets. In this paper, we propose a feature subset selection method based on high-dimensional mutual information. We also propose to use the entropy of the class attribute as a criterion to determine the appropriate subset of features when building classifiers. We prove that if the mutual information between a feature set X and the class attribute Y equals the entropy of Y, then X is a Markov blanket of Y. We show that in some cases it is infeasible to approximate the high-dimensional mutual information with algebraic combinations of pairwise mutual information in any form. In addition, exhaustive searches over all combinations of features are a prerequisite for finding the optimal feature subsets for classifying these kinds of data sets. We show that our approach outperforms existing filter feature subset selection methods for most of the 24 selected benchmark data sets.
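The entropy stopping criterion above can be sketched directly: grow a feature subset S until I(X_S; Y) reaches H(Y), at which point S is a Markov blanket of Y. The XOR toy data below is an illustrative assumption that also shows why pairwise mutual information alone cannot work:

```python
# Sketch: high-dimensional MI with the entropy criterion I(X_S; Y) = H(Y).
import numpy as np
from itertools import combinations

def entropy(labels):
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -(p * np.log2(p)).sum()

def mutual_info(X_cols, y):
    # I(X_S; Y) = H(Y) - H(Y | X_S), with X_S encoded as joint symbols
    joint = np.array([hash(tuple(row)) for row in X_cols])
    h_cond = 0.0
    for v in np.unique(joint):
        mask = joint == v
        h_cond += mask.mean() * entropy(y[mask])
    return entropy(y) - h_cond

rng = np.random.default_rng(2)
X = rng.integers(0, 2, size=(400, 4))
y = X[:, 0] ^ X[:, 1]          # XOR: near-zero pairwise MI, full joint MI

# each single feature is nearly useless ...
singles = [mutual_info(X[:, [j]], y) for j in range(4)]
# ... but the right pair attains I(X_S; Y) = H(Y), i.e. a Markov blanket
best_pair = max(combinations(range(4), 2),
                key=lambda s: mutual_info(X[:, list(s)], y))
print(best_pair)
```

This is exactly the kind of data set the abstract mentions, where no algebraic combination of pairwise MI values reveals the relevant pair.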
Inference for feature selection using the Lasso with high-dimensional data
DEFF Research Database (Denmark)
Brink-Jensen, Kasper; Ekstrøm, Claus Thorn
2014-01-01
Penalized regression models such as the Lasso have proved useful for variable selection in many fields - especially for situations with high-dimensional data where the number of predictors far exceeds the number of observations. These methods identify and rank variables of importance but do not generally provide any inference of the selected variables. Thus, the variables selected might be the "most important" but need not be significant. We propose a significance test for the selection found by the Lasso. We introduce a procedure that computes inference and p-values for features chosen by the Lasso. This method rephrases the null hypothesis and uses a randomization approach which ensures that the error rate is controlled even for small samples. We demonstrate the ability of the algorithm to compute p-values of the expected magnitude with simulated data using a multitude of scenarios...
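A simplified stand-in for the randomization idea above: compare a Lasso-selected feature's fitted coefficient magnitude against its distribution when that single column is randomly permuted, breaking its link to the response. The data, penalty, and permutation count are illustrative assumptions, not the paper's exact procedure:

```python
# Sketch: randomization p-value for a Lasso-selected feature.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(3)
n, p = 80, 120
X = rng.standard_normal((n, p))
y = 2.0 * X[:, 0] + rng.standard_normal(n)     # feature 0 is truly active

def coef_of(j, Xmat):
    return np.abs(Lasso(alpha=0.1).fit(Xmat, y).coef_[j])

observed = coef_of(0, X)
null = []
for _ in range(50):                 # permute column 0 to simulate the null
    Xp = X.copy()
    Xp[:, 0] = rng.permutation(Xp[:, 0])
    null.append(coef_of(0, Xp))

# add-one p-value keeps the estimate away from zero for small samples
p_value = (1 + sum(v >= observed for v in null)) / (1 + len(null))
print(p_value)
```

Because the null distribution is generated by randomization rather than asymptotics, the error rate is controlled even at this small sample size, which is the point the abstract makes.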
Selecting Optimal Feature Set in High-Dimensional Data by Swarm Search
Directory of Open Access Journals (Sweden)
Simon Fong
2013-01-01
Selecting the right set of features from data of high dimensionality for inducing an accurate classification model is a tough computational challenge. It is almost an NP-hard problem, as the number of feature combinations escalates exponentially with the number of features. Unfortunately, in data mining, as well as in other engineering applications and bioinformatics, some data are described by a long array of features. Many feature subset selection algorithms have been proposed in the past, but not all of them are effective. Since exhaustively trying every possible combination of features by brute force takes seemingly forever, stochastic optimization may be a solution. In this paper, we propose a new feature selection scheme called Swarm Search to find an optimal feature set by using metaheuristics. The advantage of Swarm Search is its flexibility in integrating any classifier into its fitness function and plugging in any metaheuristic algorithm to facilitate heuristic search. Simulation experiments are carried out by testing Swarm Search over several high-dimensional datasets, with different classification algorithms and various metaheuristic algorithms. The comparative experiment results show that Swarm Search is able to attain relatively low error rates in classification without shrinking the size of the feature subset to its minimum.
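A minimal stand-in for the wrapper scheme above: a candidate feature mask is scored by a plug-in classifier (here k-NN with cross-validation, an assumed choice), and a stochastic search perturbs the best mask found so far. A real Swarm Search would plug in a full metaheuristic (PSO, bat, firefly, ...) instead of this simple hill climb:

```python
# Sketch: stochastic wrapper feature-subset search with a classifier as fitness.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)
n, p = 150, 20
X = rng.standard_normal((n, p))
y = (X[:, 0] + X[:, 1] - X[:, 2] > 0).astype(int)   # 3 informative features

def fitness(mask):
    if not mask.any():
        return 0.0
    clf = KNeighborsClassifier(n_neighbors=5)
    return cross_val_score(clf, X[:, mask], y, cv=3).mean()

best = rng.random(p) < 0.5            # random initial feature mask
best_fit = fitness(best)
for _ in range(30):                   # perturb-and-keep search loop
    cand = best.copy()
    flips = rng.integers(0, p, size=2)
    cand[flips] = ~cand[flips]        # flip two feature bits
    f = fitness(cand)
    if f > best_fit:
        best, best_fit = cand, f
print(best_fit)
```

The key design point from the abstract survives even in this sketch: the classifier is an interchangeable component inside the fitness function, so any learner and any perturbation strategy can be plugged in.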
Prediction-Oriented Marker Selection (PROMISE): With Application to High-Dimensional Regression.
Kim, Soyeon; Baladandayuthapani, Veerabhadran; Lee, J Jack
2017-06-01
In personalized medicine, biomarkers are used to select therapies with the highest likelihood of success based on an individual patient's biomarker/genomic profile. Two goals are to choose important biomarkers that accurately predict treatment outcomes and to cull unimportant biomarkers to reduce the cost of biological and clinical verifications. These goals are challenging due to the high dimensionality of genomic data. Variable selection methods based on penalized regression (e.g., the lasso and elastic net) have yielded promising results. However, selecting the right amount of penalization is critical to simultaneously achieving these two goals. Standard approaches based on cross-validation (CV) typically provide high prediction accuracy with high true positive rates but at the cost of too many false positives. Alternatively, stability selection (SS) controls the number of false positives, but at the cost of yielding too few true positives. To circumvent these issues, we propose prediction-oriented marker selection (PROMISE), which combines SS with CV to gain the advantages of both methods. Our application of PROMISE with the lasso and elastic net in data analysis shows that, compared to CV, PROMISE produces sparse solutions, few false positives, and small type I + type II error, and maintains good prediction accuracy, with a marginal decrease in the true positive rates. Compared to SS, PROMISE offers better prediction accuracy and true positive rates. In summary, PROMISE can be applied in many fields to select regularization parameters when the goals are to minimize false positives and maximize prediction accuracy.
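The stability-selection half of the method above can be sketched as follows: fit the lasso on many random half-samples and record how often each variable is selected; PROMISE then chooses the penalty by combining such frequency profiles with CV. Data, penalty, and the 80% cutoff are illustrative assumptions:

```python
# Sketch: stability-selection frequencies from repeated half-sample Lasso fits.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(5)
n, p = 100, 200
X = rng.standard_normal((n, p))
y = 2 * X[:, 0] + 2 * X[:, 1] + rng.standard_normal(n)   # two true markers

B = 40
freq = np.zeros(p)
for _ in range(B):
    idx = rng.choice(n, size=n // 2, replace=False)       # random half-sample
    coef = Lasso(alpha=0.2).fit(X[idx], y[idx]).coef_
    freq += coef != 0                                     # count selections
freq /= B
stable = np.flatnonzero(freq >= 0.8)   # markers selected in >= 80% of fits
print(stable)
```

False positives appear in only a few of the subsampled fits and so fall below the frequency cutoff, which is how SS controls them; the abstract's point is that CV alone would keep many of them.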
Taşkin Kaya, Gülşen
2013-10-01
Recently, earthquake damage assessment using satellite images has been a very popular ongoing research direction. Especially with the availability of very high resolution (VHR) satellite images, quite detailed damage maps at the building scale have been produced, and various studies have been conducted in the literature. As the spatial resolution of satellite images increases, distinguishing damage patterns becomes more difficult, especially when only spectral information is used during classification. In order to overcome this difficulty, textural information needs to be incorporated into the classification to improve the visual quality and reliability of the damage map. Many kinds of textural information can be derived from VHR satellite images, depending on the algorithm used. However, extracting and evaluating textural information is generally a time-consuming process, especially for large areas affected by an earthquake, due to the size of the VHR image. Therefore, in order to provide a quick damage map, the most useful features describing damage patterns, as well as the redundant features, need to be known in advance. In this study, a very high resolution satellite image acquired after the Bam, Iran earthquake was used to identify the earthquake damage. Textural information was used during the classification in addition to spectral information. For textural information, second-order Haralick features were extracted from the panchromatic image for the area of interest using the gray level co-occurrence matrix with different window sizes and directions. In addition to using spatial features in classification, the most useful features representing the damage characteristics were selected with a novel feature selection method based on high dimensional model representation (HDMR), which gives the sensitivity of each feature during classification. The method called HDMR was recently proposed as an efficient tool to capture the input
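The second-order Haralick features mentioned above can be sketched directly in NumPy for one offset; the image data, gray-level count, and the particular feature set are illustrative assumptions, not the study's configuration:

```python
# Sketch: Haralick-style texture features from a gray level co-occurrence
# matrix (GLCM) for the horizontal offset (0, 1).
import numpy as np

def glcm_right(img, levels):
    """Normalized co-occurrence counts of horizontally adjacent gray pairs."""
    m = np.zeros((levels, levels))
    a, b = img[:, :-1].ravel(), img[:, 1:].ravel()
    np.add.at(m, (a, b), 1)
    return m / m.sum()

def haralick(P):
    i, j = np.indices(P.shape)
    contrast = ((i - j) ** 2 * P).sum()
    energy = (P ** 2).sum()
    homogeneity = (P / (1.0 + np.abs(i - j))).sum()
    return contrast, energy, homogeneity

rng = np.random.default_rng(6)
# horizontally smooth texture vs. pure noise, 8 gray levels
smooth = np.repeat(rng.integers(0, 8, size=(16, 8)), 2, axis=1)
noisy = rng.integers(0, 8, size=(16, 16))
c_s, e_s, h_s = haralick(glcm_right(smooth, 8))
c_n, e_n, h_n = haralick(glcm_right(noisy, 8))
print(c_s, c_n)
```

In the damage-mapping setting, such features are computed per window and per direction, which is why the feature set grows large and HDMR-based selection of the useful ones pays off.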
Feature Augmentation via Nonparametrics and Selection (FANS) in High-Dimensional Classification.
Fan, Jianqing; Feng, Yang; Jiang, Jiancheng; Tong, Xin
We propose a high dimensional classification method that involves nonparametric feature augmentation. Knowing that marginal density ratios are the most powerful univariate classifiers, we use the ratio estimates to transform the original feature measurements. Subsequently, penalized logistic regression is invoked, taking as input the newly transformed or augmented features. This procedure trains models equipped with local complexity and global simplicity, thereby avoiding the curse of dimensionality while creating a flexible nonlinear decision boundary. The resulting method is called Feature Augmentation via Nonparametrics and Selection (FANS). We motivate FANS by generalizing the Naive Bayes model, writing the log ratio of joint densities as a linear combination of those of marginal densities. It is related to generalized additive models, but has better interpretability and computability. Risk bounds are developed for FANS. In numerical analysis, FANS is compared with competing methods, so as to provide a guideline on its best application domain. Real data analysis demonstrates that FANS performs very competitively on benchmark email spam and gene expression data sets. Moreover, FANS is implemented by an extremely fast algorithm through parallel computing.
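The FANS transform above can be sketched as follows: replace each feature by an estimated marginal log density ratio log f1(x)/f0(x) and fit an L1-penalized logistic regression on the transformed features. The kernel density estimator, data, and solver are assumed choices; the paper's actual procedure also uses sample splitting:

```python
# Sketch: marginal density-ratio feature augmentation + penalized logistic fit.
import numpy as np
from scipy.stats import gaussian_kde
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)
n = 300
y = rng.integers(0, 2, size=n)
X = rng.standard_normal((n, 5))
X[y == 1, 0] += 2.0                  # class 1 shifts feature 0

Z = np.empty_like(X)
for j in range(X.shape[1]):
    f1 = gaussian_kde(X[y == 1, j])  # class-conditional marginal densities
    f0 = gaussian_kde(X[y == 0, j])
    Z[:, j] = np.log(f1(X[:, j]) + 1e-12) - np.log(f0(X[:, j]) + 1e-12)

clf = LogisticRegression(penalty="l1", solver="liblinear").fit(Z, y)
print(clf.score(Z, y))
```

Each transformed coordinate is itself the best univariate classifier for that feature, so the linear combination learned on Z yields a flexible nonlinear boundary in the original space, exactly the "local complexity, global simplicity" trade-off the abstract describes.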
Talib, Imran; Belgacem, Fethi Bin Muhammad; Asif, Naseer Ahmad; Khalil, Hammad
2017-01-01
In this research article, we derive and analyze an efficient spectral method based on the operational matrices of three-dimensional orthogonal Jacobi polynomials to numerically solve a generalized class of multi-term, high-dimensional fractional-order partial differential equations with mixed partial derivatives. We transform the considered fractional-order problem into an easily solvable system of algebraic equations with the aid of the operational matrices; solving this algebraic system then yields the solution of the problem. Some test problems are considered to confirm the accuracy and validity of the proposed numerical method. The convergence of the method is ensured by comparing the results of our Matlab simulations with exact solutions from the literature, yielding negligible errors. Moreover, comparative results discussed in the literature are extended and improved in this study.
Bhadra, Anindya
2013-04-22
We describe a Bayesian technique to (a) perform a sparse joint selection of significant predictor variables and significant inverse covariance matrix elements of the response variables in a high-dimensional linear Gaussian sparse seemingly unrelated regression (SSUR) setting and (b) perform an association analysis between the high-dimensional sets of predictors and responses in such a setting. To search the high-dimensional model space, where both the number of predictors and the number of possibly correlated responses can be larger than the sample size, we demonstrate that a marginalization-based collapsed Gibbs sampler, in combination with spike and slab type of priors, offers a computationally feasible and efficient solution. As an example, we apply our method to an expression quantitative trait loci (eQTL) analysis on publicly available single nucleotide polymorphism (SNP) and gene expression data for humans where the primary interest lies in finding the significant associations between the sets of SNPs and possibly correlated genetic transcripts. Our method also allows for inference on the sparse interaction network of the transcripts (response variables) after accounting for the effect of the SNPs (predictor variables). We exploit properties of Gaussian graphical models to make statements concerning conditional independence of the responses. Our method compares favorably to existing Bayesian approaches developed for this purpose. © 2013, The International Biometric Society.
Shahiri, Amirah Mohamed; Husain, Wahidah; Rashid, Nur'Aini Abd
2017-10-01
Huge amounts of data in educational datasets may cause problems in producing quality data. Recently, data mining approaches have been increasingly used by educational data mining researchers for analyzing data patterns. However, many research studies have concentrated on selecting suitable learning algorithms instead of performing a feature selection process. As a result, these data suffer from computational complexity and require longer computational time for classification. The main objective of this research is to provide an overview of feature selection techniques that have been used to analyze the most significant features. This research then proposes a framework to improve the quality of students' datasets. The proposed framework uses filter- and wrapper-based techniques to support the prediction process in a future study.
A study of metaheuristic algorithms for high dimensional feature selection on microarray data
Dankolo, Muhammad Nasiru; Radzi, Nor Haizan Mohamed; Sallehuddin, Roselina; Mustaffa, Noorfa Haszlinna
2017-11-01
Microarray systems enable experts to examine gene profiles at the molecular level using machine learning algorithms, increasing the potential for classification and diagnosis of many diseases at the gene expression level. However, numerous difficulties may affect the efficiency of machine learning algorithms, including the vast number of gene features in the original data. Many of these features may be unrelated to the intended analysis. Therefore, feature selection needs to be performed during data pre-processing. Many feature selection algorithms have been developed and applied to microarray data, including metaheuristic optimization algorithms. This paper discusses the application of metaheuristic algorithms for feature selection in microarray datasets. This study reveals that the algorithms have yielded interesting results with limited resources, thereby saving the computational expense of machine learning algorithms.
A Robust Supervised Variable Selection for Noisy High-Dimensional Data
Czech Academy of Sciences Publication Activity Database
Kalina, Jan; Schlenker, Anna
2015-01-01
Roč. 2015, Article 320385 (2015), s. 1-10 ISSN 2314-6133 R&D Projects: GA ČR GA13-17187S Institutional support: RVO:67985807 Keywords : dimensionality reduction * variable selection * robustness Subject RIV: BA - General Mathematics Impact factor: 2.134, year: 2015
Directory of Open Access Journals (Sweden)
Boulesteix Anne-Laure
2009-12-01
Abstract Background In biometric practice, researchers often apply a large number of different methods in a "trial-and-error" strategy to get as much as possible out of their data and, due to publication pressure or pressure from the consulting customer, present only the most favorable results. This strategy may induce a substantial optimistic bias in prediction error estimation, which is quantitatively assessed in the present manuscript. The focus of our work is on class prediction based on high-dimensional data (e.g. microarray data), since such analyses are particularly exposed to this kind of bias. Methods In our study we consider a total of 124 variants of classifiers (possibly including variable selection or tuning steps) within a cross-validation evaluation scheme. The classifiers are applied to original and modified real microarray data sets, some of which are obtained by randomly permuting the class labels to mimic non-informative predictors while preserving their correlation structure. Results We assess the minimal misclassification rate over the different variants of classifiers in order to quantify the bias arising when the optimal classifier is selected a posteriori in a data-driven manner. The bias resulting from parameter tuning (including gene selection parameters as a special case) and the bias resulting from the choice of the classification method are examined both separately and jointly. Conclusions The median minimal error rate over the investigated classifiers was as low as 31% and 41% based on permuted uninformative predictors from studies on colon cancer and prostate cancer, respectively. We conclude that the strategy of presenting only the optimal result is not acceptable because it yields a substantial bias in error rate estimation, and suggest alternative approaches for properly reporting classification accuracy.
Chiu, Mei Choi; Pun, Chi Seng; Wong, Hoi Ying
2017-08-01
Investors interested in the global financial market must analyze financial securities internationally. Making an optimal global investment decision involves processing a huge amount of data for a high-dimensional portfolio. This article investigates the big data challenges of two mean-variance optimal portfolios: continuous-time precommitment and constant-rebalancing strategies. We show that both optimized portfolios implemented with the traditional sample estimates converge to the worst performing portfolio when the portfolio size becomes large. The crux of the problem is the estimation error accumulated from the huge dimension of stock data. We then propose a linear programming optimal (LPO) portfolio framework, which applies a constrained ℓ1 minimization to the theoretical optimal control to mitigate the risk associated with the dimensionality issue. The resulting portfolio becomes a sparse portfolio that selects stocks with a data-driven procedure and hence offers a stable mean-variance portfolio in practice. When the number of observations becomes large, the LPO portfolio converges to the oracle optimal portfolio, which is free of estimation error, even though the number of stocks grows faster than the number of observations. Our numerical and empirical studies demonstrate the superiority of the proposed approach. © 2017 Society for Risk Analysis.
Travnik, Jaden B; Pilarski, Patrick M
2017-07-01
Prosthetic devices have advanced in their capabilities and in the number and type of sensors included in their design. As the space of sensorimotor data available to a conventional or machine learning prosthetic control system increases in dimensionality and complexity, it becomes increasingly important that this data be represented in a useful and computationally efficient way. Well-structured sensory data allows prosthetic control systems to make informed, appropriate control decisions. In this study, we explore the impact that increased sensorimotor information has on current machine learning prosthetic control approaches. Specifically, we examine the effect that high-dimensional sensory data has on the computation time and prediction performance of a true-online temporal-difference learning prediction method as embedded within a resource-limited upper-limb prosthesis control system. We present results comparing tile coding, the dominant linear representation for real-time prosthetic machine learning, with a newly proposed modification to Kanerva coding that we call selective Kanerva coding. In addition to showing promising results for selective Kanerva coding, our results confirm potential limitations to tile coding as the number of sensory input dimensions increases. To our knowledge, this study is the first to explicitly examine representations for real-time machine learning prosthetic devices in general terms. This work therefore provides an important step towards forming an efficient prosthesis-eye view of the world, wherein prompt and accurate representations of high-dimensional data may be provided to machine learning control systems within artificial limbs and other assistive rehabilitation technologies.
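The Kanerva-style representation above can be sketched as follows: a fixed set of random prototypes, with an observation represented by the indices of its k closest prototypes. Unlike tile coding, the memory and active-feature count do not grow exponentially with the input dimension. All parameter values are illustrative assumptions, not the prosthesis controller's settings:

```python
# Sketch: selective Kanerva coding via k-nearest random prototypes.
import numpy as np

rng = np.random.default_rng(8)
dim, n_protos, k = 12, 256, 10            # sensor dims, prototypes, active bits
prototypes = rng.random((n_protos, dim))  # fixed random prototype locations

def kanerva_features(x):
    d = np.linalg.norm(prototypes - x, axis=1)
    active = np.argsort(d)[:k]            # "selective": only the k nearest fire
    phi = np.zeros(n_protos)
    phi[active] = 1.0
    return phi

x = rng.random(dim)
phi = kanerva_features(x)
phi2 = kanerva_features(x + 0.01)         # a nearby sensor reading
print(int(phi.sum()))                     # exactly k features are active
```

Nearby inputs activate overlapping prototype sets, which gives the generalization a linear temporal-difference learner needs while keeping each update O(k).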
Lestari, A. W.; Rustam, Z.
2017-07-01
In the last decade, breast cancer has become a focus of world attention, as this disease is one of the leading causes of death for women. Therefore, it is necessary to have the correct precautions and treatment. In previous studies, the Fuzzy Kernel K-Medoid algorithm has been used for multi-class data. This paper proposes an algorithm to classify high-dimensional breast cancer data using Fuzzy Possibilistic C-Means (FPCM) and a new method based on clustering analysis using Normed Kernel Function-Based Fuzzy Possibilistic C-Means (NKFPCM). The objective of this paper is to obtain the best accuracy in the classification of breast cancer data. In order to improve the accuracy of the two methods, candidate features are evaluated using feature selection based on the Laplacian Score. The results compare the accuracy and running time of FPCM and NKFPCM with and without feature selection.
Clustering high dimensional data
DEFF Research Database (Denmark)
Assent, Ira
2012-01-01
High-dimensional data, i.e., data described by a large number of attributes, pose specific challenges to clustering. The so-called ‘curse of dimensionality’, coined originally to describe the general increase in complexity of various computational problems as dimensionality increases, is known to render traditional clustering algorithms ineffective. The curse of dimensionality, among other effects, means that with increasing number of dimensions, a loss of meaningful differentiation between similar and dissimilar objects is observed. As high-dimensional objects appear almost alike, new approaches for clustering are required. Consequently, recent research has focused on developing techniques and clustering algorithms specifically for high-dimensional data. Still, open research issues remain. Clustering is a data mining task devoted to the automatic grouping of data based on mutual similarity. Each cluster...
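The loss of differentiation described above can be demonstrated numerically: as dimensionality grows, the gap between the nearest and farthest neighbor shrinks relative to the nearest distance. The uniform data and dimension choices are illustrative assumptions:

```python
# Sketch: distance concentration, the core of the curse of dimensionality
# for similarity-based clustering.
import numpy as np

rng = np.random.default_rng(9)

def relative_contrast(dim, n=200):
    pts = rng.random((n, dim))                      # uniform points in [0,1]^dim
    d = np.linalg.norm(pts[1:] - pts[0], axis=1)    # distances to one query point
    return (d.max() - d.min()) / d.min()            # nearest-vs-farthest contrast

low, high = relative_contrast(2), relative_contrast(1000)
print(low, high)
```

In 2 dimensions the farthest point is many times farther than the nearest; in 1000 dimensions all points sit at nearly the same distance, which is why subspace and projected clustering methods were developed.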
CSIR Research Space (South Africa)
McLaren, M.
2012-07-01
High dimensional entanglement. M. McLaren (1,2), F.S. Roux (1) & A. Forbes (1,2,3). 1. CSIR National Laser Centre, PO Box 395, Pretoria 0001; 2. School of Physics, University of Stellenbosch, Private Bag X1, 7602, Matieland; 3. School of Physics, University of Kwazulu...
Chernozhukov, Victor; Hansen, Christian; Spindler, Martin
2016-01-01
In this article the package High-dimensional Metrics (hdm) is introduced. It is a collection of statistical methods for estimation and quantification of uncertainty in high-dimensional approximately sparse models. It focuses on providing confidence intervals and significance testing for (possibly many) low-dimensional subcomponents of the high-dimensional parameter vector. Efficient estimators and uniformly valid confidence intervals for regression coefficients on target variables (e...
Wang, Wei; Yang, Jiong
With the rapid growth of computational biology and e-commerce applications, high-dimensional data becomes very common. Thus, mining high-dimensional data is an urgent problem of great practical importance. However, there are some unique challenges for mining data of high dimensions, including (1) the curse of dimensionality and, more crucially, (2) the meaningfulness of the similarity measure in the high-dimensional space. In this chapter, we present several state-of-the-art techniques for analyzing high-dimensional data, e.g., frequent pattern mining, clustering, and classification. We will discuss how these methods deal with the challenges of high dimensionality.
Asymptotically Honest Confidence Regions for High Dimensional
DEFF Research Database (Denmark)
Caner, Mehmet; Kock, Anders Bredahl
While variable selection and oracle inequalities for the estimation and prediction error have received considerable attention in the literature on high-dimensional models, very little work has been done in the area of testing and construction of confidence bands in high-dimensional models. However, ... we develop an oracle inequality for the conservative Lasso only assuming the existence of a certain number of moments. This is done by means of the Marcinkiewicz-Zygmund inequality, which in our context provides sharper bounds than Nemirovski's inequality. As opposed to van de Geer et al. (2014), we allow...
High-dimensional covariance estimation with high-dimensional data
Pourahmadi, Mohsen
2013-01-01
Methods for estimating sparse and large covariance matrices Covariance and correlation matrices play fundamental roles in every aspect of the analysis of multivariate data collected from a variety of fields including business and economics, health care, engineering, and environmental and physical sciences. High-Dimensional Covariance Estimation provides accessible and comprehensive coverage of the classical and modern approaches for estimating covariance matrices as well as their applications to the rapidly developing areas lying at the intersection of statistics and mac
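One classical recipe from the sparse covariance estimation literature surveyed above is hard-thresholding the sample covariance matrix. The data, the single true dependency, and the rate-motivated threshold constant are illustrative assumptions:

```python
# Sketch: hard-thresholded sample covariance for sparse high-dimensional
# covariance estimation.
import numpy as np

rng = np.random.default_rng(10)
n, p = 100, 50
Sigma = np.eye(p)
Sigma[0, 1] = Sigma[1, 0] = 0.7        # one true off-diagonal dependency
X = rng.multivariate_normal(np.zeros(p), Sigma, size=n)

S = np.cov(X, rowvar=False)
t = 2 * np.sqrt(np.log(p) / n)          # threshold at the sqrt(log p / n) rate
S_hat = np.where(np.abs(S) >= t, S, 0.0)
np.fill_diagonal(S_hat, np.diag(S))     # keep variances untouched
print(np.count_nonzero(S_hat) - p)      # surviving off-diagonal entries
```

Small spurious sample covariances fall below the threshold and are zeroed out, so the estimate stays well-conditioned even though p is of the same order as n.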
Ye, Fei
2017-01-01
In this paper, we propose a new automatic hyperparameter selection approach for determining the optimal network configuration (network structure and hyperparameters) for deep neural networks using particle swarm optimization (PSO) in combination with a steepest gradient descent algorithm. In the proposed approach, network configurations were coded as a set of real-number m-dimensional vectors as the individuals of the PSO algorithm in the search procedure. During the search procedure, the PSO algorithm is employed to search for optimal network configurations via the particles moving in a finite search space, and the steepest gradient descent algorithm is used to train the DNN classifier with a few training epochs (to find a local optimal solution) during the population evaluation of PSO. After the optimization scheme, the steepest gradient descent algorithm is performed with more epochs and the final solutions (pbest and gbest) of the PSO algorithm to train a final ensemble model and individual DNN classifiers, respectively. The local search ability of the steepest gradient descent algorithm and the global search capabilities of the PSO algorithm are exploited to determine an optimal solution that is close to the global optimum. We conducted several experiments on hand-written characters and biological activity prediction datasets to show that the DNN classifiers trained by the network configurations expressed by the final solutions of the PSO algorithm, employed to construct an ensemble model and individual classifier, outperform the random approach in terms of the generalization performance. Therefore, the proposed approach can be regarded as an alternative tool for automatic network structure and parameter selection for deep neural networks.
PMID:29236718
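The search loop described in this record, particles exploring a bounded configuration space while pbest and gbest are tracked across iterations, can be sketched generically. The sketch below is a minimal PSO minimizer, not the paper's implementation: the inertia and acceleration constants (`w`, `c1`, `c2`), the function name, and the toy objective are all illustrative assumptions. In the paper's setting the fitness would be the validation error of a DNN trained for a few gradient descent epochs with the hyperparameters encoded in each particle's position.

```python
import numpy as np

def pso_minimize(fitness, bounds, n_particles=20, n_iter=100,
                 w=0.7, c1=1.5, c2=1.5, seed=0):
    """Basic particle swarm optimization over a box-bounded search space."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, float).T          # bounds: list of (low, high) pairs
    dim = lo.size
    x = rng.uniform(lo, hi, size=(n_particles, dim))   # particle positions
    v = np.zeros_like(x)                               # particle velocities
    pbest = x.copy()                                   # each particle's best position
    pbest_f = np.array([fitness(p) for p in x])
    g = pbest[pbest_f.argmin()].copy()                 # global best (gbest)
    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)                     # stay in the finite search space
        f = np.array([fitness(p) for p in x])
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        g = pbest[pbest_f.argmin()].copy()
    return g, pbest_f.min()

# Toy check: minimize a shifted quadratic "loss" over two hyperparameters.
best, best_f = pso_minimize(lambda p: ((p - 0.3) ** 2).sum(), [(-1, 1), (-1, 1)])
```

The final pbest vectors would then each be trained out fully (for the ensemble), with gbest giving the single best configuration.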
Estimating High-Dimensional Time Series Models
DEFF Research Database (Denmark)
Medeiros, Marcelo C.; Mendes, Eduardo F.
We study the asymptotic properties of the Adaptive LASSO (adaLASSO) in sparse, high-dimensional, linear time-series models. We assume that both the number of covariates in the model and the number of candidate variables can increase with the number of observations, and that the number of candidate variables is possibly larger than the number of observations. We show that the adaLASSO consistently chooses the relevant variables as the number of observations increases (model selection consistency) and has the oracle property, even when the errors are non-Gaussian and conditionally heteroskedastic. A simulation study shows...
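The adaLASSO's two-step construction, a pilot fit that sets per-coefficient penalty weights followed by a reweighted lasso, can be sketched as follows. This is a hypothetical illustration using scikit-learn on i.i.d. data rather than the authors' time-series setting; the ridge pilot, the weight exponent `gamma`, and the penalty `alpha` are assumed choices.

```python
import numpy as np
from sklearn.linear_model import Lasso, Ridge

def adaptive_lasso(X, y, alpha=0.1, gamma=1.0, eps=1e-6):
    """Two-step Adaptive LASSO: ridge pilot fit, then a reweighted lasso.

    Covariates with small pilot coefficients receive large penalty weights
    and are driven to exactly zero; this reweighting is what yields the
    selection-consistency / oracle behavior studied in the paper.
    """
    pilot = Ridge(alpha=1.0).fit(X, y).coef_
    w = 1.0 / (np.abs(pilot) ** gamma + eps)      # per-coefficient penalty weights
    Xw = X / w                                    # absorb the weights into the design
    fit = Lasso(alpha=alpha, max_iter=10000).fit(Xw, y)
    return fit.coef_ / w                          # map back to the original scale

# Sparse toy model: only the first two of ten candidate regressors matter.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 10))
y = 2.0 * X[:, 0] - 1.5 * X[:, 1] + 0.1 * rng.standard_normal(200)
beta = adaptive_lasso(X, y)
```

With this rescaling trick, a weighted lasso reduces to an ordinary lasso on the reweighted design, so any standard solver can be reused.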
Hierarchical low-rank approximation for high dimensional approximation
Nouy, Anthony
2016-01-01
Tensor methods are among the most prominent tools for the numerical solution of high-dimensional problems where functions of multiple variables have to be approximated. Such high-dimensional approximation problems naturally arise in stochastic analysis and uncertainty quantification. In many practical situations, the approximation of high-dimensional functions is made computationally tractable by using rank-structured approximations. In this talk, we present algorithms for the approximation in hierarchical tensor format using statistical methods. Sparse representations in a given tensor format are obtained with adaptive or convex relaxation methods, with a selection of parameters using cross-validation methods.
Hierarchical low-rank approximation for high dimensional approximation
Nouy, Anthony
2016-01-07
Tensor methods are among the most prominent tools for the numerical solution of high-dimensional problems where functions of multiple variables have to be approximated. Such high-dimensional approximation problems naturally arise in stochastic analysis and uncertainty quantification. In many practical situations, the approximation of high-dimensional functions is made computationally tractable by using rank-structured approximations. In this talk, we present algorithms for the approximation in hierarchical tensor format using statistical methods. Sparse representations in a given tensor format are obtained with adaptive or convex relaxation methods, with a selection of parameters using cross-validation methods.
Metabolic fuels: regulating fluxes to select mix.
Weber, Jean-Michel
2011-01-15
Animals must regulate the fluxes of multiple fuels to support changing metabolic rates that result from variation in physiological circumstances. The aim of fuel selection strategies is to exploit the advantages of individual substrates while minimizing the impact of disadvantages. All exercising mammals share a general pattern of fuel selection: at the same %VO2,max they oxidize the same ratio of lipids to carbohydrates. However, highly aerobic species rely more on intramuscular fuels because energy supply from the circulation is constrained by trans-sarcolemmal transfer. Fuel selection is performed by recruiting different muscles, different fibers within the same muscles or different pathways within the same fibers. Electromyographic analyses show that shivering humans can modulate carbohydrate oxidation either through the selective recruitment of type II fibers within the same muscles or by regulating pathway recruitment within type I fibers. The selection patterns of shivering and exercise are different: at the same %VO2,max, a muscle producing only heat (shivering) or significant movement (exercise) strikes a different balance between lipid and carbohydrate oxidation. Long-distance migrants provide an excellent model to characterize how to increase maximal substrate fluxes. High lipid fluxes are achieved through the coordinated upregulation of mobilization, transport and oxidation by activating enzymes, lipid-solubilizing proteins and membrane transporters. These endurance athletes support record lipolytic rates in adipocytes, use lipoprotein shuttles to accelerate transport and show increased capacity for lipid oxidation in muscle mitochondria. Some migrant birds use dietary omega-3 fatty acids as performance-enhancing agents to boost their ability to process lipids. These dietary fatty acids become incorporated in membrane phospholipids and bind to peroxisome proliferator-activated receptors to activate membrane proteins and modify gene expression.
Clustering high dimensional data using RIA
Energy Technology Data Exchange (ETDEWEB)
Aziz, Nazrina [School of Quantitative Sciences, College of Arts and Sciences, Universiti Utara Malaysia, 06010 Sintok, Kedah (Malaysia)
2015-05-15
Clustering may simply represent a convenient method for organizing a large data set so that it can easily be understood and information can efficiently be retrieved. However, identifying clusters in high-dimensional data sets is a difficult task because of the curse of dimensionality. Another challenge in clustering is that some traditional dissimilarity functions cannot capture the pattern dissimilarity among objects. In this article, we used an alternative dissimilarity measurement called the Robust Influence Angle (RIA) in the partitioning method. RIA is developed using the eigenstructure of the covariance matrix and robust principal component scores. We observe that it can obtain clusters easily and hence avoid the curse of dimensionality. It also manages to cluster large data sets with mixed numeric and categorical values.
Model Selection with the Linear Mixed Model for Longitudinal Data
Ryoo, Ji Hoon
2011-01-01
Model building or model selection with linear mixed models (LMMs) is complicated by the presence of both fixed effects and random effects. The fixed effects structure and random effects structure are codependent, so selection of one influences the other. Most presentations of LMM in psychology and education are based on a multilevel or…
Chernozhukov, Victor; Hansen, Chris; Spindler, Martin
2016-01-01
The package High-dimensional Metrics (hdm) is an evolving collection of statistical methods for estimation and quantification of uncertainty in high-dimensional approximately sparse models. It focuses on providing confidence intervals and significance testing for (possibly many) low-dimensional subcomponents of the high-dimensional parameter vector. Efficient estimators and uniformly valid confidence intervals for regression coefficients on target variables (e.g., treatment or poli...
Partnership Selection Involving Mixed Types of Uncertain Preferences
Directory of Open Access Journals (Sweden)
Li-Ching Ma
2013-01-01
Partnership selection is an important issue in management science. This study proposes a general model based on mixed integer programming and the goal-programming analytic hierarchy process (GP-AHP) to solve partnership selection problems involving mixed types of uncertain or inconsistent preferences. The proposed approach is designed to deal with crisp, interval, step, fuzzy, or mixed comparison preferences, derive crisp priorities, and improve multiple solution problems. The degree of fulfillment of a decision maker’s preferences is also taken into account. The results show that the proposed approach keeps more solution ratios within the given preferred intervals and yields less deviation. In addition, the proposed approach can treat incomplete preference matrices with flexibility in reducing the number of pairwise comparisons required and can also be conveniently developed into a decision support system.
Analysing spatially extended high-dimensional dynamics by recurrence plots
Energy Technology Data Exchange (ETDEWEB)
Marwan, Norbert, E-mail: marwan@pik-potsdam.de [Potsdam Institute for Climate Impact Research, 14412 Potsdam (Germany); Kurths, Jürgen [Potsdam Institute for Climate Impact Research, 14412 Potsdam (Germany); Humboldt Universität zu Berlin, Institut für Physik (Germany); Nizhny Novgorod State University, Department of Control Theory, Nizhny Novgorod (Russian Federation); Foerster, Saskia [GFZ German Research Centre for Geosciences, Section 1.4 Remote Sensing, Telegrafenberg, 14473 Potsdam (Germany)
2015-05-08
Recurrence plot based measures of complexity are powerful tools for characterizing complex dynamics. In this letter we show the potential of selected recurrence plot measures for the investigation of even high-dimensional dynamics. We apply this method to spatially extended chaos, such as that derived from the Lorenz96 model, and show that the recurrence plot based measures can qualitatively characterize typical dynamical properties such as chaotic or periodic dynamics. Moreover, we demonstrate its power by analysing satellite image time series of vegetation cover with contrasting dynamics as a spatially extended and potentially high-dimensional example from the real world. - Highlights: • We use recurrence plots for analysing spatially extended dynamics. • We investigate the high-dimensional chaos of the Lorenz96 model. • The approach distinguishes different spatio-temporal dynamics. • We use the method for studying vegetation cover time series.
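A recurrence plot is simply a thresholded pairwise-distance matrix of the observed states, and the most basic recurrence-based complexity measure is the recurrence rate. A minimal sketch follows; the threshold `eps` and the toy signals are illustrative choices, not values from the letter.

```python
import numpy as np

def recurrence_matrix(x, eps):
    """Binary recurrence plot: R[i, j] = 1 when states i and j are closer than eps."""
    x = np.atleast_2d(np.asarray(x, float))
    if x.shape[0] == 1:
        x = x.T                                   # accept a plain 1-D series
    d = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)
    return (d <= eps).astype(int)

def recurrence_rate(R):
    """Fraction of recurrent pairs: the simplest RQA measure."""
    return R.mean()

# A periodic signal recurs often; white noise at the same eps recurs rarely.
t = np.linspace(0, 8 * np.pi, 400)
R_per = recurrence_matrix(np.sin(t), eps=0.1)
rng = np.random.default_rng(1)
R_noise = recurrence_matrix(rng.standard_normal(400), eps=0.1)
```

Measures such as determinism build on this matrix by counting diagonal line structures, which is how chaotic and periodic regimes are distinguished.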
Runcie, Daniel E; Mukherjee, Sayan
2013-07-01
Quantitative genetic studies that model complex, multivariate phenotypes are important for both evolutionary prediction and artificial selection. For example, changes in gene expression can provide insight into developmental and physiological mechanisms that link genotype and phenotype. However, classical analytical techniques are poorly suited to quantitative genetic studies of gene expression where the number of traits assayed per individual can reach many thousand. Here, we derive a Bayesian genetic sparse factor model for estimating the genetic covariance matrix (G-matrix) of high-dimensional traits, such as gene expression, in a mixed-effects model. The key idea of our model is that we need consider only G-matrices that are biologically plausible. An organism's entire phenotype is the result of processes that are modular and have limited complexity. This implies that the G-matrix will be highly structured. In particular, we assume that a limited number of intermediate traits (or factors, e.g., variations in development or physiology) control the variation in the high-dimensional phenotype, and that each of these intermediate traits is sparse - affecting only a few observed traits. The advantages of this approach are twofold. First, sparse factors are interpretable and provide biological insight into mechanisms underlying the genetic architecture. Second, enforcing sparsity helps prevent sampling errors from swamping out the true signal in high-dimensional data. We demonstrate the advantages of our model on simulated data and in an analysis of a published Drosophila melanogaster gene expression data set.
Modeling High-Dimensional Multichannel Brain Signals
Hu, Lechuan; Fortin, Norbert J.; Ombao, Hernando
2017-01-01
aspects: first, there are major statistical and computational challenges for modeling and analyzing high-dimensional multichannel brain signals; second, there is no set of universally agreed measures for characterizing connectivity. To model multichannel
High dimensional neurocomputing growth, appraisal and applications
Tripathi, Bipin Kumar
2015-01-01
The book presents a coherent understanding of computational intelligence from the perspective of what is known as "intelligent computing" with high-dimensional parameters. It critically discusses the central issues of high-dimensional neurocomputing, such as quantitative representation of signals, extending the dimensionality of neurons, supervised and unsupervised learning, and the design of higher order neurons. The strong point of the book is its clarity and the ability of the underlying theory to unify our understanding of high-dimensional computing where conventional methods fail. Plenty of application-oriented problems are presented for evaluating, monitoring and maintaining the stability of adaptive learning machines. The author has taken care to cover the breadth and depth of the subject, both qualitatively and quantitatively. The book is intended to enlighten the scientific community, ranging from advanced undergraduates to engineers, scientists and seasoned researchers in computational intelligence...
Chemical recycling of mixed waste plastics by selective pyrolysis
Energy Technology Data Exchange (ETDEWEB)
Tatsumoto, K.; Meglen, R.; Evans, R. [National Renewable Energy Laboratory, Golden, CO (United States)
1995-05-01
The goal of this work is to use selective pyrolysis to produce high-value chemicals from waste plastics mixtures. Selectivity is achieved by exploiting differences in reaction rates, catalysis, and coreactants. Target wastes are molecular mixtures such as blends or composites, or mixtures from manufactured products such as carpets and post-consumer mixed-plastic wastes. The experimental approach has been to use small-scale experiments using molecular beam mass spectrometry (MBMS), which provides rapid analysis of reaction products and permits rapid screening of process parameters. Rapid screening experiments permit exploration of many potential waste stream applications for the selective pyrolysis process. After initial screening, small-scale, fixed-bed and fluidized-bed reactors are used to provide products for conventional chemical analysis, to determine material balances, and to test the concept under conditions that will be used at a larger scale. Computer assisted data interpretation and intelligent chemical processing are used to extract process-relevant information from these experiments. An important element of this project employs technoeconomic assessments and market analyses of durables, the availability of other wastes, and end-product uses to identify target applications that have the potential for economic success.
Selective propene oxidation on mixed metal oxide catalysts
International Nuclear Information System (INIS)
James, David William
2002-01-01
Selective catalytic oxidation processes represent a large segment of the modern chemical industry and a major application of these is the selective partial oxidation of propene to produce acrolein. Mixed metal oxide catalysts are particularly effective in promoting this reaction, and the two primary candidates for the industrial process are based on iron antimonate and bismuth molybdate. Some debate exists in the literature regarding the operation of these materials and the roles of their catalytic components. In particular, iron antimonate catalysts containing excess antimony are known to be highly selective towards acrolein, and a variety of proposals for the enhanced selectivity of such materials have been given. The aim of this work was to provide a direct comparison between the behaviour of bismuth molybdate and iron antimonate catalysts, with additional emphasis being placed on the component single oxide phases of the latter. Studies were also extended to other antimonate-based catalysts, including cobalt antimonate and vanadium antimonate. Reactivity measurements were made using a continuous flow microreactor, which was used in conjunction with a variety of characterisation techniques to determine relationships between the catalytic behaviour and the properties of the materials. The ratio of Fe/Sb in the iron antimonate catalyst affects the reactivity of the system under steady state conditions, with additional iron beyond the stoichiometric value being detrimental to the acrolein selectivity, while extra antimony provides a means of enhancing the selectivity by decreasing acrolein combustion. Studies on the single antimony oxides of iron antimonate have shown a similarity between the reactivity of 'Sb2O5' and FeSbO4, and a significant difference between these and the Sb2O3 and Sb2O4 phases, implying that the mixed oxide catalyst has a surface mainly comprised of Sb5+. The lack of reactivity of Sb2O4 implies a similarity of the surface with
On spectral distribution of high dimensional covariation matrices
DEFF Research Database (Denmark)
Heinrich, Claudio; Podolskij, Mark
In this paper we present the asymptotic theory for spectral distributions of high dimensional covariation matrices of Brownian diffusions. More specifically, we consider N-dimensional Itô integrals with time varying matrix-valued integrands. We observe n equidistant high frequency data points of the underlying Brownian diffusion and we assume that N/n -> c in (0,oo). We show that under a certain mixed spectral moment condition the spectral distribution of the empirical covariation matrix converges in distribution almost surely. Our proof relies on the method of moments and applications of graph theory.
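For context, the classical i.i.d. analogue of this kind of result is the Marchenko-Pastur law: as N/n -> c, the eigenvalues of the sample covariance matrix concentrate on a deterministic support. The numerical sketch below illustrates only that baseline (the dimensions and seed are arbitrary choices), not the paper's covariation-matrix setting.

```python
import numpy as np

# Empirical spectral distribution of a sample covariance matrix in the
# high-dimensional regime N/n -> c, compared with the Marchenko-Pastur
# support edges (1 -/+ sqrt(c))^2 for unit-variance i.i.d. entries.
rng = np.random.default_rng(0)
N, n = 200, 800                      # c = N/n = 0.25
X = rng.standard_normal((n, N))
S = X.T @ X / n                      # sample covariance matrix
eig = np.linalg.eigvalsh(S)          # its empirical spectrum
c = N / n
lo, hi = (1 - np.sqrt(c)) ** 2, (1 + np.sqrt(c)) ** 2   # MP support edges
```

All N eigenvalues should fall (up to small finite-sample fluctuations) in the interval [lo, hi], even though each individual variance estimate is noisy.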
High Dimensional Classification Using Features Annealed Independence Rules.
Fan, Jianqing; Fan, Yingying
2008-01-01
Classification using high-dimensional features arises frequently in many contemporary statistical studies such as tumor classification using microarray or other high-throughput data. The impact of dimensionality on classification is largely poorly understood. In a seminal paper, Bickel and Levina (2004) show that the Fisher discriminant performs poorly due to diverging spectra and they propose to use the independence rule to overcome the problem. We first demonstrate that even for the independence classification rule, classification using all the features can be as bad as random guessing due to noise accumulation in estimating population centroids in high-dimensional feature space. In fact, we demonstrate further that almost all linear discriminants can perform as badly as random guessing. Thus, it is critically important to select a subset of important features for high-dimensional classification, resulting in Features Annealed Independence Rules (FAIR). The conditions under which all the important features can be selected by the two-sample t-statistic are established. The choice of the optimal number of features, or equivalently, the threshold value of the test statistics, is proposed based on an upper bound of the classification error. Simulation studies and real data analysis support our theoretical results and demonstrate convincingly the advantage of our new classification procedure.
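The FAIR recipe, ranking features by the two-sample t-statistic, keeping the top m, and classifying with the diagonal independence rule, can be sketched directly. The function name `fair_classifier`, the toy p >> n data, and the fixed m are illustrative assumptions; the paper chooses the number of features from an upper bound on the classification error rather than fixing it.

```python
import numpy as np

def fair_classifier(X, y, m):
    """Sketch of Features Annealed Independence Rules: keep the m features
    with the largest absolute two-sample t-statistics, then classify with
    the diagonal (independence) centroid rule on that subset."""
    X0, X1 = X[y == 0], X[y == 1]
    n0, n1 = len(X0), len(X1)
    s = np.sqrt(X0.var(axis=0, ddof=1) / n0 + X1.var(axis=0, ddof=1) / n1)
    t = (X1.mean(axis=0) - X0.mean(axis=0)) / (s + 1e-12)
    keep = np.argsort(-np.abs(t))[:m]               # annealed feature subset
    mu0, mu1 = X0[:, keep].mean(axis=0), X1[:, keep].mean(axis=0)
    var = X[:, keep].var(axis=0, ddof=1) + 1e-12    # overall variances as a
                                                    # simple stand-in for pooled
                                                    # within-class variances
    def predict(Xnew):
        d0 = (((Xnew[:, keep] - mu0) ** 2) / var).sum(axis=1)
        d1 = (((Xnew[:, keep] - mu1) ** 2) / var).sum(axis=1)
        return (d1 < d0).astype(int)
    return predict

# p >> n toy data: 1000 features, only the first 10 carry signal.
rng = np.random.default_rng(0)
n, p = 100, 1000
y = np.repeat([0, 1], n // 2)
X = rng.standard_normal((n, p))
X[y == 1, :10] += 1.5
predict = fair_classifier(X, y, m=10)
```

Using all 1000 features in the centroid rule would drown the 10 informative coordinates in accumulated noise, which is exactly the failure mode the abstract describes.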
The mixing evolutionary algorithm: independent selection and allocation of trials
C.H.M. van Kemenade
1997-01-01
When using an evolutionary algorithm to solve a problem involving building blocks we have to grow the building blocks and then mix these building blocks to obtain the (optimal) solution. Finding a good balance between the growing and the mixing process is a prerequisite to get a reliable...
Introduction to high-dimensional statistics
Giraud, Christophe
2015-01-01
Ever-greater computing technologies have given rise to an exponentially growing volume of data. Today massive data sets (with potentially thousands of variables) play an important role in almost every branch of modern human activity, including networks, finance, and genetics. However, analyzing such data has presented a challenge for statisticians and data analysts and has required the development of new statistical methods capable of separating the signal from the noise. Introduction to High-Dimensional Statistics is a concise guide to state-of-the-art models, techniques, and approaches for handling...
High dimensional classifiers in the imbalanced case
DEFF Research Database (Denmark)
Bak, Britta Anker; Jensen, Jens Ledet
We consider the binary classification problem in the imbalanced case where the numbers of samples from the two groups differ. The classification problem is considered in the high dimensional case where the number of variables is much larger than the number of samples, and where the imbalance leads to a bias in the classification. A theoretical analysis of the independence classifier reveals the origin of the bias, and based on this we suggest two new classifiers that can handle any imbalance ratio. The analytical results are supplemented by a simulation study, where the suggested classifiers in some...
Topology of high-dimensional manifolds
Energy Technology Data Exchange (ETDEWEB)
Farrell, F T [State University of New York, Binghamton (United States); Goettshe, L [Abdus Salam ICTP, Trieste (Italy); Lueck, W [Westfaelische Wilhelms-Universitaet Muenster, Muenster (Germany)
2002-08-15
The School on High-Dimensional Manifold Topology took place at the Abdus Salam ICTP, Trieste, from 21 May 2001 to 8 June 2001. The focus of the school was on the classification of manifolds and related aspects of K-theory, geometry, and operator theory. The topics covered included: surgery theory, algebraic K- and L-theory, controlled topology, homology manifolds, exotic aspherical manifolds, homeomorphism and diffeomorphism groups, and scalar curvature. The school consisted of two weeks of lecture courses and one week of conference. This two-part lecture notes volume contains the notes of most of the lecture courses.
Manifold learning to interpret JET high-dimensional operational space
International Nuclear Information System (INIS)
Cannas, B; Fanni, A; Pau, A; Sias, G; Murari, A
2013-01-01
In this paper, the problem of visualization and exploration of JET high-dimensional operational space is considered. The data come from plasma discharges selected from JET campaigns from C15 (year 2005) up to C27 (year 2009). The aim is to learn the possible manifold structure embedded in the data and to create some representations of the plasma parameters on low-dimensional maps, which are understandable and which preserve the essential properties of the original data. A crucial issue for the design of such mappings is the quality of the dataset. This paper reports the details of the criteria used to properly select suitable signals downloaded from JET databases in order to obtain a dataset of reliable observations. Moreover, a statistical analysis is performed to recognize the presence of outliers. Finally, data reduction, based on clustering methods, is performed to select a limited and representative number of samples for the operational space mapping. The high-dimensional operational space of JET is mapped using a widely used manifold learning method, the self-organizing map. The results are compared with other data visualization methods. The obtained maps can be used to identify characteristic regions of the plasma scenario, making it possible to discriminate between regions with high risk of disruption and those with low risk of disruption.
Modeling high dimensional multichannel brain signals
Hu, Lechuan
2017-03-27
In this paper, our goal is to model functional and effective (directional) connectivity in a network of multichannel brain physiological signals (e.g., electroencephalograms, local field potentials). The primary challenges here are twofold: first, there are major statistical and computational difficulties for modeling and analyzing high dimensional multichannel brain signals; second, there is no set of universally agreed measures for characterizing connectivity. To model multichannel brain signals, our approach is to fit a vector autoregressive (VAR) model with sufficiently high order so that complex lead-lag temporal dynamics between the channels can be accurately characterized. However, such a model contains a large number of parameters. Thus, we estimate the high dimensional VAR parameter space by our proposed hybrid LASSLE method (LASSO+LSE), which imposes regularization in the first step (to control for sparsity) and constrained least squares estimation in the second step (to improve bias and mean-squared error of the estimator). Then, to characterize connectivity between channels in a brain network, we use various measures but put an emphasis on partial directed coherence (PDC) in order to capture directional connectivity between channels. PDC is a directed frequency-specific measure that explains the extent to which the present oscillatory activity in a sender channel influences the future oscillatory activity in a specific receiver channel relative to all possible receivers in the network. Using the proposed modeling approach, we have achieved some insights into learning in a rat engaged in a non-spatial memory task.
Modeling high dimensional multichannel brain signals
Hu, Lechuan; Fortin, Norbert; Ombao, Hernando
2017-01-01
In this paper, our goal is to model functional and effective (directional) connectivity in a network of multichannel brain physiological signals (e.g., electroencephalograms, local field potentials). The primary challenges here are twofold: first, there are major statistical and computational difficulties for modeling and analyzing high dimensional multichannel brain signals; second, there is no set of universally agreed measures for characterizing connectivity. To model multichannel brain signals, our approach is to fit a vector autoregressive (VAR) model with sufficiently high order so that complex lead-lag temporal dynamics between the channels can be accurately characterized. However, such a model contains a large number of parameters. Thus, we estimate the high dimensional VAR parameter space by our proposed hybrid LASSLE method (LASSO+LSE), which imposes regularization in the first step (to control for sparsity) and constrained least squares estimation in the second step (to improve bias and mean-squared error of the estimator). Then, to characterize connectivity between channels in a brain network, we use various measures but put an emphasis on partial directed coherence (PDC) in order to capture directional connectivity between channels. PDC is a directed frequency-specific measure that explains the extent to which the present oscillatory activity in a sender channel influences the future oscillatory activity in a specific receiver channel relative to all possible receivers in the network. Using the proposed modeling approach, we have achieved some insights into learning in a rat engaged in a non-spatial memory task.
The Impact of Varied Discrimination Parameters on Mixed-Format Item Response Theory Model Selection
Whittaker, Tiffany A.; Chang, Wanchen; Dodd, Barbara G.
2013-01-01
Whittaker, Chang, and Dodd compared the performance of model selection criteria when selecting among mixed-format IRT models and found that the criteria did not perform adequately when selecting the more parameterized models. It was suggested by M. S. Johnson that the problems when selecting the more parameterized models may be because of the low…
Elucidating high-dimensional cancer hallmark annotation via enriched ontology.
Yan, Shankai; Wong, Ka-Chun
2017-09-01
Cancer hallmark annotation is a promising technique that could discover novel knowledge about cancer from the biomedical literature. The automated annotation of cancer hallmarks could reveal relevant cancer transformation processes in the literature or extract the articles that correspond to the cancer hallmark of interest. It acts as a complementary approach that can retrieve knowledge from massive text information, advancing numerous focused studies in cancer research. Nonetheless, the high-dimensional nature of cancer hallmark annotation imposes a unique challenge. To address the curse of dimensionality, we compared multiple cancer hallmark annotation methods on 1580 PubMed abstracts. Based on the insights, a novel approach, UDT-RF, which makes use of ontological features is proposed. It expands the feature space via the Medical Subject Headings (MeSH) ontology graph and utilizes novel feature selections for elucidating the high-dimensional cancer hallmark annotation space. To demonstrate its effectiveness, state-of-the-art methods are compared and evaluated by a multitude of performance metrics, revealing the full performance spectrum on the full set of cancer hallmarks. Several case studies are conducted, demonstrating how the proposed approach could reveal novel insights into cancers. https://github.com/cskyan/chmannot.
Light alkane (mixed feed) selective dehydrogenation using bi ...
African Journals Online (AJOL)
... refinery processes and their catalytic dehydrogenation gives corresponding alkenes. ... was prepared by a sequential impregnation method and characterized by BET, ... Optimum propene selectivity is about 48%, obtained at 600 °C and ...
Modeling High-Dimensional Multichannel Brain Signals
Hu, Lechuan
2017-12-12
Our goal is to model and measure functional and effective (directional) connectivity in multichannel brain physiological signals (e.g., electroencephalograms, local field potentials). The difficulties from analyzing these data mainly come from two aspects: first, there are major statistical and computational challenges for modeling and analyzing high-dimensional multichannel brain signals; second, there is no set of universally agreed measures for characterizing connectivity. To model multichannel brain signals, our approach is to fit a vector autoregressive (VAR) model with potentially high lag order so that complex lead-lag temporal dynamics between the channels can be captured. Estimates of the VAR model will be obtained by our proposed hybrid LASSLE (LASSO + LSE) method which combines regularization (to control for sparsity) and least squares estimation (to improve bias and mean-squared error). Then we employ some measures of connectivity but put an emphasis on partial directed coherence (PDC) which can capture the directional connectivity between channels. PDC is a frequency-specific measure that explains the extent to which the present oscillatory activity in a sender channel influences the future oscillatory activity in a specific receiver channel relative to all possible receivers in the network. The proposed modeling approach provided key insights into potential functional relationships among simultaneously recorded sites during performance of a complex memory task. Specifically, this novel method was successful in quantifying patterns of effective connectivity across electrode locations, and in capturing how these patterns varied across trial epochs and trial types.
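The two-step hybrid estimator described in these records, a lasso pass to select the support of the VAR coefficients followed by an unpenalized least squares refit on that support, can be sketched for a VAR(1). Everything here (the function name `lassle_var1`, the penalty `alpha`, the simulated 5-channel system) is an illustrative assumption; the papers fit higher lag orders to real multichannel recordings.

```python
import numpy as np
from sklearn.linear_model import Lasso, LinearRegression

def lassle_var1(X, alpha=0.1):
    """Two-step sparse VAR(1) estimate in the spirit of hybrid LASSO + LSE:
    the lasso picks the support of each row of the transition matrix, then
    an unpenalized refit on that support reduces the shrinkage bias."""
    Y, Z = X[1:], X[:-1]                      # regress X_t on X_{t-1}
    T, k = Z.shape
    A = np.zeros((k, k))
    for j in range(k):                        # one regression per channel
        sel = Lasso(alpha=alpha, max_iter=10000).fit(Z, Y[:, j]).coef_ != 0
        if sel.any():                         # least squares refit on the support
            A[j, sel] = LinearRegression(fit_intercept=False).fit(
                Z[:, sel], Y[:, j]).coef_
    return A

# Simulate a 5-channel VAR(1) with a sparse, stable transition matrix.
rng = np.random.default_rng(0)
A_true = np.zeros((5, 5))
A_true[0, 0], A_true[1, 0], A_true[2, 2] = 0.5, 0.4, -0.5
X = np.zeros((500, 5))
for t in range(1, 500):
    X[t] = A_true @ X[t - 1] + rng.standard_normal(5)
A_hat = lassle_var1(X)
```

The fitted transition matrix (here for lag 1 only) is what frequency-domain connectivity measures such as PDC are then computed from.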
Use of selected waste materials in concrete mixes.
Batayneh, Malek; Marie, Iqbal; Asi, Ibrahim
2007-01-01
A modern lifestyle, alongside the advancement of technology, has led to an increase in the amount and type of waste being generated, leading to a waste disposal crisis. This study tackles the problem of the waste that is generated from construction fields, such as demolished concrete, glass, and plastic. In order to dispose of or at least reduce the accumulation of certain kinds of waste, it has been suggested to reuse some of these waste materials to substitute a percentage of the primary materials used in ordinary portland cement (OPC) concrete. The waste materials considered to be recycled in this study consist of glass, plastics, and demolished concrete. Such recycling not only helps conserve natural resources, but also helps solve a growing waste disposal crisis. Ground plastics and glass were used to replace up to 20% of fine aggregates in concrete mixes, while crushed concrete was used to replace up to 20% of coarse aggregates. To evaluate these replacements on the properties of the OPC mixes, a number of laboratory tests were carried out. These tests included workability, unit weight, compressive strength, flexural strength, and indirect tensile strength (splitting). The main findings of this investigation revealed that the three types of waste materials could be reused successfully as partial substitutes for sand or coarse aggregates in concrete mixtures.
Class prediction for high-dimensional class-imbalanced data
Directory of Open Access Journals (Sweden)
Lusa Lara
2010-10-01
Background: The goal of class prediction studies is to develop rules to accurately predict the class membership of new samples. The rules are derived using the values of the variables available for each subject: the main characteristic of high-dimensional data is that the number of variables greatly exceeds the number of samples. Frequently the classifiers are developed using class-imbalanced data, i.e., data sets where the number of samples in each class is not equal. Standard classification methods used on class-imbalanced data often produce classifiers that do not accurately predict the minority class; the prediction is biased towards the majority class. In this paper we investigate whether high dimensionality poses additional challenges when dealing with class-imbalanced prediction. We evaluate the performance of six types of classifiers on class-imbalanced data, using simulated data and a publicly available data set from a breast cancer gene-expression microarray study. We also investigate the effectiveness of some strategies that are available to overcome the effect of class imbalance. Results: Our results show that the evaluated classifiers are highly sensitive to class imbalance and that variable selection introduces an additional bias towards classification into the majority class. Most new samples are assigned to the majority class from the training set, unless the difference between the classes is very large. As a consequence, the class-specific predictive accuracies differ considerably. When the class imbalance is not too severe, down-sizing and asymmetric bagging embedding variable selection work well, while over-sampling does not. Variable normalization can further worsen the performance of the classifiers. Conclusions: Our results show that matching the prevalence of the classes in training and test set does not guarantee good performance of classifiers and that the problems related to classification with class
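The "down-sizing" strategy evaluated above can be illustrated with a minimal simulation. The sample sizes, dimensionality, signal strength, and choice of classifier here are hypothetical, not those of the study:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Simulated class-imbalanced, high-dimensional data: 20 minority vs. 180
# majority samples, p = 500 features, signal only in the first 10 features.
n_min, n_maj, p = 20, 180, 500
shift = np.r_[np.full(10, 1.0), np.zeros(p - 10)]
X = np.vstack([rng.standard_normal((n_min, p)) + shift,
               rng.standard_normal((n_maj, p))])
y = np.r_[np.ones(n_min), np.zeros(n_maj)]

# Down-sizing: randomly subsample the majority class to the minority size,
# so the classifier is trained on a balanced set.
keep = rng.choice(np.where(y == 0)[0], size=n_min, replace=False)
idx = np.r_[np.where(y == 1)[0], keep]
clf = LogisticRegression(max_iter=1000).fit(X[idx], y[idx])

# Report class-specific accuracies, since overall accuracy can hide the
# majority-class bias the paper describes.
acc_min = clf.score(X[y == 1], y[y == 1])
acc_maj = clf.score(X[y == 0], y[y == 0])
print(f"minority accuracy {acc_min:.2f}, majority accuracy {acc_maj:.2f}")
```

In a real evaluation the accuracies would of course be computed on held-out samples rather than on data that includes the training set.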
24 CFR 960.407 - Selection preference for mixed population developments.
2010-04-01
... population developments. 960.407 Section 960.407 Housing and Urban Development Regulations Relating to... Elderly Families and Disabled Families in Mixed Population Projects § 960.407 Selection preference for mixed population developments. (a) The PHA must give preference to elderly families and disabled...
High-dimensional cluster analysis with the Masked EM Algorithm
Kadir, Shabnam N.; Goodman, Dan F. M.; Harris, Kenneth D.
2014-01-01
Cluster analysis faces two problems in high dimensions: first, the “curse of dimensionality” that can lead to overfitting and poor generalization performance; and second, the sheer time taken for conventional algorithms to process large amounts of high-dimensional data. We describe a solution to these problems, designed for the application of “spike sorting” for next-generation high-channel-count neural probes. In this problem, only a small subset of features provide information about the cluster membership of any one data vector, but this informative feature subset is not the same for all data points, rendering classical feature selection ineffective. We introduce a “Masked EM” algorithm that allows accurate and time-efficient clustering of up to millions of points in thousands of dimensions. We demonstrate its applicability to synthetic data, and to real-world high-channel-count spike sorting data. PMID:25149694
Carbon dioxide selective mixed matrix composite membrane containing ZIF-7 nano-fillers
Li, Tao; Pan, Yichang; Peinemann, Klaus-Viktor; Lai, Zhiping
2013-01-01
Mixed matrix materials made from selective inorganic fillers and polymers are very attractive for the manufacturing of gas separation membranes. But only a few of these materials could be manufactured into high-performance asymmetric or composite membranes.
Selecting an optimal mixed products using grey relationship model
Directory of Open Access Journals (Sweden)
Farshad Faezy Razi
2013-06-01
This paper presents an integrated supplier selection and inventory management approach using the grey relationship model (GRM) as well as a multi-objective decision-making process. The proposed model first ranks different suppliers based on the GRM technique and then determines the optimum level of inventory by considering different objectives. To show the implementation of the proposed model, we use benchmark data presented by Talluri and Baker [Talluri, S., & Baker, R. C. (2002). A multi-phase mathematical programming approach for effective supply chain design. European Journal of Operational Research, 141(3), 544-558.]. The preliminary results indicate that the proposed model is capable of handling different criteria for supplier selection.
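The GRM ranking step can be sketched as follows. The supplier scores are hypothetical (not the Talluri and Baker benchmark data), the criteria are assumed larger-is-better, and the inventory-optimization stage is omitted:

```python
import numpy as np

def grey_relational_grades(X, rho=0.5):
    """Grey relational grade of each alternative (row) against the ideal
    sequence, for larger-is-better criteria (columns)."""
    X = np.asarray(X, dtype=float)
    # Normalize each criterion to [0, 1].
    Xn = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))
    # Deviation from the ideal sequence (all ones after normalization).
    delta = np.abs(1.0 - Xn)
    # Grey relational coefficients with distinguishing coefficient rho.
    coeff = (delta.min() + rho * delta.max()) / (delta + rho * delta.max())
    return coeff.mean(axis=1)  # grade = mean coefficient per alternative

# Hypothetical scores for three suppliers on three criteria.
suppliers = np.array([[0.8, 0.6, 0.9],
                      [0.5, 0.9, 0.7],
                      [0.9, 0.8, 0.6]])
grades = grey_relational_grades(suppliers)
ranking = np.argsort(-grades)  # best supplier first
print(grades.round(3), ranking)
```

The distinguishing coefficient rho = 0.5 is the conventional default; smaller values spread the coefficients further apart.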
Selective antibacterial effects of mixed ZnMgO nanoparticles
International Nuclear Information System (INIS)
Vidic, Jasmina; Stankic, Slavica; Haque, Francia; Ciric, Danica; Le Goffic, Ronan; Vidy, Aurore; Jupille, Jacques; Delmas, Bernard
2013-01-01
Antibiotic resistance has spurred the search for new agents that can inhibit bacterial growth without showing cytotoxic effects on humans and other species. We describe the synthesis and physicochemical characterization of nanostructured ZnMgO, whose antibacterial activity was compared to that of its pure nano-ZnO and nano-MgO counterparts. Among the three oxides, ZnO nanocrystals, with tetrapod legs about 100 nm long and about 10 nm in diameter, were found to be the most effective antibacterial agents, since both Gram-positive (B. subtilis) and Gram-negative (E. coli) bacteria were completely eradicated at a concentration of 1 mg/mL. MgO nanocubes (mean cube size ∼50 nm) only partially inhibited bacterial growth, whereas ZnMgO nanoparticles (sizes corresponding to the pure particles) revealed high antibacterial activity specific to Gram-positive bacteria at this concentration. Transmission electron microscopy analysis showed that B. subtilis cells were damaged after contact with nano-ZnMgO, causing cell contents to leak out. Our preliminary toxicological study pointed out that nano-ZnO is toxic when applied to human HeLa cells, while nano-MgO and the mixed oxide did not induce any cell damage. Overall, our results suggest that nanostructured ZnMgO may combine efficient antibacterial activity with safety as a new therapeutic against bacterial infections.
Directory of Open Access Journals (Sweden)
Hailun Wang
2017-01-01
The support vector regression algorithm is widely used in fault diagnosis of rolling bearings. A new model parameter selection method for support vector regression, based on adaptive fusion of the mixed kernel function, is proposed in this paper. We choose the mixed kernel function as the kernel function of support vector regression. The fusion coefficients of the mixed kernel function, the kernel function parameters, and the regression parameters are combined into the state vector. Thus, the model selection problem is transformed into a nonlinear system state estimation problem. We use a 5th-degree cubature Kalman filter to estimate the parameters. In this way, we realize the adaptive selection of the mixed kernel function's weighting coefficients, the kernel parameters, and the regression parameters. Compared with single-kernel-function methods, unscented Kalman filter (UKF) support vector regression algorithms, and genetic algorithms, the decision regression function obtained by the proposed method has better generalization ability and higher prediction accuracy.
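The mixed kernel itself, a convex combination of an RBF and a polynomial kernel, which stays positive semidefinite for weights in [0, 1], can be sketched with fixed, hand-picked parameters. The cubature-Kalman-filter adaptation of these parameters is beyond a short example, so the weight, gamma, and degree below are hypothetical constants:

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(3)

# Toy 1-D regression target: noisy sinc function.
X = np.sort(rng.uniform(-3, 3, (80, 1)), axis=0)
y = np.sinc(X).ravel() + 0.05 * rng.standard_normal(80)

def mixed_kernel(A, B, w=0.7, gamma=1.0, degree=2):
    """Convex combination of an RBF and a polynomial kernel (PSD for
    0 <= w <= 1, since both components are PSD)."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    rbf = np.exp(-gamma * d2)
    poly = (A @ B.T + 1.0) ** degree
    return w * rbf + (1 - w) * poly

# scikit-learn's SVR accepts a callable kernel returning the Gram matrix.
svr = SVR(kernel=lambda A, B: mixed_kernel(A, B)).fit(X, y)
pred = svr.predict(X)
print(f"training RMSE: {np.sqrt(np.mean((pred - y) ** 2)):.3f}")
```

In the paper's method, w, gamma, degree, and the SVR regression parameters would all sit in the filter's state vector and be updated jointly instead of being fixed.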
Lemmen-Gerdessen, van J.C.; Souverein, O.W.; Veer, van 't P.; Vries, de J.H.M.
2015-01-01
Objective: To support the selection of food items for FFQs in such a way that the amount of information on all relevant nutrients is maximised while the food list is as short as possible. Design: Selection of the most informative food items to be included in FFQs was modelled as a Mixed Integer Linear Programming problem.
Model-based Clustering of High-Dimensional Data in Astrophysics
Bouveyron, C.
2016-05-01
The nature of data in Astrophysics has changed, as in other scientific fields, in the past decades due to the increase of measurement capabilities. As a consequence, data are nowadays frequently of high dimensionality and available in mass or stream. Model-based techniques for clustering are popular tools which are renowned for their probabilistic foundations and their flexibility. However, classical model-based techniques show a disappointing behavior in high-dimensional spaces, which is mainly due to their dramatic over-parametrization. Recent developments in model-based classification overcome these drawbacks and allow one to efficiently classify high-dimensional data, even in the "small n / large p" situation. This work presents a comprehensive review of these recent approaches, including regularization-based techniques, parsimonious modeling, subspace classification methods and classification methods based on variable selection. The use of these model-based methods is also illustrated on real-world classification problems in Astrophysics using R packages.
Multivariate statistics high-dimensional and large-sample approximations
Fujikoshi, Yasunori; Shimizu, Ryoichi
2010-01-01
A comprehensive examination of high-dimensional analysis of multivariate methods and their real-world applications Multivariate Statistics: High-Dimensional and Large-Sample Approximations is the first book of its kind to explore how classical multivariate methods can be revised and used in place of conventional statistical tools. Written by prominent researchers in the field, the book focuses on high-dimensional and large-scale approximations and details the many basic multivariate methods used to achieve high levels of accuracy. The authors begin with a fundamental presentation of the basic
Su, Hailin; Li, Hengde; Wang, Shi; Wang, Yangfan; Bao, Zhenmin
2017-02-01
Genomic selection is increasingly popular in animal and plant breeding industries around the world, as it can be applied early in life without impacting selection candidates. The objective of this study was to bring the advantages of genomic selection to scallop breeding. Two different genomic selection tools, MixP and gsbay, were applied to genomic evaluation of simulated data and Zhikong scallop (Chlamys farreri) field data. The results were compared with the genomic best linear unbiased prediction (GBLUP) method, which has been widely applied. Our results showed that both MixP and gsbay could accurately estimate single-nucleotide polymorphism (SNP) marker effects, and thereby could be applied for the analysis of genomic estimated breeding values (GEBV). In simulated data from different scenarios, the accuracy of GEBV ranged from 0.20 to 0.78 for MixP, from 0.21 to 0.67 for gsbay, and from 0.21 to 0.61 for GBLUP. Estimates made by MixP and gsbay were expected to be more reliable than those made by GBLUP. Predictions made by gsbay were more robust, while MixP computed much faster, especially on large-scale data. These results suggest that the algorithms implemented by both MixP and gsbay are feasible for carrying out genomic selection in scallop breeding, and that more genotype data will be necessary to produce genomic estimated breeding values with higher accuracy for the industry.
Can We Train Machine Learning Methods to Outperform the High-dimensional Propensity Score Algorithm?
Karim, Mohammad Ehsanul; Pang, Menglan; Platt, Robert W
2018-03-01
The use of retrospective health care claims datasets is frequently criticized for the lack of complete information on potential confounders. Utilizing patients' health-status-related information from claims datasets as surrogates or proxies for mismeasured and unobserved confounders, the high-dimensional propensity score algorithm enables us to reduce bias. Using a previously published cohort study of post-myocardial-infarction statin use (1998-2012), we compare the performance of the algorithm with a number of popular machine learning approaches for confounder selection in high-dimensional covariate spaces: random forest, least absolute shrinkage and selection operator (LASSO), and elastic net. Our results suggest that, when the data analysis is done with epidemiologic principles in mind, machine learning methods perform as well as the high-dimensional propensity score algorithm. Using a plasmode framework that mimicked the empirical data, we also showed that a hybrid of machine learning and high-dimensional propensity score algorithms generally performs slightly better than either alone in terms of mean squared error, when a bias-based analysis is used.
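The LASSO-based confounder-screening idea can be sketched on simulated claims-style data. The covariate counts, effect sizes, penalty strength, and the outcome-based screen below are illustrative assumptions, not the published analysis:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)

# Simulated claims-style data: 200 candidate proxy covariates, of which
# the first 5 confound both treatment assignment and the outcome.
n, p = 1000, 200
X = rng.standard_normal((n, p))
conf_effect = X[:, :5].sum(axis=1)
treat = rng.binomial(1, 1 / (1 + np.exp(-conf_effect)))
outcome = rng.binomial(1, 1 / (1 + np.exp(-(0.5 * treat + conf_effect))))

# LASSO screen: L1-penalized logistic regression keeps covariates that
# predict the outcome; the rest are dropped from the propensity model.
lasso = LogisticRegression(penalty="l1", solver="liblinear",
                           C=0.1).fit(X, outcome)
selected = np.where(np.abs(lasso.coef_[0]) > 1e-8)[0]

# Fit the propensity score using only the selected covariates.
ps_model = LogisticRegression(max_iter=1000).fit(X[:, selected], treat)
ps = ps_model.predict_proba(X[:, selected])[:, 1]
print(f"{len(selected)} covariates selected; "
      f"true confounders captured: {set(range(5)) <= set(selected)}")
```

The estimated scores ps would then feed into matching, weighting, or stratification exactly as with the high-dimensional propensity score algorithm's own covariate ranking.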
Bayesian Subset Modeling for High-Dimensional Generalized Linear Models
Liang, Faming
2013-06-01
This article presents a new prior setting for high-dimensional generalized linear models, which leads to a Bayesian subset regression (BSR) with the maximum a posteriori model approximately equivalent to the minimum extended Bayesian information criterion model. The consistency of the resulting posterior is established under mild conditions. Further, a variable screening procedure is proposed based on the marginal inclusion probability, which shares the same properties of sure screening and consistency with the existing sure independence screening (SIS) and iterative sure independence screening (ISIS) procedures. However, since the proposed procedure makes use of joint information from all predictors, it generally outperforms SIS and ISIS in real applications. This article also makes extensive comparisons of BSR with the popular penalized likelihood methods, including Lasso, elastic net, SIS, and ISIS. The numerical results indicate that BSR can generally outperform the penalized likelihood methods. The models selected by BSR tend to be sparser and, more importantly, of higher prediction ability. In addition, the performance of the penalized likelihood methods tends to deteriorate as the number of predictors increases, while this is not significant for BSR. Supplementary materials for this article are available online. © 2013 American Statistical Association.
Mixing time study to select suitable stirrer for electrorefiner. Contributed Paper RD-03
International Nuclear Information System (INIS)
Agarwal, Sourabh; Mythili, M; Joseph, Joby; Nandakumar, V.; Muralidharan, B.; Padmakumar, G.; Rajan, K.K.
2014-01-01
Pyro-processing is an alternative to conventional methods of aqueous reprocessing of nuclear fuels. Electrorefining is an important process step in pyro-processing, carried out in a high-temperature molten salt bath in an electrorefiner. The recovery of actinides from the spent fuels has to be high. One of the methods to achieve this is to ensure proper mixing of the molten salt in the electrorefiner. The optimum design of the stirrer should ensure efficient mixing with minimum mixing time. Studies have been carried out in an engineering-scale model of the electrorefiner to study the mixing phenomena. This paper brings out the series of experiments conducted on an ambient temperature electrorefiner to select a suitable stirrer. (author)
A metal ion charged mixed matrix membrane for selective adsorption of hemoglobin
Tetala, K.K.R.; Skrzypek, K.; Levisson, M.; Stamatialis, D.F.
2013-01-01
In this work, we developed a mixed matrix membrane (MMM) by incorporating 20–40 µm iminodiacetic-acid-modified immobeads within a porous ethylene vinyl alcohol (EVAL) polymer matrix. The MMMs were charged with copper ions for selective adsorption of bovine hemoglobin in the presence of bovine serum albumin.
Mode-Selective Wavelength Conversion Based on Four-Wave Mixing in a Multimode Silicon Waveguide
DEFF Research Database (Denmark)
Ding, Yunhong; Xu, Jing; Ou, Haiyan
2013-01-01
We report all-optical mode-selective wavelength conversion based on four-wave mixing in a multimode Si waveguide. A two-mode division multiplexing circuit using tapered directional coupler based (de)multiplexers is used for the application. Experimental results show clear eye-diagrams and moderate...
Research on ration selection of mixed absorbent solution for membrane air-conditioning system
International Nuclear Information System (INIS)
Li, Xiu-Wei; Zhang, Xiao-Song; Wang, Fang; Zhao, Xiao; Zhang, Zhuo
2015-01-01
Highlights: • We derive models of the membrane air-conditioning system with mixed absorbents. • We analyze system COP, cost-effectiveness and economy. • The paper provides a new method for ideal absorbent selection. • A total solute concentration of 50% achieves the best cost-effectiveness and economy. - Abstract: Absorption air-conditioning systems are a good alternative to vapor compression systems for developing a low-carbon society. To improve the performance of the traditional absorption system, the membrane air-conditioning system is configured, and its COP can reach as high as 6. Mixed absorbents have potential for cost reduction of the membrane system while maintaining a high COP. With the purpose of finding ideal mixed absorbent groups, this paper analyzes the COP, cost-effectiveness and economy of the membrane system with mixed LiBr–CaCl2 absorbent solution. Models of the system have been developed for the analysis. The results show the COP is higher for absorbent groups with a lower total solute concentration and a higher concentration ratio of LiBr. They also reveal that a total solute concentration of about 50% achieves the best cost-effectiveness and economy. The process of the analysis provides a useful method for mixed absorbent selection
Selection of analytical methods for mixed waste analysis at the Hanford Site
International Nuclear Information System (INIS)
Morant, P.M.
1994-09-01
This document describes the process that the US Department of Energy (DOE), Richland Operations Office (RL) and contractor laboratories use to select appropriate or develop new or modified analytical methods. These methods are needed to provide reliable mixed waste characterization data that meet project-specific quality assurance (QA) requirements while also meeting health and safety standards for handling radioactive materials. This process will provide the technical basis for DOE's analysis of mixed waste and support requests for regulatory approval of these new methods when they are used to satisfy the regulatory requirements of the Hanford Federal Facility Agreement and Consent Order (Tri-party Agreement) (Ecology et al. 1992)
Selectivity of single, mixed, and modified pseudostationary phases in electrokinetic chromatography.
Fuguet, Elisabet; Ràfols, Clara; Bosch, Elisabeth; Abraham, Michael H; Rosés, Martí
2006-05-01
The selectivity of a compilation of single, mixed, and modified EKC pseudostationary phases, described in the literature and characterized through the solvation parameter model, is analyzed. Not only micellar systems of different natures but also microemulsion, polymeric, and liposomal phases have been included. In order to compare the systems, a principal component analysis of the coefficients of the solvation equation is performed. This analysis yields direct information on the system properties and differences in selectivity, as well as evidence of a lack of accuracy in some system characterizations. These results become a very useful tool for performing separations with mixtures of surfactants, since it is possible to know which mixtures will provide a greater selectivity variation by changing only the composition of the pseudostationary phases. Furthermore, the variation of the selectivity of some mixtures, as well as the effect of the addition of organic solvents on selectivity, is also discussed.
International Nuclear Information System (INIS)
Streit, R.D.; Couture, S.A.
1995-03-01
The purpose of this document is to establish the foundation for the selection and implementation of technologies to be demonstrated in the Mixed Waste Management Facility, and to select the technologies for initial pilot-scale demonstration. Criteria are defined for judging demonstration technologies, and the framework for future technology selection is established. On the basis of these criteria, an initial suite of technologies was chosen, and the demonstration implementation scheme was developed. Part I, previously released, addresses the selection of the primary processes. Part II addresses process support systems that are considered "demonstration technologies." Other support technologies, e.g., facility off-gas, receiving and shipping, and water treatment, while part of the integrated demonstration, use best available commercial equipment and are not selected against the demonstration technology criteria
Energy Technology Data Exchange (ETDEWEB)
Bostick, W.D.; Hoffmann, D.P.; Chiang, J.M.; Hermes, W.H.; Gibson, L.V. Jr.; Richmond, A.A. [Martin Marietta Energy Systems, Inc., Oak Ridge, TN (United States); Mayberry, J. [Science Applications International Corp., Idaho Falls, ID (United States); Frazier, G. [Univ. of Tennessee, Knoxville, TN (United States)
1994-01-01
This report summarizes the formulation of surrogate waste packages, representing the major bulk constituent compositions for 12 waste stream classifications selected by the US DOE Mixed Waste Treatment Program. These waste groupings include: neutral aqueous wastes; aqueous halogenated organic liquids; ash; high organic content sludges; adsorbed aqueous and organic liquids; cement sludges, ashes, and solids; chloride; sulfate, and nitrate salts; organic matrix solids; heterogeneous debris; bulk combustibles; lab packs; and lead shapes. Insofar as possible, formulation of surrogate waste packages are referenced to authentic wastes in inventory within the DOE; however, the surrogate waste packages are intended to represent generic treatability group compositions. The intent is to specify a nonradiological synthetic mixture, with a minimal number of readily available components, that can be used to represent the significant challenges anticipated for treatment of the specified waste class. Performance testing and evaluation with use of a consistent series of surrogate wastes will provide a means for the initial assessment (and intercomparability) of candidate treatment technology applicability and performance. Originally the surrogate wastes were intended for use with emerging thermal treatment systems, but use may be extended to select nonthermal systems as well.
Harnessing high-dimensional hyperentanglement through a biphoton frequency comb
Xie, Zhenda; Zhong, Tian; Shrestha, Sajan; Xu, Xinan; Liang, Junlin; Gong, Yan-Xiao; Bienfang, Joshua C.; Restelli, Alessandro; Shapiro, Jeffrey H.; Wong, Franco N. C.; Wei Wong, Chee
2015-08-01
Quantum entanglement is a fundamental resource for secure information processing and communications, and hyperentanglement or high-dimensional entanglement has been separately proposed for its high data capacity and error resilience. The continuous-variable nature of the energy-time entanglement makes it an ideal candidate for efficient high-dimensional coding with minimal limitations. Here, we demonstrate the first simultaneous high-dimensional hyperentanglement using a biphoton frequency comb to harness the full potential in both the energy and time domain. Long-postulated Hong-Ou-Mandel quantum revival is exhibited, with up to 19 time-bins and 96.5% visibilities. We further witness the high-dimensional energy-time entanglement through Franson revivals, observed periodically at integer time-bins, with 97.8% visibility. This qudit state is observed to simultaneously violate the generalized Bell inequality by up to 10.95 standard deviations while observing recurrent Clauser-Horne-Shimony-Holt S-parameters up to 2.76. Our biphoton frequency comb provides a platform for photon-efficient quantum communications towards the ultimate channel capacity through energy-time-polarization high-dimensional encoding.
Directory of Open Access Journals (Sweden)
Jeane Eliete Laguila Visentainer
CONTEXT: Mixed lymphocyte culturing has led to conflicting opinions regarding the selection of donors for bone marrow transplantation. The association between a positive mixed lymphocyte culture and the development of graft-versus-host disease (GVHD) is unclear. The use of exogenous cytokines in mixed lymphocyte cultures could be an alternative for increasing the sensitivity of culture tests. OBJECTIVE: To increase the sensitivity of mixed lymphocyte cultures between donor and recipient human leukocyte antigen (HLA) identical siblings, using exogenous cytokines, in order to predict post-transplantation GVHD and/or rejection. TYPE OF STUDY: Prospective study. SETTING: Bone Marrow Transplantation Unit, Universidade Estadual de Campinas. PARTICIPANTS: Seventeen patients with hematological malignancies and their respective donors selected for bone marrow transplantation procedures. PROCEDURES: Standard and modified mixed lymphocyte culturing by cytokine supplementation was carried out using donor and recipient cells typed for HLA. MAIN MEASUREMENTS: Autologous and allogeneic responses in mixed lymphocyte cultures after the addition of IL-4 or IL-2. RESULTS: In comparison with the standard method, average responses in the modified mixed lymphocyte cultures increased by a factor of 2.0 using IL-4 (p < 0.001) and 6.4 using IL-2 (p < 0.001) for autologous donor culture responses. For donor-versus-recipient culture responses, the increase was by a factor of 1.9 using IL-4 (p < 0.001) and 4.1 using IL-2 (p < 0.001). For donor-versus-unrelated culture responses, no significant increase was observed using IL-4, and a mean response inhibition of 20% was observed using IL-2 (p < 0.001). Neither of the cytokines produced a significant difference in the unrelated control versus recipient cell responses. CONCLUSION: IL-4 supplementation was the best option for increasing mixed lymphocyte culture sensitivity. However, IL-4 also increased autologous responses, albeit less
Directory of Open Access Journals (Sweden)
Pau Baya
2011-05-01
Remenat ("mixed" in Catalan), or "revoltillo" ("scrambled" in Spanish), is a dish which, in Catalunya, consists of a beaten egg cooked with vegetables or other ingredients, normally prawns or asparagus. It is delicious. "Scrambled" refers to the action of mixing the beaten egg with the other ingredients in a pan, normally using a wooden spoon. Thought is frequently an amalgam of past ideas put through a spinner and rhythmically shaken around like a cocktail until a uniform, dense paste is made. This malleable product, rather like a cake mixture, can be deformed by pulling it out, rolling it around, adapting its shape to the commands of one's hands or of the tool being used on it. In the piece Mixed, the contortion of the wood seeks to reproduce the plasticity of this slow, heavy movement. Each piece lays itself on the next consecutively, like a tongue of incandescent lava advancing slowly but with unstoppable inertia.
Mixed integer linear programming model for dynamic supplier selection problem considering discounts
Directory of Open Access Journals (Sweden)
Adi Wicaksono Purnawan
2018-01-01
Supplier selection is one of the most important elements of supply chain management. It involves the evaluation of many factors, such as material costs, transportation costs, quality, delays, supplier capacity, storage capacity, and others. Each of these factors varies with time; therefore, the supplier identified for one period will not necessarily be the same one to supply the same product in the next period. A mixed integer linear programming (MILP) model was therefore developed to address the dynamic supplier selection problem (DSSP). In this paper, a MILP model is built to solve the lot-sizing problem with multiple suppliers, multiple periods, multiple products, and quantity discounts. The buyer has to decide which products will be supplied by which suppliers in which periods, taking discounts into account. The MILP model is validated with randomly generated data and solved with Lingo 16.
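To make the selection problem concrete, here is a minimal sketch of the decision the abstract describes: assigning one supplier per period so that total cost, including all-units quantity discounts, is minimized. The supplier data and demands are made up, and a brute-force enumeration stands in for a real MILP solver such as Lingo, which is feasible only for tiny instances.

```python
from itertools import product

# Hypothetical data: unit cost plus an all-units discount (rate, minimum qty).
suppliers = {
    "A": {"unit_cost": 5.0, "discount_rate": 0.10, "discount_qty": 100},
    "B": {"unit_cost": 4.8, "discount_rate": 0.00, "discount_qty": 0},
}
demand = [100, 80]  # demand per period

def order_cost(s, qty):
    """All-units discount: the reduced price applies once qty reaches the threshold."""
    info = suppliers[s]
    price = info["unit_cost"]
    if info["discount_rate"] > 0 and qty >= info["discount_qty"]:
        price *= 1 - info["discount_rate"]
    return price * qty

def best_plan():
    """Enumerate every supplier-per-period assignment and keep the cheapest."""
    best = None
    for plan in product(suppliers, repeat=len(demand)):
        cost = sum(order_cost(s, q) for s, q in zip(plan, demand))
        if best is None or cost < best[1]:
            best = (plan, cost)
    return best

plan, cost = best_plan()
print(plan, cost)  # the discount makes A cheapest for the large order only
```

The discount is what makes the problem "dynamic": supplier A wins the large first-period order only because the 100-unit threshold is reached, while B wins the smaller second-period order.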
An Unbiased Distance-based Outlier Detection Approach for High-dimensional Data
DEFF Research Database (Denmark)
Nguyen, Hoang Vu; Gopalkrishnan, Vivekanand; Assent, Ira
2011-01-01
The proposed approach treats outlierness as a local rather than a global property. Unlike existing approaches, it is not grid-based and is dimensionality-unbiased; its performance is therefore impervious to grid resolution as well as to the curse of dimensionality. In addition, our approach ranks the outliers, allowing users to select the number of desired outliers and thus mitigating the issue of a high false alarm rate. Extensive empirical studies on real datasets show that our approach efficiently and effectively detects outliers, even in high-dimensional spaces.
GAMLSS for high-dimensional data – a flexible approach based on boosting
Mayr, Andreas; Fenske, Nora; Hofner, Benjamin; Kneib, Thomas; Schmid, Matthias
2010-01-01
Generalized additive models for location, scale and shape (GAMLSS) are a popular semi-parametric modelling approach that, in contrast to conventional GAMs, regress not only the expected mean but every distribution parameter (e.g. location, scale and shape) on a set of covariates. Current fitting procedures for GAMLSS are infeasible for high-dimensional data setups and require variable selection based on (potentially problematic) information criteria. The present work describes a boosting algorithm…
International Nuclear Information System (INIS)
Allan, Grant; Eromenko, Igor; McGregor, Peter; Swales, Kim
2011-01-01
Standalone levelised cost assessments of electricity supply options miss an important contribution that renewable and non-fossil fuel technologies can make to the electricity portfolio: that of reducing the variability of electricity costs, and their potentially damaging impact upon economic activity. Portfolio theory applications to the electricity generation mix have shown that renewable technologies, their costs being largely uncorrelated with non-renewable technologies, can offer such benefits. We look at the existing Scottish generation mix and examine drivers of changes out to 2020. We assess recent scenarios for the Scottish generation mix in 2020 against mean-variance efficient portfolios of electricity-generating technologies. Each of the scenarios studied implies a portfolio cost of electricity that is between 22% and 38% higher than the portfolio cost of electricity in 2007. These scenarios prove to be mean-variance 'inefficient' in the sense that, for example, lower variance portfolios can be obtained without increasing portfolio costs, typically by expanding the share of renewables. As part of extensive sensitivity analysis, we find that Wave and Tidal technologies can contribute to lower risk electricity portfolios, while not increasing portfolio cost. - Research Highlights: → Portfolio analysis of scenarios for Scotland's electricity generating mix in 2020. → Reveals potential inefficiencies of selecting mixes based on levelised cost alone. → Portfolio risk-reducing contribution of Wave and Tidal technologies assessed.
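The mean-variance reasoning in the abstract can be illustrated with a two-technology sketch: when two generating technologies have uncorrelated cost shocks, some mix of them has a lower cost variance than either technology alone. All numbers below are hypothetical and the grid search merely stands in for the closed-form efficient weight.

```python
# A minimal mean-variance sketch with two technologies whose fuel-cost
# shocks are uncorrelated (rho = 0); costs and volatilities are made up.
def portfolio(w, c1, c2, s1, s2, rho=0.0):
    """Expected cost and cost variance of a mix: share w in tech 1, 1-w in tech 2."""
    cost = w * c1 + (1 - w) * c2
    var = (w * s1) ** 2 + ((1 - w) * s2) ** 2 + 2 * w * (1 - w) * rho * s1 * s2
    return cost, var

def min_variance_weight(s1, s2, rho=0.0, steps=1000):
    # Brute-force the minimum-variance weight on a grid;
    # for rho = 0 the closed form is w* = s2**2 / (s1**2 + s2**2).
    return min((w / steps for w in range(steps + 1)),
               key=lambda w: portfolio(w, 0, 0, s1, s2, rho)[1])

w = min_variance_weight(2.0, 3.0)
_, v = portfolio(w, 70.0, 90.0, 2.0, 3.0)
print(w, v)  # diversification: variance below either single-technology portfolio
```

With volatilities 2 and 3 the efficient share is about 0.69 in the first technology, and the mixed portfolio's variance is below the variance of either pure portfolio, which is the diversification benefit the scenarios above fail to exploit.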
Energy Technology Data Exchange (ETDEWEB)
Allan, Grant, E-mail: grant.j.allan@strath.ac.u [Fraser of Allander Institute, Department of Economics, University of Strathclyde, Sir William Duncan Building, 130 Rottenrow, Glasgow G4 0GE (United Kingdom); Eromenko, Igor; McGregor, Peter [Fraser of Allander Institute, Department of Economics, University of Strathclyde, Sir William Duncan Building, 130 Rottenrow, Glasgow G4 0GE (United Kingdom); Swales, Kim [Department of Economics, University of Strathclyde, Sir William Duncan Building, 130 Rottenrow, Glasgow G4 0GE (United Kingdom)
2011-01-15
Standalone levelised cost assessments of electricity supply options miss an important contribution that renewable and non-fossil fuel technologies can make to the electricity portfolio: that of reducing the variability of electricity costs, and their potentially damaging impact upon economic activity. Portfolio theory applications to the electricity generation mix have shown that renewable technologies, their costs being largely uncorrelated with non-renewable technologies, can offer such benefits. We look at the existing Scottish generation mix and examine drivers of changes out to 2020. We assess recent scenarios for the Scottish generation mix in 2020 against mean-variance efficient portfolios of electricity-generating technologies. Each of the scenarios studied implies a portfolio cost of electricity that is between 22% and 38% higher than the portfolio cost of electricity in 2007. These scenarios prove to be mean-variance 'inefficient' in the sense that, for example, lower variance portfolios can be obtained without increasing portfolio costs, typically by expanding the share of renewables. As part of extensive sensitivity analysis, we find that Wave and Tidal technologies can contribute to lower risk electricity portfolios, while not increasing portfolio cost. - Research Highlights: → Portfolio analysis of scenarios for Scotland's electricity generating mix in 2020. → Reveals potential inefficiencies of selecting mixes based on levelised cost alone. → Portfolio risk-reducing contribution of Wave and Tidal technologies assessed.
Selected Aspects of Cultural Differences and their Influence on the International Marketing Mix
Svendsen, Anne Sakseide
2010-01-01
Culture is an important business element that can make the difference between success and failure for businesses expanding abroad. The differences between two cultures do not have to be large, but they still have to be considered. Hence, knowledge about culture plays an important role in a company's decision-making process. This master thesis focuses on selected aspects of cultural differences and their influence on the international marketing mix. The first part of the thesis…
Mixed matrix membranes with fast and selective transport pathways for efficient CO2 separation
Hou, Jinpeng; Li, Xueqin; Guo, Ruili; Zhang, Jianshu; Wang, Zhongming
2018-03-01
To improve CO2 separation performance, porous carbon nanosheets (PCNs) were used as a filler in a Pebax MH 1657 (Pebax) matrix to fabricate mixed matrix membranes (MMMs). The PCNs adopted a preferential horizontal orientation within the Pebax matrix because of their extremely large 2D plane and nanoscale thickness. The micropores of the PCNs therefore provided fast CO2 transport pathways, which increased CO2 permeability. Overlapping of the PCNs and penetration of polymer chains into their pores reduced the effective pore size of the PCNs, which improved the CO2/gas selectivity. As a result, the CO2 permeability and CO2/CH4 selectivity of the Pebax membrane with 10 wt% PCN loading (Pebax-PCNs-10) were 520 barrer and 51, respectively, for a CO2/CH4 mixed gas. The CO2 permeability and CO2/N2 selectivity of the Pebax-PCNs-10 membrane were 614 barrer and 61, respectively, for a CO2/N2 mixed gas.
Deng, Jiu-shuai; Mao, Ying-bo; Wen, Shu-ming; Liu, Jian; Xian, Yong-jun; Feng, Qi-cheng
2015-02-01
Selective flotation separation of Cu-Zn mixed sulfides has been proven to be difficult. Thus far, researchers have found no satisfactory way to separate Cu-Zn mixed sulfides by selective flotation, mainly because of the complex surface and interface interaction mechanisms in the flotation solution. Undesired activation occurs between copper ions and the sphalerite surfaces. In addition to recycled water and mineral dissolution, ancient fluids in the minerals are observed to be a new source of metal ions. In this study, significant amounts of ancient fluids were found to exist in Cu-Zn sulfide and gangue minerals, mostly as gas-liquid fluid inclusions. The concentration of copper ions released from the ancient fluids reached 1.02 × 10⁻⁶ mol/L, whereas, in the cases of sphalerite and quartz, this concentration was 0.62 × 10⁻⁶ mol/L and 0.44 × 10⁻⁶ mol/L, respectively. As a result, the ancient fluid is a significant source of copper ions compared to mineral dissolution under the same experimental conditions, which promotes the unwanted activation of sphalerite. Therefore, the ancient fluid is considered to be a new factor that affects the selective flotation separation of Cu-Zn mixed sulfide ores.
Silva, V B; Daher, R F; Araújo, M S B; Souza, Y P; Cassaro, S; Menezes, B R S; Gravina, L M; Novo, A A C; Tardin, F D; Júnior, A T Amaral
2017-09-27
Genetically improved cultivars of elephant grass need to be adapted to different ecosystems, with faster growth and lower seasonality of biomass production over the year. This study aimed to use selection indices based on mixed models (REML/BLUP) for selecting families, and progenies within full-sib families, of elephant grass (Pennisetum purpureum) for biomass production. One hundred and twenty full-sib progenies were assessed from 2014 to 2015 in a randomized block design with three replications. During this period, the traits dry matter production, number of tillers, plant height, stem diameter, and neutral detergent fiber were assessed. Families 3 and 1 were the best classified and are the most suitable for selection. Progenies 40, 45, 46, and 49 occupied the first positions in the three indices assessed in the first cut. The gain for individual 40 was 161.76% using the Mulamba and Mock index. The use of selection indices based on mixed models is advantageous in elephant grass, since they provide high gains with selection, distributed among all the assessed traits, in the situation most appropriate to breeding programs.
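The Mulamba and Mock index used above is a rank-summation index: genotypes are ranked on each trait separately, the ranks are summed, and the genotypes with the lowest totals are selected. The sketch below shows that mechanic on made-up trait values (all traits treated as "higher is better"); the real study applies it to BLUP-predicted genetic values, not raw means.

```python
# A sketch of the Mulamba & Mock rank-summation index: rank genotypes on each
# trait (best = rank 1), sum the ranks, and select the lowest totals.
def rank_sum_index(genotypes):
    """genotypes: {name: [trait1, trait2, ...]}, every trait maximized."""
    names = list(genotypes)
    n_traits = len(next(iter(genotypes.values())))
    totals = {name: 0 for name in names}
    for t in range(n_traits):
        ordered = sorted(names, key=lambda g: genotypes[g][t], reverse=True)
        for rank, name in enumerate(ordered, start=1):
            totals[name] += rank
    return sorted(names, key=lambda g: totals[g])  # best genotype first

# Hypothetical values: dry matter (t/ha), tiller count, plant height (m).
data = {
    "P40": [9.1, 120, 2.3],
    "P45": [8.7, 110, 2.4],
    "P13": [5.2, 80, 1.9],
}
print(rank_sum_index(data))  # best-to-worst ordering by summed ranks
```

Summing ranks rather than raw values is what lets the index balance gains across traits measured in incompatible units, which is the "distributed among all the assessed traits" property the abstract highlights.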
Carbon dioxide selective mixed matrix composite membrane containing ZIF-7 nano-fillers
Li, Tao
2013-01-01
Mixed matrix materials made from selective inorganic fillers and polymers are very attractive for the manufacturing of gas separation membranes, but only few of these materials have been manufactured into high-performance asymmetric or composite membranes. We report here the first mixed matrix composite membrane made of commercially available poly(amide-b-ethylene oxide) (Pebax®1657, Arkema) mixed with the nano-sized zeolitic imidazolate framework ZIF-7. This hybrid material has been successfully deposited as a thin layer (less than 1 μm) on a porous polyacrylonitrile (PAN) support. An intermediate gutter layer of PTMSP was applied to provide a flat, smooth surface for coating and to avoid polymer penetration into the porous support. Key features of this work are the preparation and use of ultra-small ZIF-7 nano-particles (around 30-35 nm) and the membrane processability of Pebax®1657. SEM pictures show that excellent adhesion and almost ideal morphology between the two phases have been obtained simply by mixing the as-synthesized ZIF-7 suspension into the Pebax®1657 dope; no voids or clusters can be observed. The performance of the composite membrane is characterized by single-gas permeation measurements of CO2, N2 and CH4. Both permeability (CO2 permeability up to 145 barrer) and gas selectivity (CO2/N2 up to 97 and CO2/CH4 up to 30) can be increased at low ZIF-7 loading. The CO2/CH4 selectivity can be further increased to 44 with a filler loading of 34 wt%, but the permeability is then reduced compared to the pure Pebax®1657 membrane; polymer chain rigidification at high filler loading is the suspected reason. The composite membranes prepared in this work show better performance in terms of permeance and selectivity than asymmetric mixed matrix membranes described in the recent literature. Overall, the ZIF-7/Pebax mixed matrix membranes show a high performance for CO2 separation from methane and other gas streams. They are easy to
CO2 Selective, Zeolitic Imidazolate Framework-7 Based Polymer Composite Mixed-Matrix Membranes
Chakrabarty, Tina; Neelakanda, Pradeep; Peinemann, Klaus-Viktor
2018-01-01
CO2 removal is necessary to mitigate the effects of global warming, but separating CO2 from natural gas, biogas, and other gas streams is a challenging process. Developing hybrid membranes from polymers and metal-organic framework (MOF) particles is a viable option to overcome this challenge. A ZIF-7 nano-filler synthesized in our lab was embedded into a designed polymer matrix at various loadings, and the performance of the resulting mixed matrix membranes was evaluated in terms of gas permeance and selectivity. Hybrid membranes with loadings of 20, 30 and 40 wt% were developed and tested at room temperature with custom-made time-lag equipment, and a jump in selectivity was observed compared with the pristine polymer. Selectivity increased with ZIF-7 loading, and a commercially attractive region for CO2/CH4 selectivity was reached: the best performance was seen for the 40 wt% ZIF-7 loaded membrane, with an ideal CO2/CH4 selectivity of 39. The obtained CO2/CH4 selectivity was 105% higher than that of the pristine polymer, with only a slight decrease in permeance. Morphological characterization of the membranes showed excellent compatibility and adhesion between polymer and particles.
CO2 Selective, Zeolitic Imidazolate Framework-7 Based Polymer Composite Mixed-Matrix Membranes
Chakrabarty, Tina
2018-05-17
CO2 removal is necessary to mitigate the effects of global warming, but separating CO2 from natural gas, biogas, and other gas streams is a challenging process. Developing hybrid membranes from polymers and metal-organic framework (MOF) particles is a viable option to overcome this challenge. A ZIF-7 nano-filler synthesized in our lab was embedded into a designed polymer matrix at various loadings, and the performance of the resulting mixed matrix membranes was evaluated in terms of gas permeance and selectivity. Hybrid membranes with loadings of 20, 30 and 40 wt% were developed and tested at room temperature with custom-made time-lag equipment, and a jump in selectivity was observed compared with the pristine polymer. Selectivity increased with ZIF-7 loading, and a commercially attractive region for CO2/CH4 selectivity was reached: the best performance was seen for the 40 wt% ZIF-7 loaded membrane, with an ideal CO2/CH4 selectivity of 39. The obtained CO2/CH4 selectivity was 105% higher than that of the pristine polymer, with only a slight decrease in permeance. Morphological characterization of the membranes showed excellent compatibility and adhesion between polymer and particles.
Supporting Dynamic Quantization for High-Dimensional Data Analytics.
Guzun, Gheorghi; Canahuate, Guadalupe
2017-05-01
Similarity searches are at the heart of exploratory data analysis tasks. Distance metrics are typically used to characterize the similarity between data objects represented as feature vectors. However, when the dimensionality of the data increases and the number of features is large, traditional distance metrics fail to distinguish between the closest and furthest data points. Localized distance functions have been proposed as an alternative to traditional distance metrics. These functions only consider dimensions close to the query when computing the distance/similarity. Furthermore, indexing support for ad-hoc queries is needed to enable interactive exploration of high-dimensional data. In this work we set out to investigate whether bit-sliced indices can be used for exploratory analytics such as similarity searches and data clustering over high-dimensional big data. We also propose a novel dynamic quantization called Query-dependent Equi-Depth (QED) quantization and show its effectiveness in characterizing high-dimensional similarity. When applying QED we observe improvements in kNN classification accuracy over traditional distance functions.
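The failure of traditional distance metrics that motivates this work is easy to reproduce: as dimensionality grows, the relative contrast between a query's nearest and farthest neighbours collapses. The sketch below measures that contrast on uniform random points (the data and sample sizes are arbitrary).

```python
import random

# Sketch of distance concentration: the relative contrast
# (d_max - d_min) / d_min between the farthest and nearest neighbour
# of a query shrinks as the dimensionality grows.
def contrast(dim, n_points=200, seed=0):
    rng = random.Random(seed)
    query = [rng.random() for _ in range(dim)]
    dists = []
    for _ in range(n_points):
        p = [rng.random() for _ in range(dim)]
        dists.append(sum((a - b) ** 2 for a, b in zip(query, p)) ** 0.5)
    return (max(dists) - min(dists)) / min(dists)

low, high = contrast(2), contrast(512)
print(low, high)  # contrast is large in 2-D, tiny in 512-D
```

In 2 dimensions the nearest point is far closer than the farthest one; in 512 dimensions all 200 distances crowd together, so a fixed-metric kNN query carries little information, which is the gap QED-style query-dependent quantization targets.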
A hybridized K-means clustering approach for high dimensional ...
African Journals Online (AJOL)
International Journal of Engineering, Science and Technology… Due to the incredible growth of high-dimensional datasets, conventional database querying methods are inadequate for extracting useful information, so researchers nowadays… Recently, cluster analysis has become a popular data analysis method in a number of areas.
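For reference, the baseline that hybridized variants build on is plain k-means (Lloyd's algorithm): assign each point to its nearest center, recompute centers as cluster means, repeat. A minimal pure-Python sketch with a deterministic initialization (the sample points are made up):

```python
# Minimal k-means (Lloyd's algorithm); init is deterministic for clarity,
# whereas production implementations use smarter seeding (e.g. k-means++).
def kmeans(points, k, iters=50):
    centers = points[:k]  # naive init: first k points
    for _ in range(iters):
        # assignment step: each point joins its nearest center's cluster
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))
            clusters[i].append(p)
        # update step: each center moves to the mean of its cluster
        centers = [
            tuple(sum(xs) / len(xs) for xs in zip(*cl)) if cl else centers[i]
            for i, cl in enumerate(clusters)
        ]
    return centers

pts = [(0.0, 0.1), (0.2, 0.0), (0.1, 0.2), (5.0, 5.1), (5.2, 4.9), (4.9, 5.0)]
print(sorted(kmeans(pts, 2)))  # one center near the origin, one near (5, 5)
```

The squared Euclidean distance in the assignment step is exactly the quantity that degrades in high dimensions (see the distance-concentration discussion above), which is why hybridized variants modify either the distance or the search strategy.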
On Robust Information Extraction from High-Dimensional Data
Czech Academy of Sciences Publication Activity Database
Kalina, Jan
2014-01-01
Vol. 9, No. 1 (2014), pp. 131-144, ISSN 1452-4864. Grant (other): GA ČR (CZ) GA13-01930S. Institutional support: RVO:67985807. Keywords: data mining; high-dimensional data; robust econometrics; outliers; machine learning. Subject RIV: IN - Informatics, Computer Science
Inference in High-dimensional Dynamic Panel Data Models
DEFF Research Database (Denmark)
Kock, Anders Bredahl; Tang, Haihan
We establish oracle inequalities for a version of the Lasso in high-dimensional fixed effects dynamic panel data models. The inequalities are valid for the coefficients of the dynamic and exogenous regressors. Separate oracle inequalities are derived for the fixed effects. Next, we show how one can...
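The Lasso the oracle inequalities concern can be computed by coordinate descent with soft-thresholding. The sketch below is for a plain cross-sectional regression, not the fixed-effects panel setting of the paper, and the toy data are made up; it does show the key property the theory relies on: coefficients of irrelevant regressors are shrunk exactly to zero.

```python
# Pure-Python Lasso via coordinate descent for the objective
#   (1/2n) * ||y - X b||^2 + lam * ||b||_1
def soft_threshold(z, lam):
    return (abs(z) - lam) * (1 if z > 0 else -1) if abs(z) > lam else 0.0

def lasso(X, y, lam, iters=200):
    n, p = len(X), len(X[0])
    beta = [0.0] * p
    for _ in range(iters):
        for j in range(p):
            # correlation of feature j with the partial residual (j left out)
            rho = sum(X[i][j] * (y[i] - sum(X[i][k] * beta[k]
                                            for k in range(p) if k != j))
                      for i in range(n)) / n
            zj = sum(X[i][j] ** 2 for i in range(n)) / n
            beta[j] = soft_threshold(rho, lam) / zj
    return beta

# y depends (almost exactly) on the first regressor; the second is pure noise.
X = [[1.0, 0.3], [2.0, -0.5], [3.0, 0.1], [4.0, -0.2], [5.0, 0.4]]
y = [2.1, 3.9, 6.2, 8.0, 9.8]
print(lasso(X, y, lam=0.5))  # second coefficient is exactly zero
```

The exact zero on the noise coefficient, rather than a merely small value, is what makes the Lasso a variable-selection device and what the oracle inequalities quantify.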
Pricing High-Dimensional American Options Using Local Consistency Conditions
Berridge, S.J.; Schumacher, J.M.
2004-01-01
We investigate a new method for pricing high-dimensional American options. The method is of finite difference type but is also related to Monte Carlo techniques in that it involves a representative sampling of the underlying variables.An approximating Markov chain is built using this sampling and
Irregular grid methods for pricing high-dimensional American options
Berridge, S.J.
2004-01-01
This thesis proposes and studies numerical methods for pricing high-dimensional American options; important examples being basket options, Bermudan swaptions and real options. Four new methods are presented and analysed, both in terms of their application to various test problems, and in terms of
Statistical Analysis for High-Dimensional Data : The Abel Symposium 2014
Bühlmann, Peter; Glad, Ingrid; Langaas, Mette; Richardson, Sylvia; Vannucci, Marina
2016-01-01
This book features research contributions from The Abel Symposium on Statistical Analysis for High Dimensional Data, held in Nyvågar, Lofoten, Norway, in May 2014. The focus of the symposium was on statistical and machine learning methodologies specifically developed for inference in “big data” situations, with particular reference to genomic applications. The contributors, who are among the most prominent researchers on the theory of statistics for high dimensional inference, present new theories and methods, as well as challenging applications and computational solutions. Specific themes include, among others, variable selection and screening, penalised regression, sparsity, thresholding, low dimensional structures, computational challenges, non-convex situations, learning graphical models, sparse covariance and precision matrices, semi- and non-parametric formulations, multiple testing, classification, factor models, clustering, and preselection. Highlighting cutting-edge research and casting light on...
New narrow boson resonances and SU(4) symmetry: Selection rules, SU(4) mixing, and mass formulas
International Nuclear Information System (INIS)
Takasugi, E.; Oneda, S.
1975-01-01
General SU(4) sum rules are obtained for bosons in the theoretical framework of asymptotic SU(4), chiral SU(4) ⊗ SU(4) charge algebra, and a simple mechanism of SU(4) and chiral SU(4) ⊗ SU(4) breaking. The sum rules exhibit a remarkable interplay of the masses, SU(4) mixing angles, and axial-vector matrix elements of 16-plet boson multiplets. Under a particular circumstance (i.e., in the "ideal" limit) this interplay produces selection rules which may explain the remarkable stability of the newly found narrow boson resonances. General SU(4) mass formulas and inter-SU(4)-multiplet mass relations are derived, and SU(4) mixing parameters are completely determined. The ground-state 1⁻⁻ and 0⁻⁺ 16-plets are especially discussed, and the masses of charmed and uncharmed new members of these multiplets are predicted.
Akhtar, Faheem Hassan
2017-09-13
Polybenzimidazole (PBI), a thermally and chemically stable polymer, is commonly used to fabricate membranes for applications like hydrogen recovery at temperatures of more than 300 °C, fuel cells working in a highly acidic environment, and nanofiltration in aggressive solvents. This report shows for the first time the use of PBI dense membranes for water vapor/gas separation applications. They showed an excellent selectivity and high water vapor permeability. Incorporation of inorganic hydrophilic titanium-based nano-fillers into the PBI matrix further increased the water vapor permeability and water vapor/N2 selectivity. The most selective mixed matrix membrane, with 0.5 wt% loading of TiO2 nanotubes, yielded a water vapor permeability of 6.8×10⁴ Barrer and a H2O/N2 selectivity of 3.9×10⁶. The most permeable membrane, with 1 wt% loading of carboxylated TiO2 nanoparticles, had a 7.1×10⁴ Barrer water vapor permeability and a H2O/N2 selectivity of 3.1×10⁶. The performance of these membranes in terms of water vapor transport and selectivity is among the highest reported. The remarkable ability of PBI to efficiently permeate water versus other gases opens the possibility of fabricating membranes for dehumidification of streams in harsh environments. This includes the removal of water from high-temperature reaction mixtures to shift the equilibrium towards products.
Torres, F E; Teodoro, P E; Rodrigues, E V; Santos, A; Corrêa, A M; Ceccon, G
2016-04-29
The aim of this study was to select erect cowpea (Vigna unguiculata L.) genotypes simultaneously for high adaptability, stability, and grain yield in Mato Grosso do Sul, Brazil, using mixed models. We conducted six trials of different cowpea genotypes in 2005 and 2006 in Aquidauana, Chapadão do Sul, Dourados, and Primavera do Leste. The experimental design was randomized complete blocks with four replications and 20 genotypes. Genetic parameters were estimated by restricted maximum likelihood/best linear unbiased prediction, and selection was based on the harmonic mean of the relative performance of genetic values method using three strategies: selection based on the predicted breeding value, considering the mean performance of the genotypes in all environments (no interaction effect); considering the performance in each environment (with an interaction effect); and simultaneous selection for grain yield, stability, and adaptability. The MNC99542F-5 and MNC99-537F-4 genotypes could be grown in various environments, as they exhibited high grain yield, adaptability, and stability. The average heritability of the genotypes was moderate to high, and the selective accuracy was 82%, indicating excellent potential for selection.
Akhtar, Faheem Hassan; Kumar, Mahendra; Villalobos, Luis Francisco; Shevate, Rahul; Vovusha, Hakkim; Schwingenschlögl, Udo; Peinemann, Klaus-Viktor
2017-01-01
Polybenzimidazole (PBI), a thermally and chemically stable polymer, is commonly used to fabricate membranes for applications like hydrogen recovery at temperatures of more than 300 °C, fuel cells working in a highly acidic environment, and nanofiltration in aggressive solvents. This report shows for the first time the use of PBI dense membranes for water vapor/gas separation applications. They showed an excellent selectivity and high water vapor permeability. Incorporation of inorganic hydrophilic titanium-based nano-fillers into the PBI matrix further increased the water vapor permeability and water vapor/N2 selectivity. The most selective mixed matrix membrane, with 0.5 wt% loading of TiO2 nanotubes, yielded a water vapor permeability of 6.8×10⁴ Barrer and a H2O/N2 selectivity of 3.9×10⁶. The most permeable membrane, with 1 wt% loading of carboxylated TiO2 nanoparticles, had a 7.1×10⁴ Barrer water vapor permeability and a H2O/N2 selectivity of 3.1×10⁶. The performance of these membranes in terms of water vapor transport and selectivity is among the highest reported. The remarkable ability of PBI to efficiently permeate water versus other gases opens the possibility of fabricating membranes for dehumidification of streams in harsh environments. This includes the removal of water from high-temperature reaction mixtures to shift the equilibrium towards products.
International Nuclear Information System (INIS)
Whyman, R.
1986-01-01
A key to any wider utilization of chemistry based on synthesis gas is an understanding of, and more particularly an ability to control, those factors which determine the selectivity of the C₁-to-C₂ transformation during the hydrogenation of carbon monoxide. With the exception of the rhodium-catalyzed conversion of carbon monoxide and hydrogen into ethylene glycol and methanol, in which molar ethylene glycol/methanol selectivities of ca. 2/1 may be achieved, other catalyst systems containing metals such as cobalt or ruthenium exhibit only poor selectivities to ethylene glycol. The initial studies in this area were based on the reasoning that, since the reduction of carbon monoxide to C₂ products is a complex, multi-step process, the use of appropriate combinations of metals could generate synergistic effects which might prove more effective (in terms of both catalytic activity and selectivity) than simply the sum of the individual metal components. In particular, the concept of combining a good hydrogenation catalyst with a good carbonylation, or 'CO insertion', catalyst seemed particularly germane. As a result of this approach the authors discovered an unprecedented example of the effect of catalyst promoters, particularly in the enhancement of C₂/C₁ selectivity, one which has led to the development of composite mixed-metal homogeneous catalyst systems for the conversion of CO/H₂ into C₂-oxygenate esters.
High Dimensional Modulation and MIMO Techniques for Access Networks
DEFF Research Database (Denmark)
Binti Othman, Maisara
Exploration of advanced modulation formats and multiplexing techniques for next-generation optical access networks is of interest, as these are promising solutions for delivering multiple services to end-users. This thesis addresses this from two different angles: high-dimensionality carrierless amplitude/phase (CAP) modulation, and 2×2 MIMO radio-over-fiber (RoF) employing orthogonal frequency division multiplexing (OFDM) with 5.6 GHz RoF signaling over all-vertical-cavity surface-emitting laser (VCSEL) WDM passive optical networks (PONs). Polarization division multiplexing (PDM) is employed to further increase the capacity per wavelength of the femto-cell network. A bit rate of up to 1.59 Gbps with fiber-wireless transmission over a 1 m air distance is demonstrated. The results presented in this thesis demonstrate the feasibility of high-dimensionality CAP in increasing the number of dimensions…
HSM: Heterogeneous Subspace Mining in High Dimensional Data
DEFF Research Database (Denmark)
Müller, Emmanuel; Assent, Ira; Seidl, Thomas
2009-01-01
Heterogeneous data, i.e. data with both categorical and continuous values, is common in many databases. However, most data mining algorithms assume either continuous or categorical attributes, but not both. In high-dimensional data, phenomena due to the "curse of dimensionality" pose additional challenges. Usually, due to locally varying relevance of attributes, patterns do not show across the full set of attributes. In this paper we propose HSM, which defines a new pattern model for heterogeneous high-dimensional data. It allows data mining in arbitrary subsets of the attributes that are relevant for the respective patterns. Based on this model we propose an efficient algorithm that is aware of the heterogeneity of the attributes. We extend an indexing structure for continuous attributes such that HSM indexing adapts to different attribute types. In our experiments we show that HSM efficiently mines…
Analysis of chaos in high-dimensional wind power system.
Wang, Cong; Zhang, Hongli; Fan, Wenhui; Ma, Ping
2018-01-01
A comprehensive analysis of the chaos of a high-dimensional wind power system is performed in this study. A high-dimensional wind power system is more complex than most power systems. An 11-dimensional wind power system proposed by Huang, which has not been analyzed in previous studies, is investigated. The chaotic dynamics of the wind power system are analyzed when the system is affected by external disturbances, including single-parameter and periodic disturbances, or when its parameters change, and the parameter ranges that produce chaos are obtained. The existence of chaos is confirmed by calculating and analyzing the Lyapunov exponents of all state variables and the state-variable sequence diagrams. Theoretical analysis and numerical simulations show that chaos will occur in the wind power system when parameter variations and external disturbances reach a certain degree.
HIGH DIMENSIONAL COVARIANCE MATRIX ESTIMATION IN APPROXIMATE FACTOR MODELS.
Fan, Jianqing; Liao, Yuan; Mincheva, Martina
2011-01-01
The variance-covariance matrix plays a central role in the inferential theories of high-dimensional factor models in finance and economics. Popular regularization methods that directly exploit sparsity are not applicable to many financial problems. Classical methods of estimating covariance matrices are based on strict factor models, which assume independent idiosyncratic components. This assumption, however, is restrictive in practical applications. By assuming a sparse error covariance matrix, we allow cross-sectional correlation to remain even after the common factors are taken out, which enables us to combine the merits of both approaches. We estimate the sparse covariance using the adaptive thresholding technique of Cai and Liu (2011), taking into account the fact that direct observations of the idiosyncratic components are unavailable. The impact of high dimensionality on covariance matrix estimation based on the factor structure is then studied.
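The thresholding step can be sketched in a few lines. This is an illustrative simplification (entry-adaptive hard thresholding of a residual covariance matrix at the sqrt(log p / n) rate), not the authors' exact estimator; the function name and the constant `c` are ours.

```python
import numpy as np

def threshold_covariance(residuals, c=0.5):
    """Hard-threshold the sample covariance of idiosyncratic residuals.

    Each off-diagonal entry gets its own threshold proportional to
    sqrt(log(p)/n), scaled by the entry's marginal standard deviations,
    mimicking adaptive (entry-dependent) thresholding.
    """
    n, p = residuals.shape
    S = np.cov(residuals, rowvar=False, bias=True)   # p x p sample covariance
    rate = np.sqrt(np.log(p) / n)                    # common convergence rate
    sd = np.sqrt(np.diag(S))
    tau = c * rate * np.outer(sd, sd)                # entry-dependent threshold
    S_thr = np.where(np.abs(S) >= tau, S, 0.0)
    np.fill_diagonal(S_thr, np.diag(S))              # never threshold variances
    return S_thr
```

With independent residuals, almost all off-diagonal entries fall below their thresholds and are set to zero, while the variances are kept intact.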
Hypergraph-based anomaly detection of high-dimensional co-occurrences.
Silva, Jorge; Willett, Rebecca
2009-03-01
This paper addresses the problem of detecting anomalous multivariate co-occurrences using a limited number of unlabeled training observations. A novel method based on a hypergraph representation of the data is proposed to deal with this very high-dimensional problem. Hypergraphs constitute an important extension of graphs, allowing edges to connect more than two vertices simultaneously. A variational Expectation-Maximization algorithm for detecting anomalies directly on the hypergraph domain, without any feature selection or dimensionality reduction, is presented. The resulting estimate can be used to calculate a measure of anomalousness based on the False Discovery Rate. The algorithm has O(np) computational complexity, where n is the number of training observations and p is the number of potential participants in each co-occurrence event. This efficiency makes the method ideally suited for very high-dimensional settings; the method also requires no tuning, bandwidth, or regularization parameters. The proposed approach is validated on both high-dimensional synthetic data and the Enron email database, where p > 75,000, and it is shown to outperform other state-of-the-art methods.
High-dimensional data in economics and their (robust) analysis
Czech Academy of Sciences Publication Activity Database
Kalina, Jan
2017-01-01
Roč. 12, č. 1 (2017), s. 171-183 ISSN 1452-4864 R&D Projects: GA ČR GA17-07384S Institutional support: RVO:67985556 Keywords : econometrics * high-dimensional data * dimensionality reduction * linear regression * classification analysis * robustness Subject RIV: BA - General Mathematics OBOR OECD: Business and management http://library.utia.cas.cz/separaty/2017/SI/kalina-0474076.pdf
High-dimensional Data in Economics and their (Robust) Analysis
Czech Academy of Sciences Publication Activity Database
Kalina, Jan
2017-01-01
Roč. 12, č. 1 (2017), s. 171-183 ISSN 1452-4864 R&D Projects: GA ČR GA17-07384S Grant - others:GA ČR(CZ) GA13-01930S Institutional support: RVO:67985807 Keywords : econometrics * high-dimensional data * dimensionality reduction * linear regression * classification analysis * robustness Subject RIV: BB - Applied Statistics, Operational Research OBOR OECD: Statistics and probability
Quantifying high dimensional entanglement with two mutually unbiased bases
Directory of Open Access Journals (Sweden)
Paul Erker
2017-07-01
Full Text Available We derive a framework for quantifying entanglement in multipartite and high-dimensional systems using only correlations in two unbiased bases. We furthermore develop such bounds for cases where the second basis is not characterized beyond being unbiased, thus enabling entanglement quantification with minimal assumptions. Finally, we show that our method is feasible to implement experimentally with readily available equipment and even conservative estimates of physical parameters.
Fradinho, J C; Reis, M A M; Oehmen, A
2016-11-15
Currently, the feast and famine (FF) regime is the most widely applied strategy to select for polyhydroxyalkanoate (PHA) accumulating organisms in PHA production systems with mixed microbial cultures. As an alternative to the FF regime, this work studied the possibility of utilizing a permanent feast regime as a new operational strategy to select for PHA-accumulating photosynthetic mixed cultures (PMCs). The PMC was selected in an illuminated environment, and acetate was constantly present in the mixed liquor to guarantee a feast regime. During steady-state operation, the culture presented low PHA accumulation levels, likely due to low light availability, which resulted in most of the acetate being used for biomass growth (Y_X/S of 0.64 ± 0.18 Cmol X/Cmol Acet). To confirm the light limitation on the PMC, SBR tests were conducted with higher light availability, at levels similar to what would be expected from natural sunlight. In this case, the Y_X/S decreased to 0.11 ± 0.01 Cmol X/Cmol Acet and the culture presented a PHB production yield on acetate of 0.67 ± 0.01 Cmol PHB/Cmol Acet, leading to a maximum PHB content of 60%. Unlike other studied PMCs, this PMC was capable of simultaneous growth and PHB accumulation continuously throughout the cycle. Thus far, 60% PHA content is the maximum value ever reported for a PMC, a result that supports the use of feast regimes as an alternative strategy for the selection of PHA-accumulating PMCs. Furthermore, the PMC also presented high phosphate removal rates, delivering an effluent that complies with phosphate discharge limits. The advantages of selecting PMCs under a permanent feast regime are that no aeration inputs are required; it allows higher PHA contents and phosphate removal rates in comparison to FF-operated PMC systems; and it represents a novel means of integrating wastewater treatment with resource recovery in the form of PHA. Copyright © 2016 Elsevier Ltd. All rights reserved.
Zhang, F.; Zhang, Y.; Ding, J.; Dai, K.; Van Loosdrecht, M.C.M.; Zeng, R.J.
2014-01-01
The control of metabolite production is difficult in mixed culture fermentation. This is particularly related to hydrogen inhibition. In this work, hydrogenotrophic methanogens were selectively enriched to reduce the hydrogen partial pressure and to realize efficient acetate production in
High dimensional model representation method for fuzzy structural dynamics
Adhikari, S.; Chowdhury, R.; Friswell, M. I.
2011-03-01
Uncertainty propagation in multi-parameter complex structures possess significant computational challenges. This paper investigates the possibility of using the High Dimensional Model Representation (HDMR) approach when uncertain system parameters are modeled using fuzzy variables. In particular, the application of HDMR is proposed for fuzzy finite element analysis of linear dynamical systems. The HDMR expansion is an efficient formulation for high-dimensional mapping in complex systems if the higher order variable correlations are weak, thereby permitting the input-output relationship behavior to be captured by the terms of low-order. The computational effort to determine the expansion functions using the α-cut method scales polynomically with the number of variables rather than exponentially. This logic is based on the fundamental assumption underlying the HDMR representation that only low-order correlations among the input variables are likely to have significant impacts upon the outputs for most high-dimensional complex systems. The proposed method is first illustrated for multi-parameter nonlinear mathematical test functions with fuzzy variables. The method is then integrated with a commercial finite element software (ADINA). Modal analysis of a simplified aircraft wing with fuzzy parameters has been used to illustrate the generality of the proposed approach. In the numerical examples, triangular membership functions have been used and the results have been validated against direct Monte Carlo simulations. It is shown that using the proposed HDMR approach, the number of finite element function calls can be reduced without significantly compromising the accuracy.
High-dimensional quantum cloning and applications to quantum hacking.
Bouchard, Frédéric; Fickler, Robert; Boyd, Robert W; Karimi, Ebrahim
2017-02-01
Attempts at cloning a quantum system result in the introduction of imperfections in the state of the copies. This is a consequence of the no-cloning theorem, which is a fundamental law of quantum physics and the backbone of security for quantum communications. Although perfect copies are prohibited, a quantum state may be copied with maximal accuracy via various optimal cloning schemes. Optimal quantum cloning, which lies at the border of the physical limit imposed by the no-signaling theorem and the Heisenberg uncertainty principle, has been experimentally realized for low-dimensional photonic states. However, an increase in the dimensionality of quantum systems is greatly beneficial to quantum computation and communication protocols. Nonetheless, no experimental demonstration of optimal cloning machines has hitherto been shown for high-dimensional quantum systems. We perform optimal cloning of high-dimensional photonic states by means of the symmetrization method. We show the universality of our technique by conducting cloning of numerous arbitrary input states and fully characterize our cloning machine by performing quantum state tomography on cloned photons. In addition, a cloning attack on a Bennett and Brassard (BB84) quantum key distribution protocol is experimentally demonstrated to reveal the robustness of high-dimensional states in quantum cryptography.
Xu, Chao; Fang, Jian; Shen, Hui; Wang, Yu-Ping; Deng, Hong-Wen
2018-01-25
Extreme phenotype sampling (EPS) is a broadly used design to identify candidate genetic factors contributing to the variation of quantitative traits. By enriching the signals in extreme phenotypic samples, EPS can boost the association power compared to random sampling. Most existing statistical methods for EPS examine the genetic factors individually, even though many quantitative traits have multiple genetic factors underlying their variation. It is desirable to model the joint effects of genetic factors, which may increase the power and identify novel quantitative trait loci under EPS. The joint analysis of genetic data in high-dimensional situations requires specialized techniques, e.g., the least absolute shrinkage and selection operator (LASSO). Although there is extensive research and application related to LASSO, statistical inference and testing for the sparse model under EPS remain unexplored. We propose a novel sparse model (EPS-LASSO) with a hypothesis test for high-dimensional regression under EPS based on a decorrelated score function. Comprehensive simulation shows that EPS-LASSO outperforms existing methods, with stable type I error and FDR control. EPS-LASSO provides consistent power for both low- and high-dimensional situations compared with the other methods dealing with high-dimensional situations. The power of EPS-LASSO is close to that of other low-dimensional methods when the causal effect sizes are small, and is superior when the effects are large. Applying EPS-LASSO to a transcriptome-wide gene expression study of obesity reveals 10 significant body mass index-associated genes. Our results indicate that EPS-LASSO is an effective method for EPS data analysis that can account for correlated predictors. The source code is available at https://github.com/xu1912/EPSLASSO. Contact: hdeng2@tulane.edu. Supplementary data are available at Bioinformatics online. © The Author (2018). Published by Oxford University Press. All rights reserved.
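The sparse estimator that EPS-LASSO builds on can be sketched with plain coordinate descent and soft-thresholding. This is the generic LASSO, not the decorrelated-score test itself, and all names are ours.

```python
import numpy as np

def lasso_cd(X, y, lam, n_iter=200):
    """LASSO by cyclic coordinate descent with soft-thresholding.

    Minimises (1/2n)||y - Xb||^2 + lam * ||b||_1. A bare-bones sketch:
    no intercept, and X is assumed column-standardised.
    """
    n, p = X.shape
    beta = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0) / n
    for _ in range(n_iter):
        for j in range(p):
            # partial residual excluding coordinate j's current contribution
            r_j = y - X @ beta + X[:, j] * beta[j]
            rho = X[:, j] @ r_j / n
            # soft-threshold: small correlations are shrunk exactly to zero
            beta[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_sq[j]
    return beta
```

On data where only one predictor carries signal, the fit recovers that coefficient (shrunk by roughly `lam`) and zeroes out the rest, which is the sparsity EPS-LASSO exploits.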
The cross-validated AUC for MCP-logistic regression with high-dimensional data.
Jiang, Dingfeng; Huang, Jian; Zhang, Ying
2013-10-01
We propose a cross-validated area under the receiver operating characteristic (ROC) curve (CV-AUC) criterion for tuning parameter selection for penalized methods in sparse, high-dimensional logistic regression models. We use this criterion in combination with the minimax concave penalty (MCP) method for variable selection. The CV-AUC criterion is specifically designed to optimize the classification performance for binary outcome data. To implement the proposed approach, we derive an efficient coordinate descent algorithm to compute the MCP-logistic regression solution surface. Simulation studies are conducted to evaluate the finite-sample performance of the proposed method and to compare it with existing methods, including the Akaike information criterion (AIC), the Bayesian information criterion (BIC), and the Extended BIC (EBIC). The model selected based on the CV-AUC criterion tends to have a larger predictive AUC and smaller classification error than those with tuning parameters selected using the AIC, BIC, or EBIC. We illustrate the application of MCP-logistic regression with the CV-AUC criterion on three microarray datasets from studies that attempt to identify genes related to cancers. Our simulation studies and data examples demonstrate that CV-AUC is an attractive method for tuning parameter selection for penalized methods in high-dimensional logistic regression models.
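The AUC at the heart of the CV-AUC criterion is the Mann-Whitney statistic, which the criterion averages over cross-validation folds for each candidate tuning parameter; a minimal sketch, with names ours.

```python
import numpy as np

def auc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney U statistic:
    the probability that a random positive case scores higher than a
    random negative case (ties count half)."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels)
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    wins = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (len(pos) * len(neg))
```

Tuning then amounts to computing this quantity on each held-out fold for every candidate penalty value and picking the value with the largest average.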
Multi-diversity combining and selection for relay-assisted mixed RF/FSO system
Chen, Li; Wang, Weidong
2017-12-01
We propose and analyze multi-diversity combining and selection to enhance the performance of a relay-assisted mixed radio frequency/free-space optics (RF/FSO) system. We focus on a practical scenario for cellular networks in which a single-antenna source communicates with a multi-aperture destination through a relay equipped with multiple receive antennas and multiple transmit apertures. The RF single input multiple output (SIMO) links employ either maximal-ratio combining (MRC) or receive antenna selection (RAS), and the FSO multiple input multiple output (MIMO) links adopt either repetition coding (RC) or transmit laser selection (TLS). The performance is evaluated via an outage probability analysis over Rayleigh fading RF links and Gamma-Gamma atmospheric turbulence FSO links with pointing errors, where a channel state information (CSI) assisted amplify-and-forward (AF) scheme is considered. Asymptotic closed-form expressions at high signal-to-noise ratio (SNR) are also derived. Coding gain and diversity order for the different combining and selection schemes are further discussed. Numerical results are provided to verify and illustrate the analytical results.
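The gap between MRC and RAS can be illustrated with a quick Monte Carlo over Rayleigh fading: MRC adds the branch SNRs while selection keeps only the best branch, so MRC's outage can never be worse. A toy sketch with unit-mean branch gains; the parameters are ours, not the paper's system model.

```python
import numpy as np

rng = np.random.default_rng(42)

# squared channel gains |h|^2 of i.i.d. Rayleigh branches are
# exponentially distributed (unit mean here); 4 receive branches
gains = rng.exponential(1.0, size=(100_000, 4))

snr_mrc = gains.sum(axis=1)   # maximal-ratio combining adds branch SNRs
snr_ras = gains.max(axis=1)   # antenna selection keeps only the best branch

def outage(snr, threshold):
    """Empirical outage probability: fraction of fades below threshold."""
    return float(np.mean(snr < threshold))
```

Since the sum of nonnegative gains always dominates their maximum, the MRC outage curve sits below the RAS curve at every threshold; the two schemes share the same diversity order but differ in coding gain.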
International Nuclear Information System (INIS)
Steude, J.; Tucker, B.
1991-01-01
The remediation of sites containing radioactive and mixed wastes is in a period of rapid growth. The state of the art of remediation is progressing to handle the shortcomings of conventional pump-and-treat or disposal technologies. The objective of this paper is to review the status of selected innovative technologies which treat soils contaminated with radioactive and mixed waste. Technologies are generally classified as innovative if they are fully developed, but lack sufficient cost or performance data for comparison with conventional technologies. The Environmental Protection Agency recommends inclusion of innovative technologies in the RI/FS screening process if there is reason to believe that they would offer advantages in performance, implementability, cost, etc. This paper serves as a compilation of the pertinent information necessary to gain an overview of the selected innovative technologies to aid in the RI/FS screening process. The innovative technologies selected for evaluation are listed below. Bioremediation, although innovative, was not included due to the combination of the vast amount of literature on this subject and the limited scope of this project. 1. Soil washing and flushing; 2. Low temperature thermal treatment; 3. Electrokinetics; 4. Infrared incineration; 5. Ultrasound; 6. In situ vitrification; 7. Soil vapor extraction; 8. Plasma torch slagging; 9. In situ hot air/steam extraction; 10. Cyclone reactor treatment; 11. In situ radio frequency; 12. Vegetative radionuclide uptake; and 13. In situ soil heating. The information provided on each technology includes a technical description, status, summary of results including types of contaminants and soils treated, technical effectiveness, feasibility and estimated cost
Zhang, Yu; Wu, Jianxin; Cai, Jianfei
2016-05-01
In large-scale visual recognition and image retrieval tasks, feature vectors such as the Fisher vector (FV) or the vector of locally aggregated descriptors (VLAD) have achieved state-of-the-art results. However, the combination of large numbers of examples and high-dimensional vectors necessitates dimensionality reduction, in order to bring storage and CPU costs into a reasonable range. In spite of the popularity of various feature compression methods, this paper shows that feature (dimension) selection is a better choice for high-dimensional FV/VLAD than feature (dimension) compression methods such as product quantization. We show that strong correlation among the feature dimensions in the FV and the VLAD may not exist, which makes feature selection a natural choice. We also show that many dimensions in FV/VLAD are noise: throwing them away using feature selection is better than compressing them together with the useful dimensions using feature compression methods. To choose features, we propose an efficient importance sorting algorithm that covers both the supervised and unsupervised cases, for visual recognition and image retrieval, respectively. Combined with 1-bit quantization, feature selection achieves both higher accuracy and lower computational cost than feature compression methods, such as product quantization, on the FV and VLAD image representations.
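A generic stand-in for the supervised importance-sorting step is a Fisher-style per-dimension score (between-class over within-class variance), sorted to keep the top k. This scoring rule is our illustration, not necessarily the paper's exact algorithm.

```python
import numpy as np

def select_dimensions(X, y, k):
    """Rank feature dimensions by a Fisher-style score
    (between-class variance over within-class variance) and
    return the indices of the k most informative dimensions."""
    classes = np.unique(y)
    overall = X.mean(axis=0)
    between = np.zeros(X.shape[1])
    within = np.zeros(X.shape[1])
    for c in classes:
        Xc = X[y == c]
        between += len(Xc) * (Xc.mean(axis=0) - overall) ** 2
        within += ((Xc - Xc.mean(axis=0)) ** 2).sum(axis=0)
    score = between / (within + 1e-12)          # avoid division by zero
    return np.argsort(score)[::-1][:k]          # top-k, most informative first
```

Dimensions that are pure noise get near-zero scores and are dropped, which is exactly the behaviour the paper argues makes selection preferable to compressing noisy and useful dimensions together.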
Directory of Open Access Journals (Sweden)
Heleen L. P. Mees
2014-06-01
Full Text Available Policy instruments can help put climate adaptation plans into action. Here, we propose a method for the systematic assessment and selection of policy instruments for stimulating adaptation action. The multi-disciplinary set of six assessment criteria is derived from economics, policy, and legal studies. These criteria are specified for the purpose of climate adaptation by taking into account four challenges to the governance of climate adaptation: uncertainty, spatial diversity, controversy, and social complexity. The six criteria and four challenges are integrated into a step-wise method that enables the selection of instruments, starting from a generic assessment and ending with a specific assessment of policy instrument mixes for the stimulation of a specific adaptation measure. We then apply the method to three examples of adaptation measures. The method's merits lie in enabling deliberate choices through a holistic and comprehensive set of adaptation-specific criteria, as well as deliberative choices by offering a step-wise method that structures an informed dialog on instrument selection. Although the method was created and applied by scientific experts, policy-makers can also use it.
Light alkane (mixed feed) selective dehydrogenation using a bi-metallic zeolite-supported catalyst
Directory of Open Access Journals (Sweden)
Zeeshan Nawaz
2009-12-01
Full Text Available Light alkanes are important intermediates of many refinery processes, and their catalytic dehydrogenation gives the corresponding alkenes. The aim of this experimentation is to investigate the reaction behavior of mixed alkanes during direct catalytic dehydrogenation, with emphasis on enhancing propene. The bi-metallic zeolite-supported catalyst Pt-Sn/ZSM-5 was prepared by a sequential impregnation method and characterized by BET, EDS and XRD. The direct dehydrogenation reaction is highly endothermic and its conversion is thermodynamically limited. Results showed that increasing the temperature increases the conversion to some extent but has no overall effect on the selectivity of propene. Increasing the time-on-stream (TOS) remarkably improves propene selectivity at the expense of lower conversion. The performance of the bi-metallic zeolite-based catalyst is largely affected by coke deposition. The presence of butane and ethane adversely affected propane conversion. The optimum propene selectivity is about 48%, obtained at 600 °C and a time-on-stream of 10 h.
Optimising the selection of food items for FFQs using Mixed Integer Linear Programming.
Gerdessen, Johanna C; Souverein, Olga W; van 't Veer, Pieter; de Vries, Jeanne Hm
2015-01-01
To support the selection of food items for FFQs in such a way that the amount of information on all relevant nutrients is maximised while the food list is as short as possible, the selection of the most informative food items to be included in FFQs was modelled as a Mixed Integer Linear Programming (MILP) model. The methodology was demonstrated for an FFQ with interest in energy, total protein, total fat, saturated fat, monounsaturated fat, polyunsaturated fat, total carbohydrates, mono- and disaccharides, dietary fibre and potassium. The food lists generated by the MILP model perform well in terms of length, coverage and R² (explained variance) of all nutrients. MILP-generated food lists were 32-40% shorter than a benchmark food list, whereas their quality in terms of R² was similar to that of the benchmark. The results suggest that the MILP model makes the selection process faster, more standardised and transparent, and is especially helpful in coping with multiple nutrients. The complexity of the method does not increase with an increasing number of nutrients. The generated food lists appear either shorter or provide more information than a food list generated without the MILP model.
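The flavor of the selection problem can be shown in miniature: choose the fewest items that jointly cover all nutrients of interest. The real model is a MILP with coverage and R² terms; below we brute-force a toy set-cover instance instead, with made-up items and coverage sets.

```python
from itertools import combinations

# toy data: which nutrients each candidate food item informs about
coverage = {
    "bread":  {"energy", "carbohydrate", "fibre"},
    "cheese": {"energy", "fat", "protein"},
    "apple":  {"carbohydrate", "fibre", "potassium"},
    "fish":   {"protein", "fat"},
    "banana": {"potassium", "carbohydrate"},
}
nutrients = {"energy", "carbohydrate", "fibre", "fat", "protein", "potassium"}

def shortest_food_list(coverage, nutrients):
    """Smallest set of items jointly covering all nutrients
    (exhaustive search; a MILP solver does this at scale)."""
    items = sorted(coverage)
    for size in range(1, len(items) + 1):
        for combo in combinations(items, size):
            covered = set().union(*(coverage[i] for i in combo))
            if nutrients <= covered:
                return set(combo)
    return None
```

Exhaustive search is exponential in the number of items, which is why the paper formulates the full problem (with continuous nutrient-information measures rather than binary coverage) as a MILP and hands it to a solver.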
A novel algorithm of artificial immune system for high-dimensional function numerical optimization
Institute of Scientific and Technical Information of China (English)
DU Haifeng; GONG Maoguo; JIAO Licheng; LIU Ruochen
2005-01-01
Based on clonal selection theory and immune memory theory, a novel artificial immune system algorithm, the immune memory clonal programming algorithm (IMCPA), is put forward. Using Markov chain theory, IMCPA is proved to be convergent. Compared with other evolutionary programming algorithms (such as the breeder genetic algorithm), IMCPA is shown to be an evolutionary strategy capable of solving complex machine learning tasks, such as high-dimensional function optimization; it maintains the diversity of the population, avoids premature convergence to some extent, and has a higher convergence speed.
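A generic clonal-selection loop (CLONALG-style, not IMCPA itself) looks as follows: clone the better antibodies, mutate the clones (worse individuals mutate harder), and keep a clone only if it improves on its parent. All parameter values here are ours.

```python
import numpy as np

rng = np.random.default_rng(1)

def clonal_selection(f, dim, pop=20, n_clones=5, sigma=0.3, iters=100):
    """Minimise f by cloning and mutating antibodies; better
    (lower-f) individuals receive smaller mutations, mimicking
    affinity-proportional hypermutation."""
    X = rng.uniform(-5, 5, size=(pop, dim))
    for _ in range(iters):
        fit = np.array([f(x) for x in X])
        X = X[np.argsort(fit)]                    # rank antibodies by affinity
        for rank in range(pop):
            step = sigma * (1 + rank)             # worse rank -> larger jumps
            clones = X[rank] + rng.normal(0, step, size=(n_clones, dim))
            vals = np.array([f(c) for c in clones])
            best = clones[vals.argmin()]
            if f(best) < f(X[rank]):
                X[rank] = best                    # keep only improved clones
    fit = np.array([f(x) for x in X])
    return X[fit.argmin()], float(fit.min())

sphere = lambda x: float((x ** 2).sum())          # simple test objective
```

Because an antibody is replaced only by a better clone, the best fitness in the population is non-increasing, so the loop steadily descends on the test function.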
High-dimensional data: p >> n in mathematical statistics and bio-medical applications
Van De Geer, Sara A.; Van Houwelingen, Hans C.
2004-01-01
The workshop 'High-dimensional data: p >> n in mathematical statistics and bio-medical applications' was held at the Lorentz Center in Leiden from 9 to 20 September 2002. This special issue of Bernoulli contains a selection of papers presented at that workshop. The introduction of high-throughput micro-array technology to measure gene-expression levels and the publication of the pioneering paper by Golub et al. (1999) has brought to life a whole new branch of data analysis under the name of...
Hawking radiation of a high-dimensional rotating black hole
Energy Technology Data Exchange (ETDEWEB)
Zhao, Ren; Zhang, Lichun; Li, Huaifan; Wu, Yueqin [Shanxi Datong University, Institute of Theoretical Physics, Department of Physics, Datong (China)
2010-01-15
We extend the classical Damour-Ruffini method and discuss the Hawking radiation spectrum of a high-dimensional rotating black hole using a tortoise coordinate transformation defined by taking the reaction of the radiation on the spacetime into consideration. Under the condition that energy and angular momentum are conserved, and taking the self-gravitation action into account, we derive Hawking radiation spectra that satisfy the unitarity principle of quantum mechanics. It is shown that the process by which the black hole radiates particles with energy ω is a continuous tunneling process. We provide a theoretical basis for further study of the physical mechanism of black-hole radiation. (orig.)
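Schematically, tunneling treatments of this kind relate the emission rate of a quantum of energy ω (and angular momentum j) to the change in Bekenstein-Hawking entropy; the following Parikh-Wilczek-type relation is a standard sketch consistent with the conservation laws and unitarity invoked above, not a formula quoted from this paper:

```latex
\Gamma(\omega) \;\sim\; e^{\Delta S_{\mathrm{BH}}},
\qquad
\Delta S_{\mathrm{BH}} \;=\; S_{\mathrm{BH}}(M-\omega,\,J-j) \;-\; S_{\mathrm{BH}}(M,\,J),
```

Because the entropy change depends on ω, the emission spectrum deviates from an exactly thermal one, which is what leaves room for unitarity to be preserved across successive emissions.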
The additive hazards model with high-dimensional regressors
DEFF Research Database (Denmark)
Martinussen, Torben; Scheike, Thomas
2009-01-01
This paper considers estimation and prediction in the Aalen additive hazards model in the case where the covariate vector is high-dimensional, such as gene expression measurements. Some form of dimension reduction of the covariate space is needed to obtain useful statistical analyses. We study... model. A standard PLS algorithm can also be constructed, but it turns out that the resulting predictor can only be related to the original covariates via time-dependent coefficients. The methods are applied to a breast cancer data set with gene expression recordings and to the well-known primary biliary...
High-dimensional quantum channel estimation using classical light
CSIR Research Space (South Africa)
Mabena, Chemist M
2017-11-01
Full Text Available PHYSICAL REVIEW A 96, 053860 (2017). High-dimensional quantum channel estimation using classical light. Chemist M. Mabena, CSIR National Laser Centre, P.O. Box 395, Pretoria 0001, South Africa and School of Physics, University of the Witwatersrand, Johannesburg 2000, South...
Data analysis in high-dimensional sparse spaces
DEFF Research Database (Denmark)
Clemmensen, Line Katrine Harder
classification techniques for high-dimensional problems are presented: sparse discriminant analysis, sparse mixture discriminant analysis, and orthogonality-constrained support vector machines. The first two introduce sparseness to the well-known linear and mixture discriminant analysis and thereby provide low... are applied to classifications of fish species, ear canal impressions used in the hearing aid industry, microbiological fungi species, and various cancerous tissues and healthy tissues. In addition, novel applications of sparse regressions (also called the elastic net) to the medical, concrete, and food...
Selective effects of alpha interferon on human T-lymphocyte subsets during mixed lymphocyte cultures
DEFF Research Database (Denmark)
Hokland, M; Hokland, P; Heron, I
1983-01-01
Mixed lymphocyte reaction (MLR) cultures of human lymphocyte subsets, with or without the addition of physiological doses of human alpha interferon (IFN-alpha), were compared with respect to surface marker phenotypes and proliferative capacities of the responder cells. A selective depression on the T... T4 cells and decreased numbers of T4 cells harvested from IFN MLRs (days 5-6 of culture). In contrast, it was shown that the T8 (cytotoxic/suppressor) subset in MLRs was either not affected or slightly stimulated by the addition of IFN. The depression of the T4 cells by IFN was accompanied... by a decrease in the number of activated T cells expressing Ia antigens. On the other hand, IFN MLRs contained greater numbers of cells expressing the T10 differentiation antigen. In experiments with purified T-cell subsets, the IFN effect was exerted directly on the T4 cells and was not mediated by either T8...
Huang, Lin
2014-08-01
A new mixed-ligand zeolitic imidazolate framework, Zn4(2-mbIm)3(bIm)5·4H2O (named JUC-160; 2-mbIm = 2-methylbenzimidazole, bIm = benzimidazole, JUC = Jilin University China), was synthesized by a solvothermal reaction of Zn(NO3)2·6H2O, bIm and 2-mbIm in DMF solution at 180 °C. Topological analysis indicated that JUC-160 has a zeolite GIS (gismondine) topology. Study of the gas adsorption and thermal and chemical stability of JUC-160 demonstrated its selective adsorption of carbon dioxide, high thermal stability, and remarkable chemical resistance to boiling alkaline water and organic solvent for up to one week. © 2014 Elsevier B.V.
Effect of correlation on covariate selection in linear and nonlinear mixed effect models.
Bonate, Peter L
2017-01-01
The effect of correlation among covariates on covariate selection was examined with linear and nonlinear mixed effect models. Demographic covariates were extracted from the National Health and Nutrition Examination Survey III database. Concentration-time profiles were Monte Carlo simulated in which only one covariate affected apparent oral clearance (CL/F). A series of univariate covariate population pharmacokinetic models was fit to the data and compared with the reduced model without the covariate. The "best" covariate was identified using either the likelihood ratio test statistic or the AIC. Weight and body surface area (calculated using the Gehan and George equation, 1970) were highly correlated (r = 0.98). Body surface area was often selected as a better covariate than weight, sometimes as often as 1 in 5 times, when weight was the covariate used in the data-generating mechanism. In a second simulation, parent drug concentration and three metabolites were simulated from a thorough QT study and used as covariates in a series of univariate linear mixed effects models of ddQTc interval prolongation. The covariate with the largest significant LRT statistic was deemed the "best" predictor. When the metabolite was formation-rate limited and only parent concentrations affected ddQTc intervals, the metabolite was chosen as the better predictor as often as 1 in 5 times, depending on the slope of the relationship between parent concentrations and ddQTc intervals. A correlated covariate can thus be chosen as a better predictor than the true covariate in a linear or nonlinear population analysis by sheer correlation. These results explain why different covariates may be identified for the same drug in different analyses. Copyright © 2016 John Wiley & Sons, Ltd.
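The selection artefact can be reproduced in miniature: generate the response from weight alone, add a body-surface-area proxy that is almost perfectly correlated with weight, and count how often the wrong covariate wins a univariate comparison. A toy linear version using simple-regression R²; all numbers are ours, not the paper's simulation settings.

```python
import numpy as np

rng = np.random.default_rng(7)

def r_squared(x, y):
    """R^2 of a simple linear regression of y on x."""
    r = np.corrcoef(x, y)[0, 1]
    return r ** 2

n, n_sim = 50, 500
wins_for_wrong_covariate = 0
for _ in range(n_sim):
    weight = rng.normal(70, 10, n)
    bsa = 0.02 * weight + rng.normal(0, 0.03, n)   # proxy, r ~ 0.99 with weight
    y = 0.05 * weight + rng.normal(0, 1, n)        # data truly driven by weight
    # univariate comparison: which covariate explains y better in this sample?
    if r_squared(bsa, y) > r_squared(weight, y):
        wins_for_wrong_covariate += 1
pick_rate = wins_for_wrong_covariate / n_sim
```

With near-perfect collinearity, the proxy wins a substantial fraction of samples purely by chance, which is the mechanism behind the "1 in 5 times" figures in the abstract.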
Evaluating the effects of compaction of hot mix asphalt on selected laboratory tests
CSIR Research Space (South Africa)
Kekana, SL
2008-07-01
Full Text Available of the gyratory-prepared samples for the standard laboratory design mix and the field-prepared samples, while the short-term aged (SA) sample shows rut rates similar to those of the field-compacted samples. Early failure was evident after 2 000 wheel passes for the short... The laboratory design mix is represented by the short-term aged mix and the design mix (fresh mix in the laboratory). The types of mix discussed in this study are summarised in Tables 1 and 2 and Figure 1. Detailed information about the mix is discussed in Denneman...
Scalable Nearest Neighbor Algorithms for High Dimensional Data.
Muja, Marius; Lowe, David G
2014-11-01
For many computer vision and machine learning problems, large training sets are key for good performance. However, the most computationally expensive part of many computer vision and machine learning algorithms consists of finding nearest neighbor matches to high dimensional vectors that represent the training data. We propose new algorithms for approximate nearest neighbor matching and evaluate and compare them with previous algorithms. For matching high dimensional features, we find two algorithms to be the most efficient: the randomized k-d forest and a new algorithm proposed in this paper, the priority search k-means tree. We also propose a new algorithm for matching binary features by searching multiple hierarchical clustering trees and show it outperforms methods typically used in the literature. We show that the optimal nearest neighbor algorithm and its parameters depend on the data set characteristics and describe an automated configuration procedure for finding the best algorithm to search a particular data set. In order to scale to very large data sets that would otherwise not fit in the memory of a single machine, we propose a distributed nearest neighbor matching framework that can be used with any of the algorithms described in the paper. All this research has been released as an open source library called fast library for approximate nearest neighbors (FLANN), which has been incorporated into OpenCV and is now one of the most popular libraries for nearest neighbor matching.
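As a point of reference for the tree-based matchers discussed above, here is a minimal exact k-d tree nearest-neighbour search in stdlib Python. It is only the simplest relative of FLANN's randomized k-d forests and priority search k-means trees; the randomization, priority queues, and approximation machinery that make those methods scale are omitted.

```python
import random

def dist2(p, q):
    """Squared Euclidean distance."""
    return sum((a - b) ** 2 for a, b in zip(p, q))

def build_kdtree(points, depth=0):
    """Recursively build a k-d tree: split on axis = depth mod k at the median point."""
    if not points:
        return None
    axis = depth % len(points[0])
    points = sorted(points, key=lambda p: p[axis])
    mid = len(points) // 2
    return {"point": points[mid],
            "left": build_kdtree(points[:mid], depth + 1),
            "right": build_kdtree(points[mid + 1:], depth + 1)}

def nearest(node, query, depth=0, best=None):
    """Exact nearest-neighbour search with branch pruning."""
    if node is None:
        return best
    axis = depth % len(query)
    point = node["point"]
    if best is None or dist2(point, query) < dist2(best, query):
        best = point
    diff = query[axis] - point[axis]
    near, far = (node["left"], node["right"]) if diff < 0 else (node["right"], node["left"])
    best = nearest(near, query, depth + 1, best)
    if diff * diff < dist2(best, query):  # can the far side still hold a closer point?
        best = nearest(far, query, depth + 1, best)
    return best

random.seed(1)
pts = [tuple(random.random() for _ in range(3)) for _ in range(500)]
tree = build_kdtree(pts)
query = (0.5, 0.5, 0.5)
nn = nearest(tree, query)
print(nn == min(pts, key=lambda p: dist2(p, query)))  # True: agrees with brute force
```

In high dimensions the pruning test fails more and more often, so nearly the whole tree is visited; that degradation is what motivates the approximate, randomized variants the paper evaluates.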
High-Dimensional Adaptive Particle Swarm Optimization on Heterogeneous Systems
International Nuclear Information System (INIS)
Wachowiak, M P; Sarlo, B B; Foster, A E Lambe
2014-01-01
Much work has recently been reported in parallel GPU-based particle swarm optimization (PSO). Motivated by the encouraging results of these investigations, while also recognizing the limitations of GPU-based methods for big problems using a large amount of data, this paper explores the efficacy of employing other types of parallel hardware for PSO. Most commodity systems feature a variety of architectures whose high-performance capabilities can be exploited. In this paper, high-dimensional problems and those that employ a large amount of external data are explored within the context of heterogeneous systems. Large problems are decomposed into constituent components, and analyses are undertaken of which components would benefit from multi-core or GPU parallelism. The current study therefore provides another demonstration that "supercomputing on a budget" is possible when subtasks of large problems are run on hardware most suited to these tasks. Experimental results show that large speedups can be achieved on high-dimensional, data-intensive problems. Cost functions must first be analysed for parallelization opportunities, and assigned hardware based on the particular task.
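A serial reference version of global-best PSO helps locate the parallelism the paper exploits: the per-particle cost-function evaluations in the inner loop are independent and are exactly the part one would farm out to GPU or multi-core hardware. This stdlib Python sketch uses conventional default parameters (not values from the paper) and minimizes the sphere function.

```python
import random

def pso(f, dim, n_particles=30, iters=200, lo=-5.0, hi=5.0,
        w=0.7, c1=1.5, c2=1.5, seed=2):
    """Minimal global-best particle swarm optimization (serial reference version)."""
    rng = random.Random(seed)
    X = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    pbest = [x[:] for x in X]
    pbest_val = [f(x) for x in X]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):  # these evaluations are embarrassingly parallel
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                V[i][d] = (w * V[i][d]
                           + c1 * r1 * (pbest[i][d] - X[i][d])
                           + c2 * r2 * (gbest[d] - X[i][d]))
                X[i][d] += V[i][d]
            val = f(X[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = X[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = X[i][:], val
    return gbest, gbest_val

sphere = lambda x: sum(v * v for v in x)  # convex toy cost function
best, val = pso(sphere, dim=10)
print(val)  # best sphere value found (should be small)
```

The heterogeneous decomposition in the paper amounts to profiling which of these loops (cost evaluation, velocity update, data access) dominates, and assigning each to the hardware that suits it.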
High-dimensional single-cell cancer biology.
Irish, Jonathan M; Doxie, Deon B
2014-01-01
Cancer cells are distinguished from each other and from healthy cells by features that drive clonal evolution and therapy resistance. New advances in high-dimensional flow cytometry make it possible to systematically measure mechanisms of tumor initiation, progression, and therapy resistance on millions of cells from human tumors. Here we describe flow cytometry techniques that enable a "single-cell" view of cancer. High-dimensional techniques like mass cytometry enable multiplexed single-cell analysis of cell identity, clinical biomarkers, signaling network phospho-proteins, transcription factors, and functional readouts of proliferation, cell cycle status, and apoptosis. This capability pairs well with a signaling profiles approach that dissects mechanism by systematically perturbing and measuring many nodes in a signaling network. Single-cell approaches enable study of cellular heterogeneity of primary tissues and turn cell subsets into experimental controls or opportunities for new discovery. Rare populations of stem cells or therapy-resistant cancer cells can be identified and compared to other types of cells within the same sample. In the long term, these techniques will enable tracking of minimal residual disease (MRD) and disease progression. By better understanding biological systems that control development and cell-cell interactions in healthy and diseased contexts, we can learn to program cells to become therapeutic agents or target malignant signaling events to specifically kill cancer cells. Single-cell approaches that provide deep insight into cell signaling and fate decisions will be critical to optimizing the next generation of cancer treatments combining targeted approaches and immunotherapy.
Construction of PAH-degrading mixed microbial consortia by induced selection in soil.
Zafra, German; Absalón, Ángel E; Anducho-Reyes, Miguel Ángel; Fernandez, Francisco J; Cortés-Espinosa, Diana V
2017-04-01
Bioremediation of soils contaminated with polycyclic aromatic hydrocarbons (PAHs) through biostimulation and bioaugmentation can be a strategy for the clean-up of oil spills and environmental accidents. In this work, an induced microbial selection method using PAH-polluted soils was successfully used to construct two microbial consortia exhibiting high degradation levels of low and high molecular weight PAHs. Six fungal and seven bacterial native strains were used to construct mixed consortia able to tolerate high amounts of phenanthrene (Phe), pyrene (Pyr) and benzo(a)pyrene (BaP) and to utilize these compounds as a sole carbon source. In addition, we used two engineered PAH-degrading fungal strains producing heterologous ligninolytic enzymes. After a preliminary selection using microbial antagonism tests, the selection was performed in microcosm systems and monitored using PCR-DGGE, CO2 evolution and PAH quantitation. The resulting consortia (i.e., C1 and C2) were able to degrade up to 92% of Phe, 64% of Pyr and 65% of BaP out of 1000 mg kg⁻¹ of a mixture of Phe, Pyr and BaP (1:1:1) after a two-week incubation. The results indicate that the constructed microbial consortia have high potential for soil bioremediation by bioaugmentation and biostimulation and may be effective for the treatment of PAH-polluted sites, owing to their elevated tolerance of aromatic compounds and their capacity to utilize them as an energy source. Copyright © 2016 Elsevier Ltd. All rights reserved.
International Nuclear Information System (INIS)
Ferrada, J.J.; Berry, J.B.
1993-11-01
The US Department of Energy (DOE) Mixed Waste Integrated Program (MWIP) has as one of its tasks the identification of a decision methodology and of key decision criteria for that methodology. The aim of a multicriteria analysis is to provide an instrument for the systematic evaluation of distinct alternative projects. Determination of this methodology will clarify (1) the factors used to evaluate these alternatives, (2) the evaluator's view of the importance of the factors, and (3) the relative value of each alternative. The selected methodology must consider the Comprehensive Environmental Response, Compensation, and Liability Act (CERCLA) decision-making criteria for application to the analysis of technology subsystems developed by the DOE Office of Technology Development. This report contains a compilation of several decision methodologies developed in various national laboratories, institutions, and universities. The purposes of these methodologies may vary, but their core decision attributes are very similar. Six approaches were briefly analyzed; from these six, together with recommendations made by the MWIP technical support group leaders and CERCLA, the final decision methodology was extracted. Slight variations are observed among the methodologies developed by different groups, but most address similar aspects. These common aspects form the core of the methodology suggested in this report for use within MWIP for the selection of technologies. The criteria compiled and developed for this report have been grouped into five categories: (1) process effectiveness, (2) developmental status, (3) life-cycle cost, (4) implementability, and (5) regulatory compliance.
Reducing the Complexity of Genetic Fuzzy Classifiers in Highly-Dimensional Classification Problems
Directory of Open Access Journals (Sweden)
DimitrisG. Stavrakoudis
2012-04-01
This paper introduces the Fast Iterative Rule-based Linguistic Classifier (FaIRLiC), a Genetic Fuzzy Rule-Based Classification System (GFRBCS) that aims to reduce the structural complexity of the resulting rule base, as well as the computational requirements of its learning algorithm, especially when dealing with high-dimensional feature spaces. The proposed methodology follows the principles of the iterative rule learning (IRL) approach, whereby a rule extraction algorithm (REA) is invoked iteratively, producing one fuzzy rule at a time. The REA is performed in two successive steps: the first selects the relevant features of the currently extracted rule, whereas the second decides the antecedent part of the fuzzy rule, using the previously selected subset of features. The performance of the classifier is finally optimized through a genetic tuning post-processing stage. Comparative results in a hyperspectral remote sensing classification task, as well as in 12 real-world classification datasets, indicate the effectiveness of the proposed methodology in generating high-performing and compact fuzzy rule-based classifiers, even for very high-dimensional feature spaces.
Rodrigues, E V; Daher, R F; Dos Santos, A; Vivas, M; Machado, J C; Gravina, G do A; de Souza, Y P; Vidal, A K; Rocha, A Dos S; Freitas, R S
2017-05-18
Brazil has great potential to produce bioenergy, since it is located in a tropical region that receives a high incidence of solar energy and presents favorable climatic conditions for this purpose. However, the use of bioenergy in the country is below its productivity potential. The aim of the current study was to select full-sib progenies and families of elephant grass (Pennisetum purpureum S.) to optimize phenotypes relevant to bioenergy production through mixed models (REML/BLUP). A circulating diallel-based crossing of ten elephant grass genotypes was performed. An experiment in a randomized block design, with three repetitions, was set up to assess both the hybrids and the parents. Each plot comprised 14-m rows, with 1.40 m spacing between rows and 1.40 m spacing between plants. The number of tillers, plant height, culm diameter, fresh biomass production, dry biomass rate, and dry biomass production were assessed. Genetic-statistical analyses were performed through mixed models (REML/BLUP). The genetic variance in the assessed families was explained by additive and dominance genetic effects, with the dominance variance prevalent. Families such as Capim Cana D'África x Guaçu/I.Z.2, Cameroon x Cuba-115, CPAC x Cuba-115, Cameroon x Guaçu/I.Z.2, and IAC-Campinas x CPAC showed the highest dry biomass production. The family derived from the crossing between Cana D'África and Guaçu/I.Z.2 showed the largest number of potential individuals for traits such as plant height, culm diameter, fresh biomass production, dry biomass production, and dry biomass rate. Individual 5 in the family Cana D'África x Guaçu/I.Z.2, planted in blocks 1 and 2, showed the highest dry biomass production.
High-Dimensional Quantum Information Processing with Linear Optics
Fitzpatrick, Casey A.
Quantum information processing (QIP) is an interdisciplinary field concerned with the development of computers and information processing systems that utilize quantum mechanical properties of nature to carry out their function. QIP systems have become vastly more practical since the turn of the century. Today, QIP applications span imaging, cryptographic security, computation, and simulation (quantum systems that mimic other quantum systems). Many important strategies improve quantum versions of classical information system hardware, such as single photon detectors and quantum repeaters. Another more abstract strategy engineers high-dimensional quantum state spaces, so that each successful event carries more information than traditional two-level systems allow. Photonic states in particular bring the added advantages of weak environmental coupling and data transmission near the speed of light, allowing for simpler control and lower system design complexity. In this dissertation, numerous novel, scalable designs for practical high-dimensional linear-optical QIP systems are presented. First, a correlated-photon imaging scheme is reported that uses orbital angular momentum (OAM) states to detect rotational symmetries in objects using measurements, and to build images out of those interactions. Then, a statistical detection method using chains of OAM superpositions distributed according to the Fibonacci sequence is established and expanded upon. It is shown that the approach gives rise to schemes for sorting, detecting, and generating the recursively defined high-dimensional states on which some quantum cryptographic protocols depend. Finally, an ongoing study based on a generalization of the standard optical multiport for applications in quantum computation and simulation is reported upon. The architecture allows photons to reverse momentum inside the device. This in turn enables realistic implementation of controllable linear-optical scattering vertices for
High-dimensional change-point estimation: Combining filtering with convex optimization
Soh, Yong Sheng; Chandrasekaran, Venkat
2017-01-01
We consider change-point estimation in a sequence of high-dimensional signals given noisy observations. Classical approaches to this problem such as the filtered derivative method are useful for sequences of scalar-valued signals, but they have undesirable scaling behavior in the high-dimensional setting. However, many high-dimensional signals encountered in practice frequently possess latent low-dimensional structure. Motivated by this observation, we propose a technique for high-dimensional...
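The filtered derivative baseline the authors start from is easy to state concretely: slide two adjacent windows over the sequence and score each index by the jump in window means, then take the peak. A minimal scalar-valued sketch in stdlib Python (window length and noise level are illustrative choices):

```python
import random

def filtered_derivative(signal, h):
    """Score each index t by |mean of the next h samples - mean of the previous h|."""
    scores = [0.0] * len(signal)
    for t in range(h, len(signal) - h):
        left = sum(signal[t - h:t]) / h
        right = sum(signal[t:t + h]) / h
        scores[t] = abs(right - left)
    return scores

random.seed(3)
# Piecewise-constant signal: the mean jumps from 0 to 2 at index 100.
x = ([random.gauss(0.0, 0.3) for _ in range(100)]
     + [random.gauss(2.0, 0.3) for _ in range(100)])
scores = filtered_derivative(x, h=20)
estimate = max(range(len(scores)), key=lambda t: scores[t])
print(estimate)  # near the true change point at 100
```

Applied coordinate-wise to a high-dimensional sequence, this scales poorly with dimension, which is the behavior the paper's convex-optimization approach is designed to avoid when the signals have latent low-dimensional structure.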
Variance inflation in high dimensional Support Vector Machines
DEFF Research Database (Denmark)
Abrahamsen, Trine Julie; Hansen, Lars Kai
2013-01-01
Many important machine learning models, supervised and unsupervised, are based on simple Euclidean distance or orthogonal projection in a high dimensional feature space. When estimating such models from small training sets we face the problem that the span of the training data set input vectors ... follow a different probability law with less variance. While the problem and basic means to reconstruct and deflate are well understood in unsupervised learning, the case of supervised learning is less well understood. We here investigate the effect of variance inflation in supervised learning including ... the case of Support Vector Machines (SVMs), and we propose a non-parametric scheme to restore proper generalizability. We illustrate the algorithm and its ability to restore performance on a wide range of benchmark data sets.
Applying recursive numerical integration techniques for solving high dimensional integrals
International Nuclear Information System (INIS)
Ammon, Andreas; Genz, Alan; Hartung, Tobias; Jansen, Karl; Volmer, Julia; Leoevey, Hernan
2016-11-01
The error scaling for Markov-Chain Monte Carlo techniques (MCMC) with N samples behaves like 1/√(N). This scaling makes it often very time intensive to reduce the error of computed observables, in particular for applications in lattice QCD. It is therefore highly desirable to have alternative methods at hand which show an improved error scaling. One candidate for such an alternative integration technique is the method of recursive numerical integration (RNI). The basic idea of this method is to use an efficient low-dimensional quadrature rule (usually of Gaussian type) and apply it iteratively to integrate over high-dimensional observables and Boltzmann weights. We present the application of such an algorithm to the topological rotor and the anharmonic oscillator and compare the error scaling to MCMC results. In particular, we demonstrate that the RNI technique shows an error scaling in the number of integration points m that is at least exponential.
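The core idea of applying an efficient low-dimensional quadrature rule iteratively can be sketched with a 3-point Gauss-Legendre rule (the integrand here is a hypothetical toy, not a lattice observable). Note that a naive tensor-product recursion like this costs 3^d evaluations; RNI pays off when the integrand factorizes, as for the Boltzmann weights of the nearest-neighbour models mentioned above, where the recursion collapses into repeated low-dimensional sums.

```python
import math

# 3-point Gauss-Legendre rule mapped from [-1, 1] to [0, 1].
NODES = [0.5 - 0.5 * math.sqrt(3.0 / 5.0), 0.5, 0.5 + 0.5 * math.sqrt(3.0 / 5.0)]
WEIGHTS = [5.0 / 18.0, 8.0 / 18.0, 5.0 / 18.0]

def integrate_recursive(f, dim, fixed=()):
    """Apply the 1D rule dimension by dimension: the inner integral over the
    remaining coordinates is evaluated by the same rule, recursively."""
    if dim == 0:
        return f(fixed)
    return sum(w * integrate_recursive(f, dim - 1, fixed + (x,))
               for x, w in zip(NODES, WEIGHTS))

# Example: integral of prod_i x_i over [0,1]^4 is (1/2)^4 = 0.0625;
# the 3-point rule is exact for polynomials of degree <= 5 per dimension.
val = integrate_recursive(lambda x: math.prod(x), 4)
print(val)  # 0.0625 up to rounding
```

For smooth integrands the error of the underlying rule decays much faster than the 1/sqrt(N) of MCMC, which is the error-scaling advantage the abstract reports.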
Network Reconstruction From High-Dimensional Ordinary Differential Equations.
Chen, Shizhe; Shojaie, Ali; Witten, Daniela M
2017-01-01
We consider the task of learning a dynamical system from high-dimensional time-course data. For instance, we might wish to estimate a gene regulatory network from gene expression data measured at discrete time points. We model the dynamical system nonparametrically as a system of additive ordinary differential equations. Most existing methods for parameter estimation in ordinary differential equations estimate the derivatives from noisy observations. This is known to be challenging and inefficient. We propose a novel approach that does not involve derivative estimation. We show that the proposed method can consistently recover the true network structure even in high dimensions, and we demonstrate empirical improvement over competing approaches. Supplementary materials for this article are available online.
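A one-dimensional toy version shows why avoiding derivative estimation helps: for x'(t) = a x(t), integrating both sides gives x(t) - x(0) = a * integral of x from 0 to t, so a can be estimated by regressing differences on cumulative integrals, which smooth the observation noise rather than amplify it. This stdlib Python sketch uses an assumed model, grid, and noise level; the paper's additive nonparametric setting is far more general.

```python
import math
import random

random.seed(4)
a_true = -0.8
ts = [0.05 * i for i in range(101)]
# Noisy observations of the solution x(t) = exp(a*t).
x_noisy = [math.exp(a_true * t) + random.gauss(0.0, 0.01) for t in ts]

# Cumulative trapezoid integral of the observations (smooths the noise).
cumint = [0.0]
for i in range(1, len(ts)):
    dt = ts[i] - ts[i - 1]
    cumint.append(cumint[-1] + 0.5 * dt * (x_noisy[i] + x_noisy[i - 1]))

# Least-squares slope of (x(t) - x(0)) against the cumulative integral.
y = [xi - x_noisy[0] for xi in x_noisy]
a_hat = sum(c * yi for c, yi in zip(cumint, y)) / sum(c * c for c in cumint)
print(a_hat)  # close to a_true = -0.8
```

Estimating a from finite-difference derivatives of the same data would divide the noise by the small step size instead of averaging it away, which is the inefficiency the abstract refers to.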
Quantum correlation of high dimensional system in a dephasing environment
Ji, Yinghua; Ke, Qiang; Hu, Juju
2018-05-01
For a high dimensional spin-S system embedded in a dephasing environment, we theoretically analyze the time evolution of quantum correlation and entanglement via the Frobenius norm and negativity. The quantum correlation dynamics can be considered as a function of the decoherence parameters, including the ratio between the system oscillator frequency ω0 and the reservoir cutoff frequency ωc, and the environment temperature. It is shown that the quantum correlation can not only measure the nonclassical correlation of the considered system, but also exhibits better robustness against dissipation. In addition, the decoherence presents non-Markovian features and the quantum correlation freeze phenomenon. The former is much weaker than that in the sub-Ohmic or Ohmic thermal reservoir environment.
Evaluating Clustering in Subspace Projections of High Dimensional Data
DEFF Research Database (Denmark)
Müller, Emmanuel; Günnemann, Stephan; Assent, Ira
2009-01-01
Clustering high dimensional data is an emerging research field. Subspace clustering or projected clustering group similar objects in subspaces, i.e. projections, of the full space. In the past decade, several clustering paradigms have been developed in parallel, without thorough evaluation and comparison between these paradigms on a common basis. Conclusive evaluation and comparison is challenged by three major issues. First, there is no ground truth that describes the "true" clusters in real world data. Second, a large variety of evaluation measures have been used that reflect different aspects of the clustering result. Finally, in typical publications authors have limited their analysis to their favored paradigm only, while paying other paradigms little or no attention. In this paper, we take a systematic approach to evaluate the major paradigms in a common framework. We study representative clustering...
Applying recursive numerical integration techniques for solving high dimensional integrals
Energy Technology Data Exchange (ETDEWEB)
Ammon, Andreas [IVU Traffic Technologies AG, Berlin (Germany); Genz, Alan [Washington State Univ., Pullman, WA (United States). Dept. of Mathematics; Hartung, Tobias [King' s College, London (United Kingdom). Dept. of Mathematics; Jansen, Karl; Volmer, Julia [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany). John von Neumann-Inst. fuer Computing NIC; Leoevey, Hernan [Humboldt Univ. Berlin (Germany). Inst. fuer Mathematik
2016-11-15
The error scaling for Markov-Chain Monte Carlo techniques (MCMC) with N samples behaves like 1/√(N). This scaling makes it often very time intensive to reduce the error of computed observables, in particular for applications in lattice QCD. It is therefore highly desirable to have alternative methods at hand which show an improved error scaling. One candidate for such an alternative integration technique is the method of recursive numerical integration (RNI). The basic idea of this method is to use an efficient low-dimensional quadrature rule (usually of Gaussian type) and apply it iteratively to integrate over high-dimensional observables and Boltzmann weights. We present the application of such an algorithm to the topological rotor and the anharmonic oscillator and compare the error scaling to MCMC results. In particular, we demonstrate that the RNI technique shows an error scaling in the number of integration points m that is at least exponential.
Reduced order surrogate modelling (ROSM) of high dimensional deterministic simulations
Mitry, Mina
Often, computationally expensive engineering simulations can prohibit the engineering design process. As a result, designers may turn to a less computationally demanding approximate, or surrogate, model to facilitate their design process. However, owing to the curse of dimensionality, classical surrogate models become too computationally expensive for high dimensional data. To address this limitation of classical methods, we develop linear and non-linear Reduced Order Surrogate Modelling (ROSM) techniques. Two algorithms are presented, which are based on a combination of linear/kernel principal component analysis and radial basis functions. These algorithms are applied to subsonic and transonic aerodynamic data, as well as to a model of a chemical spill in a channel. The results of this thesis show that ROSM can provide a significant computational benefit over classical surrogate modelling, sometimes at the expense of a minor loss in accuracy.
Asymptotics of empirical eigenstructure for high dimensional spiked covariance.
Wang, Weichen; Fan, Jianqing
2017-06-01
We derive the asymptotic distributions of the spiked eigenvalues and eigenvectors under a generalized and unified asymptotic regime, which takes into account the magnitude of spiked eigenvalues, sample size, and dimensionality. This regime allows high dimensionality and diverging eigenvalues and provides new insights into the roles that the leading eigenvalues, sample size, and dimensionality play in principal component analysis. Our results are a natural extension of those in Paul (2007) to a more general setting and solve the rates of convergence problems in Shen et al. (2013). They also reveal the biases of estimating leading eigenvalues and eigenvectors by using principal component analysis, and lead to a new covariance estimator for the approximate factor model, called shrinkage principal orthogonal complement thresholding (S-POET), that corrects the biases. Our results are successfully applied to outstanding problems in estimation of risks of large portfolios and false discovery proportions for dependent test statistics and are illustrated by simulation studies.
Yu, Wenbao; Park, Taesung
2014-01-01
It is common to seek an optimal combination of markers for disease classification and prediction when multiple markers are available. Many approaches based on the area under the receiver operating characteristic curve (AUC) have been proposed. Existing works based on AUC in a high-dimensional context depend mainly on a non-parametric, smooth approximation of the AUC, with no work using a parametric AUC-based approach for high-dimensional data. We propose an AUC-based approach using penalized regression (AucPR), which is a parametric method for obtaining a linear combination that maximizes the AUC. To obtain the AUC maximizer in a high-dimensional context, we transform a classical parametric AUC maximizer, which is used in a low-dimensional context, into a regression framework and thus apply the penalized regression approach directly. Two kinds of penalization, lasso and elastic net, are considered. The parametric approach avoids some of the difficulties of a conventional non-parametric AUC-based approach, such as the lack of an appropriate concave objective function and the need for a prudent choice of the smoothing parameter. We apply the proposed AucPR to gene selection and classification using four real microarray datasets and synthetic data. Through numerical studies, AucPR is shown to perform better than penalized logistic regression and the non-parametric AUC-based method, in the sense of AUC and sensitivity for a given specificity, particularly when there are many correlated genes. We propose a powerful, parametric, and easily implementable linear classifier, AucPR, for gene selection and disease prediction for high-dimensional data. AucPR is recommended for its good prediction performance. Besides gene expression microarray data, AucPR can be applied to other types of high-dimensional omics data, such as miRNA and protein data.
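The quantity being maximized is the empirical AUC, i.e. the normalized Mann-Whitney U statistic: the fraction of (positive, negative) score pairs that are ranked correctly. A minimal stdlib Python version of that objective is below; this is only the objective, not the AucPR estimator itself, which replaces the non-smooth pair-counting with a penalized regression formulation.

```python
def auc(scores_pos, scores_neg):
    """Empirical AUC: fraction of (positive, negative) pairs ranked correctly,
    counting ties as one half (Mann-Whitney U, normalized)."""
    wins = 0.0
    for sp in scores_pos:
        for sn in scores_neg:
            if sp > sn:
                wins += 1.0
            elif sp == sn:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# One misranked pair (0.4 vs 0.7) out of 9 pairs.
print(auc([0.9, 0.8, 0.4], [0.7, 0.3, 0.1]))  # 8/9 = 0.888...
```

The indicator step function inside this double loop is why the empirical AUC has no gradient, motivating both the smoothed non-parametric approximations mentioned above and the parametric reformulation of AucPR.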
Paul, Ganesh C.; Saha, Arijit; Das, Sourin
2018-05-01
We theoretically investigate the transport properties of a quasi-one-dimensional ferromagnet-superconductor junction where the superconductor consists of mixed singlet and triplet pairings. We show that the relative orientation of the Stoner field (h̃) in the ferromagnetic lead and the d vector of the superconductor acts like an on-off switch for the zero-bias conductance of the device. In the regime where the triplet pairing amplitude dominates over its singlet counterpart (the topological phase), a pair of Majorana zero modes appears at each end of the superconducting part of the nanowire. When h̃ is parallel or antiparallel to the d vector, transport is completely blocked due to blockage of pairing, while when h̃ and d are perpendicular to each other, the zero-energy two-terminal differential conductance spectrum exhibits a sharp transition from 4e²/h to 2e²/h as the magnetization strength in the lead becomes larger than the chemical potential, indicating the spin-selective coupling of a pair of Majorana zero modes to the lead.
Directory of Open Access Journals (Sweden)
Maria P. Deane
1984-12-01
From an initial double infection in mice, established by simultaneous and equivalent inocula of bloodstream forms of strains Y and F of Trypanosoma cruzi, two lines were derived by subinoculation: one (W) passaged every week, the other (M) every month. Through biological and biochemical methods, only the Y strain was identified at the end of the 10th and 16th passages of line W, and only the F strain at the 2nd and 4th passages of line M. The results illustrate strain selection through laboratory manipulation of initially mixed populations of T. cruzi.
Selective conversion of carbon monoxide to hydrogen by anaerobic mixed culture.
Liu, Yafeng; Wan, Jingjing; Han, Sheng; Zhang, Shicheng; Luo, Gang
2016-02-01
A new method for the conversion of CO to H2 by an anaerobic mixed culture was developed in the current study. A higher CO consumption rate was obtained by anaerobic granular sludge (AGS) compared to waste activated sludge (WAS) at 55 °C and pH 7.5. However, H2 was only an intermediate, and CH4 was the final product. Fermentation at pH 5.5 by AGS inhibited CH4 production, though a lower CO consumption rate (50% of that at pH 7.5) and the production of acetate were observed. Fermentation at pH 7.5 with the addition of chloroform achieved efficient and selective conversion of CO to H2. Stable and efficient H2 production was achieved in a continuous reactor inoculated with AGS, and gas recirculation was crucial to increasing the CO conversion efficiency. Microbial community analysis showed that a high abundance (44%) of unclassified sequences and a low relative abundance (1%) of the known CO-utilizing bacterium Desulfotomaculum were enriched in the reactor. Copyright © 2015 Elsevier Ltd. All rights reserved.
Saini, Harsh; Lal, Sunil Pranit; Naidu, Vimal Vikash; Pickering, Vincel Wince; Singh, Gurmeet; Tsunoda, Tatsuhiko; Sharma, Alok
2016-12-05
A high dimensional feature space generally degrades classification performance in several applications. In this paper, we propose a strategy called gene masking, in which non-contributing dimensions are heuristically removed from the data to improve classification accuracy. Gene masking is implemented via a binary-encoded genetic algorithm that can be integrated seamlessly with classifiers during the training phase of classification to perform feature selection. It can also be used to discriminate between features that contribute most to the classification, thereby allowing researchers to isolate features that may have special significance. This technique was applied on publicly available datasets, whereby it substantially reduced the number of features used for classification while maintaining high accuracies. The proposed technique can be extremely useful in feature selection as it heuristically removes non-contributing features to improve the performance of classifiers.
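A minimal gene-masking loop can be sketched as a binary-encoded genetic algorithm over feature masks. In the stdlib Python sketch below, the fitness function is a stand-in (in the paper it would be the wrapped classifier's accuracy on the masked features), and the truncation selection, one-point crossover, and bit-flip mutation operators are generic choices, not necessarily those of the authors.

```python
import random

rng = random.Random(5)
N_FEATURES = 20
INFORMATIVE = {0, 1, 2}  # assumed ground truth for this toy fitness

def fitness(mask):
    """Toy surrogate for classifier accuracy: reward informative features kept,
    penalize non-contributing features left unmasked."""
    kept = {i for i, bit in enumerate(mask) if bit}
    return len(kept & INFORMATIVE) - 0.1 * len(kept - INFORMATIVE)

def evolve(pop_size=40, gens=60, p_mut=0.05):
    pop = [[rng.randint(0, 1) for _ in range(N_FEATURES)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]            # truncation selection (elitist)
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, N_FEATURES)     # one-point crossover
            child = a[:cut] + b[cut:]
            child = [bit ^ (rng.random() < p_mut) for bit in child]  # bit-flip mutation
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
selected = {i for i, bit in enumerate(best) if bit}
print(selected)  # ideally converges to the informative set {0, 1, 2}
```

Replacing the toy fitness with a cross-validated classifier score turns this skeleton into a wrapper-style feature selector of the kind the abstract describes.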
Arbianti, Rita; Surya Utami, Tania; Leondo, Vifki; Elisabeth; Andyah Putri, Syafira; Hermansyah, Heri
2018-03-01
The Microbial Fuel Cell (MFC) provides a new alternative for the treatment of organic waste. An MFC produces 50-90% less sludge to be disposed of than other methods. MFC technology can utilize microorganisms already present in the waste as a catalyst to generate electricity while simultaneously serving as a wastewater treatment unit itself. Tempeh wastewater is an abundant industrial wastewater that can be processed using MFCs. Research using selective mixed cultures is promising, given the good COD removal obtained by adding mixed cultures. Microorganisms in tempeh wastewater consist of gram-positive and gram-negative bacteria. This study focused on the waste treatment aspect, assessed by the decrease in COD and BOD levels. The variables in this study are the biofilm formation time and the addition of a gram-selective culture. The MFC was operated for 50 hours. For the biofilm formation variation, experiments were performed after incubation by replacing the incubation substrate used in the formation of biofilms. The biofilm formation times in this study were 3, 5, 7 and 14 days. Gram-positive and gram-negative bacteria were used in the selective mixed culture experiments. The selective mixed culture was added to the reactor at 1 mL and 5 mL. Selection of gram-positive or gram-negative bacteria was carried out by growing the mixed culture on selective media. COD and BOD levels were measured in the wastewater before and after the experiment in each variation. The optimum biofilm formation time was 7 days, which decreased COD and BOD levels by 18.2% and 35.9%. The addition of gram-negative bacteria decreased COD and BOD levels by 29.32% and 51.32%. Further research is needed to obtain a better result in decreasing COD and BOD levels.
Geraci, Joseph; Dharsee, Moyez; Nuin, Paulo; Haslehurst, Alexandria; Koti, Madhuri; Feilotter, Harriet E; Evans, Ken
2014-03-01
We introduce a novel method for visualizing high dimensional data via a discrete dynamical system. This method provides a 2D representation of the relationship between subjects according to a set of variables without geometric projections, transformed axes or principal components. The algorithm exploits a memory-type mechanism inherent in a certain class of discrete dynamical systems collectively referred to as the chaos game that are closely related to iterative function systems. The goal of the algorithm was to create a human readable representation of high dimensional patient data that was capable of detecting unrevealed subclusters of patients from within anticipated classifications. This provides a mechanism to further pursue a more personalized exploration of pathology when used with medical data. For clustering and classification protocols, the dynamical system portion of the algorithm is designed to come after some feature selection filter and before some model evaluation (e.g. clustering accuracy) protocol. In the version given here, a univariate features selection step is performed (in practice more complex feature selection methods are used), a discrete dynamical system is driven by this reduced set of variables (which results in a set of 2D cluster models), these models are evaluated for their accuracy (according to a user-defined binary classification) and finally a visual representation of the top classification models are returned. Thus, in addition to the visualization component, this methodology can be used for both supervised and unsupervised machine learning as the top performing models are returned in the protocol we describe here. Butterfly, the algorithm we introduce and provide working code for, uses a discrete dynamical system to classify high dimensional data and provide a 2D representation of the relationship between subjects. We report results on three datasets (two in the article; one in the appendix) including a public lung cancer
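The "memory-type mechanism" refers to the classic chaos-game construction: each symbol pulls the current point halfway toward a fixed corner, so a point's final position encodes the recent history of the sequence that produced it. A minimal stdlib Python sketch over a four-letter alphabet follows (the corner assignment is the conventional one for DNA chaos-game representations, and this is only the iterated-function-system building block, not the Butterfly algorithm itself).

```python
# Corners of the unit square, one per symbol.
CORNERS = {"A": (0.0, 0.0), "C": (0.0, 1.0), "G": (1.0, 1.0), "T": (1.0, 0.0)}

def cgr_point(sequence):
    """Drive the discrete dynamical system: start at the center and move
    halfway toward the corner of each successive symbol."""
    x, y = 0.5, 0.5
    for symbol in sequence:
        cx, cy = CORNERS[symbol]
        x, y = (x + cx) / 2.0, (y + cy) / 2.0
    return x, y

print(cgr_point("ACGT"))  # (0.78125, 0.40625)
```

Because each step halves the influence of all earlier symbols, sequences sharing a recent suffix land in the same small cell of the square; driving such a system with selected patient features, rather than letters, is what yields the 2D cluster models described above.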
Wang, Xing-Chen; Li, Ai-Hua; Dizy, Marta; Ullah, Niamat; Sun, Wei-Xuan; Tao, Yong-Sheng
2017-08-01
To improve the aroma profile of Ecolly dry white wine, simultaneous and sequential inoculations of selected Rhodotorula mucilaginosa and Saccharomyces cerevisiae strains were performed during winemaking. The two yeasts were mixed in various ratios to prepare the mixed inoculum. Volatile levels and aroma characteristics were determined the following year. Mixed fermentation improved both the varietal and fermentative aroma compound composition, especially that of (Z)-3-hexen-1-ol, nerol oxide, and certain acetate and ethyl ester compounds. Citrus, sweet fruit, acid fruit, berry, and floral aroma traits were enhanced by mixed fermentation; however, an animal note was introduced when higher amounts of R. mucilaginosa were used. Aroma traits were regressed on volatiles using partial least-squares regression. Analysis of the correlation coefficients revealed that aroma traits arose from multiple interactions among volatile compounds, with fermentative volatiles having more impact on aroma than varietal compounds. Copyright © 2017 Elsevier Ltd. All rights reserved.
Newton, Paul; Chandler, Val; Morris-Thomson, Trish; Sayer, Jane; Burke, Linda
2015-01-01
To map current selection and recruitment processes for newly qualified nurses and to explore the advantages and limitations of those processes. The need to improve current selection and recruitment practices for newly qualified nurses is highlighted in health policy internationally. A cross-sectional, sequential-explanatory mixed-method design with 4 components: (1) a literature review of the selection and recruitment of newly qualified nurses; (2) a literature review of a public sector profession's selection and recruitment processes; (3) a survey mapping existing selection and recruitment processes for newly qualified nurses; and (4) a qualitative study of recruiters' selection and recruitment processes. Literature searches on the selection and recruitment of newly qualified candidates in teaching and nursing (2005-2013) were conducted. Cross-sectional, mixed-method data were collected, using a survey instrument, from thirty-one (n = 31) individuals responsible for the selection and recruitment of newly qualified nurses at health providers in London. Of the providers who took part, six (n = 6) were purposively selected for qualitative interviews. Issues of supply and demand in the workforce, rather than selection and recruitment tools, predominated in the literature reviews. Examples of tools to measure values, attitudes and skills were found in the nursing literature. The mapping exercise found that providers used many selection and recruitment tools; some combined tools to streamline the process and assure the quality of candidates. Most providers had processes that addressed the issue of quality in the selection and recruitment of newly qualified nurses. The 'assessment centre model', which providers were adopting, allowed for multiple levels of assessment and streamlined recruitment. There is a need to validate the efficacy of the selection tools. © 2014 John Wiley & Sons Ltd.
The validation and assessment of machine learning: a game of prediction from high-dimensional data.
Directory of Open Access Journals (Sweden)
Tune H Pers
In applied statistics, tools from machine learning are popular for analyzing complex and high-dimensional data. However, few theoretical results are available to guide the choice of an appropriate machine learning tool in a new application. Initial development of an overall strategy thus often implies that multiple methods are tested and compared on the same set of data. This is particularly difficult in situations that are prone to over-fitting, where the number of subjects is low compared to the number of potential predictors. The article presents a game which provides some grounds for conducting a fair model comparison. Each player selects a modeling strategy for predicting individual response from potential predictors. A strictly proper scoring rule, bootstrap cross-validation, and a set of rules are used to make the results obtained with different strategies comparable. To illustrate the ideas, the game is applied to data from the Nugenob Study, where the aim is to predict fat oxidation capacity based on conventional factors and high-dimensional metabolomics data. Three players chose to use support vector machines, LASSO, and random forests, respectively.
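A minimal sketch of such a comparison game, using the Brier score (a strictly proper scoring rule) and out-of-bag bootstrap scoring. The prevalence-only "strategy" and the data sizes are placeholders for illustration, not the Nugenob analysis:

```python
import numpy as np

def brier_score(y_true, p_pred):
    """Strictly proper scoring rule for probabilistic binary predictions."""
    return np.mean((p_pred - y_true) ** 2)

def bootstrap_cv_score(X, y, fit_predict, n_boot=50, seed=0):
    """Average out-of-bag Brier score over bootstrap resamples.

    fit_predict(X_train, y_train, X_test) must return predicted
    probabilities for X_test; each resample scores only the subjects
    left out of the bootstrap sample, mimicking new data.
    """
    rng = np.random.default_rng(seed)
    n, scores = len(y), []
    for _ in range(n_boot):
        idx = rng.integers(0, n, size=n)          # bootstrap sample
        oob = np.setdiff1d(np.arange(n), idx)     # out-of-bag subjects
        if oob.size == 0:
            continue
        p = fit_predict(X[idx], y[idx], X[oob])
        scores.append(brier_score(y[oob], p))
    return float(np.mean(scores))

# Hypothetical "strategy": predict the training-set prevalence for everyone.
prevalence = lambda Xtr, ytr, Xte: np.full(len(Xte), ytr.mean())

rng = np.random.default_rng(1)
X = rng.normal(size=(40, 100))    # p >> n, as in the metabolomics setting
y = rng.integers(0, 2, size=40).astype(float)
print(round(bootstrap_cv_score(X, y, prevalence), 3))
```

Any competing strategy (SVM, LASSO, random forest) would simply supply a different `fit_predict`, making the comparison fair in the sense the abstract describes: same resamples, same scoring rule.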
Zhang, Bo; Chen, Zhen; Albert, Paul S
2012-01-01
High-dimensional biomarker data are often collected in epidemiological studies when assessing the association between biomarkers and human disease is of interest. We develop a latent class modeling approach for joint analysis of high-dimensional semicontinuous biomarker data and a binary disease outcome. To model the relationship between complex biomarker expression patterns and disease risk, we use latent risk classes to link the 2 modeling components. We characterize complex biomarker-specific differences through biomarker-specific random effects, so that different biomarkers can have different baseline (low-risk) values as well as different between-class differences. The proposed approach also accommodates data features that are common in environmental toxicology and other biomarker exposure data, including a large number of biomarkers, numerous zero values, and a complex mean-variance relationship in the biomarker levels. A Monte Carlo EM (MCEM) algorithm is proposed for parameter estimation. Both the MCEM algorithm and model selection procedures are shown to work well in simulations and applications. In applying the proposed approach to an epidemiological study that examined the relationship between environmental polychlorinated biphenyl (PCB) exposure and the risk of endometriosis, we identified a highly significant overall effect of PCB concentrations on the risk of endometriosis.
Exposure to selected fragrance materials. A case study of fragrance-mix-positive eczema patients
DEFF Research Database (Denmark)
Johansen, J D; Rastogi, Suresh Chandra; Menné, T
1996-01-01
The aim of the present study was to assess exposure to constituents of the fragrance mix from cosmetic products used by fragrance-mix-positive eczema patients. 23 products, which had either given a positive patch and/or use test in a total of 11 fragrance-mix-positive patients, were analyzed. In all cases, the use of these cosmetics completely or partly explained present or past episodes of eczema. Between 1 and 6 constituents of the fragrance mix were found in 22 out of 23 products. The cosmetics of all the patients sensitive to hydroxycitronellal, eugenol, cinnamic alcohol and alpha… It is concluded that exposure to constituents of the fragrance mix is common in fragrance-allergic patients with cosmetic eczema, and that the fragrance mix is a good reflection of actual exposure.
McCrudden, Matthew T.; Stenseth, Tonje; Bråten, Ivar; Strømsø, Helge I.
2016-01-01
This mixed methods study investigated the extent to which author expertise and content relevance were salient to secondary Norwegian students (N = 153) when they selected documents that pertained to more familiar and less familiar topics. Quantitative results indicated that author expertise was more salient for the less familiar topic (nuclear…
Explorations on High Dimensional Landscapes: Spin Glasses and Deep Learning
Sagun, Levent
This thesis deals with understanding the structure of high-dimensional and non-convex energy landscapes. In particular, its focus is on the optimization of two classes of functions: homogeneous polynomials and loss functions that arise in machine learning. In the first part, the notion of complexity of a smooth, real-valued function is studied through its critical points. Existing theoretical results predict that certain random functions that are defined on high dimensional domains have a narrow band of values whose pre-image contains the bulk of its critical points. This section provides empirical evidence for convergence of gradient descent to local minima whose energies are near the predicted threshold justifying the existing asymptotic theory. Moreover, it is empirically shown that a similar phenomenon may hold for deep learning loss functions. Furthermore, there is a comparative analysis of gradient descent and its stochastic version showing that in high dimensional regimes the latter is a mere speedup. The next study focuses on the halting time of an algorithm at a given stopping condition. Given an algorithm, the normalized fluctuations of the halting time follow a distribution that remains unchanged even when the input data is sampled from a new distribution. Two qualitative classes are observed: a Gumbel-like distribution that appears in Google searches, human decision times, and spin glasses and a Gaussian-like distribution that appears in conjugate gradient method, deep learning with MNIST and random input data. Following the universality phenomenon, the Hessian of the loss functions of deep learning is studied. The spectrum is seen to be composed of two parts, the bulk which is concentrated around zero, and the edges which are scattered away from zero. Empirical evidence is presented for the bulk indicating how over-parametrized the system is, and for the edges that depend on the input data. Furthermore, an algorithm is proposed such that it would
A qualitative numerical study of high dimensional dynamical systems
Albers, David James
Since Poincaré, the father of modern mathematical dynamical systems, much effort has been exerted to achieve a qualitative understanding of the physical world via a qualitative understanding of the functions we use to model the physical world. In this thesis, we construct a numerical framework suitable for a qualitative, statistical study of dynamical systems using the space of artificial neural networks. We analyze the dynamics along intervals in parameter space, separating the set of neural networks into roughly four regions: the fixed point to the first bifurcation; the route to chaos; the chaotic region; and a transition region between chaos and finite-state neural networks. The study focuses primarily on high-dimensional dynamical systems. We make the following general conclusions as the dimension of the dynamical system is increased: the probability of the first bifurcation being of type Neimark-Sacker is greater than ninety percent; the most probable route to chaos is via a cascade of bifurcations of high-period periodic orbits, quasi-periodic orbits, and 2-tori; there exists an interval of parameter space such that hyperbolicity is violated on a countable, Lebesgue measure 0, "increasingly dense" subset; chaos is much more likely to persist with respect to parameter perturbation in the chaotic region of parameter space as the dimension is increased; moreover, as the number of positive Lyapunov exponents is increased, the likelihood that any significant portion of these positive exponents can be perturbed away decreases with increasing dimension. The maximum Kaplan-Yorke dimension and the maximum number of positive Lyapunov exponents increase linearly with dimension. The probability of a dynamical system being chaotic increases exponentially with dimension. The results with respect to the first bifurcation and the route to chaos comment on previous results of Newhouse, Ruelle, Takens, Broer, Chenciner, and Iooss.
Moreover, results regarding the high-dimensional
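Lyapunov-exponent measurements of the kind underlying these conclusions can be reproduced in miniature. The sketch below is an illustrative assumption, not the thesis's ensemble: a single-layer tanh network map with an arbitrary size and gain, analyzed by the standard tangent-vector renormalization (Benettin) procedure.

```python
import numpy as np

def largest_lyapunov(W, beta, n_steps=3000, n_transient=500, seed=0):
    """Estimate the largest Lyapunov exponent of the neural-network map
    x_{t+1} = tanh(beta * W x_t): evolve a tangent vector with the
    Jacobian, renormalize each step, and average the log growth."""
    rng = np.random.default_rng(seed)
    x = rng.normal(scale=0.1, size=W.shape[0])
    v = rng.normal(size=W.shape[0])
    v /= np.linalg.norm(v)
    total = 0.0
    for t in range(n_steps):
        x = np.tanh(beta * (W @ x))
        J = beta * (1.0 - x ** 2)[:, None] * W   # Jacobian at the new point
        v = J @ v
        norm = np.linalg.norm(v)
        v /= norm
        if t >= n_transient:                     # skip the transient
            total += np.log(norm)
    return total / (n_steps - n_transient)

# Random network scaled so the spectral radius of W is near 1: small
# gain beta contracts to a fixed point, large gain is expected chaotic.
d = 32
rng = np.random.default_rng(1)
W = rng.normal(size=(d, d)) / np.sqrt(d)
lam_stable = largest_lyapunov(W, beta=0.2)
lam_chaotic = largest_lyapunov(W, beta=4.0)
print(lam_stable < 0.0, lam_chaotic > 0.0)
```

Sweeping `beta` along an interval and recording where the estimated exponent crosses zero is a toy version of the parameter-space scans the thesis performs at scale.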
Kernel based methods for accelerated failure time model with ultra-high dimensional data
Directory of Open Access Journals (Sweden)
Jiang Feng
2010-12-01
Abstract Background Most genomic data have ultra-high dimensions, with more than 10,000 genes (probes). Regularization methods with L1 and Lp penalties have been extensively studied in survival analysis with high-dimensional genomic data. However, when the sample size n ≪ m (the number of genes), directly identifying a small subset of genes from ultra-high (m > 10,000) dimensional data is time-consuming and not computationally efficient. In current microarray analysis, what people really do is select a couple of thousand (or hundred) genes using univariate analysis or statistical tests, and then apply a LASSO-type penalty to further reduce the number of disease-associated genes. This two-step procedure may introduce bias and inaccuracy and lead us to miss biologically important genes. Results The accelerated failure time (AFT) model is a linear regression model and a useful alternative to the Cox model for survival analysis. In this paper, we propose a nonlinear kernel based AFT model and an efficient variable selection method with adaptive kernel ridge regression. Our proposed variable selection method is based on the kernel matrix and the dual problem, with a much smaller n × n matrix. It is very efficient when the number of unknown variables (genes) is much larger than the number of samples. Moreover, the primal variables are explicitly updated and the sparsity in the solution is exploited. Conclusions Our proposed methods can simultaneously identify survival-associated prognostic factors and predict survival outcomes with ultra-high dimensional genomic data. We have demonstrated the performance of our methods with both simulation and real data. The proposed method performs well while requiring limited computation.
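The computational point — that the dual problem involves only an n × n system when m ≫ n — is easy to verify for the plain linear-kernel ridge case (a simplification of the paper's adaptive kernel AFT method; the sizes below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, lam = 30, 2000, 1.0          # n samples, m >> n genes
X = rng.normal(size=(n, m))
y = rng.normal(size=n)

# Primal ridge solve: an m x m linear system, O(m^3) -- infeasible for
# m ~ 10,000 genes.
w_primal = np.linalg.solve(X.T @ X + lam * np.eye(m), X.T @ y)

# Dual (kernel) solve: only an n x n system in the Gram matrix XX^T,
# then map the dual coefficients back to the primal weights.
alpha = np.linalg.solve(X @ X.T + lam * np.eye(n), y)
w_dual = X.T @ alpha

print(np.allclose(w_primal, w_dual))  # True: identical estimator, n x n cost
```

The equivalence follows from the identity (XᵀX + λI)⁻¹Xᵀ = Xᵀ(XXᵀ + λI)⁻¹, which is exactly what lets kernel methods sidestep the ultra-high primal dimension.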
Informed Design of Mixed-Mode Surveys : Evaluating mode effects on measurement and selection error
Klausch, Thomas
2014-01-01
“Mixed-mode designs” are innovative types of surveys which combine more than one mode of administration in the same project, such as surveys administered partly on the web (online), on paper, by telephone, or face-to-face. Mixed-mode designs have become increasingly popular in international survey
Biomarker identification and effect estimation on schizophrenia –a high dimensional data analysis
Directory of Open Access Journals (Sweden)
Yuanzhang Li
2015-05-01
Biomarkers have been examined in schizophrenia research for decades. Elevated medical morbidity and mortality rates, as well as personal and societal costs, are associated with schizophrenia patients. The identification of biomarkers and alleles, which often have a small effect individually, may help to develop new diagnostic tests for early identification and treatment. Currently, there is no commonly accepted statistical approach to identify predictive biomarkers from high dimensional data. We used the space Decomposition-Gradient-Regression (DGR) method to select biomarkers associated with the risk of schizophrenia. Then, we used the gradient scores generated from the selected biomarkers as the prediction factor in regression to estimate their effects. We also used an alternative approach, classification and regression trees (CART), for comparison with the biomarkers selected by DGR, and found that about 70% of the selected biomarkers were the same. However, the advantage of DGR is that it can evaluate the individual effect of each biomarker within their combined effect. In a DGR analysis of serum specimens of US military service members with a diagnosis of schizophrenia from 1992 to 2005 and their controls, Alpha-1-Antitrypsin (AAT), Interleukin-6 receptor (IL-6r) and Connective Tissue Growth Factor (CTGF) were selected to identify schizophrenia for males; Alpha-1-Antitrypsin (AAT), Apolipoprotein B (Apo B) and Sortilin were selected for females. If these findings from military subjects are replicated by other studies, they suggest the possibility of a novel biomarker panel as an adjunct to earlier diagnosis and initiation of treatment.
Progress in high-dimensional percolation and random graphs
Heydenreich, Markus
2017-01-01
This text presents an engaging exposition of the active field of high-dimensional percolation that will likely provide an impetus for future work. With over 90 exercises designed to enhance the reader’s understanding of the material, as well as many open problems, the book is aimed at graduate students and researchers who wish to enter the world of this rich topic. The text may also be useful in advanced courses and seminars, as well as for reference and individual study. Part I, consisting of 3 chapters, presents a general introduction to percolation, stating the main results, defining the central objects, and proving its main properties. No prior knowledge of percolation is assumed. Part II, consisting of Chapters 4–9, discusses mean-field critical behavior by describing the two main techniques used, namely, differential inequalities and the lace expansion. In Parts I and II, all results are proved, making this the first self-contained text discussing high-dimensional percolation. Part III, consist...
Effects of dependence in high-dimensional multiple testing problems
Directory of Open Access Journals (Sweden)
van de Wiel Mark A
2008-02-01
Abstract Background We consider effects of dependence among variables of high-dimensional data in multiple hypothesis testing problems, in particular on False Discovery Rate (FDR) control procedures. Recent simulation studies have considered only simple correlation structures among variables, which are hardly inspired by real data features. Our aim is to systematically study the effects of several network features, such as sparsity and correlation strength, by imposing dependence structures among variables using random correlation matrices. Results We study the robustness against dependence of several FDR procedures that are popular in microarray studies, such as the Benjamini-Hochberg FDR procedure, Storey's q-value, SAM and resampling-based FDR procedures. False Non-discovery Rates and estimates of the number of null hypotheses are computed from those methods and compared. Our simulation study shows that methods such as SAM and the q-value do not adequately control the FDR to the level claimed under dependence conditions. On the other hand, the adaptive Benjamini-Hochberg procedure seems to be most robust while remaining conservative. Conclusion We discuss a new method for efficient guided simulation of dependent data, which satisfy imposed network constraints as conditional independence structures. Our simulation set-up allows for a structural study of the effect of dependencies on multiple testing criteria and is useful for testing a potentially new method of π0 or FDR estimation in a dependency context. Finally, the estimates of the number of true null hypotheses under various dependence conditions are variable.
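For reference, the Benjamini-Hochberg step-up rule and its adaptive variant (scaling the thresholds by an estimate of the proportion of true nulls, π0) can be sketched in a few lines; the simulated p-values are illustrative:

```python
import numpy as np

def benjamini_hochberg(pvals, alpha=0.05, pi0=1.0):
    """Return a boolean mask of rejected hypotheses.

    pi0 = 1 gives classic BH; pi0 < 1 gives the adaptive variant, which
    raises the thresholds by 1/pi0 and increases power while (under
    independence or positive dependence) still controlling the FDR.
    """
    p = np.asarray(pvals)
    m = p.size
    order = np.argsort(p)
    thresh = alpha * np.arange(1, m + 1) / (m * pi0)
    below = p[order] <= thresh
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])     # largest index passing
        reject[order[: k + 1]] = True        # reject everything smaller
    return reject

# 90 uniform null p-values plus 10 strong signals.
rng = np.random.default_rng(0)
pvals = np.concatenate([rng.uniform(size=90), rng.uniform(0, 1e-4, size=10)])
print(benjamini_hochberg(pvals).sum())
```

Feeding this routine p-values computed from data with an imposed correlation structure — as in the simulation design above — is how FDR robustness under dependence gets assessed empirically.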
High-dimensional quantum cryptography with twisted light
International Nuclear Information System (INIS)
Mirhosseini, Mohammad; Magaña-Loaiza, Omar S; O’Sullivan, Malcolm N; Rodenburg, Brandon; Malik, Mehul; Boyd, Robert W; Lavery, Martin P J; Padgett, Miles J; Gauthier, Daniel J
2015-01-01
Quantum key distribution (QKD) systems often rely on polarization of light for encoding, thus limiting the amount of information that can be sent per photon and placing tight bounds on the error rates that such a system can tolerate. Here we describe a proof-of-principle experiment that indicates the feasibility of high-dimensional QKD based on the transverse structure of the light field allowing for the transfer of more than 1 bit per photon. Our implementation uses the orbital angular momentum (OAM) of photons and the corresponding mutually unbiased basis of angular position (ANG). Our experiment uses a digital micro-mirror device for the rapid generation of OAM and ANG modes at 4 kHz, and a mode sorter capable of sorting single photons based on their OAM and ANG content with a separation efficiency of 93%. Through the use of a seven-dimensional alphabet encoded in the OAM and ANG bases, we achieve a channel capacity of 2.05 bits per sifted photon. Our experiment demonstrates that, in addition to having an increased information capacity, multilevel QKD systems based on spatial-mode encoding can be more resilient against intercept-resend eavesdropping attacks. (paper)
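The reported bits-per-photon figure can be related to alphabet dimension and error rate through the usual symmetric-channel mutual-information formula; the 10% error rate below is illustrative, not the experiment's measured value:

```python
import math

def sifted_key_capacity(d, e):
    """Mutual information (bits per sifted photon) of a d-dimensional
    symmetric channel with symbol error rate e, assuming errors are
    uniform over the d - 1 wrong symbols:
        I = log2(d) + (1-e)*log2(1-e) + e*log2(e/(d-1))."""
    if e == 0:
        return math.log2(d)
    return (math.log2(d)
            + (1 - e) * math.log2(1 - e)
            + e * math.log2(e / (d - 1)))

# d = 7 alphabet as in the OAM/ANG experiment; error rate is illustrative.
print(round(sifted_key_capacity(7, 0.10), 2))  # about 2.08 bits
```

The formula makes the abstract's point concrete: a 7-dimensional alphabet tolerates a sizeable symbol error rate while still carrying more than the 1 bit per photon that polarization encoding caps out at.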
Inference for High-dimensional Differential Correlation Matrices.
Cai, T Tony; Zhang, Anru
2016-01-01
Motivated by differential co-expression analysis in genomics, we consider in this paper estimation and testing of high-dimensional differential correlation matrices. An adaptive thresholding procedure is introduced and theoretical guarantees are given. The minimax rate of convergence is established and the proposed estimator is shown to be adaptively rate-optimal over collections of paired correlation matrices with approximately sparse differences. Simulation results show that the procedure significantly outperforms two other natural methods that are based on separate estimation of the individual correlation matrices. The procedure is also illustrated through an analysis of a breast cancer dataset, which provides evidence at the gene co-expression level that several genes, a subset of which has been previously verified, are associated with breast cancer. Hypothesis testing on the differential correlation matrices is also considered. A test, which is particularly well suited for testing against sparse alternatives, is introduced. In addition, other related problems, including estimation of a single sparse correlation matrix, estimation of the differential covariance matrices, and estimation of the differential cross-correlation matrices, are also discussed.
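A naive fixed-threshold version of differential correlation estimation is easy to sketch. Note the simplification: the paper's procedure chooses entrywise *adaptive* thresholds from the variability of each correlation difference, whereas this toy uses a single global cutoff, and the data are simulated:

```python
import numpy as np

def differential_correlation(X1, X2, tau):
    """Fixed-threshold estimate of the differential correlation matrix
    D = R1 - R2: entries with |D_ij| below tau are set to zero."""
    D = np.corrcoef(X1, rowvar=False) - np.corrcoef(X2, rowvar=False)
    D[np.abs(D) < tau] = 0.0
    return D

# Two groups of samples over 5 genes; only genes 0 and 1 change their
# co-expression between groups (population correlation 0 vs 0.8).
rng = np.random.default_rng(0)
n = 500
X1 = rng.normal(size=(n, 5))
X2 = rng.normal(size=(n, 5))
z = rng.normal(size=n)
X2[:, 0] = z
X2[:, 1] = 0.8 * z + 0.6 * rng.normal(size=n)   # corr(X2_0, X2_1) = 0.8

D = differential_correlation(X1, X2, tau=0.3)
print(np.nonzero(np.triu(D, 1)))
```

With a well-chosen threshold the only surviving off-diagonal entry is the (0, 1) pair whose correlation actually changed; the adaptive procedure automates that choice entry by entry.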
Efficient Smoothed Concomitant Lasso Estimation for High Dimensional Regression
Ndiaye, Eugene; Fercoq, Olivier; Gramfort, Alexandre; Leclère, Vincent; Salmon, Joseph
2017-10-01
In high dimensional settings, sparse structures are crucial for efficiency, in terms of memory, computation and performance. It is customary to use an ℓ1 penalty to enforce sparsity in such scenarios. Sparsity-enforcing methods, the Lasso being a canonical example, are popular candidates to address high dimension. For efficiency, they rely on tuning a parameter that trades data fitting against sparsity. For the Lasso theory to hold, this tuning parameter should be proportional to the noise level, yet the latter is often unknown in practice. A possible remedy is to jointly optimize over the regression parameter as well as over the noise level. This has been considered under several names in the literature: Scaled Lasso, Square-root Lasso, and Concomitant Lasso estimation, for instance, and could be of interest for uncertainty quantification. In this work, after illustrating numerical difficulties with the Concomitant Lasso formulation, we propose a modification we coined the Smoothed Concomitant Lasso, aimed at increasing numerical stability. We propose an efficient and accurate solver with a computational cost no higher than that of the Lasso. We leverage standard ingredients behind the success of fast Lasso solvers: a coordinate descent algorithm combined with safe screening rules, which achieve speed by eliminating irrelevant features early.
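The coordinate descent core that such solvers build on can be sketched directly. This is plain Lasso only: the concomitant noise-level update and the safe screening rules the abstract mentions are omitted, and the problem sizes are illustrative:

```python
import numpy as np

def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_cd(X, y, lam, n_iter=200):
    """Cyclic coordinate descent for the Lasso objective
    (1/(2n))*||y - Xb||^2 + lam*||b||_1."""
    n, p = X.shape
    b = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0) / n
    r = y.copy()                       # current residual y - Xb
    for _ in range(n_iter):
        for j in range(p):
            r += X[:, j] * b[j]        # remove coordinate j from the fit
            rho = X[:, j] @ r / n
            b[j] = soft_threshold(rho, lam) / col_sq[j]
            r -= X[:, j] * b[j]        # put the updated coordinate back
    return b

# Sparse ground truth: 3 active coefficients out of 20.
rng = np.random.default_rng(0)
n, p = 100, 20
X = rng.normal(size=(n, p))
b_true = np.zeros(p)
b_true[:3] = [2.0, -3.0, 1.5]
y = X @ b_true + 0.1 * rng.normal(size=n)
b_hat = lasso_cd(X, y, lam=0.1)
print(np.nonzero(b_hat)[0])
```

The concomitant variants wrap exactly this inner loop with an additional scalar update for the noise-level estimate, so the per-iteration cost stays at the Lasso's.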
The literary uses of high-dimensional space
Directory of Open Access Journals (Sweden)
Ted Underwood
2015-12-01
Debates over “Big Data” shed more heat than light in the humanities, because the term ascribes new importance to statistical methods without explaining how those methods have changed. What we badly need instead is a conversation about the substantive innovations that have made statistical modeling useful for disciplines where, in the past, it truly wasn’t. These innovations are partly technical, but more fundamentally expressed in what Leo Breiman calls a new “culture” of statistical modeling. Where 20th-century methods often required humanists to squeeze our unstructured texts, sounds, or images into some special-purpose data model, new methods can handle unstructured evidence more directly by modeling it in a high-dimensional space. This opens a range of research opportunities that humanists have barely begun to discuss. To date, topic modeling has received most attention, but in the long run, supervised predictive models may be even more important. I sketch their potential by describing how Jordan Sellers and I have begun to model poetic distinction in the long 19th century—revealing an arc of gradual change much longer than received literary histories would lead us to expect.
Elastic SCAD as a novel penalization method for SVM classification tasks in high-dimensional data.
Becker, Natalia; Toedt, Grischa; Lichter, Peter; Benner, Axel
2011-05-09
Classification and variable selection play an important role in knowledge discovery in high-dimensional data. Although Support Vector Machine (SVM) algorithms are among the most powerful classification and prediction methods with a wide range of scientific applications, the SVM does not include automatic feature selection, and therefore a number of feature selection procedures have been developed. Regularisation approaches extend the SVM to a feature selection method in a flexible way using penalty functions like LASSO, SCAD and Elastic Net. We propose a novel penalty function for SVM classification tasks, Elastic SCAD, a combination of SCAD and ridge penalties which overcomes the limitations of each penalty alone. Since SVM models are extremely sensitive to the choice of tuning parameters, we adopted an interval search algorithm, which in comparison to a fixed grid search finds a global optimal solution more rapidly and more precisely. Feature selection methods with combined penalties (Elastic Net and Elastic SCAD SVMs) are more robust to a change of the model complexity than methods using single penalties. Our simulation study showed that Elastic SCAD SVM outperformed LASSO (L1) and SCAD SVMs. Moreover, Elastic SCAD SVM provided sparser classifiers in terms of the median number of features selected than Elastic Net SVM, and often predicted better than Elastic Net in terms of misclassification error. Finally, we applied the penalization methods described above to four publicly available breast cancer data sets. Elastic SCAD SVM was the only method providing robust classifiers in both sparse and non-sparse situations. The proposed Elastic SCAD SVM algorithm provides the advantages of the SCAD penalty and at the same time avoids sparsity limitations for non-sparse data. We were the first to demonstrate that the integration of the interval search algorithm and penalized SVM classification techniques provides fast solutions for the optimization of tuning parameters. The penalized SVM
High-dimensional statistical inference: From vector to matrix
Zhang, Anru
Statistical inference for sparse signals or low-rank matrices in high-dimensional settings is of significant interest in a range of contemporary applications. It has attracted significant recent attention in many fields including statistics, applied mathematics and electrical engineering. In this thesis, we consider several problems including sparse signal recovery (compressed sensing under restricted isometry) and low-rank matrix recovery (matrix recovery via rank-one projections and structured matrix completion). The first part of the thesis discusses compressed sensing and affine rank minimization in both noiseless and noisy cases and establishes sharp restricted isometry conditions for sparse signal and low-rank matrix recovery. The analysis relies on a key technical tool which represents points in a polytope by convex combinations of sparse vectors. The technique is elementary while leading to sharp results. It is shown that, in compressed sensing, for any ε > 0 the conditions δ_k^A < 1/3 + ε, δ_k^A + θ_{k,k}^A < 1 + ε, or δ_{tk}^A < √((t−1)/t) + ε are not sufficient to guarantee the exact recovery of all k-sparse signals for large k. A similar result also holds for matrix recovery. In addition, the conditions δ_k^A < 1/3, δ_k^A + θ_{k,k}^A < 1, δ_{tk}^A < √((t−1)/t) and δ_r^M < 1/3, δ_r^M + θ_{r,r}^M < 1, δ_{tr}^M < √((t−1)/t) are shown to be sufficient, respectively, for stable recovery of approximately sparse signals and low-rank matrices in the noisy case. For the second part of the thesis, we introduce a rank-one projection model for low-rank matrix recovery and propose a constrained nuclear norm minimization method for stable recovery of low-rank matrices in the noisy case. The procedure is adaptive to the rank and robust against small perturbations. Both upper and lower bounds for the estimation accuracy under the Frobenius norm loss are obtained. The proposed estimator is shown to be rate-optimal under certain conditions. The
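The ℓ1-minimization recovery that such restricted isometry conditions certify can be exercised numerically. The following FISTA (accelerated proximal gradient) sketch is illustrative: the problem sizes and the small penalty are arbitrary choices, and Gaussian measurement matrices of this shape satisfy the relevant isometry properties with high probability rather than by construction:

```python
import numpy as np

def fista(A, y, lam, n_iter=1000):
    """FISTA for min_x 0.5*||Ax - y||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of grad
    x = np.zeros(A.shape[1])
    z = x.copy()
    t = 1.0
    for _ in range(n_iter):
        g = z - A.T @ (A @ z - y) / L          # gradient step
        x_new = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # shrink
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        z = x_new + ((t - 1.0) / t_new) * (x_new - x)  # momentum step
        x, t = x_new, t_new
    return x

# k-sparse signal, noiseless Gaussian measurements with n < p.
rng = np.random.default_rng(0)
n, p, k = 80, 200, 5
A = rng.normal(size=(n, p)) / np.sqrt(n)
x_true = np.zeros(p)
x_true[:k] = 3.0 * rng.normal(size=k)
x_hat = fista(A, A @ x_true, lam=1e-3)
print(round(np.linalg.norm(x_hat - x_true), 3))
```

With 80 measurements of a 5-sparse vector in 200 dimensions, the convex program recovers the signal essentially exactly, which is the phenomenon the sharp δ and θ conditions above delimit.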
An adaptive ANOVA-based PCKF for high-dimensional nonlinear inverse modeling
Li, Weixuan; Lin, Guang; Zhang, Dongxiao
2014-02-01
The probabilistic collocation-based Kalman filter (PCKF) is a recently developed approach for solving inverse problems. It resembles the ensemble Kalman filter (EnKF) in every aspect, except that it represents and propagates model uncertainty by polynomial chaos expansion (PCE) instead of an ensemble of model realizations. Previous studies have shown PCKF to be a more efficient alternative to EnKF for many data assimilation problems. However, the accuracy and efficiency of PCKF depend on an appropriate truncation of the PCE series. Having more polynomial chaos basis functions in the expansion helps to capture uncertainty more accurately but increases computational cost. Selection of basis functions is particularly important for high-dimensional stochastic problems because the number of polynomial chaos basis functions required to represent model uncertainty grows dramatically as the number of input parameters (random dimensions) increases. In classic PCKF algorithms, the PCE basis functions are pre-set based on users' experience. Also, for sequential data assimilation problems, the basis functions kept in the PCE expression remain unchanged across Kalman filter loops, which can limit the accuracy and computational efficiency of classic PCKF algorithms. To address this issue, we present a new algorithm that adaptively selects PCE basis functions for different problems and automatically adjusts the number of basis functions in different Kalman filter loops. The algorithm is based on adaptive functional ANOVA (analysis of variance) decomposition, which approximates a high-dimensional function with the summation of a set of low-dimensional functions. Thus, instead of expanding the original model into PCE, we implement the PCE expansion on these low-dimensional functions, which is much less costly. We also propose a new adaptive criterion for ANOVA that is more suited for solving inverse problems. The new algorithm was tested with different examples and demonstrated
Genuinely high-dimensional nonlocality optimized by complementary measurements
International Nuclear Information System (INIS)
Lim, James; Ryu, Junghee; Yoo, Seokwon; Lee, Changhyoup; Bang, Jeongho; Lee, Jinhyoung
2010-01-01
Qubits exhibit extreme nonlocality when their state is maximally entangled and this is observed by mutually unbiased local measurements. This criterion does not hold for the Bell inequalities of high-dimensional systems (qudits), recently proposed by Collins-Gisin-Linden-Massar-Popescu and Son-Lee-Kim. Taking an alternative approach, called the quantum-to-classical approach, we derive a series of Bell inequalities for qudits that satisfy the criterion as for the qubits. In the derivation each d-dimensional subsystem is assumed to be measured by one of d possible measurements with d being a prime integer. By applying to two qubits (d=2), we find that a derived inequality is reduced to the Clauser-Horne-Shimony-Holt inequality when the degree of nonlocality is optimized over all the possible states and local observables. Further applying to two and three qutrits (d=3), we find Bell inequalities that are violated for the three-dimensionally entangled states but are not violated by any two-dimensionally entangled states. In other words, the inequalities discriminate three-dimensional (3D) entanglement from two-dimensional (2D) entanglement and in this sense they are genuinely 3D. In addition, for the two qutrits we give a quantitative description of the relations among the three degrees of complementarity, entanglement and nonlocality. It is shown that the degree of complementarity jumps abruptly to very close to its maximum as nonlocality starts appearing. These characteristics imply that complementarity plays a more significant role in the present inequality compared with the previously proposed inequality.
Approximation of High-Dimensional Rank One Tensors
Bachmayr, Markus
2013-11-12
Many real world problems are high-dimensional in that their solution is a function which depends on many variables or parameters. This presents a computational challenge since traditional numerical techniques are built on model classes for functions based solely on smoothness. It is known that the approximation of smoothness classes of functions suffers from the so-called 'curse of dimensionality'. Avoiding this curse requires new model classes for real world functions that match applications. This has led to the introduction of notions such as sparsity, variable reduction, and reduced modeling. One theme that is particularly common is to assume a tensor structure for the target function. This paper investigates how well a rank one function f(x_1,...,x_d)=f_1(x_1)⋯f_d(x_d), defined on Ω=[0,1]^d, can be captured through point queries. It is shown that such a rank one function with component functions f_j in W_∞^r([0,1]) can be captured (in L_∞) to accuracy O(C(d,r)N^{-r}) from N well-chosen point evaluations. The constant C(d,r) scales like d^{dr}. The queries in our algorithms have two ingredients, a set of points built on the results from discrepancy theory and a second adaptive set of queries dependent on the information drawn from the first set. Under the assumption that a point z∈Ω with nonvanishing f(z) is known, the accuracy improves to O(dN^{-r}). © 2013 Springer Science+Business Media New York.
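The role of the anchor point z with nonvanishing f(z) can be illustrated with a small sketch (not the paper's algorithm, which also places query points using discrepancy theory): for a rank one f, any value f(x) is recoverable from the d axis-aligned fibers through z, since f(x) = ∏_j f(z_1,...,x_j,...,z_d) / f(z)^{d-1}.

```python
import numpy as np

def rank_one_eval(f, z, x):
    """Evaluate a rank-one f at x using d fiber queries through z plus f(z)."""
    d = len(z)
    fz = f(z)
    assert fz != 0, "anchor point must satisfy f(z) != 0"
    val = 1.0
    for j in range(d):
        y = np.array(z, dtype=float)
        y[j] = x[j]          # query the j-th axis-aligned fiber through z
        val *= f(y)
    return val / fz ** (d - 1)

# toy rank-one function on [0,1]^3
f = lambda x: np.sin(x[0] + 1.0) * np.exp(x[1]) * (x[2] ** 2 + 0.5)
z = np.array([0.3, 0.7, 0.2])
x = np.array([0.9, 0.1, 0.6])
print(abs(rank_one_eval(f, z, x) - f(x)) < 1e-9)   # → True
```

Each factor f(z_1,...,x_j,...,z_d) equals f_j(x_j) times the constant f(z)/f_j(z_j), so the product double-counts f(z) exactly d-1 times, which the division removes.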
Statistical mechanics of complex neural systems and high dimensional data
International Nuclear Information System (INIS)
Advani, Madhu; Lahiri, Subhaneil; Ganguli, Surya
2013-01-01
Recent experimental advances in neuroscience have opened new vistas into the immense complexity of neuronal networks. This proliferation of data challenges us on two parallel fronts. First, how can we form adequate theoretical frameworks for understanding how dynamical network processes cooperate across widely disparate spatiotemporal scales to solve important computational problems? Second, how can we extract meaningful models of neuronal systems from high dimensional datasets? To aid in these challenges, we give a pedagogical review of a collection of ideas and theoretical methods arising at the intersection of statistical physics, computer science and neurobiology. We introduce the interrelated replica and cavity methods, which originated in statistical physics as powerful ways to quantitatively analyze large highly heterogeneous systems of many interacting degrees of freedom. We also introduce the closely related notion of message passing in graphical models, which originated in computer science as a distributed algorithm capable of solving large inference and optimization problems involving many coupled variables. We then show how both the statistical physics and computer science perspectives can be applied in a wide diversity of contexts to problems arising in theoretical neuroscience and data analysis. Along the way we discuss spin glasses, learning theory, illusions of structure in noise, random matrices, dimensionality reduction and compressed sensing, all within the unified formalism of the replica method. Moreover, we review recent conceptual connections between message passing in graphical models, and neural computation and learning. Overall, these ideas illustrate how statistical physics and computer science might provide a lens through which we can uncover emergent computational functions buried deep within the dynamical complexities of neuronal networks. (paper)
Quality and efficiency in high dimensional Nearest neighbor search
Tao, Yufei; Yi, Ke; Sheng, Cheng; Kalnis, Panos
2009-01-01
Nearest neighbor (NN) search in high dimensional space is an important problem in many applications. Ideally, a practical solution (i) should be implementable in a relational database, and (ii) its query cost should grow sub-linearly with the dataset size, regardless of the data and query distributions. Despite the bulk of NN literature, no solution fulfills both requirements, except locality sensitive hashing (LSH). The existing LSH implementations are either rigorous or adhoc. Rigorous-LSH ensures good quality of query results, but requires expensive space and query cost. Although adhoc-LSH is more efficient, it abandons quality control, i.e., the neighbor it outputs can be arbitrarily bad. As a result, currently no method is able to ensure both quality and efficiency simultaneously in practice. Motivated by this, we propose a new access method called the locality sensitive B-tree (LSB-tree) that enables fast high-dimensional NN search with excellent quality. The combination of several LSB-trees leads to a structure called the LSB-forest that ensures the same result quality as rigorous-LSH, but reduces its space and query cost dramatically. The LSB-forest also outperforms adhoc-LSH, even though the latter has no quality guarantee. Besides its appealing theoretical properties, the LSB-tree itself also serves as an effective index that consumes linear space, and supports efficient updates. Our extensive experiments confirm that the LSB-tree is faster than (i) the state of the art of exact NN search by two orders of magnitude, and (ii) the best (linear-space) method of approximate retrieval by an order of magnitude, and at the same time, returns neighbors with much better quality. © 2009 ACM.
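For background, the hashing idea underlying both LSH variants can be sketched with a single E2LSH-style table over random Gaussian projections (a generic illustration; the LSB-tree itself additionally orders hash values inside a B-tree, which is not shown):

```python
import numpy as np

rng = np.random.default_rng(0)

class LSHTable:
    """One E2LSH-style table: h(x) = floor((a·x + b) / w), k hashes per key."""
    def __init__(self, dim, k=8, w=4.0):
        self.a = rng.normal(size=(k, dim))
        self.b = rng.uniform(0.0, w, size=k)
        self.w = w
        self.buckets = {}

    def _key(self, x):
        return tuple(np.floor((self.a @ x + self.b) / self.w).astype(int))

    def insert(self, idx, x):
        self.buckets.setdefault(self._key(x), []).append(idx)

    def query(self, x):
        return self.buckets.get(self._key(x), [])

data = rng.normal(size=(1000, 32))
table = LSHTable(dim=32)
for i, point in enumerate(data):
    table.insert(i, point)

# a near-duplicate of point 17 usually lands in the same bucket; in practice
# several independent tables are combined to boost recall
candidates = table.query(data[17] + 0.01 * rng.normal(size=32))
```

Rigorous-LSH obtains its guarantees by maintaining many such tables at multiple radii, which is exactly the space/query cost the LSB-forest is designed to reduce.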
Approximation of High-Dimensional Rank One Tensors
Bachmayr, Markus; Dahmen, Wolfgang; DeVore, Ronald; Grasedyck, Lars
2013-01-01
Many real world problems are high-dimensional in that their solution is a function which depends on many variables or parameters. This presents a computational challenge since traditional numerical techniques are built on model classes for functions based solely on smoothness. It is known that the approximation of smoothness classes of functions suffers from the so-called 'curse of dimensionality'. Avoiding this curse requires new model classes for real world functions that match applications. This has led to the introduction of notions such as sparsity, variable reduction, and reduced modeling. One theme that is particularly common is to assume a tensor structure for the target function. This paper investigates how well a rank one function f(x_1,...,x_d)=f_1(x_1)⋯f_d(x_d), defined on Ω=[0,1]^d, can be captured through point queries. It is shown that such a rank one function with component functions f_j in W_∞^r([0,1]) can be captured (in L_∞) to accuracy O(C(d,r)N^{-r}) from N well-chosen point evaluations. The constant C(d,r) scales like d^{dr}. The queries in our algorithms have two ingredients, a set of points built on the results from discrepancy theory and a second adaptive set of queries dependent on the information drawn from the first set. Under the assumption that a point z∈Ω with nonvanishing f(z) is known, the accuracy improves to O(dN^{-r}). © 2013 Springer Science+Business Media New York.
Directory of Open Access Journals (Sweden)
Xiaoli Wang
2016-01-01
Full Text Available A series of metal cation modified layered-double hydroxides (LDHs) and mixed oxides were prepared and used for the selective oxidation of toluene with O2. The results revealed that the modified LDHs exhibited much higher catalytic performance than their parent LDH and the modified mixed oxides. Moreover, the metal cations were also found to play important roles in the catalytic performance and stability of the modified catalysts. Under the optimal reaction conditions, the highest toluene conversion reached 8.7% with 97.5% selectivity to benzaldehyde; moreover, the catalytic performance was maintained after nine catalytic runs. In addition, the reaction probably involves a free-radical mechanism.
Bosch, H.; Bruggink, A.A.; Ross, J.R.H.
1987-01-01
The selective oxidation of n-butane to maleic anhydride over V-P-O mixed oxides was studied under oxygen deficient conditions. The mixed oxides were prepared with P/V atomic ratios ranging from 0.7 to 1.0. Catalysts with P/V <1.0 did not show any selectivity to maleic anhydride formation, regardless
Zuo, Hao-yi; Gao, Jie; Yang, Jing-guo
2007-03-01
A new method to enhance the intensity of the different orders of Stokes lines of SRS by using mixed dye fluorescence is reported. The Stokes lines from the second order to the fifth order of CCl4 were enhanced by the fluorescence of mixed R6G and RB solutions in proportions of 20:2, 20:13 and 20:40 (R6G:RB), respectively. It is considered that the Stokes lines from the second order to the fifth order are near the fluorescence peaks of the three mixed solutions and far from the absorption peaks of R6G and RB, so the enhancement effect dominates the absorption effect; as a result, these Stokes lines are enhanced. On the contrary, the first-order Stokes line is near the absorption peak of RB and far from the fluorescence peaks of the mixed solutions, which leads to the weakening of this Stokes line. It is also reported that the first-order, second-order and third-order Stokes lines of benzene were enhanced by the fluorescence of mixed solutions of R6G and DCM in different proportions. The potential application of this method is forecasted.
Crocker, Joanna C; Beecham, Emma; Kelly, Paula; Dinsdale, Andrew P; Hemsley, June; Jones, Louise; Bluebond-Langner, Myra
2015-01-01
Background: Recruitment to paediatric palliative care research is challenging, with high rates of non-invitation of eligible families by clinicians. The impact on sample characteristics is unknown. Aim: To investigate, using mixed methods, non-invitation of eligible families and ensuing selection bias in an interview study about parents' experiences of advance care planning (ACP). Design: We examined differences between eligible families invited and not invited to participate by clinicians us...
Ghalei, Behnam; Sakurai, Kento; Kinoshita, Yosuke; Wakimoto, Kazuki; Isfahani, Ali Pournaghshband; Song, Qilei; Doitomi, Kazuki; Furukawa, Shuhei; Hirao, Hajime; Kusuda, Hiromu; Kitagawa, Susumu; Sivaniah, Easan
2017-07-01
Mixed matrix membranes (MMMs) for gas separation applications have enhanced selectivity when compared with the pure polymer matrix, but are commonly reported with low intrinsic permeability, which has major cost implications for implementation of membrane technologies in large-scale carbon capture projects. High-permeability polymers rarely generate sufficient selectivity for energy-efficient CO2 capture. Here we report substantial selectivity enhancements within high-permeability polymers as a result of the efficient dispersion of amine-functionalized, nanosized metal-organic framework (MOF) additives. The enhancement effects under optimal mixing conditions occur with minimal loss in overall permeability. Nanosizing of the MOF enhances its dispersion within the polymer matrix to minimize non-selective microvoid formation around the particles. Amination of such MOFs increases their interaction with the polymer matrix, resulting in a measured rigidification and enhanced selectivity of the overall composite. The optimal MOF MMM performance was verified in three different polymer systems, and also over pressure and temperature ranges suitable for carbon capture.
Supplier selection in automobile industry: A mixed balanced scorecard–fuzzy AHP approach
Directory of Open Access Journals (Sweden)
Masoud Rahiminezhad Galankashi
2016-03-01
Full Text Available This study proposed an integrated Balanced Scorecard–Fuzzy Analytic Hierarchical Process (BSC–FAHP) model to select suppliers in the automotive industry. In spite of the vast amount of studies on supplier selection, the evaluation and selection of suppliers using the specific measures of the automotive industry are less investigated. In order to fill this gap, this research proposed a new BSC for supplier selection in the automobile industry. Measures were gathered using a literature survey and accredited using the Nominal Group Technique (NGT). Finally, a fuzzy AHP was used to select the best supplier.
Bibenzimidazole containing mixed ligand cobalt(III) complex as a selective receptor for iodide
Digital Repository Service at National Institute of Oceanography (India)
Indumathy, R.; Parameswarana, P.S.; Aiswarya, C.V.; Nair, B.U.
Two new mixed ligand cobalt(III) complexes containing bibenzimidazole (bbenzimH_{2}) ligand with composition [Co(phen)_{2}bbenzimH_{2}](ClO_{4})_{3} (1) and [Co(bpy)_{2}bbenzimH_{2}](ClO_{4...}
Energy Technology Data Exchange (ETDEWEB)
Schoenauer, D.; Moos, R. [Bayreuth Univ. (Germany). Lehrstuhl fuer Funktionsmaterialien; Wiesner, K.; Fleischer, M. [Siemens AG, CT PS 6, Muenchen (Germany)
2007-07-01
Mixed potential sensors with additional catalytic deposits on one of two electrodes show a high potential for NH{sub 3} detection. With defined reactions at the covered electrode it is possible to derive a temperature dependent correlation between the gas concentration/composition and the sensor signal which is characteristic for the used electrode material and the catalyst.
Huang, Lin; Xue, Ming; Song, Qingshan; Chen, Siru; Pan, Ying; Qiu, Shilun
2014-01-01
A new mixed-ligand Zeolitic Imidazolate Framework Zn4(2-mbIm)3(bIm)5·4H2O (named JUC-160, 2-mbIm = 2-methylbenzimidazole, bIm = benzimidazole and JUC = Jilin University China) was synthesized with a solvothermal reaction of Zn(NO3)2·6H2O, b
Individual-based models for adaptive diversification in high-dimensional phenotype spaces.
Ispolatov, Iaroslav; Madhok, Vaibhav; Doebeli, Michael
2016-02-07
Most theories of evolutionary diversification are based on equilibrium assumptions: they are either based on optimality arguments involving static fitness landscapes, or they assume that populations first evolve to an equilibrium state before diversification occurs, as exemplified by the concept of evolutionary branching points in adaptive dynamics theory. Recent results indicate that adaptive dynamics may often not converge to equilibrium points and instead generate complicated trajectories if evolution takes place in high-dimensional phenotype spaces. Even though some analytical results on diversification in complex phenotype spaces are available, to study this problem in general we need to reconstruct individual-based models from the adaptive dynamics generating the non-equilibrium dynamics. Here we first provide a method to construct individual-based models such that they faithfully reproduce the given adaptive dynamics attractor without diversification. We then show that a propensity to diversify can be introduced by adding Gaussian competition terms that generate frequency dependence while still preserving the same adaptive dynamics. For sufficiently strong competition, the disruptive selection generated by frequency-dependence overcomes the directional evolution along the selection gradient and leads to diversification in phenotypic directions that are orthogonal to the selection gradient. Copyright © 2015 Elsevier Ltd. All rights reserved.
Matrix correlations for high-dimensional data: The modified RV-coefficient
Smilde, A.K.; Kiers, H.A.L.; Bijlsma, S.; Rubingh, C.M.; Erk, M.J. van
2009-01-01
Motivation: Modern functional genomics generates high-dimensional datasets. It is often convenient to have a single simple number characterizing the relationship between pairs of such high-dimensional datasets in a comprehensive way. Matrix correlations are such numbers and are appealing since they
Directory of Open Access Journals (Sweden)
Omid Hamidi
2014-01-01
Full Text Available Microarray technology results in high-dimensional and low-sample-size data sets. Therefore, fitting sparse models is substantial because only a small number of influential genes can reliably be identified. A number of variable selection approaches have been proposed for high-dimensional time-to-event data based on Cox proportional hazards where censoring is present. The present study applied three sparse variable selection techniques, the Lasso, the smoothly clipped absolute deviation, and the smooth integration of counting and absolute deviation, to gene expression survival time data using the additive risk model, which is adopted when the absolute effects of multiple predictors on the hazard function are of interest. The performances of the used techniques were evaluated by time-dependent ROC curves and bootstrap .632+ prediction error curves. The genes selected by all methods were highly significant (P<0.001). The Lasso showed the maximum median area under the ROC curve over time (0.95) and the smoothly clipped absolute deviation showed the lowest prediction error (0.105). It was observed that the genes selected by all methods improved the prediction of the purely clinical model, indicating the valuable information contained in the microarray features. It was concluded that the applied approaches can satisfactorily predict survival based on selected gene expression measurements.
Zhang, Fang; Zhang, Yan; Ding, Jing; Dai, Kun; van Loosdrecht, Mark C. M.; Zeng, Raymond J.
2014-06-01
The control of metabolite production is difficult in mixed culture fermentation. This is particularly related to hydrogen inhibition. In this work, hydrogenotrophic methanogens were selectively enriched to reduce the hydrogen partial pressure and to realize efficient acetate production in extreme-thermophilic (70°C) mixed culture fermentation. The continuous stirred tank reactor (CSTR) was operated stably for 100 days, in which acetate accounted for more than 90% of the metabolites in the liquid solution. The yields of acetate, methane and biomass in the CSTR were 1.5 +/- 0.06, 1.0 +/- 0.13 and 0.4 +/- 0.05 mol/mol glucose, respectively, close to the theoretically expected values. The CSTR effluent was stable and no further conversion occurred when incubated for 14 days in a batch reactor. In fed-batch experiments, acetate could be produced up to 34.4 g/L, significantly higher than observed in common hydrogen-producing fermentations. Acetate also accounted for more than 90% of the soluble products formed in these fed-batch fermentations. The microbial community analysis revealed hydrogenotrophic methanogens (mainly Methanothermobacter thermautotrophicus and Methanobacterium thermoaggregans) as 98% of the Archaea, confirming that high temperature effectively selects hydrogenotrophic methanogens over aceticlastic methanogens. This work demonstrated a potential application to effectively produce acetate as a value-added chemical and methane as an energy gas via mixed culture fermentation.
Dijkstra, Jan Kornelis; Berger, Christian
2018-01-01
The present study examined to what extent selection and influence processes for physical aggression and prosociality in friendship networks differed between sex-specific contexts (i.e., all-male, all-female, and mixed-sex classrooms), while controlling for perceived popularity. Whereas selection processes reflect how behaviors shape friendships, influence processes reveal the reversed pattern by indicating how friends affect individual behaviors. Data were derived from a longitudinal sample of early adolescents from Chile. Four all-male classrooms (n = 150 male adolescents), four all-female classrooms (n = 190 female adolescents), and eight mixed-sex classrooms (n = 272 students) were followed for one year from grades 5 to 6 (mean age = 13). Analyses were conducted by means of stochastic-actor-based modeling as implemented in RSIENA. Although it was expected that selection and influence effects for physical aggression and prosociality would vary by context, these effects showed remarkably similar trends across all-male, all-female, and mixed-sex classrooms, with physical aggression reducing and prosociality increasing the number of nominations received as best friend in all-male and particularly all-female classrooms. Further, perceived popularity increased the number of friendship nominations received in all contexts. Influence processes were only found for perceived popularity, but not for physical aggression and prosociality, in any of the three contexts. Together, these findings highlight the importance of both behaviors for friendship selection independent of sex-specific contexts, attenuating the implications of these gendered behaviors for peer relations.
Iron-tellurium-selenium mixed oxide catalysts for the selective oxidation of propylene to acrolein
International Nuclear Information System (INIS)
Patel, B.M.; Price, G.L.
1990-01-01
This paper reports on iron-tellurium-selenium mixed oxide catalysts prepared by coprecipitation from aqueous solution and investigated for the propylene-to-acrolein reaction in the temperature range 543-773 K. Infrared spectroscopy, energy dispersive X-ray analysis, X-ray diffraction, and isotopic tracer techniques have also been employed to characterize this catalytic system. Properties of the Fe-Te-Se mixed oxide catalysts have been compared with Fe-Te mixed oxides in an effort to deduce the functionality of Se. The selenium in the Fe-Te-Se-O catalyst has been found to be the hydrocarbon-activating site. The activation energies for acrolein and carbon dioxide formation are 71 and 54 kJ/mol, respectively. Reactions carried out with 18O2 have shown lattice oxygen to be primarily responsible for the formation of both acrolein and carbon dioxide. The initial and rate-determining step for acrolein formation is hydrogen abstraction, as determined by an isotope effect associated with the C3D6 reaction. No isotope effect is observed for carbon dioxide formation from C3D6, suggesting that CO2 is formed by parallel, not consecutive, oxidation of propylene
Energy Technology Data Exchange (ETDEWEB)
Allan, Grant; Eromenko, Igor; McGregor, Peter [Fraser of Allander Institute, Department of Economics, University of Strathclyde, Sir William Duncan Building, 130 Rottenrow, Glasgow G4 0GE (United Kingdom); Swales, Kim [Department of Economics, University of Strathclyde, Sir William Duncan Building, 130 Rottenrow, Glasgow G4 0GE (United Kingdom)
2011-01-15
Standalone levelised cost assessments of electricity supply options miss an important contribution that renewable and non-fossil fuel technologies can make to the electricity portfolio: that of reducing the variability of electricity costs, and their potentially damaging impact upon economic activity. Portfolio theory applications to the electricity generation mix have shown that renewable technologies, their costs being largely uncorrelated with non-renewable technologies, can offer such benefits. We look at the existing Scottish generation mix and examine drivers of changes out to 2020. We assess recent scenarios for the Scottish generation mix in 2020 against mean-variance efficient portfolios of electricity-generating technologies. Each of the scenarios studied implies a portfolio cost of electricity that is between 22% and 38% higher than the portfolio cost of electricity in 2007. These scenarios prove to be mean-variance 'inefficient' in the sense that, for example, lower variance portfolios can be obtained without increasing portfolio costs, typically by expanding the share of renewables. As part of extensive sensitivity analysis, we find that Wave and Tidal technologies can contribute to lower risk electricity portfolios, while not increasing portfolio cost. (author)
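The mean-variance machinery referred to above reduces, for the minimum-variance portfolio under a full-investment constraint, to the closed form w = Σ⁻¹1 / (1'Σ⁻¹1). A sketch with invented cost covariances (illustrative numbers, not the paper's Scottish data):

```python
import numpy as np

# assumed cost variances/covariances for three technologies (illustrative only)
cov = np.array([[0.10, 0.02, 0.00],    # fossil technology A
                [0.02, 0.08, 0.01],    # fossil technology B
                [0.00, 0.01, 0.03]])   # renewables: low variance, ~uncorrelated

w = np.linalg.solve(cov, np.ones(3))
w /= w.sum()                           # minimum-variance weights, summing to 1
portfolio_variance = float(w @ cov @ w)
# the low-variance, weakly correlated technology receives the largest share,
# and the portfolio variance falls below that of any single technology
```

This is the sense in which renewables, whose costs are largely uncorrelated with fossil fuel prices, can lower portfolio risk even without being the cheapest option on a standalone levelised-cost basis.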
Directory of Open Access Journals (Sweden)
Riccardo Rea
2018-01-01
Full Text Available We fabricated novel composite (mixed matrix) membranes based on a permeable glassy polymer, poly(2,6-dimethyl-1,4-phenylene oxide) (PPO), and variable loadings of few-layer graphene, to test their potential in gas separation and CO2 capture applications. The permeability, selectivity and diffusivity of different gases as a function of graphene loading, from 0.3 to 15 wt %, were measured at 35 and 65 °C. Samples with small loadings of graphene show a higher permeability and He/CO2 selectivity than pure PPO, due to a favorable effect of the nanofillers on the polymer morphology. Higher amounts of graphene lower the permeability of the polymer, due to the prevailing effect of increased tortuosity for the gas molecules in the membrane. Graphene also dramatically reduces the increase of permeability with temperature, acting as a “stabilizer” for the polymer matrix. Such an effect reduces the temperature-induced loss of size-selectivity for He/N2 and CO2/N2, and enhances the temperature-induced increase of selectivity for He/CO2. The study confirms that, as observed in the case of other graphene-based mixed matrix glassy membranes, the optimal concentration of graphene in the polymer is below 1 wt %. Below such a threshold, the morphology of the nanoscopic filler added in solution positively affects the glassy chain packing, enhancing permeability and selectivity, and improving the selectivity of the membrane at increasing temperatures. These results suggest that small additions of graphene to polymers can enhance their permselectivity and stabilize their properties.
Mixed signals: The effect of conflicting reward- and goal-driven biases on selective attention
Preciado, Daniel; Munneke, Jaap; Theeuwes, Jan
2017-01-01
Attentional selection depends on the interaction between exogenous (stimulus-driven), endogenous (goal-driven), and selection history (experience-driven) factors. While endogenous and exogenous biases have been widely investigated, less is known about their interplay with value-driven attention. The present study investigated the interaction between reward-history and goal-driven biases on perceptual sensitivity (d′) and response time (RT) in a modified cueing paradigm presenting two coloured...
Directory of Open Access Journals (Sweden)
O. B. Demianova
2014-01-01
Full Text Available Goal. To study the dependence between recurrent balanoposthitis of mixed etiology and oral sex. To assess the efficacy, tolerance and cosmetic acceptability of a combination topical drug on the basis of a cream for the treatment of balanoposthitis of Candida and bacterial etiology. Materials and methods. An open-label single-arm non-randomized study involved 48 men aged 22-43 suffering from recurrent balanoposthitis of mixed etiology and their long-term sex partners. All of the subjects underwent the following tests: complete blood count, clinical urine test, blood biochemistry (AST, ALT, total bilirubin, thymol test and blood glucose), MRSA, blood tests for anti-hepatitis B and C virus antibodies, HIV-1/-2 antibody screening test, microscopy of urethral, vaginal and cervical canal materials, PCR for Chlamydia trachomatis, Trichomonas vaginalis, N. gonorrhoeae, Mycoplasma genitalium, Ureaplasma spp., bacterial swab tests based on urethral materials (in men), vaginal materials (in women) and throat (in subjects of both sexes), and microscopy of tongue scrapings. 46 male patients used the Candiderm cream (Glenmark Pharmaceuticals Ltd.) for 10-14 days. Physicians assessed the efficacy based on the symptom intensity and the patient's opinion. Results. In people who practiced unprotected oral sex, a high contamination of the mucous coats of the oral cavity, throat and genitals with yeast fungi and opportunistic bacteria was revealed. C. albicans was often found in diagnostically significant amounts in couples. The authors substantiate the possibility of a contact-type transmission of opportunistic bacteria during oral sex resulting in balanoposthitis of mixed Candida and bacterial etiology or exacerbation of the condition after sexual contacts in men practicing unprotected oral sex. Evident clinical efficacy and safety of the combination as well as good tolerance and convenience of application of the combination topical drug comprising beclomethasone
Luo, Ying-Di; Zhang, Qi-Lei; Yao, Shan-Jing; Lin, Dong-Qiang
2018-01-19
This study investigated the adsorption selectivity of immunoglobulin M (IgM), immunoglobulin A (IgA) and immunoglobulin G (IgG) on four mixed-mode resins with the functional ligands 4-mercaptoethyl-pyridine (MEP), 2-mercapto-1-methylimidazole (MMI), 5-aminobenzimidazole (ABI) and tryptophan-5-aminobenzimidazole (W-ABI), respectively. IgM purification processes with mixed-mode resins were also proposed. All resins showed typical pH-dependent adsorption, with high adsorption capacity at pH 5.0-8.0 and low adsorption capacity under acidic conditions. Meanwhile, high selectivity of IgM/IgA and IgM/IgG was obtained with ABI-4FF and MMI-4FF resins at pH 4.0-5.0, which was used to develop a method for IgM, IgA and IgG separation by controlling loading and elution pH. Capture of monoclonal IgM from cell culture supernatant with ABI-4FF resins was studied, and high purity (∼99%) and good recovery (80.8%) were obtained. Moreover, IgM direct separation from human serum with combined two-step chromatography (ABI-4FF and MMI-4FF) was investigated, and an IgM purity of 65.2% and a purification factor of 28.3 were obtained after optimization. The antibody activity of IgM was maintained after purification. The results demonstrated that mixed-mode chromatography with specially-designed ligands is a promising way to improve the adsorption selectivity and process efficiency of IgM purification from complex feedstock. Copyright © 2017 Elsevier B.V. All rights reserved.
Guzel, Mehmet A; Higham, Philip A
2013-07-01
Two experiments are reported in which we used type-2 signal detection theory to separate the effects of semantic categorization on early- and late-selection processes in free and cued recall. In Experiment 1, participants studied cue-target pairs for which the targets belonged to two, six, or 24 semantic categories, and later the participants were required to recall the targets either with (cued recall) or without (free recall) the studied cues. A confidence rating and a report decision were also required, so that we could compute both forced-report quantity and metacognitive resolution (type-2 discrimination), which served as our estimates of early- and late-selection processes, respectively. Consistent with prior research, having fewer categories enhanced the early-selection process (in performance, two > six > 24 categories). However, in contrast, the late-selection process was impaired (24 > six = two categories). In Experiment 2, encoding of paired associates, for which the targets belonged to either two or 20 semantic categories, was manipulated by having participants either form interactive images or engage in rote repetition. Having fewer categories again was associated with enhanced early selection (two > 20 categories); this effect was greater for rote repetition than for interactive imagery, and greater for free recall than for cued recall. However, late selection again showed the opposite pattern (20 > two categories), even with interactive-imagery encoding, which formed distinctive, individuated memory traces. The results are discussed in terms of early- and late-selection processes in retrieval, as well as overt versus covert recognition.
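Metacognitive resolution of the kind measured in these experiments is often approximated by a Goodman-Kruskal gamma between confidence and accuracy; a toy version of that computation (a simple stand-in only; the study's type-2 signal detection analysis is richer):

```python
def gk_gamma(confidence, accuracy):
    """Goodman-Kruskal gamma: (concordant - discordant) / (concordant + discordant)."""
    concordant = discordant = 0
    for i in range(len(confidence)):
        for j in range(i + 1, len(confidence)):
            s = (confidence[i] - confidence[j]) * (accuracy[i] - accuracy[j])
            if s > 0:
                concordant += 1
            elif s < 0:
                discordant += 1
    return (concordant - discordant) / (concordant + discordant)

# perfect resolution: every correct response carries higher confidence
print(gk_gamma([3, 2, 3, 1, 2, 1], [1, 1, 1, 0, 0, 0]))   # → 1.0
```

A value near 1 means confidence reliably discriminates correct from incorrect responses (good late selection), while a value near 0 means report decisions carry no information beyond forced-report quantity.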
Energy Technology Data Exchange (ETDEWEB)
Sivaramakrishna, D.; Sreekanth, D.; Himabindu, V. [Centre for Environment, Institute of Science and Technology, Jawaharlal Nehru Technological University, Kukatpally, Hyderabad 500072, Andhra Pradesh (India); Anjaneyulu, Y. [TLGVRC, JSU Box 18739, JSU, Jackson, MS 32917-0939 (United States)
2009-03-15
Biohydrogen production from probiotic wastewater using mixed anaerobic consortia is reported in this paper. Batch tests are carried out in a 5.0 L batch reactor under a constant mesophilic temperature (37 °C). The maximum hydrogen yield of 1.8 mol-hydrogen/mol-carbohydrate is obtained at an optimum pH of 5.5 and a substrate concentration of 5 g/L. The maximum hydrogen production rate is 168 ml/h. The hydrogen content in the biogas is more than 65% and no significant methane is observed throughout the study. In addition to hydrogen, acetate, propionate, butyrate and ethanol are found to be the main by-products of hydrogen fermentation metabolism. (author)
High School Biology Students' Transfer of the Concept of Natural Selection: A Mixed-Methods Approach
Pugh, Kevin J.; Koskey, Kristin L. K.; Linnenbrink-Garcia, Lisa
2014-01-01
The concept of natural selection serves as a foundation for understanding diverse biological concepts and has broad applicability to other domains. However, we know little about students' abilities to transfer (i.e. apply to a new context or use generatively) this concept and the relation between students' conceptual understanding and transfer…
Ren, Jie; He, Tao; Li, Ye; Liu, Sai; Du, Yinhao; Jiang, Yu; Wu, Cen
2017-05-16
Over the past decades, the prevalence of type 2 diabetes mellitus (T2D) has been steadily increasing around the world. Despite large efforts devoted to better understanding the genetic basis of the disease, the identified susceptibility loci can only account for a small portion of the T2D heritability. Some of the existing approaches proposed for the high-dimensional genetic data from T2D case-control studies are limited by analyzing only a small number of SNPs at a time from a large pool, by ignoring the correlations among SNPs, and by adopting inefficient selection techniques. We propose a network-constrained regularization method to select important SNPs by taking linkage disequilibrium into account. To accommodate the case-control study, an iteratively reweighted least squares algorithm has been developed within the coordinate descent framework, where optimization of the regularized logistic loss function is performed with respect to one parameter at a time, cycling iteratively through all the parameters until convergence. In this article, a novel approach is developed to identify important SNPs more effectively by incorporating the interconnections among them in the regularized selection. A coordinate-descent-based iteratively reweighted least squares (IRLS) algorithm has been proposed. Both the simulation study and the analysis of the Nurses' Health Study, a case-control study of type 2 diabetes with high-dimensional SNP measurements, demonstrate the advantage of the network-based approach over the competing alternatives.
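The IRLS scheme described above can be sketched in a few lines. The snippet below is a simplified illustration, not the authors' implementation: it solves the penalized weighted least-squares system directly rather than coordinate-wise, uses a plain graph-Laplacian quadratic penalty as the network constraint, and all names (`network_logistic_irls`, `lam`) are our own.

```python
import numpy as np

def network_logistic_irls(X, y, L, lam=1.0, n_iter=50, tol=1e-8):
    """Logistic regression with a network (graph Laplacian) penalty via IRLS.

    Minimizes the logistic loss plus lam * beta' L beta, where L encodes
    SNP-SNP linkage disequilibrium as a graph Laplacian (a simplification
    of the network-constrained regularization in the abstract).
    """
    n, p = X.shape
    beta = np.zeros(p)
    for _ in range(n_iter):
        eta = X @ beta
        mu = 1.0 / (1.0 + np.exp(-eta))            # fitted probabilities
        w = np.maximum(mu * (1.0 - mu), 1e-8)      # IRLS weights
        z = eta + (y - mu) / w                     # working response
        # penalized weighted normal equations (tiny ridge for stability)
        A = X.T @ (w[:, None] * X) + 2.0 * lam * L + 1e-8 * np.eye(p)
        beta_new = np.linalg.solve(A, X.T @ (w * z))
        if np.max(np.abs(beta_new - beta)) < tol:
            beta = beta_new
            break
        beta = beta_new
    return beta
```

SNPs connected in L are encouraged to take similar coefficients; the published method additionally enforces sparsity, which this sketch omits.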
International Nuclear Information System (INIS)
Zhang, Liangwei; Lin, Jing; Karim, Ramin
2015-01-01
The accuracy of traditional anomaly detection techniques implemented on full-dimensional spaces degrades significantly as dimensionality increases, thereby hampering many real-world applications. This work proposes an approach to selecting a meaningful feature subspace and conducting anomaly detection in the corresponding subspace projection. The aim is to maintain detection accuracy in high-dimensional circumstances. The suggested approach assesses, for one specific anomaly candidate, the angle between pairs of lines: the first line connects the relevant data point and the center of its adjacent points; the other line is one of the axis-parallel lines. Those dimensions whose axis-parallel line has a relatively small angle with the first line are then chosen to constitute the axis-parallel subspace for the candidate. Next, a normalized Mahalanobis distance is introduced to measure the local outlier-ness of an object in the subspace projection. To comprehensively compare the proposed algorithm with several existing anomaly detection techniques, we constructed artificial datasets with various high-dimensional settings and found the algorithm displayed superior accuracy. A further experiment on an industrial dataset demonstrated the applicability of the proposed algorithm in fault detection tasks and highlighted another of its merits, namely, providing a preliminary interpretation of abnormality through feature ordering in relevant subspaces. - Highlights: • An anomaly detection approach for high-dimensional reliability data is proposed. • The approach selects relevant subspaces by assessing vectorial angles. • The novel ABSAD approach displays superior accuracy over other alternatives. • Numerical illustration confirms its efficacy in fault detection applications.
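The angle-based subspace selection followed by a normalized Mahalanobis distance can be illustrated as follows. This is a minimal reading of the ABSAD idea; the neighborhood size `k`, the angle threshold, the fallback to one dimension, and the normalization by subspace dimensionality are choices made for illustration, not taken from the paper.

```python
import numpy as np

def absad_score(X, idx, k=10, angle_thresh=np.pi / 3):
    """Score one anomaly candidate X[idx] in its axis-parallel subspace.

    Connect the point to the center of its k nearest neighbors, keep the
    dimensions whose axis makes a small angle with that line, and measure
    a Mahalanobis distance in the retained subspace, normalized by its
    dimensionality (lower score = less anomalous).
    """
    x = X[idx]
    others = np.delete(X, idx, axis=0)
    nn = others[np.argsort(np.linalg.norm(others - x, axis=1))[:k]]
    v = x - nn.mean(axis=0)                        # reference line direction
    norm_v = np.linalg.norm(v)
    if norm_v == 0.0:
        return 0.0, np.array([], dtype=int)
    # angle between the reference line and each axis-parallel unit vector
    angles = np.arccos(np.clip(np.abs(v) / norm_v, 0.0, 1.0))
    dims = np.where(angles < angle_thresh)[0]
    if dims.size == 0:                             # keep at least one dimension
        dims = np.array([int(np.argmin(angles))])
    sub = nn[:, dims]
    diff = x[dims] - sub.mean(axis=0)
    cov = np.atleast_2d(np.cov(sub, rowvar=False)) + 1e-6 * np.eye(dims.size)
    return float(np.sqrt(diff @ np.linalg.solve(cov, diff))) / dims.size, dims
```

The returned dimension indices give the "feature ordering in relevant subspaces" interpretation mentioned in the abstract: they name the axes along which the candidate deviates.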
N-mix for fish: estimating riverine salmonid habitat selection via N-mixture models
Som, Nicholas A.; Perry, Russell W.; Jones, Edward C.; De Juilio, Kyle; Petros, Paul; Pinnix, William D.; Rupert, Derek L.
2018-01-01
Models that formulate mathematical linkages between fish use and habitat characteristics are applied for many purposes. For riverine fish, these linkages are often cast as resource selection functions with variables including depth and velocity of water and distance to nearest cover. Ecologists are now recognizing the role that detection plays in observing organisms, and failure to account for imperfect detection can lead to spurious inference. Herein, we present a flexible N-mixture model to associate habitat characteristics with the abundance of riverine salmonids that simultaneously estimates detection probability. Our formulation has the added benefits of accounting for demographics variation and can generate probabilistic statements regarding intensity of habitat use. In addition to the conceptual benefits, model application to data from the Trinity River, California, yields interesting results. Detection was estimated to vary among surveyors, but there was little spatial or temporal variation. Additionally, a weaker effect of water depth on resource selection is estimated than that reported by previous studies not accounting for detection probability. N-mixture models show great promise for applications to riverine resource selection.
Directory of Open Access Journals (Sweden)
Lulu Lu
2017-01-01
The electrical activities of neurons depend on the complex electrophysiological conditions in the neuronal system. The three-variable Hindmarsh-Rose (HR) neuron model is improved to describe the dynamical behaviors of neuronal activities with electromagnetic induction taken into account, and the mode transition of electrical activities in the neuron is detected when external electromagnetic radiation is imposed on the neuron. In this paper, different types of electrical stimulus combined with a high-low frequency current are imposed on the new HR neuron model, and mixed stimulus-induced mode selection in neural activity is discussed in detail. It is found that mode selection of electrical activities stimulated by the high-low frequency current, which also changes the excitability of the neuron, can be triggered by adding Gaussian white noise. Meanwhile, the mode selection of the neuron's electrical activity depends strongly on the amplitude B of the high-frequency current under the same noise intensity, and the high-frequency response is selected preferentially by applying appropriate parameters and noise intensity. Our results provide insights into the transmission of complex signals in the nervous system, which is valuable in prospective engineering applications such as information encoding.
Mitigating the Insider Threat Using High-Dimensional Search and Modeling
National Research Council Canada - National Science Library
Van Den Berg, Eric; Uphadyaya, Shambhu; Ngo, Phi H; Muthukrishnan, Muthu; Palan, Rajago
2006-01-01
In this project a system was built aimed at mitigating insider attacks centered around a high-dimensional search engine for correlating the large number of monitoring streams necessary for detecting insider attacks...
Approximating high-dimensional dynamics by barycentric coordinates with linear programming
Energy Technology Data Exchange (ETDEWEB)
Hirata, Yoshito, E-mail: yoshito@sat.t.u-tokyo.ac.jp; Aihara, Kazuyuki; Suzuki, Hideyuki [Institute of Industrial Science, The University of Tokyo, 4-6-1 Komaba, Meguro-ku, Tokyo 153-8505 (Japan); Department of Mathematical Informatics, The University of Tokyo, Bunkyo-ku, Tokyo 113-8656 (Japan); CREST, JST, 4-1-8 Honcho, Kawaguchi, Saitama 332-0012 (Japan); Shiro, Masanori [Department of Mathematical Informatics, The University of Tokyo, Bunkyo-ku, Tokyo 113-8656 (Japan); Mathematical Neuroinformatics Group, Advanced Industrial Science and Technology, Tsukuba, Ibaraki 305-8568 (Japan); Takahashi, Nozomu; Mas, Paloma [Center for Research in Agricultural Genomics (CRAG), Consorci CSIC-IRTA-UAB-UB, Barcelona 08193 (Spain)
2015-01-15
The increasing development of novel methods and techniques facilitates the measurement of high-dimensional time series but challenges our ability for accurate modeling and prediction. The use of a general mathematical model requires the inclusion of many parameters, which are difficult to fit to the relatively short high-dimensional time series observed. Here, we propose a novel method to accurately model a high-dimensional time series. Our method extends barycentric coordinates to high-dimensional phase space by employing linear programming, while allowing for approximation errors explicitly. The extension helps to produce free-running time-series predictions that preserve typical topological, dynamical, and/or geometric characteristics of the underlying attractors more accurately than the widely used radial basis function model. The method can be broadly applied, from helping to improve weather forecasting, to creating electronic instruments that sound more natural, to comprehensively understanding complex biological data.
Approximating high-dimensional dynamics by barycentric coordinates with linear programming
International Nuclear Information System (INIS)
Hirata, Yoshito; Aihara, Kazuyuki; Suzuki, Hideyuki; Shiro, Masanori; Takahashi, Nozomu; Mas, Paloma
2015-01-01
The increasing development of novel methods and techniques facilitates the measurement of high-dimensional time series but challenges our ability for accurate modeling and prediction. The use of a general mathematical model requires the inclusion of many parameters, which are difficult to fit to the relatively short high-dimensional time series observed. Here, we propose a novel method to accurately model a high-dimensional time series. Our method extends barycentric coordinates to high-dimensional phase space by employing linear programming, while allowing for approximation errors explicitly. The extension helps to produce free-running time-series predictions that preserve typical topological, dynamical, and/or geometric characteristics of the underlying attractors more accurately than the widely used radial basis function model. The method can be broadly applied, from helping to improve weather forecasting, to creating electronic instruments that sound more natural, to comprehensively understanding complex biological data.
Approximating high-dimensional dynamics by barycentric coordinates with linear programming.
Hirata, Yoshito; Shiro, Masanori; Takahashi, Nozomu; Aihara, Kazuyuki; Suzuki, Hideyuki; Mas, Paloma
2015-01-01
The increasing development of novel methods and techniques facilitates the measurement of high-dimensional time series but challenges our ability for accurate modeling and prediction. The use of a general mathematical model requires the inclusion of many parameters, which are difficult to fit to the relatively short high-dimensional time series observed. Here, we propose a novel method to accurately model a high-dimensional time series. Our method extends barycentric coordinates to high-dimensional phase space by employing linear programming, while allowing for approximation errors explicitly. The extension helps to produce free-running time-series predictions that preserve typical topological, dynamical, and/or geometric characteristics of the underlying attractors more accurately than the widely used radial basis function model. The method can be broadly applied, from helping to improve weather forecasting, to creating electronic instruments that sound more natural, to comprehensively understanding complex biological data.
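The LP step behind the barycentric-coordinate prediction can be sketched with an off-the-shelf solver. The snippet assumes a library of (state, next-state) pairs and minimizes the L1 approximation error of the barycentric reconstruction via slack variables, reflecting the abstract's "allowing the approximation errors explicitly"; details such as how the library is assembled from the observed time series are omitted, and all names are ours.

```python
import numpy as np
from scipy.optimize import linprog

def barycentric_predict(library, x):
    """Predict the successor of state x from a library of (state, next) pairs.

    Finds non-negative weights w summing to one such that sum_i w_i * P_i
    approximates x with minimal L1 error, then predicts sum_i w_i * Q_i,
    where Q_i is the observed successor of library state P_i.
    """
    P = np.array([s for s, _ in library])        # (m, d) past states
    Q = np.array([t for _, t in library])        # (m, d) their successors
    m, d = P.shape
    # LP variables: [w (m), e_plus (d), e_minus (d)]; residual = e_plus - e_minus
    c = np.concatenate([np.zeros(m), np.ones(2 * d)])   # minimize total |error|
    A_eq = np.zeros((d + 1, m + 2 * d))
    A_eq[:d, :m] = P.T
    A_eq[:d, m:m + d] = np.eye(d)
    A_eq[:d, m + d:] = -np.eye(d)
    A_eq[d, :m] = 1.0                                   # weights sum to one
    b_eq = np.concatenate([x, [1.0]])
    res = linprog(c, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * (m + 2 * d))
    w = res.x[:m]
    return w @ Q, w
```

Because the weights are non-negative and sum to one, the prediction stays inside the convex hull of observed successors, which is one way the method can preserve the geometry of the underlying attractor.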
Efficient and accurate nearest neighbor and closest pair search in high-dimensional space
Tao, Yufei; Yi, Ke; Sheng, Cheng; Kalnis, Panos
2010-01-01
Nearest Neighbor (NN) search in high-dimensional space is an important problem in many applications. From the database perspective, a good solution needs to have two properties: (i) it can be easily incorporated in a relational database, and (ii
Teixeira, Carlos A; Russo, Mário; Matos, Cristina; Bentes, Isabel
2014-12-01
This article describes an accurate methodology for an operational, economic, and environmental assessment of municipal solid waste collection. The proposed methodological tool uses key performance indicators to evaluate independently the operational and economic efficiency and performance of municipal solid waste collection practices. These key performance indicators are then used in life cycle inventories and life cycle impact assessment. Finally, the life cycle assessment environmental profiles provide the environmental assessment. We also report a successful application of this tool through a case study in the Portuguese city of Porto. Preliminary results demonstrate the applicability of the methodological tool to real cases. Some of the findings highlight significant differences between average mixed and selective collection in effective distance (2.14 km t(-1) vs. 16.12 km t(-1)), fuel consumption (3.96 L t(-1) vs. 15.37 L t(-1)), crew productivity (0.98 t h(-1) worker(-1) vs. 0.23 t h(-1) worker(-1)), cost (45.90 € t(-1) vs. 241.20 € t(-1)), and global warming impact (19.95 kg CO2eq t(-1) vs. 57.47 kg CO2eq t(-1)). Preliminary results consistently indicate: (a) higher global performance of mixed collection as compared with selective collection; (b) dependency of collection performance, even in urban areas, on the waste generation rate and density; (c) the decline of selective collection performance with decreasing source-separated material density and recycling collection rate; and (d) that the main threats to collection route efficiency are the extensive collection distances, high fuel consumption vehicles, and reduced crew productivity. © The Author(s) 2014.
Bayesian Inference of High-Dimensional Dynamical Ocean Models
Lin, J.; Lermusiaux, P. F. J.; Lolla, S. V. T.; Gupta, A.; Haley, P. J., Jr.
2015-12-01
This presentation addresses a holistic set of challenges in high-dimensional ocean Bayesian nonlinear estimation: i) predict the probability distribution functions (pdfs) of large nonlinear dynamical systems using stochastic partial differential equations (PDEs); ii) assimilate data using Bayes' law with these pdfs; iii) predict the future data that optimally reduce uncertainties; and iv) rank the known and learn the new model formulations themselves. Overall, we allow the joint inference of the state, equations, geometry, boundary conditions and initial conditions of dynamical models. Examples are provided for time-dependent fluid and ocean flows, including cavity, double-gyre and Strait flows with jets and eddies. The Bayesian model inference, based on limited observations, is illustrated first by the estimation of obstacle shapes and positions in fluid flows. Next, the Bayesian inference of biogeochemical reaction equations and of their states and parameters is presented, illustrating how PDE-based machine learning can rigorously guide the selection and discovery of complex ecosystem models. Finally, the inference of multiscale bottom gravity current dynamics is illustrated, motivated in part by classic overflows and dense water formation sites and their relevance to climate monitoring and dynamics. This is joint work with our MSEAS group at MIT.
Recent Immigration to Canada and the United States: A Mixed Tale of Relative Selection*
Kaushal, Neeraj; Lu, Yao
2014-01-01
Using large-scale census data and adjusting for sending-country fixed effect to account for changing composition of immigrants, we study relative immigrant selection to Canada and the U.S. during 1990–2006, a period characterized by diverging immigration policies in the two countries. Results show a gradual change in selection patterns in educational attainment and host country language proficiency in favor of Canada as its post-1990 immigration policy allocated more points to the human capital of new entrants. Specifically, in 1990, new immigrants in Canada were less likely to have a B.A. degree than those in the U.S.; they were also less likely to have a high-school or lower education. By 2006, Canada surpassed the U.S. in drawing highly-educated immigrants, while continuing to attract fewer low-educated immigrants. Canada also improved its edge over the U.S. in terms of host-country language proficiency of new immigrants. Entry-level earnings, however, do not reflect the same trend: recent immigrants to Canada have experienced a wage disadvantage compared to recent immigrants to the U.S., as well as Canadian natives. One plausible explanation is that, while the Canadian points system has successfully attracted more educated immigrants, it may not be effective in capturing productivity-related traits that are not easily measurable. PMID:27642205
Design guidelines for high dimensional stability of CFRP optical bench
Desnoyers, Nichola; Boucher, Marc-André; Goyette, Philippe
2013-09-01
In carbon fiber reinforced plastic (CFRP) optomechanical structures, particularly when embodying reflective optics, angular stability is critical. Angular stability or warping stability is greatly affected by moisture absorption and thermal gradients. Unfortunately, it is impossible to achieve the perfect laminate and there will always be manufacturing errors in trying to reach a quasi-iso laminate. Some errors, such as those related to the angular position of each ply and the facesheet parallelism (for a bench) can be easily monitored in order to control the stability more adequately. This paper presents warping experiments and finite-element analyses (FEA) obtained from typical optomechanical sandwich structures. Experiments were done using a thermal vacuum chamber to cycle the structures from -40°C to 50°C. Moisture desorption tests were also performed for a number of specific configurations. The selected composite material for the study is the unidirectional prepreg from Tencate M55J/TC410. M55J is a high modulus fiber and TC410 is a new-generation cyanate ester designed for dimensionally stable optical benches. In the studied cases, the main contributors were found to be: the ply angular errors, laminate in-plane parallelism (between 0° ply direction of both facesheets), fiber volume fraction tolerance and joints. Final results show that some tested configurations demonstrated good warping stability. FEA and measurements are in good agreement despite the fact that some defects or fabrication errors remain unpredictable. Design guidelines to maximize the warping stability by taking into account the main dimensional stability contributors, the bench geometry and the optical mount interface are then proposed.
Selecting the Optimum Combination of Suppliers Using a Mixed Model of MADM and Fault Tree Analysis
Directory of Open Access Journals (Sweden)
Meysam Azimian
2017-03-01
In this paper, an integrated approach of MADM and fault tree analysis (FTA) is provided for determining the most reliable combination of suppliers for a strategic product at IUT University. First, the risks of suppliers are estimated by defining indices for evaluating them, determining their relative status indices, and using the satisficing and SAW methods. Then, the intrinsic risks of the equipment used in the product are assessed and the final integrated risk for the equipment is determined. Finally, across all the different scenarios, the best combination of equipment suppliers is selected by defining the relevant top events and performing fault tree analysis. The contribution of this paper is in proposing an integrated method of MADM and FTA to determine the most reliable suppliers in order to minimize the final risk of providing a product.
Energy Technology Data Exchange (ETDEWEB)
Deng, Y.; Hunnius, M.; Storck, S.; Maier, W.F. [Max-Planck-Institut fuer Kohlenforschung, Muelheim an der Ruhr (Germany)
1998-12-31
The catalytic oxygen transfer properties of vanadium-containing zeolites and vanadium-based sol-gel catalysts with hydrogen peroxide are well known. The severe problem of vanadium leaching caused by the presence of the by-product water has been addressed. To avoid any interference from homogeneously catalyzed reactions, our study focuses on selective oxidations in a moisture-free medium with tert.-butylhydroperoxide. We have investigated the catalytic properties of amorphous microporous materials based on SiO2, TiO2, ZrO2 and Al2O3 as matrix material and studied the effects of surface polarity on the oxidation of 1-octene and cyclohexane. (orig.)
Mixed signals: The effect of conflicting reward- and goal-driven biases on selective attention.
Preciado, Daniel; Munneke, Jaap; Theeuwes, Jan
2017-07-01
Attentional selection depends on the interaction between exogenous (stimulus-driven), endogenous (goal-driven), and selection history (experience-driven) factors. While endogenous and exogenous biases have been widely investigated, less is known about their interplay with value-driven attention. The present study investigated the interaction between reward-history and goal-driven biases on perceptual sensitivity (d') and response time (RT) in a modified cueing paradigm presenting two coloured cues, followed by sinusoidal gratings. Participants responded to the orientation of one of these gratings. In Experiment 1, one cue signalled reward availability but was otherwise task-irrelevant. In Experiment 2, the same cue signalled reward and indicated that the target's most likely location was at the opposite side of the display. This design introduced a conflict between reward-driven biases attracting attention and goal-driven biases directing it away. Attentional effects were examined by comparing trials in which cue and target appeared at the same versus opposite locations. Two interstimulus interval (ISI) levels were used to probe the time course of attentional effects. Experiment 1 showed performance benefits at the location of the reward-signalling cue and costs at the opposite location for both ISIs, indicating value-driven capture. Experiment 2 showed performance benefits only at the long ISI, when the target appeared opposite the reward-associated cue; at the short ISI, only performance costs were observed. These results reveal the time course of these biases, indicating that reward-driven effects influence attention early but can be overcome later by goal-driven control. This suggests that reward-driven biases are integrated as attentional priorities, just as exogenous and endogenous factors are.
Directory of Open Access Journals (Sweden)
Salces Judit
2011-08-01
Background: Reference genes with stable expression are required to normalize expression differences of target genes in qPCR experiments. Several procedures and companion software packages have been proposed to find the most stable genes. Model-based procedures are attractive because they provide a solid statistical framework. NormFinder, a widely used software package, uses a model-based method. The pairwise comparison procedure implemented in geNorm is simpler but one of the most extensively used. In the present work, a statistical approach based on maximum likelihood estimation under mixed models was tested and compared with the NormFinder and geNorm packages. Sixteen candidate genes were tested in whole blood samples from control and heat-stressed sheep. Results: A model including gene and treatment as fixed effects, and sample (animal), gene by treatment, gene by sample and treatment by sample interactions as random effects, with heteroskedastic residual variance across gene by treatment levels, was selected using goodness of fit and predictive ability criteria from among a variety of models. The mean squared error obtained under the selected model was used as the indicator of gene expression stability. The genes ranked at the top and bottom by the three approaches were similar; however, notable differences were found in the best pair of genes selected by each method and in the remainder of the rankings. Differences among the normalized expression values of targets under each statistical approach were also found. Conclusions: The optimal statistical properties of maximum likelihood estimation, combined with the flexibility of mixed models, allow for more accurate estimation of the expression stability of genes in many different situations. Accurate selection of reference genes has a direct impact on the normalized expression values of a given target gene. This may be critical when the aim of the study is to compare expression rate differences among samples under different environmental
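A stripped-down, fixed-effects analogue of the stability criterion above can be written in a few lines: per gene, remove the overall mean and the treatment effect and rank genes by residual mean squared error. This deliberately ignores the random animal and interaction effects and the heteroskedastic residuals of the selected mixed model, so it is only a sketch of the idea; the function and variable names are ours.

```python
import numpy as np

def stability_rank(ct, treatment):
    """Rank candidate reference genes by residual mean squared error.

    ct: (n_samples, n_genes) expression values (e.g. qPCR Cq);
    treatment: (n_samples,) group labels. Per gene, the overall mean and
    the treatment means are removed; the residual MSE serves as the
    instability score (lower = more stable).
    """
    ct = np.asarray(ct, dtype=float)
    treatment = np.asarray(treatment)
    scores = []
    for g in range(ct.shape[1]):
        resid = ct[:, g] - ct[:, g].mean()
        for t in np.unique(treatment):
            mask = treatment == t
            resid[mask] -= resid[mask].mean()   # remove treatment effect
        scores.append(float(np.mean(resid ** 2)))
    scores = np.array(scores)
    return np.argsort(scores), scores           # most stable gene first
```

A full mixed-model treatment along the abstract's lines would instead fit random effects (e.g. with statsmodels' `MixedLM`) and score genes on the model-based residual variance.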
Albuquerque, M G E; Concas, S; Bengtsson, S; Reis, M A M
2010-09-01
Polyhydroxyalkanoates (PHAs) are promising biodegradable polymers. The use of mixed microbial cultures (MMC) and low-cost feedstocks has a positive impact on the cost-effectiveness of the process. Culture selection has typically been carried out in sequencing batch reactors (SBRs). In this study, a 2-stage CSTR system (under feast and famine conditions) was used to effectively select for PHA-storing organisms using fermented molasses as feedstock. The effect of influent substrate concentration (60-120 Cmmol VFA/L) and of the HRT ratio between the reactors (0.2-0.5 h/h) on the system's selection efficiency was assessed. It was shown that the residual substrate concentration in the feast reactor impacted the selective pressure for PHA storage (due to substrate-dependent kinetic limitation). Moreover, a residual substrate concentration carried over from the feast to the famine reactor did not jeopardize the physiological adaptation required for enhanced PHA storage. The culture reached a maximum PHA content of 61%. This success opens new perspectives on the use of wastewater treatment infrastructure for PHA production, thus valorizing either excess sludge or wastewaters. Copyright 2010 Elsevier Ltd. All rights reserved.
Energy Technology Data Exchange (ETDEWEB)
Saion, E.B.; Watt, D.E. (Saint Andrews Univ. (UK). Dept. of Physics); East, B.W. (Scottish Universities Research and Reactor Centre, Glasgow (UK)); Colautti, P. (Istituto Nazionale di Fisica Nucleare, Padua (Italy))
1990-01-01
A new low pressure tissue-equivalent proportional counter (TEPC) in a coaxial double cylindrical form has been developed to measure separately the microdose spectrum from any desired energy band of neutrons in the presence of mixed fields of faster neutrons, by selecting the thickness of the common TE dividing wall to be equivalent to the corresponding maximum proton ranges and by appropriate use of coincidence/anti-coincidence pulse arrangements. This thickness ensures charged particle equilibrium for the relevant neutron energy. Event spectra due to recoils generated by faster neutrons which interact with both the counters are removed completely by anti-coincidence techniques, thereby optimising the sensitivity of the inner microdosemeter to the event spectra of interest. The ability of this counter to discriminate in favour of events due to neutrons of energy <850 keV was achieved in microdosimetric measurements from mixed fields of a nuclear reactor. Mean values of lineal energy and quality factor for neutrons of energy <850 keV from a nuclear reactor were determined from the anti-coincidence spectrum. Good discrimination against γ ray induced events is also achieved for the spectrum recorded in the anti-coincidence mode. This is an advantageous feature for other applications and requires further investigation. (author).
International Nuclear Information System (INIS)
Saion, E.B.; Watt, D.E.; Colautti, P.
1990-01-01
A new low pressure tissue-equivalent proportional counter (TEPC) in a coaxial double cylindrical form has been developed to measure separately the microdose spectrum from any desired energy band of neutrons in the presence of mixed fields of faster neutrons, by selecting the thickness of the common TE dividing wall to be equivalent to the corresponding maximum proton ranges and by appropriate use of coincidence/anti-coincidence pulse arrangements. This thickness ensures charged particle equilibrium for the relevant neutron energy. Event spectra due to recoils generated by faster neutrons which interact with both the counters are removed completely by anti-coincidence techniques, thereby optimising the sensitivity of the inner microdosemeter to the event spectra of interest. The ability of this counter to discriminate in favour of events due to neutrons of energy <850 keV was achieved in microdosimetric measurements from mixed fields of a nuclear reactor. Mean values of lineal energy and quality factor for neutrons of energy <850 keV from a nuclear reactor were determined from the anti-coincidence spectrum. Good discrimination against γ ray induced events is also achieved for the spectrum recorded in the anti-coincidence mode. This is an advantageous feature for other applications and requires further investigation. (author)
International Nuclear Information System (INIS)
Chou, Jui-Sheng; Ongkowijoyo, Citra Satria
2015-01-01
Corporate competitiveness is heavily influenced by the information acquired, processed, utilized and transferred by professional staff involved in the supply chain. This paper develops a decision aid for selecting the on-site ready-mix concrete (RMC) unloading type in decision-making situations involving multiple stakeholders and evaluation criteria. The uncertainty of criteria weights set by expert judgment can be transformed in random ways based on the probabilistic virtual-scale method within a prioritization matrix. The ranking is performed by grey relational grade systems considering stochastic criteria weights based on individual preference. Application of the decision-aiding model to an actual RMC case confirms that the method provides a robust and effective tool for facilitating decision making under uncertainty. - Highlights: • This study models a decision-aiding method to assess ready-mix concrete unloading type. • Applying Monte Carlo simulation to the virtual-scale method achieves a reliable process. • The individual preference ranking method enhances the quality of global decision making. • Robust stochastic superiority and inferiority ranking obtains reasonable results.
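The grey relational grade ranking at the core of the decision aid can be sketched as follows, assuming benefit-type criteria and a fixed weight vector; the paper's stochastic criteria weights would be handled by repeating the call over Monte Carlo weight draws. The function name and the conventional distinguishing coefficient `rho = 0.5` are our choices.

```python
import numpy as np

def grey_relational_grades(decision, weights, rho=0.5):
    """Grey relational grades of alternatives against the ideal reference.

    decision: (n_alternatives, n_criteria) matrix with benefit criteria
    (larger is better); weights: criteria weights summing to one; rho is
    the distinguishing coefficient. Higher grade = closer to the ideal.
    """
    D = np.asarray(decision, dtype=float)
    D = (D - D.min(axis=0)) / np.ptp(D, axis=0)     # normalize to [0, 1]
    delta = np.abs(D.max(axis=0) - D)               # deviation from ideal
    coef = (delta.min() + rho * delta.max()) / (delta + rho * delta.max())
    return coef @ np.asarray(weights, dtype=float)  # weighted relational grade
```

For stochastic weights, one could draw weight vectors from a Dirichlet distribution, compute grades for each draw, and rank alternatives by their average grade or by superiority/inferiority counts across draws.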
Viana, Fernando; Gil, José V; Genovés, Salvador; Vallés, Salvador; Manzanares, Paloma
2008-09-01
Thirty-eight yeast strains belonging to the genera Candida, Hanseniaspora, Pichia, Torulaspora and Zygosaccharomyces were screened for ester formation on synthetic microbiological medium. The genera Hanseniaspora and Pichia stood out as the best acetate ester producers. Based on the ester profile, Hanseniaspora guilliermondii 11027 and 11102, Hanseniaspora osmophila 1471 and Pichia membranifaciens 10113 and 10550 were selected for further characterization of enological traits. When grown on must, H. osmophila 1471, which displayed a glucophilic nature and was able to consume more than 90% of the initial must sugars, produced levels of acetic acid, medium-chain fatty acids and ethyl acetate within the ranges described for wine. On the other hand, it was found to be a strong producer of 2-phenylethyl acetate. Our data suggest that H. osmophila 1471 is a good candidate for mixed starters, although the possible interactions with S. cerevisiae deserve further research.
DEFF Research Database (Denmark)
Andersen, Bo Sølgaard; Ulrich, Clara; Eigaard, Ole Ritzau
2012-01-01
The study presents a short-term effort allocation modelling approach based on a discrete choice random utility model combined with a survey questionnaire to examine the selection of métiers (a combination of fishing area and target species) in the Danish North Sea gillnet fishery. Key decision variables were identified from the survey questionnaire, and relevant proxies for the decision function were identified based on available landings and effort information. Additional variables from the survey questionnaire were further used to validate and verify the outcome of the choice model. Commercial fishers in a mixed fishery make use of a number of decision variables used previously in the literature, but also a number of decision parameters rarely explicitly accounted for, such as price, weather, and management regulation. The seasonal availability of individual target species and within...
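The discrete choice random utility model underlying métier selection can be sketched as a conditional (multinomial) logit: each trip's systematic utility for a métier is a linear combination of decision variables such as expected price, weather and regulation proxies, and choice probabilities follow by softmax. This is a generic sketch of the model family, not the study's fitted model; all names are ours.

```python
import numpy as np

def metier_choice_probs(utilities):
    """Multinomial logit choice probabilities from systematic utilities.

    utilities: (n_trips, n_metiers) array of systematic utilities V.
    P(metier j | trip i) = exp(V_ij) / sum_k exp(V_ik).
    """
    u = np.asarray(utilities, dtype=float)
    u = u - u.max(axis=1, keepdims=True)      # guard against overflow
    e = np.exp(u)
    return e / e.sum(axis=1, keepdims=True)
```

In practice the utility coefficients would be estimated by maximum likelihood from observed trip-level métier choices, then validated against the survey responses as the abstract describes.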
Walker, Josephine G.; Ofithile, Mphoeng; Tavolaro, F. Marina; van Wyk, Jan A.; Evans, Kate; Morgan, Eric R.
2015-01-01
Due to the threat of anthelmintic resistance, livestock farmers worldwide are encouraged to selectively apply treatments against gastrointestinal nematodes (GINs). Targeted selective treatment (TST) of individual animals would be especially useful for smallholder farmers in low-income economies, where cost-effective and sustainable intervention strategies will improve livestock productivity and food security. Supporting research has focused mainly on refining technical indicators for treatment, and much less on factors influencing uptake and effectiveness. We used a mixed method approach, whereby qualitative and quantitative approaches are combined, to develop, implement and validate a TST system for GINs in small ruminants, most commonly goats, among smallholder farmers in the Makgadikgadi Pans region of Botswana, and to seek better understanding of system performance within a cultural context. After the first six months of the study, 42 out of 47 enrolled farmers were followed up; 52% had monitored their animals using the taught inspection criteria and 26% applied TST during this phase. Uptake level showed little correlation with farmer characteristics, such as literacy and size of farm. Herd health significantly improved in those herds where anthelmintic treatment was applied: anaemia, as assessed using the five-point FAMACHA© scale, was 0.44–0.69 points better (95% confidence interval) and body condition score was 0.18–0.36 points better (95% C.I., five-point scale) in treated compared with untreated herds. Targeting only the individuals in greatest need led to health improvements similar to treating the entire herd, with dose savings ranging from 36% to 97%. This study demonstrates that TST against nematodes can be implemented effectively by resource-poor farmers using a community-led approach. The use of mixed methods provides a promising system to integrate technical and social aspects of TST programmes for maximum uptake and effect.
Engineering two-photon high-dimensional states through quantum interference
Zhang, Yingwen; Roux, Filippus S.; Konrad, Thomas; Agnew, Megan; Leach, Jonathan; Forbes, Andrew
2016-01-01
Many protocols in quantum science, for example, linear optical quantum computing, require access to large-scale entangled quantum states. Such systems can be realized through many-particle qubits, but this approach often suffers from scalability problems. An alternative strategy is to consider a lesser number of particles that exist in high-dimensional states. The spatial modes of light are one such candidate that provides access to high-dimensional quantum states, and thus they increase the storage and processing potential of quantum information systems. We demonstrate the controlled engineering of two-photon high-dimensional states entangled in their orbital angular momentum through Hong-Ou-Mandel interference. We prepare a large range of high-dimensional entangled states and implement precise quantum state filtering. We characterize the full quantum state before and after the filter, and are thus able to determine that only the antisymmetric component of the initial state remains. This work paves the way for high-dimensional processing and communication of multiphoton quantum states, for example, in teleportation beyond qubits. PMID:26933685
A Comparison of Methods for Estimating the Determinant of High-Dimensional Covariance Matrix
Hu, Zongliang; Dong, Kai; Dai, Wenlin; Tong, Tiejun
2017-09-27
The determinant of the covariance matrix for high-dimensional data plays an important role in statistical inference and decision. It has many real applications including statistical tests and information theory. Due to the statistical and computational challenges with high dimensionality, little work has been proposed in the literature for estimating the determinant of high-dimensional covariance matrix. In this paper, we estimate the determinant of the covariance matrix using some recent proposals for estimating high-dimensional covariance matrix. Specifically, we consider a total of eight covariance matrix estimation methods for comparison. Through extensive simulation studies, we explore and summarize some interesting comparison results among all compared methods. We also provide practical guidelines based on the sample size, the dimension, and the correlation of the data set for estimating the determinant of high-dimensional covariance matrix. Finally, from a perspective of the loss function, the comparison study in this paper may also serve as a proxy to assess the performance of the covariance matrix estimation.
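The eight estimators compared in the paper are not reproduced here; the numpy sketch below merely illustrates the underlying problem and one generic repair. It assumes a true identity covariance (so the true log-determinant is 0) and uses a fixed, untuned shrinkage weight `alpha`; both choices are illustrative, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 60, 50                      # sample size close to dimension
X = rng.standard_normal((n, p))    # true covariance is the identity, so log|Sigma| = 0

S = np.cov(X, rowvar=False)        # sample covariance: badly biased log-determinant here

# Linear shrinkage toward a scaled identity (a stand-in for the
# regularized estimators compared in the paper; alpha is fixed, not tuned).
alpha = 0.5
target = np.trace(S) / p * np.eye(p)
S_shrunk = alpha * target + (1 - alpha) * S

sign, ld_sample = np.linalg.slogdet(S)
sign2, ld_shrunk = np.linalg.slogdet(S_shrunk)
print(ld_sample, ld_shrunk)        # shrinkage pulls log|S| back toward the truth (0)
```

When p approaches n the smallest sample eigenvalues collapse toward zero, so the sample log-determinant diverges downward; any regularized estimator that lifts those eigenvalues repairs most of the bias.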
A Hybrid Semi-Supervised Anomaly Detection Model for High-Dimensional Data
Directory of Open Access Journals (Sweden)
Hongchao Song
2017-01-01
Anomaly detection, which aims to identify observations that deviate from a nominal sample, is a challenging task for high-dimensional data. Traditional distance-based anomaly detection methods compute neighborhood distances between observations and suffer from the curse of dimensionality in high-dimensional space; for example, the distances between any pair of samples become similar, so every sample may look like an outlier. In this paper, we propose a hybrid semi-supervised anomaly detection model for high-dimensional data that consists of two parts: a deep autoencoder (DAE) and an ensemble of k-nearest neighbor graph- (K-NNG-) based anomaly detectors. Benefiting from its capacity for nonlinear mapping, the DAE is first trained to learn the intrinsic features of a high-dimensional dataset so as to represent the data in a more compact subspace. Several nonparametric KNN-based anomaly detectors are then built from different subsets that are randomly sampled from the whole dataset. The final prediction is made by combining all the anomaly detectors. The performance of the proposed method is evaluated on several real-life datasets, and the results confirm that the proposed hybrid model improves detection accuracy and reduces computational complexity.
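A minimal sketch of the two-stage idea, with two loud assumptions: PCA stands in for the paper's deep autoencoder, and the data, subsample sizes, and `k` are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Nominal training data: correlated 20-dimensional Gaussian.
d = 20
L = rng.standard_normal((d, d)) * 0.2 + np.eye(d)
train = rng.standard_normal((300, d)) @ L.T

# Stage 1 stand-in: PCA instead of a deep autoencoder (assumption --
# the paper trains a DAE; PCA is the cheapest compressive analogue).
mu = train.mean(axis=0)
_, _, Vt = np.linalg.svd(train - mu, full_matrices=False)
W = Vt[:3].T                                   # top-3 principal directions
def encode(X):
    return (X - mu) @ W

# Stage 2: an ensemble of K-NN detectors built on random subsamples.
Z_train = encode(train)
def knn_score(Z_ref, z, k=5):
    dists = np.sort(np.linalg.norm(Z_ref - z, axis=1))
    return dists[k]                            # distance to the k-th neighbour

def ensemble_score(x, n_detectors=5, subsample=100):
    z = encode(x)
    scores = []
    for _ in range(n_detectors):
        idx = rng.choice(len(Z_train), size=subsample, replace=False)
        scores.append(knn_score(Z_train[idx], z))
    return np.mean(scores)

inliers = rng.standard_normal((50, d)) @ L.T
outlier = mu + 15.0 * W[:, 0]                  # a point far out along the first principal axis
s_in = np.array([ensemble_score(x) for x in inliers])
s_out = ensemble_score(outlier)
print(s_out, s_in.max())
```

Because the neighbor search runs in the 3-dimensional compressed space on 100-point subsamples rather than in the full 20-dimensional space on all 300 points, both the distance concentration problem and the cost per query shrink, which is the argument the abstract makes.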
Directory of Open Access Journals (Sweden)
Nicola Koper
2012-03-01
Resource selection functions (RSFs) are often developed using satellite (ARGOS) or Global Positioning System (GPS) telemetry datasets, which provide a large amount of highly correlated data. We discuss and compare the use of generalized linear mixed-effects models (GLMMs) and generalized estimating equations (GEEs) for using this type of data to develop RSFs. GLMMs directly model differences among caribou, while GEEs depend on an adjustment of the standard error to compensate for correlation of data points within individuals. Empirical standard errors, rather than model-based standard errors, must be used with either GLMMs or GEEs when developing RSFs. There are several important differences between these approaches; in particular, GLMMs are best for producing parameter estimates that predict how management might influence individuals, while GEEs are best for predicting how management might influence populations. As the interpretation, value, and statistical significance of the two types of parameter estimates differ, it is important that users select the appropriate analytical method. We also outline the use of k-fold cross-validation to assess the fit of these models. Both GLMMs and GEEs hold promise for developing RSFs as long as they are used appropriately.
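The abstract's key prescription, empirical (sandwich) rather than model-based standard errors, can be sketched for a marginal logistic fit, which coincides with the GEE point estimate under an independence working correlation. All simulation settings below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated telemetry-style data: 40 animals, 25 correlated points each.
n_id, m = 40, 25
x = rng.standard_normal((n_id, m))
u = rng.standard_normal((n_id, 1))             # animal-level random effect
eta = -0.5 + 0.8 * x + u
y = (rng.random((n_id, m)) < 1 / (1 + np.exp(-eta))).astype(float)

X = np.column_stack([np.ones(n_id * m), x.ravel()])
Y = y.ravel()
ids = np.repeat(np.arange(n_id), m)

# Marginal logistic fit by Newton-Raphson (the GEE point estimate with an
# independence working correlation coincides with this fit).
beta = np.zeros(2)
for _ in range(25):
    p = 1 / (1 + np.exp(-X @ beta))
    W = p * (1 - p)
    H = X.T @ (X * W[:, None])
    beta += np.linalg.solve(H, X.T @ (Y - p))

# Model-based vs. empirical ("sandwich") standard errors clustered on animal.
p = 1 / (1 + np.exp(-X @ beta))
H = X.T @ (X * (p * (1 - p))[:, None])
bread = np.linalg.inv(H)
meat = np.zeros((2, 2))
for i in range(n_id):
    g = X[ids == i].T @ (Y[ids == i] - p[ids == i])   # per-cluster score
    meat += np.outer(g, g)
se_model = np.sqrt(np.diag(bread))
se_robust = np.sqrt(np.diag(bread @ meat @ bread))
print(beta, se_model, se_robust)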
Directory of Open Access Journals (Sweden)
Toshiya Yoshida
2017-11-01
The objective of forest management has broadened, and it is essential to harmonize timber production with conservation of the forest ecosystem. Selection cutting is recognized as a major alternative to clear-cutting because it can maintain the complexity and heterogeneity of a natural forest; however, long-term evaluations of it are limited. This study compared various attributes of stand structure, which are indicators of biodiversity and ecosystem carbon stock, between managed and unmanaged blocks (12.6 ha area in total) in a natural mixed forest in Hokkaido, the northernmost island of Japan. We found that 30 years' implementation of single-tree selection did not affect the volume, size structure, species diversity or spatial distribution of overstory trees in the managed stands. The total carbon stock in the managed stands was also almost equal to that of the unmanaged stands. In contrast, several structural attributes and indicator elements that are significant for biodiversity (such as large-diameter live trees, dead trees, cavities, epiphytic bryophytes, and some avian guilds) showed a marked decrease in the managed stands. We conclude that these structures and elements must be retained to some extent to secure the merit of the management as an alternative silvicultural regime in the region.
Crocker, Joanna C; Beecham, Emma; Kelly, Paula; Dinsdale, Andrew P; Hemsley, June; Jones, Louise; Bluebond-Langner, Myra
2015-03-01
Recruitment to paediatric palliative care research is challenging, with high rates of non-invitation of eligible families by clinicians. The impact on sample characteristics is unknown. To investigate, using mixed methods, non-invitation of eligible families and ensuing selection bias in an interview study about parents' experiences of advance care planning (ACP). We examined differences between eligible families invited and not invited to participate by clinicians using (1) field notes of discussions with clinicians during the invitation phase and (2) anonymised information from the service's clinical database. Families were eligible for the ACP study if their child was receiving care from a UK-based tertiary palliative care service (Group A; N = 519) or had died 6-10 months previously having received care from the service (Group B; N = 73). Rates of non-invitation to the ACP study were high. A total of 28 (5.4%) Group A families and 21 (28.8%) Group B families (p research findings. Non-invitation and selection bias should be considered, assessed and reported in palliative care studies. © The Author(s) 2014.
Directory of Open Access Journals (Sweden)
Datta Susmita
2010-08-01
Background Generally speaking, different classifiers tend to work well for certain types of data and, conversely, it is usually not known a priori which algorithm will be optimal in any given classification application. In addition, for most classification problems, selecting the best-performing classification algorithm amongst a number of competing algorithms is a difficult task for various reasons. For example, the order of performance may depend on the performance measure employed for such a comparison. In this work, we present a novel adaptive ensemble classifier constructed by combining bagging and rank aggregation that is capable of adaptively changing its performance depending on the type of data that is being classified. The attractive feature of the proposed classifier is its multi-objective nature, where the classification results can be simultaneously optimized with respect to several performance measures, for example, accuracy, sensitivity and specificity. We also show that our somewhat complex strategy has better predictive performance, as judged on test samples, than a more naive approach that attempts to directly identify the optimal classifier based on the training-data performances of the individual classifiers. Results We illustrate the proposed method with two simulated and two real-data examples. In all cases, the ensemble classifier performs at the level of the best individual classifier comprising the ensemble or better. Conclusions For complex high-dimensional datasets resulting from present-day high-throughput experiments, it may be wise to consider a number of classification algorithms combined with dimension reduction techniques rather than a fixed standard algorithm set a priori.
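The authors' algorithm is not reproduced here; the sketch below only illustrates the bare mechanics of bootstrap evaluation plus rank aggregation (a Borda-style mean rank over accuracy, sensitivity and specificity) with three deliberately simple stand-in classifiers.

```python
import numpy as np

rng = np.random.default_rng(3)

# Two well-separated classes with high-dimensional features.
n, d = 200, 30
X = rng.standard_normal((n, d))
y = (rng.random(n) < 0.5).astype(int)
X[y == 1, :3] += 2.5                           # signal in the first 3 features

def nearest_centroid(Xtr, ytr, Xte):
    c0, c1 = Xtr[ytr == 0].mean(0), Xtr[ytr == 1].mean(0)
    return (np.linalg.norm(Xte - c1, axis=1) < np.linalg.norm(Xte - c0, axis=1)).astype(int)

def one_nn(Xtr, ytr, Xte):
    D = np.linalg.norm(Xte[:, None, :] - Xtr[None, :, :], axis=2)
    return ytr[D.argmin(axis=1)]

def majority(Xtr, ytr, Xte):
    return np.full(len(Xte), np.bincount(ytr).argmax())

classifiers = [nearest_centroid, one_nn, majority]

def metrics(yt, yp):
    acc = (yt == yp).mean()
    sens = (yp[yt == 1] == 1).mean()
    spec = (yp[yt == 0] == 0).mean()
    return np.array([acc, sens, spec])

# Bootstrap the data, evaluate each classifier out-of-bag on the three
# performance measures, then aggregate the per-measure ranks (Borda count).
B = 20
perf = np.zeros((len(classifiers), 3))
for _ in range(B):
    boot = rng.integers(0, n, n)
    oob = np.setdiff1d(np.arange(n), boot)
    for c, clf in enumerate(classifiers):
        perf[c] += metrics(y[oob], clf(X[boot], y[boot], X[oob]))
perf /= B

# Rank per measure (0 = best), then pick the classifier with the best mean rank.
ranks = np.argsort(np.argsort(-perf, axis=0), axis=0)
best = ranks.mean(axis=1).argmin()
print(perf, best)
```

The multi-objective point shows up in the aggregation step: a classifier that excels on a single measure (the majority-vote baseline has perfect specificity) can still lose once its ranks across all measures are averaged.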
Detection of Subtle Context-Dependent Model Inaccuracies in High-Dimensional Robot Domains.
Mendoza, Juan Pablo; Simmons, Reid; Veloso, Manuela
2016-12-01
Autonomous robots often rely on models of their sensing and actions for intelligent decision making. However, when operating in unconstrained environments, the complexity of the world makes it infeasible to create models that are accurate in every situation. This article addresses the problem of using potentially large and high-dimensional sets of robot execution data to detect situations in which a robot model is inaccurate; that is, detecting context-dependent model inaccuracies in a high-dimensional context space. To find inaccuracies tractably, the robot conducts an informed search through low-dimensional projections of execution data to find parametric Regions of Inaccurate Modeling (RIMs). Empirical evidence from two robot domains shows that this approach significantly enhances the detection power of existing RIM-detection algorithms in high-dimensional spaces.
Linear stability theory as an early warning sign for transitions in high dimensional complex systems
International Nuclear Information System (INIS)
Piovani, Duccio; Grujić, Jelena; Jensen, Henrik Jeldtoft
2016-01-01
We analyse in detail a new approach to the monitoring and forecasting of the onset of transitions in high dimensional complex systems by application to the Tangled Nature model of evolutionary ecology and high dimensional replicator systems with a stochastic element. A high dimensional stability matrix is derived in the mean field approximation to the stochastic dynamics. This allows us to determine the stability spectrum about the observed quasi-stable configurations. From the overlap of the instantaneous configuration vector of the full stochastic system with the eigenvectors of the unstable directions of the deterministic mean field approximation, we are able to construct a good early-warning indicator of the transitions occurring intermittently.
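In miniature, the recipe is: linearize the mean-field dynamics at a quasi-stable configuration, eigendecompose the stability matrix, and monitor the overlap of fluctuations with the least stable eigenvector. The three-species generalized Lotka-Volterra community below is an invented stand-in for the Tangled Nature model.

```python
import numpy as np

# A small generalized Lotka-Volterra community: dx/dt = x * (r + A x).
r = np.array([1.0, 0.6, 0.8])
A = np.array([[-1.0, -0.2, -0.1],
              [-0.1, -1.0, -0.2],
              [-0.2, -0.1, -1.0]])

def f(x):
    return x * (r + A @ x)

x_star = np.linalg.solve(A, -r)                 # interior fixed point, f(x_star) = 0

# Stability matrix (Jacobian) by central finite differences.
eps = 1e-6
J = np.zeros((3, 3))
for j in range(3):
    e = np.zeros(3); e[j] = eps
    J[:, j] = (f(x_star + e) - f(x_star - e)) / (2 * eps)

vals, vecs = np.linalg.eig(J)
lead = vals.real.argmax()                       # least stable direction
v = vecs[:, lead]

# Early-warning indicator: overlap of an observed fluctuation with the
# leading eigenvector (here the fluctuation is an arbitrary example).
delta = np.array([0.05, -0.02, 0.01])
overlap = abs(np.vdot(v, delta)) / (np.linalg.norm(delta) * np.linalg.norm(v))
print(vals.real, overlap)
```

A growing overlap with the least stable eigenvector, tracked over time, is the early-warning signal the abstract describes; here the community is stable, so all real parts are negative.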
Fickler, Robert; Lapkiewicz, Radek; Huber, Marcus; Lavery, Martin P J; Padgett, Miles J; Zeilinger, Anton
2014-07-30
Photonics has become a mature field of quantum information science, where integrated optical circuits offer a way to scale the complexity of the set-up as well as the dimensionality of the quantum state. On photonic chips, paths are the natural way to encode information. To distribute those high-dimensional quantum states over large distances, transverse spatial modes, like orbital angular momentum possessing Laguerre Gauss modes, are favourable as flying information carriers. Here we demonstrate a quantum interface between these two vibrant photonic fields. We create three-dimensional path entanglement between two photons in a nonlinear crystal and use a mode sorter as the quantum interface to transfer the entanglement to the orbital angular momentum degree of freedom. Thus our results show a flexible way to create high-dimensional spatial mode entanglement. Moreover, they pave the way to implement broad complex quantum networks where high-dimensionally entangled states could be distributed over distant photonic chips.
Hopkins, John B; Ferguson, Jake M; Tyers, Daniel B; Kurle, Carolyn M
2017-01-01
Past research indicates that whitebark pine seeds are a critical food source for Threatened grizzly bears (Ursus arctos) in the Greater Yellowstone Ecosystem (GYE). In recent decades, whitebark pine forests have declined markedly due to pine beetle infestation, invasive blister rust, and landscape-level fires. To date, no study has reliably estimated the contribution of whitebark pine seeds to the diets of grizzlies through time. We used stable isotope ratios (expressed as δ13C, δ15N, and δ34S values) measured in grizzly bear hair and their major food sources to estimate the diets of grizzlies sampled in Cooke City Basin, Montana. We found that stable isotope mixing models that included different combinations of stable isotope values for bears and their foods generated similar proportional dietary contributions. Estimates generated by our top model suggest that whitebark pine seeds (35±10%) and other plant foods (56±10%) were more important than meat (9±8%) to grizzly bears sampled in the study area. Stable isotope values measured in bear hair collected elsewhere in the GYE and North America support our conclusions about plant-based foraging. We recommend that researchers consider model selection when estimating the diets of animals using stable isotope mixing models. We also urge researchers to use the new statistical framework described here to estimate the dietary responses of grizzlies to declines in whitebark pine seeds and other important food sources through time in the GYE (e.g., cutthroat trout), as such information could be useful in predicting how the population will adapt to future environmental change.
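The authors' framework accounts statistically for uncertainty in sources and mixtures; the deterministic core of a linear stable isotope mixing model is nonetheless easy to sketch. The source signatures below are illustrative numbers, not the paper's data.

```python
import numpy as np

# Mean source signatures (per mil; illustrative values, not the paper's data):
# columns = whitebark pine seeds, other plants, meat; rows = d13C, d15N, d34S.
sources = np.array([[-22.0, -26.0, -19.0],
                    [  2.0,   1.0,   6.0],
                    [  4.0,   3.0,   8.0]])

p_true = np.array([0.35, 0.56, 0.09])           # diet proportions to recover
mixture = sources @ p_true                      # isotope signature of bear hair

# Stack the mass-balance constraint (proportions sum to 1) under the
# three isotope balance equations and solve by least squares.
A = np.vstack([sources, np.ones(3)])
b = np.append(mixture, 1.0)
p_hat, *_ = np.linalg.lstsq(A, b, rcond=None)
print(p_hat)                                    # recovers (0.35, 0.56, 0.09)
```

With three isotopes and three sources the system is exactly determined; real applications add uncertainty in source signatures and discrimination factors, which is what the statistical framework in the paper handles.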
Binder, Harald; Porzelius, Christine; Schumacher, Martin
2011-03-01
Analysis of molecular data promises identification of biomarkers for improving prognostic models, thus potentially enabling better patient management. For identifying such biomarkers, risk prediction models can be employed that link high-dimensional molecular covariate data to a clinical endpoint. In low-dimensional settings, a multitude of statistical techniques already exists for building such models, e.g. allowing for variable selection or for quantifying the added value of a new biomarker. We provide an overview of techniques for regularized estimation that transfer this toward high-dimensional settings, with a focus on models for time-to-event endpoints. Techniques for incorporating specific covariate structure are discussed, as well as techniques for dealing with more complex endpoints. Employing gene expression data from patients with diffuse large B-cell lymphoma, some typical modeling issues from low-dimensional settings are illustrated in a high-dimensional application. First, the performance of classical stepwise regression is compared to stage-wise regression, as implemented by a component-wise likelihood-based boosting approach. A second issue arises when artificially transforming the response into a binary variable. The effects of the resulting loss of efficiency and potential bias in a high-dimensional setting are illustrated, and a link to competing risks models is provided. Finally, we discuss conditions for adequately quantifying the added value of high-dimensional gene expression measurements, both at the stage of model fitting and when performing evaluation. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
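Component-wise likelihood-based boosting for a time-to-event endpoint is involved, but its stage-wise character can be sketched with the simpler Gaussian analogue (component-wise L2-boosting), under an invented sparse truth:

```python
import numpy as np

rng = np.random.default_rng(4)

# Sparse high-dimensional truth: only features 3 and 17 carry signal.
n, p = 100, 50
X = rng.standard_normal((n, p))
X = (X - X.mean(0)) / X.std(0)
y = 3.0 * X[:, 3] + 2.0 * X[:, 17] + 0.5 * rng.standard_normal(n)
y = y - y.mean()

# Component-wise L2-boosting: at each step, update only the single
# coefficient that best fits the current residual, damped by nu.
nu, steps = 0.1, 300
beta = np.zeros(p)
resid = y.copy()
for _ in range(steps):
    coefs = X.T @ resid / n                  # univariate LS coefficients
    j = np.abs(coefs).argmax()               # best-fitting component
    beta[j] += nu * coefs[j]
    resid -= nu * coefs[j] * X[:, j]

top2 = set(np.argsort(-np.abs(beta))[:2])
print(top2, beta[3], beta[17])
```

Unlike stepwise regression, which adds a variable and fits it fully, the stage-wise updates are small and repeated, which produces implicit regularization and variable selection at once; that is the contrast the abstract draws.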
Clavel, Julien; Aristide, Leandro; Morlon, Hélène
2018-06-19
Working with high-dimensional phylogenetic comparative datasets is challenging because likelihood-based multivariate methods suffer from low statistical performances as the number of traits p approaches the number of species n and because some computational complications occur when p exceeds n. Alternative phylogenetic comparative methods have recently been proposed to deal with the large p small n scenario but their use and performances are limited. Here we develop a penalized likelihood framework to deal with high-dimensional comparative datasets. We propose various penalizations and methods for selecting the intensity of the penalties. We apply this general framework to the estimation of parameters (the evolutionary trait covariance matrix and parameters of the evolutionary model) and model comparison for the high-dimensional multivariate Brownian (BM), Early-burst (EB), Ornstein-Uhlenbeck (OU) and Pagel's lambda models. We show using simulations that our penalized likelihood approach dramatically improves the estimation of evolutionary trait covariance matrices and model parameters when p approaches n, and allows for their accurate estimation when p equals or exceeds n. In addition, we show that penalized likelihood models can be efficiently compared using Generalized Information Criterion (GIC). We implement these methods, as well as the related estimation of ancestral states and the computation of phylogenetic PCA in the R package RPANDA and mvMORPH. Finally, we illustrate the utility of the new proposed framework by evaluating evolutionary models fit, analyzing integration patterns, and reconstructing evolutionary trajectories for a high-dimensional 3-D dataset of brain shape in the New World monkeys. We find a clear support for an Early-burst model suggesting an early diversification of brain morphology during the ecological radiation of the clade. Penalized likelihood offers an efficient way to deal with high-dimensional multivariate comparative data.
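The paper's penalizations and GIC are not reproduced here; as a stand-in, the sketch below selects the intensity of a simple linear shrinkage penalty on the trait covariance matrix by cross-validated Gaussian likelihood, ignoring the phylogenetic correlation structure that the actual method handles.

```python
import numpy as np

rng = np.random.default_rng(5)

# p traits close to n species (phylogenetic correlation ignored here;
# assume the data have already been phylogenetically transformed).
n, p = 50, 40
X = rng.standard_normal((n, p))

def shrunk_cov(S, alpha):
    return (1 - alpha) * S + alpha * np.trace(S) / p * np.eye(p)

def gauss_loglik(Sigma, Xte):
    sign, logdet = np.linalg.slogdet(Sigma)
    iS = np.linalg.inv(Sigma)
    quad = np.einsum('ij,jk,ik->i', Xte, iS, Xte)
    return -0.5 * (p * np.log(2 * np.pi) + logdet + quad).sum()

# Select the penalty intensity by 5-fold cross-validated likelihood.
folds = np.array_split(rng.permutation(n), 5)
grid = [1e-4, 0.05, 0.1, 0.2, 0.4, 0.8]
cv = []
for alpha in grid:
    ll = 0.0
    for te in folds:
        tr = np.setdiff1d(np.arange(n), te)
        S = np.cov(X[tr], rowvar=False)
        ll += gauss_loglik(shrunk_cov(S, alpha), X[te])
    cv.append(ll)
best_alpha = grid[int(np.argmax(cv))]
print(best_alpha, cv)
```

With p close to n the nearly unpenalized estimate is close to singular, so its held-out likelihood collapses and a clearly positive penalty intensity is selected, mirroring the paper's finding that penalization is essential in this regime.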
Poša, Mihalj; Tepavčević, Vesna
2011-09-01
The formation of mixed micelles built of 7,12-dioxolithocholic and the following hydrophobic bile acids was examined by conductometric method: cholic (C), deoxycholic (D), chenodeoxycholic (CD), 12-oxolithocholic (12-oxoL), 7-oxolithocholic (7-oxoL), ursodeoxycholic (UD) and hiodeoxycholic (HD). Interaction parameter (β) in the studied binary mixed micelles had negative value, suggesting synergism between micelle building units. Based on β value, the hydrophobic bile acids formed two groups: group I (C, D and CD) and group II (12-oxoL, 7-oxoL, UD and HD). Bile acids from group II had more negative β values than bile acids from group I. Also, bile acids from group II formed intermolecular hydrogen bonds in aggregates with both smaller (2) and higher (4) aggregation numbers, according to the analysis of their stereochemical (conformational) structures and possible structures of mixed micelles built of these bile acids and 7,12-dioxolithocholic acid. Haemolytic potential and partition coefficient of nitrazepam were higher in mixed micelles built of the more hydrophobic bile acids (C, D, CD) and 7,12-dioxolithocholic acid than in micelles built only of 7,12-dioxolithocholic acid. On the other hand, these mixed micelles still had lower values of haemolytic potential than micelles built of C, D or CD. The mixed micelles that included bile acids: 12-oxoL, 7-oxoL, UD or HD did not significantly differ from the micelles of 7,12-dioxolithocholic acid, observing the values of their haemolytic potential. Copyright © 2011 Elsevier B.V. All rights reserved.
DEFF Research Database (Denmark)
Kandzia, Claudia; Kosonen, Risto; Melikov, Arsen Krikor
In this guidebook, most of the methods known and used in practice for achieving mixing air distribution are discussed. Mixing ventilation has been applied to many different spaces, providing fresh air and thermal comfort to the occupants. Today, a design engineer can choose from a large selection…
Directory of Open Access Journals (Sweden)
Thenmozhi Srinivasan
2015-01-01
Clustering techniques for high-dimensional data are emerging in response to the challenges of noisy, poor-quality data. This paper develops clustering of high-dimensional data using similarity-based PCM (SPCM) with ant colony optimization intelligence, which is effective in clustering non-spatial data without requiring knowledge of the cluster number from the user. The PCM becomes similarity-based through use of the mountain method. Although this clustering is efficient, it is further checked for optimization using an ant colony algorithm with swarm intelligence. A scalable clustering technique is thus obtained, and the evaluation results are verified with synthetic datasets.
Reduced basis ANOVA methods for partial differential equations with high-dimensional random inputs
Energy Technology Data Exchange (ETDEWEB)
Liao, Qifeng, E-mail: liaoqf@shanghaitech.edu.cn [School of Information Science and Technology, ShanghaiTech University, Shanghai 200031 (China); Lin, Guang, E-mail: guanglin@purdue.edu [Department of Mathematics & School of Mechanical Engineering, Purdue University, West Lafayette, IN 47907 (United States)
2016-07-15
In this paper we present a reduced basis ANOVA approach for partial differential equations (PDEs) with random inputs. The ANOVA method combined with stochastic collocation methods provides model reduction in high-dimensional parameter space through decomposing high-dimensional inputs into unions of low-dimensional inputs. In this work, to further reduce the computational cost, we investigate spatial low-rank structures in the ANOVA-collocation method, and develop efficient spatial model reduction techniques using hierarchically generated reduced bases. We present a general mathematical framework of the methodology, validate its accuracy and demonstrate its efficiency with numerical experiments.
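The decomposition that drives such methods can be illustrated with an anchored (cut-HDMR style) first-order ANOVA expansion. For the additive toy function below the truncation is exact, which is precisely the low-dimensional structure these methods exploit; the function and anchor are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(6)

# A 10-dimensional additive model problem: f decomposes exactly into
# zeroth- and first-order ANOVA terms, so the truncation is lossless here.
d = 10
w = np.linspace(0.5, 1.4, d)
def f(x):
    return 2.0 + np.sum(np.sin(w * x))

anchor = np.zeros(d)                       # anchor ("cut") point
f0 = f(anchor)

def first_order_term(i, xi):
    x = anchor.copy(); x[i] = xi
    return f(x) - f0                       # f_i(x_i) in the anchored ANOVA

def anova_approx(x):
    return f0 + sum(first_order_term(i, x[i]) for i in range(d))

x_test = rng.uniform(-1, 1, d)
print(f(x_test), anova_approx(x_test))     # identical for an additive f
```

Each first-order term depends on a single input, so every sub-problem is one-dimensional; the reduced basis machinery in the paper then compresses the spatial PDE solves attached to each of these low-dimensional terms.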
The validation and assessment of machine learning: a game of prediction from high-dimensional data
DEFF Research Database (Denmark)
Pers, Tune Hannes; Albrechtsen, A; Holst, C
2009-01-01
In applied statistics, tools from machine learning are popular for analyzing complex and high-dimensional data. However, few theoretical results are available that could guide to the appropriate machine learning tool in a new application. Initial development of an overall strategy thus often … To illustrate the ideas, the game is applied to data from the Nugenob Study, where the aim is to predict the fat oxidation capacity based on conventional factors and high-dimensional metabolomics data. Three players have chosen to use support vector machines, LASSO, and random forests, respectively.
International Nuclear Information System (INIS)
Tripathy, Rohit; Bilionis, Ilias; Gonzalez, Marcial
2016-01-01
Uncertainty quantification (UQ) tasks, such as model calibration, uncertainty propagation, and optimization under uncertainty, typically require several thousand evaluations of the underlying computer codes. To cope with the cost of simulations, one replaces the real response surface with a cheap surrogate based, e.g., on polynomial chaos expansions, neural networks, support vector machines, or Gaussian processes (GP). However, the number of simulations required to learn a generic multivariate response grows exponentially as the input dimension increases. This curse of dimensionality can only be addressed if the response exhibits some special structure that can be discovered and exploited. A wide range of physical responses exhibit a special structure known as an active subspace (AS). An AS is a linear manifold of the stochastic space characterized by maximal response variation. The idea is that one should first identify this low-dimensional manifold, project the high-dimensional input onto it, and then link the projection to the output. If the dimensionality of the AS is low enough, then learning the link function is a much easier problem than the original problem of learning a high-dimensional function. The classic approach to discovering the AS requires gradient information, a fact that severely limits its applicability. Furthermore, partly because of its reliance on gradients, it is not able to handle noisy observations. The latter is an essential trait if one wants to be able to propagate uncertainty through stochastic simulators, e.g., through molecular dynamics codes. In this work, we develop a probabilistic version of AS which is gradient-free and robust to observational noise. Our approach relies on a novel Gaussian process regression with built-in dimensionality reduction, in which the AS is represented as an orthogonal projection matrix that serves as yet another covariance function hyper-parameter to be estimated from the data.
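For contrast with the gradient-free proposal, the classic gradient-based AS construction is easy to sketch: eigendecompose a Monte Carlo estimate of C = E[∇f ∇fᵀ]. The ridge function below is an invented test case with a one-dimensional active subspace.

```python
import numpy as np

rng = np.random.default_rng(7)

# A 20-dimensional "ridge" function f(x) = sin(w.x): it varies only along w,
# so its active subspace is the one-dimensional span of w.
d = 20
w = rng.standard_normal(d)
w /= np.linalg.norm(w)

def grad_f(x):                             # gradient of f(x) = sin(w.x)
    return np.cos(w @ x) * w

# Classic (gradient-based) active subspace discovery: eigendecompose
# C = E[grad f grad f^T], estimated by Monte Carlo over the input measure.
N = 500
G = np.array([grad_f(rng.standard_normal(d)) for _ in range(N)])
C = G.T @ G / N
vals, vecs = np.linalg.eigh(C)
vals, vecs = vals[::-1], vecs[:, ::-1]     # sort eigenpairs descending

alignment = abs(vecs[:, 0] @ w)            # leading eigenvector recovers w
print(vals[:3], alignment)
```

Every gradient is parallel to w, so C is rank one and the spectral gap is enormous; the paper's contribution is recovering the same projection when gradients are unavailable and the observations are noisy, where this classic estimator cannot be applied.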
Tripathy, Rohit; Bilionis, Ilias; Gonzalez, Marcial
2016-09-01
Uncertainty quantification (UQ) tasks, such as model calibration, uncertainty propagation, and optimization under uncertainty, typically require several thousand evaluations of the underlying computer codes. To cope with the cost of simulations, one replaces the real response surface with a cheap surrogate based, e.g., on polynomial chaos expansions, neural networks, support vector machines, or Gaussian processes (GP). However, the number of simulations required to learn a generic multivariate response grows exponentially as the input dimension increases. This curse of dimensionality can only be addressed, if the response exhibits some special structure that can be discovered and exploited. A wide range of physical responses exhibit a special structure known as an active subspace (AS). An AS is a linear manifold of the stochastic space characterized by maximal response variation. The idea is that one should first identify this low dimensional manifold, project the high-dimensional input onto it, and then link the projection to the output. If the dimensionality of the AS is low enough, then learning the link function is a much easier problem than the original problem of learning a high-dimensional function. The classic approach to discovering the AS requires gradient information, a fact that severely limits its applicability. Furthermore, and partly because of its reliance to gradients, it is not able to handle noisy observations. The latter is an essential trait if one wants to be able to propagate uncertainty through stochastic simulators, e.g., through molecular dynamics codes. In this work, we develop a probabilistic version of AS which is gradient-free and robust to observational noise. Our approach relies on a novel Gaussian process regression with built-in dimensionality reduction. In particular, the AS is represented as an orthogonal projection matrix that serves as yet another covariance function hyper-parameter to be estimated from the data. To train the
Energy Technology Data Exchange (ETDEWEB)
Tripathy, Rohit, E-mail: rtripath@purdue.edu; Bilionis, Ilias, E-mail: ibilion@purdue.edu; Gonzalez, Marcial, E-mail: marcial-gonzalez@purdue.edu
2016-09-15
Uncertainty quantification (UQ) tasks, such as model calibration, uncertainty propagation, and optimization under uncertainty, typically require several thousand evaluations of the underlying computer codes. To cope with the cost of simulations, one replaces the real response surface with a cheap surrogate based, e.g., on polynomial chaos expansions, neural networks, support vector machines, or Gaussian processes (GP). However, the number of simulations required to learn a generic multivariate response grows exponentially as the input dimension increases. This curse of dimensionality can only be addressed if the response exhibits some special structure that can be discovered and exploited. A wide range of physical responses exhibit a special structure known as an active subspace (AS). An AS is a linear manifold of the stochastic space characterized by maximal response variation. The idea is that one should first identify this low-dimensional manifold, project the high-dimensional input onto it, and then link the projection to the output. If the dimensionality of the AS is low enough, then learning the link function is a much easier problem than the original problem of learning a high-dimensional function. The classic approach to discovering the AS requires gradient information, a fact that severely limits its applicability. Furthermore, and partly because of its reliance on gradients, it is not able to handle noisy observations. The latter is an essential trait if one wants to be able to propagate uncertainty through stochastic simulators, e.g., through molecular dynamics codes. In this work, we develop a probabilistic version of AS which is gradient-free and robust to observational noise. Our approach relies on a novel Gaussian process regression with built-in dimensionality reduction. In particular, the AS is represented as an orthogonal projection matrix that serves as yet another covariance function hyper-parameter to be estimated from the data. To train the
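The gradient-based construction that this abstract contrasts against can be sketched in a few lines: estimate C = E[grad(f) grad(f)^T] by Monte Carlo and keep the leading eigenvectors. The toy ridge function below is a hypothetical example, purely to illustrate recovery of a one-dimensional AS:

```python
import numpy as np

def active_subspace(grads, k):
    """Classic AS discovery: eigendecompose C = E[grad(f) grad(f)^T]."""
    C = grads.T @ grads / grads.shape[0]          # Monte Carlo estimate of C
    eigvals, eigvecs = np.linalg.eigh(C)
    order = np.argsort(eigvals)[::-1]             # sort descending
    return eigvals[order], eigvecs[:, order[:k]]  # top-k directions span the AS

# Hypothetical ridge function f(x) = sin(w . x): every gradient is parallel
# to w, so the active subspace is exactly one-dimensional.
rng = np.random.default_rng(0)
w = np.array([3.0, 1.0, 0.1])
w /= np.linalg.norm(w)
X = rng.standard_normal((200, 3))
grads = np.cos(X @ w)[:, None] * w                # gradient of sin(w . x)
vals, W = active_subspace(grads, k=1)
print(vals[0] / vals.sum() > 0.99)                # True: one direction dominates
```

Note this sketch needs exact gradients at each sample, which is precisely the requirement the gradient-free GP approach above removes.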
Selective Reduction of CO2 to CH4 by Tandem Hydrosilylation with Mixed Al/B Catalysts
Chen, Jiawei
2016-04-04
This contribution reports the first example of highly selective reduction of CO2 into CH4 via tandem hydrosilylation with mixed main-group organo-Lewis acid (LA) catalysts [Al(C6F5)3 + B(C6F5)3] {[Al] + [B]}. As shown by this comprehensive experimental and computational study, in this unique tandem catalytic process, [Al] effectively mediates the first step of the overall reduction cycle, namely the fixation of CO2 into HCOOSiEt3 (1) via the LA-mediated C=O activation, while [B] is incapable of promoting the same transformation. On the other hand, [B] is shown to be an excellent catalyst for the subsequent reduction steps 2–4, namely the hydrosilylation of the more basic intermediates [1 to H2C(OSiEt3)2 (2) to H3COSiEt3 (3) and finally to CH4] through the frustrated-Lewis-pair (FLP)-type Si–H activation. Hence, with the required combination of [Al] and [B], a highly selective system for the hydrosilylative reduction of CO2 has been developed, achieving a CH4 production yield of up to 94%. The remarkably different catalytic behaviors of [Al] and [B] are attributed to the higher overall Lewis acidity of [Al], derived from two conflicting factors (electronic and steric effects), which results in a higher tendency of [Al] to form stable [Al]–substrate (intermediate) adducts with CO2 as well as with the subsequent intermediates 1, 2 and 3. Overall, the roles of [Al] and [B] are not only complementary but also synergistic in the total reduction of CO2, rendering both the [Al]-mediated first reduction step and the [B]-mediated subsequent steps catalytic.
Selective Reduction of CO2 to CH4 by Tandem Hydrosilylation with Mixed Al/B Catalysts
Chen, Jiawei; Falivene, Laura; Caporaso, Lucia; Cavallo, Luigi; Chen, Eugene Y.-X.
2016-01-01
This contribution reports the first example of highly selective reduction of CO2 into CH4 via tandem hydrosilylation with mixed main-group organo-Lewis acid (LA) catalysts [Al(C6F5)3 + B(C6F5)3] {[Al] + [B]}. As shown by this comprehensive experimental and computational study, in this unique tandem catalytic process, [Al] effectively mediates the first step of the overall reduction cycle, namely the fixation of CO2 into HCOOSiEt3 (1) via the LA-mediated C=O activation, while [B] is incapable of promoting the same transformation. On the other hand, [B] is shown to be an excellent catalyst for the subsequent reduction steps 2–4, namely the hydrosilylation of the more basic intermediates [1 to H2C(OSiEt3)2 (2) to H3COSiEt3 (3) and finally to CH4] through the frustrated-Lewis-pair (FLP)-type Si–H activation. Hence, with the required combination of [Al] and [B], a highly selective system for the hydrosilylative reduction of CO2 has been developed, achieving a CH4 production yield of up to 94%. The remarkably different catalytic behaviors of [Al] and [B] are attributed to the higher overall Lewis acidity of [Al], derived from two conflicting factors (electronic and steric effects), which results in a higher tendency of [Al] to form stable [Al]–substrate (intermediate) adducts with CO2 as well as with the subsequent intermediates 1, 2 and 3. Overall, the roles of [Al] and [B] are not only complementary but also synergistic in the total reduction of CO2, rendering both the [Al]-mediated first reduction step and the [B]-mediated subsequent steps catalytic.
Wu, Qun; Ling, Jie
2014-01-01
Selection of a starter culture with excellent viability and metabolic activity is important for inoculated fermentation of traditional food. To obtain a suitable starter culture for making Chinese sesame-flavored liquor, the yeast and bacterium community structures were investigated during spontaneous and solid-state fermentations of this type of liquor. Five dominant species in spontaneous fermentation were identified: Saccharomyces cerevisiae, Pichia membranaefaciens, Issatchenkia orientalis, Bacillus licheniformis, and Bacillus amyloliquefaciens. The metabolic activity of each species in mixed and inoculated fermentations of liquor was investigated in 14 different cocultures that used different combinations of these species. The relationships between the microbial species and volatile metabolites were analyzed by partial least-squares (PLS) regression analysis. We found that S. cerevisiae was positively correlated with nonanal, and B. licheniformis was positively associated with 2,3-butanediol, isobutyric acid, guaiacol, and 4-vinyl guaiacol, while I. orientalis was positively correlated with butyric acid, isovaleric acid, hexanoic acid, and 2,3-butanediol. These three species are excellent flavor producers for Chinese liquor. Although P. membranaefaciens and B. amyloliquefaciens were not efficient flavor producers, adding them alleviated competition among the other three species and altered their growth rates and flavor production. As a result, the coculture of all five dominant species produced the largest amount of flavor compounds. The result indicates that flavor producers and microbial interaction regulators are important for inoculated fermentation of Chinese sesame-flavored liquor. PMID:24814798
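The partial least-squares step used above to relate species abundances to volatile metabolites can be illustrated with a minimal one-component PLS1 sketch (NIPALS-style, on synthetic centered data; the toy model with informative predictors 0 and 2 is an assumption, not the study's data):

```python
import numpy as np

def pls1_first_component(X, y):
    """First NIPALS component of PLS1 on centered data (minimal sketch)."""
    w = X.T @ y
    w /= np.linalg.norm(w)       # weight vector: X-direction most covariant with y
    t = X @ w                    # scores
    q = (y @ t) / (t @ t)        # y-loading: regress y on the scores
    return w, t, q

# Synthetic data: y depends on predictors 0 and 2 only (assumed toy model).
rng = np.random.default_rng(1)
X = rng.standard_normal((50, 4))
y = 2.0 * X[:, 0] - 1.0 * X[:, 2] + 0.1 * rng.standard_normal(50)
X -= X.mean(axis=0)
y -= y.mean()
w, t, q = pls1_first_component(X, y)
print(int(np.argmax(np.abs(w))))   # 0: the strongest predictor gets the top weight
```

In the study's setting, positive PLS weights linking a species column to a metabolite response are read as the positive correlations reported above.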
International Nuclear Information System (INIS)
Khan, Mohd Shariq; Lee, Sanggyu; Rangaiah, G.P.; Lee, Moonyong
2013-01-01
Highlights:
• A practical method for finding the optimum refrigerant composition is proposed for an LNG plant.
• Knowledge of boiling point differences among refrigerant components is employed.
• Implementation of process knowledge makes the LNG process notably more energy efficient.
• Optimization of the LNG plant is more transparent using process knowledge.
Abstract: Mixed refrigerant (MR) systems are used in many industrial applications because of their high energy efficiency, compact design and energy-efficient heat transfer compared to other processes operating with pure refrigerants. The performance of MR systems depends strongly on the optimum refrigerant composition, which is difficult to obtain. This paper proposes a simple and practical method for selecting the appropriate refrigerant composition, inspired by (i) knowledge of the boiling point differences among MR components, and (ii) their specific refrigeration effect in bringing an MR system close to reversible operation. A feasibility plot and composite curves were used for full enforcement of the approach temperature. The proposed knowledge-based optimization approach was described and applied to a single MR and a propane-precooled MR system for natural gas liquefaction. Maximization of the heat exchanger exergy efficiency was considered as the optimization objective to achieve an energy-efficient design goal. Several case studies on single MR and propane-precooled MR processes were performed to show the effectiveness of the proposed method. The application of the proposed method is not restricted to liquefiers, and it can be applied to any refrigerator and cryogenic cooler where an MR is involved.
DEFF Research Database (Denmark)
Andersen, Anders Holst; Korsgaard, Inge Riis; Jensen, Just
2002-01-01
In this paper, we consider selection based on the best predictor of animal additive genetic values in Gaussian linear mixed models, threshold models, Poisson mixed models, and log normal frailty models for survival data (including models with time-dependent covariates with associated fixed or random effects). In the different models, expressions are given (when these can be found; otherwise unbiased estimates are given) for prediction error variance, accuracy of selection, and expected response to selection on the additive genetic scale and on the observed scale. The expressions given for non-Gaussian traits are generalisations of the well-known formulas for Gaussian traits, and reflect, for Poisson mixed models and frailty models for survival data, the hierarchical structure of the models. In general the ratio of the additive genetic variance to the total variance in the Gaussian part...
An irregular grid approach for pricing high-dimensional American options
Berridge, S.J.; Schumacher, J.M.
2008-01-01
We propose and test a new method for pricing American options in a high-dimensional setting. The method is centered around the approximation of the associated complementarity problem on an irregular grid. We approximate the partial differential operator on this grid by appealing to the SDE
CSIR Research Space (South Africa)
Giovannini, D
2013-06-01
Full Text Available: QELS Fundamental Science, San Jose, California, United States, 9-14 June 2013. Reconstruction of High-Dimensional States Entangled in Orbital Angular Momentum Using Mutually Unbiased Measurements. D. Giovannini, J. Romero, J. Leach, A...
Global communication schemes for the numerical solution of high-dimensional PDEs
DEFF Research Database (Denmark)
Hupp, Philipp; Heene, Mario; Jacob, Riko
2016-01-01
The numerical treatment of high-dimensional partial differential equations is among the most compute-hungry problems and in urgent need for current and future high-performance computing (HPC) systems. It is thus also facing the grand challenges of exascale computing such as the requirement...
High-Dimensional Intrinsic Interpolation Using Gaussian Process Regression and Diffusion Maps
International Nuclear Information System (INIS)
Thimmisetty, Charanraj A.; Ghanem, Roger G.; White, Joshua A.; Chen, Xiao
2017-01-01
This article considers the challenging task of estimating geologic properties of interest using a suite of proxy measurements. The current work recasts this task as a manifold learning problem. In this process, this article introduces a novel regression procedure for intrinsic variables constrained onto a manifold embedded in an ambient space. The procedure is meant to sharpen high-dimensional interpolation by inferring non-linear correlations from the data being interpolated. The proposed approach augments manifold learning procedures with a Gaussian process regression. It first identifies, using diffusion maps, a low-dimensional manifold embedded in an ambient high-dimensional space associated with the data. It relies on the diffusion distance associated with this construction to define a distance function with which the data model is equipped. This distance function is then used to compute the correlation structure of a Gaussian process that describes the statistical dependence of quantities of interest in the high-dimensional ambient space. The proposed method is applicable to arbitrarily high-dimensional data sets. Here, it is applied to subsurface characterization using a suite of well log measurements. The predictions obtained in original, principal component, and diffusion space are compared using both qualitative and quantitative metrics. Considerable improvement in the prediction of the geological structural properties is observed with the proposed method.
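The diffusion-map stage of such a procedure, identifying a low-dimensional manifold from pairwise affinities, can be sketched as follows (a toy noisy circle stands in for well-log data, and the kernel bandwidth eps is an assumed tuning parameter):

```python
import numpy as np

def diffusion_map(X, eps, k=2):
    """Minimal diffusion-map sketch: Gaussian kernel -> row-stochastic Markov
    matrix -> leading non-trivial eigenvectors as intrinsic coordinates."""
    D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)  # pairwise squared distances
    P = np.exp(-D2 / eps)
    P /= P.sum(axis=1, keepdims=True)                    # Markov normalization
    eigvals, eigvecs = np.linalg.eig(P)
    order = np.argsort(-eigvals.real)
    idx = order[1:k + 1]          # drop the trivial constant eigenvector (eigenvalue 1)
    return eigvecs[:, idx].real * eigvals[idx].real

# Toy data: points on a noisy circle; the embedding yields 2 intrinsic coordinates.
rng = np.random.default_rng(2)
theta = np.sort(rng.uniform(0.0, 2.0 * np.pi, 80))
X = np.c_[np.cos(theta), np.sin(theta)] + 0.01 * rng.standard_normal((80, 2))
Y = diffusion_map(X, eps=0.5)
print(Y.shape)   # (80, 2)
```

In the full method above, distances in this embedded space (diffusion distances) would then define the covariance of the Gaussian process used for interpolation.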
Ferdosi, Bilkis J.; Buddelmeijer, Hugo; Trager, Scott; Wilkinson, Michael H.F.; Roerdink, Jos B.T.M.
2010-01-01
Data sets in astronomy are growing to enormous sizes. Modern astronomical surveys provide not only image data but also catalogues of millions of objects (stars, galaxies), each object with hundreds of associated parameters. Exploration of this very high-dimensional data space poses a huge challenge.
High-Dimensional Exploratory Item Factor Analysis by a Metropolis-Hastings Robbins-Monro Algorithm
Cai, Li
2010-01-01
A Metropolis-Hastings Robbins-Monro (MH-RM) algorithm for high-dimensional maximum marginal likelihood exploratory item factor analysis is proposed. The sequence of estimates from the MH-RM algorithm converges with probability one to the maximum likelihood solution. Details on the computer implementation of this algorithm are provided. The…
Estimating the effect of a variable in a high-dimensional regression model
DEFF Research Database (Denmark)
Jensen, Peter Sandholt; Wurtz, Allan
assume that the effect is identified in a high-dimensional linear model specified by unconditional moment restrictions. We consider properties of the following methods, which rely on low-dimensional models to infer the effect: Extreme bounds analysis, the minimum t-statistic over models, Sala...
Multi-Scale Factor Analysis of High-Dimensional Brain Signals
Ting, Chee-Ming; Ombao, Hernando; Salleh, Sh-Hussain
2017-01-01
In this paper, we develop an approach to modeling high-dimensional networks with a large number of nodes arranged in a hierarchical and modular structure. We propose a novel multi-scale factor analysis (MSFA) model which partitions the massive
Spectrally-Corrected Estimation for High-Dimensional Markowitz Mean-Variance Optimization
Z. Bai (Zhidong); H. Li (Hua); M.J. McAleer (Michael); W.-K. Wong (Wing-Keung)
2016-01-01
This paper considers the portfolio problem for high dimensional data when the dimension and size are both large. We analyze the traditional Markowitz mean-variance (MV) portfolio by large dimension matrix theory, and find the spectral distribution of the sample covariance is the main
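The plug-in Markowitz estimator whose large-dimension behavior is analyzed here can be sketched in its unconstrained form (synthetic returns; the risk-aversion parameter and toy means are assumptions):

```python
import numpy as np

def mv_weights(returns, risk_aversion=1.0):
    """Plug-in Markowitz weights w = Sigma^{-1} mu / gamma (unconstrained).
    When the dimension is comparable to the sample size, the sample covariance
    Sigma is badly conditioned -- the distortion a spectral correction of its
    eigenvalues aims to repair."""
    mu = returns.mean(axis=0)
    sigma = np.cov(returns, rowvar=False)
    return np.linalg.solve(sigma, mu) / risk_aversion

# Synthetic i.i.d. returns for 5 assets (assumed toy data).
rng = np.random.default_rng(3)
mean = np.array([0.001, 0.002, 0.0, 0.001, 0.003])
R = 0.02 * rng.standard_normal((500, 5)) + mean
w = mv_weights(R)
print(w.shape)   # (5,)
```

Here n = 500 samples comfortably exceed p = 5 assets; the paper's regime of interest is p growing with n, where this naive plug-in over-predicts the achievable return.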
Berridge, S.J.; Schumacher, J.M.
2004-01-01
We propose a method for pricing high-dimensional American options on an irregular grid; the method involves using quadratic functions to approximate the local effect of the Black-Scholes operator. Once such an approximation is known, one can solve the pricing problem by time stepping in an explicit
Multigrid for high dimensional elliptic partial differential equations on non-equidistant grids
bin Zubair, H.; Oosterlee, C.E.; Wienands, R.
2006-01-01
This work presents techniques, theory and numbers for multigrid in a general d-dimensional setting. The main focus is the multigrid convergence for high-dimensional partial differential equations (PDEs). As a model problem we have chosen the anisotropic diffusion equation, on a unit hypercube. We
An Irregular Grid Approach for Pricing High-Dimensional American Options
Berridge, S.J.; Schumacher, J.M.
2004-01-01
We propose and test a new method for pricing American options in a high-dimensional setting. The method is centred around the approximation of the associated complementarity problem on an irregular grid. We approximate the partial differential operator on this grid by appealing to the SDE
Pricing and hedging high-dimensional American options : an irregular grid approach
Berridge, S.; Schumacher, H.
2002-01-01
We propose and test a new method for pricing American options in a high dimensional setting. The method is centred around the approximation of the associated variational inequality on an irregular grid. We approximate the partial differential operator on this grid by appealing to the SDE
International Nuclear Information System (INIS)
Xue Gao; Jian Song; Yong Heng Xing; Feng Ying Bai; Li Xian Sun; Zhan Shi
2016-01-01
Four uranyl complexes (UO_2)_2(pht)_2(Hpac)_2(H_2O)_2 (pht = phthalic acid and Hpac = nicotinic acid) (1), (UO_2)(pac)_2(H_2O)_2 (2), [(UO_2)(CMA)_3][H_2N(CH_3)_2] (CMA = cinnamic acid) (3) and (UO_2)_2(C_2O_4)(μ_2-OH)_2(H_2O)_2·H_2O (4) were synthesized by the reaction of UO_2(CH_3COO)_2·2H_2O as the metal source with phthalic acid, nicotinic acid, cinnamic acid or oxalic acid as the ligand. They were characterized by elemental analysis, IR, UV-vis, XRD, single-crystal X-ray diffraction and thermal gravimetric analysis. The structural analysis showed that complexes 1, 2 and 3 are discrete structures; through hydrogen bonding interactions, the adjacent molecular units are connected to form a three-dimensional (3D) supramolecular network structure for complex 1 and one-dimensional (1D) chains for complexes 2 and 3. Meanwhile, in the structure of complex 4, a tetrameric SBU (UO_2)_4(μ_2-OH)_4(H_2O)_4 is linked into a 2D layer through a bridging oxalic acid ligand, which is further extended into a 3D supramolecular architecture by hydrogen bonding interactions. In order to extend their functional properties, their photoluminescence, surface photovoltage and mixed-dye selective adsorption properties have been studied for the first time. Through these experiments, we found that the adsorption performance of complex 3 was better than that of the others, with an adsorbed amount of RhB of 4.22 mg·g^-1. (authors)
Than, Kyu Kyu; Tin, Khaing Nwe; La, Thazin; Thant, Kyaw Soe; Myint, Theingi; Beeson, James G; Luchters, Stanley; Morgan, Alison
2018-01-03
An estimated 282 women die for every 100,000 live births in Myanmar, most due to preventable causes. Auxiliary Midwives (AMWs) in Myanmar are responsible for providing a package of care during pregnancy and childbirth to women in rural hard to reach areas where skilled birth attendants (Midwives) are not accessible. This study aims to examine the role of AMWs in Myanmar and to assess the current practices of three proposed essential maternal interventions (oral supplement distribution to pregnant women; administration of misoprostol to prevent postpartum haemorrhage; management of puerperal sepsis with oral antibiotics) in order to facilitate a formal integration of these tasks to AMWs in Myanmar. A mixed methods study was conducted in Magwe Region, Myanmar involving a survey of 262 AMWs, complemented by 15 focus group discussions with midwives (MWs), AMWs, mothers and community members, and 10 key informant interviews with health care providers at different levels within the health care system. According to current government policy, AMWs are responsible for identifying pregnant women, screening for danger signs and facilitating early referral, provision of counselling on nutrition and birth preparedness for women in hard-to-reach areas. AMWs also assist at normal deliveries and help MWs provide immunization services. In practice, they also provide oral supplements to pregnant women (84%), provide antibiotics to mothers during the puerperium (43%), and provide misoprostol to prevent postpartum haemorrhage (41%). The current practices of AMWs demonstrate the potential for task shifting on selected essential maternal interventions. However, to integrate these interventions into formal practice they must be complemented with appropriate training, clear guidelines on drug use, systematic recording and reporting, supportive monitoring and supervision and a clear political commitment towards task shifting. With the current national government's commitment towards one
Directory of Open Access Journals (Sweden)
S. H. Jafari
2013-01-01
Full Text Available: Polylactic acid (PLA)/linear low density polyethylene (LLDPE) blend nanocomposites based on two different commercial-grade nanoclays, Cloisite® 30B and Cloisite® 15A, were produced via different melt mixing procedures in a counter-rotating twin screw extruder. The effects of mixing sequence and clay type on morphological and rheological behaviors as well as degradation properties of the blends were investigated. The X-ray diffraction (XRD) results showed that generally the level of exfoliation in 30B-based nanocomposites was better than in 15A-based nanocomposites. In addition, due to differences in hydrophilicity and the kind of modifiers in these two clays, the effect of 30B on refinement of the dispersed phase and enhancement of biodegradability of the PLA/LLDPE blend was much more remarkable than that of the 15A nanoclay. Unlike the one-step mixing process, preparation of nanocomposites via a two-step mixing process improved the morphology. Based on the XRD and TEM (transmission electron microscopy) results, it is found that the mixing sequence has a remarkable influence on dispersion and localization of the major part of the 30B nanoclay in the PLA matrix. Owing to the induced selective localization of nanoclays in the PLA phase, the nanocomposites prepared through a two-step mixing sequence exhibited extraordinary biodegradability, finer morphology, and better melt elasticity.
Bit-Table Based Biclustering and Frequent Closed Itemset Mining in High-Dimensional Binary Data
Directory of Open Access Journals (Sweden)
András Király
2014-01-01
Full Text Available: During the last decade various algorithms have been developed and proposed for discovering overlapping clusters in high-dimensional data. The two most prominent application fields in this research, proposed independently, are frequent itemset mining (developed for market basket data) and biclustering (applied to gene expression data analysis). The common limitation of both methodologies is their limited applicability to very large binary data sets. In this paper we propose a novel and efficient method to find both frequent closed itemsets and biclusters in high-dimensional binary data. The method is based on simple but very powerful matrix and vector multiplication approaches that ensure that all patterns can be discovered in a fast manner. The proposed algorithm has been implemented in the commonly used MATLAB environment and is freely available for researchers.
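The core matrix-multiplication idea can be illustrated directly: for a binary transaction table, a single product yields the support of every item pair at once (a toy 4x4 example, not the paper's full algorithm):

```python
import numpy as np

# Binary transaction table: rows = transactions, columns = items (toy data).
B = np.array([[1, 1, 0, 1],
              [1, 1, 1, 0],
              [0, 1, 1, 1],
              [1, 1, 0, 0]])

# One matrix product gives every pairwise support at once:
# support[i, j] = number of transactions containing both item i and item j.
support = B.T @ B
print(support[0, 1])   # 3: items 0 and 1 co-occur in three transactions
print(support[1, 1])   # 4: item 1 appears in all four transactions
```

Batched products of this kind are what let bit-table methods scan for frequent closed itemsets and biclusters without enumerating candidate sets one by one.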
Characterization of discontinuities in high-dimensional stochastic problems on adaptive sparse grids
International Nuclear Information System (INIS)
Jakeman, John D.; Archibald, Richard; Xiu Dongbin
2011-01-01
In this paper we present a set of efficient algorithms for detection and identification of discontinuities in high-dimensional space. The method is based on an extension of polynomial annihilation for discontinuity detection in low dimensions. Compared to the earlier work, the present method offers significant improvements for high-dimensional problems. The core of the algorithms relies on adaptive refinement of sparse grids. It is demonstrated that in the commonly encountered cases where a discontinuity resides on a small subset of the dimensions, the present method becomes 'optimal', in the sense that the total number of points required for function evaluations depends linearly on the dimensionality of the space. The details of the algorithms are presented, and various numerical examples are utilized to demonstrate the efficacy of the method.
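A one-dimensional caricature of the underlying idea, flagging grid cells where local differences cannot be annihilated by a smooth polynomial, might look like this (the grid and jump location are assumptions for illustration):

```python
import numpy as np

def jump_indicator(x, f):
    """1-D caricature of polynomial annihilation: a first divided difference
    annihilates constants, so large values flag a jump between grid points."""
    return np.abs(np.diff(f)) / np.diff(x)

x = np.linspace(0.0, 1.0, 21)
f = np.where(x < 0.5, np.sin(x), np.sin(x) + 1.0)   # assumed unit jump at x = 0.5
ind = jump_indicator(x, f)
print(round(float(x[np.argmax(ind)]), 2))           # 0.45: cell just left of the jump
```

The paper's algorithms apply higher-order annihilation stencils dimension by dimension on an adaptively refined sparse grid, so that cells flagged this way are the only ones refined further.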
Non-intrusive low-rank separated approximation of high-dimensional stochastic models
Doostan, Alireza; Validi, AbdoulAhad; Iaccarino, Gianluca
2013-01-01
This work proposes a sampling-based (non-intrusive) approach within the context of low-rank separated representations to tackle the issue of curse of dimensionality associated with the solution of models, e.g., PDEs/ODEs, with high-dimensional random inputs. Under some conditions discussed in detail, the number of random realizations of the solution required for a successful approximation grows linearly with respect to the number of random inputs. The construction of the separated representation is achieved via a regularized alternating least-squares regression, together with an error indicator to estimate model parameters. The computational complexity of such a construction is quadratic in the number of random inputs. The performance of the method is investigated through its application to three numerical examples including two ODE problems with high-dimensional random inputs. © 2013 Elsevier B.V.
Non-intrusive low-rank separated approximation of high-dimensional stochastic models
Doostan, Alireza
2013-08-01
This work proposes a sampling-based (non-intrusive) approach within the context of low-rank separated representations to tackle the issue of curse of dimensionality associated with the solution of models, e.g., PDEs/ODEs, with high-dimensional random inputs. Under some conditions discussed in detail, the number of random realizations of the solution required for a successful approximation grows linearly with respect to the number of random inputs. The construction of the separated representation is achieved via a regularized alternating least-squares regression, together with an error indicator to estimate model parameters. The computational complexity of such a construction is quadratic in the number of random inputs. The performance of the method is investigated through its application to three numerical examples including two ODE problems with high-dimensional random inputs. © 2013 Elsevier B.V.
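The alternating least-squares construction can be caricatured in the rank-1, two-variable case (an exactly separable toy function is assumed, and the regularization and error indicator of the full method are omitted):

```python
import numpy as np

def rank1_als(F, iters=20):
    """Rank-1 separated approximation F[i, j] ~ u[i] * v[j] via alternating
    least squares (regularization and error indicator omitted in this sketch)."""
    u = np.ones(F.shape[0])
    v = np.ones(F.shape[1])
    for _ in range(iters):
        v = F.T @ u / (u @ u)   # least-squares update of v with u fixed
        u = F @ v / (v @ v)     # least-squares update of u with v fixed
    return u, v

# Exactly separable toy function f(x1, x2) = exp(-x1) * cos(x2).
x = np.linspace(0.0, 1.0, 30)
F = np.outer(np.exp(-x), np.cos(x))
u, v = rank1_als(F)
err = np.linalg.norm(F - np.outer(u, v)) / np.linalg.norm(F)
print(err < 1e-8)   # True: ALS recovers the separable structure
```

The full method works with sums of such separable terms evaluated at scattered random samples rather than a full tensor grid, which is what keeps the sample requirement linear in the number of random inputs.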
Su, Yapeng; Shi, Qihui; Wei, Wei
2017-02-01
New insights into cellular heterogeneity over the last decade have provoked the development of a variety of single-cell omics tools at a lightning pace. The resultant high-dimensional single-cell data generated by these tools require new theoretical approaches and analytical algorithms for effective visualization and interpretation. In this review, we briefly survey the state-of-the-art single-cell proteomic tools with a particular focus on data acquisition and quantification, followed by an elaboration of a number of statistical and computational approaches developed to date for dissecting the high-dimensional single-cell data. The underlying assumptions, unique features, and limitations of the analytical methods, together with the designated biological questions they seek to answer, will be discussed. Particular attention will be given to those information-theoretical approaches that are anchored in a set of first principles of physics and can yield detailed (and often surprising) predictions. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
A Shell Multi-dimensional Hierarchical Cubing Approach for High-Dimensional Cube
Zou, Shuzhi; Zhao, Li; Hu, Kongfa
The pre-computation of data cubes is critical for improving the response time of OLAP systems and accelerating data mining tasks in large data warehouses. However, as the sizes of data warehouses grow, the time it takes to perform this pre-computation becomes a significant performance bottleneck. In a high-dimensional data warehouse, it might not be practical to build all these cuboids and their indices. In this paper, we propose a shell multi-dimensional hierarchical cubing algorithm, based on an extension of the previous minimal cubing approach. This method partitions the high-dimensional data cube into low-dimensional hierarchical cubes. Experimental results show that the proposed method is significantly more efficient than other existing cubing methods.
Minimax Rate-optimal Estimation of High-dimensional Covariance Matrices with Incomplete Data.
Cai, T Tony; Zhang, Anru
2016-09-01
Missing data occur frequently in a wide range of applications. In this paper, we consider estimation of high-dimensional covariance matrices in the presence of missing observations under a general missing completely at random model in the sense that the missingness is not dependent on the values of the data. Based on incomplete data, estimators for bandable and sparse covariance matrices are proposed and their theoretical and numerical properties are investigated. Minimax rates of convergence are established under the spectral norm loss and the proposed estimators are shown to be rate-optimal under mild regularity conditions. Simulation studies demonstrate that the estimators perform well numerically. The methods are also illustrated through an application to data from four ovarian cancer studies. The key technical tools developed in this paper are of independent interest and potentially useful for a range of related problems in high-dimensional statistical inference with missing data.
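The starting point of such estimators, a generalized sample covariance computed from pairwise-complete observations under the missing-completely-at-random model, can be sketched as follows (column-wise means are a simplification here, and the banding/thresholding regularization studied in the paper is omitted):

```python
import numpy as np

def pairwise_complete_cov(X):
    """Generalized sample covariance under missing-completely-at-random data
    (NaN = missing): entry (j, k) averages over the rows where both columns
    are observed. Column-wise means are a simplifying assumption."""
    mask = ~np.isnan(X)
    mu = np.nanmean(X, axis=0)
    Xc = np.where(mask, X - mu, 0.0)
    counts = mask.T.astype(float) @ mask.astype(float)   # pairwise sample sizes
    return (Xc.T @ Xc) / counts

# Toy data: unit-variance Gaussians with ~20% of entries deleted at random.
rng = np.random.default_rng(4)
X = rng.standard_normal((1000, 3))
X[rng.random(X.shape) < 0.2] = np.nan
S = pairwise_complete_cov(X)
print(bool(np.all(np.abs(np.diag(S) - 1.0) < 0.2)))   # True: diagonal near 1
```

Because each entry uses a different effective sample size, such an estimator need not be positive semi-definite, which is one reason the subsequent regularization and the minimax analysis above are needed.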
Minimax Rate-optimal Estimation of High-dimensional Covariance Matrices with Incomplete Data*
Cai, T. Tony; Zhang, Anru
2016-01-01
Missing data occur frequently in a wide range of applications. In this paper, we consider estimation of high-dimensional covariance matrices in the presence of missing observations under a general missing completely at random model in the sense that the missingness is not dependent on the values of the data. Based on incomplete data, estimators for bandable and sparse covariance matrices are proposed and their theoretical and numerical properties are investigated. Minimax rates of convergence are established under the spectral norm loss and the proposed estimators are shown to be rate-optimal under mild regularity conditions. Simulation studies demonstrate that the estimators perform well numerically. The methods are also illustrated through an application to data from four ovarian cancer studies. The key technical tools developed in this paper are of independent interest and potentially useful for a range of related problems in high-dimensional statistical inference with missing data. PMID:27777471
Distribution of high-dimensional entanglement via an intra-city free-space link.
Steinlechner, Fabian; Ecker, Sebastian; Fink, Matthias; Liu, Bo; Bavaresco, Jessica; Huber, Marcus; Scheidl, Thomas; Ursin, Rupert
2017-07-24
Quantum entanglement is a fundamental resource in quantum information processing and its distribution between distant parties is a key challenge in quantum communications. Increasing the dimensionality of entanglement has been shown to improve robustness and channel capacities in secure quantum communications. Here we report on the distribution of genuine high-dimensional entanglement via a 1.2-km-long free-space link across Vienna. We exploit hyperentanglement, that is, simultaneous entanglement in polarization and energy-time bases, to encode quantum information, and observe high-visibility interference for successive correlation measurements in each degree of freedom. These visibilities impose lower bounds on entanglement in each subspace individually and certify four-dimensional entanglement for the hyperentangled system. The high-fidelity transmission of high-dimensional entanglement under real-world atmospheric link conditions represents an important step towards long-distance quantum communications with more complex quantum systems and the implementation of advanced quantum experiments with satellite links.
Controlling chaos in low and high dimensional systems with periodic parametric perturbations
International Nuclear Information System (INIS)
Mirus, K.A.; Sprott, J.C.
1998-06-01
The effect of applying a periodic perturbation to an accessible parameter of various chaotic systems is examined. Numerical results indicate that perturbation frequencies near the natural frequencies of the unstable periodic orbits of the chaotic systems can result in limit cycles for relatively small perturbations. Such perturbations can also control or significantly reduce the dimension of high-dimensional systems. Initial application to the control of fluctuations in a prototypical magnetic fusion plasma device will be reviewed.
A Comparison of Machine Learning Methods in a High-Dimensional Classification Problem
Zekić-Sušac, Marijana; Pfeifer, Sanja; Šarlija, Nataša
2014-01-01
Background: Large-dimensional data modelling often relies on variable reduction methods in the pre-processing and in the post-processing stage. However, such a reduction usually provides less information and yields a lower accuracy of the model. Objectives: The aim of this paper is to assess the high-dimensional classification problem of recognizing entrepreneurial intentions of students by machine learning methods. Methods/Approach: Four methods were tested: artificial neural networks, CART ...
Preface [HD3-2015: International meeting on high-dimensional data-driven science]
International Nuclear Information System (INIS)
2016-01-01
A never-ending series of innovations in measurement technology and evolutions in information and communication technologies have led to the ongoing generation and accumulation of large quantities of high-dimensional data every day. While detailed data-centric approaches have been pursued in respective research fields, situations have been encountered where the same mathematical framework of high-dimensional data analysis can be found in a wide variety of seemingly unrelated research fields, such as estimation on the basis of undersampled Fourier transform in nuclear magnetic resonance spectroscopy in chemistry, in magnetic resonance imaging in medicine, and in astronomical interferometry in astronomy. In such situations, bringing diverse viewpoints together therefore becomes a driving force for the creation of innovative developments in various different research fields. This meeting focuses on “Sparse Modeling” (SpM) as a methodology for creation of innovative developments through the incorporation of a wide variety of viewpoints in various research fields. The objective of this meeting is to offer a forum where researchers with interest in SpM can assemble and exchange information on the latest results and newly established methodologies, and discuss future directions of the interdisciplinary studies for High-Dimensional Data-Driven science (HD³). The meeting was held in Kyoto from 14-17 December 2015. We are pleased to publish 22 papers contributed by invited speakers in this volume of Journal of Physics: Conference Series. We hope that this volume will promote further development of High-Dimensional Data-Driven science. (paper)
Reinforcement learning on slow features of high-dimensional input streams.
Directory of Open Access Journals (Sweden)
Robert Legenstein
Full Text Available Humans and animals are able to learn complex behaviors based on a massive stream of sensory information from different modalities. Early animal studies have identified learning mechanisms that are based on reward and punishment such that animals tend to avoid actions that lead to punishment whereas rewarded actions are reinforced. However, most algorithms for reward-based learning are only applicable if the dimensionality of the state-space is sufficiently small or its structure is sufficiently simple. Therefore, the question arises how the problem of learning on high-dimensional data is solved in the brain. In this article, we propose a biologically plausible generic two-stage learning system that can directly be applied to raw high-dimensional input streams. The system is composed of a hierarchical slow feature analysis (SFA network for preprocessing and a simple neural network on top that is trained based on rewards. We demonstrate by computer simulations that this generic architecture is able to learn quite demanding reinforcement learning tasks on high-dimensional visual input streams in a time that is comparable to the time needed when an explicit highly informative low-dimensional state-space representation is given instead of the high-dimensional visual input. The learning speed of the proposed architecture in a task similar to the Morris water maze task is comparable to that found in experimental studies with rats. This study thus supports the hypothesis that slowness learning is one important unsupervised learning principle utilized in the brain to form efficient state representations for behavioral learning.
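The slow feature analysis stage of such a pipeline can be sketched with plain linear algebra: whiten the input stream, then take the direction whose temporal derivative varies least. This toy linear SFA (the paper uses a hierarchical non-linear SFA network) recovers a slow latent sine from a random mixture; the signals and names here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0, 2 * np.pi, 2000)
slow = np.sin(t)                        # slow latent source
fast = np.sin(29 * t)                   # fast latent source
X = np.column_stack([slow, fast]) @ rng.standard_normal((2, 2))  # observed mixture

# Linear SFA: whiten the signal, then pick the direction whose temporal
# derivative has the smallest variance (the "slowest" direction).
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vt.T / s * np.sqrt(len(Xc))    # whitened signal
dZ = np.diff(Z, axis=0)
vals, vecs = np.linalg.eigh(dZ.T @ dZ)  # eigh returns ascending eigenvalues
slow_feature = Z @ vecs[:, 0]           # direction of slowest variation
```

The extracted feature correlates strongly (up to sign) with the slow source, which is exactly the kind of compact state representation the reinforcement learner is then trained on.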
High-Dimensional Function Approximation With Neural Networks for Large Volumes of Data.
Andras, Peter
2018-02-01
Approximation of high-dimensional functions is a challenge for neural networks due to the curse of dimensionality. Often the data for which the approximated function is defined resides on a low-dimensional manifold, and in principle the approximation of the function over this manifold should improve the approximation performance. It has been shown that projecting the data manifold into a lower dimensional space, followed by the neural network approximation of the function over this space, provides a more precise approximation of the function than the approximation of the function with neural networks in the original data space. However, if the data volume is very large, the projection into the low-dimensional space has to be based on a limited sample of the data. Here, we investigate the nature of the approximation error of neural networks trained over the projection space. We show that such neural networks should have better approximation performance than neural networks trained on high-dimensional data even if the projection is based on a relatively sparse sample of the data manifold. We also find that it is preferable to use a uniformly distributed sparse sample of the data for the purpose of the generation of the low-dimensional projection. We illustrate these results by considering the practical neural network approximation of a set of functions defined on high-dimensional data, including real-world data.
Salinas, Octavio
2017-01-13
Ethylene is typically produced by steam cracking of various hydrocarbon feedstocks. The gaseous products are then separated in a demethanizer followed by a deethanizer unit and finally sent to a C2 splitter for the final purification step. Cryogenic distillation of ethylene from ethane is the most energy-intensive unit operation process in the chemical industry. Therefore, the development of more energy-efficient processes for ethylene purification is highly desirable. Membrane-based separation has been proposed as an alternative option for replacement or debottlenecking of C2 splitters, but current polymer membrane materials exhibit insufficient mixed-gas C2H4/C2H6 selectivity (<7) to be technically and economically attractive. In this work, a highly selective carbon molecular sieve (CMS) membrane derived from a novel spirobisindane-based polyimide of intrinsic microporosity (PIM-6FDA) was developed and characterized. PIM-6FDA showed a single-stage degradation process under an inert nitrogen atmosphere which commenced at ∼480 °C. The CMS formed by pyrolysis at 800 °C had a diffusion/size-sieving-controlled morphology with a mixed-gas (50% C2H4/50% C2H6) ethylene/ethane selectivity of 15.6 at 20 bar feed pressure at 35 °C. The mixed-gas ethylene/ethane selectivity is the highest reported value for CMS-type membranes to date.
Directory of Open Access Journals (Sweden)
Malgorzata Nowicka
2017-05-01
Full Text Available High dimensional mass and flow cytometry (HDCyto) experiments have become a method of choice for high-throughput interrogation and characterization of cell populations. Here, we present an R-based pipeline for differential analyses of HDCyto data, largely based on Bioconductor packages. We computationally define cell populations using FlowSOM clustering, and facilitate an optional but reproducible strategy for manual merging of algorithm-generated clusters. Our workflow offers different analysis paths, including association of cell type abundance with a phenotype or changes in signaling markers within specific subpopulations, or differential analyses of aggregated signals. Importantly, the differential analyses we show are based on regression frameworks where the HDCyto data is the response; thus, we are able to model arbitrary experimental designs, such as those with batch effects, paired designs and so on. In particular, we apply generalized linear mixed models to analyses of cell population abundance or cell-population-specific analyses of signaling markers, allowing overdispersion in cell count or aggregated signals across samples to be appropriately modeled. To support the formal statistical analyses, we encourage exploratory data analysis at every step, including quality control (e.g. multi-dimensional scaling plots), reporting of clustering results (dimensionality reduction, heatmaps with dendrograms) and differential analyses (e.g. plots of aggregated signals).
DEFF Research Database (Denmark)
Burniol Figols, Anna; Varrone, Cristiano; Le, Simone Balzer
2018-01-01
in the supernatant by means of mixed microbial consortia selection strategies. The process showed highly reproducible results in terms of PHA yield, 0.99 ± 0.07 Cmol PHA/Cmol S (0.84 g COD PHA/g COD S), PHA content (76 ± 3.1 g PHA/100 g TSS) and 1,3-PDO recovery (99 ± 2.1%). The combined process had an ultimate...
Synergy effects in mixed Bi2O3, MoO3 and V2O5 catalysts for selective oxidation of propylene
DEFF Research Database (Denmark)
Nguyen, Tien The; Le, Thang Minh; Truong, Duc Duc
2012-01-01
% Bi2Mo3O12 and 78.57 mol% BiVO4), corresponding to the compound Bi1-x/3V1-xMoxO4 with x = 0.45 (Bi0.85V0.55Mo0.45O4), exhibited the highest activity for the selective oxidation of propylene to acrolein. The mixed sample prepared chemically by a sol–gel method possessed higher activity than...
International Nuclear Information System (INIS)
Kripylo, P.; Ritter, D.; Hahn, H.; Spiess, H.; Kraak, P.
1981-01-01
Whereas MoO3 and phosphate stabilize the low valence states of vanadium in the phase structure of V-Mo mixed catalysts, CoO influences the activity only, but not the selectivity. The catalysts show maxima of activity and selectivity at V/Mo ratios of 4 to 6. Ageing is caused by phase separation connected with the appearance of an MoO3 phase and an increase of the V/Mo ratio in the phase of the active component.
On-chip generation of high-dimensional entangled quantum states and their coherent control.
Kues, Michael; Reimer, Christian; Roztocki, Piotr; Cortés, Luis Romero; Sciara, Stefania; Wetzel, Benjamin; Zhang, Yanbing; Cino, Alfonso; Chu, Sai T; Little, Brent E; Moss, David J; Caspani, Lucia; Azaña, José; Morandotti, Roberto
2017-06-28
Optical quantum states based on entangled photons are essential for solving questions in fundamental physics and are at the heart of quantum information science. Specifically, the realization of high-dimensional states (D-level quantum systems, that is, qudits, with D > 2) and their control are necessary for fundamental investigations of quantum mechanics, for increasing the sensitivity of quantum imaging schemes, for improving the robustness and key rate of quantum communication protocols, for enabling a richer variety of quantum simulations, and for achieving more efficient and error-tolerant quantum computation. Integrated photonics has recently become a leading platform for the compact, cost-efficient, and stable generation and processing of non-classical optical states. However, so far, integrated entangled quantum sources have been limited to qubits (D = 2). Here we demonstrate on-chip generation of entangled qudit states, where the photons are created in a coherent superposition of multiple high-purity frequency modes. In particular, we confirm the realization of a quantum system with at least one hundred dimensions, formed by two entangled qudits with D = 10. Furthermore, using state-of-the-art, yet off-the-shelf telecommunications components, we introduce a coherent manipulation platform with which to control frequency-entangled states, capable of performing deterministic high-dimensional gate operations. We validate this platform by measuring Bell inequality violations and performing quantum state tomography. Our work enables the generation and processing of high-dimensional quantum states in a single spatial mode.
Covariance Method of the Tunneling Radiation from High Dimensional Rotating Black Holes
Li, Hui-Ling; Han, Yi-Wen; Chen, Shuai-Ru; Ding, Cong
2018-04-01
In this paper, the Angheben-Nadalini-Vanzo-Zerbini (ANVZ) covariance method is used to study the tunneling radiation from the Kerr-Gödel black hole and the Myers-Perry black hole with two independent angular momenta. By solving the Hamilton-Jacobi equation and separating the variables, the radial motion equation of a tunneling particle is obtained. Using the near-horizon approximation and the proper spatial distance, we calculate the tunneling rate and the temperature of Hawking radiation. Thus, the ANVZ covariance method is extended to the study of tunneling radiation from high dimensional black holes.
Efficient and accurate nearest neighbor and closest pair search in high-dimensional space
Tao, Yufei
2010-07-01
Nearest Neighbor (NN) search in high-dimensional space is an important problem in many applications. From the database perspective, a good solution needs to have two properties: (i) it can be easily incorporated in a relational database, and (ii) its query cost should increase sublinearly with the dataset size, regardless of the data and query distributions. Locality-Sensitive Hashing (LSH) is a well-known methodology fulfilling both requirements, but its current implementations either incur expensive space and query cost, or abandon its theoretical guarantee on the quality of query results. Motivated by this, we improve LSH by proposing an access method called the Locality-Sensitive B-tree (LSB-tree) to enable fast, accurate, high-dimensional NN search in relational databases. The combination of several LSB-trees forms an LSB-forest that has strong quality guarantees, but improves dramatically the efficiency of the previous LSH implementation having the same guarantees. In practice, the LSB-tree itself is also an effective index which consumes linear space, supports efficient updates, and provides accurate query results. In our experiments, the LSB-tree was faster than: (i) iDistance (a famous technique for exact NN search) by two orders of magnitude, and (ii) MedRank (a recent approximate method with nontrivial quality guarantees) by one order of magnitude, and meanwhile returned much better results. As a second step, we extend our LSB technique to solve another classic problem, called Closest Pair (CP) search, in high-dimensional space. The long-term challenge for this problem has been to achieve subquadratic running time at very high dimensionalities, which most of the existing solutions fail to achieve. We show that, using an LSB-forest, CP search can be accomplished in (worst-case) time significantly lower than the quadratic complexity, yet still ensuring very good quality. In practice, accurate answers can be found using just two LSB-trees, thus giving a substantial
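The LSH principle that the LSB-tree builds on can be illustrated with a toy quantized-random-projection hash: nearby points tend to fall into the same hash cells and become candidate neighbors, so a query inspects only a small bucket instead of the whole dataset. This is a plain LSH sketch, not the LSB-tree itself; the class and parameter names are invented for illustration.

```python
import numpy as np

class RandomProjectionLSH:
    """Toy locality-sensitive hash: quantized random projections.
    Points whose projections fall in the same cells become candidates."""
    def __init__(self, dim, n_hashes=8, width=4.0, seed=0):
        rng = np.random.default_rng(seed)
        self.A = rng.standard_normal((n_hashes, dim))
        self.b = rng.uniform(0, width, n_hashes)
        self.width = width
        self.table = {}

    def _key(self, x):
        return tuple(np.floor((self.A @ x + self.b) / self.width).astype(int))

    def index(self, points):
        self.points = points
        for i, x in enumerate(points):
            self.table.setdefault(self._key(x), []).append(i)

    def query(self, q):
        cand = self.table.get(self._key(q), [])
        if not cand:                      # empty bucket: fall back to a scan
            cand = range(len(self.points))
        d = [np.linalg.norm(self.points[i] - q) for i in cand]
        return list(cand)[int(np.argmin(d))]

rng = np.random.default_rng(2)
data = rng.standard_normal((1000, 16))
lsh = RandomProjectionLSH(dim=16)
lsh.index(data)
q = data[42] + 1e-4 * rng.standard_normal(16)   # near-duplicate query
```

A query for a slight perturbation of an indexed point lands in the same bucket with high probability; the LSB-tree's contribution is organizing such hash values in a B-tree so the structure fits a relational database and keeps the quality guarantees cheaply.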
DEFF Research Database (Denmark)
Ding, Yunhong; Bacco, Davide; Dalgaard, Kjeld
2017-01-01
is intrinsically limited to 1 bit/photon. Here we propose and experimentally demonstrate, for the first time, a high-dimensional quantum key distribution protocol based on space division multiplexing in multicore fiber using silicon photonic integrated lightwave circuits. We successfully realized three mutually......-dimensional quantum states, and enables breaking the information efficiency limit of traditional quantum key distribution protocols. In addition, the silicon photonic circuits used in our work integrate variable optical attenuators, highly efficient multicore fiber couplers, and Mach-Zehnder interferometers, enabling...
High-dimensional chaos from self-sustained collisions of solitons
Energy Technology Data Exchange (ETDEWEB)
Yildirim, O. Ozgur, E-mail: donhee@seas.harvard.edu, E-mail: oozgury@gmail.com [Cavium, Inc., 600 Nickerson Rd., Marlborough, Massachusetts 01752 (United States); Ham, Donhee, E-mail: donhee@seas.harvard.edu, E-mail: oozgury@gmail.com [Harvard University, 33 Oxford St., Cambridge, Massachusetts 02138 (United States)
2014-06-16
We experimentally demonstrate chaos generation based on collisions of electrical solitons on a nonlinear transmission line. The nonlinear line creates solitons, and an amplifier connected to it provides gain to these solitons for their self-excitation and self-sustenance. Critically, the amplifier also provides a mechanism to enable and intensify collisions among solitons. These collisional interactions are of intrinsically nonlinear nature, modulating the phase and amplitude of solitons, thus causing chaos. This chaos generated by the exploitation of the nonlinear wave phenomena is inherently high-dimensional, which we also demonstrate.
Inferring biological tasks using Pareto analysis of high-dimensional data.
Hart, Yuval; Sheftel, Hila; Hausser, Jean; Szekely, Pablo; Ben-Moshe, Noa Bossel; Korem, Yael; Tendler, Avichai; Mayo, Avraham E; Alon, Uri
2015-03-01
We present the Pareto task inference method (ParTI; http://www.weizmann.ac.il/mcb/UriAlon/download/ParTI) for inferring biological tasks from high-dimensional biological data. Data are described as a polytope, and features maximally enriched closest to the vertices (or archetypes) allow identification of the tasks the vertices represent. We demonstrate that human breast tumors and mouse tissues are well described by tetrahedrons in gene expression space, with specific tumor types and biological functions enriched at each of the vertices, suggesting four key tasks.
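The enrichment step of the described approach can be sketched as follows, assuming the polytope's vertices (archetypes) are already known; ParTI itself also fits the polytope, which this toy omits, and the variable names and data are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
archetypes = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])  # triangle vertices
w = rng.dirichlet(np.ones(3), size=500)               # samples as convex mixtures
X = w @ archetypes                                    # data inside the polytope
feature = w[:, 2] + 0.05 * rng.standard_normal(500)   # a trait tied to vertex 2

def enrichment(X, feature, vertex, frac=0.1):
    """Mean feature value among the samples closest to a vertex,
    minus its mean over all remaining samples."""
    d = np.linalg.norm(X - vertex, axis=1)
    near = d <= np.quantile(d, frac)
    return feature[near].mean() - feature[~near].mean()
```

Features maximally enriched near a vertex identify the biological task that archetype represents; here the planted trait is enriched at vertex 2 and not at vertex 0.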
Computing and visualizing time-varying merge trees for high-dimensional data
Energy Technology Data Exchange (ETDEWEB)
Oesterling, Patrick [Univ. of Leipzig (Germany); Heine, Christian [Univ. of Kaiserslautern (Germany); Weber, Gunther H. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Morozov, Dmitry [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Scheuermann, Gerik [Univ. of Leipzig (Germany)
2017-06-03
We introduce a new method that identifies and tracks features in arbitrary dimensions using the merge tree -- a structure for identifying topological features based on thresholding in scalar fields. This method analyzes the evolution of features of the function by tracking changes in the merge tree and relates features by matching subtrees between consecutive time steps. Using the time-varying merge tree, we present a structural visualization of the changing function that illustrates both features and their temporal evolution. We demonstrate the utility of our approach by applying it to temporal cluster analysis of high-dimensional point clouds.
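A minimal version of the underlying structure, for a 1D scalar field, can be built with a union-find sweep: components of the sublevel sets are born at local minima and a merge event is recorded when a vertex bridges two components. This is a sketch of merge-tree construction in one dimension only, not the paper's arbitrary-dimension, time-varying method; names are illustrative.

```python
import numpy as np

def merge_tree_1d(f):
    """Sublevel-set merge tree of a 1D scalar field: sweep vertices in
    increasing order of value; a component is born at each local minimum
    and a merge is recorded when a vertex joins two components."""
    parent = {}

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path compression
            i = parent[i]
        return i

    leaves, merges = [], []
    for i in np.argsort(f):
        parent[i] = i
        roots = {find(j) for j in (i - 1, i + 1) if j in parent}
        if not roots:
            leaves.append(int(i))           # birth: a local minimum
        elif len(roots) == 2:               # i bridges two components: a saddle
            merges.append((float(f[i]), tuple(sorted(map(int, roots)))))
        for r in roots:
            parent[r] = i                   # union everything touching i
    return leaves, merges

f = np.array([3.0, 1.0, 2.0, 0.5, 4.0, 2.5, 3.5])
leaves, merges = merge_tree_1d(f)
```

Tracking features over time then amounts to matching subtrees of such structures between consecutive time steps, as the abstract describes.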
Non-Asymptotic Oracle Inequalities for the High-Dimensional Cox Regression via Lasso.
Kong, Shengchun; Nan, Bin
2014-01-01
We consider finite sample properties of the regularized high-dimensional Cox regression via lasso. Existing literature focuses on linear models or generalized linear models with Lipschitz loss functions, where the empirical risk functions are the summations of independent and identically distributed (iid) losses. The summands in the negative log partial likelihood function for censored survival data, however, are neither iid nor Lipschitz. We first approximate the negative log partial likelihood function by a sum of iid non-Lipschitz terms, then derive the non-asymptotic oracle inequalities for the lasso penalized Cox regression using pointwise arguments to tackle the difficulties caused by lacking iid Lipschitz losses.
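The estimator whose theory is studied above, the lasso-penalized negative log partial likelihood, can be sketched numerically with proximal gradient descent. The optimizer, step size, and penalty level here are illustrative choices of this sketch, not the paper's (whose contribution is the oracle-inequality theory, not an algorithm).

```python
import numpy as np

def cox_grad(beta, X, time, event):
    """Gradient of the negative log partial likelihood (Breslow form);
    after sorting by decreasing time, risk sets are prefixes."""
    order = np.argsort(-time)
    Xo, eo = X[order], event[order]
    eta = Xo @ beta
    w = np.exp(eta - eta.max())          # stabilized; the constant cancels below
    cum_w = np.cumsum(w)                 # risk-set sums
    cum_wx = np.cumsum(w[:, None] * Xo, axis=0)
    return -(eo[:, None] * (Xo - cum_wx / cum_w[:, None])).sum(axis=0)

def lasso_cox(X, time, event, lam, step=0.002, iters=3000):
    """Proximal gradient: a gradient step on the partial likelihood,
    then soft-thresholding for the l1 penalty."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        beta = beta - step * cox_grad(beta, X, time, event)
        beta = np.sign(beta) * np.maximum(np.abs(beta) - step * lam, 0.0)
    return beta

rng = np.random.default_rng(4)
n, p = 200, 10
X = rng.standard_normal((n, p))
time = rng.exponential(1.0 / np.exp(X[:, 0]))   # only covariate 0 affects hazard
event = np.ones(n, dtype=bool)                  # no censoring, for simplicity
beta = lasso_cox(X, time, event, lam=30.0)
```

The soft-thresholding step produces exact zeros, so the fit recovers the single informative covariate while most noise coefficients are set to zero, which is the sparse behavior the oracle inequalities quantify.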
International Nuclear Information System (INIS)
Mitu, L.; Tigae, C.
2009-01-01
Four electrodes with liquid membrane, Cu²⁺-selective and Ni²⁺-selective, not previously described in the literature, were prepared and characterized. Electrodes 1 and 2 are based on mixed complexes of Cu(II) and Ni(II) with isonicotinoylhydrazone-2-aldehyde pyrrole (INH2AP = HL¹) as ligand, and electrodes 3 and 4 are based on the mixed complexes with isonicotinoylhydrazone-2-hydroxy-1-naphthaldehyde (INH2HNA = H₂L²). The Cu²⁺-selective and Ni²⁺-selective electrodes have been used to determine copper and nickel ions in aqueous solutions, both by direct potentiometry and by potentiometric titration with EDTA. They have also been used for determining Cu²⁺ and Ni²⁺ ions in industrial waters by direct potentiometry. The analytical results obtained have been checked by the standard addition method and by comparison with determinations through atomic absorption spectrometry. (author)
Directory of Open Access Journals (Sweden)
Jensen Just
2002-05-01
Full Text Available Abstract In this paper, we consider selection based on the best predictor of animal additive genetic values in Gaussian linear mixed models, threshold models, Poisson mixed models, and log normal frailty models for survival data (including models with time-dependent covariates with associated fixed or random effects). In the different models, expressions are given (when these can be found; otherwise unbiased estimates are given) for prediction error variance, accuracy of selection and expected response to selection on the additive genetic scale and on the observed scale. The expressions given for non-Gaussian traits are generalisations of the well-known formulas for Gaussian traits, and reflect, for Poisson mixed models and frailty models for survival data, the hierarchical structure of the models. In general, the ratio of the additive genetic variance to the total variance in the Gaussian part of the model (heritability on the normally distributed level of the model), or a generalised version of heritability, plays a central role in these formulas.
Directory of Open Access Journals (Sweden)
Raftery Adrian E
2009-02-01
Full Text Available Abstract Background Microarray technology is increasingly used to identify potential biomarkers for cancer prognostics and diagnostics. Previously, we have developed the iterative Bayesian Model Averaging (BMA algorithm for use in classification. Here, we extend the iterative BMA algorithm for application to survival analysis on high-dimensional microarray data. The main goal in applying survival analysis to microarray data is to determine a highly predictive model of patients' time to event (such as death, relapse, or metastasis using a small number of selected genes. Our multivariate procedure combines the effectiveness of multiple contending models by calculating the weighted average of their posterior probability distributions. Our results demonstrate that our iterative BMA algorithm for survival analysis achieves high prediction accuracy while consistently selecting a small and cost-effective number of predictor genes. Results We applied the iterative BMA algorithm to two cancer datasets: breast cancer and diffuse large B-cell lymphoma (DLBCL data. On the breast cancer data, the algorithm selected a total of 15 predictor genes across 84 contending models from the training data. The maximum likelihood estimates of the selected genes and the posterior probabilities of the selected models from the training data were used to divide patients in the test (or validation dataset into high- and low-risk categories. Using the genes and models determined from the training data, we assigned patients from the test data into highly distinct risk groups (as indicated by a p-value of 7.26e-05 from the log-rank test. Moreover, we achieved comparable results using only the 5 top selected genes with 100% posterior probabilities. On the DLBCL data, our iterative BMA procedure selected a total of 25 genes across 3 contending models from the training data. Once again, we assigned the patients in the validation set to significantly distinct risk groups (p
Atuo, Fidelis Akunke; O'Connell, Timothy John
2017-08-01
Sympatric predators are predicted to partition resources, especially under conditions of food limitation. Spatial heterogeneity that influences prey availability might play an important role in the scales at which potential competitors select habitat. We assessed potential mechanisms for coexistence by examining the role of heterogeneity in resource partitioning between sympatric raptors overwintering in the southern Great Plains. We conducted surveys for wintering Red-tailed Hawk (Buteo jamaicensis) and Northern Harrier (Circus cyaneus) at two state wildlife management areas in Oklahoma, USA. We used information from repeated distance sampling to project use locations in a GIS. We applied resource selection functions to model habitat selection at three scales and analyzed for niche partitioning using the outlying mean index. Habitat selection of the two predators was mediated by spatial heterogeneity. The two predators demonstrated significant fine-scale discrimination in habitat selection in homogeneous landscapes, but were more sympatric in heterogeneous landscapes. Red-tailed Hawk used a variety of cover types in heterogeneous landscapes but specialized on riparian forest in homogeneous landscapes. Northern Harrier specialized on upland grasslands in homogeneous landscapes but selected more cover types in heterogeneous landscapes. Our study supports the growing body of evidence that landscapes can affect animal behaviors. In the system we studied, larger patches of primary land cover types were associated with greater allopatry in habitat selection between two potentially competing predators. Heterogeneity within the scale of raptor home ranges was associated with greater sympatry in use and less specialization in land cover types selected.
International Nuclear Information System (INIS)
Onishi, Y.; Hudson, J.D.
1996-01-01
This preliminary assessment documents a set of analyses that were performed to determine the potential for Hanford waste Tank 241-SY-102 waste properties to be adversely affected by mixing the current tank contents or by injecting additional diluent into the tank during sludge mobilization. As a part of this effort, the effects of waste heating that will occur as a result of mixer pump operations are also examined. Finally, the predicted transport behavior of the resulting slurries is compared with the waste acceptance criteria for the Cross-Site Transfer System (CSTS). This work is being performed by Pacific Northwest National Laboratory in support of Westinghouse Hanford Company's W-211 Retrieval Project. We applied the equilibrium chemical code, GMIN, to predict potential chemical reactions. We examined the potential effects of mixing the current tank contents (sludge and supernatant liquid) at a range of temperatures and, separately, of adding pure water at a volume ratio of 1:2:2 (sludge:supernatant liquid:water) as an example of further diluting the current tank contents. The main conclusion of the chemical modeling is that mixing the sludge and the supernate (with or without additional water) in Tank 241-SY-102 dissolves all sodium-containing solids (i.e., NaNO3(s), thenardite, NaF(s), and halite), but does not significantly affect the amorphous Cr(OH)3 and calcite phase distribution. A very small amount of gibbsite [Al(OH)3(s)] might precipitate at 25 °C, but a somewhat larger amount of gibbsite is predicted to dissolve at the higher temperatures. In concurrence with the reported tank data, the model affirmed that the interstitial solution within the sludge is saturated with respect to many of the solids species in the sludge, but that the supernatant liquid is not saturated with respect to many of the major solids species in the sludge. This indicates that a further evaluation of the sludge mixing could prove beneficial.
Ghosts in high dimensional non-linear dynamical systems: The example of the hypercycle
International Nuclear Information System (INIS)
Sardanyes, Josep
2009-01-01
Ghost-induced delayed transitions are analyzed in high dimensional non-linear dynamical systems by means of the hypercycle model. The hypercycle is a network of catalytically-coupled self-replicating RNA-like macromolecules, and has been suggested to be involved in the transition from non-living to living matter in the context of earlier prebiotic evolution. It is demonstrated that, in the vicinity of the saddle-node bifurcation for symmetric hypercycles, the persistence time before extinction, Tε, tends to infinity as n → ∞ (where n is the number of units of the hypercycle), thus suggesting that the increase in the number of hypercycle units involves a longer resilient time before extinction because of the ghost. Furthermore, by means of numerical analysis, the dynamics of three large hypercycle networks is also studied, focusing on their extinction dynamics associated with the ghosts. Such networks allow exploration of the properties of the ghosts living in high dimensional phase space with n = 5, n = 10 and n = 15 dimensions. These hypercyclic networks, in agreement with other works, are shown to exhibit self-maintained oscillations governed by stable limit cycles. The bifurcation scenarios for these hypercycles are analyzed, as well as the effect of the phase space dimensionality on the delayed transition phenomena and on the scaling properties of the ghosts near the bifurcation threshold.
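The elementary hypercycle dynamics discussed above can be simulated directly from the replicator form of the equations; the following Euler sketch (time step, rates, and initial perturbation are illustrative assumptions) conserves the total concentration on the simplex by construction.

```python
import numpy as np

def hypercycle_step(x, k, dt):
    """One Euler step of the elementary hypercycle replicator equations
    dx_i/dt = x_i * (k_i * x_{i-1} - phi), where the dilution flux phi
    keeps the total concentration on the simplex constant."""
    growth = k * np.roll(x, 1)      # member i is catalysed by member i-1
    phi = np.dot(x, growth)         # mean production = outflow
    return x + dt * x * (growth - phi)

n = 5
x = np.full(n, 1.0 / n) + 1e-3 * np.arange(n)   # perturb the symmetric state
x /= x.sum()
k = np.ones(n)
for _ in range(20000):
    x = hypercycle_step(x, k, dt=0.01)
```

For n = 5 and above the interior fixed point loses stability and the trajectory settles onto the self-maintained oscillations mentioned in the abstract, while all concentrations remain positive and their sum stays at one.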
High-dimensional free-space optical communications based on orbital angular momentum coding
Zou, Li; Gu, Xiaofan; Wang, Le
2018-03-01
In this paper, we propose a high-dimensional free-space optical communication scheme using orbital angular momentum (OAM) coding. In the scheme, the transmitter encodes N-bit information by using a spatial light modulator to convert a Gaussian beam into a superposition of N OAM modes and a Gaussian mode; the receiver decodes the information with an OAM mode analyser consisting of an MZ interferometer with a rotating Dove prism, a photoelectric detector, and a computer carrying out the fast Fourier transform. The scheme realizes high-dimensional free-space optical communication and decodes the information quickly and accurately. We have verified the feasibility of the scheme by exploiting 8 (4) OAM modes and a Gaussian mode to implement a 256-ary (16-ary) coding free-space optical communication to transmit a 256-gray-scale (16-gray-scale) picture. The results show that a zero bit error rate performance has been achieved.
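Stripped of the optics, the coding arithmetic of such a scheme is simple: each bit of an 8-bit symbol toggles the presence of one OAM mode in the superposition, so 8 modes give a 256-ary alphabet. A sketch of just this mapping follows; the mode-numbering convention (bit i maps to mode l = i + 1) is an assumption for illustration.

```python
def encode_byte(value):
    """Map an 8-bit symbol to the set of OAM modes superposed with the
    Gaussian reference beam: bit i set -> OAM mode l = i + 1 is present."""
    assert 0 <= value <= 255
    return {l + 1 for l in range(8) if (value >> l) & 1}

def decode_modes(modes):
    """Recover the symbol from the set of detected OAM modes."""
    return sum(1 << (l - 1) for l in modes)

pixel = 0xA7                      # one 256-gray-scale pixel = one 8-bit symbol
modes = encode_byte(pixel)        # modes to superpose on the Gaussian beam
```

One pixel of a 256-gray-scale image thus travels as a single symbol, which is where the scheme's dimensionality gain over binary on-off keying comes from.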
Energy Efficient MAC Scheme for Wireless Sensor Networks with High-Dimensional Data Aggregate
Directory of Open Access Journals (Sweden)
Seokhoon Kim
2015-01-01
Full Text Available This paper presents a novel and sustainable medium access control (MAC) scheme for wireless sensor network (WSN) systems that process high-dimensional aggregated data. Based on a preamble signal and buffer threshold analysis, it maximizes the energy efficiency of the wireless sensor devices, which have limited energy resources. The proposed group management MAC (GM-MAC) approach not only sets the buffer threshold value of a sensor device to be reciprocal to the preamble signal but also sets a transmittable group value for each sensor device by using the preamble signal of the sink node. The primary difference between the previous and the proposed approach is that existing state-of-the-art schemes use duty cycle and sleep mode to save energy consumption of individual sensor devices, whereas the proposed scheme employs the group management MAC scheme for sensor devices to maximize the overall energy efficiency of the whole WSN system by minimizing the energy consumption of sensor devices located near the sink node. Performance evaluations show that the proposed scheme outperforms the previous schemes in terms of active time of sensor devices, transmission delay, control overhead, and energy consumption. Therefore, the proposed scheme is suitable for sensor devices in a variety of wireless sensor networking environments with high-dimensional data aggregation.
Arif, Muhammad
2012-06-01
In pattern classification problems, feature extraction is an important step, and the quality of features in discriminating different classes plays an important role. In real life, pattern classification may require a high-dimensional feature space, and it is impossible to visualize the feature space if its dimension is greater than four. In this paper, we have proposed a Similarity-Dissimilarity plot that can project a high-dimensional space onto a two-dimensional space while retaining the important characteristics required to assess the discrimination quality of the features. The Similarity-Dissimilarity plot can reveal the amount of overlap between the features of different classes. Separable data points of different classes are also visible on the plot and can be classified correctly using an appropriate classifier; hence, approximate classification accuracy can be predicted. Moreover, it is possible to see with which class the misclassified data points will be confused by the classifier. Outlier data points can also be located on the Similarity-Dissimilarity plot. Various examples of synthetic data are used to highlight the important characteristics of the proposed plot, and some real-life examples from biomedical data are also analyzed. The proposed plot is independent of the number of dimensions of the feature space.
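A minimal sketch of coordinates for such a plot, taking each point's mean within-class distance as the similarity axis and its mean between-class distance as the dissimilarity axis (the paper's exact axis definitions may differ; this is only one plausible choice):

```python
import numpy as np

def sim_dissim_coords(X, y):
    """For each point, return (mean within-class distance, mean
    between-class distance) -- one 2-D coordinate per sample, which can
    then be scattered to assess feature discrimination quality."""
    X, y = np.asarray(X, float), np.asarray(y)
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    coords = []
    for i in range(len(X)):
        same = (y == y[i])
        same[i] = False                      # exclude the point itself
        other = (y != y[i])
        coords.append((D[i, same].mean(), D[i, other].mean()))
    return np.array(coords)
```

Points of a well-separated class land below the diagonal (within-class distance smaller than between-class distance); overlapping or outlying points drift above it.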
High-dimensional quantum key distribution with the entangled single-photon-added coherent state
Energy Technology Data Exchange (ETDEWEB)
Wang, Yang [Zhengzhou Information Science and Technology Institute, Zhengzhou, 450001 (China); Synergetic Innovation Center of Quantum Information and Quantum Physics, University of Science and Technology of China, Hefei, Anhui 230026 (China); Bao, Wan-Su, E-mail: 2010thzz@sina.com [Zhengzhou Information Science and Technology Institute, Zhengzhou, 450001 (China); Synergetic Innovation Center of Quantum Information and Quantum Physics, University of Science and Technology of China, Hefei, Anhui 230026 (China); Bao, Hai-Ze; Zhou, Chun; Jiang, Mu-Sheng; Li, Hong-Wei [Zhengzhou Information Science and Technology Institute, Zhengzhou, 450001 (China); Synergetic Innovation Center of Quantum Information and Quantum Physics, University of Science and Technology of China, Hefei, Anhui 230026 (China)
2017-04-25
High-dimensional quantum key distribution (HD-QKD) can generate more secure bits per detection event and can thus achieve long-distance key distribution with a high secret key capacity. In this Letter, we present a decoy-state HD-QKD scheme with an entangled single-photon-added coherent state (ESPACS) source. We present two tight formulas to estimate the single-photon fraction of postselected events and Eve's Holevo information, and derive lower bounds on the secret key capacity and the secret key rate of our protocol. We also present a finite-key analysis for our protocol using the Chernoff bound. Our numerical results show that our protocol using one decoy state can perform better than the previous HD-QKD protocol with a spontaneous parametric down-conversion (SPDC) source using two decoy states. Moreover, when considering finite resources, the advantage is more obvious. - Highlights: • Implements the single-photon-added coherent state source in high-dimensional quantum key distribution. • Enhances both the secret key capacity and the secret key rate compared with previous schemes. • Shows excellent performance in view of statistical fluctuations.
Using High-Dimensional Image Models to Perform Highly Undetectable Steganography
Pevný, Tomáš; Filler, Tomáš; Bas, Patrick
This paper presents a complete methodology for designing practical and highly undetectable stegosystems for real digital media. The main design principle is to minimize a suitably defined distortion by means of an efficient coding algorithm. The distortion is defined as a weighted difference of extended state-of-the-art feature vectors already used in steganalysis. This allows us to "preserve" the model used by the steganalyst and thus remain undetectable even for large payloads. The framework can be efficiently implemented even when the dimensionality of the feature set used by the embedder is larger than 10⁷. The high-dimensional model is necessary to avoid known security weaknesses. Although high-dimensional models can be a problem in steganalysis, we explain why they are acceptable in steganography. As an example, we introduce HUGO, a new embedding algorithm for spatial-domain digital images, and contrast its performance with LSB matching. On the BOWS2 image database, HUGO allows the embedder to hide a 7× longer message than LSB matching at the same level of security.
Compound Structure-Independent Activity Prediction in High-Dimensional Target Space.
Balfer, Jenny; Hu, Ye; Bajorath, Jürgen
2014-08-01
Profiling of compound libraries against arrays of targets has become an important approach in pharmaceutical research. The prediction of multi-target compound activities also represents an attractive task for machine learning with potential for drug discovery applications. Herein, we have explored activity prediction in high-dimensional target space. Different types of models were derived to predict multi-target activities. The models included naïve Bayesian (NB) and support vector machine (SVM) classifiers based upon compound structure information and NB models derived on the basis of activity profiles, without considering compound structure. Because the latter approach can be applied to incomplete training data and principally depends on the feature independence assumption, SVM modeling was not applicable in this case. Furthermore, iterative hybrid NB models making use of both activity profiles and compound structure information were built. In high-dimensional target space, NB models utilizing activity profile data were found to yield more accurate activity predictions than structure-based NB and SVM models or hybrid models. An in-depth analysis of activity profile-based models revealed the presence of correlation effects across different targets and rationalized prediction accuracy. Taken together, the results indicate that activity profile information can be effectively used to predict the activity of test compounds against novel targets. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Quantum secret sharing based on modulated high-dimensional time-bin entanglement
International Nuclear Information System (INIS)
Takesue, Hiroki; Inoue, Kyo
2006-01-01
We propose a scheme for quantum secret sharing (QSS) that uses modulated high-dimensional time-bin entanglement. By randomly modulating the relative phase by {0,π}, a sender with the entanglement source can randomly change the sign of the correlation of the measurement outcomes obtained by two distant recipients. The two recipients must cooperate to obtain the sign of the correlation, which is used as a secret key. We show that our scheme is secure against intercept-and-resend (IR) and beam-splitting attacks by an outside eavesdropper thanks to the nonorthogonality of high-dimensional time-bin entangled states. We also show that a cheating attempt based on an IR attack by one of the recipients can be detected by randomly changing the dimension of the time-bin entanglement and inserting two 'vacant' slots between the packets; cheating attempts are then revealed by monitoring the count rate in the vacant slots. The proposed scheme has better experimental feasibility than previously proposed entanglement-based QSS schemes.
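The sign-of-correlation mechanism can be mimicked with a toy classical model: the sender's random phase choice flips the correlation of the two recipients' outcomes, so neither outcome alone carries information and only their XOR recovers the key bit. This illustrates only the correlation structure, not the quantum protocol or its security:

```python
import random

def qss_round(rng):
    """One round of the sign-of-correlation idea: the sender's random phase
    in {0, pi} sets the correlation sign of the two recipients' outcomes.
    Each outcome alone is a uniformly random bit; only the XOR of both
    reveals the key bit (0 <-> correlated, 1 <-> anti-correlated)."""
    key_bit = rng.randrange(2)          # sender's phase choice: 0 or pi
    a = rng.randrange(2)                # recipient A's raw outcome
    b = a ^ key_bit                     # B's outcome: flipped iff phase = pi
    return key_bit, a, b

rng = random.Random(7)
for _ in range(100):
    key, a, b = qss_round(rng)
    assert (a ^ b) == key               # the recipients must cooperate
```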
Similarity measurement method of high-dimensional data based on normalized net lattice subspace
Institute of Scientific and Technical Information of China (English)
Li Wenfa; Wang Gongming; Li Ke; Huang Su
2017-01-01
The performance of conventional similarity measurement methods is seriously affected by the curse of dimensionality of high-dimensional data. The reason is that differences along sparse and noisy dimensions occupy a large proportion of the similarity, making the dissimilarities between any two results nearly indistinguishable. A similarity measurement method for high-dimensional data based on a normalized net lattice subspace is proposed. The data range of each dimension is divided into several intervals, and the components in different dimensions are mapped onto the corresponding intervals. Only components in the same or adjacent intervals are used to calculate the similarity. To validate this method, three data types are used, and seven common similarity measurement methods are compared. The experimental results indicate that the relative difference of the method increases with the dimensionality and is approximately two to three orders of magnitude higher than that of the conventional methods. In addition, the similarity range of this method in different dimensions is [0, 1], which makes it suitable for similarity analysis after dimensionality reduction.
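The interval-mapping step described above can be sketched directly. The per-dimension contribution and the exact normalization below are our assumptions; only the lattice idea (a dimension counts only when the two components fall in the same or adjacent intervals) and the [0, 1] range follow the abstract:

```python
import numpy as np

def lattice_similarity(x, y, mins, maxs, k=10):
    """Sketch of the normalized net-lattice idea: each dimension's range is
    split into k intervals; a dimension contributes only when the two
    components fall in the same or adjacent intervals. The result is
    scaled into [0, 1]."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    ix = np.clip(((x - mins) / (maxs - mins) * k).astype(int), 0, k - 1)
    iy = np.clip(((y - mins) / (maxs - mins) * k).astype(int), 0, k - 1)
    close = np.abs(ix - iy) <= 1                  # same or adjacent interval
    if not close.any():
        return 0.0
    diff = np.abs(x - y) / (maxs - mins)          # normalized difference
    return float(np.mean(1.0 - diff[close]))      # similarity in [0, 1]
```

Sparse, far-apart dimensions thus contribute nothing, instead of dominating the measure as in conventional distances.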
Yu, Hualong; Ni, Jun
2014-01-01
Training classifiers on skewed data is a technically challenging task, and it becomes even more difficult when the data is simultaneously high-dimensional. Skewed data often appear in the biomedical field. In this study, we address this problem by combining the asymmetric bagging ensemble classifier (asBagging) presented in previous work with an improved random subspace (RS) generation strategy called feature subspace (FSS). Specifically, FSS is a novel method to improve the balance between accuracy and diversity of the base classifiers in asBagging. In view of its strong generalization capability, we adopt the support vector machine (SVM) as the base classifier. Extensive experiments on four benchmark biomedical data sets indicate that the proposed ensemble learning method outperforms many baseline approaches in terms of Accuracy, F-measure, G-mean, and AUC evaluation criteria, and thus it can be regarded as an effective and efficient tool for dealing with high-dimensional and imbalanced biomedical data.
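The asBagging-with-FSS combination can be sketched as follows. A nearest-centroid rule stands in for the paper's SVM base learner so the sketch stays dependency-free, and the bag count and subspace fraction are arbitrary; only the structure (keep all minority samples, undersample the majority, random feature subset per bag, majority vote) follows the abstract:

```python
import numpy as np

def asbagging_fss_predict(X, y, X_test, n_bags=15, subspace=0.5, seed=0):
    """Asymmetric bagging with feature-subspace sampling: every bag keeps
    all minority-class samples, undersamples the majority class to the same
    size, and sees only a random subset of features. Labels are assumed to
    be 0/1; bag predictions are combined by majority vote."""
    rng = np.random.default_rng(seed)
    X, y = np.asarray(X, float), np.asarray(y)
    X_test = np.asarray(X_test, float)
    minority = 1 if (y == 1).sum() <= (y == 0).sum() else 0
    min_idx = np.flatnonzero(y == minority)
    maj_idx = np.flatnonzero(y != minority)
    n_feats = max(1, int(subspace * X.shape[1]))
    votes = np.zeros((len(X_test), 2))
    for _ in range(n_bags):
        bag = np.concatenate(
            [min_idx, rng.choice(maj_idx, size=len(min_idx), replace=False)])
        feats = rng.choice(X.shape[1], size=n_feats, replace=False)
        Xb, yb = X[np.ix_(bag, feats)], y[bag]
        # nearest-centroid stand-in for the SVM base classifier
        mu0, mu1 = Xb[yb == 0].mean(axis=0), Xb[yb == 1].mean(axis=0)
        Xt = X_test[:, feats]
        pred = (np.linalg.norm(Xt - mu1, axis=1)
                < np.linalg.norm(Xt - mu0, axis=1)).astype(int)
        votes[np.arange(len(X_test)), pred] += 1
    return votes.argmax(axis=1)
```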
High-dimensional quantum key distribution with the entangled single-photon-added coherent state
International Nuclear Information System (INIS)
Wang, Yang; Bao, Wan-Su; Bao, Hai-Ze; Zhou, Chun; Jiang, Mu-Sheng; Li, Hong-Wei
2017-01-01
High-dimensional quantum key distribution (HD-QKD) can generate more secure bits per detection event and can thus achieve long-distance key distribution with a high secret key capacity. In this Letter, we present a decoy-state HD-QKD scheme with an entangled single-photon-added coherent state (ESPACS) source. We present two tight formulas to estimate the single-photon fraction of postselected events and Eve's Holevo information, and derive lower bounds on the secret key capacity and the secret key rate of our protocol. We also present a finite-key analysis for our protocol using the Chernoff bound. Our numerical results show that our protocol using one decoy state can perform better than the previous HD-QKD protocol with a spontaneous parametric down-conversion (SPDC) source using two decoy states. Moreover, when considering finite resources, the advantage is more obvious. - Highlights: • Implements the single-photon-added coherent state source in high-dimensional quantum key distribution. • Enhances both the secret key capacity and the secret key rate compared with previous schemes. • Shows excellent performance in view of statistical fluctuations.
High-Dimensional Single-Photon Quantum Gates: Concepts and Experiments.
Babazadeh, Amin; Erhard, Manuel; Wang, Feiran; Malik, Mehul; Nouroozi, Rahman; Krenn, Mario; Zeilinger, Anton
2017-11-03
Transformations on quantum states form a basic building block of every quantum information system. From photonic polarization to two-level atoms, complete sets of quantum gates for a variety of qubit systems are well known. For multilevel quantum systems beyond qubits, the situation is more challenging. The orbital angular momentum modes of photons comprise one such high-dimensional system for which generation and measurement techniques are well studied. However, arbitrary transformations for such quantum states are not known. Here we experimentally demonstrate a four-dimensional generalization of the Pauli X gate and all of its integer powers on single photons carrying orbital angular momentum. Together with the well-known Z gate, this forms the first complete set of high-dimensional quantum gates implemented experimentally. The concept of the X gate is based on independent access to quantum states with different parities and can thus be generalized to other photonic degrees of freedom and potentially also to other quantum systems.
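At the matrix level, the four-dimensional X gate demonstrated in this work is the cyclic shift X|j⟩ = |j+1 mod d⟩. A quick numerical check of the d = 4 case and of the Weyl commutation relation ZX = ωXZ with the generalized Z gate (this verifies only the abstract algebra, not the optical implementation):

```python
import numpy as np

d = 4
w = np.exp(2j * np.pi / d)               # primitive d-th root of unity
X = np.roll(np.eye(d), 1, axis=0)        # X|j> = |j+1 mod d>
Z = np.diag(w ** np.arange(d))           # Z|j> = w^j |j>

assert np.allclose(np.linalg.matrix_power(X, d), np.eye(d))  # X^d = I
assert np.allclose(Z @ X, w * (X @ Z))   # Weyl relation ZX = w XZ
```

The integer powers X², X³ realized in the experiment are just repeated cyclic shifts, and together with Z they generate a complete set of high-dimensional Pauli operations.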
Zhu, Lingxue; Lei, Jing; Devlin, Bernie; Roeder, Kathryn
2017-09-01
Scientists routinely compare gene expression levels in cases versus controls in part to determine genes associated with a disease. Similarly, detecting case-control differences in co-expression among genes can be critical to understanding complex human diseases; however, statistical methods have been limited by the high-dimensional nature of this problem. In this paper, we construct a sparse-Leading-Eigenvalue-Driven (sLED) test for comparing two high-dimensional covariance matrices. By focusing on the spectrum of the differential matrix, sLED provides a novel perspective that accommodates what we assume to be common, namely sparse and weak signals in gene expression data, and it is closely related to Sparse Principal Component Analysis. We prove that sLED achieves full power asymptotically under mild assumptions, and simulation studies verify that it outperforms other existing procedures under many biologically plausible scenarios. Applying sLED to the largest gene-expression dataset obtained from post-mortem brain tissue from Schizophrenia patients and controls, we provide a novel list of genes implicated in Schizophrenia and reveal intriguing patterns in gene co-expression change for Schizophrenia subjects. We also illustrate that sLED can be generalized to compare other gene-gene "relationship" matrices that are of practical interest, such as weighted adjacency matrices.
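The spectral core of the idea (take the leading eigenvalue, in absolute value, of the differential covariance matrix and calibrate it by permutation) can be sketched as follows. The actual sLED test additionally imposes sparsity via a sparse-PCA step, which is omitted here:

```python
import numpy as np

def sled_like_stat(X1, X2):
    """Leading-eigenvalue statistic on the differential covariance matrix.
    This is only the spectral core of sLED; the real test adds a sparse-PCA
    step to target sparse, weak signals."""
    D = np.cov(X1, rowvar=False) - np.cov(X2, rowvar=False)
    return np.max(np.abs(np.linalg.eigvalsh(D)))

def permutation_pvalue(X1, X2, n_perm=200, seed=0):
    """Calibrate the statistic by permuting group labels."""
    rng = np.random.default_rng(seed)
    obs = sled_like_stat(X1, X2)
    pooled = np.vstack([X1, X2])
    n1 = len(X1)
    count = 0
    for _ in range(n_perm):
        idx = rng.permutation(len(pooled))
        if sled_like_stat(pooled[idx[:n1]], pooled[idx[n1:]]) >= obs:
            count += 1
    return (count + 1) / (n_perm + 1)
```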
Tao, Chenyang; Nichols, Thomas E; Hua, Xue; Ching, Christopher R K; Rolls, Edmund T; Thompson, Paul M; Feng, Jianfeng
2017-01-01
We propose a generalized reduced rank latent factor regression model (GRRLF) for the analysis of tensor field responses and high-dimensional covariates. The model is motivated by the need in imaging-genetic studies to identify genetic variants that are associated with brain imaging phenotypes, often in the form of high-dimensional tensor fields. GRRLF identifies the effective dimensionality of the data from its structure, and then jointly performs dimension reduction of the covariates, dynamic identification of latent factors, and nonparametric estimation of both covariate and latent response fields. After accounting for the latent and covariate effects, GRRLF performs a nonparametric test on the remaining factor of interest. GRRLF provides a better factorization of the signals compared with common solutions, and is less susceptible to overfitting because it exploits the effective dimensionality. The generality and flexibility of GRRLF also allow various statistical models to be handled in a unified framework, and solutions can be computed efficiently. Within the field of neuroimaging, it improves the sensitivity for weak signals and is a promising alternative to existing approaches. The operation of the framework is demonstrated with both synthetic datasets and a real-world neuroimaging example in which the effects of a set of genes on the structure of the brain at the voxel level were measured, and the results compared favorably with those from existing approaches. Copyright © 2016. Published by Elsevier Inc.
Challenges and Approaches to Statistical Design and Inference in High Dimensional Investigations
Garrett, Karen A.; Allison, David B.
2015-01-01
Advances in modern technologies have facilitated high-dimensional experiments (HDEs) that generate tremendous amounts of genomic, proteomic, and other “omic” data. HDEs involving whole-genome sequences and polymorphisms, expression levels of genes, protein abundance measurements, and combinations thereof have become a vanguard for new analytic approaches to the analysis of HDE data. Such situations demand creative approaches to the processes of statistical inference, estimation, prediction, classification, and study design. The novel and challenging biological questions asked from HDE data have resulted in many specialized analytic techniques being developed. This chapter discusses some of the unique statistical challenges facing investigators studying high-dimensional biology, and describes some approaches being developed by statistical scientists. We have included some focus on the increasing interest in questions involving testing multiple propositions simultaneously, appropriate inferential indicators for the types of questions biologists are interested in, and the need for replication of results across independent studies, investigators, and settings. A key consideration inherent throughout is the challenge in providing methods that a statistician judges to be sound and a biologist finds informative. PMID:19588106
Challenges and approaches to statistical design and inference in high-dimensional investigations.
Gadbury, Gary L; Garrett, Karen A; Allison, David B
2009-01-01
Advances in modern technologies have facilitated high-dimensional experiments (HDEs) that generate tremendous amounts of genomic, proteomic, and other "omic" data. HDEs involving whole-genome sequences and polymorphisms, expression levels of genes, protein abundance measurements, and combinations thereof have become a vanguard for new analytic approaches to the analysis of HDE data. Such situations demand creative approaches to the processes of statistical inference, estimation, prediction, classification, and study design. The novel and challenging biological questions asked from HDE data have resulted in many specialized analytic techniques being developed. This chapter discusses some of the unique statistical challenges facing investigators studying high-dimensional biology and describes some approaches being developed by statistical scientists. We have included some focus on the increasing interest in questions involving testing multiple propositions simultaneously, appropriate inferential indicators for the types of questions biologists are interested in, and the need for replication of results across independent studies, investigators, and settings. A key consideration inherent throughout is the challenge in providing methods that a statistician judges to be sound and a biologist finds informative.
Tikhonov, Mikhail; Monasson, Remi
2018-01-01
Much of our understanding of ecological and evolutionary mechanisms derives from analysis of low-dimensional models: with few interacting species, or few axes defining "fitness". It is not always clear to what extent the intuition derived from low-dimensional models applies to the complex, high-dimensional reality. For instance, most naturally occurring microbial communities are strikingly diverse, harboring a large number of coexisting species, each of which contributes to shaping the environment of others. Understanding the eco-evolutionary interplay in these systems is an important challenge, and an exciting new domain for statistical physics. Recent work identified a promising new platform for investigating highly diverse ecosystems, based on the classic resource competition model of MacArthur. Here, we describe how the same analytical framework can be used to study evolutionary questions. Our analysis illustrates how, at high dimension, the intuition promoted by a one-dimensional (scalar) notion of fitness can become misleading. Specifically, while the low-dimensional picture emphasizes organism cost or efficiency, we exhibit a regime where cost becomes irrelevant for survival, and link this observation to generic properties of high-dimensional geometry.
A New Ensemble Method with Feature Space Partitioning for High-Dimensional Data Classification
Directory of Open Access Journals (Sweden)
Yongjun Piao
2015-01-01
Ensemble data mining methods, also known as classifier combination, are often used to improve classification performance. Various classifier combination methods, such as bagging, boosting, and random forest, have been devised and have received considerable attention. However, data dimensionality continues to increase rapidly, which poses challenges because these methods are not suitable for direct application to high-dimensional datasets. In this paper, we propose an ensemble method for the classification of high-dimensional data, with each classifier constructed from a different set of features determined by a partitioning of redundant features. In our method, the redundancy of features is used to divide the original feature space. Each generated feature subset is then trained with a support vector machine, and the results of the individual classifiers are combined by majority voting. The efficiency and effectiveness of our method are demonstrated through comparisons with other ensemble techniques, and the results show that our method outperforms the others.
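The redundancy-driven partitioning step can be sketched as follows. The grouping heuristic (order features by total absolute correlation and deal them round-robin, so the most redundant features land in different groups) is our assumption; the paper derives its partition from feature redundancy as well, but the exact rule may differ:

```python
import numpy as np

def partition_by_redundancy(X, n_groups):
    """Split the feature space into n_groups disjoint subsets, spreading
    the most redundant (most correlated) features across different groups
    so that each downstream classifier sees complementary features."""
    C = np.abs(np.corrcoef(X, rowvar=False))
    order = np.argsort(-C.sum(axis=0))        # most redundant features first
    groups = [[] for _ in range(n_groups)]
    for rank, f in enumerate(order):
        groups[rank % n_groups].append(int(f))
    return groups
```

Each group would then train its own classifier (an SVM in the paper), with the final label decided by majority vote across groups.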
Doubly sparse factor models for unifying feature transformation and feature selection
International Nuclear Information System (INIS)
Katahira, Kentaro; Okanoya, Kazuo; Okada, Masato; Matsumoto, Narihisa; Sugase-Miyamoto, Yasuko
2010-01-01
A number of unsupervised learning methods for high-dimensional data are largely divided into two groups based on their procedures, i.e., (1) feature selection, which discards irrelevant dimensions of the data, and (2) feature transformation, which constructs new variables by transforming and mixing over all dimensions. We propose a method that both selects and transforms features in a common Bayesian inference procedure. Our method imposes a doubly automatic relevance determination (ARD) prior on the factor loading matrix. We propose a variational Bayesian inference for our model and demonstrate the performance of our method on both synthetic and real data.
Doubly sparse factor models for unifying feature transformation and feature selection
Energy Technology Data Exchange (ETDEWEB)
Katahira, Kentaro; Okanoya, Kazuo; Okada, Masato [ERATO, Okanoya Emotional Information Project, Japan Science Technology Agency, Saitama (Japan); Matsumoto, Narihisa; Sugase-Miyamoto, Yasuko, E-mail: okada@k.u-tokyo.ac.j [Human Technology Research Institute, National Institute of Advanced Industrial Science and Technology, Ibaraki (Japan)
2010-06-01
A number of unsupervised learning methods for high-dimensional data are largely divided into two groups based on their procedures, i.e., (1) feature selection, which discards irrelevant dimensions of the data, and (2) feature transformation, which constructs new variables by transforming and mixing over all dimensions. We propose a method that both selects and transforms features in a common Bayesian inference procedure. Our method imposes a doubly automatic relevance determination (ARD) prior on the factor loading matrix. We propose a variational Bayesian inference for our model and demonstrate the performance of our method on both synthetic and real data.
Jing, Qiang; Zhang, Mian; Huang, Xiang; Ren, Xiaoming; Wang, Peng; Lu, Zhenda
2017-06-08
In recent years, there has been an unprecedented rise in the research of halide perovskites because of their important optoelectronic applications, including photovoltaic cells, light-emitting diodes, photodetectors, and lasers. The most pressing question concerns the stability of these materials. Here, faster degradation and photoluminescence (PL) quenching are observed at higher iodine content for mixed-halide perovskite CsPb(BrₓI₁₋ₓ)₃ nanocrystals, and a simple yet effective method is reported to significantly enhance their stability. After selective etching with acetone, surface iodine is partially etched away to form a bromine-rich surface passivation layer on the mixed-halide perovskite nanocrystals. This passivation layer remarkably stabilizes the nanocrystals, improving their PL intensity by almost three orders of magnitude. It is expected that a similar passivation layer can also be applied to various other kinds of perovskite materials with poor stability.
Yu, Wenbao; Park, Taesung
2014-01-01
Motivation: It is common to seek an optimal combination of markers for disease classification and prediction when multiple markers are available. Many approaches based on the area under the receiver operating characteristic curve (AUC) have been proposed. Existing AUC-based works in a high-dimensional context depend mainly on a non-parametric, smooth approximation of the AUC, with no work using a parametric AUC-based approach for high-dimensional data. Results: We propose an AUC-based approach u...
Oliveira, D G; Rocha, M M; Damasceno-Silva, K J; Sá, F V; Lima, L R L; Resende, M D V
2017-08-17
The aim of this study was to estimate the genotypic gain with simultaneous selection for production, nutrition, and culinary traits in cowpea crosses and backcrosses, and to compare different selection indexes. Eleven cowpea populations were evaluated in a randomized complete block design with four replications. Fourteen traits were evaluated, and the following parameters were estimated: genotypic variation coefficient, genotypic determination coefficient, experimental quality indicator and selection reliability, estimated genotypic values (BLUE), genotypic correlation coefficients among traits, and genotypic gain with simultaneous selection of all traits. The genotypic gain was estimated based on three selection indexes: classical, multiplicative, and the sum of ranks. The genotypic variation coefficient was higher than the environmental variation coefficient for the number of days to flowering, plant type, hundred-grain weight, grain index, and protein concentration. The majority of the traits presented genotypic determination coefficients of medium to high magnitude. Increases in the production components are associated with decreases in protein concentration, and increased precocity leads to decreases in protein concentration and cooking time. The index based on the sum of ranks was the best alternative for simultaneous selection of traits in the cowpea segregating populations resulting from the crosses and backcrosses evaluated, with emphasis on the F₄BC₁₂, F₄C₂₁, and F₄C₁₂ populations, which had the highest genotypic gains.
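The sum-of-ranks index singled out above can be sketched directly: each trait is ranked across genotypes in its favorable direction and the ranks are summed, with the smallest total marking the best genotype (tie handling is simplified; trait weighting, which a real index may include, is omitted):

```python
import numpy as np

def rank_sum_index(values, maximize):
    """Sum-of-ranks selection index: rank each trait column across
    genotypes (direction set per trait by `maximize`), then sum the ranks.
    The genotype with the lowest total is the best candidate."""
    values = np.asarray(values, float)
    ranks = np.zeros_like(values)
    for j, maxim in enumerate(maximize):
        col = -values[:, j] if maxim else values[:, j]
        ranks[np.argsort(col), j] = np.arange(1, len(values) + 1)
    return ranks.sum(axis=1)
```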
International Nuclear Information System (INIS)
Pendleton, P.; Taylor, D.
1976-01-01
Propene + ¹⁸O₂ reactions have been studied in a static reaction system on bismuth molybdate and on mixed oxides of tin and antimony and of uranium and antimony. The [¹⁶O]acrolein content of the total acrolein formed and the proportion of ¹⁶O in the oxygen of the carbon dioxide by-product have been determined. The results indicate that for each catalyst the lattice is the only direct source of the oxygen in the aldehyde, and that lattice and/or gas-phase oxygen is used in carbon dioxide formation. Oxygen anion mobility appears to be greater in the molybdate catalyst than in the other two. (author)
High dimensional biological data retrieval optimization with NoSQL technology
2014-01-01
Background: High-throughput transcriptomic data generated by microarray experiments is the most abundant and frequently stored kind of data currently used in translational medicine studies. Although microarray data is supported in data warehouses such as tranSMART, when querying relational databases for hundreds of different patient gene expression records, queries are slow due to poor performance. Non-relational data models, such as the key-value model implemented in NoSQL databases, promise to be more performant solutions. Our motivation is to improve the performance of the tranSMART data warehouse with a view to supporting Next Generation Sequencing data. Results: In this paper we introduce a new data model better suited for high-dimensional data storage and querying, optimized for database scalability and performance. We have designed a key-value pair data model to support faster queries over large-scale microarray data and implemented the model using HBase, an implementation of Google's BigTable storage system. An experimental performance comparison was carried out against the traditional relational data model implemented in both MySQL Cluster and MongoDB, using a large publicly available transcriptomic data set taken from NCBI GEO concerning Multiple Myeloma. Our new key-value data model implemented on HBase exhibits an average 5.24-fold increase in high-dimensional biological data query performance compared to the relational model implemented on MySQL Cluster, and an average 6.47-fold increase in query performance on MongoDB. Conclusions: The performance evaluation found that the new key-value data model, in particular its implementation in HBase, outperforms the relational model currently implemented in tranSMART. We propose that NoSQL technology holds great promise for large-scale data management, in particular for high-dimensional biological data such as that demonstrated in the performance evaluation described in this paper. We aim to use this new data
High dimensional biological data retrieval optimization with NoSQL technology.
Wang, Shicai; Pandis, Ioannis; Wu, Chao; He, Sijin; Johnson, David; Emam, Ibrahim; Guitton, Florian; Guo, Yike
2014-01-01
High-throughput transcriptomic data generated by microarray experiments is the most abundant and frequently stored kind of data currently used in translational medicine studies. Although microarray data is supported in data warehouses such as tranSMART, when querying relational databases for hundreds of different patient gene expression records queries are slow due to poor performance. Non-relational data models, such as the key-value model implemented in NoSQL databases, hold promise to be more performant solutions. Our motivation is to improve the performance of the tranSMART data warehouse with a view to supporting Next Generation Sequencing data. In this paper we introduce a new data model better suited for high-dimensional data storage and querying, optimized for database scalability and performance. We have designed a key-value pair data model to support faster queries over large-scale microarray data and implemented the model using HBase, an implementation of Google's BigTable storage system. An experimental performance comparison was carried out against the traditional relational data model implemented in both MySQL Cluster and MongoDB, using a large publicly available transcriptomic data set taken from NCBI GEO concerning Multiple Myeloma. Our new key-value data model implemented on HBase exhibits an average 5.24-fold increase in high-dimensional biological data query performance compared to the relational model implemented on MySQL Cluster, and an average 6.47-fold increase on query performance on MongoDB. The performance evaluation found that the new key-value data model, in particular its implementation in HBase, outperforms the relational model currently implemented in tranSMART. We propose that NoSQL technology holds great promise for large-scale data management, in particular for high-dimensional biological data such as that demonstrated in the performance evaluation described in this paper. We aim to use this new data model as a basis for migrating
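The key-value layout can be illustrated with a toy in-memory stand-in for the HBase table: expression values keyed by (dataset, gene, patient), so that one gene's values across many patients are fetched by direct key lookups rather than relational scans. The exact row-key composition below is an assumption for illustration, not the paper's published schema:

```python
# Hypothetical sketch of a key-value layout for microarray expression data.
# A plain dict stands in for the HBase table; keys are composite
# (dataset, gene, patient) tuples.

store = {}

def put(dataset, gene, patient, value):
    store[(dataset, gene, patient)] = value

def get_gene(dataset, gene, patients):
    """Fetch one gene's expression across many patients by direct key
    lookups -- the access pattern the key design optimizes."""
    return {p: store[(dataset, gene, p)] for p in patients
            if (dataset, gene, p) in store}

put("GSE_demo", "TP53", "patient1", 7.2)
put("GSE_demo", "TP53", "patient2", 5.9)
assert get_gene("GSE_demo", "TP53", ["patient1", "patient2"]) == {
    "patient1": 7.2, "patient2": 5.9}
```

In HBase itself the same idea is expressed through the row-key design, which clusters one gene's values into a contiguous key range for efficient scans.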
International Nuclear Information System (INIS)
Thien, Mike G.; Barnes, Steve M.
2013-01-01
The Hanford Tank Operations Contractor (TOC) and the Hanford Waste Treatment and Immobilization Plant (WTP) contractor are both engaged in demonstrating mixing, sampling, and transfer system capabilities using simulated Hanford High-Level Waste (HLW) formulations. This represents one of the largest remaining technical issues with the high-level waste treatment mission at Hanford. Previous testing has focused on very specific TOC or WTP test objectives and consequently the simulants were narrowly focused on those test needs. A key attribute in the Defense Nuclear Facilities Safety Board (DNFSB) Recommendation 2010-2 is to ensure testing is performed with a simulant that represents the broad spectrum of Hanford waste. The One System Integrated Project Team is a new joint TOC and WTP organization intended to ensure technical integration of specific TOC and WTP systems and testing. A new approach to simulant definition has been mutually developed that will meet both TOC and WTP test objectives for the delivery and receipt of HLW. The process used to identify critical simulant characteristics, incorporate lessons learned from previous testing, and identify specific simulant targets that ensure TOC and WTP testing addresses the broad spectrum of Hanford waste characteristics that are important to mixing, sampling, and transfer performance are described. (authors)
DEFF Research Database (Denmark)
Krag, Ludvig Ahm
have been published in scientific journals and Paper 3 has been submitted to Fisheries Research. This review will take a broader perspective and will examine the capturing process, which is the basis for the selection process. Moreover, it discusses the existing methods and knowledge in the fields...... different species, including cod, are caught together. Demersal trawling is the predominant fishing method in Denmark, as measured by both catch value and volume. Demersal trawls also account for the highest discard rates of juvenile fish, including cod. The focus of this work was on improving......, and openings. The results show that the morphology-based simulations of size selectivity of cod can be used to explain a large part of both the within-haul and the between-haul variations previously reported from sea trials. The method can further predict the selection parameters (L50 and SR) for cod...
Wang, Haohan; Aragam, Bryon; Xing, Eric P
2018-04-26
A fundamental and important challenge in modern datasets of ever-increasing dimensionality is variable selection, which has taken on renewed interest recently due to the growth of biological and medical datasets with complex, non-i.i.d. structures. Naïvely applying classical variable selection methods such as the Lasso to such datasets may lead to a large number of false discoveries. Motivated by genome-wide association studies in genetics, we study the problem of variable selection for datasets arising from multiple subpopulations, when this underlying population structure is unknown to the researcher. We propose a unified framework for sparse variable selection that adaptively corrects for population structure via a low-rank linear mixed model. Most importantly, the proposed method does not require prior knowledge of sample structure in the data and adaptively selects a covariance structure of the correct complexity. Through extensive experiments, we illustrate the effectiveness of this framework over existing methods. Further, we test our method on three different genomic datasets from plants, mice, and humans, and discuss the knowledge we discover with our method. Copyright © 2018. Published by Elsevier Inc.
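The baseline Lasso that this framework builds on can be sketched with plain numpy coordinate descent; this is a toy illustration on synthetic i.i.d. data with a sparse true signal, and it deliberately omits the paper's population-structure correction (all dimensions and parameter values are assumptions).

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 100, 200                          # p >> n regime
beta_true = np.zeros(p)
beta_true[[3, 50, 120]] = 5.0            # sparse true signal
X = rng.standard_normal((n, p))
y = X @ beta_true + 0.1 * rng.standard_normal(n)

def lasso_cd(X, y, lam, n_sweeps=100):
    """Lasso via cyclic coordinate descent:
    minimize (1/2n)||y - Xb||^2 + lam * ||b||_1."""
    n, p = X.shape
    b = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0) / n
    r = y.copy()                         # residual y - Xb
    for _ in range(n_sweeps):
        for j in range(p):
            r += X[:, j] * b[j]          # remove coordinate j's contribution
            rho = X[:, j] @ r / n
            # soft-thresholding update
            b[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_sq[j]
            r -= X[:, j] * b[j]
    return b

beta_hat = lasso_cd(X, y, lam=0.5)
support = np.flatnonzero(np.abs(beta_hat) > 1e-6)
```

With a well-chosen penalty the estimated support recovers the three true variables; under correlated subpopulation structure, as the abstract notes, the same estimator can instead produce many false discoveries.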
Penalized estimation for competing risks regression with applications to high-dimensional covariates
DEFF Research Database (Denmark)
Ambrogi, Federico; Scheike, Thomas H.
2016-01-01
of competing events. The direct binomial regression model of Scheike and others (2008. Predicting cumulative incidence probability by direct binomial regression. Biometrika 95: (1), 205-220) is reformulated in a penalized framework to possibly fit a sparse regression model. The developed approach is easily...... Research 19: (1), 29-51), the research regarding competing risks is less developed (Binder and others, 2009. Boosting for high-dimensional time-to-event data with competing risks. Bioinformatics 25: (7), 890-896). The aim of this work is to consider how to do penalized regression in the presence...... implementable using existing high-performance software to do penalized regression. Results from simulation studies are presented together with an application to genomic data when the endpoint is progression-free survival. An R function is provided to perform regularized competing risks regression according...
Energy Technology Data Exchange (ETDEWEB)
Tahira, Rabia; Ikram, Manzoor; Zubairy, M Suhail [Centre for Quantum Physics, COMSATS Institute of Information Technology, Islamabad (Pakistan); Bougouffa, Smail [Department of Physics, Faculty of Science, Taibah University, PO Box 30002, Madinah (Saudi Arabia)
2010-02-14
We investigate the phenomenon of sudden death of entanglement in a high-dimensional bipartite system subjected to dissipative environments, with an arbitrary initial pure entangled state between two fields in the cavities. We find that in a vacuum reservoir, the presence of state components with one or more (two) photons in each cavity is a necessary condition for the sudden death of entanglement. Otherwise, entanglement persists for an infinite time and decays asymptotically with the decay of the individual qubits. For pure two-qubit entangled states in a thermal environment, we observe that sudden death of entanglement always occurs. The sudden death time of the entangled states is related to the number of photons in the cavities, the temperature of the reservoir, and the initial preparation of the entangled states.
Time–energy high-dimensional one-side device-independent quantum key distribution
International Nuclear Information System (INIS)
Bao Hai-Ze; Bao Wan-Su; Wang Yang; Chen Rui-Ke; Ma Hong-Xin; Zhou Chun; Li Hong-Wei
2017-01-01
Compared with full device-independent quantum key distribution (DI-QKD), one-side device-independent QKD (1sDI-QKD) imposes fewer requirements, which are much easier to meet. In this paper, by applying recently developed time–energy entropic uncertainty relations, we present a time–energy high-dimensional one-side device-independent quantum key distribution (HD-QKD) protocol and provide a security proof against coherent attacks. We also connect the security to quantum steering. By numerical simulation, we obtain the secret key rate for different detection efficiencies on Alice's side. The results show that our protocol performs much better than the original 1sDI-QKD. Furthermore, we clarify the relation among the secret key rate, Alice's detection efficiency, and the dispersion coefficient. Finally, we briefly analyze its performance in the optical fiber channel. (paper)
A Cure for Variance Inflation in High Dimensional Kernel Principal Component Analysis
DEFF Research Database (Denmark)
Abrahamsen, Trine Julie; Hansen, Lars Kai
2011-01-01
Small sample high-dimensional principal component analysis (PCA) suffers from variance inflation and lack of generalizability. It has earlier been pointed out that a simple leave-one-out variance renormalization scheme can cure the problem. In this paper we generalize the cure in two directions......: First, we propose a computationally less intensive approximate leave-one-out estimator, secondly, we show that variance inflation is also present in kernel principal component analysis (kPCA) and we provide a non-parametric renormalization scheme which can quite efficiently restore generalizability in kPCA....... As for PCA our analysis also suggests a simplified approximate expression. © 2011 Trine J. Abrahamsen and Lars K. Hansen....
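The variance-inflation phenomenon the abstract refers to is easy to reproduce numerically: in the small-sample, high-dimensional regime, the apparent variance along a fitted principal direction greatly exceeds the variance of held-out data projected onto the same direction. A minimal numpy sketch (isotropic data, so every direction truly has unit variance; all sizes are assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
p, n = 100, 20                          # high-dimensional, small sample
train = rng.standard_normal((n, p))     # true variance is 1 in all directions
test = rng.standard_normal((2000, p))   # large held-out sample

# Leading principal direction of the training sample covariance.
train_c = train - train.mean(axis=0)
_, s, vt = np.linalg.svd(train_c, full_matrices=False)
pc1 = vt[0]
train_var = s[0] ** 2 / (n - 1)         # apparent (inflated) variance on PC1

# Variance of held-out data projected on the same direction.
test_proj = (test - test.mean(axis=0)) @ pc1
test_var = test_proj.var(ddof=1)

inflation = train_var / test_var        # roughly (1 + sqrt(p/n))**2 here
```

The leave-one-out renormalization scheme mentioned in the abstract essentially rescales component variances so that they match the held-out (generalization) variance rather than the inflated training estimate.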
Diagonal Likelihood Ratio Test for Equality of Mean Vectors in High-Dimensional Data
Hu, Zongliang; Tong, Tiejun; Genton, Marc G.
2017-01-01
We propose a likelihood ratio test framework for testing normal mean vectors in high-dimensional data under two common scenarios: the one-sample test and the two-sample test with equal covariance matrices. We derive the test statistics under the assumption that the covariance matrices follow a diagonal matrix structure. In comparison with the diagonal Hotelling's tests, our proposed test statistics display some interesting characteristics. In particular, they are a summation of the log-transformed squared t-statistics rather than a direct summation of those components. More importantly, to derive the asymptotic normality of our test statistics under the null and local alternative hypotheses, we do not require the assumption that the covariance matrix follows a diagonal matrix structure. As a consequence, our proposed test methods are very flexible and can be widely applied in practice. Finally, simulation studies and a real data analysis are also conducted to demonstrate the advantages of our likelihood ratio test method.
Wang, Zhiping; Chen, Jinyu; Yu, Benli
2017-02-20
We investigate the two-dimensional (2D) and three-dimensional (3D) atom localization behaviors via spontaneously generated coherence in a microwave-driven four-level atomic system. Owing to the space-dependent atom-field interaction, it is found that the detecting probability and precision of 2D and 3D atom localization behaviors can be significantly improved via adjusting the system parameters, the phase, amplitude, and initial population distribution. Interestingly, the atom can be localized in volumes that are substantially smaller than a cubic optical wavelength. Our scheme opens a promising way to achieve high-precision and high-efficiency atom localization, which provides some potential applications in high-dimensional atom nanolithography.
Characterization of differentially expressed genes using high-dimensional co-expression networks
DEFF Research Database (Denmark)
Coelho Goncalves de Abreu, Gabriel; Labouriau, Rodrigo S.
2010-01-01
We present a technique to characterize differentially expressed genes in terms of their position in a high-dimensional co-expression network. The set-up of Gaussian graphical models is used to construct representations of the co-expression network in such a way that redundancy and the propagation...... that allow to make effective inference in problems with high degree of complexity (e.g. several thousands of genes) and small number of observations (e.g. 10-100) as typically occurs in high throughput gene expression studies. Taking advantage of the internal structure of decomposable graphical models, we...... construct a compact representation of the co-expression network that allows to identify the regions with high concentration of differentially expressed genes. It is argued that differentially expressed genes located in highly interconnected regions of the co-expression network are less informative than...
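The Gaussian graphical model machinery underlying such co-expression networks can be illustrated at toy scale. The paper works with thousands of genes and decomposable models; the numpy sketch below instead uses a 6-node chain and a plain inverse sample covariance, which is only viable when n is much larger than p (all sizes and the 0.2 threshold are assumptions).

```python
import numpy as np

rng = np.random.default_rng(2)
p, n = 6, 4000

# Sparse "chain" precision matrix: node i interacts only with i-1 and i+1.
omega = np.eye(p)
for i in range(p - 1):
    omega[i, i + 1] = omega[i + 1, i] = 0.4
sigma = np.linalg.inv(omega)
data = rng.multivariate_normal(np.zeros(p), sigma, size=n)

# Estimate the precision matrix and convert it to partial correlations.
omega_hat = np.linalg.inv(np.cov(data, rowvar=False))
d = np.sqrt(np.diag(omega_hat))
pcorr = -omega_hat / np.outer(d, d)

# Network edges = pairs whose partial correlation clears a threshold.
edges = {(i, j) for i in range(p) for j in range(i + 1, p)
         if abs(pcorr[i, j]) > 0.2}
```

Zeros of the precision matrix correspond exactly to missing edges (conditional independence), which is why the recovered edge set matches the chain.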
Makarios-Lahham, Lina; Roseau, Suzanne M; Fromentin, Gilles; Tome, Daniel; Even, Patrick C
2004-03-01
Rats that are allowed to select their diets [dietary self-selection (DSS)] often ingest >30% of their daily energy in the form of protein. Such an intake may seem unhealthy, but the consistency of this choice suggests that it is motivated by physiologic drives. To gain a clearer understanding of how protein selection is structured during DSS, we adapted 12 rats to a standard diet (14% protein) and then allowed them to choose between two diets, i.e., total milk protein (P) and a mix of carbohydrates and lipids (FC). The protein intake during DSS rose above 40%; assuming an intermeal interval of 10 min, 70% of the energy intake occurred with meals that included both P and FC, with the sequence of FC followed by P preferred to the sequence of P followed by FC (70 vs. 30%, P energy intake during the light period was reduced to only 10% of the daily energy intake [vs. 30% with the control P14 diet or a with a high-protein diet (50%)], and 90% of the intake was in the form of pure protein meals. In complementary studies, we verified that the high protein intake also occurred when rats were offered casein and whey and was not due to the high palatability of the milk protein. We conclude that a specific feeding pattern accompanies high protein intake in rats allowed DSS. The mechanisms underlying this behavior and its potential beneficial/adverse consequences over the long term still must be clarified.
Joint Adaptive Mean-Variance Regularization and Variance Stabilization of High Dimensional Data.
Dazard, Jean-Eudes; Rao, J Sunil
2012-07-01
The paper addresses a common problem in the analysis of high-dimensional high-throughput "omics" data, which is parameter estimation across multiple variables in a set of data where the number of variables is much larger than the sample size. Among the problems posed by this type of data are that variable-specific estimators of variances are not reliable and variable-wise test statistics have low power, both due to a lack of degrees of freedom. In addition, it has been observed in this type of data that the variance increases as a function of the mean. We introduce a non-parametric adaptive regularization procedure that is innovative in that: (i) it employs a novel "similarity statistic"-based clustering technique to generate local-pooled or regularized shrinkage estimators of population parameters, (ii) the regularization is done jointly on population moments, benefiting from C. Stein's result on inadmissibility, which implies that the usual sample variance estimator is improved by a shrinkage estimator using information contained in the sample mean. From these joint regularized shrinkage estimators, we derive regularized t-like statistics and show in simulation studies that they offer more statistical power in hypothesis testing than their standard sample counterparts, or regular common value-shrinkage estimators, or when the information contained in the sample mean is simply ignored. Finally, we show that these estimators feature interesting properties of variance stabilization and normalization that can be used for preprocessing high-dimensional multivariate data. The method is available as an R package, called 'MVR' ('Mean-Variance Regularization'), downloadable from the CRAN website.
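The core Stein-type idea, pulling unreliable variable-specific variance estimates toward a pooled value, can be demonstrated in a few lines. This is only a toy sketch with a fixed hypothetical shrinkage weight; MVR's actual procedure clusters variables by a similarity statistic and regularizes mean and variance jointly.

```python
import numpy as np

rng = np.random.default_rng(3)
G, n = 500, 4                            # many variables, few samples each
data = rng.standard_normal((G, n))       # true variance is 1 for every variable

s2 = data.var(axis=1, ddof=1)            # noisy variable-specific variances
pooled = s2.mean()

w = 0.5                                  # hypothetical shrinkage weight
s2_shrunk = (1 - w) * s2 + w * pooled    # pull each estimate toward the pool

# Shrinkage trades a little bias for a large reduction in estimator variance:
mse_raw = np.mean((s2 - 1.0) ** 2)
mse_shrunk = np.mean((s2_shrunk - 1.0) ** 2)
```

With only 3 degrees of freedom per variable the raw variances scatter widely, and the shrunk estimates have markedly lower mean squared error, which is what stabilizes the resulting t-like statistics.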
DEFF Research Database (Denmark)
Varrone, Cristiano; Heggeset, T. M. B.; Le, S. B.
2015-01-01
Objective of this study was the selection and adaptation of mixed microbial cultures (MMCs), able to ferment crude glycerol generated from animal fat-based biodiesel and produce building-blocks and green chemicals. Various adaptation strategies have been investigated for the enrichment of suitable...... Control. The adaptation of activated sludge inoculum was performed successfully and continued unhindered for several months. The best results showed a substrate degradation efficiency of almost 100% (about 10 g/L glycerol in 21 h) and different dominant metabolic products were obtained, depending...... on the selection strategy (mainly 1,3-propanediol, ethanol, or butyrate). On the other hand, anaerobic sludge exhibited inactivation after a few transfers. To circumvent this problem, fed-batch mode was used as an alternative adaptation strategy, which led to effective substrate degradation and high 1...
Bonk, Fabian; Bastidas-Oyanedel, Juan-Rodrigo; Yousef, Ahmed F; Schmidt, Jens Ejbye
2017-08-01
Carboxylic acid production from food waste by mixed culture fermentation is an important future waste management option. Obstacles to its implementation are the need for pH control and a broad fermentation product spectrum leading to increased product separation costs. To overcome these obstacles, the selective production of lactic acid (LA) from model food waste by uncontrolled pH fermentation was tested using different reactor configurations. Batch experiments, semi-continuously fed reactors, and a percolation system reached LA concentrations of 32, 16, and 15 g-COD_LA/L, respectively, with selectivities of 93%, 84%, and 75% on a COD basis, respectively. The semi-continuous reactor was dominated by Lactobacillales. Our techno-economic analysis suggests that LA production from food waste can be economically feasible, with LA recovery and low yields remaining as major obstacles. To solve both problems, we successfully applied in-situ product extraction using activated carbon. Copyright © 2017 Elsevier Ltd. All rights reserved.
International Nuclear Information System (INIS)
Williams, R.E.; Dustin, D.F.
1994-01-01
When plutonium production operations were halted at the Rocky Flats Plant, there remained a volume of material that was retained in order that its plutonium content could be reclaimed. This material, known as residue, is transuranic and mixed transuranic material with a plutonium content above what was called the "economic discard limit," or EDL. The EDL was defined in terms of each type of residue material, and each type of material is given an Item Description Code, or IDC. Residue IDCs have been grouped into general category descriptions which include plutonium (Pu) nitrate solutions, Pu chloride solutions, salts, ash, metal, filters, combustibles, graphite, crucibles, glass, resins, gloves, firebrick, and sludges. Similar material exists both below and above the EDL; material with the (previous) economic potential for plutonium reclamation is classified as residue
Energy Technology Data Exchange (ETDEWEB)
Dai, Jun [Institute of Applied Chemistry, Henan Polytechnic University, Jiaozuo 454003 (China); State Key Laboratory Cultivation Base for Gas Geology and Gas Control, School of Safety Science and Engineering, Henan Polytechnic University, Jiaozuo 454003 (China); Yang, Juan, E-mail: yangjuanhpu@yahoo.com [Institute of Applied Chemistry, Henan Polytechnic University, Jiaozuo 454003 (China); Wang, Xiaohan; Zhang, Lei; Li, Yingjie [Institute of Applied Chemistry, Henan Polytechnic University, Jiaozuo 454003 (China)
2015-09-15
Graphical abstract: Visible-light photocatalytic activities for selective oxidation of amines into imines are greatly affected by the crystal structure of TiO2 catalysts, and mixed-phase TiO2(B)/anatase possesses higher photoactivity because of the moderate adsorption ability and efficient charge separation. - Highlights: • Visible-light photocatalytic oxidation of amines to imines is studied over different TiO2. • Photocatalytic activities are greatly affected by the crystal structure of TiO2 nanowires. • Mixed-phase TiO2(B)/anatase exhibits higher catalytic activity than single-phase TiO2. • Enhanced activity is ascribed to efficient adsorption ability and interfacial charge separation. • Photoinduced charge transfer mechanism on TiO2(B)/anatase catalysts is also proposed. - Abstract: Wirelike catalysts of mixed-phase TiO2(B)/anatase TiO2, bare anatase TiO2, and TiO2(B) are synthesized by calcining a hydrogen titanate precursor obtained from a hydrothermal process at different temperatures between 450 and 700 °C. Under visible light irradiation, mixed-phase TiO2(B)/anatase TiO2 catalysts exhibit enhanced photocatalytic activity in comparison with pure TiO2(B) and anatase TiO2 toward selective oxidation of benzylamines into imines, and the highest photocatalytic activity is achieved by the TW-550 sample consisting of 65% TiO2(B) and 35% anatase. The difference in photocatalytic activities of TiO2 samples can be attributed to the different adsorption abilities resulting from their crystal structures and interfacial charge separation driven by surface-phase junctions between TiO2(B) and anatase TiO2. Moreover, the photoinduced charge transfer mechanism of the surface complex is also proposed over mixed-phase TiO2(B)/anatase TiO2 catalysts. Advantages of this photocatalytic system include efficient utilization of solar light, general suitability to amines, reusability, and facile separation of the nanowire catalysts.
International Nuclear Information System (INIS)
Dodson, M.G.; Ozanich, R.M.; Bailey, S.A.
1999-01-01
This document will describe the functions and requirements of the at-tank analysis system concept developed by the Robotics Technology Development Program (RTDP) and Berkeley Instruments. It will discuss commercially available at-tank analysis equipment, and compare those that meet the stated functions and requirements. This is followed by a discussion of the considerations used in the selection of instrumentation for the concept design, and an overall description of the proposed at-tank analysis system
Directory of Open Access Journals (Sweden)
Ali A. Rostami
2016-08-01
Concerns have been raised in the literature about the potential for secondhand exposure from e-vapor product (EVP) use. It would be difficult to experimentally determine the impact of various factors on secondhand exposure, including, but not limited to, room characteristics (indoor space size, ventilation rate), device specifications (aerosol mass delivery, e-liquid composition), and use behavior (number of users and usage frequency). Therefore, a well-mixed computational model was developed to estimate the indoor levels of constituents from EVPs under a variety of conditions. The model is based on physical and thermodynamic interactions between aerosol, vapor, and air, similar to indoor air models referred to by the Environmental Protection Agency. The model results agree well with measured indoor air levels of nicotine from two sources: smoking machine-generated aerosol and aerosol exhaled from EVP use. Sensitivity analysis indicated that increasing the air exchange rate reduces the room air level of constituents, as more material is carried away. The effect of the amount of aerosol released into the space due to variability in exhalation was also evaluated. The model can estimate the room air level of constituents as a function of time, which may be used to assess the level of non-user exposure over time.
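For a constant source in a single zone, a well-mixed balance of this kind reduces to one ODE, dC/dt = E/V - λC, where λ is the air-exchange rate. A minimal sketch of that reduced model (not the paper's full aerosol/vapor model; all numbers and units are hypothetical):

```python
import math

def room_concentration(t_h, emission_mg_h, volume_m3, ach):
    """Analytic solution of dC/dt = E/V - ach*C from C(0) = 0.
    ach: air changes per hour; result in mg/m^3."""
    c_ss = emission_mg_h / (volume_m3 * ach)   # steady state E/(V*ach)
    return c_ss * (1.0 - math.exp(-ach * t_h))

# Hypothetical scenario: 1 mg/h of a constituent released into a 30 m^3 room.
low_vent = room_concentration(8.0, 1.0, 30.0, ach=0.5)
high_vent = room_concentration(8.0, 1.0, 30.0, ach=3.0)
```

The comparison reproduces the sensitivity result quoted in the abstract: a higher air-exchange rate yields a lower steady-state room concentration, since the steady state scales as E/(V·λ).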
Prospective Validation of a High Dimensional Shape Model for Organ Motion in Intact Cervical Cancer
Energy Technology Data Exchange (ETDEWEB)
Williamson, Casey W.; Green, Garrett; Noticewala, Sonal S.; Li, Nan; Shen, Hanjie [Department of Radiation Medicine and Applied Sciences, University of California, San Diego, La Jolla, California (United States); Vaida, Florin [Division of Biostatistics and Bioinformatics, Department of Family Medicine and Public Health, University of California, San Diego, La Jolla, California (United States); Mell, Loren K., E-mail: lmell@ucsd.edu [Department of Radiation Medicine and Applied Sciences, University of California, San Diego, La Jolla, California (United States)
2016-11-15
Purpose: Validated models are needed to justify strategies to define planning target volumes (PTVs) for intact cervical cancer used in clinical practice. Our objective was to independently validate a previously published shape model, using data collected prospectively from clinical trials. Methods and Materials: We analyzed 42 patients with intact cervical cancer treated with daily fractionated pelvic intensity modulated radiation therapy and concurrent chemotherapy in one of 2 prospective clinical trials. We collected online cone beam computed tomography (CBCT) scans before each fraction. Clinical target volume (CTV) structures from the planning computed tomography scan were cast onto each CBCT scan after rigid registration and manually redrawn to account for organ motion and deformation. We applied the 95% isodose cloud from the planning computed tomography scan to each CBCT scan and computed any CTV outside the 95% isodose cloud. The primary aim was to determine the proportion of CTVs that were encompassed within the 95% isodose volume. A 1-sample t test was used to test the hypothesis that the probability of complete coverage was different from 95%. We used mixed-effects logistic regression to assess effects of time and patient variability. Results: The 95% isodose line completely encompassed 92.3% of all CTVs (95% confidence interval, 88.3%-96.4%), not significantly different from the 95% probability anticipated a priori (P=.19). The overall proportion of missed CTVs was small: the grand mean of covered CTVs was 99.9%, and 95.2% of misses were located in the anterior body of the uterus. Time did not affect coverage probability (P=.71). Conclusions: With the clinical implementation of a previously proposed PTV definition strategy based on a shape model for intact cervical cancer, the probability of CTV coverage was high and the volume of CTV missed was low. This PTV expansion strategy is acceptable for clinical trials and practice; however, we recommend daily
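The primary analysis, a one-sample t test of per-fraction coverage against the anticipated 95%, has simple mechanics. The sketch below does not reproduce the trial data; the coverage fractions are made-up values chosen to sit near the reported 92.3%, and the function name is our own.

```python
import math

def one_sample_t(xs, p0):
    """t statistic for H0: the mean coverage probability equals p0."""
    n = len(xs)
    mean = sum(xs) / n
    var = sum((x - mean) ** 2 for x in xs) / (n - 1)   # sample variance
    return (mean - p0) / math.sqrt(var / n)

# Hypothetical per-fraction CTV coverage fractions for 42 observations.
coverage = [1.0] * 30 + [0.9] * 6 + [0.5] * 6
t_stat = one_sample_t(coverage, 0.95)
```

With |t| below the two-sided critical value (about 2.02 for 41 degrees of freedom), one fails to reject H0, matching the paper's conclusion that observed coverage was not significantly different from 95% (P=.19).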
Directory of Open Access Journals (Sweden)
Shouheng Tuo
Harmony Search (HS) and Teaching-Learning-Based Optimization (TLBO), two recent swarm intelligence optimization algorithms, have received much attention in recent years. Both have shown outstanding performance in solving NP-hard optimization problems. However, they also suffer dramatic performance degradation on some complex high-dimensional optimization problems. Through extensive experiments, we find that HS and TLBO are strongly complementary: HS has strong global exploration power but low convergence speed, whereas TLBO converges much faster but is easily trapped in local optima. In this work, we propose a hybrid search algorithm named HSTLBO that merges the two algorithms, using a self-adaptive selection strategy, to synergistically solve complex optimization problems. In HSTLBO, both HS and TLBO are modified with the aim of balancing the global exploration and exploitation abilities, where HS aims mainly to explore unknown regions and TLBO aims to rapidly exploit high-precision solutions in known regions. Our experimental results demonstrate better performance and faster speed than five state-of-the-art HS variants, and better exploration power than five strong TLBO variants with similar run time, which illustrates that our method is promising for solving complex high-dimensional optimization problems. Experiments on portfolio optimization problems also demonstrate that HSTLBO is effective for complex real-world applications.
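The HS half of the hybrid can be sketched in its plain form; the listing below is a baseline Harmony Search minimizing a sphere benchmark, not HSTLBO itself, and the parameter values (HMS, HMCR, PAR, bandwidth) are common defaults rather than those of the paper.

```python
import random

random.seed(0)
DIM, LO, HI = 5, -5.0, 5.0
HMS, HMCR, PAR, BW = 20, 0.9, 0.3, 0.2   # typical HS parameter choices

def sphere(x):
    """Benchmark objective to minimize: sum of squares."""
    return sum(v * v for v in x)

# Harmony memory: a pool of random initial solutions.
memory = [[random.uniform(LO, HI) for _ in range(DIM)] for _ in range(HMS)]
initial_best = min(sphere(h) for h in memory)

for _ in range(5000):
    new = []
    for d in range(DIM):
        if random.random() < HMCR:               # memory consideration
            v = random.choice(memory)[d]
            if random.random() < PAR:            # pitch adjustment
                v += random.uniform(-BW, BW)
        else:                                    # random re-initialization
            v = random.uniform(LO, HI)
        new.append(min(max(v, LO), HI))          # clip to the search box
    worst = max(range(HMS), key=lambda i: sphere(memory[i]))
    if sphere(new) < sphere(memory[worst]):      # greedy replacement
        memory[worst] = new

final_best = min(sphere(h) for h in memory)
```

The memory-consideration/pitch-adjustment split is what gives HS its exploration power; HSTLBO's contribution is to interleave this with TLBO's faster exploitation phase.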
International Nuclear Information System (INIS)
Ghanty, Tapan K.
2016-01-01
In recent years, considerable attention has been given to understanding the coordination chemistry of trivalent lanthanides (Ln) and actinides (An) with various ligands because of its close link with nuclear waste management processes. It is well known that lanthanide-actinide separation is a challenging and difficult task because of the very similar chemical properties of these two series of ions, which are associated with similar ionic radii and coordination numbers. Recently, we have introduced a new concept, 'intra-ligand synergism', where a hard donor atom, such as oxygen, preferentially binds to trivalent actinides (An(III)) as compared to the valence iso-electronic trivalent lanthanides (Ln(III)) in the presence of another soft donor centre. In the present work, the conventional concept of selective complexation of actinides with soft donor ligands (either S or N donor) has been modified through exploiting this concept, and thereby the higher selectivity of 1,10-phenanthroline-2,9-dicarboxylamide (PDAM) based ligands, namely PDAM and its isobutyl and decyl derivatives, towards the Am(III) ion has been predicted theoretically through density functional calculations. Subsequently, several such amide derivatives have been synthesized to optimize the solubility of the ligands in the organic phase. Finally, solvent extraction experiments have been carried out to validate the theoretical prediction on the selectivity of oxygen donor ligands towards Am(III) as compared to Eu(III), and a maximum separation factor of about 51 has been achieved experimentally using the 2,9-bis(N-decylaminocarbonyl)-1,10-phenanthroline ligand. The separation factor increases with decreasing pH, which is very interesting since extraction of the Am³⁺ ion is considered to be important under highly acidic conditions from the nuclear waste management point of view. (author)
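The separation factor quoted above is defined from the two metals' distribution ratios; a small sketch of that arithmetic (the equilibrium concentrations below are hypothetical illustration values, not the paper's measurements):

```python
def distribution_ratio(c_org, c_aq):
    """D = metal concentration in the organic phase / aqueous phase."""
    return c_org / c_aq

def separation_factor(d_am, d_eu):
    """SF(Am/Eu) = D_Am / D_Eu; the paper reports a maximum of about 51."""
    return d_am / d_eu

# Hypothetical equilibrium concentrations (mmol/L) after one contact:
d_am = distribution_ratio(9.18, 0.90)    # D_Am = 10.2
d_eu = distribution_ratio(0.167, 0.835)  # D_Eu = 0.2
sf = separation_factor(d_am, d_eu)
```

A separation factor well above 1 means the ligand carries Am(III) into the organic phase far more efficiently than Eu(III), which is the whole point of the intra-ligand synergism design.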
Corrêa, A M; Pereira, M I S; de Abreu, H K A; Sharon, T; de Melo, C L P; Ito, M A; Teodoro, P E; Bhering, L L
2016-10-17
The common bean, Phaseolus vulgaris, is predominantly grown on small farms in Brazil and lacks accurate genotype recommendations for specific micro-regions, which contributes to a low national average yield. The aim of this study was to use the harmonic mean of the relative performance of genetic values (HMRPGV) and the centroid method to select common bean genotypes with high yield, adaptability, and stability for the Cerrado/Pantanal ecotone region in Brazil. We evaluated 11 common bean genotypes in three trials carried out in the dry season in Aquidauana in 2013, 2014, and 2015. A likelihood ratio test detected a significant genotype × year interaction, contributing 54% of the total phenotypic variation in grain yield. The three genotypes selected by the joint analysis of genotypic values across all years (Carioca Precoce, BRS Notável, and CNFC 15875) were the same as those recommended by the HMRPGV method. Using the centroid method, genotypes BRS Notável and CNFC 15875 were considered ideal based on their high stability in unfavorable environments and high responsiveness to environmental improvement. We identified a high association between the adaptability and stability methods used in this study; however, the centroid method provided a more accurate and precise recommendation of the behavior of the evaluated genotypes.
Oliveira, Catarina S S; Silva, Carlos E; Carvalho, Gilda; Reis, Maria A
2017-07-25
Production of polyhydroxyalkanoates (PHAs) by open mixed microbial cultures (MMCs) has attracted increasing interest as an alternative to PHA production by pure cultures, owing to the potentially lower costs of open systems (no requirement for sterile conditions) and the use of cheap feedstocks (industrial and agricultural wastes). The technology relies on efficiently selecting an MMC enriched in PHA-accumulating organisms. Fermented cheese whey, a protein-rich complex feedstock, has previously been used to produce PHA with the feast-and-famine regime for selection of PHA-accumulating cultures. While this selection strategy was found efficient at a relatively low organic loading rate (OLR, 2 g-COD L−1 d−1), great instability and low selection efficiency of PHA-accumulating organisms were observed at higher OLR (ca. 6 g-COD L−1 d−1). High organic loading is desirable as a means to enhance PHA productivity. In the present study, a new selection strategy, based on uncoupling carbon and nitrogen supply, was tested with the aim of improving selection at high OLR and was compared with the conventional feast-and-famine strategy. Two selection reactors were fed with fermented cheese whey at an OLR of ca. 8.5 g-COD L−1 d−1 (with 3.8 g-COD L−1 d−1 from organic acids and ethanol) and operated in parallel under similar conditions, except for the timing of nitrogen supplementation. Whereas in the conventional strategy nitrogen and carbon substrates were added simultaneously at the beginning of the cycle, in the uncoupled-substrates strategy nitrogen addition was delayed to the end of the feast phase (i.e., after exogenous carbon was exhausted). The two strategies selected different PHA-storing microbial communities, dominated by Corynebacterium and a member of the Xanthomonadaceae, respectively, with the conventional and the new approaches. The new…
DEFF Research Database (Denmark)
Pham, Ninh Dang; Pagh, Rasmus
2012-01-01
Outlier mining in d-dimensional point sets is a fundamental and well studied data mining task due to its variety of applications. Most such applications arise in high-dimensional domains. A bottleneck of existing approaches is that implicit or explicit assessments on concepts of distance or nearest neighbor are deteriorated in high-dimensional data. Following up on the work of Kriegel et al. (KDD '08), we investigate the use of the angle-based outlier factor in mining high-dimensional outliers. While their algorithm runs in cubic time (with a quadratic time heuristic), we propose a novel random projection-based technique that is able to estimate the angle-based outlier factor for all data points in time near-linear in the size of the data. Also, our approach is suitable to be performed in a parallel environment to achieve a parallel speedup. We introduce a theoretical analysis of the quality of the approximation.
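The exact cubic-time angle-based outlier factor that the near-linear estimator approximates can be sketched directly from the Kriegel et al. definition (the function name and the small test geometry below are our own; this is the baseline the paper speeds up, not the authors' randomized estimator):

```python
import numpy as np

def abof(points):
    """Naive angle-based outlier factor: for each point a, the variance
    over all pairs (b, c) of the distance-weighted cosine
    <b-a, c-a> / (|b-a|^2 |c-a|^2). Outliers see all other points in a
    narrow angular range, so a LOW score flags a likely outlier.
    Runs in O(n^3) -- the cost the paper's estimator avoids."""
    X = np.asarray(points, dtype=float)
    n = len(X)
    scores = np.empty(n)
    for a in range(n):
        vals = []
        for b in range(n):
            for c in range(b + 1, n):
                if a in (b, c):
                    continue
                u, v = X[b] - X[a], X[c] - X[a]
                nu, nv = np.dot(u, u), np.dot(v, v)   # squared norms
                vals.append(np.dot(u, v) / (nu * nv))
        scores[a] = np.var(vals)
    return scores
```

On a small cluster plus one far-away point, the far point receives the smallest score.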
Robust and sparse correlation matrix estimation for the analysis of high-dimensional genomics data.
Serra, Angela; Coretto, Pietro; Fratello, Michele; Tagliaferri, Roberto; Stegle, Oliver
2018-02-15
Microarray technology can be used to study the expression of thousands of genes across a number of different experimental conditions, usually hundreds. The underlying principle is that genes sharing similar expression patterns across different samples can be part of the same co-expression system, or may share the same biological functions. Groups of genes are usually identified by cluster analysis, and clustering methods rely on a similarity matrix between genes; a common choice is the sample correlation matrix. Dimensionality reduction is another popular data analysis task that is also based on covariance/correlation matrix estimates. Unfortunately, covariance/correlation matrix estimation suffers from the intrinsic noise present in high-dimensional data. Sources of noise include sampling variation, the presence of outlying sample units, and the fact that in most cases the number of genes is much larger than the number of sample units. In this paper, we propose a robust correlation matrix estimator that is regularized based on adaptive thresholding. The resulting method jointly tames the effects of high dimensionality and data contamination. Computations are easy to implement and do not require hand tuning. Both simulated and real data are analyzed. A Monte Carlo experiment shows that the proposed method achieves remarkable performance. Our correlation metric is more robust to outliers than existing alternatives on two gene expression datasets. It is also shown how the regularization allows spurious correlations to be automatically detected and filtered. The same regularization is also extended to other, less robust correlation measures. Finally, we apply the ARACNE algorithm to the SyNTreN gene expression data; sensitivity and specificity of the reconstructed network are compared with the gold standard. We show that ARACNE performs better when it takes the proposed correlation matrix estimator as input. The R…
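A simplified stand-in for the idea: universal thresholding of the sample correlation matrix at the standard sqrt(log p / n) rate from the sparse-covariance literature. The paper's estimator is adaptive and robust (it also downweights outlying samples), which this generic sketch does not reproduce:

```python
import numpy as np

def thresholded_corr(X, alpha=2.0):
    """Correlation matrix regularized by thresholding: entries smaller in
    magnitude than alpha * sqrt(log(p) / n) are set to zero, killing the
    spurious correlations that dominate when p is large relative to n.
    The threshold form is a standard generic choice, not the authors'
    adaptive robust rule."""
    X = np.asarray(X, dtype=float)
    n, p = X.shape
    R = np.corrcoef(X, rowvar=False)
    t = alpha * np.sqrt(np.log(p) / n)
    R_thr = np.where(np.abs(R) >= t, R, 0.0)   # zero out weak entries
    np.fill_diagonal(R_thr, 1.0)               # keep unit diagonal
    return R_thr
```

With independent noise columns, almost all off-diagonal entries fall below the threshold and are filtered, while genuinely correlated pairs survive.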
Directory of Open Access Journals (Sweden)
Svetlana Begel
2013-06-01
The selectivity of the cryptands [2.2.bpy] and [2.bpy.bpy] for the endohedral complexation of alkali, alkaline-earth, and earth metal ions was predicted on the basis of DFT (B3LYP/LANL2DZp) calculated structures and complex-formation energies. The cavity size in both cryptands lies between those of [2.2.2] and [bpy.bpy.bpy], such that complexation of K+, Sr2+, and Tl3+ is most favorable. While the moderately larger [2.2.bpy] prefers Rb+ complexation and shows equal preference for Sr2+ and Ba2+, the slightly smaller [2.bpy.bpy] yields more stable cryptates with Na+ and Ca2+. Although the CH2-unit-containing molecular bars fixed at the bridgehead nitrogen atoms determine the flexibility of the cryptands, the twist angles associated with the bipyridine and glycol building blocks also contribute considerably.
DEFF Research Database (Denmark)
Condie, Harriet M.; Dolder, Paul J.; Catchpole, Thomas L.
Reforms of the EU Common Fisheries Policy will make fundamental changes to European fisheries management, including a discard ban with catch quotas for regulated species and management to achieve MSY. We evaluate the impact of these changes on the revenue of North Sea demersal finfish fleets and fish stocks. With no change in behaviour, revenue is reduced by a mean of 31% compared to current management in the first year, but partly recovers by year 3, as fishing mortality is reduced and stocks increase. There are large differences in revenue changes between fleets, varying from -99% to +36%. Reductions in revenue create a strong incentive to avoid catching the limiting species, particularly if it is not a primary target. Selectivity changes that avoid 30% of cod catch reduced the economic impact for some fleets in moving to catch quotas. Increased flexibility will therefore be important in maintaining…
Directory of Open Access Journals (Sweden)
Kevin eDemeure
2014-09-01
The search for clinically useful protein biomarkers using advanced mass spectrometry approaches represents a major focus in cancer research. However, direct analysis of human samples may be challenging due to limited availability, the absence of appropriate control samples, or the large background variability observed in patient material. As an alternative, human tumors orthotopically implanted into a different species (xenografts) are clinically relevant models that have proven their utility in pre-clinical research. Patient-derived xenografts for glioblastoma have been extensively characterized in our laboratory and have been shown to retain the characteristics of the parental tumor at the phenotypic and genetic level. Such models were also found to adequately mimic the behavior and treatment response of human tumors. The reproducibility of such xenograft models, and the possibility to identify their host background and perform tumor-host interaction studies, are major advantages over the direct analysis of human samples. At the proteome level, the analysis of xenograft samples is complicated by the presence of proteins from two different species which, depending on tumor size, type, or location, often appear at variable ratios. Any proteomics approach aimed at quantifying proteins in such samples must identify species-specific peptides in order to avoid biases introduced by the host proteome. Here, we present an in-house methodology and tool developed to select peptides used as surrogates for protein candidates from a defined proteome (e.g., human) in a host proteome background (e.g., mouse, rat), suited for mass spectrometry analysis. The tools presented here are applicable to any species-specific proteome, provided a protein database is available. By linking the information from both proteomes, PeptideManager significantly facilitates and expedites the selection of peptides used as surrogates to analyze…
Directory of Open Access Journals (Sweden)
Nils Ternès
2017-05-01
Background: Thanks to advances in genomics and targeted treatments, more and more biomarker-based prediction models are being developed to predict potential benefit from treatments in randomized clinical trials. Although the methodological framework for developing and validating prediction models in a high-dimensional setting is increasingly established, no clear guidance yet exists on how to estimate expected survival probabilities in a penalized model with biomarker-by-treatment interactions. Methods: Based on a parsimonious biomarker selection in a penalized high-dimensional Cox model (lasso or adaptive lasso), we propose a unified framework to: estimate internally the predictive accuracy metrics of the developed model (using double cross-validation); estimate the individual survival probabilities at a given timepoint; construct confidence intervals thereof (analytical or bootstrap); and visualize them graphically (pointwise or smoothed with splines). We compared these strategies through a simulation study covering scenarios with or without biomarker effects. We applied the strategies to a large randomized phase III clinical trial that evaluated the effect of adding trastuzumab to chemotherapy in 1574 early breast cancer patients, for whom the expression of 462 genes was measured. Results: In our simulations, penalized regression models using the adaptive lasso estimated the survival probability of new patients with low bias and standard error; bootstrapped confidence intervals had empirical coverage probability close to the nominal level across very different scenarios. The double cross-validation performed on the training data set closely mimicked the predictive accuracy of the selected models in external validation data. We also propose a useful visual representation of the expected survival probabilities using splines. In the breast cancer trial, the adaptive lasso penalty selected a prediction model with 4…
Diagonal Likelihood Ratio Test for Equality of Mean Vectors in High-Dimensional Data
Hu, Zongliang
2017-10-27
We propose a likelihood ratio test framework for testing normal mean vectors in high-dimensional data under two common scenarios: the one-sample test and the two-sample test with equal covariance matrices. We derive the test statistics under the assumption that the covariance matrices follow a diagonal matrix structure. In comparison with the diagonal Hotelling's tests, our proposed test statistics display some interesting characteristics. In particular, they are a summation of the log-transformed squared t-statistics rather than a direct summation of those components. More importantly, to derive the asymptotic normality of our test statistics under the null and local alternative hypotheses, we do not require the assumption that the covariance matrix follows a diagonal matrix structure. As a consequence, our proposed test methods are very flexible and can be widely applied in practice. Finally, simulation studies and a real data analysis are also conducted to demonstrate the advantages of our likelihood ratio test method.
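The "summation of log-transformed squared t-statistics" admits a compact one-sample form. The sketch below is one plausible reading under a per-coordinate Gaussian likelihood (the centering and scaling constants needed for the asymptotic normal version of the test are omitted):

```python
import numpy as np

def diag_lrt_one_sample(X):
    """One-sample diagonal likelihood ratio statistic for H0: mu = 0.
    With a diagonal covariance, the -2 log likelihood ratio reduces to
    n * sum_j log(1 + t_j^2 / (n - 1)), a sum of log-transformed squared
    t-statistics rather than a direct sum of t_j^2 -- matching the
    abstract's description. Illustrative sketch, not the paper's code."""
    X = np.asarray(X, dtype=float)
    n, p = X.shape
    xbar = X.mean(axis=0)
    s = X.std(axis=0, ddof=1)
    t2 = n * xbar**2 / s**2            # squared one-sample t-statistics
    return n * np.log1p(t2 / (n - 1)).sum()
```

Under the null the statistic behaves roughly like a sum of p chi-squared(1) terms; a mean shift inflates it sharply.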
A sparse grid based method for generative dimensionality reduction of high-dimensional data
Bohn, Bastian; Garcke, Jochen; Griebel, Michael
2016-03-01
Generative dimensionality reduction methods play an important role in machine learning applications because they construct an explicit mapping from a low-dimensional space to the high-dimensional data space. We discuss a general framework to describe generative dimensionality reduction methods, where the main focus lies on a regularized principal manifold learning variant. Since most generative dimensionality reduction algorithms exploit the representer theorem for reproducing kernel Hilbert spaces, their computational costs grow at least quadratically in the number n of data. Instead, we introduce a grid-based discretization approach which automatically scales just linearly in n. To circumvent the curse of dimensionality of full tensor product grids, we use the concept of sparse grids. Furthermore, in real-world applications, some embedding directions are usually more important than others and it is reasonable to refine the underlying discretization space only in these directions. To this end, we employ a dimension-adaptive algorithm which is based on the ANOVA (analysis of variance) decomposition of a function. In particular, the reconstruction error is used to measure the quality of an embedding. As an application, the study of large simulation data from an engineering application in the automotive industry (car crash simulation) is performed.
International Nuclear Information System (INIS)
Snyder, Abigail C.; Jiao, Yu
2010-01-01
Neutron experiments at the Spallation Neutron Source (SNS) at Oak Ridge National Laboratory (ORNL) frequently generate large amounts of data (on the order of 10^6-10^12 data points). Hence, traditional data analysis tools run on a single CPU take too long to be practical and scientists are unable to efficiently analyze all data generated by experiments. Our goal is to develop a scalable algorithm to efficiently compute high-dimensional integrals of arbitrary functions. This algorithm can then be used to integrate the four-dimensional integrals that arise as part of modeling intensity from the experiments at the SNS. Here, three different one-dimensional numerical integration solvers from the GNU Scientific Library were modified and implemented to solve four-dimensional integrals. The results of these solvers on a final integrand provided by scientists at the SNS can be compared to the results of other methods, such as quasi-Monte Carlo methods, computing the same integral. A parallelized version of the most efficient method can allow scientists the opportunity to more effectively analyze all experimental data.
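The idea of chaining 1-D solvers to evaluate a 4-D integral can be illustrated with scipy's `nquad`, which wraps exactly this nesting. The integrand here is a hypothetical separable Gaussian (the actual SNS integrand is not given in the abstract), chosen so an exact value is available for checking, with a plain Monte Carlo estimate for comparison:

```python
import math
import numpy as np
from scipy import integrate
from scipy.special import erf

# Hypothetical smooth 4-D integrand standing in for the SNS intensity model.
# Separable Gaussian: exact integral over [-2, 2]^4 is (sqrt(pi)*erf(2))**4.
def f(x, y, z, w):
    return math.exp(-(x * x + y * y + z * z + w * w))

box = [(-2.0, 2.0)] * 4

# Nested adaptive 1-D quadrature along each axis -- the same nesting idea
# as combining modified GSL 1-D solvers.
val, abserr = integrate.nquad(f, box)
exact = (math.sqrt(math.pi) * erf(2.0)) ** 4

# Plain Monte Carlo on the same box, for comparison with quadrature.
rng = np.random.default_rng(0)
pts = rng.uniform(-2.0, 2.0, size=(200_000, 4))
mc = 4.0**4 * np.exp(-(pts**2).sum(axis=1)).mean()   # volume * mean of f
```

The nested quadrature reaches near machine precision on this smooth integrand, while the Monte Carlo error decays only like N^(-1/2), which is why the abstract's comparison between the two approaches is meaningful.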
Directory of Open Access Journals (Sweden)
Enkelejda Miho
2018-02-01
The adaptive immune system recognizes antigens via an immense array of antigen-binding antibodies and T-cell receptors, the immune repertoire. The interrogation of immune repertoires is of high relevance for understanding the adaptive immune response in disease and infection (e.g., autoimmunity, cancer, HIV). Adaptive immune receptor repertoire sequencing (AIRR-seq) has driven the quantitative and molecular-level profiling of immune repertoires, thereby revealing the high-dimensional complexity of the immune receptor sequence landscape. Several methods for the computational and statistical analysis of large-scale AIRR-seq data have been developed to resolve immune repertoire complexity and to understand the dynamics of adaptive immunity. Here, we review the current research on (i) diversity, (ii) clustering and network, (iii) phylogenetic, and (iv) machine learning methods applied to dissect, quantify, and compare the architecture, evolution, and specificity of immune repertoires. We summarize outstanding questions in computational immunology and propose future directions for systems immunology toward coupling AIRR-seq with the computational discovery of immunotherapeutics, vaccines, and immunodiagnostics.
Construction of high-dimensional neural network potentials using environment-dependent atom pairs.
Jose, K V Jovan; Artrith, Nongnuch; Behler, Jörg
2012-05-21
An accurate determination of the potential energy is the crucial step in computer simulations of chemical processes, but using electronic structure methods on-the-fly in molecular dynamics (MD) is computationally too demanding for many systems. Constructing more efficient interatomic potentials becomes intricate with increasing dimensionality of the potential-energy surface (PES), and for numerous systems the accuracy that can be achieved is still not satisfying and far from the reliability of first-principles calculations. Feed-forward neural networks (NNs) have a very flexible functional form, and in recent years they have been shown to be an accurate tool to construct efficient PESs. High-dimensional NN potentials based on environment-dependent atomic energy contributions have been presented for a number of materials. Still, these potentials may be improved by a more detailed structural description, e.g., in form of atom pairs, which directly reflect the atomic interactions and take the chemical environment into account. We present an implementation of an NN method based on atom pairs, and its accuracy and performance are compared to the atom-based NN approach using two very different systems, the methanol molecule and metallic copper. We find that both types of NN potentials provide an excellent description of both PESs, with the pair-based method yielding a slightly higher accuracy making it a competitive alternative for addressing complex systems in MD simulations.
Xia, Yin; Cai, Tianxi; Cai, T Tony
2018-01-01
Motivated by applications in genomics, we consider in this paper global and multiple testing for the comparisons of two high-dimensional linear regression models. A procedure for testing the equality of the two regression vectors globally is proposed and shown to be particularly powerful against sparse alternatives. We then introduce a multiple testing procedure for identifying unequal coordinates while controlling the false discovery rate and false discovery proportion. Theoretical justifications are provided to guarantee the validity of the proposed tests and optimality results are established under sparsity assumptions on the regression coefficients. The proposed testing procedures are easy to implement. Numerical properties of the procedures are investigated through simulation and data analysis. The results show that the proposed tests maintain the desired error rates under the null and have good power under the alternative at moderate sample sizes. The procedures are applied to the Framingham Offspring study to investigate the interactions between smoking and cardiovascular related genetic mutations important for an inflammation marker.
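The multiple-testing step controls the false discovery rate; the standard Benjamini-Hochberg step-up rule is the generic ingredient behind such procedures (this sketch does not reproduce the paper's specially constructed coordinate-wise statistics, only the FDR step applied to a vector of p-values):

```python
import numpy as np

def benjamini_hochberg(pvals, q=0.05):
    """Benjamini-Hochberg step-up rule at FDR level q: sort the p-values,
    find the largest k with p_(k) <= q*k/m, and reject the k smallest.
    Returns a boolean mask over the input, aligned with its order."""
    p = np.asarray(pvals, dtype=float)
    m = len(p)
    order = np.argsort(p)
    thresh = q * np.arange(1, m + 1) / m
    below = p[order] <= thresh
    k = np.max(np.nonzero(below)[0]) + 1 if below.any() else 0
    rejected = np.zeros(m, dtype=bool)
    rejected[order[:k]] = True
    return rejected
```

For example, with p-values (0.001, 0.002, 0.5, 0.9) at q = 0.05, the two small p-values are rejected and the rest retained.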
A Comparison of Machine Learning Methods in a High-Dimensional Classification Problem
Directory of Open Access Journals (Sweden)
Zekić-Sušac Marijana
2014-09-01
Background: Large-dimensional data modelling often relies on variable reduction methods in the pre-processing and post-processing stages. However, such reduction usually provides less information and yields lower model accuracy. Objectives: The aim of this paper is to assess the high-dimensional classification problem of recognizing entrepreneurial intentions of students by machine learning methods. Methods/Approach: Four methods were tested on the same dataset: artificial neural networks, CART classification trees, support vector machines, and k-nearest neighbour, in order to compare their efficiency in terms of classification accuracy. The performance of each method was compared on ten subsamples in a 10-fold cross-validation procedure, assessing the sensitivity and specificity of each model. Results: The artificial neural network model based on a multilayer perceptron yielded a higher classification rate than the models produced by the other methods. A pairwise t-test showed a statistically significant difference between the artificial neural network and the k-nearest neighbour model, while the differences among the other methods were not statistically significant. Conclusions: The tested machine learning methods are able to learn fast and achieve high classification accuracy. However, further improvement can be achieved by testing a few additional methodological refinements.
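The evaluation protocol — the same four model families compared by 10-fold cross-validated accuracy — can be sketched with scikit-learn on synthetic data (the student survey data are not public; CART is approximated by sklearn's `DecisionTreeClassifier`, and all hyperparameters below are illustrative defaults, not the paper's settings):

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for the high-dimensional questionnaire data.
X, y = make_classification(n_samples=200, n_features=50, n_informative=10,
                           random_state=42)

models = {
    "MLP": make_pipeline(StandardScaler(),
                         MLPClassifier(hidden_layer_sizes=(32,),
                                       max_iter=1000, random_state=42)),
    "CART": DecisionTreeClassifier(random_state=42),
    "SVM": make_pipeline(StandardScaler(), SVC(random_state=42)),
    "kNN": make_pipeline(StandardScaler(), KNeighborsClassifier()),
}

# 10-fold cross-validated accuracy per model, as in the paper's protocol.
scores = {name: cross_val_score(m, X, y, cv=10).mean()
          for name, m in models.items()}
```

Scaling is wrapped inside each pipeline so that it is re-fit on every training fold, avoiding leakage into the held-out fold.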
Energy Technology Data Exchange (ETDEWEB)
Dan Maljovec; Bei Wang; Valerio Pascucci; Peer-Timo Bremer; Michael Pernice; Robert Nourgaliev
2013-05-01
The next generation of methodologies for nuclear reactor Probabilistic Risk Assessment (PRA) explicitly accounts for the time element in modeling the probabilistic system evolution and uses numerical simulation tools to account for possible dependencies between failure events. The Monte-Carlo (MC) and the Dynamic Event Tree (DET) approaches belong to this new class of dynamic PRA methodologies. A challenge of dynamic PRA algorithms is the large amount of data they produce which may be difficult to visualize and analyze in order to extract useful information. We present a software tool that is designed to address these goals. We model a large-scale nuclear simulation dataset as a high-dimensional scalar function defined over a discrete sample of the domain. First, we provide structural analysis of such a function at multiple scales and provide insight into the relationship between the input parameters and the output. Second, we enable exploratory analysis for users, where we help the users to differentiate features from noise through multi-scale analysis on an interactive platform, based on domain knowledge and data characterization. Our analysis is performed by exploiting the topological and geometric properties of the domain, building statistical models based on its topological segmentations and providing interactive visual interfaces to facilitate such explorations. We provide a user’s guide to our software tool by highlighting its analysis and visualization capabilities, along with a use case involving dataset from a nuclear reactor safety simulation.
Schran, Christoph; Uhl, Felix; Behler, Jörg; Marx, Dominik
2018-03-01
The design of accurate helium-solute interaction potentials for the simulation of chemically complex molecules solvated in superfluid helium has long been a cumbersome task due to the rather weak but strongly anisotropic nature of the interactions. We show that this challenge can be met by using a combination of an effective pair potential for the He-He interactions and a flexible high-dimensional neural network potential (NNP) for describing the complex interaction between helium and the solute in a pairwise additive manner. This approach yields an excellent agreement with a mean absolute deviation as small as 0.04 kJ mol−1 for the interaction energy between helium and both hydronium and Zundel cations compared with coupled cluster reference calculations with an energetically converged basis set. The construction and improvement of the potential can be performed in a highly automated way, which opens the door for applications to a variety of reactive molecules to study the effect of solvation on the solute as well as the solute-induced structuring of the solvent. Furthermore, we show that this NNP approach yields very convincing agreement with the coupled cluster reference for properties like many-body spatial and radial distribution functions. This holds for the microsolvation of the protonated water monomer and dimer by a few helium atoms up to their solvation in bulk helium as obtained from path integral simulations at about 1 K.
Multi-Scale Factor Analysis of High-Dimensional Brain Signals
Ting, Chee-Ming
2017-05-18
In this paper, we develop an approach to modeling high-dimensional networks with a large number of nodes arranged in a hierarchical and modular structure. We propose a novel multi-scale factor analysis (MSFA) model which partitions the massive spatio-temporal data defined over the complex networks into a finite set of regional clusters. To achieve further dimension reduction, we represent the signals in each cluster by a small number of latent factors. The correlation matrix for all nodes in the network is approximated by lower-dimensional sub-structures derived from the cluster-specific factors. To estimate regional connectivity between numerous nodes (within each cluster), we apply principal components analysis (PCA) to produce factors which are derived as the optimal reconstruction of the observed signals under the squared loss. Then, we estimate global connectivity (between clusters or sub-networks) based on the factors across regions using the RV-coefficient as the cross-dependence measure. This gives a reliable and computationally efficient multi-scale analysis of both regional and global dependencies of the large networks. The proposed novel approach is applied to estimate brain connectivity networks using functional magnetic resonance imaging (fMRI) data. Results on resting-state fMRI reveal interesting modular and hierarchical organization of human brain networks during rest.
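The RV-coefficient used as the cross-dependence measure between cluster-level factors has a compact closed form; a sketch from its textbook definition (not the authors' implementation):

```python
import numpy as np

def rv_coefficient(X, Y):
    """RV coefficient between two column-centred data blocks observed on
    the same samples: tr(Sxy Syx) / sqrt(tr(Sxx^2) tr(Syy^2)), a matrix
    generalisation of squared correlation. Ranges from 0 (no linear
    cross-dependence) to 1 (one block is a linear image of the other)."""
    X = np.asarray(X, dtype=float)
    Y = np.asarray(Y, dtype=float)
    X = X - X.mean(axis=0)                 # centre each column
    Y = Y - Y.mean(axis=0)
    Sxy = X.T @ Y
    Sxx = X.T @ X
    Syy = Y.T @ Y
    num = np.trace(Sxy @ Sxy.T)            # = ||X'Y||_F^2 >= 0
    den = np.sqrt(np.trace(Sxx @ Sxx) * np.trace(Syy @ Syy))
    return num / den
```

It is invariant to scaling and shifting of either block, so `rv_coefficient(X, 3*X + 2)` equals 1 exactly.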
Meng, Xi; Nguyen, Bao D; Ridge, Clark; Shaka, A J
2009-01-01
High-dimensional (HD) NMR spectra have poorer digital resolution than low-dimensional (LD) spectra, for a fixed amount of experiment time. This has led to "reduced-dimensionality" strategies, in which several LD projections of the HD NMR spectrum are acquired, each with higher digital resolution; an approximate HD spectrum is then inferred by some means. We propose a strategy that moves in the opposite direction, by adding more time dimensions to increase the information content of the data set, even if only a very sparse time grid is used in each dimension. The full HD time-domain data can be analyzed by the filter diagonalization method (FDM), yielding very narrow resonances along all of the frequency axes, even those with sparse sampling. Integrating over the added dimensions of HD FDM NMR spectra reconstitutes LD spectra with enhanced resolution, often more quickly than direct acquisition of the LD spectrum with a larger number of grid points in each of the fewer dimensions. If the extra dimensions do not appear in the final spectrum, and are used solely to boost information content, we propose the moniker hidden-dimension NMR. This work shows that HD peaks have unmistakable frequency signatures that can be detected as single HD objects by an appropriate algorithm, even though their patterns would be tricky for a human operator to visualize or recognize, and even if digital resolution in an HD FT spectrum is very coarse compared with natural line widths.
Simulation-based hypothesis testing of high dimensional means under covariance heterogeneity.
Chang, Jinyuan; Zheng, Chao; Zhou, Wen-Xin; Zhou, Wen
2017-12-01
In this article, we study the problem of testing the mean vectors of high dimensional data in both one-sample and two-sample cases. The proposed testing procedures employ maximum-type statistics and parametric bootstrap techniques to compute the critical values. Unlike existing tests that rely heavily on structural conditions on the unknown covariance matrices, the proposed tests allow general covariance structures of the data and therefore enjoy a wide scope of applicability in practice. To enhance the power of the tests against sparse alternatives, we further propose two-step procedures with a preliminary feature-screening step. Theoretical properties of the proposed tests are investigated. Through extensive numerical experiments on synthetic data sets and a human acute lymphoblastic leukemia gene expression data set, we illustrate the performance of the new tests and how they may provide assistance in detecting disease-associated gene-sets. The proposed methods have been implemented in the R package HDtest, available on CRAN.
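A minimal sketch of a maximum-type one-sample test calibrated by a Gaussian parametric bootstrap, in the spirit of the abstract. The HDtest package's exact statistics and two-step screening are not reproduced here; all names and simplifications are our own:

```python
import numpy as np

def max_test_pvalue(X, n_boot=2000, seed=0):
    """One-sample maximum-type test of H0: mu = 0. The critical value is
    obtained by a parametric bootstrap under the ESTIMATED covariance, so
    no structural assumption (sparsity, bandedness, ...) on the covariance
    is needed -- the key point emphasised in the abstract."""
    rng = np.random.default_rng(seed)
    X = np.asarray(X, dtype=float)
    n, p = X.shape
    xbar = X.mean(axis=0)
    sd = X.std(axis=0, ddof=1)
    stat = np.max(np.abs(np.sqrt(n) * xbar / sd))   # max standardized mean
    S = np.cov(X, rowvar=False)                     # plug-in covariance
    boots = np.empty(n_boot)
    for b in range(n_boot):
        # Resample from N(0, S) to mimic the null distribution of the data.
        Z = rng.multivariate_normal(np.zeros(p), S, size=n)
        zbar = Z.mean(axis=0)
        zsd = Z.std(axis=0, ddof=1)
        boots[b] = np.max(np.abs(np.sqrt(n) * zbar / zsd))
    return np.mean(boots >= stat)                   # bootstrap p-value
```

A pronounced mean shift drives the observed maximum far beyond every bootstrap replicate, yielding a p-value near zero.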
Multi-SOM: an Algorithm for High-Dimensional, Small Size Datasets
Directory of Open Access Journals (Sweden)
Shen Lu
2013-04-01
Since experiments in bioinformatics take time, biological datasets are sometimes small but of high dimensionality. From probability theory, in order to discover knowledge from a set of data, we must have a sufficient number of samples; otherwise, the error bounds can become too large to be useful. For the SOM (Self-Organizing Map) algorithm, the initial map is based on the training data. To avoid the bias caused by insufficient training data, in this paper we present an algorithm called Multi-SOM. Multi-SOM builds a number of small self-organizing maps instead of just one big map, and Bayesian decision theory is used to make the final decision among similar neurons on different maps. In this way, we can better ensure a truly random initial weight vector set, the map size is less of a concern, and errors tend to average out. In our experiments on microarray datasets, which are high-dimensional data composed of genetic information, the precision of Multi-SOM is 10.58% greater than that of SOM, and its recall is 11.07% greater. Thus, the Multi-SOM algorithm is practical.
Kaleas, Kimberly A; Schmelzer, Charles H; Pizarro, Shelly A
2010-01-08
Mixed-mode chromatography resins are gaining popularity as effective purification tools for challenging feedstocks. This study presents the development of an industrial application to selectively capture recombinant human vascular endothelial growth factor (rhVEGF) on Capto MMC from an alkaline feedstock. Capto MMC resin contains a ligand that can participate in ionic, hydrophobic, and hydrogen bonding interactions with proteins and is coupled to a highly cross-linked agarose bead matrix. VEGF is a key growth factor involved in angiogenesis and has therapeutic applications for wound healing. In this process, it is expressed in Escherichia coli as inclusion bodies. Solids are harvested from the cell lysate, and the rhVEGF is solubilized and refolded at pH 9.8 in the presence of urea and redox reagents. The unique mixed-mode characteristics of Capto MMC enabled capture of this basic protein with minimal load conditioning and delivered a concentrated pool for downstream processing with >95% yields while reducing host cell protein content. The study explores the impact of loading conditions and residence time on the dynamic binding capacity, as well as the development of elution conditions for optimal purification performance. After evaluating various elution buffers, L-arginine HCl was shown to be an effective eluting agent for rhVEGF desorption from the Capto MMC mixed-mode resin, since it successfully disrupted the multiple interactions between the resin and rhVEGF. The lab-scale effort produced a robust chromatography step that was successfully implemented at commercial manufacturing scale.
Directory of Open Access Journals (Sweden)
Yousry M. Issa
2010-01-01
Triprolidine hydrochloride (TpCl) ion-selective carbon paste electrodes were constructed using Tp-TPB/Tp-CoN and Tp-TPB/Tp-PTA as ion-exchangers. The two electrodes revealed Nernstian responses with slopes of 58.4 and 58.1 mV decade−1 at 25 °C in the ranges 6 × 10−6–1 × 10−2 and 2 × 10−5–1 × 10−2 M for Tp-TPB/Tp-CoN and Tp-TPB/Tp-PTA, respectively. The potentials of these electrodes were independent of pH in the ranges of 2.5–7.0 and 4.5–7.0, and the detection limits were 6 × 10−6 and 1 × 10−5 M for Tp-TPB/Tp-CoN and Tp-TPB/Tp-PTA, respectively. The electrodes showed very good selectivity for TpCl with respect to a large number of inorganic cations and compounds. The standard addition and potentiometric titration methods, as well as FIA, were applied to the determination of TpCl in pure solutions and pharmaceutical preparations. The results obtained were in close agreement with those found by the official method. The mean recovery values were 100.91% and 97.92%, with low coefficient-of-variation values of 0.94% and 0.56%, in pure solutions; 99.82% and 98.53%, with coefficient-of-variation values of 2.20% and 0.73%, for Actifed tablets and Actifed syrup, respectively, using the Tp-TPB/Tp-CoN electrode; and 98.85% and 99.18%, with coefficient-of-variation values of 0.48% and 0.85%, for Actifed tablets and Actifed syrup, respectively, using the Tp-TPB/Tp-PTA electrode.
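The slopes quoted above (58.4 and 58.1 mV decade−1 at 25 °C) can be checked against the theoretical Nernstian slope 2.303RT/zF, assuming the triprolidinium cation is monovalent (z = 1); a quick sketch:

```python
import math

R = 8.314      # gas constant, J mol^-1 K^-1
F = 96485.0    # Faraday constant, C mol^-1
T = 298.15     # 25 degrees C in kelvin
z = 1          # assumed charge of the triprolidinium cation

# Nernstian slope per decade of activity, converted from V to mV
slope_mV = 1000 * math.log(10) * R * T / (z * F)
print(round(slope_mV, 2))
```

The theoretical value is about 59.16 mV decade−1, so the measured 58.4 and 58.1 mV decade−1 are indeed near-Nernstian.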
Directory of Open Access Journals (Sweden)
Kohtaro Miyazawa
In our previous study, we demonstrated the propagation of mouse-passaged scrapie isolates with long incubation periods (L-type), derived from natural Japanese sheep scrapie cases, in murine hypothalamic GT1-7 cells, along with accumulation of disease-associated prion protein (PrPSc). Here we analyzed the susceptibility of GT1-7 cells to scrapie prions by exposure to infected mouse brains at different passages following interspecies transmission. Wild-type mice challenged with a natural sheep scrapie case (Kanagawa) exhibited heterogeneity of transmitted scrapie prions in early passages, and this mixed population converged upon one with a short incubation period (S-type) during subsequent passages. However, when GT1-7 cells were challenged with these heterologous samples, L-type prions became dominant. This study demonstrated that the susceptibility of GT1-7 cells to L-type prions was at least 10^5 times higher than that to S-type prions and that L-type prion-specific biological characteristics remained unchanged after serial passages in GT1-7 cells. This suggests that a GT1-7 cell culture model would be more useful for the economical and stable amplification of L-type prions at the laboratory level. Furthermore, this cell culture model might be used to selectively propagate L-type scrapie prions from a mixed prion population.
Energy Technology Data Exchange (ETDEWEB)
Stoeckmann, M.
2000-07-01
Phenol was to be produced by direct oxidation of benzene with environmentally friendly oxidants such as hydrogen peroxide, oxygen, or ozone. The catalysts were amorphous microporous mixed oxides (AMM), whose properties can be tuned directly in the sol-gel synthesis process. Apart from benzene, cyclohexane was also oxidized with ozone using AMM catalysts in order to learn more about the potential of ozone as an oxidant in heterogeneously catalyzed reactions.
International Nuclear Information System (INIS)
Taki, Tomihiro; Komoto, Shigetoshi; Otomura, Keiichiro; Takenaka, Toshihide; Sato, Nobuaki; Fujino, Takeo.
1996-01-01
Selective extraction of uranium from a phosphate ore was studied using a mixed gas of Cl2 and O2. On heating the ore and carbon mixture in Cl2, the volatilized chloride of uranium is accompanied by iron, aluminum, phosphorus and silicon chlorides. The addition of O2 gas effectively lowered the volatilization ratios of aluminum, phosphorus and silicon. The ratio decreased with increasing oxygen flow rate up to 50 ml/min at 1,223 K (Cl2: 100 ml/min, O2+N2: 400 ml/min). The volatilization ratio of uranium remained almost unchanged at 90% up to 50 ml/min O2 (carbon amount: 15 wt%). The effects of the other parameters, i.e. Cl2 flow rate, carbon amount, reaction temperature and time, were also examined. (author)
Bhadra, Anindya; Mallick, Bani K.
2013-01-01
We apply our method to an expression quantitative trait loci (eQTL) analysis on publicly available single nucleotide polymorphism (SNP) and gene expression data for humans, where the primary interest lies in finding significant associations between the two sets.
Liu, Wen
2015-08-03
Incomplete oxidation of (N-di-tert-butylphosphino)-6-(2-methyl-2’H-benzoimidazole)-2-aminepyridine dichlorocobalt (PN3CoCl2) in DMF results in a unique co-crystal I formed from three components in a 1:1:1 molar ratio: DMF, unit A (Co1), and unit B (Co2) (PN3 ligand in unit A: (N-di-tert-butylphosphino)-6-(2’-methyl-2’H-benzoimidazole)-2-aminepyridine; O=PN3 ligand in unit B: (N-di-tert-butylphosphinoxide)-6-(2’-methyl-2’H-benzoimidazole)-2-aminepyridine). The Co1 and Co2 complexes both display a five-coordinate, distorted square-pyramidal geometry around the metal center. The Co1 center is coordinated to the PN3 ligand via two N atoms from the pyridine and benzoimidazole moieties as well as one P atom, while the Co2 center is coordinated to the oxidized ligand O=PN3 via two N atoms from the pyridine and benzoimidazole moieties as well as one O atom from a DMF molecule, with the oxidized phosphine moiety (O=P) excluded from the coordination sphere. Activated with AlEt2Cl, the co-crystallized complexes I actively convert butadiene to polybutadiene, affording cis-1,4 polybutadiene with a cis-1,4 content of up to 95.5-97.8% and a number-average molecular weight of ca. 10^5 g/mol. The high cis-1,4 selectivity and monomodal GPC curve of the resultant polymer imply that identical active species are generated from the two distinct cobalt centers.
Liu, Wen; Pan, Weijing; Wang, Peng; Li, Wei; Mu, Jingshan; Weng, Gengsheng; Jia, Xiaoyu; Gong, Dirong; Huang, Kuo-Wei
2015-01-01
Incomplete oxidation of (N-di-tert-butylphosphino)-6-(2-methyl-2’H-benzoimidazole)-2-aminepyridine dichlorocobalt (PN3CoCl2) in DMF results in a unique co-crystal I formed from three components in a 1:1:1 molar ratio: DMF, unit A (Co1), and unit B (Co2) (PN3 ligand in unit A: (N-di-tert-butylphosphino)-6-(2’-methyl-2’H-benzoimidazole)-2-aminepyridine; O=PN3 ligand in unit B: (N-di-tert-butylphosphinoxide)-6-(2’-methyl-2’H-benzoimidazole)-2-aminepyridine). The Co1 and Co2 complexes both display a five-coordinate, distorted square-pyramidal geometry around the metal center. The Co1 center is coordinated to the PN3 ligand via two N atoms from the pyridine and benzoimidazole moieties as well as one P atom, while the Co2 center is coordinated to the oxidized ligand O=PN3 via two N atoms from the pyridine and benzoimidazole moieties as well as one O atom from a DMF molecule, with the oxidized phosphine moiety (O=P) excluded from the coordination sphere. Activated with AlEt2Cl, the co-crystallized complexes I actively convert butadiene to polybutadiene, affording cis-1,4 polybutadiene with a cis-1,4 content of up to 95.5-97.8% and a number-average molecular weight of ca. 10^5 g/mol. The high cis-1,4 selectivity and monomodal GPC curve of the resultant polymer imply that identical active species are generated from the two distinct cobalt centers.
Alaslai, Nasser Y.; Ghanem, Bader; Alghunaimi, Fahd; Litwiller, Eric; Pinnau, Ingo
2016-01-01
The effect of hydroxyl functionalization on the m-phenylene diamine moiety of 6FDA dianhydride-based polyimides was investigated for gas separation applications. Pure-gas permeability coefficients of He, H2, N2, O2, CH4, and CO2 were measured at 35 °C and 2 atm. The introduction of hydroxyl groups in the diamine moiety of 6FDA-diaminophenol (DAP) and 6FDA-diamino resorcinol (DAR) polyimides tightened the overall polymer structure due to increased charge transfer complex formation compared to unfunctionalized 6FDA-m-phenylene diamine (mPDA). The BET surface areas based on nitrogen adsorption of 6FDA-DAP (54 m2 g−1) and 6FDA-DAR (45 m2 g−1) were ~18% and ~32% lower than that of 6FDA-mPDA (66 m2 g−1). 6FDA-mPDA had a pure-gas CO2 permeability of 14 Barrer and a CO2/CH4 selectivity of 70. The hydroxyl-functionalized polyimides 6FDA-DAP and 6FDA-DAR exhibited very high pure-gas CO2/CH4 selectivities of 92 and 94 with moderate CO2 permeabilities of 11 and 8 Barrer, respectively. It was demonstrated that hydroxyl-containing polyimide membranes maintained very high CO2/CH4 selectivity (~75 at a CO2 partial pressure of 10 atm) due to CO2 plasticization resistance when tested under high-pressure mixed-gas conditions. Functionalization with hydroxyl groups may thus be a promising strategy towards attaining highly selective polyimides for economical membrane-based natural gas sweetening.
Alaslai, Nasser Y.
2016-01-05
The effect of hydroxyl functionalization on the m-phenylene diamine moiety of 6FDA dianhydride-based polyimides was investigated for gas separation applications. Pure-gas permeability coefficients of He, H2, N2, O2, CH4, and CO2 were measured at 35 °C and 2 atm. The introduction of hydroxyl groups in the diamine moiety of 6FDA-diaminophenol (DAP) and 6FDA-diamino resorcinol (DAR) polyimides tightened the overall polymer structure due to increased charge transfer complex formation compared to unfunctionalized 6FDA-m-phenylene diamine (mPDA). The BET surface areas based on nitrogen adsorption of 6FDA-DAP (54 m2 g−1) and 6FDA-DAR (45 m2 g−1) were ~18% and ~32% lower than that of 6FDA-mPDA (66 m2 g−1). 6FDA-mPDA had a pure-gas CO2 permeability of 14 Barrer and a CO2/CH4 selectivity of 70. The hydroxyl-functionalized polyimides 6FDA-DAP and 6FDA-DAR exhibited very high pure-gas CO2/CH4 selectivities of 92 and 94 with moderate CO2 permeabilities of 11 and 8 Barrer, respectively. It was demonstrated that hydroxyl-containing polyimide membranes maintained very high CO2/CH4 selectivity (~75 at a CO2 partial pressure of 10 atm) due to CO2 plasticization resistance when tested under high-pressure mixed-gas conditions. Functionalization with hydroxyl groups may thus be a promising strategy towards attaining highly selective polyimides for economical membrane-based natural gas sweetening.
International Nuclear Information System (INIS)
Mahdavi, Vahid; Soleimani, Shima
2014-01-01
Graphical abstract: Oxidation of various alcohols is studied in the liquid phase over a new composite mixed oxide (V2O5/OMS-2) catalyst using tert-butyl hydroperoxide (TBHP). The activity of the V2O5/OMS-2 samples was considerably increased with respect to the OMS-2 catalyst, and these samples are found to be suitable for the selective oxidation of alcohols. - Highlights: • V2O5/K-OMS-2 with different V/Mn molar ratios prepared by the impregnation method. • Oxidation of alcohols was studied in the liquid phase over the V2O5/K-OMS-2 catalyst. • V2O5/K-OMS-2 catalyst had excellent activity for alcohol oxidation. • Benzyl alcohol oxidation using excess TBHP followed pseudo-first-order kinetics. • The selected catalyst was reused without significant loss of activity. - Abstract: This work reports the synthesis and characterization of the mixed oxide vanadium–manganese catalyst V2O5/K-OMS-2 at various V/Mn molar ratios, prepared by the impregnation method. Characterization of these new composite materials was carried out by elemental analysis, BET, XRD, FT-IR, SEM and TEM techniques. Results of these analyses showed that vanadium-impregnated samples contained mixed phases of cryptomelane and crystalline V2O5 species. Oxidation of various alcohols was studied in the liquid phase over the V2O5/K-OMS-2 catalyst using tert-butyl hydroperoxide (TBHP) and H2O2 as the oxidants. The activity of the V2O5/K-OMS-2 samples was increased considerably with respect to the K-OMS-2 catalyst due to the interaction of manganese oxide and V2O5. The kinetics of benzyl alcohol oxidation using excess TBHP over the V2O5/K-OMS-2 catalyst were investigated at different temperatures, and a pseudo-first-order reaction was determined with respect to benzyl alcohol. The effects of reaction time, oxidant/alcohol molar ratio, reaction temperature, solvents, catalyst recycling potential and leaching were also investigated.
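Under excess TBHP, pseudo-first-order behavior means ln(C0/C) = kt, so the rate constant is the slope of a line through the origin. A minimal sketch with an assumed, purely illustrative rate constant (the paper's actual values are not reproduced here):

```python
import numpy as np

k_true = 0.12                    # assumed rate constant (1/min), illustrative only
t = np.linspace(0.0, 30.0, 16)   # sampling times, min
C = 1.0 * np.exp(-k_true * t)    # benzyl alcohol concentration under excess TBHP

# pseudo-first-order: ln(C0/C) = k t, so fit k as the slope through the origin
k_est = float(np.sum(t * np.log(C[0] / C)) / np.sum(t * t))
print(k_est)
```

Repeating the fit at several temperatures and applying the Arrhenius relation would then give the activation energy, which is the usual next step in such kinetic studies.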
Directory of Open Access Journals (Sweden)
Laurent Berge
2012-01-01
This paper presents the R package HDclassif, which is devoted to the clustering and discriminant analysis of high-dimensional data. The classification methods proposed in the package result from a new parametrization of the Gaussian mixture model which combines the idea of dimension reduction with model constraints on the covariance matrices. The supervised classification method using this parametrization is called high dimensional discriminant analysis (HDDA). In a similar manner, the associated clustering method is called high dimensional data clustering (HDDC) and uses the expectation-maximization algorithm for inference. In order to correctly fit the data, both methods estimate the specific subspace and the intrinsic dimension of the groups. Due to the constraints on the covariance matrices, the number of parameters to estimate is significantly lower than in other model-based methods, which allows the methods to be stable and efficient in high dimensions. Two introductory examples illustrated with R code allow the user to discover the hdda and hddc functions. Experiments on simulated and real datasets also compare HDDC and HDDA with existing classification methods on high-dimensional datasets. HDclassif is free software distributed under the General Public License as part of the R software project.
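A key ingredient of HDDA/HDDC is estimating each group's intrinsic dimension from the eigenvalue scree of its covariance matrix. The following Python sketch shows the idea with a simple threshold variant of Cattell's scree test (an assumption on my part; the package's exact rule may differ), on synthetic data with two real degrees of freedom embedded in ten dimensions:

```python
import numpy as np

def intrinsic_dim(X, threshold=0.2):
    """Cattell-style scree test: keep dimensions up to the last eigenvalue
    drop that is at least `threshold` times the largest drop."""
    ev = np.sort(np.linalg.eigvalsh(np.cov(X, rowvar=False)))[::-1]
    drops = -np.diff(ev)                     # successive eigenvalue decreases
    big = np.where(drops >= threshold * drops.max())[0]
    return int(big.max()) + 1

rng = np.random.default_rng(0)
latent = rng.normal(size=(500, 2)) * [3.0, 2.0]   # two latent factors
A = rng.normal(size=(2, 10))                      # embed them in 10 dimensions
X = latent @ A + 0.05 * rng.normal(size=(500, 10))
print(intrinsic_dim(X))
```

Estimating the dimension per group, rather than fixing it globally, is what lets the parsimonious covariance models keep the parameter count low in high dimensions.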
International Nuclear Information System (INIS)
Guerrieri, A.
2009-01-01
In this report the largest Lyapunov characteristic exponent of a high-dimensional atmospheric global circulation model of intermediate complexity has been estimated numerically. A sensitivity analysis has been carried out by varying the equator-to-pole temperature difference, the spatial resolution, and the values of some parameters employed by the model. Both chaotic and non-chaotic regimes of circulation have been found.
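The numerical estimation behind such a study can be illustrated in one dimension: the largest Lyapunov exponent is the orbit-average of the log of the local stretching rate (the Benettin procedure generalizes this to high-dimensional flows via tangent-vector renormalization). A sketch on the logistic map, whose exponent at r = 4 is known to be ln 2:

```python
import math

def largest_lyapunov(f, df, x0, n=20000, burn=200):
    """Average log|f'(x)| along an orbit of the 1-D map f (Benettin-style)."""
    x = x0
    for _ in range(burn):                    # discard the transient
        x = f(x)
    total = 0.0
    for _ in range(n):
        x = f(x)
        total += math.log(abs(df(x)) + 1e-300)   # guard against f'(x) == 0
    return total / n

r = 4.0
lam = largest_lyapunov(lambda x: r * x * (1 - x),
                       lambda x: r * (1 - 2 * x), x0=0.2)
print(lam)   # theory: ln 2 ~= 0.693, so a positive value signals chaos
```

A positive estimate signals a chaotic regime, a non-positive one a regular regime, which is exactly the distinction drawn in the sensitivity analysis above.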
International Nuclear Information System (INIS)
Langrene, Nicolas
2014-01-01
This thesis deals with the numerical solution of general stochastic control problems, with notable applications to electricity markets. We first propose a structural model for the price of electricity, allowing for price spikes well above the marginal fuel price under strained market conditions. This model allows one to price and partially hedge electricity derivatives, using fuel forwards as hedging instruments. Then, we propose an algorithm, which combines Monte-Carlo simulations with local basis regressions, to solve general optimal switching problems. A comprehensive rate of convergence of the method is provided. Moreover, we manage to make the algorithm parsimonious in memory (and hence suitable for high dimensional problems) by generalizing to this framework a memory reduction method that avoids the storage of the sample paths. We illustrate this on the problem of investments in new power plants (our structural power price model allowing the new plants to impact the price of electricity). Finally, we study more general stochastic control problems (the control can be continuous and impact the drift and volatility of the state process), the solutions of which belong to the class of fully nonlinear Hamilton-Jacobi-Bellman equations and can be handled via constrained Backward Stochastic Differential Equations, for which we develop a backward algorithm based on control randomization and parametric optimizations. A rate of convergence between the constrained BSDE and its discrete version is provided, as well as an estimate of the optimal control. This algorithm is then applied to the problem of super-replication of options under uncertain volatilities (and correlations). (author)
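The Monte-Carlo-plus-regression idea can be illustrated on the simplest switching problem, optimal stopping. The sketch below is a Longstaff–Schwartz-style backward induction for a Bermudan put; it uses a global polynomial basis rather than the local bases of the thesis, and all parameter values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
S0, K, r, sigma, T, steps, n = 100.0, 100.0, 0.05, 0.2, 1.0, 12, 20000
dt = T / steps

# simulate geometric Brownian motion paths (columns are exercise dates)
Z = rng.normal(size=(n, steps))
S = S0 * np.exp(np.cumsum((r - 0.5 * sigma ** 2) * dt
                          + sigma * np.sqrt(dt) * Z, axis=1))

def payoff(s):
    return np.maximum(K - s, 0.0)

V = payoff(S[:, -1])                  # value at maturity
disc = np.exp(-r * dt)
for t in range(steps - 2, -1, -1):    # backward induction over exercise dates
    itm = payoff(S[:, t]) > 0         # regress only where exercise is possible
    X = S[itm, t]
    A = np.vander(X, 4)               # cubic polynomial basis
    coef, *_ = np.linalg.lstsq(A, V[itm] * disc, rcond=None)
    cont = A @ coef                   # estimated continuation value
    ex = payoff(X) > cont
    V[itm] = np.where(ex, payoff(X), V[itm] * disc)
    V[~itm] = V[~itm] * disc
price = float(np.mean(V) * disc)
print(price)
```

The regression step is where memory-reduction techniques matter: a naive implementation must keep all simulated paths, which is exactly what the thesis's method avoids in high dimension.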
Evaluation of a new high-dimensional miRNA profiling platform
Directory of Open Access Journals (Sweden)
Lamblin Anne-Francoise
2009-08-01
Abstract Background MicroRNAs (miRNAs) are a class of approximately 22 nucleotide long, widely expressed RNA molecules that play important regulatory roles in eukaryotes. To investigate miRNA function, it is essential that methods to quantify their expression levels be available. Methods We evaluated a new miRNA profiling platform that utilizes Illumina's existing robust DASL chemistry as the basis for the assay. Using total RNA from five colon cancer patients and four cell lines, we evaluated the reproducibility of miRNA expression levels across replicates and with varying amounts of input RNA. The beta test version comprised 735 miRNA targets of Illumina's miRNA profiling application. Results Reproducibility between sample replicates within a plate was good (Spearman's correlation 0.91 to 0.98), as was the plate-to-plate reproducibility for replicates run on different days (Spearman's correlation 0.84 to 0.98). To determine whether quality data could be obtained from a broad range of input RNA, data obtained from amounts ranging from 25 ng to 800 ng were compared to those obtained at 200 ng. No effect across the range of RNA input was observed. Conclusion These results indicate that very small amounts of starting material are sufficient to allow sensitive miRNA profiling using the Illumina miRNA high-dimensional platform. Nonlinear biases were observed between replicates, indicating the need for abundance-dependent normalization. Overall, the performance characteristics of the Illumina miRNA profiling system were excellent.
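The reproducibility metric used above, Spearman's correlation, is simply the Pearson correlation of ranks, which makes it insensitive to the nonlinear biases the authors observed. A minimal sketch (assuming no tied values; the replicate intensities are hypothetical):

```python
import numpy as np

def spearman(a, b):
    """Spearman correlation as the Pearson correlation of ranks (no ties)."""
    ra = np.argsort(np.argsort(a)).astype(float)   # rank transform
    rb = np.argsort(np.argsort(b)).astype(float)
    ra -= ra.mean()
    rb -= rb.mean()
    return float((ra * rb).sum() / np.sqrt((ra ** 2).sum() * (rb ** 2).sum()))

rep1 = np.array([5.1, 7.3, 2.2, 9.0, 4.4])   # hypothetical replicate intensities
rep2 = np.array([5.0, 7.8, 2.5, 8.7, 4.1])   # same ordering, shifted values
print(spearman(rep1, rep2))
```

Because only the ordering matters, two replicates related by any monotone distortion still score 1.0, which is why a rank correlation is the natural choice before abundance-dependent normalization has been applied.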
Multivariate linear regression of high-dimensional fMRI data with multiple target variables.
Valente, Giancarlo; Castellanos, Agustin Lage; Vanacore, Gianluca; Formisano, Elia
2014-05-01
Multivariate regression is increasingly used to study the relation between fMRI spatial activation patterns and experimental stimuli or behavioral ratings. With linear models, informative brain locations are identified by mapping the model coefficients. This is a central aspect in neuroimaging, as it provides the sought-after link between the activity of neuronal populations and a subject's perception, cognition or behavior. Here, we show that mapping of informative brain locations using multivariate linear regression (MLR) may lead to incorrect conclusions and interpretations. MLR algorithms for high dimensional data are designed to deal with targets (stimuli or behavioral ratings, in fMRI) separately, and the predictive map of a model integrates information deriving from both neural activity patterns and experimental design. Not accounting explicitly for the presence of other targets whose associated activity spatially overlaps with the one of interest may lead to predictive maps that are troublesome to interpret. We propose a new model that can correctly identify the spatial patterns associated with a target while achieving good generalization. For each target, the training is based on an augmented dataset, which includes all remaining targets. The estimation on such datasets produces both maps and interaction coefficients, which are then used to generalize. The proposed formulation is independent of the regression algorithm employed. We validate this model on simulated fMRI data and on a publicly available dataset. Results indicate that our method achieves high spatial sensitivity and good generalization, and that it helps disentangle specific neural effects from interactions with predictive maps associated with other targets. Copyright © 2013 Wiley Periodicals, Inc.
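The baseline that the authors improve upon, fitting one linear model per target and reading the coefficients as a spatial map, can be sketched with a closed-form multi-target ridge regression. This is not the authors' augmented model, and the "scans"/"voxels" data are synthetic:

```python
import numpy as np

def ridge_multi(X, Y, lam=1.0):
    """Closed-form ridge for several targets at once: W = (X'X + lam I)^-1 X'Y.
    Column j of W is the coefficient 'map' for target j."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ Y)

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))          # 200 scans, 50 voxels
W_true = np.zeros((50, 2))
W_true[:5, 0] = 1.0                     # target 0 driven by voxels 0-4
W_true[5:10, 1] = 1.0                   # target 1 driven by voxels 5-9
Y = X @ W_true + 0.01 * rng.normal(size=(200, 2))
W = ridge_multi(X, Y, lam=1e-3)
print(np.abs(W - W_true).max())
```

The paper's point is that when targets are correlated and their voxel patterns overlap, such per-target maps mix effects, which is what the augmented training set with interaction coefficients is designed to disentangle.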
Gomez, Luis J; Yücel, Abdulkadir C; Hernandez-Garcia, Luis; Taylor, Stephan F; Michielssen, Eric
2015-01-01
A computational framework for uncertainty quantification in transcranial magnetic stimulation (TMS) is presented. The framework leverages high-dimensional model representations (HDMRs), which approximate observables (i.e., quantities of interest such as electric (E) fields induced inside targeted cortical regions) via series of iteratively constructed component functions involving only the most significant random variables (i.e., parameters that characterize the uncertainty in a TMS setup such as the position and orientation of TMS coils, as well as the size, shape, and conductivity of the head tissue). The component functions of HDMR expansions are approximated via a multielement probabilistic collocation (ME-PC) method. While approximating each component function, a quasi-static finite-difference simulator is used to compute observables at integration/collocation points dictated by the ME-PC method. The proposed framework requires far fewer simulations than traditional Monte Carlo methods for providing highly accurate statistical information (e.g., the mean and standard deviation) about the observables. The efficiency and accuracy of the proposed framework are demonstrated via its application to the statistical characterization of E-fields generated by TMS inside cortical regions of an MRI-derived realistic head model. Numerical results show that while uncertainties in tissue conductivities have negligible effects on TMS operation, variations in coil position/orientation and brain size significantly affect the induced E-fields. Our numerical results have several implications for the use of TMS during depression therapy: 1) uncertainty in the coil position and orientation may reduce the response rates of patients; 2) practitioners should favor targets on the crest of a gyrus to obtain maximal stimulation; and 3) an increasing scalp-to-cortex distance reduces the magnitude of E-fields on the surface and inside the cortex.
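The core of an HDMR expansion is approximating a many-variable observable by a constant plus low-order component functions. A first-order cut-HDMR sketch (the framework above additionally uses ME-PC collocation and iterates to higher orders; here the test function and grids are illustrative, and first-order is exact because the function is additive):

```python
import numpy as np

def cut_hdmr(f, ref, grids):
    """First-order cut-HDMR: f(x) ~= f0 + sum_i f_i(x_i), with each component
    tabulated by varying one input while the others stay at the reference cut."""
    f0 = f(ref)
    comps = []
    for i, g in enumerate(grids):
        vals = []
        for v in g:
            x = ref.copy()
            x[i] = v
            vals.append(f(x) - f0)
        comps.append(np.array(vals))
    return f0, comps

def hdmr_eval(f0, comps, grids, x):
    # interpolate each 1-D component at the query point and sum
    return f0 + sum(np.interp(x[i], g, c)
                    for i, (g, c) in enumerate(zip(grids, comps)))

f = lambda x: x[0] ** 2 + 3.0 * x[1]      # additive: first-order HDMR is exact
ref = np.array([0.0, 0.0])
grids = [np.linspace(-1, 1, 201), np.linspace(-1, 1, 201)]
f0, comps = cut_hdmr(f, ref, grids)
x = np.array([0.37, -0.52])
print(hdmr_eval(f0, comps, grids, x), f(x))
```

The efficiency claim in the abstract comes from exactly this structure: tabulating a handful of 1-D (and, if needed, 2-D) component functions costs far fewer simulator calls than sampling the full joint input space by Monte Carlo.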
Landfors, Mattias; Philip, Philge; Rydén, Patrik; Stenberg, Per
2011-01-01
Genome-wide analysis of gene expression or protein binding patterns using different array or sequencing based technologies is now routinely performed to compare different populations, such as treatment and reference groups. It is often necessary to normalize the data obtained to remove technical variation introduced in the course of conducting experimental work, but standard normalization techniques are not capable of eliminating technical bias in cases where the distribution of the truly altered variables is skewed, i.e. when a large fraction of the variables are either positively or negatively affected by the treatment. However, several experiments are likely to generate such skewed distributions, including ChIP-chip experiments for the study of chromatin, gene expression experiments for the study of apoptosis, and SNP-studies of copy number variation in normal and tumour tissues. A preliminary study using spike-in array data established that the capacity of an experiment to identify altered variables and generate unbiased estimates of the fold change decreases as the fraction of altered variables and the skewness increases. We propose the following work-flow for analyzing high-dimensional experiments with regions of altered variables: (1) Pre-process raw data using one of the standard normalization techniques. (2) Investigate if the distribution of the altered variables is skewed. (3) If the distribution is not believed to be skewed, no additional normalization is needed. Otherwise, re-normalize the data using a novel HMM-assisted normalization procedure. (4) Perform downstream analysis. Here, ChIP-chip data and simulated data were used to evaluate the performance of the work-flow. It was found that skewed distributions can be detected by using the novel DSE-test (Detection of Skewed Experiments). Furthermore, applying the HMM-assisted normalization to experiments where the distribution of the truly altered variables is skewed results in considerably higher
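Step (2) of the work-flow, deciding whether the distribution of altered variables is skewed, can be illustrated with a plain moment-based skewness check (a simple stand-in, not the DSE-test itself; the simulated log-ratios are synthetic):

```python
import numpy as np

def sample_skewness(x):
    """Standardized third central moment of a sample."""
    x = np.asarray(x, dtype=float)
    m, s = x.mean(), x.std()
    return float(((x - m) ** 3).mean() / s ** 3)

rng = np.random.default_rng(0)
# null experiment: symmetric log-ratios, nothing systematically altered
null_ratios = rng.normal(size=10000)
# skewed experiment: 70% unaffected variables, 30% up-regulated
altered = np.concatenate([rng.normal(size=7000),
                          rng.normal(loc=3.0, size=3000)])
print(sample_skewness(null_ratios), sample_skewness(altered))
```

In the skewed case a median- or mean-based normalization would shift the unaffected majority off zero, which is the bias the HMM-assisted re-normalization in step (3) is designed to remove.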
From Ambiguities to Insights: Query-based Comparisons of High-Dimensional Data
Kowalski, Jeanne; Talbot, Conover; Tsai, Hua L.; Prasad, Nijaguna; Umbricht, Christopher; Zeiger, Martha A.
2007-11-01
Genomic technologies will revolutionize drug discovery and development; that much is universally agreed upon. The high dimension of data from such technologies has challenged available data analytic methods; that much is apparent. To date, large-scale data repositories have not been utilized in ways that permit their wealth of information to be efficiently processed for knowledge, presumably due in large part to inadequate analytical tools to address numerous comparisons of high-dimensional data. In candidate gene discovery, expression comparisons are often made between two features (e.g., cancerous versus normal), such that the enumeration of outcomes is manageable. With multiple features, the setting becomes more complex, in terms of comparing expression levels of tens of thousands of transcripts across hundreds of features. In this case, the number of outcomes, while enumerable, becomes rapidly large and unmanageable, and scientific inquiries become more abstract, such as "which one of these (compounds, stimuli, etc.) is not like the others?" We develop analytical tools that promote more extensive, efficient, and rigorous utilization of the public data resources generated by the massive support of genomic studies. Our work innovates by enabling access to such metadata with logically formulated scientific inquiries that define, compare and integrate query-comparison pair relations for analysis. We demonstrate our computational tool's potential to address an outstanding biomedical informatics issue of identifying reliable molecular markers in thyroid cancer. Our proposed query-based comparison (QBC) facilitates access to and efficient utilization of metadata through logically formed inquiries, expressed as query-based comparisons, by organizing and comparing results from biotechnologies to address applications in biomedicine.
Atafu, Asmamaw; Kwon, Soonman
2018-05-20
Since 2010, the Ethiopian government has introduced different measures to implement community-based health insurance (CBHI) schemes to improve access to health services and reduce the catastrophic effect of health care costs. The aim of this study was to examine the determinants of enrollment in CBHI in Northwest Ethiopia. In this study, we utilized a mix of quantitative (multivariate logistic regression applied to a population survey linked with a health facility survey) and qualitative (focus group discussion and in-depth interview) methods to better understand the factors that affect CBHI enrollment. The study revealed important factors at the household, informal-association, and health-facility levels that act as barriers to CBHI enrollment. Age and educational status, self-rated health status, perceived quality of health services, and knowledge and information (awareness) about CBHI were among the characteristics of the individual household head affecting enrollment. Household size and participation in an informal association, such as a local credit association, were also positively associated with CBHI enrollment. Additionally, health facility factors, such as the unavailability of laboratory tests, were among the main factors hindering CBHI enrollment. This study showed a possibility of adverse selection in CBHI enrollment. Additionally, perceived quality of health services and knowledge and information (awareness) are positively associated with CBHI enrollment. Therefore, policy interventions to mitigate adverse selection, as well as provision of social marketing activities, are crucial to increase enrollment in CBHI. Furthermore, policy interventions that enhance the capacity of health facilities and schemes to provide the promised services are necessary. Copyright © 2018 John Wiley & Sons, Ltd.
Directory of Open Access Journals (Sweden)
C. Varrone
2015-01-01
The objective of this study was the selection and adaptation of mixed microbial cultures (MMCs), able to ferment crude glycerol generated from animal fat-based biodiesel and produce building-blocks and green chemicals. Various adaptation strategies have been investigated for the enrichment of suitable and stable MMCs, trying to overcome inhibition problems and enhance substrate degradation efficiency, as well as generation of soluble fermentation products. Repeated transfers in small batches and fed-batch conditions have been applied, comparing the use of different inocula, growth media, and kinetic control. The adaptation of the activated sludge inoculum was performed successfully and continued unhindered for several months. The best results showed a substrate degradation efficiency of almost 100% (about 10 g/L glycerol in 21 h), and different dominant metabolic products were obtained, depending on the selection strategy (mainly 1,3-propanediol, ethanol, or butyrate). On the other hand, anaerobic sludge exhibited inactivation after a few transfers. To circumvent this problem, fed-batch mode was used as an alternative adaptation strategy, which led to effective substrate degradation and high 1,3-propanediol and butyrate production. Changes in microbial composition were monitored by means of Next Generation Sequencing, revealing a dominance of glycerol-consuming species, such as Clostridium, Klebsiella, and Escherichia.
Varrone, C; Heggeset, T M B; Le, S B; Haugen, T; Markussen, S; Skiadas, I V; Gavala, H N
2015-01-01
Objective of this study was the selection and adaptation of mixed microbial cultures (MMCs), able to ferment crude glycerol generated from animal fat-based biodiesel and produce building-blocks and green chemicals. Various adaptation strategies have been investigated for the enrichment of suitable and stable MMC, trying to overcome inhibition problems and enhance substrate degradation efficiency, as well as generation of soluble fermentation products. Repeated transfers in small batches and fed-batch conditions have been applied, comparing the use of different inoculum, growth media, and Kinetic Control. The adaptation of activated sludge inoculum was performed successfully and continued unhindered for several months. The best results showed a substrate degradation efficiency of almost 100% (about 10 g/L glycerol in 21 h) and different dominant metabolic products were obtained, depending on the selection strategy (mainly 1,3-propanediol, ethanol, or butyrate). On the other hand, anaerobic sludge exhibited inactivation after a few transfers. To circumvent this problem, fed-batch mode was used as an alternative adaptation strategy, which led to effective substrate degradation and high 1,3-propanediol and butyrate production. Changes in microbial composition were monitored by means of Next Generation Sequencing, revealing a dominance of glycerol consuming species, such as Clostridium, Klebsiella, and Escherichia.
Integrating high dimensional bi-directional parsing models for gene mention tagging.
Hsu, Chun-Nan; Chang, Yu-Ming; Kuo, Cheng-Ju; Lin, Yu-Shi; Huang, Han-Shen; Chung, I-Fang
2008-07-01
Tagging gene and gene product mentions in scientific text is an important initial step of literature mining. In this article, we describe in detail our gene mention tagger, which participated in the BioCreative 2 challenge, and analyze what contributes to its good performance. Our tagger is based on the conditional random fields (CRF) model, the most prevalent method for the gene mention tagging task in BioCreative 2. Our tagger is interesting because it accomplished the highest F-scores among CRF-based methods and second overall. Moreover, we obtained our results mostly by applying open source packages, making it easy to duplicate our results. We first describe in detail how we developed our CRF-based tagger. We designed a very high dimensional feature set that includes most of the information that may be relevant. We trained bi-directional CRF models with the same set of features, one applying forward parsing and the other backward, and integrated the two models based on the output scores and dictionary filtering. One of the most prominent factors contributing to the good performance of our tagger is the integration of the additional backward parsing model. However, from the definition of CRF, it appears that a CRF model is symmetric and that bi-directional parsing models should produce the same results. We show that due to different feature settings, a CRF model can be asymmetric, and that the feature setting for our tagger in BioCreative 2 not only produces different results but also gives backward parsing models a slight but consistent advantage over the forward parsing model. To fully explore the potential of integrating bi-directional parsing models, we applied different asymmetric feature settings to generate many bi-directional parsing models and integrated them based on the output scores. Experimental results show that this integrated model can achieve an even higher F-score based solely on the training corpus for gene mention tagging. Data sets, programs and an on-line service of our gene mention tagger are available.
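Score-based integration of a forward and a backward model can be sketched as follows. This is a toy scheme (averaged confidences with a threshold), not the authors' exact combination rule, and the dictionary-filtering step is omitted; all mentions and scores are hypothetical:

```python
def integrate(score_fwd, score_bwd, tau=0.5):
    """Keep a candidate gene mention if the mean of the two models'
    confidence scores reaches the threshold tau."""
    candidates = set(score_fwd) | set(score_bwd)
    kept = []
    for span in candidates:
        mean = (score_fwd.get(span, 0.0) + score_bwd.get(span, 0.0)) / 2.0
        if mean >= tau:
            kept.append(span)
    return sorted(kept)

# hypothetical candidate mentions with per-model confidences
fwd = {"p53": 0.9, "BRCA1": 0.6, "actin": 0.3}
bwd = {"p53": 0.8, "BRCA1": 0.5, "Ras": 0.9}
print(integrate(fwd, bwd))
```

Because the forward and backward models make partly uncorrelated errors (the asymmetry discussed in the abstract), combining their scores filters out candidates that only one direction weakly supports.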
Greedy algorithms for high-dimensional non-symmetric linear problems
Directory of Open Access Journals (Sweden)
Cancès E.
2013-12-01
Full Text Available In this article, we present a family of numerical approaches for solving high-dimensional linear non-symmetric problems. The principle of these methods is to approximate a function that depends on a large number of variables by a sum of tensor-product functions, each term of which is computed iteratively via a greedy algorithm. There exists a good theoretical framework for these methods in the case of (linear and nonlinear) symmetric elliptic problems; however, the convergence results are no longer valid as soon as the problems under consideration are not symmetric. We present here a review of the main algorithms proposed in the literature to circumvent this difficulty, together with some new approaches. The theoretical convergence results and the practical implementation of these algorithms are discussed, and their behavior is illustrated through numerical examples.
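A minimal pure-Python sketch of the greedy principle in the simplest (two-variable, symmetric) case: each pass extracts one tensor-product (rank-one) term from the current residual by alternating power-type updates, then subtracts it. The function names and the matrix setting are our simplification; the algorithms reviewed in the article target genuinely high-dimensional, non-symmetric operator problems:

```python
def rank_one_term(M, iters=50):
    """Alternating updates approximating the dominant rank-one term u v^T of M."""
    m, n = len(M), len(M[0])
    v = [1.0] * n
    u = [0.0] * m
    for _ in range(iters):
        u = [sum(M[i][j] * v[j] for j in range(n)) for i in range(m)]
        norm = sum(x * x for x in u) ** 0.5 or 1.0
        u = [x / norm for x in u]
        v = [sum(M[i][j] * u[i] for i in range(m)) for j in range(n)]
    return u, v

def greedy_approx(M, terms=2):
    """Greedily accumulate rank-one terms, updating the residual each pass."""
    m, n = len(M), len(M[0])
    residual = [row[:] for row in M]
    approx = [[0.0] * n for _ in range(m)]
    for _ in range(terms):
        u, v = rank_one_term(residual)
        for i in range(m):
            for j in range(n):
                approx[i][j] += u[i] * v[j]
                residual[i][j] -= u[i] * v[j]
    return approx
```

For a rank-one input, a single greedy pass already recovers the matrix exactly; for higher rank, additional terms reduce the residual step by step.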
Energy Technology Data Exchange (ETDEWEB)
Rachuri, Yadagiri; Bisht, Kamal Kumar [Analytical Discipline and Centralized Instrument Facility, CSIR–Central Salt and Marine Chemicals Research Institute, Council of Scientific and Industrial Research, G. B. Marg, Bhavnagar 364002, Gujarat (India); Academy of Scientific and Innovative Research (AcSIR), CSIR–Central Salt and Marine Chemicals Research Institute, Council of Scientific and Industrial Research, G. B. Marg, Bhavnagar 364002, Gujarat (India); Parmar, Bhavesh [Analytical Discipline and Centralized Instrument Facility, CSIR–Central Salt and Marine Chemicals Research Institute, Council of Scientific and Industrial Research, G. B. Marg, Bhavnagar 364002, Gujarat (India); Suresh, Eringathodi, E-mail: esuresh@csmcri.org [Analytical Discipline and Centralized Instrument Facility, CSIR–Central Salt and Marine Chemicals Research Institute, Council of Scientific and Industrial Research, G. B. Marg, Bhavnagar 364002, Gujarat (India); Academy of Scientific and Innovative Research (AcSIR), CSIR–Central Salt and Marine Chemicals Research Institute, Council of Scientific and Industrial Research, G. B. Marg, Bhavnagar 364002, Gujarat (India)
2015-03-15
Two CPs ([Cd{sub 3}(BTC){sub 2}(TIB){sub 2}(H{sub 2}O){sub 4}].(H{sub 2}O){sub 2}){sub n} (1) and ([Zn{sub 3}(BTC){sub 2}(TIB){sub 2}].(H{sub 2}O){sub 6}){sub n} (2), composed of the tripodal linkers BTC (1,3,5-benzenetricarboxylate) and TIB (1,3,5-tris(imidazol-1-ylmethyl)benzene), were synthesized via a solvothermal route and structurally characterized. Single-crystal structural analysis reveals that 1 possesses a novel 3D framework structure, whereas 2 represents a previously established compound. Owing to the d{sup 10} configuration of the metal nodes and the robust 3D frameworks, 1 and 2 exhibit excellent fluorescence properties, which have been exploited to sense organic nitro compounds in the vapor phase. Compound 1 demonstrates selective sensing of nitromethane over structurally similar methanol, with ca. 70% and 43% fluorescence quenching in the former and latter cases, respectively. Similarly, 58% fluorescence quenching was observed for nitrobenzene, versus 30% for the structurally similar toluene. Compound 2 did not show any preference for nitro compounds and exhibited comparable fluorescence quenching when exposed to the vapors of nitro or other geometrically similar organic molecules. Furthermore, adsorption experiments revealed that 1 and 2 can take up 2.74 and 14.14 wt% molecular iodine, respectively, in the vapor phase, which can be released in organic solvents such as hexane and acetonitrile. The maximal iodine uptake for 1 and 2 corresponds to 0.15 and 0.80 molecules of iodine per formula unit of the respective frameworks. A comprehensive structural description, thermal stability and luminescence behavior for both CPs have also been presented. - Graphical abstract: Two 3D luminescent CPs comprising mixed tripodal ligands have been hydrothermally synthesized and structurally characterized. The iodine encapsulation capacity of the synthesized CPs is evaluated, and their fluorescence quenching in the presence of small organic molecules is exploited for sensing of nitro
Energy Technology Data Exchange (ETDEWEB)
Kaur, Kamaljot; Chaudhary, Savita; Singh, Sukhjinder; Mehta, S.K., E-mail: skmehta@pu.ac.in
2015-04-15
A simple salicylaldehyde-derived Schiff base, N,N′-bis(p-chlorosalicylidene)-1,2-ethylenediamine (L), was synthesized and characterized. The receptor demonstrates simultaneous dual-channel chromogenic and fluorogenic signaling towards HSO{sub 3}{sup −} and Zn{sup 2+} in mixed aqueous media. Solvatochromism was employed systematically to modulate its optoelectronic properties. The probe was successfully assessed for monitoring HSO{sub 3}{sup −} detection via UV–vis spectroscopy. DFT calculations and {sup 1}H NMR spectroscopy further support the results based on shifting of the equilibrium. Moreover, the sensor showed large fluorescence enhancement with a blue-shift of 48 nm after addition of Zn{sup 2+}. The probe exhibits high selectivity over other competitive ions, with detection limits of 6.54×10{sup −5} M and 3.21×10{sup −6} M for HSO{sub 3}{sup −} and Zn{sup 2+}, respectively. Importantly, this is one of the rare reports in which a Schiff base was utilized for the fabrication of a chromogenic or fluorogenic sensor using the solvent effect for multianalyte detection. - Highlights: • Easy synthesis of a highly selective and sensitive salicylideneaniline moiety. • Solvatochromism-induced tautomerism between the enol-imine and keto-amine forms. • Computational studies revealing the effect of solvent on the stability of the NH form. • Discriminative detection of HSO{sub 3}{sup −} and Zn{sup 2+} by different spectroscopic techniques. • Optical feedback as absorption transitions with HSO{sub 3}{sup −} upon bisulphite adduct formation. • Fluorescence enhancement for Zn{sup 2+} based on an imine binding mechanism.
Shaffer, Patrick; Valsson, Omar; Parrinello, Michele
2016-02-02
The capabilities of molecular simulations have been greatly extended by a number of widely used enhanced sampling methods that facilitate escaping from metastable states and crossing large barriers. Despite these developments there are still many problems which remain out of reach for these methods which has led to a vigorous effort in this area. One of the most important problems that remains unsolved is sampling high-dimensional free-energy landscapes and systems that are not easily described by a small number of collective variables. In this work we demonstrate a new way to compute free-energy landscapes of high dimensionality based on the previously introduced variationally enhanced sampling, and we apply it to the miniprotein chignolin.
Data-driven forecasting of high-dimensional chaotic systems with long short-term memory networks.
Vlachas, Pantelis R; Byeon, Wonmin; Wan, Zhong Y; Sapsis, Themistoklis P; Koumoutsakos, Petros
2018-05-01
We introduce a data-driven forecasting method for high-dimensional chaotic systems using long short-term memory (LSTM) recurrent neural networks. The proposed LSTM neural networks perform inference of high-dimensional dynamical systems in their reduced order space and are shown to be an effective set of nonlinear approximators of their attractor. We demonstrate the forecasting performance of the LSTM and compare it with Gaussian processes (GPs) in time series obtained from the Lorenz 96 system, the Kuramoto-Sivashinsky equation and a prototype climate model. The LSTM networks outperform the GPs in short-term forecasting accuracy in all applications considered. A hybrid architecture, extending the LSTM with a mean stochastic model (MSM-LSTM), is proposed to ensure convergence to the invariant measure. This novel hybrid method is fully data-driven and extends the forecasting capabilities of LSTM networks.
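For readers who want to reproduce the benchmark setting, the Lorenz 96 system dX_i/dt = (X_{i+1} − X_{i−2}) X_{i−1} − X_i + F (with F = 8 a common chaotic choice) is straightforward to integrate; a pure-Python RK4 sketch follows, with the LSTM itself out of scope here:

```python
def lorenz96_rhs(x, F=8.0):
    """Right-hand side of Lorenz 96 with cyclic indexing."""
    n = len(x)
    return [(x[(i + 1) % n] - x[(i - 2) % n]) * x[(i - 1) % n] - x[i] + F
            for i in range(n)]

def rk4_step(x, dt, F=8.0):
    """One classical fourth-order Runge-Kutta step."""
    def add(a, b, s):
        return [ai + s * bi for ai, bi in zip(a, b)]
    k1 = lorenz96_rhs(x, F)
    k2 = lorenz96_rhs(add(x, k1, dt / 2), F)
    k3 = lorenz96_rhs(add(x, k2, dt / 2), F)
    k4 = lorenz96_rhs(add(x, k3, dt), F)
    return [x[i] + dt / 6 * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i])
            for i in range(len(x))]
```

Trajectories generated this way (typically from a slightly perturbed equilibrium, after discarding a transient) provide the kind of training data on which the LSTM and GP forecasters are compared.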
Chen, Apeng; Lu, Joann J; Gu, Congying; Zhang, Min; Lynch, Kyle B; Liu, Shaorong
2015-08-05
Toward developing a micro HPLC cartridge, we have recently built a high-pressure electroosmotic pump (EOP). However, we do not recommend using this pump to deliver organic solvents directly, because doing so often makes the pump rate unstable. We have experimented with several approaches to address this issue, but none was satisfactory. Here, we develop an innovative approach to address this issue. We first create an abruption (a dead volume) within a fluid conduit. We then utilize an EOP to withdraw, via a selection valve, a train of eluent solutions of decreasing eluting power into the fluid conduit. When these solutions are further aspirated through the dead volume, they are partially mixed, smoothing the concentration transitions between adjacent eluent solutions. As these solutions are pushed back through the dead volume again, a smooth gradient profile is formed. In this work, we characterize this scheme for gradient formation, and we combine this approach with a high-pressure EOP, a nanoliter injection valve, and a capillary column, yielding a micro HPLC system. We then couple this micro HPLC with an electrospray ionization mass spectrometer for peptide and protein separations and identifications. Copyright © 2015 Elsevier B.V. All rights reserved.
Starzynska-Janiszewska, A; Stodolak, B; Dulinski, R; Mickowska, B
2012-04-01
Tempeh is a popular Indonesian product obtained from legume seeds by solid-state fermentation with Rhizopus sp. The aim of this research was to study the effect of simultaneous mixed-culture fermentation of grass pea seeds on selected parameters of the products, as compared to traditional tempeh. The inoculum contained different ratios of Rhizopus oligosporus and Aspergillus oryzae spores. The simultaneous fermentation of grass pea seeds with an inoculum consisting of 1.2 × 10(6) R. oligosporus and 0.6 × 10(6) A. oryzae spores (per 60 g of seeds) resulted in a product of improved quality, as compared with traditionally made tempeh (obtained after inoculation with 1.2 × 10(6) R. oligosporus spores), at the same fermentation time. This product had a radical scavenging ability 70% higher than the one obtained with a pure R. oligosporus culture and contained 2.23 g/kg dm of soluble phenols. The thiamin and riboflavin levels were more than threefold (340 µg/g dm) and twofold (50.50 µg/g dm) higher, respectively, than in traditionally made tempeh. The product had 65% in vitro bioavailability of proteins and 33% in vitro bioavailability of sugars. It also contained 40% less 3-N-oxalyl-L-2,3-diaminopropionic acid (0.074 g/kg dm), as compared to traditional tempeh.
International Nuclear Information System (INIS)
Oganesian, A.G.
1998-01-01
A method is proposed for estimating unknown vacuum expectation values of high-dimensional operators. The method is based on the idea that the factorization hypothesis is self-consistent. Results are obtained for all vacuum expectation values of dimension-7 operators, and some estimates for dimension-10 operators are presented as well. The resulting values are used to compute corrections of higher dimensions to the Bjorken and Ellis-Jaffe sum rules
Nam, Julia EunJu; Mueller, Klaus
2013-02-01
Gaining a true appreciation of high-dimensional space remains difficult since all of the existing high-dimensional space exploration techniques serialize the space travel in some way. This is not so foreign to us since we, when traveling, also experience the world in a serial fashion. But we typically have access to a map to help with positioning, orientation, navigation, and trip planning. Here, we propose a multivariate data exploration tool that compares high-dimensional space navigation with a sightseeing trip. It decomposes this activity into five major tasks: 1) Identify the sights: use a map to identify the sights of interest and their location; 2) Plan the trip: connect the sights of interest along a specifiable path; 3) Go on the trip: travel along the route; 4) Hop off the bus: experience the location, look around, zoom into detail; and 5) Orient and localize: regain bearings in the map. We describe intuitive and interactive tools for all of these tasks, both global navigation within the map and local exploration of the data distributions. For the latter, we describe a polygonal touchpad interface which enables users to smoothly tilt the projection plane in high-dimensional space to produce multivariate scatterplots that best convey the data relationships under investigation. Motion parallax and illustrative motion trails aid in the perception of these transient patterns. We describe the use of our system within two applications: 1) the exploratory discovery of data configurations that best fit a personal preference in the presence of tradeoffs and 2) interactive cluster analysis via cluster sculpting in N-D.
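At its core, each frame of such a multivariate scatterplot is an orthogonal projection of the N-D points onto a plane spanned by two orthonormal direction vectors; tilting the plane amounts to rotating those vectors. A minimal sketch (names are ours, not the system's API):

```python
def project_to_plane(points, u, v):
    """Project N-D points onto the plane spanned by orthonormal
    directions u and v, yielding 2-D scatterplot coordinates."""
    def dot(a, b):
        return sum(ai * bi for ai, bi in zip(a, b))
    return [(dot(p, u), dot(p, v)) for p in points]
```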
Directory of Open Access Journals (Sweden)
T E Nottidge
2017-03-01
Full Text Available Background. Self-directed learning (SDL) is the essential mechanism of lifelong learning, which, in turn, is required for medical professionals to maintain competency in the face of advancing technology and constantly evolving disease care and contexts. Yet, most Nigerian medical schools do not actively promote SDL skills for medical students. Objective. To evaluate the status of SDL behaviour among final-year students, and the perceptions of faculty leadership towards SDL in a Nigerian medical school. Methods. A mixed research method was used, with a survey consisting of a validated Likert-based self-rating scale for SDL (SRSSDL) to assess students' SDL behaviour. Focus group discussions with selected faculty leaders were thematically analysed to assess their perceptions of SDL. Results. The medical students reported moderate SDL behaviour, contrary to faculty, who considered their students' SDL behaviour to be low. Faculty leadership further defined SDL as the self-motivated student demonstrating initiative in learning under the guidance of teachers, who use interactive forums for teaching. Furthermore, teachers and students should partner towards the goal of ensuring that student learning takes place. Teachers expressed concerns about SDL methods in medical schools owing to the fear that these will require medical students to teach themselves medicine without expert guidance from teachers. Conclusion. This study suggests that final-year students have a low to moderate level of SDL behaviour. The index faculty are willing to develop teacher-guided self-motivated learning for their students, rather than strict SDL. Faculty should be concerned about this behaviour and should encourage SDL in such a way that students realise its benefits and become lifelong learners. Further study of the perceptions about self-regulated learning is recommended.
High Dimensional Spectral Graph Theory and Non-backtracking Random Walks on Graphs
Kempton, Mark
This thesis has two primary areas of focus. First we study connection graphs, which are weighted graphs in which each edge is associated with a d-dimensional rotation matrix for some fixed dimension d, in addition to a scalar weight. Second, we study non-backtracking random walks on graphs, which are random walks with the additional constraint that they cannot return to the immediately previous state at any given step. Our work in connection graphs is centered on the notion of consistency, that is, the product of rotations moving from one vertex to another is independent of the path taken, and a generalization called epsilon-consistency. We present higher dimensional versions of the combinatorial Laplacian matrix and normalized Laplacian matrix from spectral graph theory, and give results characterizing the consistency of a connection graph in terms of the spectra of these matrices. We generalize several tools from classical spectral graph theory, such as PageRank and effective resistance, to apply to connection graphs. We use these tools to give algorithms for sparsification, clustering, and noise reduction on connection graphs. In non-backtracking random walks, we address the question raised by Alon et al. concerning how the mixing rate of a non-backtracking random walk to its stationary distribution compares to the mixing rate of an ordinary random walk. Alon et al. address this question for regular graphs. We take a different approach, and use a generalization of Ihara's Theorem to give a new proof of Alon's result for regular graphs, and to extend the result to biregular graphs. Finally, we give a non-backtracking version of Polya's Random Walk Theorem for 2-dimensional grids.
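The defining constraint of a non-backtracking random walk is easy to state in code. A pure-Python sketch follows; the dead-end fallback is our addition and never triggers on graphs with minimum degree 2:

```python
import random

def non_backtracking_walk(adj, start, steps, seed=0):
    """Random walk on adjacency dict `adj` that never immediately
    returns to the previous vertex."""
    rng = random.Random(seed)
    prev, cur = None, start
    path = [start]
    for _ in range(steps):
        choices = [v for v in adj[cur] if v != prev]
        if not choices:            # dead end: backtracking is forced
            choices = adj[cur]
        prev, cur = cur, rng.choice(choices)
        path.append(cur)
    return path
```

On a cycle, for instance, the walk is forced to keep circling in one direction, which already hints at why non-backtracking walks can mix faster than ordinary walks.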
Turner, Melanie; Barber, Mark; Dodds, Hazel; Dennis, Martin; Langhorne, Peter; Macleod, Mary Joan
2015-03-01
Randomised trials indicate that stroke unit care reduces morbidity and mortality after stroke. Similar results have been seen in observational studies, but many have not corrected for selection bias or independent predictors of outcome. We evaluated the effect of stroke unit care compared with general ward care on outcomes after stroke in Scotland, adjusting for case mix by incorporating the six simple variables (SSV) model, and also taking into account selection bias and stroke subtype. We used routine data from national Scottish datasets for acute stroke patients admitted between 2005 and 2011. Patients who died within 3 days of admission were excluded from analysis. The main outcome measures were survival and discharge home. Multivariable logistic regression was used to estimate the OR for survival, with adjustment for the effect of the SSV model and for early mortality. A Cox proportional hazards model was used to estimate the hazard of death within 365 days. There were 41 692 index stroke events; 79% were admitted to a stroke unit at some point during their hospital stay and 21% were cared for in a general ward. Using the SSV model, we obtained an area under the receiver operating characteristic curve of 0.82 (SE 0.002) for mortality at 6 months. The adjusted OR for survival at 7 days was 3.11 (95% CI 2.71 to 3.56) and at 1 year 1.43 (95% CI 1.34 to 1.54), while the adjusted OR for being discharged home was 1.19 (95% CI 1.11 to 1.28) for stroke unit care. In routine practice, stroke unit admission is associated with a greater likelihood of discharge home and with lower mortality up to 1 year, after correcting for known independent predictors of outcome and excluding early non-modifiable mortality. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.
Hultqvist, Adam; Aitola, Kerttu; Sveinbjörnsson, Kári; Saki, Zahra; Larsson, Fredrik; Törndahl, Tobias; Johansson, Erik; Boschloo, Gerrit; Edoff, Marika
2017-09-06
The compatibility of atomic layer deposition directly onto films of the mixed halide perovskite formamidinium lead iodide:methylammonium lead bromide, (CH(NH{sub 2}){sub 2}, CH{sub 3}NH{sub 3})Pb(I,Br){sub 3} (FAPbI{sub 3}:MAPbBr{sub 3}), is investigated by exposing the perovskite films to the full or partial atomic layer deposition (ALD) processes for the electron-selective-layer candidates ZnO and SnO{sub x}. Exposing the samples to the heat, the vacuum, and even the H{sub 2}O counter-reactant of the ALD processes does not appear to alter the perovskite films in terms of crystallinity, but the choice of metal precursor is found to be critical. The Zn precursor Zn(C{sub 2}H{sub 5}){sub 2}, either by itself or in combination with H{sub 2}O during the ZnO ALD process, is found to enhance the decomposition of the bulk of the perovskite film into PbI{sub 2} without even forming ZnO. In contrast, the Sn precursor Sn(N(CH{sub 3}){sub 2}){sub 4} does not seem to degrade the bulk of the perovskite film, and conformal SnO{sub x} films can successfully be grown on top of it using ALD. Using this SnO{sub x} film as the electron selective layer in inverted perovskite solar cells results in a power conversion efficiency of 3.4%, lower than the 8.4% of reference devices using phenyl-C{sub 70}-butyric acid methyl ester. However, the devices with SnO{sub x} show strong hysteresis and can be pushed to an efficiency of 7.8% after biasing treatments. Still, these cells lack both open-circuit voltage and fill factor compared to the references, especially when thicker SnO{sub x} films are used. Upon further investigation, a possible cause of these losses could be that the perovskite/SnO{sub x} interface is not ideal; more specifically, it is found to be rich in Sn, O, and halides, probably as a result of nucleation during the SnO{sub x} growth, which might introduce barriers or alter the band alignment for the transport of charge carriers.
Integrative Modeling and Inference in High Dimensional Genomic and Metabolic Data
DEFF Research Database (Denmark)
Brink-Jensen, Kasper
in Manuscript I preserves the attributes of the compounds found in LC–MS samples while identifying genes highly associated with these. The main obstacles that must be overcome with this approach are dimension reduction and variable selection, here done with PARAFAC and LASSO respectively. One important drawback...... of the LASSO has been the lack of inference, the variables selected could potentially just be the most important from a set of non–important variables. Manuscript II addresses this problem with a permutation based significance test for the variables chosen by the LASSO. Once a set of relevant variables has......, particularly it scales to many lists and it provides an intuitive interpretation of the measure....
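A permutation-based significance test of the kind Manuscript II describes can be sketched generically: permute the response, recompute an association statistic, and report the fraction of permutations matching or beating the observed value. Here absolute correlation stands in for the manuscript's LASSO-based statistic, and all names are illustrative:

```python
import random

def permutation_pvalue(x, y, n_perm=999, seed=0):
    """Permutation p-value for the absolute sample correlation of x and y."""
    rng = random.Random(seed)

    def abs_corr(a, b):
        n = len(a)
        ma, mb = sum(a) / n, sum(b) / n
        cov = sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b))
        sa = sum((ai - ma) ** 2 for ai in a) ** 0.5
        sb = sum((bi - mb) ** 2 for bi in b) ** 0.5
        return abs(cov / (sa * sb))

    observed = abs_corr(x, y)
    hits = 0
    for _ in range(n_perm):
        perm = y[:]
        rng.shuffle(perm)            # break any real association
        if abs_corr(x, perm) >= observed:
            hits += 1
    return (hits + 1) / (n_perm + 1)  # add-one correction keeps p > 0
```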
Directory of Open Access Journals (Sweden)
L.V. Arun Shalin
2016-01-01
Full Text Available Clustering is a process of grouping elements together such that the elements assigned to a cluster are more similar to each other than to the remaining data points. Difficulties when clustering high-dimensional data are ubiquitous and abundant. Previous work using anonymization methods for high-dimensional data spaces failed to address the problem of dimensionality reduction for non-binary databases. In this work we study methods for dimensionality reduction for non-binary databases. Analyzing the behavior of dimensionality reduction for a non-binary database yields performance improvement with the help of tag-based features. An effective multi-clustering anonymization approach called Discrete Component Task Specific Multi-Clustering (DCTSM) is presented for dimensionality reduction on non-binary databases. To start with, we present the analysis of attributes in the non-binary database, and cluster projection identifies the sparseness degree of the dimensions. Additionally, with the quantum distribution on multi-cluster dimensions, a solution for attribute relevancy and redundancy on non-binary data spaces is provided, resulting in performance improvement on the basis of tag-based features. Multi-clustering tag-based feature reduction extracts individual features, which are correspondingly replaced by the equivalent feature clusters (i.e. tag clusters). During training, the DCTSM approach uses multi-clusters instead of individual tag features; during decoding, individual features are replaced by the corresponding multi-clusters. To measure the effectiveness of the method, experiments are conducted on an existing anonymization method for high-dimensional data spaces and compared with the DCTSM approach using the Statlog German Credit Data Set. Improved tag feature extraction and minimum error rate compared to conventional anonymization
Garashchuk, Sophya; Rassolov, Vitaly A
2008-07-14
Semiclassical implementation of the quantum trajectory formalism [J. Chem. Phys. 120, 1181 (2004)] is further developed to give a stable long-time description of zero-point energy in anharmonic systems of high dimensionality. The method is based on a numerically cheap linearized quantum force approach; stabilizing terms compensating for the linearization errors are added into the time-evolution equations for the classical and nonclassical components of the momentum operator. The wave function normalization and energy are rigorously conserved. Numerical tests are performed for model systems of up to 40 degrees of freedom.
Benediktsson, J. A.; Swain, P. H.; Ersoy, O. K.
1993-01-01
Application of neural networks to classification of remote sensing data is discussed. Conventional two-layer backpropagation is found to give good results in classification of remote sensing data but is not efficient in training. A more efficient variant, based on conjugate-gradient optimization, is used for classification of multisource remote sensing and geographic data and very-high-dimensional data. The conjugate-gradient neural networks give excellent performance in classification of multisource data, but do not compare as well with statistical methods in classification of very-high-dimensional data.
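The conjugate-gradient optimization referred to above builds on the classical linear CG iteration; a pure-Python sketch for a symmetric positive-definite system follows (network training uses the nonlinear variant with line searches over a nonquadratic loss, which this does not show):

```python
def conjugate_gradient(A, b, iters=50, tol=1e-12):
    """Solve A x = b for symmetric positive-definite A by conjugate gradients."""
    n = len(b)
    x = [0.0] * n
    r = b[:]                      # residual b - A x for x = 0
    p = r[:]                      # first search direction
    rs = sum(ri * ri for ri in r)
    for _ in range(iters):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rs / sum(p[i] * Ap[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new < tol:
            break
        # new direction is conjugate (A-orthogonal) to the previous ones
        p = [r[i] + (rs_new / rs) * p[i] for i in range(n)]
        rs = rs_new
    return x
```

In exact arithmetic CG converges in at most n steps, which is the source of its training efficiency relative to plain gradient descent.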
Directory of Open Access Journals (Sweden)
Saumyadipta Pyne
Full Text Available In biomedical applications, an experimenter encounters different potential sources of variation in data, such as individual samples, multiple experimental conditions, and multivariate responses of a panel of markers such as from a signaling network. In multiparametric cytometry, which is often used for analyzing patient samples, such issues are critical. While computational methods can identify cell populations in individual samples, without the ability to automatically match them across samples it is difficult to compare and characterize the populations in typical experiments, such as those responding to various stimulations or distinctive of particular patients or time-points, especially when there are many samples. Joint Clustering and Matching (JCM) is a multi-level framework for simultaneous modeling and registration of populations across a cohort. JCM models every population with a robust multivariate probability distribution. Simultaneously, JCM fits a random-effects model to construct an overall batch template, used for registering populations across samples and for classifying new samples. By tackling systems-level variation, JCM supports practical biomedical applications involving large cohorts. Software for fitting the JCM models has been implemented in the R package EMMIX-JCM, available from http://www.maths.uq.edu.au/~gjm/mix_soft/EMMIX-JCM/.
Energy Technology Data Exchange (ETDEWEB)
Xiong, Zhi-bo, E-mail: xzb328@163.com [School of Energy and Power Engineering, University of Shanghai for Science & Technology, Shanghai 200093 (China); Collaborative Innovation Research Institute, University of Shanghai for Science & Technology, Shanghai 200093 (China); Shanghai Power Equipment Research Institute, Shanghai 200240 (China); Liu, Jing; Zhou, Fei; Liu, Dun-yu; Lu, Wei [School of Energy and Power Engineering, University of Shanghai for Science & Technology, Shanghai 200093 (China); Jin, Jing [School of Energy and Power Engineering, University of Shanghai for Science & Technology, Shanghai 200093 (China); Collaborative Innovation Research Institute, University of Shanghai for Science & Technology, Shanghai 200093 (China); Ding, Shi-fa [Shanghai Power Equipment Research Institute, Shanghai 200240 (China)
2017-06-01
Highlights: • Iron-cerium-tungsten mixed oxide catalysts were prepared by three different methods. • The effect of the preparation method on the NH{sub 3}-SCR activity and the surface structure of the catalyst was investigated. • The iron-cerium-tungsten mixed oxide prepared by microwave irradiation-assisted citric acid sol-gel shows the highest NH{sub 3}-SCR activity. - Abstract: A series of magnetic Fe{sub 0.85}Ce{sub 0.10}W{sub 0.05}O{sub z} catalysts were synthesized by three different methods: co-precipitation (Fe{sub 0.85}Ce{sub 0.10}W{sub 0.05}O{sub z}-CP), hydrothermal treatment-assisted citric acid sol-gel (Fe{sub 0.85}Ce{sub 0.10}W{sub 0.05}O{sub z}-HT) and microwave irradiation-assisted citric acid sol-gel (Fe{sub 0.85}Ce{sub 0.10}W{sub 0.05}O{sub z}-MW), and their catalytic activity was evaluated for the selective catalytic reduction of NO with NH{sub 3}. The catalysts were characterized by XRD, N{sub 2} adsorption-desorption, XPS, H{sub 2}-TPR and NH{sub 3}-TPD. Among the tested catalysts, Fe{sub 0.85}Ce{sub 0.10}W{sub 0.05}O{sub z}-MW shows the highest NO{sub x} conversion per gram per unit time, with an NO{sub x} conversion of 60.8% at 350 °C under a high gas hourly space velocity of 1,200,000 ml/(g h). Unlike the Fe{sub 0.85}Ce{sub 0.10}W{sub 0.05}O{sub z}-CP catalyst, a large amount of iron oxide crystallites (γ-Fe{sub 2}O{sub 3} and α-Fe{sub 2}O{sub 3}) is scattered in the Fe{sub 0.85}Ce{sub 0.10}W{sub 0.05}O{sub z} catalysts prepared by hydrothermal treatment or microwave irradiation-assisted citric acid sol-gel, which also show a higher iron atomic concentration on their surface. Fe{sub 0.85}Ce{sub 0.10}W{sub 0.05}O{sub z}-MW shows a higher surface adsorbed-oxygen concentration and better dispersion than the Fe{sub 0.85}Ce{sub 0.10}W{sub 0.05}O{sub z}-HT catalyst. These features are favorable for the high catalytic performance of NO reduction with NH{sub 3} over the Fe{sub 0.85}Ce{sub 0.10}W{sub 0.05}O{sub z}-MW catalyst.
Directory of Open Access Journals (Sweden)
Angela Simeone
2014-09-01
Full Text Available Functional genomics screens using multi-parametric assays are powerful approaches for identifying genes involved in particular cellular processes. However, they suffer from problems such as noise, and often provide little insight into molecular mechanisms. A bottleneck for addressing these issues is the lack of computational methods for the systematic integration of multi-parametric phenotypic datasets with molecular interactions. Here, we present Integrative Multi Profile Analysis of Cellular Traits (IMPACT). The main goal of IMPACT is to identify the most consistent phenotypic profile among interacting genes. This approach utilizes two types of external information: sets of related genes (IMPACT-sets) and network information (IMPACT-modules). Based on the notion that interacting genes are more likely to be involved in similar functions than non-interacting genes, this data is used as a prior to inform the filtering of phenotypic profiles that are similar among interacting genes. IMPACT-sets selects the most frequent profile among a set of related genes. IMPACT-modules identifies sub-networks containing genes with similar phenotype profiles. The statistical significance of these selections is subsequently quantified via permutations of the data. IMPACT (1) handles multiple profiles per gene, (2) rescues genes with weak phenotypes and (3) accounts for multiple biases, e.g. those caused by the network topology. Application to a genome-wide RNAi screen on endocytosis showed that IMPACT improved the recovery of known endocytosis-related genes, decreased off-target effects, and detected consistent phenotypes. Those findings were confirmed by rescreening 468 genes. Additionally, we validated an unexpected influence of the IGF-receptor on EGF-endocytosis. IMPACT facilitates the selection of high-quality phenotypic profiles using different types of independent information, thereby supporting the molecular interpretation of functional screens.
Energy Technology Data Exchange (ETDEWEB)
Zawadzka-Kazimierczuk, Anna; Kozminski, Wiktor [University of Warsaw, Faculty of Chemistry (Poland); Billeter, Martin, E-mail: martin.billeter@chem.gu.se [University of Gothenburg, Biophysics Group, Department of Chemistry and Molecular Biology (Sweden)
2012-09-15
While NMR studies of proteins typically aim at structure, dynamics or interactions, resonance assignment represents in almost all cases the initial step of the analysis. With increasing complexity of the NMR spectra, for example due to a decreasing extent of ordered structure, this task often becomes both difficult and time-consuming, and the recording of high-dimensional data at high resolution may be essential. Random sampling of the evolution time space, combined with sparse multidimensional Fourier transform (SMFT), allows for efficient recording of very high-dimensional spectra (≥4 dimensions) while maintaining high resolution. However, the nature of these data demands automation of the assignment process. Here we present the program TSAR (Tool for SMFT-based Assignment of Resonances), which exploits all advantages of SMFT input. Moreover, its flexibility allows it to process data from any type of experiment that provides sequential connectivities. The algorithm was tested on several protein samples, including a disordered 81-residue fragment of the δ subunit of RNA polymerase from Bacillus subtilis containing various repetitive sequences. For our test examples, TSAR achieves a high percentage of assigned residues without any erroneous assignments.
International Nuclear Information System (INIS)
Grossman, Y.
1997-10-01
In supersymmetric models with nonvanishing Majorana neutrino masses, the sneutrino and antisneutrino mix. The conditions under which this mixing is experimentally observable are studied, and the mass splitting of the sneutrino mass eigenstates and sneutrino oscillation phenomena are analyzed.
Predicting Future High-Cost Schizophrenia Patients Using High-Dimensional Administrative Data
Directory of Open Access Journals (Sweden)
Yajuan Wang
2017-06-01
Background: The burden of serious and persistent mental illness such as schizophrenia is substantial and requires health-care organizations to have adequate risk adjustment models to effectively allocate their resources to managing patients who are at the greatest risk. Currently available models underestimate health-care costs for those with mental or behavioral health conditions. Objectives: The study aimed to develop and evaluate predictive models for identification of future high-cost schizophrenia patients using advanced supervised machine learning methods. Methods: This was a retrospective study using a payer administrative database. The study cohort consisted of 97,862 patients diagnosed with schizophrenia (ICD-9 code 295.*) from January 2009 to June 2014. Training (n = 34,510) and study evaluation (n = 30,077) cohorts were derived based on 12-month observation and prediction windows (PWs). The target was average total cost per patient per month in the PW. Three models (baseline, intermediate, final) were developed to assess the value of different variable categories for cost prediction (demographics, coverage, cost, health-care utilization, antipsychotic medication usage, and clinical conditions). Scalable orthogonal regression, a significant-attribute-selection-in-high-dimensions method, and random forests regression were used to develop the models. The trained models were assessed in the evaluation cohort using the regression R², patient classification accuracy (PCA), and cost accuracy (CA). Model performance was compared to the Centers for Medicare & Medicaid Services Hierarchical Condition Categories (CMS-HCC) model. Results: At the top 10% cost cutoff, the final model achieved 0.23 R², 43% PCA, and 63% CA; in contrast, the CMS-HCC model achieved 0.09 R², 27% PCA, and 45% CA. The final model and the CMS-HCC model identified 33% and 22%, respectively, of total cost at the top 10% cost cutoff. Conclusion: Using advanced feature selection leveraging detailed
Saadulla, Lawand; Reeves, W Brian; Irey, Brittany; Ghahramani, Nasrollah
2012-02-01
To investigate the impacts of availability of pre-mixed solutions and computerized order entry on nephrologists' choice of the initial mode of renal replacement therapy in acute renal failure. We studied 898 patients with acute renal failure in 3 consecutive eras: era 1 (custom-mixed solution; n = 309), era 2 (pre-mixed commercial solution; n = 324), and era 3 (post-computerized order entry; n = 265). The proportion of patients treated with renal replacement therapy and the time from consult to initiation of continuous renal replacement therapy was similar in the 3 eras. Following introduction of the pre-mixed solution, the proportion of patients treated with continuous renal replacement therapy increased (20% vs. 33%; p mixed solution increases the likelihood of initiating continuous renal replacement therapy in acute renal failure, initiating it at a lower creatinine and for older patients, use of continuous veno-venous hemodialysis and higher prescribed continuous renal replacement therapy dose. Computerized order entry implementation is associated with an additional increase in the use of continuous veno-venous hemodialysis, higher total prescribed dialysis dose, and use of CRRT among an increasing number of patients not on mechanical ventilation. The effect of these changes on patient survival is not significant.
He, Ling Yan; Wang, Tie-Jun; Wang, Chuan
2016-07-11
High-dimensional quantum systems provide a higher quantum channel capacity, which exhibits potential applications in quantum information processing. However, high-dimensional universal quantum logic gates are difficult to achieve directly with only high-dimensional interaction between two quantum systems, and a large number of two-dimensional gates are required to build even a small high-dimensional quantum circuit. In this paper, we propose a scheme to implement a general controlled-flip (CF) gate in which a high-dimensional single photon serves as the target qudit and stationary qubits work as the control logic qudit, by employing a three-level Λ-type system coupled with a whispering-gallery-mode microresonator. In our scheme, the required number of interactions between the photon and the solid-state system is greatly reduced compared with the traditional method, which decomposes the high-dimensional Hilbert space into 2-dimensional quantum spaces, and the gate operates on a shorter temporal scale, easing experimental realization. Moreover, we discuss the performance and feasibility of our hybrid CF gate, concluding that it can be easily extended to a 2n-dimensional case and is feasible with current technology.
Directory of Open Access Journals (Sweden)
Ottavia Dipasquale
2015-02-01
High-dimensional independent component analysis (ICA), compared to low-dimensional ICA, allows performing a detailed parcellation of the resting-state networks. The purpose of this study was to give further insight into functional connectivity (FC) in Alzheimer's disease (AD) using high-dimensional ICA. For this reason, we performed both low- and high-dimensional ICA analyses of resting-state fMRI (rfMRI) data of 20 healthy controls and 21 AD patients, focusing on the primarily altered default mode network (DMN) and exploring the sensory motor network (SMN). As expected, results obtained at low dimensionality were in line with previous literature. Moreover, the high-dimensional results allowed us to observe both the presence of within-network disconnections and FC damage confined to some of the resting-state sub-networks. Owing to the higher sensitivity of the high-dimensional ICA analysis, our results suggest that high-dimensional decomposition into sub-networks is very promising for better localizing FC alterations in AD, and that FC damage is not confined to the default mode network.
International Nuclear Information System (INIS)
Liu, W; Sawant, A; Ruan, D
2016-01-01
Purpose: The development of high-dimensional imaging systems (e.g. volumetric MRI, CBCT, photogrammetry systems) in image-guided radiotherapy provides important pathways to the ultimate goal of real-time volumetric/surface motion monitoring. This study aims to develop a prediction method for a high-dimensional state subject to respiratory motion. Compared to conventional approaches based on linear dimension reduction, our method utilizes manifold learning to construct a descriptive feature submanifold, where more efficient and accurate prediction can be performed. Methods: We developed a prediction framework for a high-dimensional state subject to respiratory motion. The proposed method performs dimension reduction in a nonlinear setting to permit more descriptive features than its linear counterparts (e.g., classic PCA). Specifically, kernel PCA is used to construct a proper low-dimensional feature manifold, where low-dimensional prediction is performed. A fixed-point iterative pre-image estimation method is applied subsequently to recover the predicted value in the original state space. We evaluated and compared the proposed method with a PCA-based method on 200 level-set surfaces reconstructed from surface point clouds captured by the VisionRT system. Prediction accuracy was evaluated with respect to root-mean-squared error (RMSE) for both 200 ms and 600 ms lookahead lengths. Results: The proposed method outperformed the PCA-based approach with statistically higher prediction accuracy. In a one-dimensional feature subspace, our method achieved mean prediction accuracies of 0.86 mm and 0.89 mm for the 200 ms and 600 ms lookahead lengths, respectively, compared to 0.95 mm and 1.04 mm from the PCA-based method. Paired t-tests further demonstrated the statistical significance of the superiority of our method, with p-values of 6.33e-3 and 5.78e-5, respectively. Conclusion: The proposed approach benefits from the descriptiveness of a nonlinear manifold and the prediction
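The kernel-PCA dimension-reduction step described in this record can be sketched with NumPy alone (a minimal RBF kernel PCA on training data; the low-dimensional prediction and fixed-point pre-image steps of the paper are not shown, and the `gamma` value and data shapes are illustrative assumptions, not from the study):

```python
import numpy as np

def rbf_kernel_pca(X, n_components, gamma=1.0):
    """Project training data onto the leading components of a centered
    RBF kernel matrix (the nonlinear dimension-reduction step only)."""
    n = X.shape[0]
    sq = np.sum(X ** 2, axis=1)
    K = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2 * X @ X.T))
    # double-center the kernel matrix
    J = np.eye(n) - np.ones((n, n)) / n
    Kc = J @ K @ J
    vals, vecs = np.linalg.eigh(Kc)          # ascending eigenvalues
    idx = np.argsort(vals)[::-1][:n_components]
    # normalize eigenvectors so projections are sqrt(lambda) * v
    alphas = vecs[:, idx] / np.sqrt(np.maximum(vals[idx], 1e-12))
    return Kc @ alphas                        # shape (n, n_components)

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 30))               # 100 samples in a 30-D state space
Z = rbf_kernel_pca(X, 1, gamma=0.05)         # one-dimensional feature subspace
print(Z.shape)                               # (100, 1)
```

Prediction would then operate on `Z`, with a separate pre-image step mapping predicted feature-space points back to the original state space.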
High-silica ZSM-5 zeolites were incorporated into poly(dimethylsiloxane) (PDMS) polymers to form mixed-matrix membranes for ethanol removal from water via pervaporation. Membrane formulation and preparation parameters were varied to determine the effect on pervaporation perform...
Energy Technology Data Exchange (ETDEWEB)
Miao, Yan-Gang [Nankai University, School of Physics, Tianjin (China); Chinese Academy of Sciences, State Key Laboratory of Theoretical Physics, Institute of Theoretical Physics, P.O. Box 2735, Beijing (China); CERN, PH-TH Division, Geneva 23 (Switzerland); Xu, Zhen-Ming [Nankai University, School of Physics, Tianjin (China)
2016-04-15
Considering non-Gaussian smeared matter distributions, we investigate the thermodynamic behaviors of the noncommutative high-dimensional Schwarzschild-Tangherlini anti-de Sitter black hole, and we obtain the condition for the existence of extreme black holes. We indicate that the Gaussian smeared matter distribution, which is a special case of non-Gaussian smeared matter distributions, is not applicable for the six- and higher-dimensional black holes due to the hoop conjecture. In particular, the phase transition is analyzed in detail. Moreover, we point out that the Maxwell equal area law holds for the noncommutative black hole whose Hawking temperature is within a specific range, but fails for one whose Hawking temperature is beyond this range.
Miao, Yan-Gang
2016-01-01
Considering non-Gaussian smeared matter distributions, we investigate the thermodynamic behaviors of the noncommutative high-dimensional Schwarzschild-Tangherlini anti-de Sitter black hole, and obtain the condition for the existence of extreme black holes. We indicate that the Gaussian smeared matter distribution, which is a special case of non-Gaussian smeared matter distributions, is not applicable for the 6- and higher-dimensional black holes due to the hoop conjecture. In particular, the phase transition is analyzed in detail. Moreover, we point out that the Maxwell equal area law holds for the noncommutative black hole when the Hawking temperature is within a specific range, but fails when it is beyond this range.
Directory of Open Access Journals (Sweden)
F. C. Cooper
2013-04-01
The fluctuation-dissipation theorem (FDT) has been proposed as a method of calculating the response of the Earth's atmosphere to a forcing. For this problem the high dimensionality of the relevant data sets makes truncation necessary. Here we propose a method of truncation based upon the assumption that the response to a localised forcing is spatially localised, as an alternative to the standard method of choosing a number of the leading empirical orthogonal functions. For systems where this assumption holds, the response to any sufficiently small non-localised forcing may be estimated using a set of truncations that are chosen algorithmically. We test our algorithm using 36- and 72-variable versions of a stochastic Lorenz 95 system of ordinary differential equations. We find that, for long integrations, the bias in the response estimated by the FDT is reduced from ~75% of the true response to ~30%.
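The Lorenz 95 (often written Lorenz 96) test system used in this study is easy to reproduce. Below is a minimal sketch of the deterministic 36-variable version (the paper uses a stochastic variant; the forcing F = 8 and step size here are conventional choices, not taken from the paper):

```python
import numpy as np

def lorenz96_rhs(x, forcing=8.0):
    """Tendency of the Lorenz 96 model: dx_i/dt = (x_{i+1} - x_{i-2}) x_{i-1} - x_i + F."""
    return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + forcing

def rk4_step(x, dt, forcing=8.0):
    """One classical fourth-order Runge-Kutta step."""
    k1 = lorenz96_rhs(x, forcing)
    k2 = lorenz96_rhs(x + 0.5 * dt * k1, forcing)
    k3 = lorenz96_rhs(x + 0.5 * dt * k2, forcing)
    k4 = lorenz96_rhs(x + dt * k3, forcing)
    return x + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

# 36-variable configuration, as in one of the tested versions
x = 8.0 * np.ones(36)
x[0] += 0.01            # small perturbation to break the symmetric fixed point
for _ in range(1000):   # spin up onto the attractor
    x = rk4_step(x, 0.01)
print(x.shape)          # (36,)
```

Long integrations of trajectories like this one provide the statistics from which FDT response estimates are built.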
Directory of Open Access Journals (Sweden)
Ali Dashti
This paper presents an implementation of brute-force exact k-Nearest Neighbor Graph (k-NNG) construction for ultra-large high-dimensional data clouds. The proposed method uses Graphics Processing Units (GPUs) and is scalable with multiple levels of parallelism (between nodes of a cluster, between different GPUs on a single node, and within a GPU). The method is applicable to homogeneous computing clusters with a varying number of nodes and GPUs per node. We achieve a 6-fold speedup in data processing as compared with an optimized method running on a cluster of CPUs, and bring a hitherto impossible k-NNG generation for a dataset of twenty million images with 15k dimensions into the realm of practical possibility.
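The brute-force exact k-NNG computation that this work parallelizes on GPUs can be illustrated on a CPU with NumPy (a single-node sketch with illustrative array sizes; the multi-level GPU parallelism is not reproduced):

```python
import numpy as np

def knn_graph(points, k):
    """Exact k-NN graph by brute force: for each point, the indices of its
    k nearest neighbors (excluding itself) under Euclidean distance."""
    # Pairwise squared distances via ||a-b||^2 = ||a||^2 + ||b||^2 - 2 a.b
    sq = np.sum(points ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * points @ points.T
    np.fill_diagonal(d2, np.inf)      # exclude self-matches
    # argpartition selects the k smallest per row without a full sort
    nbrs = np.argpartition(d2, k, axis=1)[:, :k]
    # then order those k by actual distance
    rows = np.arange(points.shape[0])[:, None]
    order = np.argsort(d2[rows, nbrs], axis=1)
    return np.take_along_axis(nbrs, order, axis=1)

rng = np.random.default_rng(0)
pts = rng.normal(size=(200, 16))
g = knn_graph(pts, 5)
print(g.shape)   # (200, 5)
```

At the twenty-million-point, 15k-dimension scale of the paper, the distance matrix must be tiled into blocks and distributed, which is where the GPU parallelism comes in.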
Wang, Xueyi
2012-02-08
The k-nearest neighbors (k-NN) algorithm is a widely used machine learning method that finds nearest neighbors of a test object in a feature space. We present a new exact k-NN algorithm called kMkNN (k-Means for k-Nearest Neighbors) that uses k-means clustering and the triangle inequality to accelerate the search for nearest neighbors in a high-dimensional space. The kMkNN algorithm has two stages. In the buildup stage, instead of using complex tree structures such as metric trees, kd-trees, or ball-trees, kMkNN uses a simple k-means clustering method to preprocess the training dataset. In the searching stage, given a query object, kMkNN finds nearest training objects starting from the nearest cluster to the query object and uses the triangle inequality to reduce the distance calculations. Experiments show that the performance of kMkNN is surprisingly good compared to the traditional k-NN algorithm and tree-based k-NN algorithms such as kd-trees and ball-trees. On a collection of 20 datasets with up to 10^6 records and 10^4 dimensions, kMkNN shows a 2- to 80-fold reduction in distance calculations and a 2- to 60-fold speedup over the traditional k-NN algorithm for 16 datasets. Furthermore, kMkNN performs significantly better than a kd-tree-based k-NN algorithm for all datasets and better than a ball-tree-based k-NN algorithm for most datasets. The results show that kMkNN is effective for searching nearest neighbors in high-dimensional spaces.
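The two-stage idea behind kMkNN (k-means preprocessing, then a nearest-cluster-first search with triangle-inequality pruning) can be sketched in Python. This is a simplified 1-nearest-neighbor variant with assumed parameters, not the authors' implementation:

```python
import numpy as np

def build(train, n_clusters, iters=20, seed=0):
    """Buildup stage: plain Lloyd k-means over the training points, storing
    each point's distance to its cluster center for later pruning."""
    rng = np.random.default_rng(seed)
    centers = train[rng.choice(len(train), n_clusters, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(train[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(n_clusters):
            if np.any(labels == j):
                centers[j] = train[labels == j].mean(axis=0)
    radii = np.linalg.norm(train - centers[labels], axis=1)
    return centers, labels, radii

def nn_query(q, train, centers, labels, radii):
    """Searching stage: scan clusters nearest-center first; the triangle
    inequality d(q, x) >= |d(q, c) - d(c, x)| prunes whole points."""
    dc = np.linalg.norm(centers - q, axis=1)
    best, best_d = -1, np.inf
    for j in np.argsort(dc):
        idx = np.where(labels == j)[0]
        lb = np.abs(dc[j] - radii[idx])      # lower bound on d(q, x)
        for i in idx[lb < best_d]:
            d = np.linalg.norm(train[i] - q)
            if d < best_d:
                best, best_d = i, d
    return best

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 8))
centers, labels, radii = build(X, 10)
q = rng.normal(size=8)
exact = np.argmin(np.linalg.norm(X - q, axis=1))
print(nn_query(q, X, centers, labels, radii) == exact)  # True
```

The pruning is safe because a point is skipped only when its lower bound already exceeds the current best distance, so the result is always exact.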
MIXED AND MIXING SYSTEMS WORLDWIDE: A PREFACE
Directory of Open Access Journals (Sweden)
Seán Patrick Donlan
2012-09-01
This issue of the Potchefstroom Electronic Law Journal (South Africa) sees the publication of a selection of articles derived from the Third International Congress of the World Society of Mixed Jurisdiction Jurists (WSMJJ). That Congress was held at the Hebrew University of Jerusalem, Israel, in the summer of 2011. It reflected a thriving Society consolidating its core scholarship on classical mixed jurisdictions (Israel, Louisiana, the Philippines, Puerto Rico, Quebec, Scotland, and South Africa) while reaching to new horizons (including Cyprus, Hong Kong and Macau, Malta, Nepal, etc.). This publication reflects in microcosm the complexity of contemporary scholarship on mixed and plural legal systems. This complexity is, of course, well understood by South African jurists, whose system is derived both from the dominant European traditions and from African customary systems, including both those that make up part of the official law of the state and those non-state norms that continue to be important in the daily lives of many South Africans.
Le, Hoa V; Poole, Charles; Brookhart, M Alan; Schoenbach, Victor J; Beach, Kathleen J; Layton, J Bradley; Stürmer, Til
2013-11-19
The High-Dimensional Propensity Score (hd-PS) algorithm can select and adjust for baseline confounders of treatment-outcome associations in pharmacoepidemiologic studies that use healthcare claims data. How hd-PS performance is affected by aggregating medications or medical diagnoses has not been assessed. We evaluated the effects of aggregating medications or diagnoses on hd-PS performance in an empirical example using resampled cohorts with small sample size, rare outcome incidence, or low exposure prevalence. In a cohort study comparing the risk of upper gastrointestinal complications in celecoxib or traditional NSAIDs (diclofenac, ibuprofen) initiators with rheumatoid arthritis and osteoarthritis, we (1) aggregated medications and International Classification of Diseases-9 (ICD-9) diagnoses into hierarchies of the Anatomical Therapeutic Chemical classification (ATC) and the Clinical Classification Software (CCS), respectively, and (2) sampled the full cohort using techniques validated by simulations to create 9,600 samples to compare 16 aggregation scenarios across 50% and 20% samples with varying outcome incidence and exposure prevalence. We applied hd-PS to estimate relative risks (RR) using 5 dimensions, predefined confounders, ≤ 500 hd-PS covariates, and propensity score deciles. For each scenario, we calculated: (1) the geometric mean RR; (2) the difference between the scenario mean ln(RR) and the ln(RR) from published randomized controlled trials (RCT); and (3) the proportional difference in the degree of estimated confounding between that scenario and the base scenario (no aggregation). Compared with the base scenario, aggregations of medications into ATC level 4 alone or in combination with aggregation of diagnoses into CCS level 1 improved the hd-PS confounding adjustment in most scenarios, reducing residual confounding compared with the RCT findings by up to 19%. Aggregation of codes using hierarchical coding systems may improve the performance of
International Nuclear Information System (INIS)
Ross, W.A.; Bierschbach, M.C.; Dukelow, J.S. Jr.
1995-06-01
Six alternatives for the interim storage of remote-handled mixed wastes from the 324 Building on the Hanford Site have been identified and evaluated. The alternatives focus on the interim storage facility and include use of existing facilities in the 200 Area, the construction of new facilities, and the vitrification of the wastes within the 324 Building to remove the majority of the wastes from under RCRA regulations. The six alternatives are summarized in Table S.1, which identifies the primary facilities to be utilized, the anticipated schedule for removal of the wastes, the costs of the transfer from 324 Building to the interim storage facility (including any capital costs), and an initial risk comparison of the alternatives. A recently negotiated Tri-Party Agreement (TPA) change requires the last of the mixed wastes to be removed by May 1999. The ability to use an existing facility reduces the costs since it eliminates the need for new capital construction. The basic regulatory approvals for the storage of mixed wastes are in place for the PUREX facility, but the Form HI permit will need some minor modifications since the 324 Building wastes have some additional characteristic waste codes and the current permit limits storage of wastes to those from the facility itself. Regulatory reviews have indicated that it will be best to use the tunnels to store the wastes. The PUREX alternatives will only provide storage for about 65% of the wastes. This results from the current schedule of the B-Cell Clean Out Project, which projects that dispersible debris will continue to be collected in small quantities until the year 2000. The remaining fraction of the wastes will then be stored in another facility. Central Waste Complex (CWC) is currently proposed for that residual waste storage; however, other options may also be available
Mixed waste integrated program: Logic diagram
International Nuclear Information System (INIS)
Mayberry, J.; Stelle, S.; O'Brien, M.; Rudin, M.; Ferguson, J.; McFee, J.
1994-01-01
The Mixed Waste Integrated Program Logic Diagram was developed to provide technical alternatives for mixed waste projects for the Office of Technology Development's Mixed Waste Integrated Program (MWIP). Technical solutions in the areas of characterization, treatment, and disposal were matched to a select number of US Department of Energy (DOE) treatability groups represented by waste streams found in the Mixed Waste Inventory Report (MWIR).
Energy Technology Data Exchange (ETDEWEB)
Storm, Emma; Weniger, Christoph [GRAPPA, Institute of Physics, University of Amsterdam, Science Park 904, 1090 GL Amsterdam (Netherlands); Calore, Francesca, E-mail: e.m.storm@uva.nl, E-mail: c.weniger@uva.nl, E-mail: francesca.calore@lapth.cnrs.fr [LAPTh, CNRS, 9 Chemin de Bellevue, BP-110, Annecy-le-Vieux, 74941, Annecy Cedex (France)
2017-08-01
We present SkyFACT (Sky Factorization with Adaptive Constrained Templates), a new approach for studying, modeling and decomposing diffuse gamma-ray emission. Like most previous analyses, the approach relies on predictions from cosmic-ray propagation codes like GALPROP and DRAGON. However, in contrast to previous approaches, we account for the fact that models are not perfect and allow for a very large number (≳10^5) of nuisance parameters to parameterize these imperfections. We combine methods of image reconstruction and adaptive spatio-spectral template regression in one coherent hybrid approach. To this end, we use penalized Poisson likelihood regression, with regularization functions that are motivated by the maximum entropy method. We introduce methods to efficiently handle the high dimensionality of the convex optimization problem as well as the associated semi-sparse covariance matrix, using the L-BFGS-B algorithm and Cholesky factorization. We test the method both on synthetic data and on gamma-ray emission from the inner Galaxy, |ℓ| < 90° and |b| < 20°, as observed by the Fermi Large Area Telescope. We finally define a simple reference model that removes most of the residual emission from the inner Galaxy, based on conventional diffuse emission components as well as components for the Fermi bubbles, the Fermi Galactic center excess, and extended sources along the Galactic disk. Variants of this reference model can serve as a basis for future studies of diffuse emission in and outside the Galactic disk.
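The penalized Poisson likelihood regression with L-BFGS-B mentioned above can be illustrated on a toy problem. This is only a stand-in: a single template with per-pixel nuisance parameters and a simple quadratic penalty pulling them toward 1, whereas SkyFACT's regularizers are entropy-motivated and its model is far richer:

```python
import numpy as np
from scipy.optimize import minimize

def fit_poisson_template(counts, template, lam=1.0):
    """Fit per-pixel modulation parameters theta with model mu_i = theta_i * template_i,
    theta_i >= 0, by penalized Poisson maximum likelihood."""
    def neg_log_post(theta):
        mu = theta * template
        # Poisson negative log-likelihood up to a theta-independent constant
        nll = np.sum(mu - counts * np.log(mu + 1e-12))
        # quadratic pull of the nuisance parameters toward 1 (toy regularizer)
        return nll + 0.5 * lam * np.sum((theta - 1.0) ** 2)
    res = minimize(neg_log_post, np.ones_like(template),
                   method="L-BFGS-B",
                   bounds=[(1e-6, None)] * len(template))
    return res.x

rng = np.random.default_rng(2)
template = np.full(50, 20.0)                    # predicted counts per pixel
counts = rng.poisson(template * 1.3)            # data: template off by 30%
theta = fit_poisson_template(counts, template, lam=0.1)
print(theta.mean())                             # roughly 1.3
```

The bound constraints keep the model intensities positive, which is exactly the role L-BFGS-B plays at much larger scale in the paper.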
Du, Jing; Wang, Jian
2015-11-01
Bessel beams carrying orbital angular momentum (OAM) with helical phase fronts exp(ilφ) (l = 0, ±1, ±2, …), where φ is the azimuthal angle and l is the topological number, are mutually orthogonal. This feature of Bessel beams provides a new dimension for coding/decoding data information on the OAM state of light, and the theoretical infinity of the topological number enables high-dimensional structured-light coding/decoding for free-space optical communications. Moreover, Bessel beams are nondiffracting beams with the ability to recover by themselves after obstructions, which is important for free-space optical communications relying on line-of-sight operation. By utilizing the OAM and nondiffracting characteristics of Bessel beams, we experimentally demonstrate obstruction-free optical m-ary coding/decoding over a 12 m distance using visible Bessel beams in a free-space optical communication system. We also study the bit-error-rate (BER) performance of hexadecimal and 32-ary coding/decoding based on Bessel beams with different topological numbers. After receiving 500 symbols at the receiver side, a zero BER is observed for hexadecimal coding/decoding when the obstruction is placed along the propagation path of the light.
Regis, Rommel G.
2014-02-01
This article develops two new algorithms for constrained expensive black-box optimization that use radial basis function surrogates for the objective and constraint functions. These algorithms are called COBRA and Extended ConstrLMSRBF and, unlike previous surrogate-based approaches, they can be used for high-dimensional problems where all initial points are infeasible. They both follow a two-phase approach where the first phase finds a feasible point while the second phase improves this feasible point. COBRA and Extended ConstrLMSRBF are compared with alternative methods on 20 test problems and on the MOPTA08 benchmark automotive problem (D.R. Jones, Presented at MOPTA 2008), which has 124 decision variables and 68 black-box inequality constraints. The alternatives include a sequential penalty derivative-free algorithm, a direct search method with kriging surrogates, and two multistart methods. Numerical results show that COBRA algorithms are competitive with Extended ConstrLMSRBF and they generally outperform the alternatives on the MOPTA08 problem and most of the test problems.
Schröder, Markus; Meyer, Hans-Dieter
2017-08-01
We propose a Monte Carlo method, "Monte Carlo Potfit," for transforming high-dimensional potential energy surfaces evaluated on discrete grid points into a sum-of-products form, more precisely into a Tucker form. To this end we use a variational ansatz in which we replace numerically exact integrals with Monte Carlo integrals. This largely reduces the numerical cost by avoiding the evaluation of the potential on all grid points and allows a treatment of surfaces up to 15-18 degrees of freedom. We furthermore show that the error made with this ansatz can be controlled and vanishes in certain limits. We present calculations on the potential of HFCO to demonstrate the features of the algorithm. To demonstrate the power of the method, we transformed a 15D potential of the protonated water dimer (Zundel cation) in a sum-of-products form and calculated the ground and lowest 26 vibrationally excited states of the Zundel cation with the multi-configuration time-dependent Hartree method.
Wu, Shuang; Liu, Zhi-Ping; Qiu, Xing; Wu, Hulin
2014-01-01
The immune response to viral infection is regulated by an intricate network of many genes and their products. Reverse engineering of gene regulatory networks (GRNs) using mathematical models from time-course gene expression data collected after influenza infection is key to our understanding of the mechanisms involved in controlling influenza infection within a host. A five-step pipeline (detection of temporally differentially expressed genes, clustering of genes into co-expressed modules, identification of network structure, parameter estimate refinement, and functional enrichment analysis) is developed for reconstructing high-dimensional dynamic GRNs from genome-wide time-course gene expression data. Applying the pipeline to time-course gene expression data from influenza-infected mouse lungs, we have identified 20 distinct temporal expression patterns in the differentially expressed genes and constructed a module-based dynamic network using a linear ODE model. Both intra-module and inter-module annotations and regulatory relationships of our inferred network show some interesting findings and are highly consistent with existing knowledge about the immune response in mice after influenza infection. The proposed method is a computationally efficient, data-driven pipeline bridging experimental data, mathematical modeling, and statistical analysis. The application to the influenza infection data elucidates the potential of our pipeline in providing valuable insights into systematic modeling of complicated biological processes.
Meng, Xi; Nguyen, Bao D.; Ridge, Clark; Shaka, A. J.
2009-01-01
High-dimensional (HD) NMR spectra have poorer digital resolution than low-dimensional (LD) spectra, for a fixed amount of experiment time. This has led to “reduced-dimensionality” strategies, in which several LD projections of the HD NMR spectrum are acquired, each with higher digital resolution; an approximate HD spectrum is then inferred by some means. We propose a strategy that moves in the opposite direction, by adding more time dimensions to increase the information content of the data set, even if only a very sparse time grid is used in each dimension. The full HD time-domain data can be analyzed by the Filter Diagonalization Method (FDM), yielding very narrow resonances along all of the frequency axes, even those with sparse sampling. Integrating over the added dimensions of HD FDM NMR spectra reconstitutes LD spectra with enhanced resolution, often more quickly than direct acquisition of the LD spectrum with a larger number of grid points in each of the fewer dimensions. If the extra dimensions do not appear in the final spectrum, and are used solely to boost information content, we propose the moniker hidden-dimension NMR. This work shows that HD peaks have unmistakable frequency signatures that can be detected as single HD objects by an appropriate algorithm, even though their patterns would be tricky for a human operator to visualize or recognize, and even if digital resolution in an HD FT spectrum is very coarse compared with natural line widths. PMID:18926747
Cavaglieri, Daniele; Bewley, Thomas
2015-04-01
Implicit/explicit (IMEX) Runge-Kutta (RK) schemes are effective for time-marching ODE systems with both stiff and nonstiff terms on the RHS; such schemes implement an (often A-stable or better) implicit RK scheme for the stiff part of the ODE, which is often linear, and, simultaneously, a (more convenient) explicit RK scheme for the nonstiff part of the ODE, which is often nonlinear. Low-storage RK schemes are especially effective for time-marching high-dimensional ODE discretizations of PDE systems on modern (cache-based) computational hardware, in which memory management is often the most significant computational bottleneck. In this paper, we develop and characterize eight new low-storage implicit/explicit RK schemes which have higher accuracy and better stability properties than the only low-storage implicit/explicit RK scheme available previously, the venerable second-order Crank-Nicolson/Runge-Kutta-Wray (CN/RKW3) algorithm that has dominated the DNS/LES literature for the last 25 years, while requiring similar storage (two, three, or four registers of length N) and comparable floating-point operations per timestep.
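The IMEX splitting described above (implicit on the stiff linear term, explicit on the nonstiff nonlinear term) is shown here in its simplest first-order form, applied to 1D periodic diffusion with a cubic damping term. The paper's schemes are higher-order, low-storage RK methods; this sketch only illustrates the basic pattern of the family, with illustrative grid and coefficient choices:

```python
import numpy as np

def imex_euler_step(u, dt, L, nonlinear):
    """First-order IMEX step for u' = L u + N(u): backward Euler on the stiff
    linear term, forward Euler on the nonstiff nonlinear term.
    Solves (I - dt L) u_new = u + dt N(u)."""
    n = len(u)
    return np.linalg.solve(np.eye(n) - dt * L, u + dt * nonlinear(u))

# Stiff linear part: diffusion on a periodic grid (second-difference stencil)
n, dx, nu = 64, 1.0 / 64, 0.01
L = nu / dx**2 * (np.roll(np.eye(n), 1, axis=0) - 2 * np.eye(n)
                  + np.roll(np.eye(n), -1, axis=0))
u = np.sin(2 * np.pi * np.arange(n) * dx)
for _ in range(100):
    u = imex_euler_step(u, 0.01, L, lambda v: -0.1 * v**3)
print(np.max(np.abs(u)))   # amplitude has decayed below the initial 1.0
```

Treating the diffusion implicitly removes the explicit stability restriction dt ≲ dx²/(2ν), which is the whole point of the IMEX construction; higher-order low-storage variants replace both Euler sub-steps with staged RK updates that reuse a small fixed number of registers.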
DEFF Research Database (Denmark)
Karlsson, Rasmus K. B.; Hansen, Heine Anton; Bligaard, Thomas
2014-01-01
affected by the presence of small amounts of Ru dopant, whereas oxygen adsorption is relatively unaffected by Ti dopants in RuO2. The calculations also indicate that coordinatively unsaturated Ti sites on Ru-doped TiO2 and on Ru0.3Ti0.7O2 could form active and selective sites for Cl2 evolution. These results suggest a reason why DSA shows a higher chlorine selectivity than RuO2 and propose an experimental test of the hypothesis.
Mixing ventilation guide on mixing air distribution design
Kandzia, Claudia; Kosonen, Risto; Melikov, Arsen Krikor; Nielsen, Peter Vilhelm
2013-01-01
In this guidebook, most of the methods for achieving mixing air distribution that are known and used in practice are discussed. Mixing ventilation has been applied to many different spaces to provide fresh air and thermal comfort to the occupants. Today, a design engineer can choose from a large selection of air diffusers and exhaust openings.
Mixing Ventilation. Guide on mixing air distribution design
DEFF Research Database (Denmark)
Kandzia, Claudia; Kosonen, Risto; Melikov, Arsen Krikor
In this guidebook most of the known and used in practice methods for achieving mixing air distribution are discussed. Mixing ventilation has been applied to many different spaces providing fresh air and thermal comfort to the occupants. Today, a design engineer can choose from large selection...
International Nuclear Information System (INIS)
Mironenko, V.R.; Kuritsyn, Yu.A.; Bolshov, M.A.; Liger, V.V.
2017-01-01
Determination of a gas medium temperature by diode laser absorption spectrometry (DLAS) is based on the measurement of the integral intensities of the absorption lines of a test molecule (generally the water vapor molecule). In the case of local thermodynamic equilibrium, the temperature is inferred from the ratio of the integral intensities of two lines with different lower energy levels. For total gas pressures above 1 atm the absorption lines are broadened, and one cannot find isolated, well-resolved water vapor absorption lines within the relatively narrow spectral interval of a fast diode laser (DL) tuning range (about 3 cm⁻¹). For diagnostics of a gas object at high temperature and pressure, the DLAS technique can be realized with two diode lasers working in different spectral regions with strong absorption lines. In this situation the criteria for optimal line selection differ significantly from the case of narrow lines. These criteria are discussed in our work. Software for selecting the optimal spectral regions using the HITRAN-2012 and HITEMP databases was developed. The program selects spectral regions of DL tuning that minimize the error of temperature determination δT/T, based on the attainable experimental error of line intensity measurement δS. Two combinations of optimal spectral regions were selected: (1.392 & 1.343 μm) and (1.392 & 1.339 μm). Different algorithms of experimental data processing are discussed.
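The two-line ratio thermometry underlying this approach can be sketched as follows. The lower-state energies, reference ratio, and the neglect of partition-function and stimulated-emission corrections are illustrative assumptions for the sketch, not values from the paper; real applications take line strengths from HITRAN/HITEMP.

```python
import math

C2 = 1.4388  # second radiation constant hc/k in cm*K

def two_line_temperature(R, R0, E1, E2, T0):
    """Infer temperature from the ratio R = A1/A2 of integrated
    absorbances of two lines with lower-state energies E1, E2 (cm^-1),
    given the ratio R0 measured at a reference temperature T0 (K).
    Assumes the Boltzmann factor dominates the line-strength ratio:
    R(T)/R(T0) = exp(-C2*(E1 - E2)*(1/T - 1/T0))."""
    return 1.0 / (1.0 / T0 - math.log(R / R0) / (C2 * (E1 - E2)))

# Round trip: synthesize the ratio at T = 1500 K and recover T.
E1, E2, T0 = 1045.1, 142.3, 296.0   # hypothetical lower-state energies, cm^-1
R0 = 1.0                             # hypothetical reference ratio at T0
T_true = 1500.0
R = R0 * math.exp(-C2 * (E1 - E2) * (1.0 / T_true - 1.0 / T0))
T_est = two_line_temperature(R, R0, E1, E2, T0)
```

The error sensitivity δT/T that the paper's line-selection software minimizes follows directly from differentiating this relation: the larger the lower-state energy separation E1 − E2, the less a given intensity error δS perturbs the inferred temperature.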
Liu, Qi; Cai, Hui-Ying; Jin, Guang-Ze
2013-10-01
To accurately quantify forest carbon density and net primary productivity (NPP) is of great significance in estimating the role of forest ecosystems in the global carbon cycle. By using forest inventory and allometry approaches, this paper measured the carbon density and NPP of a virgin broadleaved-Korean pine (Pinus koraiensis) forest and of a broadleaved-Korean pine forest 34 years after selective cutting (the cutting intensity was 30%, and the cut trees were in the large diameter class). The total carbon density of the virgin and selectively cut broadleaved-Korean pine forests was (397.95 ± 93.82) and (355.61 ± 59.37) t C·hm⁻², respectively. In the virgin forest, the carbon density of the vegetation, debris, and soil accounted for 31.0%, 3.1%, and 65.9% of the total carbon pool, respectively; in the selectively cut forest, the corresponding values were 31.7%, 2.9%, and 65.4%. No significant differences were observed in the total carbon density or the carbon density of each component between the two forests. The total NPP of the virgin and selectively cut forests was (36.27 ± 0.36) and (6.35 ± 0.70) t C·hm⁻²·a⁻¹, of which the NPP of overstory, understory, and fine roots accounted for 60.3%, 2.0%, and 37.7% in the virgin forest, and 66.1%, 2.0%, and 31.2% in the selectively cut forest, respectively. No significant differences were observed in the total NPP or the contribution rate of each component between the two forests. However, the ratios of needle to broadleaf NPP of the virgin and selectively cut forests were 47.24:52.76 and 20.48:79.52, respectively, a significant difference. The results indicated that the carbon density and NPP of the broadleaved-Korean pine forest recovered to the levels of the virgin broadleaved-Korean pine forest 34 years after selective cutting.
Wang, Yipei; Liu, Xin; Schneider, Brandon; Zverina, Elaina A.; Russ, Kristen; Wijeyesakere, Sanjeeva J.; Fierke, Carol A.; Richardson, Rudy J.; Philbert, Martin A.
2012-01-01
Astrocytes are acutely sensitive to 1,3-dinitrobenzene (1,3-DNB) while adjacent neurons are relatively unaffected, consistent with other chemically-induced energy deprivation syndromes. Previous studies have investigated the role of astrocytes in protecting neurons from hypoxia and chemical injury via adenosine release. Adenosine is considered neuroprotective, but it is rapidly removed by extracellular deaminases such as adenosine deaminase (ADA). The present study tested the hypothesis that ADA is inhibited by 1,3-DNB as a substrate mimic, thereby preventing adenosine catabolism. ADA was inhibited by 1,3-DNB with an IC50 of 284 μM (Hill slope n = 4.8 ± 0.4). Native gel electrophoresis showed that 1,3-DNB did not denature ADA. Furthermore, adding Triton X-100 (0.01–0.05%, wt/vol), Nonidet P-40 (0.0015–0.0036%, wt/vol), or bovine serum albumin (0.05 mg/ml), or changing [ADA] (0.2 and 2 nM), did not substantially alter the 1,3-DNB IC50 value. Likewise, dynamic light scattering showed no particle formation over a [1,3-DNB] range of 149–1043 μM. Kinetics revealed mixed inhibition, with 1,3-DNB binding to ADA (KI = 520 ± 100 μM, n = 1 ± 0.6) and to the ADA-adenosine complex (KIS = 262 ± 7 μM, n = 6 ± 0.6, indicating positive cooperativity). In accord with the kinetics, docking predicted binding of 1,3-DNB to the active site and three peripheral sites. In addition, exposure of DI TNC-1 astrocytes to 10–500 μM 1,3-DNB produced concentration-dependent increases in extracellular adenosine at 24 h. Overall, the results demonstrate that 1,3-DNB is a mixed inhibitor of ADA and may thus lead to increases in extracellular adenosine. The finding may provide insights to guide future work on chemically-induced energy deprivation. PMID:22106038
Puzio, Kinga; Delépée, Raphaël; Vidal, Richard; Agrofoglio, Luigi A
2013-08-06
A novel molecularly imprinted polymer (MIP) for vanillin was prepared by photo-initiated polymerization in dichloromethane using a mixed semi-covalent and non-covalent imprinting strategy. Taking polymerisable syringaldehyde as a "dummy" template, acrylamide was chosen as the functional monomer on the basis of B3LYP/6-31+G(d,p) density functional theory calculations with counterpoise correction. The binding parameters for the recognition of vanillin on the imprinted polymers were studied with three different isotherm models (Langmuir, bi-Langmuir and Langmuir-Freundlich) and compared. The results indicate a heterogeneity of binding sites. It was found, and confirmed by DFT calculations, that the specific binding of vanillin in the cavities is due to non-covalent interactions of the template with the hydroxyphenyl and amide moieties. The binding geometry of vanillin in the MIP cavity was also modelled. The obtained MIP is highly specific for vanillin (with an imprinting factor of 7.4) and was successfully applied to the extraction of vanillin from vanilla pods, red wine spiked with vanillin, and natural and artificial vanilla sugar, with a recovery of 80%. Copyright © 2013 Elsevier B.V. All rights reserved.
DEFF Research Database (Denmark)
Bang Appel, Helene; Singla, Rashmi
2016-01-01
Despite an increase in cross-border intimate relationships and children of mixed parentage, there is little mention or scholarship about them in the area of childhood and migrancy in the Nordic countries. The international literature implies historical pathologisation, contestation and current … of identity formation in the … They position themselves as having an "in-between" identity or as "just Danes" in their everyday lives among friends, family, and during leisure activities. Thus a new paradigm is evolving, away from the pathologisation of mixed children and simplified one-sided categories …
International Nuclear Information System (INIS)
Zhang, Fang; Zhang, Yan; Chen, Yun; Dai, Kun; Loosdrecht, Mark C.M. van; Zeng, Raymond J.
2015-01-01
Highlights: • Simultaneous production of acetate and methane from glycerol was investigated. • Acetate accounted for more than 90% of metabolites in liquid solutions. • The maximum concentration of acetate was above 13 g/L. • 93% of archaea were hydrogenotrophic methanogens. • Thermoanaerobacter was the main bacterium and its percentage was 92%. - Abstract: The feasibility of simultaneous production of acetate and methane from glycerol was investigated by selective enrichment of hydrogenotrophic methanogens in an extreme-thermophilic (70 °C) fermentation. Fed-batch experiments showed acetate was produced at concentrations up to 13.0 g/L. Stable operation of the continuous stirred tank reactor (CSTR) was reached within 100 days. Acetate accounted for more than 90 w/w% of the metabolites in the fermentation liquid. The yields of methane and acetate were close to the theoretical yields, with 0.74–0.80 mol-methane/mol-glycerol and 0.63–0.70 mol-acetate/mol-glycerol. The obtained microbial community was characterized. Hydrogenotrophic methanogens, mainly Methanothermobacter thermautotrophicus, formed 93% of the methanogenic community. This confirms that a high temperature (70 °C) can effectively select for hydrogenotrophic methanogenic archaea. Thermoanaerobacter spp. was the main bacterium, forming 91.5% of the bacterial population. This work demonstrated the conversion of glycerol, the byproduct of biodiesel production, to acetate as a chemical and to biogas for energy generation.
DEFF Research Database (Denmark)
Brabrand, Helle
2010-01-01
levels than those related to building, and this exploration is a special challenge and a competence implicit in artistic development work. The project Mixed Movements generates drawing-material, not primarily as representation, but as a performance-based medium, making the body's being-in-the-media felt and appear … as possible operational moves …
2014-09-30
negative (right panel c) and the kinetic energy dissipation is larger than that expected from meteorological forcing alone (right panel a). This is … 10.1002/grl.50919. Shcherbina, A. et al., 2014, The LatMix Summer Campaign: Submesoscale Stirring in the Upper Ocean, Bull. American Meteorological
Mixed alcohols production from syngas
International Nuclear Information System (INIS)
Stevens, R.R.; Conway, M.M.
1988-01-01
A process is described for selectively producing mixed alcohols from synthesis gas, comprising contacting a mixture of hydrogen and carbon monoxide with a catalytic amount of a catalyst containing (1) a catalytically active metal, molybdenum or tungsten, in free or combined form; (2) a cocatalytic metal of cobalt or nickel, in free or combined form; and (3) a Fischer-Tropsch promoter of an alkali or alkaline earth series metal, in free or combined form; the components combined by dry mixing, mixing as a wet paste, or wet impregnation, and then sulfided; the catalyst excluding rhodium, ruthenium and copper; at a pressure of at least about 500 psig and under conditions sufficient to form the mixed alcohols in at least 20 percent CO2-free carbon selectivity, the mixed alcohols containing a C1 to C2-5 alcohol weight ratio of less than about 1:1
Yu, Ming'e; Li, Caiting; Zeng, Guangming; Zhou, Yang; Zhang, Xunan; Xie, Yin'e
2015-07-01
A series of novel catalysts (CexSny) for the selective catalytic reduction of NO by NH3 were prepared by the inverse co-precipitation method. The aim of this novel design was to improve the NO removal efficiency of Ce-Ti by the introduction of SnO2. It was found that the Ce-Sn-Ti catalyst was much more active than Ce-Ti, and the best Ce:Sn molar ratio was 2:1. Ce2Sn1 showed satisfactory NO removal efficiency at low temperature (160-280 °C), while over 90% NO removal efficiency was maintained in the temperature range of 280-400 °C at a gas hourly space velocity (GHSV) of 50,000 h-1. Besides, Ce2Sn1 kept a stable NO removal efficiency over a wide range of GHSV and a long period of reaction time. Meanwhile, Ce2Sn1 exhibited remarkable resistance to H2O and SO2 poisoning, both separately and simultaneously, due to the introduction of SnO2. The promotional effect of SnO2 was studied in detail by N2 adsorption-desorption, X-ray diffraction (XRD), Raman spectra, X-ray photoelectron spectroscopy (XPS) and H2 temperature-programmed reduction (H2-TPR). The characterization results revealed that the excellent catalytic performance of Ce2Sn1 was associated with its higher specific surface area, larger pore volume and poorer crystallization. Besides, the introduction of SnO2 resulted in not only greater conversion of Ce4+ to Ce3+ but also an increased amount of chemisorbed oxygen, both of which are beneficial to the SCR activity. More importantly, a novel reduction peak at lower temperatures, arising through the new redox equilibrium 2Ce4+ + Sn2+ ↔ 2Ce3+ + Sn4+, and a higher total H2 consumption were obtained by the addition of SnO2. Finally, a possible reaction mechanism of the selective catalytic reduction over Ce2Sn1 was also proposed.
Fischer, M; Joguet, D; Robin, G; Peltier, L; Laheurte, P
2016-05-01
Ti-Nb alloys are excellent candidates for biomedical applications such as implantology and joint replacement because of their very low elastic modulus, their excellent biocompatibility and their high strength. A low elastic modulus, close to that of cortical bone, minimizes the stress shielding effect that appears subsequent to the insertion of an implant. The objective of this study is to investigate the microstructural and mechanical properties of a Ti-Nb alloy elaborated by selective laser melting on a powder bed of a mixture of Ti and Nb elemental powders (26 at.%). The influence of operating parameters on the porosity of manufactured samples and on the efficacy of dissolving Nb particles in Ti was studied. The results obtained by optical microscopy, SEM analysis and X-ray microtomography show that the laser energy has a significant effect on the compactness and homogeneity of the manufactured parts. Homogeneous and compact samples were obtained for high energy levels. The microstructure of these samples has been further characterized. Their mechanical properties were assessed by ultrasonic measurements, and the Young's modulus found is close to that of a classically elaborated Ti-26Nb ingot. Copyright © 2016 Elsevier B.V. All rights reserved.
A Mixed Methods Sampling Methodology for a Multisite Case Study
Sharp, Julia L.; Mobley, Catherine; Hammond, Cathy; Withington, Cairen; Drew, Sam; Stringfield, Sam; Stipanovic, Natalie
2012-01-01
The flexibility of mixed methods research strategies makes such approaches especially suitable for multisite case studies. Yet the utilization of mixed methods to select sites for these studies is rarely reported. The authors describe their pragmatic mixed methods approach to select a sample for their multisite mixed methods case study of a…
International Nuclear Information System (INIS)
Adelberger, E.G.
1975-01-01
The field of parity mixing in light nuclei bears upon one of the exciting and active problems of physics--the nature of the fundamental weak interaction. It is also a subject where polarization techniques play a very important role. Weak interaction theory is first reviewed to motivate the parity mixing experiments. Two very attractive systems are discussed where the nuclear physics is so beautifully simple that the experimental observation of tiny effects directly measures parity violating (PV) nuclear matrix elements which are quite sensitive to the form of the basic weak interaction. Since the measurement of very small analyzing powers and polarizations may be of general interest to this conference, some discussion is devoted to experimental techniques
Qi, Ping; Liang, Zhi-An; Wang, Yu; Xiao, Jian; Liu, Jia; Zhou, Qing-Qiong; Zheng, Chun-Hao; Luo, Li-Ni; Lin, Zi-Hao; Zhu, Fang; Zhang, Xue-Wu
2016-03-11
In this study, mixed hemimicelles solid-phase extraction (MHSPE) based on sodium dodecyl sulfate (SDS)-coated Fe3O4 nano-magnets was investigated as a novel method for the extraction and separation of four banned cationic dyes, Auramine O, Rhodamine B, Basic orange 21 and Basic orange 22, in condiments prior to HPLC detection. The main factors affecting the extraction of the analytes, such as pH, surfactant and adsorbent concentrations, and zeta potential, were studied and optimized. Under optimized conditions, the proposed method was successfully applied to the analysis of banned cationic dyes in food samples such as chili sauce, soybean paste and tomato sauce. Validation data showed good recoveries in the range of 70.1-104.5%, with relative standard deviations less than 15%. The method limits of detection/quantification were in the range of 0.2-0.9 and 0.7-3 μg kg(-1), respectively. The selective adsorption and enrichment of cationic dyes were achieved by the synergistic effects of hydrophobic interactions and electrostatic attraction between the mixed hemimicelles and the cationic dyes, which also resulted in the removal of natural pigment interferences from the sample extracts. When applied to real samples, Rhodamine B was detected in several positive samples (chili powders) within the range of 0.042 to 0.177 mg kg(-1). These results indicate that magnetic MHSPE is an efficient and selective sample preparation technique for the extraction of banned cationic dyes in a complex matrix. Copyright © 2016 Elsevier B.V. All rights reserved.
Francis, Jill J; Duncan, Eilidh M; Prior, Maria E; Maclennan, Graeme S; Dombrowski, Stephan; Bellingan, Geoff U; Campbell, Marion K; Eccles, Martin P; Rose, Louise; Rowan, Kathryn M; Shulman, Rob; Peter R Wilson, A; Cuthbertson, Brian H
2014-04-01
Hospital-acquired infections (HAIs) are a major cause of morbidity and mortality. Critically ill patients in intensive care units (ICUs) are particularly susceptible to these infections. One intervention that has gained much attention in reducing HAIs is selective decontamination of the digestive tract (SDD). SDD involves the application of topical non-absorbable antibiotics to the oropharynx and stomach and a short course of intravenous (i.v.) antibiotics. SDD may reduce infections and improve mortality, but has not been widely adopted in the UK or internationally. Hence, there is a need to identify the reasons for low uptake and whether or not further clinical research is needed before wider implementation would be considered appropriate. The project objectives were to (1) identify and describe the SDD intervention, (2) identify views about the evidence base, (3) identify acceptability of further research and (4) identify feasibility of further randomised controlled trials (RCTs). A four-stage approach involving (1) case studies of two ICUs in which SDD is delivered including observations, interviews and documentary analysis, (2) a three-round Delphi study for in-depth investigation of clinicians' views, including semi-structured interviews and two iterations of questionnaires with structured feedback, (3) a nationwide online survey of consultants in intensive care medicine and clinical microbiology and (4) semistructured interviews with international clinical triallists to identify the feasibility of further research. Case studies were set in two UK ICUs. Other stages of this research were conducted by telephone and online with NHS staff working in ICUs. (1) Staff involved in SDD adoption or delivery in two UK ICUs, (2) ICU experts (intensive care consultants, clinical microbiologists, hospital pharmacists and ICU clinical leads), (3) all intensive care consultants and clinical microbiologists in the UK with responsibility for patients in ICUs were invited and
Energy Technology Data Exchange (ETDEWEB)
Fischer, M. [Laboratoire d' Etude des Microstructures et de Mécanique des Matériaux LEM3 (UMR CNRS 7239), Université de Lorraine, Ile de Saulcy, F-57045 Metz (France); Joguet, D. [Laboratoire d' Etudes et de Recherches sur les Matériaux, les Procédés et les Surfaces LERMPS, Université de Technologie de Belfort Montbéliard, Sevenans, 90010 Belfort (France); Robin, G. [Laboratoire d' Etude des Microstructures et de Mécanique des Matériaux LEM3 (UMR CNRS 7239), Université de Lorraine, Ile de Saulcy, F-57045 Metz (France); Peltier, L. [Laboratoire d' Etude des Microstructures et de Mécanique des Matériaux LEM3 (UMR CNRS 7239), Ecole Nationale Supérieure d' Arts et Métiers, F-57078 Metz (France); Laheurte, P. [Laboratoire d' Etude des Microstructures et de Mécanique des Matériaux LEM3 (UMR CNRS 7239), Université de Lorraine, Ile de Saulcy, F-57045 Metz (France)
2016-05-01
Ti–Nb alloys are excellent candidates for biomedical applications such as implantology and joint replacement because of their very low elastic modulus, their excellent biocompatibility and their high strength. A low elastic modulus, close to that of cortical bone, minimizes the stress shielding effect that appears subsequent to the insertion of an implant. The objective of this study is to investigate the microstructural and mechanical properties of a Ti–Nb alloy elaborated by selective laser melting on a powder bed of a mixture of Ti and Nb elemental powders (26 at.%). The influence of operating parameters on the porosity of manufactured samples and on the efficacy of dissolving Nb particles in Ti was studied. The results obtained by optical microscopy, SEM analysis and X-ray microtomography show that the laser energy has a significant effect on the compactness and homogeneity of the manufactured parts. Homogeneous and compact samples were obtained for high energy levels. The microstructure of these samples has been further characterized. Their mechanical properties were assessed by ultrasonic measurements, and the Young's modulus found is close to that of a classically elaborated Ti–26Nb ingot. - Highlights: • Biomimetic implants can be provided from additive manufacturing with Ti–Nb. • We made parts in a Ti–Nb alloy elaborated in situ from a mixture of elemental powders. • Process parameters have a significant impact on homogeneity and compactness. • Non-columnar elongated beta-grains are stacked with an orientation {001}<100>. • Low Young's modulus is achieved by this texture.
Zukowski, Witold; Berkowicz, Gabriela; Baron, Jerzy; Kandefer, Stanisław; Jamanek, Dariusz; Szarlik, Stefan; Wielgosz, Zbigniew; Zielecka, Maria
2014-01-01
2,6-Dimethylphenol (2,6-DMP) is a product of phenol methylation, especially important for the plastics industry. The methylation of phenol in the gas phase is strongly exothermic. In order to ensure good temperature equalization in the catalyst bed, the process was carried out using a catalyst in the form of a fluidized bed, in particular the commercial iron-chromium catalyst TZC-3/1. Synthesis of 2,6-dimethylphenol from phenol and methanol in a fluidized bed of iron-chromium catalyst was carried out and the fluidization of the catalyst was examined. A stable state of the fluidized bed of iron-chromium catalyst was achieved. The measured velocities allowed determination of the minimum flow of reactants ensuring that the catalyst bed in the reactor enters the state of fluidization. Due to a high content of o-cresol in the products of 2,6-dimethylphenol synthesis, circulation in the technological node was proposed. A series of syntheses with a variable amount of o-cresol in the feedstock allowed determination of the parameters of the stationary states. Stable work of the technological node with o-cresol circulation is possible in the temperature range of 350-380 °C and at an inlet o-cresol/phenol molar ratio of more than 0.48. Synthesis of 2,6-DMP over the iron-chromium catalyst is characterized by more than 90% phenol conversion. Moreover, O-alkylation did not occur (which was confirmed by GC-MS analysis). By applying o-cresol circulation in the 2,6-DMP process, a 2,6-DMP selectivity of more than 85% was achieved. The levels of the by-products 2,4-DMP and 2,4,6-TMP were low. Under the optimal conditions, based on the highest yield of 2,6-DMP achieved in the technological node applying o-cresol circulation, there are 2 mol% of 2,4-DMP and 6 mol% of 2,4,6-TMP in the final mixture, whereas 2,4,6-TMP can be useful as a chain stopper and molar mass regulator during the polymerization of 2,6-DMP.
Bruce Bagwell, C
2018-01-01
This chapter outlines how to approach the complex tasks associated with designing models for high-dimensional cytometry data. Unlike gating approaches, modeling lends itself to automation and accounts for measurement overlap among cellular populations. Designing these models is now easier because of a new technique called high-definition t-SNE mapping. Nontrivial examples are provided that serve as a guide to create models that are consistent with data.
Mathematics++ selected topics beyond the basic courses
Kantor, Ida; Šámal, Robert
2015-01-01
Mathematics++ is a concise introduction to six selected areas of 20th century mathematics providing numerous modern mathematical tools used in contemporary research in computer science, engineering, and other fields. The areas are: measure theory, high-dimensional geometry, Fourier analysis, representations of groups, multivariate polynomials, and topology. For each of the areas, the authors introduce basic notions, examples, and results. The presentation is clear and accessible, stressing intuitive understanding, and it includes carefully selected exercises as an integral part. Theory is comp
Penalized feature selection and classification in bioinformatics
Ma, Shuangge; Huang, Jian
2008-01-01
In bioinformatics studies, supervised classification with high-dimensional input variables is frequently encountered. Examples routinely arise in genomic, epigenetic and proteomic studies. Feature selection can be employed along with classifier construction to avoid over-fitting, to generate more reliable classifier and to provide more insights into the underlying causal relationships. In this article, we provide a review of several recently developed penalized feature selection and classific...
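As a concrete illustration of penalized feature selection, here is a minimal LASSO solved by iterative soft-thresholding (ISTA, a proximal gradient method) on a synthetic p >> n problem; the data, penalty level, and iteration count are invented for the sketch and are not from the review.

```python
import numpy as np

def soft_threshold(z, t):
    """Elementwise soft-thresholding: the proximal operator of t*||.||_1."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_ista(X, y, lam, n_iter=2000):
    """Minimize (1/2n)||y - Xb||^2 + lam*||b||_1 by proximal gradient
    descent; the soft-threshold step produces an exactly sparse b."""
    n, p = X.shape
    L = np.linalg.norm(X, 2) ** 2 / n      # Lipschitz constant of the gradient
    b = np.zeros(p)
    for _ in range(n_iter):
        grad = X.T @ (X @ b - y) / n
        b = soft_threshold(b - grad / L, lam / L)
    return b

# p >> n toy problem: only the first 3 of 200 coefficients are nonzero.
rng = np.random.default_rng(0)
X = rng.standard_normal((50, 200))
beta = np.zeros(200)
beta[:3] = [3.0, -2.0, 1.5]
y = X @ beta + 0.1 * rng.standard_normal(50)
b_hat = lasso_ista(X, y, lam=0.2)
```

The L1 penalty both regularizes against over-fitting and performs selection: coefficients of irrelevant predictors are driven exactly to zero, which is why such estimators yield interpretable sparse classifiers in genomic studies.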
Mixed waste integrated program: Logic diagram
Energy Technology Data Exchange (ETDEWEB)
Mayberry, J.; Stelle, S. [Science Applications International Corp., Idaho Falls, ID (United States); O`Brien, M. [Univ. of Arizona, Tucson, AZ (United States); Rudin, M. [Univ. of Nevada, Las Vegas, NV (United States); Ferguson, J. [Lockheed Idaho Technologies Co., Idaho Falls, ID (United States); McFee, J. [I.T. Corp., Albuquerque, NM (United States)
1994-11-30
The Mixed Waste Integrated Program Logic Diagram was developed to provide technical alternatives for mixed waste projects for the Office of Technology Development's Mixed Waste Integrated Program (MWIP). Technical solutions in the areas of characterization, treatment, and disposal were matched to a select number of US Department of Energy (DOE) treatability groups represented by waste streams found in the Mixed Waste Inventory Report (MWIR).
Functional Mixed Effects Model for Small Area Estimation.
Maiti, Tapabrata; Sinha, Samiran; Zhong, Ping-Shou
2016-09-01
Functional data analysis has become an important area of research due to its ability of handling high dimensional and complex data structures. However, the development is limited in the context of linear mixed effect models, and in particular, for small area estimation. The linear mixed effect models are the backbone of small area estimation. In this article, we consider area level data, and fit a varying coefficient linear mixed effect model where the varying coefficients are semi-parametrically modeled via B-splines. We propose a method of estimating the fixed effect parameters and consider prediction of random effects that can be implemented using a standard software. For measuring prediction uncertainties, we derive an analytical expression for the mean squared errors, and propose a method of estimating the mean squared errors. The procedure is illustrated via a real data example, and operating characteristics of the method are judged using finite sample simulation studies.
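The fixed-effect part of the varying-coefficient model above can be sketched with a numpy-only example: a coefficient function modeled with a cubic B-spline basis (built by the standard Cox-de Boor recursion) and fit by least squares. The random effects, MSE estimation, and small-area structure are omitted, and the data are synthetic.

```python
import numpy as np

def bspline_basis(x, knots, degree=3):
    """Evaluate all B-spline basis functions at points x for a
    non-decreasing knot vector, via the Cox-de Boor recursion
    (zero-width intervals from repeated knots are skipped)."""
    B = np.array([(x >= knots[i]) & (x < knots[i + 1])
                  for i in range(len(knots) - 1)], float)
    B[-1, x == knots[-1]] = 1.0            # close the last interval
    for d in range(1, degree + 1):
        Bn = np.zeros((len(knots) - d - 1, len(x)))
        for i in range(len(knots) - d - 1):
            left = knots[i + d] - knots[i]
            right = knots[i + d + 1] - knots[i + 1]
            if left > 0:
                Bn[i] += (x - knots[i]) / left * B[i]
            if right > 0:
                Bn[i] += (knots[i + d + 1] - x) / right * B[i + 1]
        B = Bn
    return B.T                             # shape (len(x), n_basis)

# Varying-coefficient fit: y = beta(t) * x + noise, beta(t) = B(t) @ c.
rng = np.random.default_rng(2)
t = np.sort(rng.uniform(0, 1, 300))
x = rng.standard_normal(300)
beta_true = np.sin(2 * np.pi * t)
y = beta_true * x + 0.1 * rng.standard_normal(300)

# Clamped cubic knot vector with 10 interior intervals -> 13 basis functions.
knots = np.r_[[0.0] * 3, np.linspace(0, 1, 11), [1.0] * 3]
Bmat = bspline_basis(t, knots)
design = Bmat * x[:, None]                 # basis scaled by the covariate
c, *_ = np.linalg.lstsq(design, y, rcond=None)
beta_hat = Bmat @ c
```

In the article's setting the spline coefficients enter a linear mixed model alongside area-level random effects, so prediction can reuse standard mixed-model software; the sketch shows only the semi-parametric coefficient representation.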
An Introduction to LANL Mixed Potential Sensors
Energy Technology Data Exchange (ETDEWEB)
Mukundan, Rangachary [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Brosha, Eric Lanich [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Kreller, Cortney [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2015-01-26
These are slides for a webinar given on the topics of an introduction to LANL mixed potential sensors. Topics include the history of LANL electrochemical sensor work, an introduction to mixed potential sensors, LANL uniqueness, and an application of LANL mixed potential sensors. The summary is as follows: Improved understanding of the mixed-potential sensor mechanism (factors controlling the sensor response identified), sensor design optimized to maximize sensor sensitivity and durability (porous electrolyte/dense electrodes), electrodes selected for various specific applications (CO, HC, H_{2}), sensor operating parameters optimized for improved gas selectivity (NO_{x}, NH_{3}).
A selective overview of feature screening for ultrahigh-dimensional data.
Liu, JingYuan; Zhong, Wei; Li, RunZe
2015-10-01
High-dimensional data have frequently been collected in many scientific areas including genomewide association study, biomedical imaging, tomography, tumor classifications, and finance. Analysis of high-dimensional data poses many challenges for statisticians. Feature selection and variable selection are fundamental for high-dimensional data analysis. The sparsity principle, which assumes that only a small number of predictors contribute to the response, is frequently adopted and deemed useful in the analysis of high-dimensional data. Following this general principle, a large number of variable selection approaches via penalized least squares or likelihood have been developed in the recent literature to estimate a sparse model and select significant variables simultaneously. While the penalized variable selection methods have been successfully applied in many high-dimensional analyses, modern applications in areas such as genomics and proteomics push the dimensionality of data to an even larger scale, where the dimension of data may grow exponentially with the sample size. This has been called ultrahigh-dimensional data in the literature. This work aims to present a selective overview of feature screening procedures for ultrahigh-dimensional data. We focus on insights into how to construct marginal utilities for feature screening on specific models and motivation for the need of model-free feature screening procedures.
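A minimal sketch of the marginal-utility idea behind sure independence screening (SIS): rank predictors by absolute marginal correlation with the response and keep the top d, with the screening size d = n/log n following the common default. The synthetic data and the choice of indices are illustrative.

```python
import numpy as np

def sis_screen(X, y, d=None):
    """Sure independence screening: compute the marginal utility
    |corr(X_j, y)| for each predictor and keep the top d of them
    (default d = n / log n), returning sorted column indices."""
    n, p = X.shape
    if d is None:
        d = int(n / np.log(n))
    Xc = (X - X.mean(axis=0)) / X.std(axis=0)   # standardize columns
    yc = (y - y.mean()) / y.std()
    score = np.abs(Xc.T @ yc) / n               # |corr(X_j, y)|
    return np.sort(np.argsort(score)[::-1][:d])

# Ultrahigh-dimensional toy problem: p = 5000 predictors, n = 200 samples,
# with only columns 7 and 42 truly related to the response.
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 5000))
y = 2.0 * X[:, 7] - 1.5 * X[:, 42] + 0.5 * rng.standard_normal(200)
keep = sis_screen(X, y)
```

Screening reduces p from the ultrahigh scale to a moderate d in one cheap pass; a penalized method such as the LASSO is then typically run on the retained columns, which is the two-stage strategy the overview surveys.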
Handbook of mixed membership models and their applications
Airoldi, Edoardo M; Erosheva, Elena A; Fienberg, Stephen E
2014-01-01
In response to scientific needs for more diverse and structured explanations of statistical data, researchers have discovered how to model individual data points as belonging to multiple groups. Handbook of Mixed Membership Models and Their Applications shows you how to use these flexible modeling tools to uncover hidden patterns in modern high-dimensional multivariate data. It explores the use of the models in various application settings, including survey data, population genetics, text analysis, image processing and annotation, and molecular biology.Through examples using real data sets, yo
Mixed Connective Tissue Disease
Mixed connective tissue disease has signs and symptoms of a combination of disorders, primarily lupus, scleroderma and polymyositis. For this reason, mixed connective tissue disease …
International Nuclear Information System (INIS)
Harnby, N.
1988-01-01
Covering all aspects of mixing, this work presents research and developments in industrial applications, flow patterns and mixture analysis, mixing of solids into liquids, and mixing of gases into liquids
Directory of Open Access Journals (Sweden)
José Jaime Vasconcelos Cavalcanti
2009-01-01
genotypic adaptability and stability. The experiments were set up in a complete block design, with eleven treatments, three repetitions and five plants per plot. The results showed an alteration in the clone order in the different environments, as reflected by the genotypic correlation of average magnitude (0.58). The heritability of clones was of moderate to high magnitude for the different traits, indicating excellent possibilities for selection and allowing a selective accuracy of 83%. The methods MHVG, PRVG, and MHPRVG can be part of the selective criteria in the cashew breeding program.
KEY-WORDS: Anacardium occidentale; genotype x environment interaction; BLUP/REML.