Effects of dependence in high-dimensional multiple testing problems
Directory of Open Access Journals (Sweden)
van de Wiel Mark A
2008-02-01
Full Text Available Abstract Background We consider the effects of dependence among variables of high-dimensional data in multiple hypothesis testing problems, in particular in False Discovery Rate (FDR) control procedures. Recent simulation studies consider only simple correlation structures among variables, which hardly reflect the features of real data. Our aim is to systematically study the effects of several network features, such as sparsity and correlation strength, by imposing dependence structures among variables using random correlation matrices. Results We study the robustness against dependence of several FDR procedures that are popular in microarray studies, such as the Benjamini-Hochberg FDR, Storey's q-value, SAM and resampling-based FDR procedures. False Non-discovery Rates and estimates of the number of null hypotheses are computed from those methods and compared. Our simulation study shows that methods such as SAM and the q-value do not adequately control the FDR at the claimed level under dependence. On the other hand, the adaptive Benjamini-Hochberg procedure seems to be most robust while remaining conservative. Finally, the estimates of the number of true null hypotheses under various dependence conditions are variable. Conclusion We discuss a new method for efficient guided simulation of dependent data that satisfies imposed network constraints as conditional independence structures. Our simulation set-up allows for a structural study of the effect of dependencies on multiple testing criteria and is useful for testing a potentially new method for π0 or FDR estimation in a dependency context.
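The step-up rule at the heart of the Benjamini-Hochberg procedure discussed in this abstract is compact enough to sketch. The p-values below are made-up illustrations, not data from the study:

```python
import numpy as np

def benjamini_hochberg(pvals, alpha=0.05):
    """Step-up BH rule: reject H_(1)..H_(k) for the largest k with
    p_(k) <= (k/m) * alpha.  Returns a boolean mask of rejections."""
    p = np.asarray(pvals, dtype=float)
    m = p.size
    order = np.argsort(p)
    below = p[order] <= (np.arange(1, m + 1) / m) * alpha
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])   # largest qualifying rank (0-based)
        reject[order[: k + 1]] = True
    return reject

pvals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.074, 0.205, 0.212, 0.216]
print(benjamini_hochberg(pvals))   # only the two smallest p-values survive
```

The study's point is precisely that the guarantees of rules like this are sensitive to the dependence structure among the tests.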
Applications of Asymptotic Sampling on High Dimensional Structural Dynamic Problems
DEFF Research Database (Denmark)
Sichani, Mahdi Teimouri; Nielsen, Søren R.K.; Bucher, Christian
2011-01-01
The paper presents the application of asymptotic sampling to various structural models subjected to random excitations. A detailed study on the effect of different distributions of the so-called support points is performed. This study shows that the distribution of the support points has considerable influence on the results; the support point distribution is selected such that the estimation error is minimized. Next, the method is applied to different cases of linear and nonlinear systems with a large number of random variables representing the dynamic excitation. The results show that asymptotic sampling is capable of providing good approximations of low failure probability events for very high dimensional reliability problems in structural dynamics.
A Comparison of Machine Learning Methods in a High-Dimensional Classification Problem
Zekić-Sušac, Marijana; Pfeifer, Sanja; Šarlija, Nataša
2014-01-01
Background: Large-dimensional data modelling often relies on variable reduction methods in the pre-processing and in the post-processing stage. However, such a reduction usually provides less information and yields a lower accuracy of the model. Objectives: The aim of this paper is to assess the high-dimensional classification problem of recognizing entrepreneurial intentions of students by machine learning methods. Methods/Approach: Four methods were tested: artificial neural networks, CART ...
Characterization of discontinuities in high-dimensional stochastic problems on adaptive sparse grids
International Nuclear Information System (INIS)
Jakeman, John D.; Archibald, Richard; Xiu Dongbin
2011-01-01
In this paper we present a set of efficient algorithms for the detection and identification of discontinuities in high-dimensional space. The method is based on an extension of polynomial annihilation for discontinuity detection in low dimensions. Compared to earlier work, the present method offers significant improvements for high-dimensional problems. The core of the algorithms relies on adaptive refinement of sparse grids. It is demonstrated that in the commonly encountered cases where a discontinuity resides on a small subset of the dimensions, the present method becomes 'optimal', in the sense that the total number of points required for function evaluations depends linearly on the dimensionality of the space. The details of the algorithms are presented, and various numerical examples demonstrate the efficacy of the method.
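The idea behind annihilation-based detection can be caricatured in one dimension: a first-order difference stays O(1) across a jump but shrinks like O(h) where the function is smooth. A minimal sketch (the test function and grid are hypothetical, not from the paper):

```python
import numpy as np

def jump_indicator(f, grid):
    # First-order (undivided) differences: O(h) in smooth regions,
    # O(1) across a jump discontinuity.
    return np.abs(np.diff(f(grid)))

f = lambda t: np.where(t < 0.5, np.sin(t), np.sin(t) + 1.0)  # unit jump at 0.5
x = np.linspace(0.0, 1.0, 201)                               # h = 0.005
ind = jump_indicator(f, x)
i = int(np.argmax(ind))
print(f"jump located in cell [{x[i]:.3f}, {x[i+1]:.3f}]")
```

The paper's contribution is doing this kind of detection adaptively on sparse grids so that the cost grows only linearly with dimension.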
International Nuclear Information System (INIS)
Langrene, Nicolas
2014-01-01
This thesis deals with the numerical solution of general stochastic control problems, with notable applications to electricity markets. We first propose a structural model for the price of electricity that allows for price spikes well above the marginal fuel price under strained market conditions. This model makes it possible to price and partially hedge electricity derivatives, using fuel forwards as hedging instruments. We then propose an algorithm that combines Monte-Carlo simulations with local basis regressions to solve general optimal switching problems. A comprehensive rate of convergence for the method is provided. Moreover, we make the algorithm parsimonious in memory (and hence suitable for high-dimensional problems) by generalizing to this framework a memory reduction method that avoids the storage of the sample paths. We illustrate this on the problem of investments in new power plants (our structural power price model allowing the new plants to impact the price of electricity). Finally, we study more general stochastic control problems (the control can be continuous and impact the drift and volatility of the state process), whose solutions belong to the class of fully nonlinear Hamilton-Jacobi-Bellman equations and can be handled via constrained Backward Stochastic Differential Equations (BSDEs), for which we develop a backward algorithm based on control randomization and parametric optimizations. A rate of convergence between the constrained BSDE and its discrete version is provided, as well as an estimate of the optimal control. This algorithm is then applied to the problem of super-replication of options under uncertain volatilities (and correlations). (author)
Clustering high dimensional data
DEFF Research Database (Denmark)
Assent, Ira
2012-01-01
High-dimensional data, i.e., data described by a large number of attributes, pose specific challenges to clustering. The so-called ‘curse of dimensionality’, coined originally to describe the general increase in complexity of various computational problems as dimensionality increases, is known to render traditional clustering algorithms ineffective. The curse of dimensionality, among other effects, means that with an increasing number of dimensions, a loss of meaningful differentiation between similar and dissimilar objects is observed. As high-dimensional objects appear almost alike, new approaches for clustering are required. Consequently, recent research has focused on developing techniques and clustering algorithms specifically for high-dimensional data. Still, open research issues remain. Clustering is a data mining task devoted to the automatic grouping of data based on mutual similarity. Each cluster...
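The loss of contrast this abstract describes is easy to demonstrate numerically: the relative gap between a query point's farthest and nearest neighbour shrinks as dimensionality grows. A small illustration with uniform random data (an assumption made for the sketch):

```python
import numpy as np

rng = np.random.default_rng(0)
contrasts = {}
for d in (2, 10, 100, 1000):
    X = rng.uniform(size=(500, d))       # 500 uniformly drawn objects
    q = rng.uniform(size=d)              # a query object
    dist = np.linalg.norm(X - q, axis=1)
    # Relative contrast between farthest and nearest neighbour:
    contrasts[d] = (dist.max() - dist.min()) / dist.min()
    print(d, round(contrasts[d], 3))
```

As d grows, all objects end up at nearly the same distance from the query, which is why distance-based clustering degrades.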
A Comparison of Machine Learning Methods in a High-Dimensional Classification Problem
Directory of Open Access Journals (Sweden)
Zekić-Sušac Marijana
2014-09-01
Full Text Available Background: Large-dimensional data modelling often relies on variable reduction methods in the pre-processing and post-processing stages. However, such a reduction usually provides less information and yields a lower accuracy of the model. Objectives: The aim of this paper is to assess the high-dimensional classification problem of recognizing entrepreneurial intentions of students by machine learning methods. Methods/Approach: Four methods were tested: artificial neural networks, CART classification trees, support vector machines, and k-nearest neighbour, on the same dataset, in order to compare their efficiency in terms of classification accuracy. The performance of each method was compared on ten subsamples in a 10-fold cross-validation procedure in order to assess the sensitivity and specificity of each model. Results: The artificial neural network model based on a multilayer perceptron yielded a higher classification rate than the models produced by the other methods. The pairwise t-test showed a statistically significant difference between the artificial neural network and the k-nearest neighbour model, while the differences among the other methods were not statistically significant. Conclusions: The tested machine learning methods are able to learn fast and achieve high classification accuracy. However, further advancement can be assured by testing a few additional methodological refinements in machine learning methods.
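The evaluation protocol described here (10-fold cross-validation followed by a paired t-test on fold accuracies) can be sketched as follows. The synthetic data, the 1-nearest-neighbour and nearest-centroid classifiers are stand-ins for illustration, not the models or dataset of the study:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for a high-dimensional two-class problem:
# 200 samples, 50 features, classes separated along the first feature.
n, d = 200, 50
y = np.repeat([0, 1], n // 2)
X = rng.normal(size=(n, d))
X[y == 1, 0] += 2.0

def knn1(Xtr, ytr, Xte):
    # 1-nearest-neighbour by squared Euclidean distance.
    dist = ((Xte[:, None, :] - Xtr[None, :, :]) ** 2).sum(-1)
    return ytr[np.argmin(dist, axis=1)]

def centroid(Xtr, ytr, Xte):
    # Nearest class-centroid classifier.
    mu = np.stack([Xtr[ytr == c].mean(0) for c in (0, 1)])
    dist = ((Xte[:, None, :] - mu[None, :, :]) ** 2).sum(-1)
    return np.argmin(dist, axis=1)

# 10-fold cross-validation, identical folds for both methods.
folds = np.array_split(rng.permutation(n), 10)
acc = {"knn": [], "centroid": []}
for i in range(10):
    te = folds[i]
    tr = np.concatenate([folds[j] for j in range(10) if j != i])
    for name, clf in (("knn", knn1), ("centroid", centroid)):
        acc[name].append(float((clf(X[tr], y[tr], X[te]) == y[te]).mean()))

# Paired t statistic on the per-fold accuracy differences.
diff = np.array(acc["knn"]) - np.array(acc["centroid"])
sd = diff.std(ddof=1)
t = diff.mean() / (sd / np.sqrt(diff.size)) if sd > 0 else float("nan")
print(np.mean(acc["knn"]), np.mean(acc["centroid"]), t)
```

Pairing the folds is what makes the t-test valid here: both classifiers see exactly the same train/test splits.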
Greedy algorithms for high-dimensional non-symmetric linear problems***
Directory of Open Access Journals (Sweden)
Cancès E.
2013-12-01
Full Text Available In this article, we present a family of numerical approaches for solving high-dimensional linear non-symmetric problems. The principle of these methods is to approximate a function that depends on a large number of variables by a sum of tensor-product functions, each term of which is computed iteratively via a greedy algorithm. There exists a good theoretical framework for these methods in the case of (linear and nonlinear) symmetric elliptic problems. However, the convergence results are no longer valid as soon as the problems under consideration are not symmetric. We present here a review of the main algorithms proposed in the literature to circumvent this difficulty, together with some new approaches. The theoretical convergence results and the practical implementation of these algorithms are discussed, and their behavior is illustrated through numerical examples.
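For intuition, the greedy idea can be sketched in the simplest setting: approximate a two-variable function (a matrix) by a sum of tensor products of one-variable functions, fitting one term at a time to the current residual by alternating least-squares updates. This is an illustrative toy, not the article's algorithm:

```python
import numpy as np

def greedy_rank_one(F, n_terms=5, n_inner=50):
    """Build an approximation F ~ sum_k r_k s_k^T one term at a time.
    Each new term is fitted to the current residual by alternating
    least-squares updates (the greedy step)."""
    R = F.copy()
    terms = []
    for k in range(n_terms):
        s = np.random.default_rng(k).normal(size=F.shape[1])
        for _ in range(n_inner):
            r = R @ s / (s @ s)        # optimal r for fixed s
            s = R.T @ r / (r @ r)      # optimal s for fixed r
        terms.append((r, s))
        R = R - np.outer(r, s)         # deflate the residual
    return terms, R

x = np.linspace(0.0, 1.0, 40)
F = np.exp(-np.subtract.outer(x, x) ** 2)   # smooth two-variable function
terms, R = greedy_rank_one(F)
rel_err = np.linalg.norm(R) / np.linalg.norm(F)
print(rel_err)   # a handful of terms already give a small residual
```

For smooth symmetric kernels like this one the terms decay quickly; the article's subject is precisely what breaks when the underlying operator is not symmetric.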
Reducing the Complexity of Genetic Fuzzy Classifiers in Highly-Dimensional Classification Problems
Directory of Open Access Journals (Sweden)
Dimitris G. Stavrakoudis
2012-04-01
Full Text Available This paper introduces the Fast Iterative Rule-based Linguistic Classifier (FaIRLiC), a Genetic Fuzzy Rule-Based Classification System (GFRBCS) that aims to reduce the structural complexity of the resulting rule base, as well as its learning algorithm's computational requirements, especially when dealing with high-dimensional feature spaces. The proposed methodology follows the principles of the iterative rule learning (IRL) approach, whereby a rule extraction algorithm (REA) is invoked iteratively, producing one fuzzy rule at a time. The REA is performed in two successive steps: the first selects the relevant features of the currently extracted rule, whereas the second determines the antecedent part of the fuzzy rule using the previously selected subset of features. The performance of the classifier is finally optimized through a genetic tuning post-processing stage. Comparative results on a hyperspectral remote sensing classification task, as well as on 12 real-world classification datasets, indicate the effectiveness of the proposed methodology in generating high-performing and compact fuzzy rule-based classifiers, even for very high-dimensional feature spaces.
Banks, H. T.; Ito, K.
1991-01-01
A hybrid method for computing the feedback gains in the linear quadratic regulator (LQR) problem is proposed. The method, which combines the use of a Chandrasekhar-type system with an iteration of Newton-Kleinman form with variable-acceleration-parameter Smith schemes, is formulated to compute the feedback gains directly and efficiently, rather than through solutions of an associated Riccati equation. The hybrid method is particularly appropriate for large-dimensional systems such as those arising in approximating infinite-dimensional (distributed parameter) control systems (e.g., those governed by delay-differential and partial differential equations). Computational advantages of the proposed algorithm over the standard eigenvector-based techniques (Potter, Laub-Schur) are discussed, and numerical evidence of the efficacy of these ideas is presented.
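A bare-bones version of the Newton-Kleinman part of such a scheme can be sketched for a small dense system. The model matrices below are invented, and the Chandrasekhar/Smith acceleration machinery of the paper is omitted; each Newton step just solves a Lyapunov equation instead of the Riccati equation itself:

```python
import numpy as np

def lyap(M, Q):
    # Solve M^T P + P M = Q for small dense M via Kronecker products.
    n = M.shape[0]
    I = np.eye(n)
    K = np.kron(M.T, I) + np.kron(I, M.T)
    return np.linalg.solve(K, Q.flatten()).reshape(n, n)

def newton_kleinman(A, B, Q, R, iters=25):
    """LQR feedback gain via Newton-Kleinman iteration.  Needs a
    stabilizing initial gain; K = 0 suffices here because A is stable."""
    K = np.zeros((B.shape[1], A.shape[0]))
    Rinv = np.linalg.inv(R)
    for _ in range(iters):
        Ac = A - B @ K                          # closed-loop matrix
        P = lyap(Ac, -(Q + K.T @ R @ K))        # Lyapunov step
        K = Rinv @ B.T @ P                      # updated gain
    return K, P

A = np.array([[-1.0, 1.0], [0.0, -2.0]])
B = np.array([[0.0], [1.0]])
Q, R = np.eye(2), np.array([[1.0]])
K, P = newton_kleinman(A, B, Q, R)
riccati_res = A.T @ P + P @ A - P @ B @ np.linalg.inv(R) @ B.T @ P + Q
print(np.linalg.norm(riccati_res))   # ~0: P solves the Riccati equation
```

The iterates converge to the Riccati solution, but only the gain K is needed, which is the point of gain-directed methods for large systems.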
Regis, Rommel G.
2014-02-01
This article develops two new algorithms for constrained expensive black-box optimization that use radial basis function surrogates for the objective and constraint functions. These algorithms, called COBRA and Extended ConstrLMSRBF, can be used, unlike previous surrogate-based approaches, for high-dimensional problems where all initial points are infeasible. Both follow a two-phase approach in which the first phase finds a feasible point and the second phase improves it. COBRA and Extended ConstrLMSRBF are compared with alternative methods on 20 test problems and on the MOPTA08 benchmark automotive problem (D.R. Jones, Presented at MOPTA 2008), which has 124 decision variables and 68 black-box inequality constraints. The alternatives include a sequential penalty derivative-free algorithm, a direct search method with kriging surrogates, and two multistart methods. Numerical results show that the COBRA algorithms are competitive with Extended ConstrLMSRBF and generally outperform the alternatives on the MOPTA08 problem and on most of the test problems.
Safaei, S.; Haghnegahdar, A.; Razavi, S.
2016-12-01
Complex environmental models are now the primary tool for informing decision makers about the current or future management of environmental resources under climate and environmental change. These complex models often contain a large number of parameters that need to be determined by a computationally intensive calibration procedure. Sensitivity analysis (SA) is a very useful tool that not only allows for understanding model behavior, but also helps reduce the number of calibration parameters by identifying unimportant ones. The issue is that most global sensitivity techniques are themselves highly computationally demanding if robust and stable sensitivity metrics are to be generated over the entire model response surface. Recently, a novel global sensitivity analysis method, Variogram Analysis of Response Surfaces (VARS), has been introduced that can efficiently provide a comprehensive assessment of global sensitivity using the variogram concept. In this work, we aim to evaluate the effectiveness of this highly efficient GSA method in reducing the computational burden when applied to systems with an extra-large number of input factors (~100). We use a test function and a hydrological modelling case study to demonstrate the capability of the VARS method in reducing problem dimensionality by identifying important versus unimportant input factors.
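The directional variogram that VARS builds its metrics on is straightforward to estimate by sampling. A toy sketch with an invented two-parameter model, not the hydrological case study:

```python
import numpy as np

def model(x):
    # Hypothetical response surface: strongly sensitive to x[...,0],
    # weakly sensitive to x[...,1].
    return 10.0 * x[..., 0] ** 2 + x[..., 1] ** 2

def directional_variogram(f, dim, h=0.1, n=2000, d=2, seed=1):
    # gamma_i(h) = 0.5 * E[(f(x + h e_i) - f(x))^2]
    rng = np.random.default_rng(seed)
    x = rng.uniform(0.0, 1.0 - h, size=(n, d))
    xh = x.copy()
    xh[:, dim] += h          # perturb only the chosen factor
    return 0.5 * float(np.mean((f(xh) - f(x)) ** 2))

g0 = directional_variogram(model, 0)
g1 = directional_variogram(model, 1)
print(g0, g1)   # g0 >> g1: the first factor dominates
```

Ranking factors by such variograms (across a range of h) is the kind of screening that lets unimportant parameters be fixed before calibration.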
Wang, Wei; Yang, Jiong
With the rapid growth of computational biology and e-commerce applications, high-dimensional data have become very common. Thus, mining high-dimensional data is an urgent problem of great practical importance. However, there are some unique challenges in mining data of high dimensions, including (1) the curse of dimensionality and, more crucially, (2) the meaningfulness of the similarity measure in the high-dimensional space. In this chapter, we present several state-of-the-art techniques for analyzing high-dimensional data, e.g., frequent pattern mining, clustering, and classification. We discuss how these methods deal with the challenges of high dimensionality.
Directory of Open Access Journals (Sweden)
Shouheng Tuo
Full Text Available Harmony Search (HS) and Teaching-Learning-Based Optimization (TLBO), as new swarm intelligence optimization algorithms, have received much attention in recent years. Both have shown outstanding performance in solving NP-hard optimization problems, but they also suffer dramatic performance degradation on some complex high-dimensional optimization problems. Through extensive experiments, we find that HS and TLBO are strongly complementary: HS has strong global exploration power but low convergence speed, whereas TLBO converges much faster but is easily trapped in local optima. In this work, we propose a hybrid search algorithm named HSTLBO that merges the two algorithms to synergistically solve complex optimization problems using a self-adaptive selection strategy. In HSTLBO, both HS and TLBO are modified with the aim of balancing global exploration and exploitation, where HS mainly explores unknown regions and TLBO rapidly exploits high-precision solutions in known regions. Our experimental results demonstrate better performance and faster speed than five state-of-the-art HS variants, and better exploration power than five good TLBO variants at similar run time, which illustrates that our method is promising for solving complex high-dimensional optimization problems. Experiments on portfolio optimization problems also demonstrate that HSTLBO is effective in solving complex real-world applications.
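The HS half of the hybrid is simple enough to sketch in full. The parameter values and the sphere test function below are illustrative choices, not those of the paper:

```python
import numpy as np

def harmony_search(f, dim=5, lo=-5.0, hi=5.0, hms=20, hmcr=0.9, par=0.3,
                   bw=0.1, iters=3000, seed=0):
    """Plain Harmony Search: the global-exploration half of an HS/TLBO hybrid."""
    rng = np.random.default_rng(seed)
    hm = rng.uniform(lo, hi, size=(hms, dim))       # harmony memory
    fit = np.array([f(h) for h in hm])
    for _ in range(iters):
        new = np.empty(dim)
        for j in range(dim):
            if rng.random() < hmcr:                  # recall from memory...
                new[j] = hm[rng.integers(hms), j]
                if rng.random() < par:               # ...with pitch adjustment
                    new[j] += bw * rng.uniform(-1, 1)
            else:                                    # ...or explore at random
                new[j] = rng.uniform(lo, hi)
        new = np.clip(new, lo, hi)
        fnew = f(new)
        worst = np.argmax(fit)
        if fnew < fit[worst]:                        # replace the worst harmony
            hm[worst], fit[worst] = new, fnew
    return hm[np.argmin(fit)], float(fit.min())

sphere = lambda x: float((x ** 2).sum())
best, val = harmony_search(sphere)
print(val)
```

The memory-recall/pitch-adjust/random-explore split is where the exploration-exploitation balance that the paper tunes comes from.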
International Nuclear Information System (INIS)
Lucka, Felix
2012-01-01
Sparsity has become a key concept for solving high-dimensional inverse problems using variational regularization techniques. Recently, using similar sparsity constraints in the Bayesian framework for inverse problems, by encoding them in the prior distribution, has attracted attention. Important questions about the relation between regularization theory and Bayesian inference still need to be addressed when using sparsity-promoting inversion. A practical obstacle for these examinations is the lack of fast posterior sampling algorithms for sparse, high-dimensional Bayesian inversion. Accessing the full range of Bayesian inference methods requires being able to draw samples from the posterior probability distribution in a fast and efficient way. This is usually done using Markov chain Monte Carlo (MCMC) sampling algorithms. In this paper, we develop and examine a new implementation of a single-component Gibbs MCMC sampler for sparse priors relying on L1-norms. We demonstrate that the efficiency of our Gibbs sampler increases when the level of sparsity or the dimension of the unknowns is increased. This property is contrary to the properties of the most commonly applied Metropolis-Hastings (MH) sampling schemes: we demonstrate that the efficiency of MH schemes for L1-type priors dramatically decreases when the level of sparsity or the dimension of the unknowns is increased. Practically, Bayesian inversion for L1-type priors using MH samplers is not feasible at all. As this is commonly believed to be an intrinsic feature of MCMC sampling, the performance of our Gibbs sampler also challenges common beliefs about the applicability of sample-based Bayesian inference. (paper)
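For contrast with the paper's Gibbs sampler, a plain random-walk Metropolis-Hastings sampler for an L1-type posterior looks as follows. The problem sizes and hyperparameters are invented for illustration; this is the kind of scheme whose efficiency the paper shows collapsing as sparsity and dimension grow:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical small linear inverse problem y = A x + noise with sparse x.
n, m = 20, 50
A = rng.normal(size=(n, m))
x_true = np.zeros(m)
x_true[[3, 17, 40]] = [2.0, -1.5, 1.0]
y = A @ x_true + 0.1 * rng.normal(size=n)

sigma2, lam = 0.01, 10.0
def log_post(x):
    # Gaussian likelihood + Laplace (L1) prior, up to a constant.
    r = y - A @ x
    return -0.5 * (r @ r) / sigma2 - lam * np.abs(x).sum()

# Random-walk Metropolis-Hastings over all components at once.
x = np.zeros(m)
lp = log_post(x)
accepted, n_steps = 0, 5000
for _ in range(n_steps):
    prop = x + 0.01 * rng.normal(size=m)
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:   # Metropolis accept/reject
        x, lp, accepted = prop, lp_prop, accepted + 1
rate = accepted / n_steps
print(rate)
```

The single-component Gibbs alternative instead samples each coordinate exactly from its 1D conditional, which is what makes it scale better with sparsity.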
High dimensional classifiers in the imbalanced case
DEFF Research Database (Denmark)
Bak, Britta Anker; Jensen, Jens Ledet
We consider the binary classification problem in the imbalanced case, where the numbers of samples from the two groups differ. The classification problem is considered in the high-dimensional case, where the number of variables is much larger than the number of samples and where the imbalance leads to a bias in the classification. A theoretical analysis of the independence classifier reveals the origin of the bias, and based on this we suggest two new classifiers that can handle any imbalance ratio. The analytical results are supplemented by a simulation study, where the suggested classifiers in some...
CSIR Research Space (South Africa)
McLaren, M.
2012-07-01
Full Text Available High dimensional entanglement. M. McLaren (1,2), F.S. Roux (1) & A. Forbes (1,2,3). 1. CSIR National Laser Centre, PO Box 395, Pretoria 0001; 2. School of Physics, University of Stellenbosch, Private Bag X1, 7602, Matieland; 3. School of Physics, University of KwaZulu...
Common Group Problems: A Field Study.
Weinberg, Sanford B.; And Others
1981-01-01
A field study of a naturally functioning group (N=125) was conducted to identify common group problems. Trained observers attended group meetings and described the problems encountered. Difficulties of cohesion, leadership, sub-group formation, and personality conflict were identified. (RC)
Problem specific heuristics for group scheduling problems in cellular manufacturing
Neufeld, Janis Sebastian
2016-01-01
The group scheduling problem commonly arises in cellular manufacturing systems, where parts are grouped into part families. It is characterized by a sequencing task on two levels: on the one hand, a sequence of jobs within each part family has to be identified while, on the other hand, a family sequence has to be determined. In order to solve this NP-hard problem, heuristic solution approaches are usually used. In this thesis different aspects of group scheduling are discussed and problem spec...
Group Design Problems in Engineering Design Graphics.
Kelley, David
2001-01-01
Describes group design techniques used within the engineering design graphics sequence at Western Washington University. Engineering and design philosophies such as concurrent engineering place an emphasis on group collaboration for the solving of design problems. (Author/DDR)
Chernozhukov, Victor; Hansen, Christian; Spindler, Martin
2016-01-01
In this article the package High-dimensional Metrics (hdm) is introduced. It is a collection of statistical methods for estimation and quantification of uncertainty in high-dimensional approximately sparse models. It focuses on providing confidence intervals and significance testing for (possibly many) low-dimensional subcomponents of the high-dimensional parameter vector. Efficient estimators and uniformly valid confidence intervals for regression coefficients on target variables (e...
The compressed word problem for groups
Lohrey, Markus
2014-01-01
The Compressed Word Problem for Groups provides a detailed exposition of known results on the compressed word problem, emphasizing efficient algorithms for the compressed word problem in various groups. The author presents the necessary background along with the most recent results on the compressed word problem to create a cohesive, self-contained book accessible to computer scientists as well as mathematicians. Readers will quickly reach the frontier of current research, which makes the book especially appealing to students looking for a currently active research topic at the intersection of group theory and computer science. The word problem, introduced in 1910 by Max Dehn, is one of the most important decision problems in group theory. For many groups, highly efficient algorithms for the word problem exist. In recent years, a new technique based on data compression has been developed to provide more efficient algorithms for word problems, by representing long words over group generators in a compres...
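The flavour of compression-based techniques can be shown with a toy straight-line program: properties of the encoded word (here, its length) are computed from the grammar without ever expanding the word itself. The grammar below is hypothetical:

```python
from functools import lru_cache

# Hypothetical straight-line program: each nonterminal expands to a pair.
rules = {
    "A": "ab",   # 'a' and 'b' are terminal letters
    "B": "AA",
    "C": "BB",
    "D": "CC",   # D encodes (ab)^8, a word of length 16
}

@lru_cache(maxsize=None)
def length(sym):
    # Work on the grammar, never on the expanded word:
    # cost is O(|grammar|), not O(|word|).
    if sym not in rules:
        return 1
    return sum(length(s) for s in rules[sym])

print(length("D"))   # 16
```

Doubling one more rule doubles the encoded word's length while adding a constant amount of work, which is the exponential compression that makes these algorithms interesting.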
High dimensional neurocomputing growth, appraisal and applications
Tripathi, Bipin Kumar
2015-01-01
The book presents a coherent understanding of computational intelligence from the perspective of what is known as "intelligent computing" with high-dimensional parameters. It critically discusses the central issues of high-dimensional neurocomputing, such as quantitative representation of signals, extending the dimensionality of neurons, supervised and unsupervised learning, and the design of higher-order neurons. The strong point of the book is its clarity and the ability of the underlying theory to unify our understanding of high-dimensional computing where conventional methods fail. Plenty of application-oriented problems are presented for evaluating, monitoring and maintaining the stability of adaptive learning machines. The author has taken care to cover the breadth and depth of the subject, both qualitatively and quantitatively. The book is intended to enlighten the scientific community, ranging from advanced undergraduates to engineers, scientists and seasoned researchers in computational intelligenc...
Decision problems for groups and semigroups
International Nuclear Information System (INIS)
Adian, S I; Durnev, V G
2000-01-01
The paper presents a detailed survey of results concerning the main decision problems of group theory and semigroup theory, including the word problem, the isomorphism problem, recognition problems, and other algorithmic questions related to them. The well-known theorems of Markov-Post, P.S. Novikov, Adian-Rabin, Higman, Magnus, and Lyndon are given with complete proofs. As a rule, the proofs presented in this survey are substantially simpler than those given in the original papers. For the sake of completeness, we first prove the insolubility of the halting problem for Turing machines, on which the insolubility of the word problem for semigroups is based. Specific attention is also paid to the simplest examples of semigroups with insoluble word problem. We give a detailed proof of the significant result of Lyndon that, in the class of groups presented by a system of defining relations for which the maximum mutual overlapping of any two relators is strictly less than one fifth of their lengths, the word problem is soluble, while insoluble word problems can occur when non-strict inequality is allowed. A proof of the corresponding result for finitely presented semigroups is also given, when the corresponding fraction is one half
Music Taste Groups and Problem Behavior.
Mulder, Juul; Bogt, Tom Ter; Raaijmakers, Quinten; Vollebergh, Wilma
2007-04-01
Internalizing and externalizing problems differ by musical tastes. A high school-based sample of 4159 adolescents, representative of Dutch youth aged 12 to 16, reported on their personal and social characteristics, music preferences and social-psychological functioning, measured with the Youth Self-Report (YSR). Cluster analysis on their music preferences revealed six taste groups: Middle-of-the-road (MOR) listeners, Urban fans, Exclusive Rock fans, Rock-Pop fans, Elitists, and Omnivores. A seventh group of musically Low-Involved youth was added. Multivariate analyses revealed that when gender, age, parenting, school, and peer variables were controlled, Omnivores and fans within the Exclusive Rock groups showed relatively high scores on internalizing YSR measures, and social, thought and attention problems. Omnivores, Exclusive Rock, Rock-Pop and Urban fans reported more externalizing problem behavior. Belonging to the MOR group that highly appreciates the most popular, chart-based pop music appears to buffer problem behavior. Music taste group membership uniquely explains variance in both internalizing and externalizing problem behavior.
Behaviors of Problem-Solving Groups
National Research Council Canada - National Science Library
Bennis, Warren G
1958-01-01
The results of two studies are contained in this report in summary form. They represent the first parts of a program of research designed to study the effects of change and history on the behaviors of problem-solving groups...
Group invariance in engineering boundary value problems
Seshadri, R
1985-01-01
Contents (excerpt): References; 9 Transformation of a Boundary Value Problem to an Initial Value Problem; 9.0 Introduction; 9.1 Blasius Equation in Boundary Layer Flow; 9.2 Longitudinal Impact of Nonlinear Viscoplastic Rods; 9.3 Summary; References; 10 From Nonlinear to Linear Differential Equations Using Transformation Groups; 10.1 From Nonlinear to Linear Differential Equations; 10.2 Application to Ordinary Differential Equations - Bernoulli's Equation; 10.3 Application to Partial Differential Equations - A Nonlinear Chemical Exchange Process; 10.4 Limitations of the Inspectional Group Method; 10.5 Summary; References; 11 Miscellaneous Topics; 11.1 Reduction of Differential Equations to Algebraic Equations; 11.2 Reduction of Order of an Ordinary Differential Equation; 11.3 Transformation From Ordinary to Partial Differential Equations - Search for First Inte...
High-dimensional covariance estimation with high-dimensional data
Pourahmadi, Mohsen
2013-01-01
Methods for estimating sparse and large covariance matrices Covariance and correlation matrices play fundamental roles in every aspect of the analysis of multivariate data collected from a variety of fields including business and economics, health care, engineering, and environmental and physical sciences. High-Dimensional Covariance Estimation provides accessible and comprehensive coverage of the classical and modern approaches for estimating covariance matrices as well as their applications to the rapidly developing areas lying at the intersection of statistics and mac
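One of the simplest estimators covered by this literature, hard thresholding of the sample covariance, can be sketched directly; the threshold value and the identity-covariance data are illustrative assumptions:

```python
import numpy as np

def threshold_cov(X, t):
    """Hard-thresholding estimator for a sparse covariance matrix: keep the
    diagonal of the sample covariance, zero out off-diagonal entries
    smaller than t in absolute value."""
    S = np.cov(X, rowvar=False)
    T = np.where(np.abs(S) >= t, S, 0.0)
    np.fill_diagonal(T, np.diag(S))
    return T

rng = np.random.default_rng(0)
# p > n regime: 100 variables, 50 samples, true covariance = identity.
X = rng.normal(size=(50, 100))
T = threshold_cov(X, t=0.4)
offdiag_frac = float((T - np.diag(np.diag(T)) != 0).mean())
print(offdiag_frac)   # almost all spurious off-diagonal entries are removed
```

With p larger than n the raw sample covariance is singular and noisy off the diagonal; thresholding restores sparsity at the cost of a tuning parameter, typically chosen by cross-validation.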
Hierarchical low-rank approximation for high dimensional approximation
Nouy, Anthony
2016-01-01
Tensor methods are among the most prominent tools for the numerical solution of high-dimensional problems where functions of multiple variables have to be approximated. Such high-dimensional approximation problems naturally arise in stochastic analysis and uncertainty quantification. In many practical situations, the approximation of high-dimensional functions is made computationally tractable by using rank-structured approximations. In this talk, we present algorithms for the approximation in hierarchical tensor format using statistical methods. Sparse representations in a given tensor format are obtained with adaptive or convex relaxation methods, with a selection of parameters using cross-validation methods.
Topology of high-dimensional manifolds
Energy Technology Data Exchange (ETDEWEB)
Farrell, F T [State University of New York, Binghamton (United States); Goettshe, L [Abdus Salam ICTP, Trieste (Italy); Lueck, W [Westfaelische Wilhelms-Universitaet Muenster, Muenster (Germany)
2002-08-15
The School on High-Dimensional Manifold Topology took place at the Abdus Salam ICTP, Trieste from 21 May 2001 to 8 June 2001. The focus of the school was on the classification of manifolds and related aspects of K-theory, geometry, and operator theory. The topics covered included: surgery theory, algebraic K- and L-theory, controlled topology, homology manifolds, exotic aspherical manifolds, homeomorphism and diffeomorphism groups, and scalar curvature. The school consisted of 2 weeks of lecture courses and one week of conference. This two-part lecture notes volume contains the notes of most of the lecture courses.
High-dimensional change-point estimation: Combining filtering with convex optimization
Soh, Yong Sheng; Chandrasekaran, Venkat
2017-01-01
We consider change-point estimation in a sequence of high-dimensional signals given noisy observations. Classical approaches to this problem such as the filtered derivative method are useful for sequences of scalar-valued signals, but they have undesirable scaling behavior in the high-dimensional setting. However, many high-dimensional signals encountered in practice frequently possess latent low-dimensional structure. Motivated by this observation, we propose a technique for high-dimensional...
Sociodrama: Group Creative Problem Solving in Action.
Riley, John F.
1990-01-01
Sociodrama is presented as a structured, yet flexible, method of encouraging the use of creative thinking to examine a difficult problem. An example illustrates the steps involved in putting sociodrama into action. Production techniques useful in sociodrama include the soliloquy, double, role reversal, magic shop, unity of opposites, and audience…
New renormalization group approach to multiscale problems
Energy Technology Data Exchange (ETDEWEB)
Einhorn, M B; Jones, D R T
1984-02-27
A new renormalization group is presented which exploits invariance with respect to more than one scale. The method is illustrated by a simple model, and future applications to fields such as critical phenomena and supersymmetry are speculated upon.
hdm: High-Dimensional Metrics
Chernozhukov, Victor; Hansen, Chris; Spindler, Martin
2016-01-01
The package High-dimensional Metrics (hdm) is an evolving collection of statistical methods for estimation and quantification of uncertainty in high-dimensional approximately sparse models. It focuses on providing confidence intervals and significance testing for (possibly many) low-dimensional subcomponents of the high-dimensional parameter vector. Efficient estimators and uniformly valid confidence intervals for regression coefficients on target variables (e.g., treatment or poli...
Irregular grid methods for pricing high-dimensional American options
Berridge, S.J.
2004-01-01
This thesis proposes and studies numerical methods for pricing high-dimensional American options; important examples being basket options, Bermudan swaptions and real options. Four new methods are presented and analysed, both in terms of their application to various test problems, and in terms of
High-dimensional model estimation and model selection
CERN. Geneva
2015-01-01
I will review concepts and algorithms from high-dimensional statistics for linear model estimation and model selection. I will particularly focus on the so-called p>>n setting where the number of variables p is much larger than the number of samples n. I will focus mostly on regularized statistical estimators that produce sparse models. Important examples include the LASSO and its matrix extension, the Graphical LASSO, and more recent non-convex methods such as the TREX. I will show the applicability of these estimators in a diverse range of scientific applications, such as sparse interaction graph recovery and high-dimensional classification and regression problems in genomics.
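The sparse regularized estimation described in this talk can be illustrated with a minimal LASSO solved by iterative soft-thresholding (ISTA). This is a hedged sketch, not the speaker's implementation: the simulated design, penalty level `lam=0.1`, and support threshold are illustrative assumptions chosen only to show the p >> n behavior.

```python
import numpy as np

def lasso_ista(X, y, lam, n_iter=500):
    """Minimize 0.5/n * ||y - X b||_2^2 + lam * ||b||_1 by iterative
    soft-thresholding (ISTA)."""
    n, p = X.shape
    b = np.zeros(p)
    L = np.linalg.norm(X, 2) ** 2 / n        # Lipschitz constant of the gradient
    for _ in range(n_iter):
        z = b + X.T @ (y - X @ b) / (n * L)  # gradient step on the smooth part
        b = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-threshold
    return b

# p >> n: 200 candidate variables, 50 observations, only 3 true signals
rng = np.random.default_rng(0)
n, p = 50, 200
beta = np.zeros(p)
beta[:3] = [4.0, -3.0, 2.0]
X = rng.standard_normal((n, p))
y = X @ beta + 0.1 * rng.standard_normal(n)
b_hat = lasso_ista(X, y, lam=0.1)
support = np.flatnonzero(np.abs(b_hat) > 1e-3)
```

Despite having four times more variables than samples, the l1 penalty drives almost all coefficients exactly to zero, recovering a small model containing the true signals.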
Emergent Leadership in Children's Cooperative Problem Solving Groups
Sun, Jingjng; Anderson, Richard C.; Perry, Michelle; Lin, Tzu-Jung
2017-01-01
Social skills involved in leadership were examined in a problem-solving activity in which 252 Chinese 5th-graders worked in small groups on a spatial-reasoning puzzle. Results showed that students who engaged in peer-managed small-group discussions of stories prior to problem solving produced significantly better solutions and initiated…
Modeling High-Dimensional Multichannel Brain Signals
Hu, Lechuan; Fortin, Norbert J.; Ombao, Hernando
2017-01-01
aspects: first, there are major statistical and computational challenges for modeling and analyzing high-dimensional multichannel brain signals; second, there is no set of universally agreed measures for characterizing connectivity. To model multichannel
Problem-Based Group Activities for Teaching Sensation and Perception
Kreiner, David S.
2009-01-01
This article describes 14 problem-based group activities for a sensation and perception course. The intent was to provide opportunities for students to practice applying their knowledge to real-world problems related to course content. Student ratings of how effectively the activities helped them learn were variable but relatively high. Students…
Clothing Problems of Upper Middle Socio-Economic Group ...
African Journals Online (AJOL)
This paper focuses on the clothing problems of affluent female consumers in the upper middle socioeconomic group, who have money to spend, as well as some access to retail fashion. Their clothing problems were discussed in relation to fashion leadership, fashion involvement, brand typologies, maintaining an interest in ...
Asymptotically Honest Confidence Regions for High Dimensional
DEFF Research Database (Denmark)
Caner, Mehmet; Kock, Anders Bredahl
While variable selection and oracle inequalities for the estimation and prediction error have received considerable attention in the literature on high-dimensional models, very little work has been done in the area of testing and construction of confidence bands in high-dimensional models. However...... develop an oracle inequality for the conservative Lasso only assuming the existence of a certain number of moments. This is done by means of the Marcinkiewicz-Zygmund inequality which in our context provides sharper bounds than Nemirovski's inequality. As opposed to van de Geer et al. (2014) we allow...
Group Problem Solving as a Zone of Proximal Development activity
Brewe, Eric
2006-12-01
Vygotsky described learning as a process, intertwined with development, which is strongly influenced by social interactions with others that are at differing developmental stages [i]. These interactions create a Zone of Proximal Development for each member of the interaction. Vygotsky's notion of social constructivism is not only a theory of learning, but also of development. While teaching introductory physics in an interactive format, I have found manifestations of Vygotsky's theory in my classroom. The source of evidence is a paired problem solution. A standard mechanics problem was solved by students in two classes as a homework assignment. Students handed in the homework and then solved the same problem in small groups. The solutions to both the group and individual problem were assessed by multiple reviewers. In many cases the group score was the same as the highest individual score in the group, but in some cases, the group score was higher than any individual score. For this poster, I will analyze the individual and group scores and focus on three groups' solutions and video that provide evidence of learning through membership in a Zone of Proximal Development. Endnotes: [i] L. Vygotsky, Mind and Society: The Development of Higher Mental Processes. Cambridge, MA: Harvard University Press (1978).
HIGH DIMENSIONAL COVARIANCE MATRIX ESTIMATION IN APPROXIMATE FACTOR MODELS.
Fan, Jianqing; Liao, Yuan; Mincheva, Martina
2011-01-01
The variance covariance matrix plays a central role in the inferential theories of high dimensional factor models in finance and economics. Popular regularization methods of directly exploiting sparsity are not directly applicable to many financial problems. Classical methods of estimating the covariance matrices are based on the strict factor models, assuming independent idiosyncratic components. This assumption, however, is restrictive in practical applications. By assuming sparse error covariance matrix, we allow the presence of the cross-sectional correlation even after taking out common factors, and it enables us to combine the merits of both methods. We estimate the sparse covariance using the adaptive thresholding technique as in Cai and Liu (2011), taking into account the fact that direct observations of the idiosyncratic components are unavailable. The impact of high dimensionality on the covariance matrix estimation based on the factor structure is then studied.
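The factor-plus-thresholding idea in this abstract can be sketched in a few lines: estimate the common component from leading principal components, then threshold the residual ("idiosyncratic") covariance entrywise. This is a simplified illustration, not the authors' estimator: the hard threshold `tau` below stands in for the adaptive, entry-dependent threshold of Cai and Liu (2011), and the simulated two-factor model is an assumption for demonstration.

```python
import numpy as np

def poet_like_cov(X, n_factors, tau):
    """Factor-based covariance estimator sketch: a low-rank part from the
    leading principal components plus a hard-thresholded sparse residual."""
    n, p = X.shape
    Xc = X - X.mean(axis=0)
    S = Xc.T @ Xc / n                        # sample covariance (p x p)
    vals, vecs = np.linalg.eigh(S)           # eigenvalues in ascending order
    idx = np.argsort(vals)[::-1][:n_factors]
    low_rank = (vecs[:, idx] * vals[idx]) @ vecs[:, idx].T
    R = S - low_rank                         # idiosyncratic (residual) part
    Rt = np.where(np.abs(R) >= tau, R, 0.0)  # kill small off-diagonal noise
    np.fill_diagonal(Rt, np.diag(R))         # always keep the diagonal
    return low_rank + Rt

# simulate an approximate two-factor model
rng = np.random.default_rng(1)
n, p, k = 400, 30, 2
F = rng.standard_normal((n, k))
B = rng.standard_normal((p, k))
X = F @ B.T + rng.standard_normal((n, p))
Sigma_hat = poet_like_cov(X, n_factors=k, tau=0.2)
Sigma_true = B @ B.T + np.eye(p)
rel_err = np.linalg.norm(Sigma_hat - Sigma_true) / np.linalg.norm(Sigma_true)
```

Allowing the residual part to be sparse rather than exactly diagonal, as the abstract argues, is what permits cross-sectional correlation after the common factors are removed.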
Application of the group-theoretical method to physical problems
Abd-el-malek, Mina B.
1998-01-01
The concept of the theory of continuous groups of transformations has attracted the attention of applied mathematicians and engineers to solve many physical problems in the engineering sciences. Three applications are presented in this paper. The first one is the problem of time-dependent vertical temperature distribution in a stagnant lake. Two cases have been considered for the forms of the water parameters, namely water density and thermal conductivity. The second application is the unstea...
Efficient and accurate nearest neighbor and closest pair search in high-dimensional space
Tao, Yufei; Yi, Ke; Sheng, Cheng; Kalnis, Panos
2010-01-01
Nearest Neighbor (NN) search in high-dimensional space is an important problem in many applications. From the database perspective, a good solution needs to have two properties: (i) it can be easily incorporated in a relational database, and (ii
Relativized problems with abelian phase group in topological dynamics.
McMahon, D
1976-04-01
Let (X, T) be the equicontinuous minimal transformation group with X = Π_∞ Z_2, the Cantor group, and S = ⊕_∞ Z_2 endowed with the discrete topology acting on X by right multiplication. For any countable group T we construct a function F: X × S → T such that if (Y, T) is a minimal transformation group, then (X × Y, S) is a minimal transformation group with the action defined by (x, y)s = [xs, yF(x, s)]. If (W, T) is a minimal transformation group and φ: (Y, T) → (W, T) is a homomorphism, then identity × φ: (X × Y, S) → (X × W, S) is a homomorphism and has many of the same properties that φ has. For this reason, one may assume that the phase group is abelian (or S) without loss of generality for many relativized problems in topological dynamics.
Collective Action Problem in Heterogeneous Groups with Punishment and Foresight
Perry, Logan; Shrestha, Mahendra Duwal; Vose, Michael D.; Gavrilets, Sergey
2018-03-01
The collective action problem can easily undermine cooperation in groups. Recent work has shown that within-group heterogeneity can under some conditions promote voluntary provisioning of collective goods. Here we generalize this work for the case when individuals can not only contribute to the production of collective goods, but also punish free-riders. To do this, we extend the standard theory by allowing individuals to have limited foresight so they can anticipate actions of their group-mates. For humans, this is a realistic assumption because we possess a "theory of mind". We use agent-based simulations to study collective actions that aim to overcome challenges from nature or win competition with neighboring groups. We contrast the dynamics of collective action in egalitarian and hierarchical groups. We show that foresight allows groups to overcome both the first- and second-order free-rider problems. While foresight increases cooperation, it does not necessarily result in higher payoffs. We show that while between-group conflicts promotes within-group cooperation, the effects of cultural group selection on cooperation are relatively small. Our models predict the emergence of a division of labor in which more powerful individuals specialize in punishment while less powerful individuals mostly contribute to the production of collective goods.
Differences in problems of motivation in different special groups
Kunnen, E.S.; Steenbeek, H.W.
1999-01-01
In general, children with a range of special needs have below-average motivation and perceived control. We have investigated whether differences exist between the types of problem in different special groups. Theory distinguishes between two types: low motivation and perceived control can be based
Differences in problems of motivation in different special groups
Kunnen, E.S.; Steenbeek, H.W.
In general, children with a range of special needs have below-average motivation and perceived control. We have investigated whether differences exist between the types of problem in different special groups. Theory distinguishes between two types: low motivation and perceived control can be based
Renormalization-group study of the four-body problem
International Nuclear Information System (INIS)
Schmidt, Richard; Moroz, Sergej
2010-01-01
We perform a renormalization-group analysis of the nonrelativistic four-boson problem by means of a simple model with pointlike three- and four-body interactions. We investigate in particular the region where the scattering length is infinite and all energies are close to the atom threshold. We find that the four-body problem behaves truly universally, independent of any four-body parameter. Our findings confirm the recent conjectures of others that the four-body problem is universal, now also from a renormalization-group perspective. We calculate the corresponding relations between the four- and three-body bound states, as well as the full bound-state spectrum and comment on the influence of effective range corrections.
High-Dimensional Adaptive Particle Swarm Optimization on Heterogeneous Systems
International Nuclear Information System (INIS)
Wachowiak, M P; Sarlo, B B; Foster, A E Lambe
2014-01-01
Much work has recently been reported in parallel GPU-based particle swarm optimization (PSO). Motivated by the encouraging results of these investigations, while also recognizing the limitations of GPU-based methods for big problems using a large amount of data, this paper explores the efficacy of employing other types of parallel hardware for PSO. Most commodity systems feature a variety of architectures whose high-performance capabilities can be exploited. In this paper, high-dimensional problems and those that employ a large amount of external data are explored within the context of heterogeneous systems. Large problems are decomposed into constituent components, and analyses are undertaken of which components would benefit from multi-core or GPU parallelism. The current study therefore provides another demonstration that "supercomputing on a budget" is possible when subtasks of large problems are run on hardware most suited to these tasks. Experimental results show that large speedups can be achieved on high-dimensional, data-intensive problems. Cost functions must first be analysed for parallelization opportunities, and assigned hardware based on the particular task.
High Dimensional Classification Using Features Annealed Independence Rules.
Fan, Jianqing; Fan, Yingying
2008-01-01
Classification using high-dimensional features arises frequently in many contemporary statistical studies such as tumor classification using microarray or other high-throughput data. The impact of dimensionality on classification is still poorly understood. In a seminal paper, Bickel and Levina (2004) show that the Fisher discriminant performs poorly due to diverging spectra, and they propose to use the independence rule to overcome the problem. We first demonstrate that even for the independence classification rule, classification using all the features can be as bad as random guessing due to noise accumulation in estimating population centroids in high-dimensional feature space. In fact, we demonstrate further that almost all linear discriminants can perform as badly as random guessing. Thus, it is of paramount importance to select a subset of important features for high-dimensional classification, resulting in Features Annealed Independence Rules (FAIR). The conditions under which all the important features can be selected by the two-sample t-statistic are established. The choice of the optimal number of features, or equivalently, the threshold value of the test statistics, is proposed based on an upper bound of the classification error. Simulation studies and real data analysis support our theoretical results and demonstrate convincingly the advantage of our new classification procedure.
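The FAIR procedure described above can be sketched directly: rank features by two-sample t-statistics, keep the top m, and apply a diagonal (independence) rule to the selected features only. This is a hedged illustration; the simulated data, the fixed choice m=10, and the pooled-variance scaling are assumptions, not the paper's data-driven threshold.

```python
import numpy as np

def fair_classifier(X0, X1, m):
    """FAIR sketch: rank features by two-sample t-statistics, keep the top m,
    and classify with a diagonal (independence) rule on those features."""
    n0, n1 = len(X0), len(X1)
    mu0, mu1 = X0.mean(axis=0), X1.mean(axis=0)
    t = (mu1 - mu0) / np.sqrt(X0.var(axis=0, ddof=1) / n0 + X1.var(axis=0, ddof=1) / n1)
    keep = np.argsort(-np.abs(t))[:m]        # top-m features by |t|
    pooled = ((n0 - 1) * X0.var(axis=0, ddof=1)
              + (n1 - 1) * X1.var(axis=0, ddof=1)) / (n0 + n1 - 2)

    def predict(Xnew):
        # variance-scaled distance to each class centroid, selected features only
        d0 = ((Xnew[:, keep] - mu0[keep]) ** 2 / pooled[keep]).sum(axis=1)
        d1 = ((Xnew[:, keep] - mu1[keep]) ** 2 / pooled[keep]).sum(axis=1)
        return (d1 < d0).astype(int)         # label of the closer centroid
    return predict, keep

# 1000 features, only the first 10 informative (mean shift of one s.d.)
rng = np.random.default_rng(2)
p, n_tr = 1000, 40
shift = np.zeros(p)
shift[:10] = 1.0
X0 = rng.standard_normal((n_tr, p))
X1 = rng.standard_normal((n_tr, p)) + shift
predict, keep = fair_classifier(X0, X1, m=10)
acc = (np.mean(predict(rng.standard_normal((200, p))) == 0)
       + np.mean(predict(rng.standard_normal((200, p)) + shift) == 1)) / 2
```

Using all 1000 features would swamp the 10 informative ones with noise accumulated over 990 null centroids; restricting to the t-selected subset restores accuracy well above chance.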
Group Work Tests for Context-Rich Problems
Meyer, Chris
2016-05-01
The group work test is an assessment strategy that promotes higher-order thinking skills for solving context-rich problems. With this format, teachers are able to pose challenging, nuanced questions on a test, while providing the support weaker students need to get started and show their understanding. The test begins with a group discussion phase, when students are given a "number-free" version of the problem. This phase allows students to digest the story-like problem, explore solution ideas, and alleviate some test anxiety. After 10-15 minutes of discussion, students inform the instructor of their readiness for the individual part of the test. What follows next is a pedagogical phase change from lively group discussion to quiet individual work. The group work test is a natural continuation of the group work in our daily physics classes and helps reinforce the importance of collaboration. This method has met with success at York Mills Collegiate Institute, in Toronto, Ontario, where it has been used consistently for unit tests and the final exam of the grade 12 university preparation physics course.
Computing group cardinality constraint solutions for logistic regression problems.
Zhang, Yong; Kwon, Dongjin; Pohl, Kilian M
2017-01-01
We derive an algorithm to directly solve logistic regression with a group cardinality (sparsity) constraint and use it to classify intra-subject MRI sequences (e.g., cine MRIs) of healthy and diseased subjects. Group cardinality constraint models are often applied to medical images in order to avoid overfitting of the classifier to the training data. Solutions within these models are generally determined by relaxing the cardinality constraint to a weighted feature selection scheme. However, these solutions relate to the original sparse problem only under specific assumptions, which generally do not hold for medical image applications. In addition, inferring clinical meaning from features weighted by a classifier is an ongoing topic of discussion. Avoiding weighting features, we propose to directly solve the group cardinality constrained logistic regression problem by generalizing the Penalty Decomposition method. To do so, we assume that an intra-subject series of images represents repeated samples of the same disease patterns. We model this assumption by combining series of measurements created by a feature across time into a single group. Our algorithm then derives a solution within that model by decoupling the minimization of the logistic regression function from enforcing the group sparsity constraint. The minimum of the smooth and convex logistic regression problem is determined via gradient descent, while we derive a closed-form solution for finding a sparse approximation of that minimum. We apply our method to cine MRI of 38 healthy controls and 44 adult patients who received reconstructive surgery for Tetralogy of Fallot (TOF) during infancy. Our method correctly identifies regions impacted by TOF and generally obtains statistically significantly higher classification accuracy than alternative solutions to this model, i.e., ones relaxing group cardinality constraints.
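The core idea of enforcing a group cardinality constraint without relaxing it to weights can be sketched with a projected-gradient (hard-thresholding) loop: take a gradient step on the logistic loss, then keep only the k groups with the largest norm. This is a deliberate simplification of the paper's Penalty Decomposition method, and the simulated data and step size below are illustrative assumptions.

```python
import numpy as np

def group_iht_logistic(X, y, groups, k, step=0.1, n_iter=300):
    """Projected-gradient sketch for logistic regression under a group
    cardinality constraint: after each gradient step, keep only the k
    coefficient groups with the largest Euclidean norm and zero the rest.
    (A simplification of the Penalty Decomposition approach in the paper.)"""
    n, p = X.shape
    w = np.zeros(p)
    gids = np.unique(groups)
    for _ in range(n_iter):
        # gradient of the average logistic loss
        grad = X.T @ (1.0 / (1.0 + np.exp(-(X @ w))) - y) / n
        w = w - step * grad
        norms = np.array([np.linalg.norm(w[groups == g]) for g in gids])
        for g in gids[np.argsort(-norms)[k:]]:   # all but the top-k groups
            w[groups == g] = 0.0
    return w

# 5 groups of 4 features; only group 0 carries signal
rng = np.random.default_rng(3)
n, p = 300, 20
groups = np.repeat(np.arange(5), 4)
w_true = np.zeros(p)
w_true[groups == 0] = 1.0
X = rng.standard_normal((n, p))
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-(X @ w_true)))).astype(float)
w_hat = group_iht_logistic(X, y, groups, k=1)
```

Unlike a weighted relaxation, the projection step enforces the cardinality constraint exactly at every iteration: at most k groups are ever nonzero.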
Kleinian groups and uniformization in examples and problems
Krushkal′, S L
1986-01-01
Aimed at researchers, graduate students and undergraduates alike, this book presents a unified exposition of all the main areas and methods of the theory of Kleinian groups and the theory of uniformization of manifolds. The past 20 years have seen a rejuvenation of the field, due to the development of powerful new methods in topology, the theory of functions of several complex variables, and the theory of quasiconformal mappings. Thus this new book should provide a valuable resource, listing the basic facts regarding Kleinian groups and serving as a general guide to the primary literature, particularly the Russian literature in the field. In addition, the book includes a large number of examples, problems, and unsolved problems, many of them presented for the first time.
Data analysis in high-dimensional sparse spaces
DEFF Research Database (Denmark)
Clemmensen, Line Katrine Harder
classification techniques for high-dimensional problems are presented: Sparse discriminant analysis, sparse mixture discriminant analysis and orthogonality constrained support vector machines. The first two introduces sparseness to the well known linear and mixture discriminant analysis and thereby provide low...... are applied to classifications of fish species, ear canal impressions used in the hearing aid industry, microbiological fungi species, and various cancerous tissues and healthy tissues. In addition, novel applications of sparse regressions (also called the elastic net) to the medical, concrete, and food...
Introduction to high-dimensional statistics
Giraud, Christophe
2015-01-01
Ever-greater computing technologies have given rise to an exponentially growing volume of data. Today massive data sets (with potentially thousands of variables) play an important role in almost every branch of modern human activity, including networks, finance, and genetics. However, analyzing such data has presented a challenge for statisticians and data analysts and has required the development of new statistical methods capable of separating the signal from the noise. Introduction to High-Dimensional Statistics is a concise guide to state-of-the-art models, techniques, and approaches for ha...
Estimating High-Dimensional Time Series Models
DEFF Research Database (Denmark)
Medeiros, Marcelo C.; Mendes, Eduardo F.
We study the asymptotic properties of the Adaptive LASSO (adaLASSO) in sparse, high-dimensional, linear time-series models. We assume both the number of covariates in the model and candidate variables can increase with the number of observations and the number of candidate variables is, possibly......, larger than the number of observations. We show the adaLASSO consistently chooses the relevant variables as the number of observations increases (model selection consistency), and has the oracle property, even when the errors are non-Gaussian and conditionally heteroskedastic. A simulation study shows...
The management of social problems talk in a support group
Directory of Open Access Journals (Sweden)
Andrezza Gomes Peretti
2013-01-01
Full Text Available The comprehension of the health-disease process from a multifactorial perspective has allowed important transformations in the healthcare practices. In this article, we discuss the use of the support group as a resource for mental health care, analyzing how conversations about social issues are managed in this context. Based on contributions from the social constructionist movement, we analyzed the transcripts of the conversations developed in meetings of a support group offered to patients of a mental health outpatient clinic. The analysis of the process of meaning making indicates that the discourse of the social influence on mental health is not legitimized, due to a predominant individualistic discourse, which psychologizes care and is centered on the emotional analysis of the problems of the quotidian. We argue that this mode of management brings limits to the construction of the group as a device for promoting autonomy and encouraging the social transformation processes.
High-dimensional cluster analysis with the Masked EM Algorithm
Kadir, Shabnam N.; Goodman, Dan F. M.; Harris, Kenneth D.
2014-01-01
Cluster analysis faces two problems in high dimensions: first, the “curse of dimensionality” that can lead to overfitting and poor generalization performance; and second, the sheer time taken for conventional algorithms to process large amounts of high-dimensional data. We describe a solution to these problems, designed for the application of “spike sorting” for next-generation high channel-count neural probes. In this problem, only a small subset of features provide information about the cluster membership of any one data vector, but this informative feature subset is not the same for all data points, rendering classical feature selection ineffective. We introduce a “Masked EM” algorithm that allows accurate and time-efficient clustering of up to millions of points in thousands of dimensions. We demonstrate its applicability to synthetic data, and to real-world high-channel-count spike sorting data. PMID:25149694
Modeling high dimensional multichannel brain signals
Hu, Lechuan
2017-03-27
In this paper, our goal is to model functional and effective (directional) connectivity in a network of multichannel brain physiological signals (e.g., electroencephalograms, local field potentials). The primary challenges here are twofold: first, there are major statistical and computational difficulties for modeling and analyzing high dimensional multichannel brain signals; second, there is no set of universally-agreed measures for characterizing connectivity. To model multichannel brain signals, our approach is to fit a vector autoregressive (VAR) model with sufficiently high order so that complex lead-lag temporal dynamics between the channels can be accurately characterized. However, such a model contains a large number of parameters. Thus, we will estimate the high dimensional VAR parameter space by our proposed hybrid LASSLE method (LASSO+LSE), which imposes regularization in the first step (to control for sparsity) and constrained least squares estimation in the second step (to improve the bias and mean-squared error of the estimator). Then to characterize connectivity between channels in a brain network, we will use various measures but put an emphasis on partial directed coherence (PDC) in order to capture directional connectivity between channels. PDC is a directed frequency-specific measure that explains the extent to which the present oscillatory activity in a sender channel influences the future oscillatory activity in a specific receiver channel relative to all possible receivers in the network. Using the proposed modeling approach, we have achieved some insights on learning in a rat engaged in a non-spatial memory task.
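The two-step LASSO-then-least-squares idea can be sketched for a toy VAR(1): a per-equation LASSO selects the support of each row of the coefficient matrix, and ordinary least squares refits the selected coefficients to reduce shrinkage bias. This is a hedged sketch in the spirit of the hybrid LASSLE estimator, not the authors' code; the VAR order, penalty level, and simulated diagonal dynamics are illustrative assumptions.

```python
import numpy as np

def ista_lasso(A, b, lam, n_iter=400):
    """Soft-thresholded gradient descent for 0.5/n ||b - A x||^2 + lam ||x||_1."""
    n = A.shape[0]
    x = np.zeros(A.shape[1])
    L = np.linalg.norm(A, 2) ** 2 / n
    for _ in range(n_iter):
        z = x + A.T @ (b - A @ x) / (n * L)
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)
    return x

def lassle_var1(Y, lam=0.1):
    """Two-step VAR(1) estimate: LASSO per equation selects the support,
    then least squares refits the selected coefficients (LASSO+LSE spirit)."""
    Z, Ynext = Y[:-1], Y[1:]                 # lagged and current observations
    p = Y.shape[1]
    A_hat = np.zeros((p, p))
    for j in range(p):
        b = ista_lasso(Z, Ynext[:, j], lam)
        S = np.flatnonzero(np.abs(b) > 1e-3)
        if S.size:                           # refit selected coefficients by OLS
            coef, *_ = np.linalg.lstsq(Z[:, S], Ynext[:, j], rcond=None)
            A_hat[j, S] = coef
    return A_hat

# simulate a stable, sparse VAR(1): Y_t = 0.5 * Y_{t-1} + noise
rng = np.random.default_rng(4)
p, T = 6, 400
A_true = 0.5 * np.eye(p)
Y = np.zeros((T + 50, p))
for t in range(1, T + 50):
    Y[t] = Y[t - 1] @ A_true.T + rng.standard_normal(p)
Y = Y[50:]                                   # drop burn-in
A_hat = lassle_var1(Y)
```

The first step controls sparsity; the second removes the LASSO's downward bias on the retained coefficients, which matters when the fitted VAR is subsequently used to compute connectivity measures such as PDC.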
Group theoretic reduction of Laplacian dynamical problems on fractal lattices
International Nuclear Information System (INIS)
Schwalm, W.A.; Schwalm, M.K.; Giona, M.
1997-01-01
Discrete forms of the Schroedinger equation, the diffusion equation, the linearized Landau-Ginzburg equation, and discrete models for vibrations and spin dynamics belong to a class of Laplacian-based finite difference models. Real-space renormalization of such models on finitely ramified regular fractals is known to give exact recursion relations. It is shown that these recursions commute with Lie groups representing continuous symmetries of the discrete models. Each such symmetry reduces the order of the renormalization recursions by one, resulting in a system of recursions with one fewer variable. Group trajectories are obtained from inverse images of fixed and invariant sets of the recursions. A subset of the Laplacian finite difference models can be mapped by change of boundary conditions and time dependence to a diffusion problem with closed boundaries. In such cases conservation of mass simplifies the group flow and obtaining the groups becomes easier. To illustrate this, the renormalization recursions for Green functions on four standard examples are decoupled. The examples are (1) the linear chain, (2) an anisotropic version of Dhar's 3-simplex, similar to a model dealt with by Hood and Southern, (3) the fourfold coordinated Sierpiński lattice of Rammal and of Domany et al., and (4) a form of the Vicsek lattice. Prospects for applying the group theoretic method to more general dynamical systems are discussed.
Resonating-group method for nuclear many-body problems
International Nuclear Information System (INIS)
Tang, Y.C.; LeMere, M.; Thompson, D.R.
1977-01-01
The resonating-group method is a microscopic method which uses fully antisymmetric wave functions, treats correctly the motion of the total center of mass, and takes cluster correlation into consideration. In this review, the formulation of this method is discussed for various nuclear many-body problems, and a complex-generator-coordinate technique which has been employed to evaluate matrix elements required in resonating-group calculations is described. Several illustrative examples of bound-state, scattering, and reaction calculations, which serve to demonstrate the usefulness of this method, are presented. Finally, by utilization of the results of these calculations, the role played by the Pauli principle in nuclear scattering and reaction processes is discussed. 21 figures, 2 tables, 185 references
Clustering high dimensional data using RIA
Energy Technology Data Exchange (ETDEWEB)
Aziz, Nazrina [School of Quantitative Sciences, College of Arts and Sciences, Universiti Utara Malaysia, 06010 Sintok, Kedah (Malaysia)
2015-05-15
Clustering may simply represent a convenient method for organizing a large data set so that it can easily be understood and information can efficiently be retrieved. However, identifying clusters in high-dimensional data sets is a difficult task because of the curse of dimensionality. Another challenge in clustering is that some traditional distance functions cannot capture the dissimilarity patterns among objects. In this article, we use an alternative dissimilarity measure called the Robust Influence Angle (RIA) in the partitioning method. RIA is developed using the eigenstructure of the covariance matrix and robust principal component scores. We observe that clusters can be obtained easily, thus avoiding the curse of dimensionality. The method also manages to cluster large data sets with mixed numeric and categorical values.
Variance inflation in high dimensional Support Vector Machines
DEFF Research Database (Denmark)
Abrahamsen, Trine Julie; Hansen, Lars Kai
2013-01-01
Many important machine learning models, supervised and unsupervised, are based on simple Euclidean distance or orthogonal projection in a high dimensional feature space. When estimating such models from small training sets we face the problem that the span of the training data set input vectors … follow a different probability law with less variance. While the problem and basic means to reconstruct and deflate are well understood in unsupervised learning, the case of supervised learning is less well understood. We here investigate the effect of variance inflation in supervised learning including the case of Support Vector Machines (SVMs) and we propose a non-parametric scheme to restore proper generalizability. We illustrate the algorithm and its ability to restore performance on a wide range of benchmark data sets.
Asymptotics of empirical eigenstructure for high dimensional spiked covariance.
Wang, Weichen; Fan, Jianqing
2017-06-01
We derive the asymptotic distributions of the spiked eigenvalues and eigenvectors under a generalized and unified asymptotic regime, which takes into account the magnitude of spiked eigenvalues, sample size, and dimensionality. This regime allows high dimensionality and diverging eigenvalues and provides new insights into the roles that the leading eigenvalues, sample size, and dimensionality play in principal component analysis. Our results are a natural extension of those in Paul (2007) to a more general setting and solve the rates of convergence problems in Shen et al. (2013). They also reveal the biases of estimating leading eigenvalues and eigenvectors by using principal component analysis, and lead to a new covariance estimator for the approximate factor model, called shrinkage principal orthogonal complement thresholding (S-POET), that corrects the biases. Our results are successfully applied to outstanding problems in estimation of risks of large portfolios and false discovery proportions for dependent test statistics and are illustrated by simulation studies.
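For orientation, the classical spiked covariance model of Johnstone and Paul, which the generalized regime above extends (there the spiked eigenvalues may diverge with the dimensionality), takes the form

```latex
\Sigma \;=\; \sum_{k=1}^{r} \lambda_k \, v_k v_k^{\top} \;+\; \sigma^{2} I_p,
\qquad \lambda_1 \ge \lambda_2 \ge \cdots \ge \lambda_r > 0,
```

where the $r$ spiked eigenvalues $\lambda_k$ with orthonormal eigenvectors $v_k$ sit above a noise level $\sigma^2$ in dimension $p$; principal component analysis estimates the $\lambda_k$ and $v_k$ from the sample covariance.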
Scalable Nearest Neighbor Algorithms for High Dimensional Data.
Muja, Marius; Lowe, David G
2014-11-01
For many computer vision and machine learning problems, large training sets are key for good performance. However, the most computationally expensive part of many computer vision and machine learning algorithms consists of finding nearest neighbor matches to high dimensional vectors that represent the training data. We propose new algorithms for approximate nearest neighbor matching and evaluate and compare them with previous algorithms. For matching high dimensional features, we find two algorithms to be the most efficient: the randomized k-d forest and a new algorithm proposed in this paper, the priority search k-means tree. We also propose a new algorithm for matching binary features by searching multiple hierarchical clustering trees and show it outperforms methods typically used in the literature. We show that the optimal nearest neighbor algorithm and its parameters depend on the data set characteristics and describe an automated configuration procedure for finding the best algorithm to search a particular data set. In order to scale to very large data sets that would otherwise not fit in the memory of a single machine, we propose a distributed nearest neighbor matching framework that can be used with any of the algorithms described in the paper. All this research has been released as an open source library called fast library for approximate nearest neighbors (FLANN), which has been incorporated into OpenCV and is now one of the most popular libraries for nearest neighbor matching.
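FLANN's randomized k-d forests and priority search k-means trees accelerate the matching step described above. As a minimal baseline sketch (not FLANN's API), an exact linear scan makes the task concrete; the function name and toy points are illustrative.

```python
import math

def nearest_neighbor(query, points):
    """Exact linear-scan nearest neighbor: the O(n*d) baseline that
    approximate structures such as randomized k-d forests or a
    priority search k-means tree are designed to beat."""
    best_idx, best_dist = -1, math.inf
    for i, p in enumerate(points):
        d = sum((a - b) ** 2 for a, b in zip(query, p))
        if d < best_dist:
            best_idx, best_dist = i, d
    return best_idx, math.sqrt(best_dist)

points = [(0.0, 0.0), (1.0, 1.0), (2.0, 0.5)]
idx, dist = nearest_neighbor((0.9, 1.1), points)
print(idx)  # index of the closest point
```

The approximate methods in the paper trade a small loss in accuracy for orders-of-magnitude speedups over this scan on high-dimensional data.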
Manifold learning to interpret JET high-dimensional operational space
International Nuclear Information System (INIS)
Cannas, B; Fanni, A; Pau, A; Sias, G; Murari, A
2013-01-01
In this paper, the problem of visualization and exploration of JET high-dimensional operational space is considered. The data come from plasma discharges selected from JET campaigns from C15 (year 2005) up to C27 (year 2009). The aim is to learn the possible manifold structure embedded in the data and to create some representations of the plasma parameters on low-dimensional maps, which are understandable and which preserve the essential properties owned by the original data. A crucial issue for the design of such mappings is the quality of the dataset. This paper reports the details of the criteria used to properly select suitable signals downloaded from JET databases in order to obtain a dataset of reliable observations. Moreover, a statistical analysis is performed to recognize the presence of outliers. Finally data reduction, based on clustering methods, is performed to select a limited and representative number of samples for the operational space mapping. The high-dimensional operational space of JET is mapped using a widely used manifold learning method, the self-organizing map. The results are compared with other data visualization methods. The obtained maps can be used to identify characteristic regions of the plasma scenario, allowing one to discriminate between regions with high risk of disruption and those with low risk of disruption. (paper)
Solving the cell formation problem in group technology
Directory of Open Access Journals (Sweden)
Prafulla Joglekar
2001-01-01
Full Text Available Over the last three decades, numerous algorithms have been proposed to solve the work-cell formation problem. For practicing manufacturing managers it would be useful to know which algorithm would be most effective and efficient for their specific situation. While several studies have attempted to fulfill this need, most have not resulted in any definitive recommendations, and a better methodology for evaluating cell formation algorithms is urgently needed. Prima facie, the methodology underlying Miltenburg and Zhang's (M&Z) (1991) evaluation of nine well-known cell formation algorithms seems very promising. The primary performance measure proposed by M&Z effectively captures the objectives of a good solution to a cell formation problem and is worthy of use in future studies. Unfortunately, a critical review also reveals certain important flaws in M&Z's methodology. First, M&Z may not have duplicated each algorithm precisely as the developer(s) of that algorithm intended. Second, M&Z misrepresent Chandrasekharan and Rajagopalan's (C&R's) (1986) grouping efficiency measure. Third, M&Z's secondary performance measures lead them to unnecessarily ambivalent results. Fourth, several of M&Z's empirical conclusions can be theoretically deduced. It is hoped that future evaluations of cell formation algorithms will benefit from both the strengths and weaknesses of M&Z's work.
Modeling High-Dimensional Multichannel Brain Signals
Hu, Lechuan
2017-12-12
Our goal is to model and measure functional and effective (directional) connectivity in multichannel brain physiological signals (e.g., electroencephalograms, local field potentials). The difficulties from analyzing these data mainly come from two aspects: first, there are major statistical and computational challenges for modeling and analyzing high-dimensional multichannel brain signals; second, there is no set of universally agreed measures for characterizing connectivity. To model multichannel brain signals, our approach is to fit a vector autoregressive (VAR) model with potentially high lag order so that complex lead-lag temporal dynamics between the channels can be captured. Estimates of the VAR model will be obtained by our proposed hybrid LASSLE (LASSO + LSE) method which combines regularization (to control for sparsity) and least squares estimation (to improve bias and mean-squared error). Then we employ some measures of connectivity but put an emphasis on partial directed coherence (PDC) which can capture the directional connectivity between channels. PDC is a frequency-specific measure that explains the extent to which the present oscillatory activity in a sender channel influences the future oscillatory activity in a specific receiver channel relative to all possible receivers in the network. The proposed modeling approach provided key insights into potential functional relationships among simultaneously recorded sites during performance of a complex memory task. Specifically, this novel method was successful in quantifying patterns of effective connectivity across electrode locations, and in capturing how these patterns varied across trial epochs and trial types.
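For reference, partial directed coherence is commonly defined (following Baccalá and Sameshima) from the fitted VAR coefficient matrices $A_1,\dots,A_p$ as

```latex
\pi_{ij}(\omega) \;=\;
\frac{\bigl|\bar{A}_{ij}(\omega)\bigr|}
     {\sqrt{\sum_{k=1}^{K} \bigl|\bar{A}_{kj}(\omega)\bigr|^{2}}},
\qquad
\bar{A}(\omega) \;=\; I - \sum_{r=1}^{p} A_r \, e^{-\mathrm{i}\omega r},
```

where $K$ is the number of channels; $\pi_{ij}(\omega)$ quantifies the influence of sender channel $j$ on receiver channel $i$ at frequency $\omega$, normalized over all possible receivers $k$, matching the description in the abstract.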
Class prediction for high-dimensional class-imbalanced data
Directory of Open Access Journals (Sweden)
Lusa Lara
2010-10-01
Full Text Available Abstract Background The goal of class prediction studies is to develop rules to accurately predict the class membership of new samples. The rules are derived using the values of the variables available for each subject: the main characteristic of high-dimensional data is that the number of variables greatly exceeds the number of samples. Frequently the classifiers are developed using class-imbalanced data, i.e., data sets where the number of samples in each class is not equal. Standard classification methods used on class-imbalanced data often produce classifiers that do not accurately predict the minority class; the prediction is biased towards the majority class. In this paper we investigate if the high-dimensionality poses additional challenges when dealing with class-imbalanced prediction. We evaluate the performance of six types of classifiers on class-imbalanced data, using simulated data and a publicly available data set from a breast cancer gene-expression microarray study. We also investigate the effectiveness of some strategies that are available to overcome the effect of class imbalance. Results Our results show that the evaluated classifiers are highly sensitive to class imbalance and that variable selection introduces an additional bias towards classification into the majority class. Most new samples are assigned to the majority class from the training set, unless the difference between the classes is very large. As a consequence, the class-specific predictive accuracies differ considerably. When the class imbalance is not too severe, down-sizing and asymmetric bagging embedding variable selection work well, while over-sampling does not. Variable normalization can further worsen the performance of the classifiers. Conclusions Our results show that matching the prevalence of the classes in training and test set does not guarantee good performance of classifiers and that the problems related to classification with class
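Down-sizing (random under-sampling of the majority class), one of the strategies evaluated above, can be sketched as follows; the function name and toy labels are illustrative, not the authors' code.

```python
import random

def downsize(X, y, seed=0):
    """Random down-sizing: subsample every class down to the size of
    the smallest class so the training set is balanced."""
    rng = random.Random(seed)
    classes = {}
    for xi, yi in zip(X, y):
        classes.setdefault(yi, []).append(xi)
    n_min = min(len(v) for v in classes.values())
    Xb, yb = [], []
    for label, samples in classes.items():
        for xi in rng.sample(samples, n_min):  # without replacement
            Xb.append(xi)
            yb.append(label)
    return Xb, yb

X = [[i] for i in range(10)]
y = [0] * 8 + [1] * 2          # 8:2 class imbalance
Xb, yb = downsize(X, y)
print(sorted(yb))  # balanced: [0, 0, 1, 1]
```

The paper's asymmetric bagging repeats such balanced draws many times and aggregates the resulting classifiers, which avoids discarding most of the majority-class information in a single draw.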
Renormalization-group approach to nonlinear radiation-transport problems
International Nuclear Information System (INIS)
Chapline, G.F.
1980-01-01
A Monte Carlo method is derived for solving nonlinear radiation-transport problems that allows one to average over the effects of many photon absorptions and emissions at frequencies where the opacity is large. This method should allow one to treat radiation-transport problems with large optical depths, e.g., line-transport problems, with little increase in computational effort over that which is required for optically thin problems
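As a hedged illustration of why large optical depths are costly (this is the plain Monte Carlo baseline, not the renormalization-group method of the abstract), consider transporting photons through a purely scattering one-dimensional slab: the number of scattering events per photon grows with optical depth. All parameters are illustrative.

```python
import math
import random

def slab_transmission(tau, n_photons=20000, seed=1):
    """Plain Monte Carlo transport through a purely scattering 1-D slab
    of optical depth tau. Each free flight is exponentially distributed
    in optical path; the direction cosine is re-sampled isotropically at
    every scattering event. At large tau each photon needs many events,
    which is the cost the averaging technique above aims to reduce."""
    rng = random.Random(seed)
    transmitted = 0
    for _ in range(n_photons):
        x, mu = 0.0, 1.0                          # depth, direction cosine
        while True:
            x += mu * (-math.log(rng.random()))   # exponential free path
            if x >= tau:
                transmitted += 1                  # escaped far side
                break
            if x <= 0.0:
                break                             # reflected back out
            mu = 2.0 * rng.random() - 1.0         # isotropic scatter
    return transmitted / n_photons

print(slab_transmission(5.0))
```

An optically thin slab (small tau) needs only a handful of events per photon, so the relative advantage of averaging over many absorptions and emissions appears precisely in the optically thick regime.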
Evaluating Clustering in Subspace Projections of High Dimensional Data
DEFF Research Database (Denmark)
Müller, Emmanuel; Günnemann, Stephan; Assent, Ira
2009-01-01
Clustering high dimensional data is an emerging research field. Subspace clustering or projected clustering group similar objects in subspaces, i.e. projections, of the full space. In the past decade, several clustering paradigms have been developed in parallel, without thorough evaluation and comparison between these paradigms on a common basis. Conclusive evaluation and comparison is challenged by three major issues. First, there is no ground truth that describes the "true" clusters in real world data. Second, a large variety of evaluation measures have been used that reflect different aspects of the clustering result. Finally, in typical publications authors have limited their analysis to their favored paradigm only, while paying other paradigms little or no attention. In this paper, we take a systematic approach to evaluate the major paradigms in a common framework. We study representative clustering…
Pro-Lie Groups: A Survey with Open Problems
Directory of Open Access Journals (Sweden)
Karl H. Hofmann
2015-07-01
Full Text Available A topological group is called a pro-Lie group if it is isomorphic to a closed subgroup of a product of finite-dimensional real Lie groups. This class of groups is closed under the formation of arbitrary products and closed subgroups and forms a complete category. It includes each finite-dimensional Lie group, each locally-compact group that has a compact quotient group modulo its identity component and, thus, in particular, each compact and each connected locally-compact group; it also includes all locally-compact Abelian groups. This paper provides an overview of the structure theory and the Lie theory of pro-Lie groups, including results more recent than those in the authors’ reference book on pro-Lie groups. Significantly, it also includes a review of the recent insight that weakly-complete unital algebras provide a natural habitat for both pro-Lie algebras and pro-Lie groups, indeed for the exponential function that links the two. (A topological vector space is weakly complete if it is isomorphic to a power RX of an arbitrary set of copies of R. This class of real vector spaces is at the basis of the Lie theory of pro-Lie groups. The article also lists 12 open questions connected to pro-Lie groups.
6th Hilbert's problem and S.Lie's infinite groups
International Nuclear Information System (INIS)
Konopleva, N.P.
1999-01-01
The progress in Hilbert's sixth problem solving is demonstrated. That became possible thanks to the gauge field theory in physics and to the geometrical treatment of the gauge fields. It is shown that the fibre bundle spaces geometry is the best basis for solution of the problem being discussed. This talk has been reported at the International Seminar '100 Years after Sophus Lie' (Leipzig, Germany)
An irregular grid approach for pricing high-dimensional American options
Berridge, S.J.; Schumacher, J.M.
2008-01-01
We propose and test a new method for pricing American options in a high-dimensional setting. The method is centered around the approximation of the associated complementarity problem on an irregular grid. We approximate the partial differential operator on this grid by appealing to the SDE
Global communication schemes for the numerical solution of high-dimensional PDEs
DEFF Research Database (Denmark)
Hupp, Philipp; Heene, Mario; Jacob, Riko
2016-01-01
The numerical treatment of high-dimensional partial differential equations is among the most compute-hungry problems and in urgent need for current and future high-performance computing (HPC) systems. It is thus also facing the grand challenges of exascale computing such as the requirement…
Spectrally-Corrected Estimation for High-Dimensional Markowitz Mean-Variance Optimization
Z. Bai (Zhidong); H. Li (Hua); M.J. McAleer (Michael); W.-K. Wong (Wing-Keung)
2016-01-01
This paper considers the portfolio problem for high dimensional data when the dimension and size are both large. We analyze the traditional Markowitz mean-variance (MV) portfolio by large dimension matrix theory, and find the spectral distribution of the sample covariance is the main…
Berridge, S.J.; Schumacher, J.M.
2004-01-01
We propose a method for pricing high-dimensional American options on an irregular grid; the method involves using quadratic functions to approximate the local effect of the Black-Scholes operator. Once such an approximation is known, one can solve the pricing problem by time stepping in an explicit…
Multigrid for high dimensional elliptic partial differential equations on non-equidistant grids
bin Zubair, H.; Oosterlee, C.E.; Wienands, R.
2006-01-01
This work presents techniques, theory and numbers for multigrid in a general d-dimensional setting. The main focus is the multigrid convergence for high-dimensional partial differential equations (PDEs). As a model problem we have chosen the anisotropic diffusion equation, on a unit hypercube. We
An Irregular Grid Approach for Pricing High-Dimensional American Options
Berridge, S.J.; Schumacher, J.M.
2004-01-01
We propose and test a new method for pricing American options in a high-dimensional setting. The method is centred around the approximation of the associated complementarity problem on an irregular grid. We approximate the partial differential operator on this grid by appealing to the SDE
Engineering two-photon high-dimensional states through quantum interference
Zhang, Yingwen; Roux, Filippus S.; Konrad, Thomas; Agnew, Megan; Leach, Jonathan; Forbes, Andrew
2016-01-01
Many protocols in quantum science, for example, linear optical quantum computing, require access to large-scale entangled quantum states. Such systems can be realized through many-particle qubits, but this approach often suffers from scalability problems. An alternative strategy is to consider a lesser number of particles that exist in high-dimensional states. The spatial modes of light are one such candidate that provides access to high-dimensional quantum states, and thus they increase the storage and processing potential of quantum information systems. We demonstrate the controlled engineering of two-photon high-dimensional states entangled in their orbital angular momentum through Hong-Ou-Mandel interference. We prepare a large range of high-dimensional entangled states and implement precise quantum state filtering. We characterize the full quantum state before and after the filter, and are thus able to determine that only the antisymmetric component of the initial state remains. This work paves the way for high-dimensional processing and communication of multiphoton quantum states, for example, in teleportation beyond qubits. PMID:26933685
Model-based Clustering of High-Dimensional Data in Astrophysics
Bouveyron, C.
2016-05-01
The nature of data in Astrophysics has changed, as in other scientific fields, in the past decades due to the increase of the measurement capabilities. As a consequence, data are nowadays frequently high-dimensional and available in bulk or as streams. Model-based techniques for clustering are popular tools which are renowned for their probabilistic foundations and their flexibility. However, classical model-based techniques show a disappointing behavior in high-dimensional spaces, which is mainly due to their dramatic over-parametrization. The recent developments in model-based classification overcome these drawbacks and allow to efficiently classify high-dimensional data, even in the "small n / large p" situation. This work presents a comprehensive review of these recent approaches, including regularization-based techniques, parsimonious modeling, subspace classification methods and classification methods based on variable selection. The use of these model-based methods is also illustrated on real-world classification problems in Astrophysics using R packages.
Detection of Subtle Context-Dependent Model Inaccuracies in High-Dimensional Robot Domains.
Mendoza, Juan Pablo; Simmons, Reid; Veloso, Manuela
2016-12-01
Autonomous robots often rely on models of their sensing and actions for intelligent decision making. However, when operating in unconstrained environments, the complexity of the world makes it infeasible to create models that are accurate in every situation. This article addresses the problem of using potentially large and high-dimensional sets of robot execution data to detect situations in which a robot model is inaccurate, that is, detecting context-dependent model inaccuracies in a high-dimensional context space. To find inaccuracies tractably, the robot conducts an informed search through low-dimensional projections of execution data to find parametric Regions of Inaccurate Modeling (RIMs). Empirical evidence from two robot domains shows that this approach significantly enhances the detection power of existing RIM-detection algorithms in high-dimensional spaces.
Explorations on High Dimensional Landscapes: Spin Glasses and Deep Learning
Sagun, Levent
This thesis deals with understanding the structure of high-dimensional and non-convex energy landscapes. In particular, its focus is on the optimization of two classes of functions: homogeneous polynomials and loss functions that arise in machine learning. In the first part, the notion of complexity of a smooth, real-valued function is studied through its critical points. Existing theoretical results predict that certain random functions that are defined on high dimensional domains have a narrow band of values whose pre-image contains the bulk of its critical points. This section provides empirical evidence for convergence of gradient descent to local minima whose energies are near the predicted threshold justifying the existing asymptotic theory. Moreover, it is empirically shown that a similar phenomenon may hold for deep learning loss functions. Furthermore, there is a comparative analysis of gradient descent and its stochastic version showing that in high dimensional regimes the latter is a mere speedup. The next study focuses on the halting time of an algorithm at a given stopping condition. Given an algorithm, the normalized fluctuations of the halting time follow a distribution that remains unchanged even when the input data is sampled from a new distribution. Two qualitative classes are observed: a Gumbel-like distribution that appears in Google searches, human decision times, and spin glasses and a Gaussian-like distribution that appears in conjugate gradient method, deep learning with MNIST and random input data. Following the universality phenomenon, the Hessian of the loss functions of deep learning is studied. The spectrum is seen to be composed of two parts, the bulk which is concentrated around zero, and the edges which are scattered away from zero. Empirical evidence is presented for the bulk indicating how over-parametrized the system is, and for the edges that depend on the input data. Furthermore, an algorithm is proposed such that it would
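The halting-time experiments described above can be sketched minimally: run fixed-step gradient descent until the gradient norm falls below a stopping tolerance and record the iteration count. The one-dimensional loss below is a hypothetical stand-in for the high-dimensional landscapes studied in the thesis.

```python
def gradient_descent(grad, x0, lr=0.1, tol=1e-8, max_iter=10000):
    """Fixed-step gradient descent on a 1-D function; returns the final
    iterate and the halting time, i.e. the number of iterations until
    the stopping condition |grad(x)| < tol is met."""
    x = x0
    for t in range(1, max_iter + 1):
        g = grad(x)
        if abs(g) < tol:
            return x, t
        x -= lr * g
    return x, max_iter

# Hypothetical loss f(x) = (x - 2)**2, whose gradient is 2(x - 2).
x_min, halting_time = gradient_descent(lambda x: 2.0 * (x - 2.0), x0=0.0)
print(round(x_min, 6))  # converges near the minimizer x = 2
```

The thesis studies the distribution of such halting times over random inputs; on this convex toy problem the halting time is deterministic, whereas on high-dimensional non-convex landscapes its normalized fluctuations exhibit the universality classes described above.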
Multivariate statistics high-dimensional and large-sample approximations
Fujikoshi, Yasunori; Shimizu, Ryoichi
2010-01-01
A comprehensive examination of high-dimensional analysis of multivariate methods and their real-world applications Multivariate Statistics: High-Dimensional and Large-Sample Approximations is the first book of its kind to explore how classical multivariate methods can be revised and used in place of conventional statistical tools. Written by prominent researchers in the field, the book focuses on high-dimensional and large-scale approximations and details the many basic multivariate methods used to achieve high levels of accuracy. The authors begin with a fundamental presentation of the basic
Statistical mechanics of complex neural systems and high dimensional data
International Nuclear Information System (INIS)
Advani, Madhu; Lahiri, Subhaneil; Ganguli, Surya
2013-01-01
Recent experimental advances in neuroscience have opened new vistas into the immense complexity of neuronal networks. This proliferation of data challenges us on two parallel fronts. First, how can we form adequate theoretical frameworks for understanding how dynamical network processes cooperate across widely disparate spatiotemporal scales to solve important computational problems? Second, how can we extract meaningful models of neuronal systems from high dimensional datasets? To aid in these challenges, we give a pedagogical review of a collection of ideas and theoretical methods arising at the intersection of statistical physics, computer science and neurobiology. We introduce the interrelated replica and cavity methods, which originated in statistical physics as powerful ways to quantitatively analyze large highly heterogeneous systems of many interacting degrees of freedom. We also introduce the closely related notion of message passing in graphical models, which originated in computer science as a distributed algorithm capable of solving large inference and optimization problems involving many coupled variables. We then show how both the statistical physics and computer science perspectives can be applied in a wide diversity of contexts to problems arising in theoretical neuroscience and data analysis. Along the way we discuss spin glasses, learning theory, illusions of structure in noise, random matrices, dimensionality reduction and compressed sensing, all within the unified formalism of the replica method. Moreover, we review recent conceptual connections between message passing in graphical models, and neural computation and learning. Overall, these ideas illustrate how statistical physics and computer science might provide a lens through which we can uncover emergent computational functions buried deep within the dynamical complexities of neuronal networks. (paper)
Permutation groups and transformation semigroups : results and problems
Araujo, Joao; Cameron, Peter Jephson
2015-01-01
J.M. Howie, the influential St Andrews semigroupist, claimed that we value an area of pure mathematics to the extent that (a) it gives rise to arguments that are deep and elegant, and (b) it has interesting interconnections with other parts of pure mathematics. This paper surveys some recent results on the transformation semigroup generated by a permutation group $G$ and a single non-permutation $a$. Our particular concern is the influence that properties of $G$ (related to homogeneity, trans...
Conflict Management in "Ad Hoc" Problem-Solving Groups: A Preliminary Investigation.
Wallace, Les; Baxter, Leslie
Full study of small group communication must include consideration of task and socio-emotional dimensions, especially in relation to group problem solving. Thirty small groups were tested for their reactions in various "ad hoc" conflict resolution situations. Instructions to the groups were (1) no problem-solving instructions (control),…
Progress in high-dimensional percolation and random graphs
Heydenreich, Markus
2017-01-01
This text presents an engaging exposition of the active field of high-dimensional percolation that will likely provide an impetus for future work. With over 90 exercises designed to enhance the reader’s understanding of the material, as well as many open problems, the book is aimed at graduate students and researchers who wish to enter the world of this rich topic. The text may also be useful in advanced courses and seminars, as well as for reference and individual study. Part I, consisting of 3 chapters, presents a general introduction to percolation, stating the main results, defining the central objects, and proving its main properties. No prior knowledge of percolation is assumed. Part II, consisting of Chapters 4–9, discusses mean-field critical behavior by describing the two main techniques used, namely, differential inequalities and the lace expansion. In Parts I and II, all results are proved, making this the first self-contained text discussing high-dimensional percolation. Part III, consist...
Inference for High-dimensional Differential Correlation Matrices.
Cai, T Tony; Zhang, Anru
2016-01-01
Motivated by differential co-expression analysis in genomics, we consider in this paper estimation and testing of high-dimensional differential correlation matrices. An adaptive thresholding procedure is introduced and theoretical guarantees are given. Minimax rate of convergence is established and the proposed estimator is shown to be adaptively rate-optimal over collections of paired correlation matrices with approximately sparse differences. Simulation results show that the procedure significantly outperforms two other natural methods that are based on separate estimation of the individual correlation matrices. The procedure is also illustrated through an analysis of a breast cancer dataset, which provides evidence at the gene co-expression level that several genes, of which a subset has been previously verified, are associated with the breast cancer. Hypothesis testing on the differential correlation matrices is also considered. A test, which is particularly well suited for testing against sparse alternatives, is introduced. In addition, other related problems, including estimation of a single sparse correlation matrix, estimation of the differential covariance matrices, and estimation of the differential cross-correlation matrices, are also discussed.
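A minimal sketch of the core operation, entrywise thresholding of a differential correlation matrix: the paper's adaptive procedure chooses an entry-specific threshold, whereas this illustration (with a hypothetical function name and toy matrices) uses a single global one.

```python
def hard_threshold_difference(R1, R2, t):
    """Entrywise hard thresholding of the difference of two correlation
    matrices: keep D_ij = R1_ij - R2_ij only where |D_ij| > t, which
    exploits the approximate sparsity of the true difference."""
    n = len(R1)
    return [[(R1[i][j] - R2[i][j]) if abs(R1[i][j] - R2[i][j]) > t else 0.0
             for j in range(n)] for i in range(n)]

R1 = [[1.0, 0.8], [0.8, 1.0]]   # correlation matrix, condition 1
R2 = [[1.0, 0.1], [0.1, 1.0]]   # correlation matrix, condition 2
D = hard_threshold_difference(R1, R2, t=0.5)
print(D)  # the off-diagonal difference 0.7 survives; diagonals are zeroed
```

Thresholding the difference directly, rather than estimating each correlation matrix separately and subtracting, is what the simulations above show to be significantly more accurate.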
Use of the Fox derivatives in the solution of the word problem for groups
International Nuclear Information System (INIS)
Majumdar, S.
1988-09-01
Applying Fox's free partial derivative, the word problem of a finitely presented group has been reduced to the problem of finding an algorithm for determining the existence of a root of a system of linear equations over the integral group ring. The solubility of the word problem for torsion-free one-relator groups and torsion-free polycyclic-by-finite groups has been deduced. (author). 10 refs
Hall, Kimberly R.; Rushing, Jeri Lynn; Khurshid, Ayesha
2011-01-01
Problem-focused interventions are considered to be one of the most effective group counseling strategies with adolescents. This article describes a problem-focused group counseling model, Solving Problems Together (SPT), that focuses on working with students who struggle with negative peer pressure. Adapted from the teaching philosophy of…
Assessing the Internal Dynamics of Mathematical Problem Solving in Small Groups.
Artzt, Alice F.; Armour-Thomas, Eleanor
The purpose of this exploratory study was to examine the problem-solving behaviors and perceptions of (n=27) seventh-grade students as they worked on solving a mathematical problem within a small-group setting. An assessment system was developed that allowed for this analysis. To assess problem-solving behaviors within a small group a Group…
Directory of Open Access Journals (Sweden)
L.V. Arun Shalin
2016-01-01
Full Text Available Clustering is a process of grouping elements together, designed in such a way that the elements assigned to similar data points in a cluster are more comparable to each other than the remaining data points in the cluster. Certain difficulties when dealing with high dimensional data are ubiquitous and abundant during clustering. Works that apply anonymization methods to high dimensional data spaces have failed to address the problem of dimensionality reduction for non-binary databases. In this work we study methods for dimensionality reduction for non-binary databases; analyzing the behavior of dimensionality reduction for non-binary databases yields a performance improvement with the help of tag-based features. An effective multi-clustering anonymization approach called Discrete Component Task Specific Multi-Clustering (DCTSM) is presented for dimensionality reduction on non-binary databases. To start with, we present the analysis of attributes in the non-binary database, and cluster projection identifies the sparseness degree of dimensions. Additionally, with the quantum distribution on the multi-cluster dimension, a solution for attribute relevancy and redundancy on non-binary data spaces is provided, resulting in performance improvement on the basis of tag-based features. Multi-clustering tag-based feature reduction extracts individual features, which are correspondingly replaced by the equivalent feature clusters (i.e. tag clusters). During training, the DCTSM approach uses multi-clusters instead of individual tag features, and during decoding individual features are replaced by the corresponding multi-clusters. To measure the effectiveness of the method, experiments are conducted on an existing anonymization method for high dimensional data spaces and compared with the DCTSM approach using the Statlog German Credit Data Set. Improved tag feature extraction and a lower error rate compared to conventional anonymization…
Brauer groups and obstruction problems moduli spaces and arithmetic
Hassett, Brendan; Várilly-Alvarado, Anthony; Viray, Bianca
2017-01-01
The contributions in this book explore various contexts in which the derived category of coherent sheaves on a variety determines some of its arithmetic. This setting provides new geometric tools for interpreting elements of the Brauer group. With a view towards future arithmetic applications, the book extends a number of powerful tools for analyzing rational points on elliptic curves, e.g., isogenies among curves, torsion points, modular curves, and the resulting descent techniques, as well as higher-dimensional varieties like K3 surfaces. Inspired by the rapid recent advances in our understanding of K3 surfaces, the book is intended to foster cross-pollination between the fields of complex algebraic geometry and number theory. Contributors: · Nicolas Addington · Benjamin Antieau · Kenneth Ascher · Asher Auel · Fedor Bogomolov · Jean-Louis Colliot-Thélène · Krishna Dasaratha · Brendan Hassett · Colin Ingalls · Martí Lahoz · Emanuele Macrì · Kelly McKinnie · Andrew Obus · Ekin Ozman · Raman...
de Klerk, E.; Sotirov, R.
2007-01-01
We consider semidefinite programming relaxations of the quadratic assignment problem, and show how to exploit group symmetry in the problem data. Thus we are able to compute the best known lower bounds for several instances of quadratic assignment problems from the problem library: [R.E. Burkard,
Backtrack Programming: A Computer-Based Approach to Group Problem Solving.
Scott, Michael D.; Bodaken, Edward M.
Backtrack problem-solving appears to be a viable alternative to current problem-solving methodologies. It appears to have considerable heuristic potential as a conceptual and operational framework for small group communication research, as well as functional utility for the student group in the small group class or the management team in the…
High-Dimensional Intrinsic Interpolation Using Gaussian Process Regression and Diffusion Maps
International Nuclear Information System (INIS)
Thimmisetty, Charanraj A.; Ghanem, Roger G.; White, Joshua A.; Chen, Xiao
2017-01-01
This article considers the challenging task of estimating geologic properties of interest using a suite of proxy measurements. The current work recasts this task as a manifold learning problem. In this process, this article introduces a novel regression procedure for intrinsic variables constrained onto a manifold embedded in an ambient space. The procedure is meant to sharpen high-dimensional interpolation by inferring non-linear correlations from the data being interpolated. The proposed approach augments manifold learning procedures with Gaussian process regression. It first identifies, using diffusion maps, a low-dimensional manifold embedded in an ambient high-dimensional space associated with the data. It relies on the diffusion distance associated with this construction to define a distance function with which the data model is equipped. This distance function is then used to compute the correlation structure of a Gaussian process that describes the statistical dependence of quantities of interest in the high-dimensional ambient space. The proposed method is applicable to arbitrarily high-dimensional data sets. Here, it is applied to subsurface characterization using a suite of well log measurements. The predictions obtained in original, principal component, and diffusion space are compared using both qualitative and quantitative metrics. Considerable improvement in the prediction of the geological structural properties is observed with the proposed method.
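The regression pipeline described in this abstract can be sketched compactly: embed the data with a diffusion map, then run Gaussian-process regression in the embedded coordinates so that the correlation structure follows diffusion distance. The sketch below is a minimal illustration under assumed hyperparameters (`eps`, `length`, `noise`), not the authors' implementation:

```python
import numpy as np

def diffusion_map(X, eps, n_components=2, t=1):
    """Diffusion-map embedding: Gaussian kernel, Markov normalization,
    leading non-trivial eigenvectors scaled by eigenvalues^t."""
    D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-D2 / eps)
    P = K / K.sum(axis=1, keepdims=True)   # row-stochastic transition matrix
    vals, vecs = np.linalg.eig(P)          # spectrum is real: P is similar to a symmetric matrix
    order = np.argsort(-vals.real)
    vals, vecs = vals.real[order], vecs.real[:, order]
    # Skip the trivial constant eigenvector
    return vecs[:, 1:n_components + 1] * vals[1:n_components + 1] ** t

def gp_predict(E_train, f_train, E_test, length=0.1, noise=1e-6):
    """GP regression with an RBF kernel in diffusion coordinates, so the
    correlation structure follows diffusion distance on the manifold."""
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * length ** 2))
    Kff = k(E_train, E_train) + noise * np.eye(len(E_train))
    return k(E_test, E_train) @ np.linalg.solve(Kff, f_train)
```

In practice the kernel scale `eps` and the number of retained diffusion coordinates are tuned to the data; the authors additionally compare predictions obtained in original, principal-component, and diffusion space.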
Cunningham, Gloria
The idea of a "born criminal" or a class of criminals is erroneous. Some citizens maintain this attitude and therefore lack community concern or involvement, thereby reducing the number of resources and cooperating community units that a probation officer can draw on. Another problem with resources is that, even where they do exist, they…
Harnessing high-dimensional hyperentanglement through a biphoton frequency comb
Xie, Zhenda; Zhong, Tian; Shrestha, Sajan; Xu, Xinan; Liang, Junlin; Gong, Yan-Xiao; Bienfang, Joshua C.; Restelli, Alessandro; Shapiro, Jeffrey H.; Wong, Franco N. C.; Wei Wong, Chee
2015-08-01
Quantum entanglement is a fundamental resource for secure information processing and communications, and hyperentanglement or high-dimensional entanglement has been separately proposed for its high data capacity and error resilience. The continuous-variable nature of the energy-time entanglement makes it an ideal candidate for efficient high-dimensional coding with minimal limitations. Here, we demonstrate the first simultaneous high-dimensional hyperentanglement using a biphoton frequency comb to harness the full potential in both the energy and time domain. Long-postulated Hong-Ou-Mandel quantum revival is exhibited, with up to 19 time-bins and 96.5% visibilities. We further witness the high-dimensional energy-time entanglement through Franson revivals, observed periodically at integer time-bins, with 97.8% visibility. This qudit state is observed to simultaneously violate the generalized Bell inequality by up to 10.95 standard deviations while observing recurrent Clauser-Horne-Shimony-Holt S-parameters up to 2.76. Our biphoton frequency comb provides a platform for photon-efficient quantum communications towards the ultimate channel capacity through energy-time-polarization high-dimensional encoding.
High-dimensional statistical inference: From vector to matrix
Zhang, Anru
Statistical inference for sparse signals or low-rank matrices in high-dimensional settings is of significant interest in a range of contemporary applications and has attracted considerable recent attention in many fields, including statistics, applied mathematics and electrical engineering. In this thesis, we consider several problems, including sparse signal recovery (compressed sensing under restricted isometry) and low-rank matrix recovery (matrix recovery via rank-one projections and structured matrix completion). The first part of the thesis discusses compressed sensing and affine rank minimization in both noiseless and noisy cases and establishes sharp restricted isometry conditions for sparse signal and low-rank matrix recovery. The analysis relies on a key technical tool that represents points in a polytope by convex combinations of sparse vectors. The technique is elementary yet leads to sharp results. It is shown that, in compressed sensing, for any ε > 0 the conditions δ_k(A) < 1/3 + ε, δ_k(A) + θ_{k,k}(A) < 1 + ε, or δ_{tk}(A) < √((t-1)/t) + ε are not sufficient to guarantee the exact recovery of all k-sparse signals for large k. A similar result also holds for matrix recovery. In addition, the conditions δ_k(A) < 1/3, δ_k(A) + θ_{k,k}(A) < 1, δ_{tk}(A) < √((t-1)/t) and δ_r(M) < 1/3, δ_r(M) + θ_{r,r}(M) < 1, δ_{tr}(M) < √((t-1)/t) are also shown to be sufficient, respectively, for stable recovery of approximately sparse signals and low-rank matrices in the noisy case. In the second part of the thesis, we introduce a rank-one projection model for low-rank matrix recovery and propose a constrained nuclear norm minimization method for stable recovery of low-rank matrices in the noisy case. The procedure is adaptive to the rank and robust against small perturbations. Both upper and lower bounds for the estimation accuracy under the Frobenius norm loss are obtained. The proposed estimator is shown to be rate-optimal under certain conditions. The…
Approximation of High-Dimensional Rank One Tensors
Bachmayr, Markus; Dahmen, Wolfgang; DeVore, Ronald; Grasedyck, Lars
2013-11-12
Many real world problems are high-dimensional in that their solution is a function which depends on many variables or parameters. This presents a computational challenge since traditional numerical techniques are built on model classes for functions based solely on smoothness. It is known that the approximation of smoothness classes of functions suffers from the so-called 'curse of dimensionality'. Avoiding this curse requires new model classes for real world functions that match applications. This has led to the introduction of notions such as sparsity, variable reduction, and reduced modeling. One theme that is particularly common is to assume a tensor structure for the target function. This paper investigates how well a rank one function f(x_1,…,x_d) = f_1(x_1)⋯f_d(x_d), defined on Ω = [0,1]^d, can be captured through point queries. It is shown that such a rank one function with component functions f_j in W^r_∞([0,1]) can be captured (in L_∞) to accuracy O(C(d,r)N^(-r)) from N well-chosen point evaluations. The constant C(d,r) scales like d^(dr). The queries in our algorithms have two ingredients, a set of points built on the results from discrepancy theory and a second adaptive set of queries dependent on the information drawn from the first set. Under the assumption that a point z ∈ Ω with nonvanishing f(z) is known, the accuracy improves to O(dN^(-r)). © 2013 Springer Science+Business Media New York.
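The structural fact behind the query strategy can be verified directly: a rank-one function is determined everywhere by its values on the d coordinate lines through any anchor z with f(z) ≠ 0, since f(x) = ∏_j f(z with x_j substituted in slot j) / f(z)^(d-1). A minimal sketch of this identity (the paper's algorithm replaces each exact coordinate-line evaluation with N well-chosen point queries and interpolation, which is where the O(dN^(-r)) rate comes from):

```python
import numpy as np

def rank_one_reconstruct(f, z):
    """Recover a rank-one f(x) = f_1(x_1)...f_d(x_d) from queries on
    the coordinate lines through an anchor z with f(z) != 0."""
    z = np.asarray(z, dtype=float)
    fz = f(z)
    assert fz != 0, "anchor must have nonvanishing f(z)"
    d = len(z)

    def fhat(x):
        prod = 1.0
        for j in range(d):
            q = z.copy()
            q[j] = x[j]            # one query on the j-th coordinate line
            prod *= f(q)
        # each factor carries a surplus f(z)/f_j(z_j); divide it out
        return prod / fz ** (d - 1)

    return fhat
```

The identity is exact: each coordinate-line value equals f_j(x_j)·∏_{i≠j} f_i(z_i), so the product over j equals f(x)·f(z)^(d-1).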
Quality and efficiency in high dimensional Nearest neighbor search
Tao, Yufei; Yi, Ke; Sheng, Cheng; Kalnis, Panos
2009-01-01
Nearest neighbor (NN) search in high dimensional space is an important problem in many applications. Ideally, a practical solution (i) should be implementable in a relational database, and (ii) its query cost should grow sub-linearly with the dataset size, regardless of the data and query distributions. Despite the bulk of NN literature, no solution fulfills both requirements, except locality sensitive hashing (LSH). The existing LSH implementations are either rigorous or adhoc. Rigorous-LSH ensures good quality of query results, but requires expensive space and query cost. Although adhoc-LSH is more efficient, it abandons quality control, i.e., the neighbor it outputs can be arbitrarily bad. As a result, currently no method is able to ensure both quality and efficiency simultaneously in practice. Motivated by this, we propose a new access method called the locality sensitive B-tree (LSB-tree) that enables fast high-dimensional NN search with excellent quality. The combination of several LSB-trees leads to a structure called the LSB-forest that ensures the same result quality as rigorous-LSH, but reduces its space and query cost dramatically. The LSB-forest also outperforms adhoc-LSH, even though the latter has no quality guarantee. Besides its appealing theoretical properties, the LSB-tree itself also serves as an effective index that consumes linear space, and supports efficient updates. Our extensive experiments confirm that the LSB-tree is faster than (i) the state of the art of exact NN search by two orders of magnitude, and (ii) the best (linear-space) method of approximate retrieval by an order of magnitude, and at the same time, returns neighbors with much better quality. © 2009 ACM.
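The LSB-tree itself is beyond a short sketch, but the locality-sensitive hashing primitive that rigorous-LSH, adhoc-LSH and the LSB-tree all build on is compact: project onto random directions, quantize with a random offset, and use the concatenated bucket ids as keys in several hash tables. A rough illustration with assumed parameters (`n_tables`, `n_projs`, `w` are illustrative choices, not values from the paper):

```python
import numpy as np
from collections import defaultdict

class L2LSH:
    """Minimal projection-based LSH index: h(v) = floor((a . v + b) / w)."""
    def __init__(self, dim, n_tables=8, n_projs=4, w=2.0, seed=0):
        rng = np.random.default_rng(seed)
        self.A = rng.normal(size=(n_tables, n_projs, dim))   # random directions
        self.B = rng.uniform(0.0, w, size=(n_tables, n_projs))  # random offsets
        self.w = w
        self.tables = [defaultdict(list) for _ in range(n_tables)]
        self.data = []

    def _keys(self, v):
        H = np.floor((self.A @ v + self.B) / self.w).astype(int)
        return [tuple(h) for h in H]             # one concatenated key per table

    def insert(self, v):
        i = len(self.data)
        self.data.append(np.asarray(v, dtype=float))
        for t, key in zip(self.tables, self._keys(v)):
            t[key].append(i)

    def query(self, q):
        """Return the index of the closest colliding candidate, or None."""
        cand = {i for t, key in zip(self.tables, self._keys(q))
                for i in t.get(key, [])}
        if not cand:
            return None
        return min(cand, key=lambda i: np.linalg.norm(self.data[i] - q))
```

Roughly speaking, the LSB-tree departs from this scheme by mapping the bucket ids onto a Z-order curve stored in a conventional B-tree, which is what yields quality guarantees with linear space and efficient updates.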
Analysing spatially extended high-dimensional dynamics by recurrence plots
Energy Technology Data Exchange (ETDEWEB)
Marwan, Norbert, E-mail: marwan@pik-potsdam.de [Potsdam Institute for Climate Impact Research, 14412 Potsdam (Germany); Kurths, Jürgen [Potsdam Institute for Climate Impact Research, 14412 Potsdam (Germany); Humboldt Universität zu Berlin, Institut für Physik (Germany); Nizhny Novgorod State University, Department of Control Theory, Nizhny Novgorod (Russian Federation); Foerster, Saskia [GFZ German Research Centre for Geosciences, Section 1.4 Remote Sensing, Telegrafenberg, 14473 Potsdam (Germany)
2015-05-08
Recurrence plot based measures of complexity are powerful tools for characterizing complex dynamics. In this letter we show the potential of selected recurrence plot measures for the investigation of even high-dimensional dynamics. We apply this method to spatially extended chaos, such as derived from the Lorenz96 model, and show that the recurrence plot based measures can qualitatively characterize typical dynamical properties such as chaotic or periodic dynamics. Moreover, we demonstrate its power by analysing satellite image time series of vegetation cover with contrasting dynamics as a spatially extended and potentially high-dimensional example from the real world. - Highlights: • We use recurrence plots for analysing spatially extended dynamics. • We investigate the high-dimensional chaos of the Lorenz96 model. • The approach distinguishes different spatio-temporal dynamics. • We use the method for studying vegetation cover time series.
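For a scalar time series, a recurrence plot and two of the standard measures built on it (recurrence rate and determinism) take only a few lines. This is a generic sketch, not the letter's spatio-temporal analysis; the threshold `eps` and minimum line length `lmin` are assumed parameters:

```python
import numpy as np

def recurrence_plot(x, eps):
    """R[i, j] = 1 when states i and j are closer than eps
    (scalar series; replace abs by a vector norm for embeddings)."""
    d = np.abs(x[:, None] - x[None, :])
    return (d < eps).astype(int)

def recurrence_rate(R):
    # Density of recurrence points
    return R.mean()

def determinism(R, lmin=2):
    """Fraction of recurrence points on diagonal lines of length >= lmin
    (line of identity included, for simplicity)."""
    n = len(R)
    on_lines = 0
    for k in range(-(n - 1), n):
        run = 0
        for v in list(np.diagonal(R, k)) + [0]:   # sentinel 0 flushes the last run
            if v:
                run += 1
            else:
                if run >= lmin:
                    on_lines += run
                run = 0
    return on_lines / max(R.sum(), 1)
```

Periodic dynamics produce long uninterrupted diagonals (determinism near 1), while stochastic dynamics scatter isolated recurrence points, which is the qualitative distinction the letter exploits.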
Artzt, Alice F.; Armour-Thomas, Eleanor
The roles of cognition and metacognition were examined in the mathematical problem-solving behaviors of students as they worked in small groups. As an outcome, a framework that links the literature of cognitive science and mathematical problem solving was developed for protocol analysis of mathematical problem solving. Within this framework, each…
Group relationships in early and late sessions and improvement in interpersonal problems.
Lo Coco, Gianluca; Gullo, Salvatore; Di Fratello, Carla; Giordano, Cecilia; Kivlighan, Dennis M
2016-07-01
Groups are more effective when positive bonds are established and interpersonal conflicts resolved in early sessions and work is accomplished in later sessions. Previous research has provided mixed support for this group development model. We performed a test of this theoretical perspective using group members' (actors) and aggregated group members' (partners) perceptions of positive bonding, positive working, and negative group relationships measured early and late in interpersonal growth groups. Participants were 325 Italian graduate students randomly (within semester) assigned to 1 of 16 interpersonal growth groups. Groups met for 9 weeks with experienced psychologists using Yalom and Leszcz's (2005) interpersonal process model. Outcome was assessed pre- and posttreatment using the Inventory of Interpersonal Problems, and group relationships were measured at Sessions 3 and 6 using the Group Questionnaire. As hypothesized, early measures of positive bonding and late measures of positive working, for both actors and partners, were positively related to improved interpersonal problems. Also as hypothesized, late measures of positive bonding and early measures of positive working, for both actors and partners, were negatively related to improved interpersonal problems. We also found that early actor and partner positive bonding and negative relationships interacted to predict changes in interpersonal problems. The findings are consistent with group development theory and suggest that group therapists focus on group-as-a-whole positive bonding relationships in early group sessions and on group-as-a-whole positive working relationships in later group sessions. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
A Mindfulness-Based Cognitive Psychoeducational Group Manual for Problem Gambling
Cormier, Abigail; McBride, Dawn Lorraine
2012-01-01
This project provides a comprehensive overview of the research literature on problem gambling in adults and includes a detailed mindfulness-based psychoeducational group manual for problem gambling, complete with an extensive group counselling consent form, assessment and screening protocols, 10 user-friendly lesson plans, templates for a…
Non-commutative cryptography and complexity of group-theoretic problems
Myasnikov, Alexei; Ushakov, Alexander
2011-01-01
This book is about relations between three different areas of mathematics and theoretical computer science: combinatorial group theory, cryptography, and complexity theory. It explores how non-commutative (infinite) groups, which are typically studied in combinatorial group theory, can be used in public-key cryptography. It also shows that there is remarkable feedback from cryptography to combinatorial group theory because some of the problems motivated by cryptography appear to be new to group theory, and they open many interesting research avenues within group theory. In particular, a lot of emphasis in the book is put on studying search problems, as compared to decision problems traditionally studied in combinatorial group theory. Then, complexity theory, notably generic-case complexity of algorithms, is employed for cryptanalysis of various cryptographic protocols based on infinite groups, and the ideas and machinery from the theory of generic-case complexity are used to study asymptotically dominant prop...
Supporting Dynamic Quantization for High-Dimensional Data Analytics.
Guzun, Gheorghi; Canahuate, Guadalupe
2017-05-01
Similarity searches are at the heart of exploratory data analysis tasks. Distance metrics are typically used to characterize the similarity between data objects represented as feature vectors. However, when the dimensionality of the data increases and the number of features is large, traditional distance metrics fail to distinguish between the closest and furthest data points. Localized distance functions have been proposed as an alternative to traditional distance metrics. These functions only consider dimensions close to the query to compute the distance/similarity. Furthermore, in order to enable interactive explorations of high-dimensional data, indexing support for ad-hoc queries is needed. In this work we set out to investigate whether bit-sliced indices can be used for exploratory analytics such as similarity searches and data clustering for high-dimensional big data. We also propose a novel dynamic quantization called Query-dependent Equi-Depth (QED) quantization and show its effectiveness on characterizing high-dimensional similarity. When applying QED we observe improvements in kNN classification accuracy over traditional distance functions. Gheorghi Guzun and Guadalupe Canahuate. 2017. Supporting Dynamic Quantization for High-Dimensional Data Analytics. In Proceedings of ExploreDB'17, Chicago, IL, USA, May 14-19, 2017, 6 pages. https://doi.org/10.1145/3077331.3077336.
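Static equi-depth quantization, the starting point for the query-dependent QED variant proposed here, simply places bin boundaries at empirical quantiles so that every bin holds about the same number of points. A minimal sketch (QED itself additionally adapts the boundaries to the query, which this sketch does not do):

```python
import numpy as np

def equi_depth_bins(col, n_bins):
    """Interior bin boundaries at empirical quantiles -> equal-count bins."""
    qs = np.linspace(0.0, 1.0, n_bins + 1)[1:-1]
    return np.quantile(col, qs)

def quantize(X, n_bins):
    """Per-dimension equi-depth codes in {0, ..., n_bins - 1};
    np.searchsorted assigns each value its bin id."""
    return np.stack([np.searchsorted(equi_depth_bins(X[:, j], n_bins), X[:, j])
                     for j in range(X.shape[1])], axis=1)
```

The resulting small integer codes are what a bit-sliced index stores, one bit-slice per code bit, so similarity computations reduce to bitwise operations.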
A hybridized K-means clustering approach for high dimensional ...
African Journals Online (AJOL)
International Journal of Engineering, Science and Technology ... Due to the incredible growth of high-dimensional datasets, conventional database querying methods are inadequate for extracting useful information, so researchers nowadays ... Recently, cluster analysis has become a popularly used data analysis method in a number of areas.
On Robust Information Extraction from High-Dimensional Data
Czech Academy of Sciences Publication Activity Database
Kalina, Jan
2014-01-01
Roč. 9, č. 1 (2014), s. 131-144 ISSN 1452-4864 Grant - others:GA ČR(CZ) GA13-01930S Institutional support: RVO:67985807 Keywords : data mining * high-dimensional data * robust econometrics * outliers * machine learning Subject RIV: IN - Informatics, Computer Science
Inference in High-dimensional Dynamic Panel Data Models
DEFF Research Database (Denmark)
Kock, Anders Bredahl; Tang, Haihan
We establish oracle inequalities for a version of the Lasso in high-dimensional fixed effects dynamic panel data models. The inequalities are valid for the coefficients of the dynamic and exogenous regressors. Separate oracle inequalities are derived for the fixed effects. Next, we show how one can...
Pricing High-Dimensional American Options Using Local Consistency Conditions
Berridge, S.J.; Schumacher, J.M.
2004-01-01
We investigate a new method for pricing high-dimensional American options. The method is of finite difference type but is also related to Monte Carlo techniques in that it involves a representative sampling of the underlying variables. An approximating Markov chain is built using this sampling and
Hypergraph-based anomaly detection of high-dimensional co-occurrences.
Silva, Jorge; Willett, Rebecca
2009-03-01
This paper addresses the problem of detecting anomalous multivariate co-occurrences using a limited number of unlabeled training observations. A novel method based on using a hypergraph representation of the data is proposed to deal with this very high-dimensional problem. Hypergraphs constitute an important extension of graphs which allow edges to connect more than two vertices simultaneously. A variational Expectation-Maximization algorithm for detecting anomalies directly on the hypergraph domain without any feature selection or dimensionality reduction is presented. The resulting estimate can be used to calculate a measure of anomalousness based on the False Discovery Rate. The algorithm has O(np) computational complexity, where n is the number of training observations and p is the number of potential participants in each co-occurrence event. This efficiency makes the method ideally suited for very high-dimensional settings, and requires no tuning, bandwidth or regularization parameters. The proposed approach is validated on both high-dimensional synthetic data and the Enron email database, where p > 75,000, and it is shown that it can outperform other state-of-the-art methods.
Non-intrusive low-rank separated approximation of high-dimensional stochastic models
Doostan, Alireza; Validi, AbdoulAhad; Iaccarino, Gianluca
2013-01-01
This work proposes a sampling-based (non-intrusive) approach within the context of low-rank separated representations to tackle the issue of curse-of-dimensionality associated with the solution of models, e.g., PDEs/ODEs, with high-dimensional random inputs. Under some conditions discussed in detail, the number of random realizations of the solution, required for a successful approximation, grows linearly with respect to the number of random inputs. The construction of the separated representation is achieved via a regularized alternating least-squares regression, together with an error indicator to estimate model parameters. The computational complexity of such a construction is quadratic in the number of random inputs. The performance of the method is investigated through its application to three numerical examples including two ODE problems with high-dimensional random inputs. © 2013 Elsevier B.V.
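A stripped-down version of the construction can be sketched for separation rank one with polynomial factors: holding all factors but one fixed makes the fit linear, so the factors are updated by cycling regularized least-squares solves. The paper's method handles general separation rank and an error indicator; `degree`, `sweeps` and `lam` below are illustrative choices:

```python
import numpy as np

def rank1_separated_fit(Y, u, degree=3, sweeps=20, lam=1e-8):
    """Fit u(y) ~ p_1(y_1) * ... * p_d(y_d) by regularized alternating
    least squares; each factor is a polynomial of the given degree."""
    n, d = Y.shape
    Phi = [np.vander(Y[:, j], degree + 1, increasing=True) for j in range(d)]
    c = [np.r_[1.0, np.zeros(degree)] for _ in range(d)]   # start from constant factors
    for _ in range(sweeps):
        for j in range(d):
            # Freeze all factors except j: the model is then linear in c[j]
            others = np.prod([Phi[i] @ c[i] for i in range(d) if i != j], axis=0)
            A = Phi[j] * others[:, None]
            c[j] = np.linalg.solve(A.T @ A + lam * np.eye(degree + 1), A.T @ u)

    def predict(y):
        return np.prod([np.vander(np.atleast_1d(y[j]), degree + 1,
                                  increasing=True)[0] @ c[j] for j in range(d)])
    return predict
```

The key scaling property the abstract emphasizes shows up here: each sweep solves d small linear systems over the n samples, so the cost grows polynomially, not exponentially, in the number of random inputs.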
Minimax Rate-optimal Estimation of High-dimensional Covariance Matrices with Incomplete Data.
Cai, T Tony; Zhang, Anru
2016-09-01
Missing data occur frequently in a wide range of applications. In this paper, we consider estimation of high-dimensional covariance matrices in the presence of missing observations under a general missing completely at random model in the sense that the missingness is not dependent on the values of the data. Based on incomplete data, estimators for bandable and sparse covariance matrices are proposed and their theoretical and numerical properties are investigated. Minimax rates of convergence are established under the spectral norm loss and the proposed estimators are shown to be rate-optimal under mild regularity conditions. Simulation studies demonstrate that the estimators perform well numerically. The methods are also illustrated through an application to data from four ovarian cancer studies. The key technical tools developed in this paper are of independent interest and potentially useful for a range of related problems in high-dimensional statistical inference with missing data.
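A baseline version of the idea, estimating each covariance entry from the rows where both coordinates happen to be observed, is easy to state; the paper's estimators build on such generalized sample covariances with additional structure (bandable or sparse targets) and a full minimax analysis, none of which this sketch includes:

```python
import numpy as np

def pairwise_complete_cov(X):
    """Covariance estimate from data with NaNs: entry (j, k) uses only
    the rows where both column j and column k are observed (MCAR)."""
    M = ~np.isnan(X)                              # observation mask
    mu = np.nansum(X, axis=0) / M.sum(axis=0)     # per-column observed means
    Z = np.where(M, X - mu, 0.0)                  # centered, zeros at missing
    Mf = M.astype(float)
    n_jk = Mf.T @ Mf                              # jointly observed counts
    # Z[i, j] * Z[i, k] is nonzero only when both entries were observed,
    # so Z.T @ Z sums centered products over the jointly observed rows.
    return (Z.T @ Z) / np.maximum(n_jk, 1.0)
```

Under missingness completely at random, each entry is an average over its own effective sample size n_jk, which is exactly why the convergence rates in this line of work depend on the observation probabilities.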
Tasca, Giorgio A; Balfour, Louise; Presniak, Michelle D; Bissada, Hany
2012-04-01
We assessed whether an attachment-based treatment, Group Psychodynamic Interpersonal Psychotherapy (GPIP) had a greater impact compared to Group Cognitive Behavioral Therapy (GCBT) on Cold/Distant and Intrusive/Needy interpersonal problems. Ninety-five individuals with Binge Eating Disorder (BED) were randomized to GPIP or GCBT and assessed at pre-, post-, and six months post-treatment. Both therapies resulted in a significant decrease in all eight interpersonal problem subscales except the Nonassertive subscale. GPIP resulted in a greater reduction in the Cold/Distant subscale compared to GCBT, but no differences were found for changes in the Intrusive/Needy subscale. GPIP may be most relevant for those with BED who have Cold/Distant interpersonal problems and attachment avoidance.
Chan, Zenobia C Y
2013-08-01
To explore students' attitudes towards problem-based learning, creativity and critical thinking, and their relevance to nursing education and clinical practice. Critical thinking and creativity are crucial in nursing education. The teaching approach of problem-based learning can help to reduce the difficulties of nurturing problem-solving skills. However, there is little in the literature on how to improve the effectiveness of a problem-based learning lesson by designing appropriate and innovative activities such as composing songs, writing poems and using role plays. Exploratory qualitative study. A sample of 100 students participated in seven semi-structured focus groups, of which two were innovative groups and five were standard groups, adopting three activities in problem-based learning, namely composing songs, writing poems and performing role plays. The data were analysed using thematic analysis. Three themes were extracted from the conversations: 'students' perceptions of problem-based learning', 'students' perceptions of creative thinking' and 'students' perceptions of critical thinking'. Participants generally agreed that critical thinking is more important than creativity in problem-based learning and clinical practice. Participants in the innovative groups perceived a significantly closer relationship between critical thinking and nursing care, and between creativity and nursing care, than the standard groups. Both standard and innovative groups agreed that problem-based learning could significantly increase their critical thinking and problem-solving skills. Further, by composing songs, writing poems and using role plays, the innovative groups had significantly increased their awareness of the relationship among critical thinking, creativity and nursing care. Nursing educators should include more types of creative activities than are typically used in conventional problem-based learning classes. The results could help nurse educators design an appropriate
Hybrid subgroup decomposition method for solving fine-group eigenvalue transport problems
International Nuclear Information System (INIS)
Yasseri, Saam; Rahnema, Farzad
2014-01-01
Highlights: • An acceleration technique for solving fine-group eigenvalue transport problems. • Coarse-group quasi transport theory to solve coarse-group eigenvalue transport problems. • Consistent and inconsistent formulations for coarse-group quasi transport theory. • Computational efficiency improved by a factor of 2 using hybrid SGD for a 1D BWR problem. - Abstract: In this paper, a new hybrid method for solving fine-group eigenvalue transport problems is developed. This method extends the subgroup decomposition method to efficiently couple a new coarse-group quasi transport theory with a set of fixed-source transport decomposition sweeps to obtain the fine-group transport solution. The advantages of the quasi transport theory are its high accuracy, straightforward implementation and numerical stability. The hybrid method is analyzed for a 1D benchmark problem characteristic of boiling water reactors (BWR). It is shown that the method reproduces the fine-group transport solution with high accuracy while increasing the computational efficiency up to 12 times compared to direct fine-group transport calculations
A solution to the collective action problem in between-group conflict with within-group inequality.
Gavrilets, Sergey; Fortunato, Laura
2014-03-26
Conflict with conspecifics from neighbouring groups over territory, mating opportunities and other resources is observed in many social organisms, including humans. Here we investigate the evolutionary origins of social instincts, as shaped by selection resulting from between-group conflict in the presence of a collective action problem. We focus on the effects of the differences between individuals on the evolutionary dynamics. Our theoretical models predict that high-rank individuals, who are able to usurp a disproportionate share of resources in within-group interactions, will act seemingly altruistically in between-group conflict, expending more effort and often having lower reproductive success than their low-rank group-mates. Similar behaviour is expected for individuals with higher motivation, higher strengths or lower costs, or for individuals in a leadership position. Our theory also provides an evolutionary foundation for classical equity theory, and it has implications for the origin of coercive leadership and for reproductive skew theory.
Directory of Open Access Journals (Sweden)
Patimat Alieva
2012-01-01
Full Text Available The article is devoted to the consideration and analysis of the main problems concerning the realization of the potential of socially vulnerable groups of the population. The problem of developing women's and youth employment takes on special acuteness and urgency in outlying districts with a labour-surplus labour market.
Berge, Maria; Danielsson, Anna T.
2013-01-01
The purpose of this article is to explore how a group of four university physics students addressed mechanics problems, in terms of student direction of attention, problem solving strategies and their establishment of and ways of interacting. Adapted from positioning theory, the concepts "positioning" and "storyline" are used to describe and to…
An approximate solution of the two-group critical problem for reflected slabs
International Nuclear Information System (INIS)
Ishiguro, Y.; Garcia, R.D.M.
1977-01-01
A new approximation is developed to solve two-group slab problems involving two media, one of which is infinite. The method consists in combining the P_L approximation with invariance principles. Several numerical results are reported for the critical slab problem.
Utility Function for modeling Group Multicriteria Decision Making problems as games
Alexandre Bevilacqua Leoneti
2016-01-01
To assist in the decision making process, several multicriteria methods have been proposed. However, the existing methods assume a single decision-maker and do not consider decision under risk, which is better addressed by Game Theory. Hence, the aim of this research is to propose a Utility Function that makes it possible to model Group Multicriteria Decision Making problems as games. The advantage of using Game Theory for solving Group Multicriteria Decision Making problems is to evaluate th...
High Dimensional Modulation and MIMO Techniques for Access Networks
DEFF Research Database (Denmark)
Binti Othman, Maisara
Exploration of advanced modulation formats and multiplexing techniques for next generation optical access networks are of interest as promising solutions for delivering multiple services to end-users. This thesis addresses this from two different angles: high dimensionality carrierless...... the capacity per wavelength of the femto-cell network. Bit rate up to 1.59 Gbps with fiber-wireless transmission over 1 m air distance is demonstrated. The results presented in this thesis demonstrate the feasibility of high dimensionality CAP in increasing the number of dimensions and their potentially......) optical access network. 2 X 2 MIMO RoF employing orthogonal frequency division multiplexing (OFDM) with 5.6 GHz RoF signaling over all-vertical cavity surface emitting lasers (VCSEL) WDM passive optical networks (PONs). We have employed polarization division multiplexing (PDM) to further increase...
HSM: Heterogeneous Subspace Mining in High Dimensional Data
DEFF Research Database (Denmark)
Müller, Emmanuel; Assent, Ira; Seidl, Thomas
2009-01-01
Heterogeneous data, i.e. data with both categorical and continuous values, is common in many databases. However, most data mining algorithms assume either continuous or categorical attributes, but not both. In high dimensional data, phenomena due to the "curse of dimensionality" pose additional...... challenges. Usually, due to locally varying relevance of attributes, patterns do not show across the full set of attributes. In this paper we propose HSM, which defines a new pattern model for heterogeneous high dimensional data. It allows data mining in arbitrary subsets of the attributes that are relevant...... for the respective patterns. Based on this model we propose an efficient algorithm, which is aware of the heterogeneity of the attributes. We extend an indexing structure for continuous attributes such that HSM indexing adapts to different attribute types. In our experiments we show that HSM efficiently mines...
Analysis of chaos in high-dimensional wind power system.
Wang, Cong; Zhang, Hongli; Fan, Wenhui; Ma, Ping
2018-01-01
A comprehensive analysis of the chaos of a high-dimensional wind power system is performed in this study. A high-dimensional wind power system is more complex than most power systems. An 11-dimensional wind power system proposed by Huang, which has not been analyzed in previous studies, is investigated. The chaotic dynamics of the wind power system are analyzed when it is subjected to external disturbances, including single-parameter and periodic disturbances, or when its parameters change, and the parameter ranges over which chaos occurs are obtained. The existence of chaos is confirmed by computing the Lyapunov exponents of all state variables and examining the state-variable sequence diagrams. Theoretical analysis and numerical simulations show that chaos occurs in the wind power system when parameter variations or external disturbances reach a certain degree.
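As an illustrative aside, the Lyapunov-exponent test for chaos used in this abstract can be demonstrated on a one-dimensional stand-in, the logistic map, whose largest exponent at r = 4 is known to equal ln 2 (the 11-dimensional wind power system itself is not reproduced here):

```python
import math

# Largest Lyapunov exponent of the logistic map x' = r*x*(1 - x).
# At r = 4 the map is chaotic and the exponent equals ln 2 exactly.
def lyapunov_logistic(r, x0=0.2, n_transient=1_000, n_iter=100_000):
    x = x0
    for _ in range(n_transient):                 # discard the transient
        x = r * x * (1 - x)
    acc = 0.0
    for _ in range(n_iter):
        x = r * x * (1 - x)
        acc += math.log(abs(r * (1 - 2 * x)))    # log |f'(x)|
    return acc / n_iter

lam = lyapunov_logistic(4.0)
print(round(lam, 2))  # ≈ 0.69, i.e. ln 2; positive => chaotic
```

A positive exponent is the same diagnostic the abstract applies, there via the Lyapunov exponents of all eleven state variables.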
Reinforcement learning on slow features of high-dimensional input streams.
Directory of Open Access Journals (Sweden)
Robert Legenstein
Full Text Available Humans and animals are able to learn complex behaviors based on a massive stream of sensory information from different modalities. Early animal studies have identified learning mechanisms that are based on reward and punishment such that animals tend to avoid actions that lead to punishment whereas rewarded actions are reinforced. However, most algorithms for reward-based learning are only applicable if the dimensionality of the state-space is sufficiently small or its structure is sufficiently simple. Therefore, the question arises how the problem of learning on high-dimensional data is solved in the brain. In this article, we propose a biologically plausible generic two-stage learning system that can directly be applied to raw high-dimensional input streams. The system is composed of a hierarchical slow feature analysis (SFA network for preprocessing and a simple neural network on top that is trained based on rewards. We demonstrate by computer simulations that this generic architecture is able to learn quite demanding reinforcement learning tasks on high-dimensional visual input streams in a time that is comparable to the time needed when an explicit highly informative low-dimensional state-space representation is given instead of the high-dimensional visual input. The learning speed of the proposed architecture in a task similar to the Morris water maze task is comparable to that found in experimental studies with rats. This study thus supports the hypothesis that slowness learning is one important unsupervised learning principle utilized in the brain to form efficient state representations for behavioral learning.
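As a hedged aside, the slow-feature stage in this abstract can be illustrated with a minimal linear SFA on a toy mixed signal; the sources, mixing matrix and seed below are invented, and the paper's system is a hierarchical nonlinear network rather than this single linear step:

```python
import numpy as np

# Toy linear SFA: recover the slowest of two mixed sources by whitening the
# observations and taking the direction whose time derivative has the
# smallest variance.
rng = np.random.default_rng(0)
t = np.linspace(0, 2 * np.pi, 2000)
slow, fast = np.sin(t), np.sin(37 * t)              # latent sources
X = np.stack([slow, fast], axis=1) @ rng.normal(size=(2, 2))

X = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(X, full_matrices=False)
Z = U * np.sqrt(len(X))                             # whitened: cov(Z) = I
dZ = np.diff(Z, axis=0)                             # temporal derivative
w = np.linalg.eigh(dZ.T @ dZ)[1][:, 0]              # slowest direction
y = Z @ w                                           # extracted slow feature

corr = abs(np.corrcoef(y, slow)[0, 1])
print(corr > 0.9)  # True: the slow source is recovered up to sign/scale
```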
High-dimensional data in economics and their (robust) analysis
Czech Academy of Sciences Publication Activity Database
Kalina, Jan
2017-01-01
Roč. 12, č. 1 (2017), s. 171-183 ISSN 1452-4864 R&D Projects: GA ČR GA17-07384S Institutional support: RVO:67985556 Keywords : econometrics * high-dimensional data * dimensionality reduction * linear regression * classification analysis * robustness Subject RIV: BA - General Mathematics OBOR OECD: Business and management http://library.utia.cas.cz/separaty/2017/SI/kalina-0474076.pdf
High-dimensional Data in Economics and their (Robust) Analysis
Czech Academy of Sciences Publication Activity Database
Kalina, Jan
2017-01-01
Roč. 12, č. 1 (2017), s. 171-183 ISSN 1452-4864 R&D Projects: GA ČR GA17-07384S Grant - others:GA ČR(CZ) GA13-01930S Institutional support: RVO:67985807 Keywords : econometrics * high-dimensional data * dimensionality reduction * linear regression * classification analysis * robustness Subject RIV: BB - Applied Statistics, Operational Research OBOR OECD: Statistics and probability
Quantifying high dimensional entanglement with two mutually unbiased bases
Directory of Open Access Journals (Sweden)
Paul Erker
2017-07-01
Full Text Available We derive a framework for quantifying entanglement in multipartite and high-dimensional systems using only correlations in two mutually unbiased bases. We also develop such bounds for cases where the second basis is not characterized beyond being unbiased, thus enabling entanglement quantification with minimal assumptions. Furthermore, we show that it is feasible to experimentally implement our method with readily available equipment and even conservative estimates of physical parameters.
International Nuclear Information System (INIS)
Melas, Evangelos
2011-01-01
The Bondi-Metzner-Sachs group B is the common asymptotic group of all asymptotically flat (Lorentzian) space-times, and is the best candidate for the universal symmetry group of General Relativity. However, in quantum gravity, complexified or Euclidean versions of General Relativity are frequently considered. McCarthy has shown that there are forty-two generalizations of B for these versions of the theory and a variety of further ones, either real in any signature, or complex. A firm foundation for quantum gravity can be laid by following through the analogue of Wigner's programme for special relativity with B replacing the Poincaré group P. Here the main results which have been obtained so far in this research programme are reported and the more important open problems are stated.
High dimensional model representation method for fuzzy structural dynamics
Adhikari, S.; Chowdhury, R.; Friswell, M. I.
2011-03-01
Uncertainty propagation in multi-parameter complex structures poses significant computational challenges. This paper investigates the possibility of using the High Dimensional Model Representation (HDMR) approach when uncertain system parameters are modeled using fuzzy variables. In particular, the application of HDMR is proposed for fuzzy finite element analysis of linear dynamical systems. The HDMR expansion is an efficient formulation for high-dimensional mapping in complex systems if the higher-order variable correlations are weak, thereby permitting the input-output relationship to be captured by low-order terms. The computational effort to determine the expansion functions using the α-cut method scales polynomially with the number of variables rather than exponentially. This logic is based on the fundamental assumption underlying the HDMR representation that only low-order correlations among the input variables are likely to have significant impacts upon the outputs for most high-dimensional complex systems. The proposed method is first illustrated for multi-parameter nonlinear mathematical test functions with fuzzy variables. The method is then integrated with a commercial finite element software (ADINA). Modal analysis of a simplified aircraft wing with fuzzy parameters has been used to illustrate the generality of the proposed approach. In the numerical examples, triangular membership functions have been used and the results have been validated against direct Monte Carlo simulations. It is shown that using the proposed HDMR approach, the number of finite element function calls can be reduced without significantly compromising the accuracy.
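The cut-HDMR construction this abstract relies on can be sketched in a few lines; the test function and cut point below are invented for illustration, and only the first-order terms are kept:

```python
import math

# First-order cut-HDMR: f is approximated by its value at a cut point c plus
# one-variable corrections along each coordinate axis. The approximation is
# exact whenever the variables do not interact.
def hdmr_first_order(f, c):
    f0 = f(c)
    def approx(x):
        total = f0
        for i in range(len(c)):
            xi = list(c)
            xi[i] = x[i]
            total += f(xi) - f0        # first-order component function
        return total
    return approx

# Additive (non-interacting) test function: first-order HDMR is exact.
f = lambda x: math.sin(x[0]) + x[1] ** 2 + 3 * x[2]
g = hdmr_first_order(f, [0.0, 0.0, 0.0])
x = [0.3, 1.2, -0.7]
print(abs(f(x) - g(x)) < 1e-12)  # True
```

The key point the abstract makes is visible here: evaluating the first-order expansion needs only one-dimensional slices of f, so the cost grows polynomially, not exponentially, in the number of variables.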
High-dimensional quantum cloning and applications to quantum hacking.
Bouchard, Frédéric; Fickler, Robert; Boyd, Robert W; Karimi, Ebrahim
2017-02-01
Attempts at cloning a quantum system result in the introduction of imperfections in the state of the copies. This is a consequence of the no-cloning theorem, which is a fundamental law of quantum physics and the backbone of security for quantum communications. Although perfect copies are prohibited, a quantum state may be copied with maximal accuracy via various optimal cloning schemes. Optimal quantum cloning, which lies at the border of the physical limit imposed by the no-signaling theorem and the Heisenberg uncertainty principle, has been experimentally realized for low-dimensional photonic states. However, an increase in the dimensionality of quantum systems is greatly beneficial to quantum computation and communication protocols. Nonetheless, no experimental demonstration of optimal cloning machines has hitherto been shown for high-dimensional quantum systems. We perform optimal cloning of high-dimensional photonic states by means of the symmetrization method. We show the universality of our technique by conducting cloning of numerous arbitrary input states and fully characterize our cloning machine by performing quantum state tomography on cloned photons. In addition, a cloning attack on a Bennett and Brassard (BB84) quantum key distribution protocol is experimentally demonstrated to reveal the robustness of high-dimensional states in quantum cryptography.
Uniqueness theorems for variational problems by the method of transformation groups
Reichel, Wolfgang
2004-01-01
A classical problem in the calculus of variations is the investigation of critical points of functionals {\\cal L} on normed spaces V. The present work addresses the question: Under what conditions on the functional {\\cal L} and the underlying space V does {\\cal L} have at most one critical point? A sufficient condition for uniqueness is given: the presence of a "variational sub-symmetry", i.e., a one-parameter group G of transformations of V, which strictly reduces the values of {\\cal L}. The "method of transformation groups" is applied to second-order elliptic boundary value problems on Riemannian manifolds. Further applications include problems of geometric analysis and elasticity.
On the Special Problems in Creating Group Cohesion Within the Prison Setting.
Juda, Daniel P.
1983-01-01
Describes attempts to form a communication group among male and female inmates. The failure of this effort is discussed with emphasis on the special problems and needs of groups in prisons and the lack of insight among the institution's administration and staff. (JAC)
Layer potentials, Kac's problem, and refined Hardy inequality on homogeneous Carnot groups
Ruzhansky, Michael; Suragan, Durvudkhan
2017-01-01
We propose the analogues of boundary layer potentials for the sub-Laplacian on homogeneous Carnot groups/stratified Lie groups and prove continuity results for them. In particular, we show continuity of the single layer potential and establish the Plemelj type jump relations for the double layer potential. We prove sub-Laplacian adapted versions of the Stokes theorem as well as of Green's first and second formulae on homogeneous Carnot groups. Several applications to boundary value problems a...
Efficient and accurate nearest neighbor and closest pair search in high-dimensional space
Tao, Yufei
2010-07-01
Nearest Neighbor (NN) search in high-dimensional space is an important problem in many applications. From the database perspective, a good solution needs to have two properties: (i) it can be easily incorporated in a relational database, and (ii) its query cost should increase sublinearly with the dataset size, regardless of the data and query distributions. Locality-Sensitive Hashing (LSH) is a well-known methodology fulfilling both requirements, but its current implementations either incur expensive space and query cost, or abandon its theoretical guarantee on the quality of query results. Motivated by this, we improve LSH by proposing an access method called the Locality-Sensitive B-tree (LSB-tree) to enable fast, accurate, high-dimensional NN search in relational databases. The combination of several LSB-trees forms an LSB-forest that has strong quality guarantees, and dramatically improves the efficiency of the previous LSH implementation with the same guarantees. In practice, the LSB-tree itself is also an effective index which consumes linear space, supports efficient updates, and provides accurate query results. In our experiments, the LSB-tree was faster than: (i) iDistance (a famous technique for exact NN search) by two orders of magnitude, and (ii) MedRank (a recent approximate method with nontrivial quality guarantees) by one order of magnitude, and meanwhile returned much better results. As a second step, we extend our LSB technique to solve another classic problem, called Closest Pair (CP) search, in high-dimensional space. The long-term challenge for this problem has been to achieve subquadratic running time at very high dimensionalities, which most of the existing solutions fail to do. We show that, using a LSB-forest, CP search can be accomplished in (worst-case) time significantly lower than the quadratic complexity, yet still ensuring very good quality. In practice, accurate answers can be found using just two LSB-trees, thus giving a substantial
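As a hedged illustration of the locality-sensitive hashing idea that the LSB-tree refines (this is not the LSB-tree itself; all parameters and names below are invented):

```python
import numpy as np

# Minimal random-projection LSH for Euclidean NN search: hash each point by
# quantized random projections, then scan only the query's bucket.
rng = np.random.default_rng(1)

def build_table(data, A, b, w):
    table = {}
    for i, x in enumerate(data):
        key = tuple(np.floor((A @ x + b) / w).astype(int))  # bucket id
        table.setdefault(key, []).append(i)
    return table

def query(q, data, A, b, w, table):
    key = tuple(np.floor((A @ q + b) / w).astype(int))
    cand = table.get(key, range(len(data)))  # empty bucket: full scan
    return min(cand, key=lambda i: np.linalg.norm(data[i] - q))

dim, n_proj, w = 16, 8, 2.0
data = rng.normal(size=(1000, dim))
A = rng.normal(size=(n_proj, dim))          # random projection directions
b = rng.uniform(0, w, n_proj)               # random offsets
table = build_table(data, A, b, w)
print(query(data[42], data, A, b, w, table))  # 42: the point finds itself
```

A single hash table like this gives no quality guarantee; the abstract's point is precisely that combining many such structures into an LSB-forest recovers the guarantee at much lower cost.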
On a Consensus Measure in a Group Multi-Criteria Decision Making Problem.
Michele Fedrizzi
2010-01-01
A method for measuring consensus in a group decision problem is presented for the multiple-criteria case. The decision process is supposed to be carried out according to Saaty's Analytic Hierarchy Process, and hence uses pairwise comparison among the alternatives. Using a suitable distance between the experts' judgements, a scale transformation is proposed which allows a fuzzy interpretation of the problem and the definition of a consensus measure by means of fuzzy tools as linguistic quanti...
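A rough sketch of the setting (not the paper's fuzzy measure): each expert's priority vector is derived from a Saaty-style pairwise comparison matrix via the principal eigenvector, and consensus is scored here with a plain L1 distance standing in for the paper's fuzzy tools. The two expert matrices are invented:

```python
import numpy as np

# Priority vector = normalized principal eigenvector of a positive
# reciprocal comparison matrix (Saaty's eigenvector method).
def priorities(M):
    vals, vecs = np.linalg.eig(M)
    v = np.abs(np.real(vecs[:, np.argmax(np.real(vals))]))
    return v / v.sum()

expert1 = np.array([[1, 3, 5], [1/3, 1, 2], [1/5, 1/2, 1]])
expert2 = np.array([[1, 2, 4], [1/2, 1, 2], [1/4, 1/2, 1]])

p1, p2 = priorities(expert1), priorities(expert2)
consensus = 1 - 0.5 * np.abs(p1 - p2).sum()   # 1 = identical judgements
print(round(consensus, 2))  # ≈ 0.92 for these two fairly similar experts
```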
Hawking radiation of a high-dimensional rotating black hole
Energy Technology Data Exchange (ETDEWEB)
Zhao, Ren; Zhang, Lichun; Li, Huaifan; Wu, Yueqin [Shanxi Datong University, Institute of Theoretical Physics, Department of Physics, Datong (China)
2010-01-15
We extend the classical Damour-Ruffini method and discuss the Hawking radiation spectrum of a high-dimensional rotating black hole using a tortoise coordinate transformation defined by taking the back-reaction of the radiation on the spacetime into consideration. Under the condition that energy and angular momentum are conserved, and taking the self-gravitation action into account, we derive Hawking radiation spectra which satisfy the unitarity principle of quantum mechanics. It is shown that the process by which the black hole radiates particles with energy {omega} is a continuous tunneling process. We provide a theoretical basis for further studying the physical mechanism of black-hole radiation. (orig.)
On spectral distribution of high dimensional covariation matrices
DEFF Research Database (Denmark)
Heinrich, Claudio; Podolskij, Mark
In this paper we present the asymptotic theory for spectral distributions of high dimensional covariation matrices of Brownian diffusions. More specifically, we consider N-dimensional Itô integrals with time varying matrix-valued integrands. We observe n equidistant high frequency data points...... of the underlying Brownian diffusion and we assume that N/n -> c in (0,oo). We show that under a certain mixed spectral moment condition the spectral distribution of the empirical covariation matrix converges in distribution almost surely. Our proof relies on the method of moments and applications of graph theory....
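As an illustrative aside, the N/n -> c regime can be simulated directly. With an identity integrand the realized covariation matrix is Wishart-like, and its spectrum spreads over the Marchenko-Pastur support instead of concentrating at 1; the dimensions and seed below are arbitrary:

```python
import numpy as np

# Empirical spectrum of a realized covariation matrix built from n
# high-frequency increments of an N-dimensional Brownian motion.
# With c = N/n = 0.25 the eigenvalues should fill the Marchenko-Pastur
# support [(1 - sqrt(c))**2, (1 + sqrt(c))**2] = [0.25, 2.25].
rng = np.random.default_rng(7)
N, n = 250, 1000
dW = rng.normal(scale=np.sqrt(1 / n), size=(n, N))  # Brownian increments
C = dW.T @ dW                                       # realized covariation
eig = np.linalg.eigvalsh(C)
print(round(eig.min(), 2), round(eig.max(), 2))  # ≈ 0.25 2.25
```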
The additive hazards model with high-dimensional regressors
DEFF Research Database (Denmark)
Martinussen, Torben; Scheike, Thomas
2009-01-01
This paper considers estimation and prediction in the Aalen additive hazards model in the case where the covariate vector is high-dimensional such as gene expression measurements. Some form of dimension reduction of the covariate space is needed to obtain useful statistical analyses. We study...... model. A standard PLS algorithm can also be constructed, but it turns out that the resulting predictor can only be related to the original covariates via time-dependent coefficients. The methods are applied to a breast cancer data set with gene expression recordings and to the well known primary biliary...
High-dimensional quantum channel estimation using classical light
CSIR Research Space (South Africa)
Mabena, Chemist M
2017-11-01
Full Text Available PHYSICAL REVIEW A 96, 053860 (2017). High-dimensional quantum channel estimation using classical light. Chemist M. Mabena, CSIR National Laser Centre, P.O. Box 395, Pretoria 0001, South Africa and School of Physics, University of the Witwatersrand, Johannesburg 2000, South...
McEvoy, Peter M; Burgess, Melissa M; Nathan, Paula
2014-03-01
Cognitive behavioural therapy (CBT) is efficacious, but there remains individual variability in outcomes. Patients' interpersonal problems may affect treatment outcomes, either directly or through a relationship mediated by helping alliance. Interpersonal problems may affect alliance and outcomes differentially in individual and group (CBGT) treatments. The main aim of this study was to investigate the relationship between interpersonal problems, alliance, dropout and outcomes for a clinical sample receiving either individual or group CBT for anxiety or depression in a community clinic. Patients receiving individual CBT (N=84) or CBGT (N=115) completed measures of interpersonal problems, alliance, and disorder-specific symptoms at the commencement and completion of CBT. In CBGT, higher pre-treatment interpersonal problems were associated with increased risk of dropout and poorer outcomes. This relationship was not mediated by alliance. In individual CBT, those who reported higher alliance were more likely to complete treatment, although alliance was not associated with symptom change, and interpersonal problems were not related to attrition or outcome. Allocation to group and individual therapy was non-random, so selection bias may have influenced these results. Some analyses were only powered to detect large effects. Helping alliance ratings were high, so range restriction may have obscured the relationship between helping alliance, attrition and outcomes. Pre-treatment interpersonal problems increase risk of dropout and predict poorer outcomes in CBGT, but not in individual CBT, and this relationship is not mediated by helping alliance. Stronger alliance is associated with treatment completion in individual, but not group, CBT. Copyright © 2014 Elsevier B.V. All rights reserved.
Elucidating high-dimensional cancer hallmark annotation via enriched ontology.
Yan, Shankai; Wong, Ka-Chun
2017-09-01
Cancer hallmark annotation is a promising technique that could discover novel knowledge about cancer from the biomedical literature. The automated annotation of cancer hallmarks could reveal relevant cancer transformation processes in the literature or extract the articles that correspond to the cancer hallmark of interest. It acts as a complementary approach that can retrieve knowledge from massive text information, advancing numerous focused studies in cancer research. Nonetheless, the high-dimensional nature of cancer hallmark annotation imposes a unique challenge. To address the curse of dimensionality, we compared multiple cancer hallmark annotation methods on 1580 PubMed abstracts. Based on the insights, a novel approach, UDT-RF, which makes use of ontological features is proposed. It expands the feature space via the Medical Subject Headings (MeSH) ontology graph and utilizes novel feature selections for elucidating the high-dimensional cancer hallmark annotation space. To demonstrate its effectiveness, state-of-the-art methods are compared and evaluated by a multitude of performance metrics, revealing the full performance spectrum on the full set of cancer hallmarks. Several case studies are conducted, demonstrating how the proposed approach could reveal novel insights into cancers. https://github.com/cskyan/chmannot. Copyright © 2017 Elsevier Inc. All rights reserved.
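The ontology-based feature expansion step can be sketched as follows; the ontology fragment below is a made-up toy DAG for illustration, not actual MeSH content:

```python
# Ontology-based feature expansion: each term in a document also activates
# all of its ancestors in the ontology graph, enlarging the feature space
# the way the MeSH graph is used in the abstract above.
PARENTS = {                      # child -> parents in a toy DAG
    "lung neoplasms": ["neoplasms", "lung diseases"],
    "neoplasms": ["diseases"],
    "lung diseases": ["diseases"],
    "diseases": [],
}

def expand(terms):
    seen, stack = set(), list(terms)
    while stack:                 # walk every ancestor path once
        t = stack.pop()
        if t not in seen:
            seen.add(t)
            stack.extend(PARENTS.get(t, []))
    return seen

print(sorted(expand({"lung neoplasms"})))
# ['diseases', 'lung diseases', 'lung neoplasms', 'neoplasms']
```

The expanded term set would then feed a feature-selection stage and a classifier, which is where the proposed UDT-RF method does its actual work.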
High-dimensional single-cell cancer biology.
Irish, Jonathan M; Doxie, Deon B
2014-01-01
Cancer cells are distinguished from each other and from healthy cells by features that drive clonal evolution and therapy resistance. New advances in high-dimensional flow cytometry make it possible to systematically measure mechanisms of tumor initiation, progression, and therapy resistance on millions of cells from human tumors. Here we describe flow cytometry techniques that enable a "single-cell " view of cancer. High-dimensional techniques like mass cytometry enable multiplexed single-cell analysis of cell identity, clinical biomarkers, signaling network phospho-proteins, transcription factors, and functional readouts of proliferation, cell cycle status, and apoptosis. This capability pairs well with a signaling profiles approach that dissects mechanism by systematically perturbing and measuring many nodes in a signaling network. Single-cell approaches enable study of cellular heterogeneity of primary tissues and turn cell subsets into experimental controls or opportunities for new discovery. Rare populations of stem cells or therapy-resistant cancer cells can be identified and compared to other types of cells within the same sample. In the long term, these techniques will enable tracking of minimal residual disease (MRD) and disease progression. By better understanding biological systems that control development and cell-cell interactions in healthy and diseased contexts, we can learn to program cells to become therapeutic agents or target malignant signaling events to specifically kill cancer cells. Single-cell approaches that provide deep insight into cell signaling and fate decisions will be critical to optimizing the next generation of cancer treatments combining targeted approaches and immunotherapy.
An adaptive ANOVA-based PCKF for high-dimensional nonlinear inverse modeling
Li, Weixuan; Lin, Guang; Zhang, Dongxiao
2014-02-01
The probabilistic collocation-based Kalman filter (PCKF) is a recently developed approach for solving inverse problems. It resembles the ensemble Kalman filter (EnKF) in every aspect, except that it represents and propagates model uncertainty by polynomial chaos expansion (PCE) instead of an ensemble of model realizations. Previous studies have shown PCKF is a more efficient alternative to EnKF for many data assimilation problems. However, the accuracy and efficiency of PCKF depend on an appropriate truncation of the PCE series. Having more polynomial chaos basis functions in the expansion helps to capture uncertainty more accurately but increases computational cost. Selection of basis functions is particularly important for high-dimensional stochastic problems because the number of polynomial chaos basis functions required to represent model uncertainty grows dramatically as the number of input parameters (random dimensions) increases. In classic PCKF algorithms, the PCE basis functions are pre-set based on users' experience. Also, for sequential data assimilation problems, the basis functions kept in the PCE expansion remain unchanged in different Kalman filter loops, which could limit the accuracy and computational efficiency of classic PCKF algorithms. To address this issue, we present a new algorithm that adaptively selects PCE basis functions for different problems and automatically adjusts the number of basis functions in different Kalman filter loops. The algorithm is based on adaptive functional ANOVA (analysis of variance) decomposition, which approximates a high-dimensional function with the summation of a set of low-dimensional functions. Thus, instead of expanding the original model into PCE, we implement the PCE expansion on these low-dimensional functions, which is much less costly. We also propose a new adaptive criterion for ANOVA that is more suited for solving inverse problems. The new algorithm was tested with different examples and demonstrated
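As a hedged aside, the PCE machinery that PCKF truncates can be illustrated in one dimension; this sketch is non-adaptive and is not the paper's algorithm. For f(xi) = exp(xi) with xi ~ N(0,1), the probabilists'-Hermite coefficients are known in closed form, c_n = exp(1/2)/n!, which makes the sketch easy to check:

```python
import math
import numpy as np
from numpy.polynomial import hermite, hermite_e

# 1-D polynomial chaos expansion: project f onto probabilists' Hermite
# polynomials He_n using Gauss-Hermite quadrature for the Gaussian measure.
x, w = hermite.hermgauss(40)                   # nodes/weights for exp(-x^2)

def expect(g):                                 # E[g(xi)], xi ~ N(0,1)
    return (w @ g(np.sqrt(2) * x)) / np.sqrt(np.pi)

def He(n, t):                                  # probabilists' Hermite He_n(t)
    return hermite_e.hermeval(t, np.eye(n + 1)[n])

coeffs = [expect(lambda t, n=n: np.exp(t) * He(n, t)) / math.factorial(n)
          for n in range(9)]
print(round(coeffs[0], 4))  # 1.6487, i.e. E[f] = exp(0.5)
```

The coefficients decay like 1/n!, which is why truncation works at all; the paper's contribution is choosing which multi-dimensional basis functions to keep, adaptively, inside each Kalman filter loop.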
Gustafsson, Peter; Jonsson, Gunnar; Enghag, Margareta
2015-01-01
The problem-solving process is investigated for five groups of students when solving context-rich problems in an introductory physics course included in an engineering programme. Through transcripts of their conversation, the paths in the problem-solving process have been traced and related to a general problem-solving model. All groups exhibit…
Testing problem-solving capacities: differences between individual testing and social group setting.
Krasheninnikova, Anastasia; Schneider, Jutta M
2014-09-01
Testing animals individually in problem-solving tasks limits distractions of the subjects during the test, so that they can fully concentrate on the problem. However, such individual performance may not indicate the problem-solving capacity that is commonly employed in the wild when individuals are faced with a novel problem in their social groups, where the presence of a conspecific influences an individual's behaviour. To assess the validity of data gathered from parrots when tested individually, we compared the performance on patterned-string tasks between parrots tested singly and parrots tested in a social context. We tested two captive groups of orange-winged amazons (Amazona amazonica) with several patterned-string tasks. Despite the differences in the testing environment (singly vs. social context), parrots from both groups performed similarly. However, we found that the willingness to participate in the tasks was significantly higher for the individuals tested in a social context. The study provides further evidence for the crucial influence of social context on an individual's response to a challenging situation such as a problem-solving test.
Arif, Muhammad
2012-06-01
In pattern classification problems, feature extraction is an important step. The quality of features in discriminating different classes plays an important role in pattern classification problems. In real life, pattern classification may require a high-dimensional feature space, and it is impossible to visualize the feature space if its dimension is greater than four. In this paper, we have proposed a Similarity-Dissimilarity plot which can project a high-dimensional space onto a two-dimensional space while retaining the important characteristics required to assess the discrimination quality of the features. The Similarity-Dissimilarity plot can reveal information about the amount of overlap between the features of different classes. Separable data points of different classes will also be visible on the plot and can be classified correctly using an appropriate classifier. Hence, approximate classification accuracy can be predicted. Moreover, it is possible to know with which class the classifier will confuse the misclassified data points. Outlier data points can also be located on the Similarity-Dissimilarity plot. Various examples of synthetic data are used to highlight important characteristics of the proposed plot. Some real-life examples from biomedical data are also used for the analysis. The proposed plot is independent of the number of dimensions of the feature space.
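One plausible rendering of such a plot (the paper's exact construction may differ): for every sample, plot its mean within-class distance against its mean between-class distance, so overlapping samples fall on the wrong side of the diagonal. The synthetic clusters below are invented:

```python
import numpy as np

# Similarity vs dissimilarity coordinates for two well-separated
# Gaussian clusters in a 5-dimensional feature space.
rng = np.random.default_rng(3)
X = np.vstack([rng.normal(0, 1, (50, 5)), rng.normal(4, 1, (50, 5))])
y = np.array([0] * 50 + [1] * 50)

D = np.linalg.norm(X[:, None] - X[None, :], axis=2)   # pairwise distances
same = np.array([D[i, y == y[i]].sum() / ((y == y[i]).sum() - 1)
                 for i in range(len(X))])              # exclude self (d=0)
other = np.array([D[i, y != y[i]].mean() for i in range(len(X))])

# Points with same < other sit on the "separable" side of the diagonal.
frac_separable = (same < other).mean()
print(frac_separable)  # 1.0 for these well-separated clusters
```

Scattering `same` against `other` (e.g. with matplotlib) gives the two-dimensional view; overlapping classes would push points toward or across the diagonal, previewing the classifier's confusions.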
Saving Face: Managing Rapport in a Problem-Based Learning Group
Robinson, Leslie; Harris, Ann; Burton, Rob
2015-01-01
This qualitative study investigated the complex social aspects of communication required for students to participate effectively in Problem-Based Learning and explored how these dynamics are managed. The longitudinal study of a group of first-year undergraduates examined interactions using Rapport Management as a framework to analyse communication…
Information Problem-Solving Skills in Small Virtual Groups and Learning Outcomes
Garcia, Consuelo; Badia, Antoni
2017-01-01
This study investigated the frequency of use of information problem-solving (IPS) skills and its relationship with learning outcomes. During the course of the study, 40 teachers carried out a collaborative IPS task in small virtual groups in a 4-week online training course. The status of IPS skills was collected through self-reports handed in over…
Group problem solving as a different participatory approach to Citizenship Education.
Guérin, Laurence
2017-01-01
Purpose: The main goal of this article is to define and justify group problem solving as an approach to citizenship education. It is demonstrated that the choice of theoretical framework of democracy has consequences for the chosen learning goals, educational approach and learning activities. The…
Choi, Youngsoo; Ro, Heejung
2012-01-01
The development of positive attitudes in team-based work is important in management education. This study investigates hospitality students' attitudes toward group projects by examining instructional factors and team problems. Specifically, we examine how the students' perceptions of project appropriateness, instructors' support, and evaluation…
Directory of Open Access Journals (Sweden)
Aisling T. O'Donnell
2015-08-01
Full Text Available Previous research has demonstrated that the unemployed suffer increased psychological and physical health problems compared to their employed counterparts. Further, unemployment leads to an unwanted new social identity that is stigmatizing, and stigma is known to be a stressor causing psychological and physical health problems. However, it is not yet known whether being stigmatized as an unemployed group member is associated with psychological and physical health in this group. The current study tested the impact of anticipated stigma on psychological distress and physical health problems, operationalized as somatic symptoms, in a volunteer sample of unemployed people. Results revealed that anticipated stigma had a direct effect on both psychological distress and somatic symptoms, such that greater anticipated stigma significantly predicted higher levels of both. Moreover, the direct effect on somatic symptoms became non-significant when psychological distress was taken into account. Thus, to the extent that unemployed participants anticipated experiencing greater stigma, they also reported increased psychological distress, and this psychological distress predicted increased somatic symptoms. Our findings complement and extend the existing literature on the relationships between stigmatized identities, psychological distress and physical health problems, particularly in relation to the unemployed group. This group is important to consider not only theoretically, given the unwanted and transient nature of the identity compared to other stigmatized identities, but also practically, as the findings indicate a need to attend to the perceived valence of the unemployed identity and its effects on psychological and physical health.
Epidemiology of drinking, alcohol use disorders, and related problems in US ethnic minority groups.
Caetano, Raul; Vaeth, Patrice A C; Chartier, Karen G; Mills, Britain A
2014-01-01
This chapter reviews selected epidemiologic studies on drinking and associated problems among US ethnic minorities. Ethnic minorities and the White majority group exhibit important differences in alcohol use and related problems, including alcohol use disorders. Studies show a higher rate of binge drinking, drinking above guidelines, alcohol abuse, and dependence for major ethnic and racial groups, notably, Blacks, Hispanics, and American Indians/Alaskan Natives. Other problems with a higher prevalence in certain minority groups are, for example, cancer (Blacks), cirrhosis (Hispanics), fetal alcohol syndrome (Blacks and American Indians/Alaskan Natives), drinking and driving (Hispanics, American Indians/Alaskan Natives). There are also considerable differences in rates of drinking and problems within certain ethnic groups such as Hispanics, Asian Americans, and American Indians/Alaskan Natives. For instance, among Hispanics, Puerto Ricans and Mexican Americans drink more and have higher rates of disorders such as alcohol abuse and dependence than Cuban Americans. Disparities also affect the trajectory of heavy drinking and the course of alcohol dependence among minorities. Theoretic accounts of these disparities generally attribute them to the historic experience of discrimination and to minority socioeconomic disadvantages at individual and environmental levels. © 2014 Elsevier B.V. All rights reserved.
Fan, Xitao; Wang, Lin
The Monte Carlo study compared the performance of predictive discriminant analysis (PDA) and that of logistic regression (LR) for the two-group classification problem. Prior probabilities were used for classification, but the cost of misclassification was assumed to be equal. The study used a fully crossed three-factor experimental design (with…
Participation in sports groups for patients with cardiac problems : An experimental study
Schaperclaus, G; deGreef, M; Rispens, P; deCalonne, D; Landsman, M; Lie, KI; Oudhof, J
1997-01-01
An experimental study was carried out to determine the influence of participation in Sports Groups for Patients with Cardiac Problems (SPCP) on physical and mental fitness and on risk factor level after myocardial infarction. SPCP members (n = 74; 67 men and 7 women) were compared with Nonsporting
An algebraic approach to the inverse eigenvalue problem for a quantum system with a dynamical group
International Nuclear Information System (INIS)
Wang, S.J.
1993-04-01
An algebraic approach to the inverse eigenvalue problem for a quantum system with a dynamical group is formulated for the first time. The one-dimensional problem is treated explicitly in detail for both finite-dimensional and infinite-dimensional Hilbert spaces. For the finite-dimensional Hilbert space, the su(2) algebraic representation is used, while for the infinite-dimensional Hilbert space, the Heisenberg-Weyl algebraic representation is employed. The Fourier expansion technique is generalized to the generator space, which is suitable for the analysis of irregular spectra. A polynomial operator basis is also used as a complement, which is appropriate for the analysis of some simple Hamiltonians. The proposed new approach is applied to solve the classical inverse Sturm-Liouville problem and to study problems of quantum regular and irregular spectra. (orig.)
High-Dimensional Quantum Information Processing with Linear Optics
Fitzpatrick, Casey A.
Quantum information processing (QIP) is an interdisciplinary field concerned with the development of computers and information processing systems that exploit quantum mechanical properties of nature to carry out their function. QIP systems have become vastly more practical since the turn of the century; today, QIP applications span imaging, cryptographic security, computation, and simulation (quantum systems that mimic other quantum systems). Many important strategies improve quantum versions of classical information system hardware, such as single-photon detectors and quantum repeaters. Another, more abstract, strategy engineers high-dimensional quantum state spaces, so that each successful event carries more information than traditional two-level systems allow. Photonic states in particular bring the added advantages of weak environmental coupling and data transmission near the speed of light, allowing for simpler control and lower system design complexity. In this dissertation, several novel, scalable designs for practical high-dimensional linear-optical QIP systems are presented. First, a correlated-photon imaging scheme is reported that uses orbital angular momentum (OAM) states to detect rotational symmetries in objects and to build images from those interactions. Then, a statistical detection method using chains of OAM superpositions distributed according to the Fibonacci sequence is established and expanded upon. It is shown that the approach gives rise to schemes for sorting, detecting, and generating the recursively defined high-dimensional states on which some quantum cryptographic protocols depend. Finally, an ongoing study based on a generalization of the standard optical multiport for applications in quantum computation and simulation is reported. The architecture allows photons to reverse momentum inside the device, which in turn enables realistic implementation of controllable linear-optical scattering vertices for…
Energy Efficient MAC Scheme for Wireless Sensor Networks with High-Dimensional Data Aggregate
Directory of Open Access Journals (Sweden)
Seokhoon Kim
2015-01-01
Full Text Available This paper presents a novel and sustainable medium access control (MAC) scheme for wireless sensor network (WSN) systems that process high-dimensional aggregated data. Based on a preamble signal and a buffer-threshold analysis, it maximizes the energy efficiency of wireless sensor devices, which have limited energy resources. The proposed group management MAC (GM-MAC) approach not only sets the buffer threshold value of a sensor device to be reciprocal to the preamble signal but also assigns a transmittable group value to each sensor device by using the preamble signal of the sink node. The primary difference between the previous and the proposed approaches is that existing state-of-the-art schemes use duty cycling and sleep mode to save energy in individual sensor devices, whereas the proposed scheme employs group management of sensor devices to maximize the overall energy efficiency of the whole WSN system by minimizing the energy consumption of sensor devices located near the sink node. Performance evaluations show that the proposed scheme outperforms previous schemes in terms of active time of sensor devices, transmission delay, control overhead, and energy consumption. Therefore, the proposed scheme is suitable for sensor devices in a variety of wireless sensor networking environments with high-dimensional data aggregation.
Caillaud, Sabine; Bonnot, Virginie; Ratiu, Eugenia; Krauth-Gruber, Silvia
2016-06-01
This study explores the way groups cope with collective responsibility for ecological problems. The social representations approach was adopted, and the collective symbolic coping model was used as a frame of analysis, integrating collective emotions to enhance the understanding of coping processes. The original feature of this study is that the analysis is at group level. Seven focus groups were conducted with French students. An original use of focus groups was proposed: Discussions were structured to induce feelings of collective responsibility and enable observation of how groups cope with such feelings at various levels (social knowledge; social identities; group dynamics). Two analyses were conducted: Qualitative analysis of participants' use of various kinds of knowledge, social categories and the group dynamics, and lexicometric analysis to reveal how emotions varied during the different discussion phases. Results showed that groups' emotional states moved from negative to positive: They used specific social categories and resorted to shared stereotypes to cope with collective responsibility and maintain the integrity of their worldview. Only then did debate become possible again; it was anchored in the nature-culture dichotomy such that groups switched from group-based to system-based emotions. © 2015 The British Psychological Society.
Problem Based Learning as a Shared Musical Journey – Group Dynamics, Communication and Creativity
Directory of Open Access Journals (Sweden)
Charlotte Lindvang
2015-06-01
Full Text Available The focus of this paper is how we can facilitate problem based learning (PBL) more creatively. We take a closer look at the connection between creative processes and social communication in the PBL group, including how difficulties in the social interplay may hinder creativity. The paper draws on group-dynamic theory and points out the importance of building a reflexive milieu in the group. Musical concepts are used to illustrate the communicative and creative aspects of PBL, and the paper draws an analogy between improvising together and doing project work together. We also discuss the role of the supervisor in a PBL group process. Further, we argue that creativity is rooted deep in our consciousness and connected to our ability to work with a flexible mind. In order to enhance the cohesion as well as the creativity of the group, a model of music listening as a concrete intervention tool in PBL processes is proposed.
Perceived body weight, eating and exercise problems of different groups of women.
Coker, Elise; Telfer, James; Abraham, Suzanne
2012-10-01
To compare prevalence of problems with body weight, eating and exercise (past or present) of female psychiatric inpatients with routine care, gynaecological and obstetric female outpatients, and eating disorder inpatients. One thousand and thirty-eight females aged 18-55 years from routine care (n=99), gynaecological (n=263) and obstetric (n=271) outpatient clinics, and eating disorder (n=223) and general psychiatric units (n=182) participated. Participants self-reported past or current problems with weight, eating and exercise using a short survey. A sub-sample of women completed the Eating and Exercise Examination (EEE) which includes the Quality of Life for Eating Disorders (QOL ED). The prevalence of self-reported problems controlling weight (52%), disordered eating and eating disorders (43%) for the psychiatric patients was significantly greater than for the routine care and gynaecological and obstetrics outpatients. The psychiatric group had a significantly higher mean body mass index (BMI) of 27.3 kg/m(2) (standard deviation (SD)=6.7) and prevalence of self-reported obesity (28%) than the other groups. Treatment of women with psychiatric problems should include assessment and concurrent attention to body weight, eating disorder and exercise problems in association with appropriate medical, psychiatric, psychological and medication treatment of their presenting disorder.
Applying recursive numerical integration techniques for solving high dimensional integrals
International Nuclear Information System (INIS)
Ammon, Andreas; Genz, Alan; Hartung, Tobias; Jansen, Karl; Volmer, Julia; Leoevey, Hernan
2016-11-01
The error scaling for Markov-Chain Monte Carlo (MCMC) techniques with N samples behaves like 1/√(N). This scaling often makes it very time-intensive to reduce the error of computed observables, in particular for applications in lattice QCD. It is therefore highly desirable to have alternative methods at hand which show an improved error scaling. One candidate for such an alternative integration technique is the method of recursive numerical integration (RNI). The basic idea of this method is to use an efficient low-dimensional quadrature rule (usually of Gaussian type) and apply it iteratively to integrate over high-dimensional observables and Boltzmann weights. We present the application of such an algorithm to the topological rotor and the anharmonic oscillator and compare the error scaling to MCMC results. In particular, we demonstrate that the RNI technique shows an error scaling in the number of integration points m that is at least exponential.
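The recursive idea can be illustrated on a toy problem (not the lattice setups of the paper): when the integrand factorizes into nearest-neighbour Boltzmann weights, the d-dimensional integral collapses into repeated one-dimensional Gauss quadratures, i.e. a matrix-vector recursion over the quadrature nodes. The Gaussian kernel below is an assumed stand-in for the actual action.

```python
import math

# 5-point Gauss-Legendre nodes/weights on [-1, 1]
NODES = [-0.9061798459386640, -0.5384693101056831, 0.0,
         0.5384693101056831, 0.9061798459386640]
WEIGHTS = [0.2369268850561891, 0.4786286704993665, 0.5688888888888889,
           0.4786286704993665, 0.2369268850561891]

def kernel(x, y):
    # Toy nearest-neighbour Boltzmann weight (stand-in for the real action)
    return math.exp(-0.5 * (x - y) ** 2)

def rni(d):
    """Z = integral over [-1,1]^d of prod_{i<d} K(x_i, x_{i+1}),
    evaluated by folding in one dimension per step instead of one
    huge d-dimensional sum over all node combinations."""
    v = WEIGHTS[:]                     # quadrature over the last variable
    for _ in range(d - 1):
        v = [WEIGHTS[a] * sum(kernel(NODES[a], NODES[b]) * v[b]
                              for b in range(len(NODES)))
             for a in range(len(NODES))]
    return sum(v)

def brute3():
    # Same quadrature written as an explicit triple sum, for checking
    total = 0.0
    for a in range(5):
        for b in range(5):
            for c in range(5):
                total += (WEIGHTS[a] * WEIGHTS[b] * WEIGHTS[c]
                          * kernel(NODES[a], NODES[b])
                          * kernel(NODES[b], NODES[c]))
    return total

print("recursion matches full sum:", abs(rni(3) - brute3()) < 1e-12)
```

The cost of `rni(d)` grows linearly in d (one small matrix-vector product per dimension), whereas the naive sum grows like 5^d; this is what makes the approach competitive with MCMC for suitable integrands.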
Network Reconstruction From High-Dimensional Ordinary Differential Equations.
Chen, Shizhe; Shojaie, Ali; Witten, Daniela M
2017-01-01
We consider the task of learning a dynamical system from high-dimensional time-course data. For instance, we might wish to estimate a gene regulatory network from gene expression data measured at discrete time points. We model the dynamical system nonparametrically as a system of additive ordinary differential equations. Most existing methods for parameter estimation in ordinary differential equations estimate the derivatives from noisy observations. This is known to be challenging and inefficient. We propose a novel approach that does not involve derivative estimation. We show that the proposed method can consistently recover the true network structure even in high dimensions, and we demonstrate empirical improvement over competing approaches. Supplementary materials for this article are available online.
Quantum correlation of high dimensional system in a dephasing environment
Ji, Yinghua; Ke, Qiang; Hu, Juju
2018-05-01
For a high dimensional spin-S system embedded in a dephasing environment, we theoretically analyze the time evolution of quantum correlation and entanglement via the Frobenius norm and negativity. The quantum correlation dynamics can be considered as a function of the decoherence parameters, including the ratio between the system oscillator frequency ω0 and the reservoir cutoff frequency ωc, and the environment temperature. It is shown that the quantum correlation not only measures the nonclassical correlation of the considered system but also exhibits better robustness against dissipation. In addition, the decoherence presents non-Markovian features and the quantum-correlation freezing phenomenon; the former is much weaker than in a sub-Ohmic or Ohmic thermal reservoir environment.
Reduced order surrogate modelling (ROSM) of high dimensional deterministic simulations
Mitry, Mina
Often, computationally expensive engineering simulations can prohibit the engineering design process. As a result, designers may turn to a less computationally demanding approximate, or surrogate, model to facilitate their design process. However, owing to the curse of dimensionality, classical surrogate models become too computationally expensive for high dimensional data. To address this limitation of classical methods, we develop linear and non-linear Reduced Order Surrogate Modelling (ROSM) techniques. Two algorithms are presented, based on a combination of linear/kernel principal component analysis and radial basis functions. These algorithms are applied to subsonic and transonic aerodynamic data, as well as to a model of a chemical spill in a channel. The results of this thesis show that ROSM can provide a significant computational benefit over classical surrogate modelling, sometimes at the expense of a minor loss in accuracy.
The effect of training and breed group on problem-solving behaviours in dogs.
Marshall-Pescini, Sarah; Frazzi, Chiara; Valsecchi, Paola
2016-05-01
Dogs have become the focus of cognitive studies looking at both their physical and social problem-solving abilities (Bensky et al. in Adv Stud Behav, 45:209-387, 2013), but very little is known about the environmental and inherited factors that may affect these abilities. In the current study, we presented a manipulation task (a puzzle box) and a spatial task (the detour) to 128 dogs belonging to four breed groups: Herding, Mastiff-like, Working and Retrievers (von Holdt et al. in Nature 464:898-902, 2010). Within each group, we tested highly trained and non-trained dogs. Results showed that trained dogs were faster at obtaining the reward in the detour task. In the manipulation task, trained dogs approached the apparatus sooner in the first familiarization trial, but no effect of breed emerged on this variable. Furthermore, regardless of breed, dogs in the trained group spent proportionally more time interacting with the apparatus and were more likely to succeed in the test trial than dogs in the non-trained group, whereas regardless of training, dogs in the working breed group were more likely to succeed than dogs in the retriever and herding breed groups (but not the mastiff-like group). Finally, trained dogs were less likely to look at a person than non-trained dogs during testing, but dogs in the herding group were more likely to do so than dogs in the retriever and working breed groups (but not the mastiff-like group). Overall, the results reveal a strong influence of training experience, but less consistent differences between breed groups, on components thought to affect problem solving.
Larger groups of passerines are more efficient problem solvers in the wild
Morand-Ferron, Julie; Quinn, John L.
2011-01-01
Group living commonly helps organisms face challenging environmental conditions. Although a known phenomenon in humans, recent findings suggest that a benefit of group living in animals generally might be increased innovative problem-solving efficiency. This benefit has never been demonstrated in a natural context, however, and the mechanisms underlying improved efficiency are largely unknown. We examined the problem-solving performance of great and blue tits at automated devices and found that efficiency increased with flock size. This relationship held when restricting the analysis to naive individuals, demonstrating that larger groups increased innovation efficiency. In addition to this effect of naive flock size, the presence of at least one experienced bird increased the frequency of solving, and larger flocks were more likely to contain experienced birds. These findings provide empirical evidence for the “pool of competence” hypothesis in nonhuman animals. The probability of success also differed consistently between individuals, a necessary condition for the pool of competence hypothesis. Solvers had a higher probability of success when foraging with a larger number of companions and when using devices located near rather than further from protective tree cover, suggesting a role for reduced predation risk on problem-solving efficiency. In contrast to traditional group living theory, individuals joining larger flocks benefited from a higher seed intake, suggesting that group living facilitated exploitation of a novel food source through improved problem-solving efficiency. Together our results suggest that both ecological and social factors, through reduced predation risk and increased pool of competence, mediate innovation in natural populations. PMID:21930936
Directory of Open Access Journals (Sweden)
Mr. Evgeny M. Klimenko
2016-06-01
Full Text Available The article considers the problem of intellectuals as a social class, features of the national intellectuals, and the problems that the national intellectuals of the indigenous ethnic groups of Khabarovsk region encountered in the 1990s. These are the problems of preserving national culture and national language, and the difficulties that arose for representatives of the autochthonous population in obtaining a national education. The article investigates the transformation of national culture that took place as Soviet culture gave way to global culture. In writing this paper the author used unpublished archival documentation. The research was carried out with the assistance of the Ministry of Education and Science of the Russian Federation, contract No. 14.Z56.16.5304-MK, project subject: "Regional model of transformation of indigenous small ethnos culture in the conditions of socialist modernization of the Russian Far East in the second half of 1930-1970s".
Shaffer, Patrick; Valsson, Omar; Parrinello, Michele
2016-02-02
The capabilities of molecular simulations have been greatly extended by a number of widely used enhanced sampling methods that facilitate escaping from metastable states and crossing large barriers. Despite these developments there are still many problems which remain out of reach for these methods which has led to a vigorous effort in this area. One of the most important problems that remains unsolved is sampling high-dimensional free-energy landscapes and systems that are not easily described by a small number of collective variables. In this work we demonstrate a new way to compute free-energy landscapes of high dimensionality based on the previously introduced variationally enhanced sampling, and we apply it to the miniprotein chignolin.
Bayesian Multiresolution Variable Selection for Ultra-High Dimensional Neuroimaging Data.
Zhao, Yize; Kang, Jian; Long, Qi
2018-01-01
Ultra-high dimensional variable selection has become increasingly important in the analysis of neuroimaging data. For example, in the Autism Brain Imaging Data Exchange (ABIDE) study, neuroscientists are interested in identifying important biomarkers for early detection of autism spectrum disorder (ASD) using high-resolution brain images that include hundreds of thousands of voxels. However, most existing methods are not feasible for solving this problem due to their extensive computational costs. In this work, we propose a novel multiresolution variable selection procedure under a Bayesian probit regression framework. It recursively uses posterior samples for coarser-scale variable selection to guide posterior inference on finer-scale variable selection, leading to very efficient Markov chain Monte Carlo (MCMC) algorithms. The proposed algorithms are computationally feasible for ultra-high dimensional data. Our model also incorporates two levels of structural information into variable selection using Ising priors: the spatial dependence between voxels and the functional connectivity between anatomical brain regions. Applied to the resting-state functional magnetic resonance imaging (R-fMRI) data in the ABIDE study, our methods identify voxel-level imaging biomarkers highly predictive of ASD that are biologically meaningful and interpretable. Extensive simulations also show that our methods achieve better performance in variable selection compared to existing methods.
Selecting Optimal Feature Set in High-Dimensional Data by Swarm Search
Directory of Open Access Journals (Sweden)
Simon Fong
2013-01-01
Full Text Available Selecting the right set of features from data of high dimensionality for inducing an accurate classification model is a tough computational challenge. It is almost an NP-hard problem, as the number of feature combinations escalates exponentially with the number of features. Unfortunately, in data mining, as well as in other engineering applications and in bioinformatics, some data are described by a long array of features. Many feature subset selection algorithms have been proposed in the past, but not all of them are effective. Since exhaustively trying every possible combination of features by brute force takes seemingly forever, stochastic optimization may be a solution. In this paper, we propose a new feature selection scheme called Swarm Search to find an optimal feature set by using metaheuristics. The advantage of Swarm Search is its flexibility in integrating any classifier into its fitness function and plugging in any metaheuristic algorithm to facilitate heuristic search. Simulation experiments are carried out by testing Swarm Search over several high-dimensional datasets, with different classification algorithms and various metaheuristic algorithms. The comparative results show that Swarm Search is able to attain relatively low classification error rates without shrinking the size of the feature subset to its minimum.
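The abstract does not give Swarm Search's update rules, so the sketch below only conveys the wrapper idea under assumed mechanics: each particle is a binary feature mask that drifts toward the best mask found so far with occasional random flips, and a simple nearest-centroid classifier (an assumed stand-in for any pluggable classifier) serves as the fitness function.

```python
import random

random.seed(1)

def make_data(n=80, p=12, informative=(0, 1, 2)):
    # Synthetic two-class data: only the "informative" features carry signal
    X, y = [], []
    for cls in (0, 1):
        for _ in range(n // 2):
            row = [random.gauss(0, 1) for _ in range(p)]
            if cls:
                for f in informative:
                    row[f] += 2.0
            X.append(row)
            y.append(cls)
    return X, y

def accuracy(mask, Xtr, ytr, Xte, yte):
    # Fitness: nearest-centroid accuracy on a held-out split,
    # using only the features switched on in the mask
    feats = [i for i, bit in enumerate(mask) if bit]
    if not feats:
        return 0.0
    cent = {c: [sum(x[f] for x, yy in zip(Xtr, ytr) if yy == c) / ytr.count(c)
                for f in feats] for c in (0, 1)}
    hits = 0
    for x, yy in zip(Xte, yte):
        pred = min((0, 1), key=lambda c: sum(
            (x[f] - m) ** 2 for f, m in zip(feats, cent[c])))
        hits += pred == yy
    return hits / len(yte)

def swarm_search(Xtr, ytr, Xte, yte, p=12, particles=12, iters=20):
    swarm = [[random.randint(0, 1) for _ in range(p)] for _ in range(particles)]
    best, best_fit = None, -1.0
    for _ in range(iters):
        for mask in swarm:
            fit = accuracy(mask, Xtr, ytr, Xte, yte)
            if fit > best_fit:
                best, best_fit = mask[:], fit
        # Drift each particle toward the global best, with random flips
        for mask in swarm:
            for i in range(p):
                if random.random() < 0.3:
                    mask[i] = best[i]
                elif random.random() < 0.1:
                    mask[i] = 1 - mask[i]
    return best, best_fit

X, y = make_data()
Xtr, ytr, Xte, yte = X[::2], y[::2], X[1::2], y[1::2]
best, fit = swarm_search(Xtr, ytr, Xte, yte)
print("selected features:", [i for i, b in enumerate(best) if b],
      "validation accuracy:", fit)
```

Any classifier could replace `accuracy`, and any metaheuristic (GA, bat, firefly, ...) could replace the drift step; that interchangeability is the flexibility the abstract highlights.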
Using High-Dimensional Image Models to Perform Highly Undetectable Steganography
Pevný, Tomáš; Filler, Tomáš; Bas, Patrick
This paper presents a complete methodology for designing practical and highly undetectable stegosystems for real digital media. The main design principle is to minimize a suitably defined distortion by means of an efficient coding algorithm. The distortion is defined as a weighted difference of extended state-of-the-art feature vectors already used in steganalysis. This allows us to "preserve" the model used by the steganalyst and thus remain undetectable even for large payloads. The framework can be efficiently implemented even when the dimensionality of the feature set used by the embedder is larger than 10^7. The high-dimensional model is necessary to avoid known security weaknesses. Although high-dimensional models can be a problem in steganalysis, we explain why they are acceptable in steganography. As an example, we introduce HUGO, a new embedding algorithm for spatial-domain digital images, and we contrast its performance with LSB matching. On the BOWS2 image database, and in contrast with LSB matching, HUGO allows the embedder to hide a 7× longer message at the same level of security.
Yu, Hualong; Ni, Jun
2014-01-01
Training classifiers on skewed data is a technically challenging task, and it becomes even more difficult when the data is simultaneously high-dimensional; such data appear frequently in the biomedical field. In this study, we address this problem by combining the asymmetric bagging ensemble classifier (asBagging) presented in previous work with an improved random subspace (RS) generation strategy called feature subspace (FSS). Specifically, FSS is a novel method to promote the balance between accuracy and diversity of the base classifiers in asBagging. In view of its strong generalization capability, we adopt the support vector machine (SVM) as the base classifier. Extensive experiments on four benchmark biomedical data sets indicate that the proposed ensemble learning method outperforms many baseline approaches in terms of Accuracy, F-measure, G-mean and AUC evaluation criteria; thus it can be regarded as an effective and efficient tool for high-dimensional and imbalanced biomedical data.
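The asBagging/FSS combination is not fully specified in the abstract, and to stay dependency-free the sketch below swaps the SVM base learner for a nearest-centroid rule; the structure is the assumed one: each bag keeps all minority samples, draws an equal-sized bootstrap of the majority class, trains on a random feature subspace, and prediction is a majority vote over bags.

```python
import random

random.seed(7)

def nearest_centroid_fit(X, y, feats):
    cent = {}
    for c in set(y):
        rows = [x for x, yy in zip(X, y) if yy == c]
        cent[c] = [sum(r[f] for r in rows) / len(rows) for f in feats]
    return cent

def nearest_centroid_predict(cent, feats, x):
    return min(cent, key=lambda c: sum((x[f] - m) ** 2
                                       for f, m in zip(feats, cent[c])))

def as_bagging_fit(X, y, n_bags=11, subspace=4):
    minority = [i for i, yy in enumerate(y) if yy == 1]
    majority = [i for i, yy in enumerate(y) if yy == 0]
    p = len(X[0])
    bags = []
    for _ in range(n_bags):
        # Balance each bag: all minority + equal-size majority bootstrap
        idx = minority + [random.choice(majority) for _ in minority]
        # FSS-style random feature subspace for diversity
        feats = random.sample(range(p), subspace)
        Xb = [X[i] for i in idx]
        yb = [y[i] for i in idx]
        bags.append((feats, nearest_centroid_fit(Xb, yb, feats)))
    return bags

def as_bagging_predict(bags, x):
    votes = sum(nearest_centroid_predict(cent, feats, x) for feats, cent in bags)
    return int(votes > len(bags) / 2)

# Imbalanced toy data: 85 majority vs 15 minority, signal in features 0-2
X, y = [], []
for cls, n in ((0, 85), (1, 15)):
    for _ in range(n):
        row = [random.gauss(0, 1) for _ in range(8)]
        if cls:
            for f in range(3):
                row[f] += 3.0
        X.append(row)
        y.append(cls)
bags = as_bagging_fit(X, y)
# Training-set recall on the minority class (a sketch, not an evaluation)
recall = sum(as_bagging_predict(bags, x) for x, yy in zip(X, y) if yy == 1) / 15
print("minority recall:", recall)
```

Balancing each bag counters the skew, while the random subspaces keep the base learners from all making the same mistakes, which is the accuracy/diversity trade-off FSS is said to manage.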
Zhu, Lingxue; Lei, Jing; Devlin, Bernie; Roeder, Kathryn
2017-09-01
Scientists routinely compare gene expression levels in cases versus controls in part to determine genes associated with a disease. Similarly, detecting case-control differences in co-expression among genes can be critical to understanding complex human diseases; however, statistical methods have been limited by the high-dimensional nature of this problem. In this paper, we construct a sparse-Leading-Eigenvalue-Driven (sLED) test for comparing two high-dimensional covariance matrices. By focusing on the spectrum of the differential matrix, sLED provides a novel perspective that accommodates what we assume to be common, namely sparse and weak signals in gene expression data, and it is closely related to Sparse Principal Component Analysis. We prove that sLED achieves full power asymptotically under mild assumptions, and simulation studies verify that it outperforms other existing procedures under many biologically plausible scenarios. Applying sLED to the largest gene-expression dataset obtained from post-mortem brain tissue from Schizophrenia patients and controls, we provide a novel list of genes implicated in Schizophrenia and reveal intriguing patterns in gene co-expression change for Schizophrenia subjects. We also illustrate that sLED can be generalized to compare other gene-gene "relationship" matrices that are of practical interest, such as the weighted adjacency matrices.
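sLED's published statistic involves a tuned sparsity penalty and asymptotic theory; the sketch below only conveys the flavour, substituting a simple truncated power iteration for the sparse leading eigenvalue of the covariance difference and calibrating it by permuting group labels. All details (threshold `s`, permutation count) are assumptions, not the paper's procedure.

```python
import random

random.seed(3)

def cov(X):
    n, p = len(X), len(X[0])
    mu = [sum(row[j] for row in X) / n for j in range(p)]
    return [[sum((row[i] - mu[i]) * (row[j] - mu[j]) for row in X) / (n - 1)
             for j in range(p)] for i in range(p)]

def sparse_lead(M, s=3, iters=50):
    """Approximate largest-magnitude eigenvalue of symmetric M, with the
    eigenvector hard-thresholded to its s largest coordinates each step."""
    p = len(M)
    v = [1.0 / p] * p
    for _ in range(iters):
        w = [sum(M[i][j] * v[j] for j in range(p)) for i in range(p)]
        keep = sorted(range(p), key=lambda i: -abs(w[i]))[:s]
        w = [w[i] if i in keep else 0.0 for i in range(p)]
        norm = sum(x * x for x in w) ** 0.5 or 1.0
        v = [x / norm for x in w]
    return abs(sum(v[i] * sum(M[i][j] * v[j] for j in range(p))
                   for i in range(p)))

def sled_stat(X1, X2):
    S1, S2 = cov(X1), cov(X2)
    D = [[S1[i][j] - S2[i][j] for j in range(len(S1))] for i in range(len(S1))]
    return sparse_lead(D)

def sled_pvalue(X1, X2, nperm=200):
    # Calibrate the statistic by shuffling group labels
    obs = sled_stat(X1, X2)
    pooled = X1 + X2
    n1 = len(X1)
    hits = 0
    for _ in range(nperm):
        random.shuffle(pooled)
        hits += sled_stat(pooled[:n1], pooled[n1:]) >= obs
    return (hits + 1) / (nperm + 1)

# Group 2 inflates the variance of coordinate 0 fivefold
X1 = [[random.gauss(0, 1) for _ in range(6)] for _ in range(40)]
X2 = [[random.gauss(0, 5 if j == 0 else 1) for j in range(6)] for _ in range(40)]
print("p-value:", sled_pvalue(X1, X2))
```

Because the signal lives in a single matrix entry, the thresholded eigenvector concentrates on it, which is the sparse-and-weak-signal regime the test targets.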
Loss of hyperbolicity changes the number of wave groups in Riemann problems
Vítor Matos; Julio D. Silva; Dan Marchesin
2016-01-01
The main goal of our work is to show that there exists a class of 2×2 Riemann problems for which the solution comprises a single wave group for an open set of initial conditions. This wave group comprises a 1-rarefaction joined to a 2-rarefaction, not by an intermediate state, but by a doubly characteristic shock, 1-left and 2-right characteristic. In order to ensure that perturbations of initial conditions do not destroy the adjacency of the waves, local transversality between a composite curve ...
A Group Theoretic Approach to Metaheuristic Local Search for Partitioning Problems
2005-05-01
A Group Theoretic Approach to Metaheuristic Local Search for Partitioning Problems, by Gary W. Kinney Jr., B.G.S., M.S. Dissertation presented to The University of Texas at Austin, May 2005. Approved for public release; distribution unlimited.
International Nuclear Information System (INIS)
Badalov, S.A.; Filippov, G.F.
1983-01-01
All the basic calculation formulas of an algebraic version of the resonating-group method are derived for the multichannel problem of the scattering of a nucleon by ⁷Li and ⁷Be, taking the α+α channel into account. The spin-orbit and Coulomb interactions are taken into consideration. A procedure enabling exact projection onto the states with the given values of the channel quantum numbers is proposed
HMOs and physician recruiting: a survey of problems and methods among group practice plans.
Fink, R
1981-01-01
A mail survey was conducted among 69 group practice health maintenance organizations (HMOs) to collect information on the recruiting of primary care physicians and specialists. In reporting on difficulties in recruiting physicians for primary care, the medical directors of HMOs indicated that the greatest problem was locating obstetrician-gynecologists. Among specialists, recruiting for orthopedists was reported as being most difficult, although plans that employ neurologists and anesthesiolo...
Locke, Kenneth D; Sayegh, Liliane; Penberthy, J Kim; Weber, Charlotte; Haentjens, Katherine; Turecki, Gustavo
2017-06-01
We assessed severely and persistently depressed patients' interpersonal self-efficacy, problems, and goals, plus changes in interpersonal functioning and depression during 20 weeks of group therapy. Outpatients (32 female, 26 male, mean age = 45 years) completed interpersonal circumplex measures of goals, efficacy, and problems before completing 20 weeks of manualized group therapy, during which we regularly assessed depression and interpersonal style. Compared to normative samples, patients lacked interpersonal agency, including less self-efficacy for expressive/assertive actions; stronger motives to avoid conflict, scorn, and humiliation; and more problems with being too submissive, inhibited, and accommodating. Behavioral Activation and especially Cognitive Behavioral Analysis System of Psychotherapy interventions produced improvements in depression and interpersonal agency, with increases in "agentic and communal" efficacy predicting subsequent decreases in depression. While severely and persistently depressed patients were prone to express maladaptive interpersonal dispositions, over the course of group therapy, they showed increasingly agentic and beneficial patterns of cognitions, motives, and behaviors. © 2016 Wiley Periodicals, Inc.
Grošelj, Petra; Zadnik Stirn, Lidija
2015-09-15
Environmental management problems can be dealt with by combining participatory methods, which make it possible to include various stakeholders in a decision-making process, and multi-criteria methods, which offer a formal model for structuring and solving a problem. This paper proposes a three-phase decision-making approach based on the analytic network process and SWOT (strengths, weaknesses, opportunities and threats) analysis. The approach enables inclusion of various stakeholders or groups of stakeholders in particular stages of decision making. The structure of the proposed approach is composed of a network consisting of an objective cluster, a cluster of strategic goals, a cluster of SWOT factors and a cluster of alternatives. The suggested approach is applied to a management problem of Pohorje, a mountainous area in Slovenia. Stakeholders from sectors that are important for Pohorje (forestry, agriculture, tourism and nature protection agencies) who can offer a wide range of expert knowledge were included in the decision-making process. The results identify the alternative of "sustainable development" as the most appropriate for the development of Pohorje. The application in the paper offers an example of employing the new approach to an environmental management problem. The approach can also be applied to decision-making problems in various other fields. Copyright © 2015 Elsevier Ltd. All rights reserved.
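At the heart of the analytic network process is the conversion of stakeholders' pairwise-comparison judgments into priority weights via the principal eigenvector of a comparison matrix. A minimal single-matrix sketch (the matrix values are hypothetical, and a full ANP additionally models interdependence between clusters, which is omitted here):

```python
import numpy as np

# hypothetical pairwise-comparison matrix for three criteria: entry [i, j]
# encodes how much more important criterion i is judged to be than j
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

vals, vecs = np.linalg.eig(A)
k = np.argmax(vals.real)
w = np.abs(vecs[:, k].real)
w /= w.sum()                           # priority weights, summing to one

n = A.shape[0]
ci = (vals[k].real - n) / (n - 1)      # consistency index (0 = fully consistent)
```

The consistency index measures how far the judgments are from being perfectly transitive; in practice a small value is required before the weights are accepted.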
Directory of Open Access Journals (Sweden)
Laurent Berge
2012-01-01
Full Text Available This paper presents the R package HDclassif which is devoted to the clustering and the discriminant analysis of high-dimensional data. The classification methods proposed in the package result from a new parametrization of the Gaussian mixture model which combines the idea of dimension reduction and model constraints on the covariance matrices. The supervised classification method using this parametrization is called high dimensional discriminant analysis (HDDA). In a similar manner, the associated clustering method is called high dimensional data clustering (HDDC) and uses the expectation-maximization algorithm for inference. In order to correctly fit the data, both methods estimate the specific subspace and the intrinsic dimension of the groups. Due to the constraints on the covariance matrices, the number of parameters to estimate is significantly lower than in other model-based methods and this allows the methods to be stable and efficient in high dimensions. Two introductory examples illustrated with R codes allow the user to discover the hdda and hddc functions. Experiments on simulated and real datasets also compare HDDC and HDDA with existing classification methods on high-dimensional datasets. HDclassif is free software distributed under the General Public License, as part of the R software project.
Dereli-Iman, Esra
2013-01-01
The Social Problem Solving for Child Scale is frequently used abroad to determine children's behavioral problems in their own words and to identify the ways they deal with conflicts encountered in daily life and in interpersonal relationships. The primary purpose of this study was to adapt the Wally Child Social Problem-Solving Detective Game Test. In order to…
A qualitative numerical study of high dimensional dynamical systems
Albers, David James
Since Poincaré, the father of modern mathematical dynamical systems, much effort has been exerted to achieve a qualitative understanding of the physical world via a qualitative understanding of the functions we use to model the physical world. In this thesis, we construct a numerical framework suitable for a qualitative, statistical study of dynamical systems using the space of artificial neural networks. We analyze the dynamics along intervals in parameter space, separating the set of neural networks into roughly four regions: the fixed point to the first bifurcation; the route to chaos; the chaotic region; and a transition region between chaos and finite-state neural networks. The study is primarily with respect to high-dimensional dynamical systems. We make the following general conclusions as the dimension of the dynamical system is increased: the probability of the first bifurcation being of type Neimark-Sacker is greater than ninety percent; the most probable route to chaos is via a cascade of bifurcations of high-period periodic orbits, quasi-periodic orbits, and 2-tori; there exists an interval of parameter space such that hyperbolicity is violated on a countable, Lebesgue measure 0, "increasingly dense" subset; chaos is much more likely to persist with respect to parameter perturbation in the chaotic region of parameter space as the dimension is increased; moreover, as the number of positive Lyapunov exponents is increased, the likelihood that any significant portion of these positive exponents can be perturbed away decreases with increasing dimension. The maximum Kaplan-Yorke dimension and the maximum number of positive Lyapunov exponents increase linearly with dimension. The probability of a dynamical system being chaotic increases exponentially with dimension. The results with respect to the first bifurcation and the route to chaos comment on previous results of Newhouse, Ruelle, Takens, Broer, Chenciner, and Iooss.
Moreover, results regarding the high-dimensional
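The positive Lyapunov exponents discussed above are the quantities whose persistence the thesis studies. As a hedged one-dimensional illustration of how such an exponent is estimated numerically, the logistic map stands in for the neural-network systems; at r = 4 the exact exponent is ln 2.

```python
import numpy as np

# estimate the largest Lyapunov exponent of the logistic map
# x -> r * x * (1 - x); for r = 4 the exact value is ln 2
r, x = 4.0, 0.3
total, n = 0.0, 100_000
for _ in range(n):
    x = r * x * (1 - x)
    total += np.log(abs(r * (1 - 2 * x)))   # log of the local stretching rate
lyap = total / n
```

The exponent is the trajectory average of the log-derivative of the map; in higher dimensions the same idea is applied to tangent vectors that are repeatedly renormalized.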
The quantum-field renormalization group in the problem of a growing phase boundary
International Nuclear Information System (INIS)
Antonov, N.V.; Vasil'ev, A.N.
1995-01-01
Within the quantum-field renormalization-group approach we examine the stochastic equation discussed by S.I. Pavlik in describing a randomly growing phase boundary. We show that, in contrast to Pavlik's assertion, the model is not multiplicatively renormalizable and that its consistent renormalization-group analysis requires introducing an infinite number of counterterms and the respective coupling constants ("charges"). An explicit calculation in the one-loop approximation shows that a two-dimensional surface of renormalization-group fixed points exists in the infinite-dimensional charge space. If the surface contains an infrared stability region, the problem allows for scaling with the nonuniversal critical dimensionalities of the height of the phase boundary and time, δ_h and δ_t, which satisfy the exact relationship 2δ_h = δ_t + d, where d is the dimensionality of the phase boundary. 23 refs., 1 tab
Uncertainty dimensions of information behaviour in a group based problem solving context
DEFF Research Database (Denmark)
Hyldegård, Jette
2009-01-01
This paper presents a study of uncertainty dimensions of information behaviour in a group based problem solving context. After a presentation of the cognitive uncertainty dimension underlying Kuhlthau's ISP-model, uncertainty factors associated with personality, the work task situation and social...... members' experiences of uncertainty differ from the individual information seeker in Kuhlthau's ISP-model, and how this experience may be related to personal, work task and social factors. A number of methods have been employed to collect data on each group member during the assignment process......: a demographic survey, a personality test, 3 process surveys, 3 diaries and 3 interviews. It was found that group members' experiences of uncertainty did not correspond with the ISP-model in that other factors beyond the mere information searching process seemed to intermingle with the complex process...
Ihm, Jung-Joon; An, So-Youn; Seo, Deog-Gyu
2017-06-01
The aim of this study was to determine whether the personality types of dental students and their group dynamics were linked to their problem-based learning (PBL) performance. The Myers-Briggs Type Indicator (MBTI) instrument was used with 263 dental students enrolled in Seoul National University School of Dentistry from 2011 to 2013; the students had participated in PBL in their first year. A four-session PBL setting was designed to analyze how individual personality types and the diversity of their small groups were associated with PBL performance. Overall, the results showed that the personality type most prominently associated with PBL performance was Judging. As a group became more diverse in its constituent personality characteristics, there was a tendency for the group to be ranked higher in terms of PBL performance. In particular, the overperforming groups clustered around three major profiles: Extraverted Intuitive Thinking Judging (ENTJ), Introverted Sensing Thinking Judging (ISTJ), and Extraverted Sensing Thinking Judging (ESTJ). Personality analysis would be beneficial for dental faculty members in order for them to understand the extent to which cooperative learning would work smoothly, especially when considering group personalities.
High-dimensional quantum cryptography with twisted light
International Nuclear Information System (INIS)
Mirhosseini, Mohammad; Magaña-Loaiza, Omar S; O’Sullivan, Malcolm N; Rodenburg, Brandon; Malik, Mehul; Boyd, Robert W; Lavery, Martin P J; Padgett, Miles J; Gauthier, Daniel J
2015-01-01
Quantum key distribution (QKD) systems often rely on polarization of light for encoding, thus limiting the amount of information that can be sent per photon and placing tight bounds on the error rates that such a system can tolerate. Here we describe a proof-of-principle experiment that indicates the feasibility of high-dimensional QKD based on the transverse structure of the light field allowing for the transfer of more than 1 bit per photon. Our implementation uses the orbital angular momentum (OAM) of photons and the corresponding mutually unbiased basis of angular position (ANG). Our experiment uses a digital micro-mirror device for the rapid generation of OAM and ANG modes at 4 kHz, and a mode sorter capable of sorting single photons based on their OAM and ANG content with a separation efficiency of 93%. Through the use of a seven-dimensional alphabet encoded in the OAM and ANG bases, we achieve a channel capacity of 2.05 bits per sifted photon. Our experiment demonstrates that, in addition to having an increased information capacity, multilevel QKD systems based on spatial-mode encoding can be more resilient against intercept-resend eavesdropping attacks. (paper)
Efficient Smoothed Concomitant Lasso Estimation for High Dimensional Regression
Ndiaye, Eugene; Fercoq, Olivier; Gramfort, Alexandre; Leclère, Vincent; Salmon, Joseph
2017-10-01
In high-dimensional settings, sparse structures are crucial for efficiency, in terms of memory, computation and performance. It is customary to consider the ℓ1 penalty to enforce sparsity in such scenarios. Sparsity-enforcing methods, the Lasso being a canonical example, are popular candidates to address high dimension. For efficiency, they rely on tuning a parameter trading data fitting versus sparsity. For the Lasso theory to hold, this tuning parameter should be proportional to the noise level, yet the latter is often unknown in practice. A possible remedy is to jointly optimize over the regression parameter as well as over the noise level. This has been considered under several names in the literature: Scaled Lasso, Square-root Lasso, and Concomitant Lasso estimation, for instance, and could be of interest for uncertainty quantification. In this work, after illustrating numerical difficulties with the Concomitant Lasso formulation, we propose a modification that we coin the Smoothed Concomitant Lasso, aimed at increasing numerical stability. We propose an efficient and accurate solver leading to a computational cost no more expensive than that of the Lasso. We leverage standard ingredients behind the success of fast Lasso solvers: a coordinate descent algorithm, combined with safe screening rules to achieve speed efficiency by eliminating irrelevant features early.
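The joint optimization over the regression vector and the noise level can be sketched as an alternating scheme, with a lower bound on sigma playing the role of the smoothing. This is a toy reimplementation under assumed tuning constants (alpha0, sigma_min), not the authors' solver, and it uses plain coordinate descent without the safe screening rules:

```python
import numpy as np

rng = np.random.default_rng(2)

def lasso_cd(X, y, alpha, n_iter=200):
    # cyclic coordinate descent for (1/2n)||y - Xb||^2 + alpha * ||b||_1
    n, p = X.shape
    beta = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0)
    r = y.copy()
    for _ in range(n_iter):
        for j in range(p):
            r += X[:, j] * beta[j]                 # add coordinate back
            rho = X[:, j] @ r
            beta[j] = np.sign(rho) * max(abs(rho) - n * alpha, 0.0) / col_sq[j]
            r -= X[:, j] * beta[j]
    return beta

def smoothed_concomitant(X, y, alpha0=0.3, sigma_min=1e-3, n_outer=10):
    # alternate between the regression vector and the noise level; the
    # lower bound sigma_min is the "smoothing" keeping sigma away from 0
    n = len(y)
    sigma = max(np.std(y), sigma_min)
    for _ in range(n_outer):
        beta = lasso_cd(X, y, alpha0 * sigma)
        sigma = max(np.linalg.norm(y - X @ beta) / np.sqrt(n), sigma_min)
    return beta, sigma

n, p = 100, 50
X = rng.normal(size=(n, p))
beta_true = np.zeros(p)
beta_true[:3] = [3.0, -2.0, 1.5]
y = X @ beta_true + rng.normal(scale=0.5, size=n)
beta_hat, sigma_hat = smoothed_concomitant(X, y)
```

Because the effective regularization alpha0 * sigma tracks the current noise estimate, the tuning parameter adapts to the unknown noise level instead of being fixed in advance.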
Bayesian Subset Modeling for High-Dimensional Generalized Linear Models
Liang, Faming
2013-06-01
This article presents a new prior setting for high-dimensional generalized linear models, which leads to a Bayesian subset regression (BSR) with the maximum a posteriori model approximately equivalent to the minimum extended Bayesian information criterion model. The consistency of the resulting posterior is established under mild conditions. Further, a variable screening procedure is proposed based on the marginal inclusion probability, which shares the same properties of sure screening and consistency with the existing sure independence screening (SIS) and iterative sure independence screening (ISIS) procedures. However, since the proposed procedure makes use of joint information from all predictors, it generally outperforms SIS and ISIS in real applications. This article also makes extensive comparisons of BSR with the popular penalized likelihood methods, including Lasso, elastic net, SIS, and ISIS. The numerical results indicate that BSR can generally outperform the penalized likelihood methods. The models selected by BSR tend to be sparser and, more importantly, of higher prediction ability. In addition, the performance of the penalized likelihood methods tends to deteriorate as the number of predictors increases, while this is not significant for BSR. Supplementary materials for this article are available online. © 2013 American Statistical Association.
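The extended Bayesian information criterion that the BSR posterior mode approximates can be computed directly for small problems. A hedged sketch with a brute-force search over small subsets (the data are synthetic and the gamma value is an assumption):

```python
import numpy as np
from itertools import combinations
from math import lgamma, log

rng = np.random.default_rng(3)

def ebic(X, y, subset, gamma=1.0):
    # extended BIC: ordinary BIC plus 2*gamma*log C(p, k), penalizing
    # the number of competing subsets of the same size k
    n, p = X.shape
    k = len(subset)
    if k == 0:
        rss = float(y @ y)
    else:
        Xs = X[:, list(subset)]
        coef, *_ = np.linalg.lstsq(Xs, y, rcond=None)
        rss = float(((y - Xs @ coef) ** 2).sum())
    log_binom = lgamma(p + 1) - lgamma(k + 1) - lgamma(p - k + 1)
    return n * log(rss / n) + k * log(n) + 2 * gamma * log_binom

n, p = 80, 10
X = rng.normal(size=(n, p))
y = 2.0 * X[:, 0] - 1.5 * X[:, 3] + rng.normal(size=n)

# brute-force search over all subsets of size <= 2
subsets = [s for k in range(3) for s in combinations(range(p), k)]
best = min(subsets, key=lambda s: ebic(X, y, s))
```

Exhaustive search is only feasible for tiny p; the article's MCMC-based BSR exists precisely because this enumeration explodes combinatorially in high dimensions.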
The literary uses of high-dimensional space
Directory of Open Access Journals (Sweden)
Ted Underwood
2015-12-01
Full Text Available Debates over “Big Data” shed more heat than light in the humanities, because the term ascribes new importance to statistical methods without explaining how those methods have changed. What we badly need instead is a conversation about the substantive innovations that have made statistical modeling useful for disciplines where, in the past, it truly wasn’t. These innovations are partly technical, but more fundamentally expressed in what Leo Breiman calls a new “culture” of statistical modeling. Where 20th-century methods often required humanists to squeeze our unstructured texts, sounds, or images into some special-purpose data model, new methods can handle unstructured evidence more directly by modeling it in a high-dimensional space. This opens a range of research opportunities that humanists have barely begun to discuss. To date, topic modeling has received most attention, but in the long run, supervised predictive models may be even more important. I sketch their potential by describing how Jordan Sellers and I have begun to model poetic distinction in the long 19th century—revealing an arc of gradual change much longer than received literary histories would lead us to expect.
Directory of Open Access Journals (Sweden)
Alessandra Turini Bolsoni-Silva
2010-01-01
Full Text Available Negative parental practices may influence the onset and maintenance of externalizing behavior problems, and positive parenting seem to improve children's social skills and reduce behavior problems. The objective of the present study was to describe the effects of an intervention designed to foster parents' social skills related to upbringing practices in order to reduce externalizing problems in children aged 4 to 6 years. Thirteen mothers and two care taker grandmothers took part in the study with an average of four participants per group. To assess intervention effects, we used a repeated measure design with control, pre, and post intervention assessments. Instruments used were: (a An interview schedule that evaluates the social interactions between parents and children functionally, considering each pair of child¿s and parent's behaviors as context for one another; (b A Social Skills Inventory; (c Child Behavior Checklist - CBCL. Intervention was effective in improving parent general social skills, decreasing negative parental practices and decreasing child behavior problems.
A Cure for Variance Inflation in High Dimensional Kernel Principal Component Analysis
DEFF Research Database (Denmark)
Abrahamsen, Trine Julie; Hansen, Lars Kai
2011-01-01
Small sample high-dimensional principal component analysis (PCA) suffers from variance inflation and lack of generalizability. It has earlier been pointed out that a simple leave-one-out variance renormalization scheme can cure the problem. In this paper we generalize the cure in two directions: first, we propose a computationally less intensive approximate leave-one-out estimator; secondly, we show that variance inflation is also present in kernel principal component analysis (kPCA) and we provide a non-parametric renormalization scheme which can quite efficiently restore generalizability in kPCA. As for PCA, our analysis also suggests a simplified approximate expression. © 2011 Trine J. Abrahamsen and Lars K. Hansen.
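The variance inflation being cured can be reproduced in a few lines: with far fewer samples than dimensions, the leading principal direction of pure noise carries a training variance far above the true value, which held-out data reveal. A minimal demonstration (the sizes are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(4)

p, n_train, n_test = 200, 20, 1000
# isotropic Gaussian noise: every direction truly has unit variance
train = rng.normal(size=(n_train, p))
test = rng.normal(size=(n_test, p))

mu = train.mean(axis=0)
_, _, Vt = np.linalg.svd(train - mu, full_matrices=False)
v1 = Vt[0]                                    # leading principal direction

var_train = ((train - mu) @ v1).var(ddof=1)   # badly inflated estimate
var_test = ((test - mu) @ v1).var(ddof=1)     # close to the true value 1
```

The gap between the two variance estimates is exactly what a leave-one-out renormalization scheme is designed to remove.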
Characterization of differentially expressed genes using high-dimensional co-expression networks
DEFF Research Database (Denmark)
Coelho Goncalves de Abreu, Gabriel; Labouriau, Rodrigo S.
2010-01-01
We present a technique to characterize differentially expressed genes in terms of their position in a high-dimensional co-expression network. The set-up of Gaussian graphical models is used to construct representations of the co-expression network in such a way that redundancy and the propagation...... that allow to make effective inference in problems with high degree of complexity (e.g. several thousands of genes) and small number of observations (e.g. 10-100) as typically occurs in high throughput gene expression studies. Taking advantage of the internal structure of decomposable graphical models, we...... construct a compact representation of the co-expression network that allows to identify the regions with high concentration of differentially expressed genes. It is argued that differentially expressed genes located in highly interconnected regions of the co-expression network are less informative than...
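In the Gaussian graphical model set-up used here, conditional independence between genes corresponds to zeros in the precision matrix, i.e. vanishing partial correlations. A small sketch with a three-variable chain (the numbers are illustrative; real co-expression networks need the decomposable-model machinery the paper describes to cope with thousands of genes and few samples):

```python
import numpy as np

rng = np.random.default_rng(5)

# chain-structured Gaussian graphical model 1 - 2 - 3: variables 1 and 3
# are marginally correlated but conditionally independent given 2
prec_true = np.array([[1.0, 0.4, 0.0],
                      [0.4, 1.0, 0.4],
                      [0.0, 0.4, 1.0]])
X = rng.multivariate_normal(np.zeros(3), np.linalg.inv(prec_true), size=5000)

K = np.linalg.inv(np.cov(X, rowvar=False))   # estimated precision matrix
d = np.sqrt(np.diag(K))
pcor = -K / np.outer(d, d)                   # partial correlations
np.fill_diagonal(pcor, 1.0)
```

The first and third variables show a clear marginal correlation (transmitted through the middle node) while their partial correlation is near zero, which is the missing-edge signature the co-expression network encodes.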
Applications of the renormalization group approach to problems in quantum field theory
International Nuclear Information System (INIS)
Renken, R.L.
1985-01-01
The presence of fluctuations at many scales of length complicates theories of quantum fields. However, interest is often focused on the low-energy consequences of a theory rather than the short distance fluctuations. In the renormalization-group approach, one takes advantage of this by constructing an effective theory with identical low-energy behavior, but without short distance fluctuations. Three problems of this type are studied here. In chapter 1, an effective lagrangian is used to compute the low-energy consequences of theories of technicolor. Corrections to weak-interaction parameters are found to be small, but conceivably measurable. In chapter 2, the renormalization group approach is applied to second order phase transitions in lattice gauge theories such as the deconfining transition in the U(1) theory. A practical procedure for studying the critical behavior based on Monte Carlo renormalization group methods is described in detail; no numerical results are presented. Chapter 3 addresses the problem of computing the low-energy behavior of atoms directly from Schrodinger's equation. A straightforward approach is described, but is found to be impractical
Win, Ni Ni; Nadarajah, Vishna Devi V; Win, Daw Khin
2015-01-01
Problem-based learning (PBL) is usually conducted in small-group learning sessions with approximately eight students per facilitator. In this study, we implemented a modified version of PBL involving collaborative groups in an undergraduate chiropractic program and assessed its pedagogical effectiveness. This study was conducted at the International Medical University, Kuala Lumpur, Malaysia, and involved the 2012 chiropractic student cohort. Six PBL cases were provided to chiropractic students, consisting of three PBL cases for which learning resources were provided and another three PBL cases for which learning resources were not provided. Group discussions were not continuously supervised, since only one facilitator was present. The students' perceptions of PBL in collaborative groups were assessed with a questionnaire that was divided into three domains: motivation, cognitive skills, and perceived pressure to work. Thirty of the 31 students (97%) participated in the study. PBL in collaborative groups was significantly associated with positive responses regarding students' motivation, cognitive skills, and perceived pressure to work, and providing learning resources significantly increased motivation and cognitive skills relative to the cases without learning resources.
Joint Adaptive Mean-Variance Regularization and Variance Stabilization of High Dimensional Data.
Dazard, Jean-Eudes; Rao, J Sunil
2012-07-01
The paper addresses a common problem in the analysis of high-dimensional high-throughput "omics" data, which is parameter estimation across multiple variables in a set of data where the number of variables is much larger than the sample size. Among the problems posed by this type of data are that variable-specific estimators of variances are not reliable and variable-wise test statistics have low power, both due to a lack of degrees of freedom. In addition, it has been observed in this type of data that the variance increases as a function of the mean. We introduce a non-parametric adaptive regularization procedure that is innovative in that: (i) it employs a novel "similarity statistic"-based clustering technique to generate local-pooled or regularized shrinkage estimators of population parameters, and (ii) the regularization is done jointly on population moments, benefiting from C. Stein's result on inadmissibility, which implies that the usual sample variance estimator is improved by a shrinkage estimator using information contained in the sample mean. From these joint regularized shrinkage estimators, we derive regularized t-like statistics and show in simulation studies that they offer more statistical power in hypothesis testing than their standard sample counterparts, or regular common-value shrinkage estimators, or when the information contained in the sample mean is simply ignored. Finally, we show that these estimators feature interesting properties of variance stabilization and normalization that can be used for preprocessing high-dimensional multivariate data. The method is available as an R package, called 'MVR' ('Mean-Variance Regularization'), downloadable from the CRAN website.
Group problem-solving skills training for self-harm: randomised controlled trial
McAuliffe, Carmel; McLeavey, Breda C.; Fitzgerald, Anthony P.; Corcoran, Paul; Carroll, Bernie; Ryan, Louise; Fitzgerald, Eva; O'Regan, Mary; Mulqueen, Jillian; Arensman, Ella
2014-01-01
Background: Rates of self-harm are high and have recently increased. This trend and the repetitive nature of self-harm pose a significant challenge to mental health services. Aims: To determine the efficacy of a structured group problem-solving skills training (PST) programme as an intervention approach for self-harm in addition to treatment as usual (TAU) as offered by mental health services. Method: A total of 433 participants (aged 18-64 years) were randomly assigned to TAU plus PST or TAU...
The criticality problem in reflected slab type reactor in the two-group transport theory
International Nuclear Information System (INIS)
Garcia, R.D.M.
1978-01-01
The criticality problem in a reflected slab-type reactor is solved for the first time in two-group neutron transport theory by singular eigenfunction expansion. The singular integrals obtained through the continuity conditions of the angular distributions at the interface are regularized by a recently proposed method. The result is a coupled system of regular integral equations for the expansion coefficients; this system is solved by an ordinary iterative method. Numerical results that can be utilized as a comparative standard for approximation methods are presented [pt
Convergence Analysis of the Preconditioned Group Splitting Methods in Boundary Value Problems
Directory of Open Access Journals (Sweden)
Norhashidah Hj. Mohd Ali
2012-01-01
Full Text Available The construction of a specific splitting-type preconditioner in block formulation applied to a class of group relaxation iterative methods derived from the centred and rotated (skewed) finite difference approximations has been shown to improve the convergence rates of these methods. In this paper, we present some theoretical convergence analysis of this preconditioner specifically applied to the linear systems resulting from these group iterative schemes in solving an elliptic boundary value problem. We will theoretically show the relationship between the spectral radii of the iteration matrices of the preconditioned methods, which affects the rate of convergence of these methods. We will also show that the spectral radius of the preconditioned matrices is smaller than that of their unpreconditioned counterparts if the relaxation parameter is in a certain optimum range. Numerical experiments will also be presented to confirm the agreement between the theoretical and the experimental results.
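The role the spectral radius plays can be checked numerically on a model problem. The sketch below uses plain (unpreconditioned) Jacobi iteration on the 1D Poisson equation as a simple stand-in for the group schemes analysed in the paper; the observed error-reduction factor settles at the spectral radius of the iteration matrix:

```python
import numpy as np

# 1D Poisson model problem: tridiagonal matrix from centred differences
m = 20
A = 2 * np.eye(m) - np.eye(m, k=1) - np.eye(m, k=-1)
b = np.ones(m)

D_inv = np.diag(1.0 / np.diag(A))
M = np.eye(m) - D_inv @ A                  # Jacobi iteration matrix
rho = max(abs(np.linalg.eigvals(M)))       # spectral radius < 1 => convergence

x_true = np.linalg.solve(A, b)
x = np.zeros(m)
errs = []
for _ in range(500):
    x = M @ x + D_inv @ b                  # one relaxation sweep
    errs.append(np.linalg.norm(x - x_true))
```

A preconditioner of the kind analysed in the paper improves convergence precisely by shrinking this spectral radius.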
Genuinely high-dimensional nonlocality optimized by complementary measurements
International Nuclear Information System (INIS)
Lim, James; Ryu, Junghee; Yoo, Seokwon; Lee, Changhyoup; Bang, Jeongho; Lee, Jinhyoung
2010-01-01
Qubits exhibit extreme nonlocality when their state is maximally entangled and this is observed by mutually unbiased local measurements. This criterion does not hold for the Bell inequalities of high-dimensional systems (qudits), recently proposed by Collins-Gisin-Linden-Massar-Popescu and Son-Lee-Kim. Taking an alternative approach, called the quantum-to-classical approach, we derive a series of Bell inequalities for qudits that satisfy the criterion as for the qubits. In the derivation each d-dimensional subsystem is assumed to be measured by one of d possible measurements with d being a prime integer. By applying to two qubits (d=2), we find that a derived inequality is reduced to the Clauser-Horne-Shimony-Holt inequality when the degree of nonlocality is optimized over all the possible states and local observables. Further applying to two and three qutrits (d=3), we find Bell inequalities that are violated for the three-dimensionally entangled states but are not violated by any two-dimensionally entangled states. In other words, the inequalities discriminate three-dimensional (3D) entanglement from two-dimensional (2D) entanglement and in this sense they are genuinely 3D. In addition, for the two qutrits we give a quantitative description of the relations among the three degrees of complementarity, entanglement and nonlocality. It is shown that the degree of complementarity jumps abruptly to very close to its maximum as nonlocality starts appearing. These characteristics imply that complementarity plays a more significant role in the present inequality compared with the previously proposed inequality.
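For the d = 2 case mentioned above, the reduction to the Clauser-Horne-Shimony-Holt inequality can be verified directly: a maximally entangled pair measured in mutually unbiased-style settings reaches the CHSH value 2√2, beyond the classical bound of 2. A quick numerical check with standard textbook settings (not the paper's derivation):

```python
import numpy as np

# Pauli operators and a maximally entangled two-qubit state
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
phi = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)   # (|00> + |11>)/sqrt(2)

def corr(a, b):
    # expectation value <phi| a (x) b |phi>
    return float(np.real(phi.conj() @ np.kron(a, b) @ phi))

A0, A1 = sz, sx                        # Alice's two settings
B0 = (sz + sx) / np.sqrt(2)            # Bob's settings, rotated by 45 degrees
B1 = (sz - sx) / np.sqrt(2)

S = corr(A0, B0) + corr(A0, B1) + corr(A1, B0) - corr(A1, B1)
```

Any local hidden-variable model obeys S <= 2, so the computed value 2√2 witnesses the extreme qubit nonlocality the abstract takes as its starting point.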
Li, Yanming; Nan, Bin; Zhu, Ji
2015-06-01
We propose a multivariate sparse group lasso variable selection and estimation method for data with high-dimensional predictors as well as high-dimensional response variables. The method is carried out through a penalized multivariate multiple linear regression model with an arbitrary group structure for the regression coefficient matrix. It suits many biology studies well in detecting associations between multiple traits and multiple predictors, with each trait and each predictor embedded in some biological functional groups such as genes, pathways or brain regions. The method is able to effectively remove unimportant groups as well as unimportant individual coefficients within important groups, particularly for large p small n problems, and is flexible in handling various complex group structures such as overlapping or nested or multilevel hierarchical structures. The method is evaluated through extensive simulations with comparisons to the conventional lasso and group lasso methods, and is applied to an eQTL association study. © 2015, The International Biometric Society.
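The group-wise removal of unimportant coefficient blocks comes from the proximal (block soft-thresholding) operator of the group penalty. A minimal sketch of that operator alone (the full sparse group lasso also applies an element-wise soft threshold inside each group, omitted here; the values are illustrative):

```python
import numpy as np

def group_soft_threshold(beta, groups, lam):
    # proximal operator of the group-lasso penalty: each group is shrunk
    # toward zero and removed entirely once its norm drops to lam or below
    out = beta.astype(float).copy()
    for g in groups:
        norm = np.linalg.norm(out[g])
        out[g] = 0.0 if norm <= lam else out[g] * (1.0 - lam / norm)
    return out

beta = np.array([3.0, 4.0, 0.3, -0.2, 1.0])
groups = [[0, 1], [2, 3], [4]]
shrunk = group_soft_threshold(beta, groups, lam=1.0)
```

The first group (norm 5) survives with its direction preserved, while the two weak groups are zeroed out wholesale, which is how entire genes or pathways drop from the model.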
3D overlapped grouping GA for optimum 2D guillotine cutting stock problem
Directory of Open Access Journals (Sweden)
Maged R. Rostom
2014-09-01
Full Text Available The cutting stock problem (CSP) is one of the significant optimization problems in operations research and has gained a lot of attention for increasing efficiency in industrial engineering, logistics and manufacturing. In this paper, new methodologies for optimally solving the cutting stock problem are presented. A modification is proposed to the existing heuristic methods with a new hybrid 3-D overlapped grouping Genetic Algorithm (GA) for nesting of two-dimensional rectangular shapes. The objectives are the minimization of the wastage of the sheet material, which leads to maximizing material utilization, and the minimization of the setup time. The model and its results are compared with a real-life case study from a steel workshop in a bus manufacturing factory. The effectiveness of the proposed approach is shown by comparing and shop-testing the optimized cutting schedules. The results reveal its superiority in terms of waste minimization compared to the current cutting schedules. The whole procedure can be completed in a reasonable amount of time by the developed optimization program.
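For intuition about what a guillotine-feasible layout looks like, here is a hedged baseline far simpler than the paper's 3-D overlapped grouping GA: a first-fit-decreasing shelf heuristic, in which every shelf boundary is a valid horizontal guillotine cut. All piece sizes are made up:

```python
def shelf_pack(rects, sheet_w):
    # first-fit-decreasing "shelf" heuristic: each shelf is a horizontal
    # guillotine strip; pieces are then cut off the strip vertically
    rects = sorted(rects, key=lambda r: r[1], reverse=True)   # by height
    shelves = []            # list of [remaining_width, shelf_height]
    used_height = 0
    for w, h in rects:
        for shelf in shelves:
            if w <= shelf[0] and h <= shelf[1]:
                shelf[0] -= w               # place piece on existing shelf
                break
        else:
            shelves.append([sheet_w - w, h])  # open a new shelf
            used_height += h
    return used_height

pieces = [(4, 3), (3, 3), (2, 2), (2, 2), (5, 1)]
height = shelf_pack(pieces, sheet_w=10)
```

A GA like the one in the paper searches over orderings and groupings far beyond what this greedy rule considers, which is where the reported waste reductions come from.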
Group problem-solving skills training for self-harm: randomised controlled trial.
McAuliffe, Carmel; McLeavey, Breda C; Fitzgerald, Tony; Corcoran, Paul; Carroll, Bernie; Ryan, Louise; O'Keeffe, Brian; Fitzgerald, Eva; Hickey, Portia; O'Regan, Mary; Mulqueen, Jillian; Arensman, Ella
2014-01-01
Rates of self-harm are high and have recently increased. This trend and the repetitive nature of self-harm pose a significant challenge to mental health services. To determine the efficacy of a structured group problem-solving skills training (PST) programme as an intervention approach for self-harm in addition to treatment as usual (TAU) as offered by mental health services. A total of 433 participants (aged 18-64 years) were randomly assigned to TAU plus PST or TAU alone. Assessments were carried out at baseline and at 6-week and 6-month follow-up and repeated hospital-treated self-harm was ascertained at 12-month follow-up. The treatment groups did not differ in rates of repeated self-harm at 6-week, 6-month and 12-month follow-up. Both treatment groups showed significant improvements in psychological and social functioning at follow-up. Only one measure (needing and receiving practical help from those closest to them) showed a positive treatment effect at 6-week (P = 0.004) and 6-month (P = 0.01) follow-up. Repetition was not associated with waiting time in the PST group. This brief intervention for self-harm is no more effective than treatment as usual. Further work is required to establish whether a modified, more intensive programme delivered sooner after the index episode would be effective.
Group-invariant solutions of nonlinear elastodynamic problems of plates and shells
International Nuclear Information System (INIS)
Dzhupanov, V.A.; Vassilev, V.M.; Dzhondzhorov, P.A.
1993-01-01
Plates and shells are basic structural components in nuclear reactors and their equipment. The prediction of the dynamic response of these components to fast transient loadings (e.g., loadings caused by earthquakes, missile impacts, etc.) is a quite important problem in the general context of the design, reliability and safety of nuclear power stations. Due to the extreme loading conditions, a more adequate treatment of the foregoing problem should rest on a suitable nonlinear shell model, which allows large deflections of the structures under consideration to be taken into account. Such a model is provided by the nonlinear Donnell-Mushtari-Vlasov (DMV) theory. The governing system of equations of the DMV theory consists of two coupled nonlinear fourth-order partial differential equations in three independent and two dependent variables. It is clear, as the case stands, that obtaining solutions to this system directly, using any of the general analytical or numerical techniques, would involve considerable difficulties. In the present paper, the invariance of the governing equations of DMV theory for plates and cylindrical shells relative to local Lie groups of local point transformations is employed to gain some advantages in connection with the aforementioned problem. First, the symmetry of a functional corresponding to the governing equations of DMV theory for plates and cylindrical shells is studied. Next, the densities in the corresponding conservation laws are determined on the basis of Noether's theorem. Finally, we study a class of invariant solutions of the governing equations. As is well known, group-invariant solutions are often intermediate asymptotics for a wider class of solutions of the corresponding equations. When such solutions are considered, the number of independent variables can be reduced. For the class of invariant solutions studied here, the system of governing equations converts into a system of ordinary differential equations.
Taşkin Kaya, Gülşen
2013-10-01
High Dimensional Model Representation (HDMR) is used to capture input-output relationships in high-dimensional systems for many problems in science and engineering. The HDMR method is developed to improve the efficiency of deducing high-dimensional behavior. The method is formed by a particular organization of low-dimensional component functions, in which each function represents the contribution of one or more input variables to the output variables.
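The component-function organization described above can be sketched with the simplest variant, first-order cut-HDMR around an anchor point: f0 is the output at the anchor, and each f_i captures the effect of varying one input alone. This is an illustrative sketch of the HDMR idea, not the specific construction of the paper; the test function and anchor are made up:

```python
import numpy as np

def cut_hdmr_first_order(f, c, grids):
    """First-order cut-HDMR component functions around anchor point c.

    f0 = f(c); f_i(x_i) = f(c with component i set to x_i) - f0,
    i.e. the contribution of input i alone, all others at the anchor.
    """
    c = np.asarray(c, dtype=float)
    f0 = f(c)
    components = []
    for i, grid in enumerate(grids):
        vals = []
        for xi in grid:
            x = c.copy()
            x[i] = xi
            vals.append(f(x) - f0)   # contribution of variable i alone
        components.append(np.array(vals))
    return f0, components

# For an additive model the first-order expansion is exact
f = lambda x: x[0] ** 2 + 3.0 * x[1]
grid = np.linspace(-1, 1, 5)
f0, (f1, f2) = cut_hdmr_first_order(f, c=[0.0, 0.0], grids=[grid, grid])
# f0 + f1(x) + f2(y) reproduces f(x, y) at the grid points
```

Higher-order terms f_ij, f_ijk, ... follow the same pattern and are only needed when inputs interact, which is what makes the expansion efficient in practice.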
Records Group. The problem of fonds in the American archival studies
Directory of Open Access Journals (Sweden)
Bartosz Nowożycki
2017-12-01
Full Text Available The term record group means a group of fonds (records) and is a type of archival fonds rarely described in Polish archival literature. It is most often associated with the complex archival fonds, due to its complicated structure and blurred borderlines – an effect of an administrative system differing from the Polish one. The post-war attempts to modify and broaden the meaning of the complex fonds made it resemble the term record group. Irena Radtke, in her paper given during the 6th Archival Methods Conference in Warsaw in 1970, proposed that the complex fonds should be one comprising records of foreign provenance that are an effect of passive succession. Bohdan Ryszewski, addressing Radtke's idea, noticed that passive succession might be a source of complications. However, this conceptualization of the definition of the complex fonds did not correspond with the American understanding of it as an above-fonds structure. Bogdan Kroll touched the core of the problem; he noticed that an archival construction comprising materials of various provenances can be seen neither as an archival fonds nor as a complex fonds. He saw a discrepancy between the structure and partition of archival holdings and archival theory; thus Kroll proposed abandoning the term complex fonds and introducing the term archival complex. The archival complex was to comprise archival materials of various origins merged (in or outside of an archive) into fonds, or parts of archival fonds of different institutions sharing the same characteristic – function. The complex was to make up a separate entity in the logical structure of archival holdings, comprising all archival fonds and/or their pieces belonging to the main fonds of the complex. The problem of the lack of above-fonds forms in Polish archival theory was also noticed by Józef Siemieński, who formulated the term higher-order fonds. According to his idea the higher
Kernel based methods for accelerated failure time model with ultra-high dimensional data
Directory of Open Access Journals (Sweden)
Jiang Feng
2010-12-01
Full Text Available Abstract Background Most genomic data have ultra-high dimensions with more than 10,000 genes (probes). Regularization methods with L1 and Lp penalties have been extensively studied in survival analysis with high-dimensional genomic data. However, when the sample size n ≪ m (the number of genes), directly identifying a small subset of genes from ultra-high (m > 10,000) dimensional data is time-consuming and not computationally efficient. In current microarray analysis, what people really do is select a couple of thousands (or hundreds) of genes using univariate analysis or statistical tests, and then apply the LASSO-type penalty to further reduce the number of disease-associated genes. This two-step procedure may introduce bias and inaccuracy and lead us to miss biologically important genes. Results The accelerated failure time (AFT) model is a linear regression model and a useful alternative to the Cox model for survival analysis. In this paper, we propose a nonlinear kernel based AFT model and an efficient variable selection method with adaptive kernel ridge regression. Our proposed variable selection method is based on the kernel matrix and the dual problem with a much smaller n × n matrix. It is very efficient when the number of unknown variables (genes) is much larger than the number of samples. Conclusions Our proposed methods can simultaneously identify survival-associated prognostic factors and predict survival outcomes with ultra-high dimensional genomic data. We have demonstrated the performance of our methods with both simulation and real data; the proposed method performs superbly while requiring only limited computation.
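The key computational point above — working with an n × n dual system instead of the m-dimensional primal — can be sketched with plain kernel ridge regression. This is a minimal illustration of the dual trick only, ignoring censoring and the adaptive selection step of the actual AFT method; the data shapes and parameters are made up:

```python
import numpy as np

def rbf_kernel(X, Z, gamma=1.0):
    # Gaussian (RBF) kernel matrix between rows of X and rows of Z
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def kernel_ridge_fit(X, y, lam=1e-3, gamma=1.0):
    # Dual solution: only an n x n linear system, regardless of
    # the number m of features (genes)
    K = rbf_kernel(X, X, gamma)
    alpha = np.linalg.solve(K + lam * np.eye(len(y)), y)
    return alpha

def kernel_ridge_predict(X_train, alpha, X_new, gamma=1.0):
    return rbf_kernel(X_new, X_train, gamma) @ alpha

rng = np.random.default_rng(0)
X = rng.normal(size=(30, 5000))   # n = 30 samples, m = 5000 "genes"
y = rng.normal(size=30)           # e.g. log survival times in an AFT model
alpha = kernel_ridge_fit(X, y)
pred = kernel_ridge_predict(X, alpha, X)
```

Solving for the 30-dimensional dual vector alpha costs O(n^3), independent of m, which is exactly why the dual formulation scales to ultra-high dimensional genomic data.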
In-Medium Similarity Renormalization Group Approach to the Nuclear Many-Body Problem
Hergert, Heiko; Bogner, Scott K.; Lietz, Justin G.; Morris, Titus D.; Novario, Samuel J.; Parzuchowski, Nathan M.; Yuan, Fei
We present a pedagogical discussion of Similarity Renormalization Group (SRG) methods, in particular the In-Medium SRG (IMSRG) approach for solving the nuclear many-body problem. These methods use continuous unitary transformations to evolve the nuclear Hamiltonian to a desired shape. The IMSRG, in particular, is used to decouple the ground state from all excitations and solve the many-body Schrödinger equation. We discuss the IMSRG formalism as well as its numerical implementation, and use the method to study the pairing model and infinite neutron matter. We compare our results with those of coupled cluster theory (Chap. 8), Configuration-Interaction Monte Carlo (Chap. 9), and the Self-Consistent Green's Function approach discussed in Chap. 11. The chapter concludes with an expanded overview of current research directions, and a look ahead at upcoming developments.
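The continuous unitary transformation at the heart of the SRG can be demonstrated on a toy matrix: integrating the flow equation dH/ds = [η, H] with the Wegner generator drives off-diagonal elements to zero while preserving the spectrum. This is a free-space sketch of the mechanism only, far from the normal-ordered IMSRG machinery of the chapter; the Hamiltonian and step sizes are made up:

```python
import numpy as np

def comm(a, b):
    return a @ b - b @ a

def srg_flow(H, ds=1e-3, steps=8000):
    """Integrate dH/ds = [eta, H] with the Wegner generator eta = [Hd, Hod].

    The flow suppresses off-diagonal matrix elements while keeping the
    eigenvalues (up to Euler integration error) intact -- the basic SRG idea.
    """
    H = H.copy()
    for _ in range(steps):
        Hd = np.diag(np.diag(H))        # diagonal part of H
        eta = comm(Hd, H - Hd)          # Wegner generator
        H = H + ds * comm(eta, H)       # forward-Euler step of the flow
    return H

# Toy Hamiltonian: four levels with a weak constant coupling g,
# loosely inspired by the pairing model (numbers are made up)
g = 0.2
H0 = np.diag([1.0, 2.0, 3.0, 4.0]) - g * (np.ones((4, 4)) - np.eye(4))
H_evolved = srg_flow(H0)
```

Each element H_ij decays roughly like exp(-(E_i - E_j)^2 s), so widely separated states decouple first, which is the "evolve the Hamiltonian to a desired shape" behavior the abstract describes.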
Directory of Open Access Journals (Sweden)
Dheeraj Kumar Joshi
2018-03-01
Full Text Available Uncertainties due to randomness and fuzziness comprehensively exist in control and decision support systems. In the present study, we introduce the notion of the occurring probability of possible values into the hesitant fuzzy linguistic element (HFLE) and define the hesitant probabilistic fuzzy linguistic set (HPFLS) for ill-structured and complex decision making problems. HPFLS provides a single framework where both stochastic and non-stochastic uncertainties can be efficiently handled along with hesitation. We also propose expected mean, variance, score and accuracy functions and basic operations for HPFLS. Weighted and ordered weighted aggregation operators for HPFLS are also defined in the present study for applications in multi-criteria group decision making (MCGDM) problems. We propose a MCGDM method with HPFL information, which is illustrated by an example. A real case study is also undertaken to rank State Bank of India, InfoTech Enterprises, I.T.C., H.D.F.C. Bank, Tata Steel, Tata Motors and Bajaj Finance using real data. The proposed HPFLS-based MCGDM method is also compared with two HFL-based decision making methods.
Variable kernel density estimation in high-dimensional feature spaces
CSIR Research Space (South Africa)
Van der Walt, Christiaan M
2017-02-01
Full Text Available Estimating the joint probability density function of a dataset is a central task in many machine learning applications. In this work we address the fundamental problem of kernel bandwidth estimation for variable kernel density estimation in high-dimensional feature spaces.
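A minimal 1-D sketch of the variable (sample-point) kernel density estimator the abstract refers to: each data point gets its own bandwidth, wider where a fixed-bandwidth pilot estimate says the data are sparse. The Abramson-style square-root bandwidth factors used here are a common convention, not necessarily the paper's bandwidth-selection method:

```python
import numpy as np

def variable_kde(data, query, h0=0.5):
    """Sample-point (variable-bandwidth) KDE in 1-D.

    A fixed-bandwidth pilot density sets per-point bandwidth factors:
    points in sparse regions get wider kernels.
    """
    def gauss(u):
        return np.exp(-0.5 * u ** 2) / np.sqrt(2 * np.pi)

    # Pilot density at the data points with a fixed bandwidth h0
    pilot = np.array([gauss((x - data) / h0).mean() / h0 for x in data])
    g = np.exp(np.log(pilot).mean())          # geometric mean of pilot values
    lam = (pilot / g) ** (-0.5)               # per-point bandwidth factors
    h = h0 * lam

    # Evaluate the variable-bandwidth estimate at the query points
    return np.array([np.mean(gauss((x - data) / h) / h) for x in query])

rng = np.random.default_rng(1)
data = rng.normal(size=200)
grid = np.linspace(-5, 5, 501)
dens = variable_kde(data, grid)
```

The bandwidth-selection problem the abstract poses is precisely the choice of h0 (and the form of lam), which becomes much harder in high-dimensional feature spaces.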
PROBLEM ASPECTS OF FORMATION OF THE LEGAL INSTITUTE OF CONSOLIDATED TAXPAYERS’ GROUPS IN RUSSIA
Directory of Open Access Journals (Sweden)
Irina Glazunova
2017-01-01
Full Text Available The subject. The article is devoted to the prerequisites of the emergence and the essential characteristics of the institution of consolidated taxpayers’ groups in Russia and abroad, revealing advantages and disadvantages of the legal regulation of the creation and operation of consolidated groups of payers of corporate profits tax, and analyzing the results and directions of the development of tax consolidation in the Russian Federation. The purpose of the article is to identify positive and negative aspects of the functioning of the institution of consolidated taxpayers’ groups in Russia, establishing the prospects of tax consolidation and the likely directions of its development. The description of the problem field. The development of the world economic system stimulates the emergence of new forms of management, characterized by the enlargement of business and the pooling of resources of individual enterprises into a single system in order to optimize entrepreneurial activity. These trends are reflected in the development of the tax systems of various countries, expressed in the formation of institutions of consolidated taxpayers’ groups. Tax consolidation in Russia is a relatively new phenomenon, and it seems necessary to examine this institution from the law enforcement point of view and to evaluate its effectiveness. Methods and methodology. The authors used methods of analysis and synthesis, as well as formal-legal, comparative-legal and historical methods of investigation. Results and the scope of their application. The authors note that the institution of tax consolidation is today present in the tax systems of most modern countries. The practice of applying the institution of consolidated taxpayers’ groups testifies to the existence of a significant number of advantages and disadvantages of tax consolidation in Russia. The moratorium on the creation of consolidated taxpayers’ groups, due to the contradictory nature of their influence on the
One-way functions based on the discrete logarithm problem in groups meeting conditions C(3)-T(6)
Directory of Open Access Journals (Sweden)
N. V. Bezverkhniy
2014-01-01
Full Text Available In this work we consider the possibility of creating open key distribution schemes in groups meeting the conditions C(3)-T(6). Our constructions use the following algorithms: 1. an algorithm that solves the membership problem for cyclic subgroups, also known as the discrete logarithm problem; 2. an algorithm that solves the word problem in this class of groups. Our approach is based on the geometric methods of combinatorial group theory (the method of diagrams in groups). In a cryptographic scheme based on open key distribution, one-way functions are used, i.e. functions whose direct calculation must be much easier than that of the inverse. Our task was to construct a one-way function using groups with the small cancellation conditions C(3)-T(6) and to compare the calculation complexity of this function with the calculation complexity of its inverse. P.W. Shor has shown that there exists a polynomial algorithm, implementable on a quantum computer, that solves the discrete logarithm problem in the groups of units of finite fields and in the rings of congruences mod n. This stimulated a series of investigations seeking alternative hard mathematical problems that can be used for the construction of new asymmetric cryptosystems. For example, open key distribution systems based on the conjugacy problem in matrix groups and in the braid groups were proposed. Other papers used the discrete logarithm problem in the groups of inner automorphisms of the semi-direct products of SL(2,Z) and Zp, and of GL(2,Zp) and Zp. The paper of E. Sakalauskas, P. Tvarijonas and A. Raulinaitis proposed a scheme that uses a composition of two problems of group theory, namely the conjugacy problem and the discrete logarithm problem. Our results show that the scheme we propose is of polynomial complexity. Therefore its security is not sufficient for further applications in communications. However the security can be improved
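The easy-forward/hard-inverse asymmetry the abstract relies on is easiest to see in the classical modular setting rather than in the C(3)-T(6) groups themselves. A toy sketch (the prime, base and exponent are made-up toy sizes; real parameters are hundreds of digits):

```python
# Modular exponentiation as the classic discrete-logarithm one-way function:
# computing y = g^x mod p is fast, recovering x from y is believed hard
# (inverted here only by brute force, feasible for this tiny toy prime).

p, g = 1009, 11          # small prime and base (toy values)
x_secret = 357
y = pow(g, x_secret, p)  # easy direction: O(log x) modular multiplications

def brute_force_dlog(y, g, p):
    # hard direction: exhaustive search, exponential in the bit length of p
    acc = 1
    for x in range(p):
        if acc == y:
            return x
        acc = (acc * g) % p
    return None

x_found = brute_force_dlog(y, g, p)
```

Shor's quantum algorithm makes this particular instance easy, which is exactly the motivation the abstract gives for seeking discrete-logarithm-style problems in other groups.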
Matrix correlations for high-dimensional data: The modified RV-coefficient
Smilde, A.K.; Kiers, H.A.L.; Bijlsma, S.; Rubingh, C.M.; Erk, M.J. van
2009-01-01
Motivation: Modern functional genomics generates high-dimensional datasets. It is often convenient to have a single simple number characterizing the relationship between pairs of such high-dimensional datasets in a comprehensive way. Matrix correlations are such numbers and are appealing since they
Leon, Stéphane; Bergond, Gilles; Vallenari, Antonella
1999-04-01
We present the tidal tail distributions of a sample of candidate binary clusters located in the bar of the Large Magellanic Cloud (LMC). One isolated cluster, SL 268, is presented in order to study the effect of the LMC tidal field. All the candidate binary clusters show tidal tails, confirming that the pairs are formed by physically linked objects. The stellar mass in the tails covers a large range, from 1.8×10^3 to 3×10^4 M_sun. We derive a total mass estimate for SL 268 and SL 356. At large radii, the projected density profiles of SL 268 and SL 356 fall off as r^(−γ), with γ = 2.27 and γ = 3.44, respectively. Out of 4 pairs or multiple systems, 2 are older than the theoretical survival time of binary clusters (from a few 10^6 years to 10^8 years). A pair shows too large an age difference between the components to be consistent with classical theoretical models of binary cluster formation (Fujimoto & Kumai 1997). We refer to this as the "overmerging" problem. A different scenario is proposed: the formation proceeds in large molecular complexes giving birth to groups of clusters over a few 10^7 years. In these groups the expected cluster encounter rate is larger, and tidal capture has a higher probability. Cluster pairs are not born together through the splitting of the parent cloud, but are formed later by tidal capture. For 3 pairs, we tentatively identify the star cluster group (SCG) memberships. SCG formation through the recent cluster starburst triggered by the LMC-SMC encounter, in contrast with the quiescent open cluster formation in the Milky Way, may explain the paucity of binary clusters observed in our Galaxy. Based on observations collected at the European Southern Observatory, La Silla, Chile.
International Nuclear Information System (INIS)
Won, Jong Hyuck; Cho, Nam Zin
2011-01-01
In deterministic neutron transport methods, a process called fine-group to few-group condensation is used to reduce the computational burden. However, recent results on the core-reflector problem in fast reactor cores show that the use of a small number of energy groups has limitations in describing the neutron flux around the core-reflector interface. Therefore, research is still ongoing to overcome this limitation. Recently, the authors proposed I) direct application of an equivalently condensed angle-dependent total cross section to the discrete ordinates method to overcome the limitation of conventional multi-group approximations, and II) a local/global iteration framework in which fine-group discrete ordinates calculation is used in local problems while few-group transport calculation is used in the global problem iteratively. In this paper, an analysis of the core-reflector problem is performed in a few-group structure using an equivalent angle-dependent total cross section with local/global iteration. Numerical results are obtained with an S12 discrete-ordinates-like transport method with scattering cross sections up to a P1 Legendre expansion.
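The fine-group to few-group condensation step mentioned above is, in its standard form, a flux-weighted average: the few-group cross section is sigma_G = sum_g sigma_g*phi_g / sum_g phi_g over the fine groups g collapsed into G, which preserves reaction rates. A minimal sketch of that standard weighting (the angle-dependent equivalent cross sections proposed in the paper go beyond this; the numbers below are made up):

```python
import numpy as np

def condense(sigma_fine, phi_fine, group_edges):
    """Flux-weighted fine-to-few-group condensation:
    sigma_G = sum_g sigma_g * phi_g / sum_g phi_g for fine groups g in G."""
    sigma_few = []
    for lo, hi in zip(group_edges[:-1], group_edges[1:]):
        phi = phi_fine[lo:hi]
        sigma_few.append(np.sum(sigma_fine[lo:hi] * phi) / np.sum(phi))
    return np.array(sigma_few)

# 8 fine groups condensed to 2 few-groups (illustrative numbers)
sigma = np.array([1.2, 1.0, 0.9, 0.8, 2.0, 2.5, 3.0, 4.0])
phi = np.array([0.5, 1.0, 2.0, 1.5, 1.0, 0.5, 0.2, 0.1])
sigma_2g = condense(sigma, phi, group_edges=[0, 4, 8])
```

The condensed set reproduces the fine-group reaction rates exactly for the weighting flux used, but not for the perturbed flux of a different configuration, which is one source of the core-reflector difficulty the abstract describes.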
McEvoy, Peter M; Burgess, Melissa M; Nathan, Paula
2013-09-05
Interpersonal functioning is a key determinant of psychological well-being, and interpersonal problems (IPs) are common among individuals with psychiatric disorders. However, IPs are rarely formally assessed in clinical practice or within cognitive behavior therapy research trials as predictors of treatment attrition and outcome. The main aim of this study was to investigate the relationship between IPs, depressogenic cognitions, and treatment outcome in a large clinical sample receiving cognitive behavioral group therapy (CBGT) for depression in a community clinic. Patients (N=144) referred for treatment completed measures of IPs, negative cognitions, depression symptoms, and quality of life (QoL) before and at the completion of a 12-week manualized CBGT protocol. Two IPs at pre-treatment, 'finding it hard to be supportive of others' and 'not being open about problems,' were associated with higher attrition. Pre-treatment IPs also predicted higher post-treatment depression symptoms (but not QoL) after controlling for pre-treatment symptoms, negative cognitions, demographics, and comorbidity. In particular, 'difficulty being assertive' and a 'tendency to subjugate one's needs' were associated with higher post-treatment depression symptoms. Changes in IPs did not predict post-treatment depression symptoms or QoL when controlling for changes in negative cognitions, pre-treatment symptoms, demographics, and comorbidity. In contrast, changes in negative cognitions predicted both post-treatment depression and QoL, even after controlling for changes in IPs and the other covariates. Limitations include the correlational design, potential attrition bias, and generalizability to other disorders and treatments, which remains to be evaluated. Pre-treatment IPs may increase risk of dropout and predict poorer outcomes, but changes in negative cognitions during treatment were most strongly associated with improvement in symptoms and QoL during CBGT. Copyright © 2013 Elsevier B.V. All rights reserved.
Wise, K; Rief, W; Goebel, G
1998-06-01
Two different group treatments were evaluated in 144 in-patients suffering from impairment due to chronic tinnitus. A tinnitus management therapy (TMT) was developed using principles of cognitive-behavioral therapy and compared with problem solving group therapy. Self-ratings were used to evaluate the help patients found in dealing with life problems and tinnitus as well as the degree to which they felt they were being properly treated and taken seriously. Patients showed significantly more satisfaction with the TMT group and evaluated the help they found in coping with tinnitus and life problems significantly higher. Thus, in the light of unsatisfactory medical solutions and the poor acceptance of some psychological treatments for tinnitus, TMT appears to be an acceptable and helpful treatment program.
Das Carlo, Mandira; Swadi, Harith; Mpofu, Debbie
2003-01-01
The popularization of problem-based learning (PBL) has drawn attention to the motivational and cognitive skills necessary for medical students in group learning. This study identifies the effect of motivational and cognitive factors on group productivity of PBL tutorial groups. A self-administered questionnaire was completed by 115 students at the end of PBL tutorials for 4 themes. The questionnaire explored student perceptions about effect of motivation, cohesion, sponging, withdrawal, interaction, and elaboration on group productivity. We further analyzed (a) differences in perceptions between male and female students, (b) effect of "problems," and (c) effect of student progress over time on group productivity. There were linear relations between a tutorial group's success and the factors studied. Significant differences were noted between male and female student groups. Students and tutors need to recognize symptoms of ineffective PBL groups. Our study emphasizes the need to take into account cultural issues in setting ground rules for PBL tutorials.
Maes, Marlies; Stevens, Gonneke W. J. M.; Verkuijten, Maykel
2014-01-01
Previous research has identified ethnic group identification as a moderator in the relationship between perceived ethnic discrimination and problem behaviors in ethnic minority children. However, little is known about the influence of religious and host national identification on this relationship.
Cooper, Melanie M.; Cox, Charles T., Jr.; Nammouz, Minory; Case, Edward; Stevens, Ronald
2008-01-01
Improving students' problem-solving skills is a major goal for most science educators. While a large body of research on problem solving exists, assessment of meaningful problem solving is very difficult, particularly for courses with large numbers of students in which one-on-one interactions are not feasible. We have used a suite of software…
Cooperation and Conflict: Faction Problem of Western Medicine Group in Modern China
Directory of Open Access Journals (Sweden)
Jeongeun JO
2016-08-01
Medicine Group doctors for China to timely respond to the rapidly increased demand. However, the conflict over the promotion of hygiene administration and the unification and organization of medical education did not end. This conflict deepened as the Nanjing nationalist government promoted sanitary administration. It was the Britain-America faction who seized a chance of victory, because figures from the Britain-America faction held important positions in the hygiene department. Of course, some related to the National Medical and Pharmaceutical Association of China were also involved in the hygiene department; however, most took charge of simple technical tasks, not having a significant impact on hygiene administration. To solve the problem of factions of the Western Medicine Group, the Britain-America faction or the Germany-Japan faction had to arrange the education system with a strong power, or to organize a new association mixing the two factions, as in the Chinese faction (zhonghuapai). But the effort of the Britain-America faction to unify the systems of medical schools did not reach the Germany-Japan faction's medical schools. Additionally, from 1928, executives of the two Chinese medical associations discussed their merger; however, they could not agree because of the practitioners' interests involved. Substantially, the conflict between factions of the Western Medicine Group continued even until the mid-1930s. This implies that the Chinese government of the time lacked the capacity to unite and organize the medical community.
International Nuclear Information System (INIS)
Chi, Dong Pyo; Kim, Jeong San; Lee, Soojoon
2006-01-01
We consider the hidden subgroup problem on the semi-direct product of cyclic groups Z_N ⋊ Z_p, where p is a prime that does not divide p_j − 1 for any of the prime factors p_j of N, and show that the hidden subgroup problem can be reduced to other ones for which solutions are already known
An approach to solve group-decision-making problems with ordinal interval numbers.
Fan, Zhi-Ping; Liu, Yang
2010-10-01
The ordinal interval number is a form of uncertain preference information in group decision making (GDM), while it is seldom discussed in the existing research. This paper investigates how the ranking order of alternatives is determined based on preference information of ordinal interval numbers in GDM problems. When ranking a large quantity of ordinal interval numbers, the efficiency and accuracy of the ranking process are critical. A new approach is proposed to rank alternatives using ordinal interval numbers when every ranking ordinal in an ordinal interval number is thought to be uniformly and independently distributed in its interval. First, we give the definition of possibility degree on comparing two ordinal interval numbers and the related theory analysis. Then, to rank alternatives, by comparing multiple ordinal interval numbers, a collective expectation possibility degree matrix on pairwise comparisons of alternatives is built, and an optimization model based on this matrix is constructed. Furthermore, an algorithm is also presented to rank alternatives by solving the model. Finally, two examples are used to illustrate the use of the proposed approach.
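Under the abstract's assumption that each ranking ordinal is uniformly and independently distributed over the integers of its interval, the possibility degree of one ordinal interval beating another can be computed by direct enumeration. This sketch uses a common tie-splitting convention and treats a lower ordinal as a better rank; the paper's exact definition may differ:

```python
from itertools import product
from fractions import Fraction

def possibility_degree(a, b):
    """Expected possibility degree that ordinal interval a beats b.

    Each ranking ordinal is uniformly and independently distributed
    over the integers of its interval; a lower ordinal means a better
    rank, and ties count one half.
    """
    (a1, a2), (b1, b2) = a, b
    wins = ties = total = 0
    for x, y in product(range(a1, a2 + 1), range(b1, b2 + 1)):
        total += 1
        if x < y:
            wins += 1
        elif x == y:
            ties += 1
    return Fraction(wins, total) + Fraction(ties, total) / 2

# An alternative ranked 1st-3rd versus one ranked 2nd-5th
p = possibility_degree((1, 3), (2, 5))
```

Pairwise degrees like p fill the collective expectation possibility degree matrix from which the abstract's optimization model derives the final ranking; note the convenient complementarity P(a, b) + P(b, a) = 1.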
Approximate Dynamic Programming Based on High Dimensional Model Representation
Czech Academy of Sciences Publication Activity Database
Pištěk, Miroslav
2013-01-01
Roč. 49, č. 5 (2013), s. 720-737 ISSN 0023-5954 R&D Projects: GA ČR(CZ) GAP102/11/0437 Institutional support: RVO:67985556 Keywords : approximate dynamic programming * Bellman equation * approximate HDMR minimization * trust region problem Subject RIV: BC - Control Systems Theory Impact factor: 0.563, year: 2013 http://library.utia.cas.cz/separaty/2013/AS/pistek-0399560.pdf
Swank, Jacqueline M.; Shin, Sang Min
2015-01-01
This research study focused on the use of a garden group counseling intervention to address the self-esteem of children with emotional and behavioral problems. The researchers found higher self-esteem among participants (N = 31) following the gardening group. Additionally, participants discussed feeling calm and happy and learning to work…
Mulcahy, Robert Sean
2010-01-01
Learners inevitably enter adult technical training classrooms--indeed, in all classrooms--with different levels of expertise on the subject matter. When the diversity of expertise is wide and the course makes use of small group problem solving, instructors have a choice about how to group learners: they may distribute learners with greater…
DEFF Research Database (Denmark)
Zhou, Chunfang; Kolmos, Anette
2013-01-01
Recent studies regard Problem and Project Based Learning (PBL) as providing a learning environment which fosters both individual and group creativity. This paper focuses on the question: In a PBL environment, how do students perceive the interplay between individual and group creativity? Empirically…
Sala, Giovanni; Gobet, Fernand
2017-12-01
It has been proposed that playing chess enables children to improve their ability in mathematics. These claims have been recently evaluated in a meta-analysis (Sala & Gobet, 2016, Educational Research Review, 18, 46-57), which indicated a significant effect in favor of the groups playing chess. However, the meta-analysis also showed that most of the reviewed studies used a poor experimental design (in particular, they lacked an active control group). We ran two experiments that used a three-group design including both an active and a passive control group, with a focus on mathematical ability. In the first experiment (N = 233), a group of third and fourth graders was taught chess for 25 hours and tested on mathematical problem-solving tasks. Participants also filled in a questionnaire assessing their meta-cognitive ability for mathematics problems. The group playing chess was compared to an active control group (playing checkers) and a passive control group. The three groups showed no statistically significant difference in mathematical problem-solving or metacognitive abilities in the posttest. The second experiment (N = 52) broadly used the same design, but the Oriental game of Go replaced checkers in the active control group. While the chess-treated group and the passive control group slightly outperformed the active control group with mathematical problem solving, the differences were not statistically significant. No differences were found with respect to metacognitive ability. These results suggest that the effects (if any) of chess instruction, when rigorously tested, are modest and that such interventions should not replace the traditional curriculum in mathematics.
Scalable Biomarker Discovery for Diverse High-Dimensional Phenotypes
2015-11-23
William D. Shannon, Richard R. Sharp, Thomas J. Sharpton, Narmada Shenoy, Nihar U. Sheth, Gina A. Simone, Indresh Singh, Chris S. Smillie, Jack D..., Susanne J. Szabo, Jeff Porter, Harri Lähdesmäki, Curtis Huttenhower, Dirk Gevers, Thomas W. Cullen, Mikael Knip, on behalf of the DIABIMMUNE Study Group
Feature selection for high-dimensional integrated data
Zheng, Charles; Schwartz, Scott; Chapkin, Robert S.; Carroll, Raymond J.; Ivanov, Ivan
2012-01-01
Motivated by the problem of identifying correlations between genes or features of two related biological systems, we propose a model of feature selection in which only a subset of the predictors Xt are dependent on the multidimensional variate Y, and the remainder of the predictors constitute a “noise set” Xu independent of Y. Using Monte Carlo simulations, we investigated the relative performance of two methods: thresholding and singular-value decomposition, in combination with stochastic optimization to determine “empirical bounds” on the small-sample accuracy of an asymptotic approximation. We demonstrate the utility of the thresholding and SVD feature selection methods with respect to a recent infant intestinal gene expression and metagenomics dataset.
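The thresholding idea compared above can be sketched as marginal screening: keep every predictor whose absolute correlation with at least one response column exceeds a cutoff tau, so predictors in the "noise set" are dropped. This is an illustrative sketch of the thresholding approach, not the paper's exact procedure; the cutoff and simulated data are made up:

```python
import numpy as np

def threshold_select(X, Y, tau):
    """Marginal screening: indices of predictors whose absolute
    correlation with at least one column of Y exceeds tau."""
    Xc = (X - X.mean(0)) / X.std(0)
    Yc = (Y - Y.mean(0)) / Y.std(0)
    corr = np.abs(Xc.T @ Yc) / len(X)      # |corr| between every X_j and Y_k
    return np.where(corr.max(axis=1) > tau)[0]

rng = np.random.default_rng(7)
n = 500
X = rng.normal(size=(n, 50))               # 3 signal + 47 noise predictors
B = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
Y = X[:, :3] @ B + 0.5 * rng.normal(size=(n, 2))   # Y depends on X_0..X_2
selected = threshold_select(X, Y, tau=0.3)
```

With only marginal correlations involved, the screen is cheap even for very wide X, which is why thresholding is a natural baseline against the SVD-based alternative.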
Feature selection for high-dimensional integrated data
Zheng, Charles
2012-04-26
Motivated by the problem of identifying correlations between genes or features of two related biological systems, we propose a model of feature selection in which only a subset of the predictors Xt are dependent on the multidimensional variate Y, and the remainder of the predictors constitute a “noise set” Xu independent of Y. Using Monte Carlo simulations, we investigated the relative performance of two methods: thresholding and singular-value decomposition, in combination with stochastic optimization to determine “empirical bounds” on the small-sample accuracy of an asymptotic approximation. We demonstrate the utility of the thresholding and SVD feature selection methods with respect to a recent infant intestinal gene expression and metagenomics dataset.
Individual-based models for adaptive diversification in high-dimensional phenotype spaces.
Ispolatov, Iaroslav; Madhok, Vaibhav; Doebeli, Michael
2016-02-07
Most theories of evolutionary diversification are based on equilibrium assumptions: they are either based on optimality arguments involving static fitness landscapes, or they assume that populations first evolve to an equilibrium state before diversification occurs, as exemplified by the concept of evolutionary branching points in adaptive dynamics theory. Recent results indicate that adaptive dynamics may often not converge to equilibrium points and instead generate complicated trajectories if evolution takes place in high-dimensional phenotype spaces. Even though some analytical results on diversification in complex phenotype spaces are available, to study this problem in general we need to reconstruct individual-based models from the adaptive dynamics generating the non-equilibrium dynamics. Here we first provide a method to construct individual-based models such that they faithfully reproduce the given adaptive dynamics attractor without diversification. We then show that a propensity to diversify can be introduced by adding Gaussian competition terms that generate frequency dependence while still preserving the same adaptive dynamics. For sufficiently strong competition, the disruptive selection generated by frequency-dependence overcomes the directional evolution along the selection gradient and leads to diversification in phenotypic directions that are orthogonal to the selection gradient. Copyright © 2015 Elsevier Ltd. All rights reserved.
Simulation-based hypothesis testing of high dimensional means under covariance heterogeneity.
Chang, Jinyuan; Zheng, Chao; Zhou, Wen-Xin; Zhou, Wen
2017-12-01
In this article, we study the problem of testing the mean vectors of high dimensional data in both one-sample and two-sample cases. The proposed testing procedures employ maximum-type statistics and the parametric bootstrap techniques to compute the critical values. Different from the existing tests that heavily rely on the structural conditions on the unknown covariance matrices, the proposed tests allow general covariance structures of the data and therefore enjoy a wide scope of applicability in practice. To enhance the power of the tests against sparse alternatives, we further propose two-step procedures with a preliminary feature screening step. Theoretical properties of the proposed tests are investigated. Through extensive numerical experiments on synthetic data sets and a human acute lymphoblastic leukemia gene expression data set, we illustrate the performance of the new tests and how they may provide assistance in detecting disease-associated gene-sets. The proposed methods have been implemented in an R-package HDtest and are available on CRAN. © 2017, The International Biometric Society.
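The combination of a maximum-type statistic with a parametric bootstrap can be illustrated with a simplified one-sample sketch (this is an illustrative reimplementation of the general idea, not the HDtest code; the sample size, dimension, and bootstrap count are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)

# Simplified one-sample sketch: test H0: mu = 0 with a maximum-type statistic.
n, p = 60, 200
X = rng.standard_normal((n, p))              # synthetic data generated under H0

xbar = X.mean(axis=0)
sd = X.std(axis=0, ddof=1)
T = np.max(np.abs(np.sqrt(n) * xbar / sd))   # maximum-type test statistic

# Parametric bootstrap: sqrt(n) * xbar is approximately N(0, Sigma), so we
# draw Gaussian vectors with the *sample* covariance to approximate the null
# distribution of T, without structural assumptions on Sigma.
Xc = X - xbar
B = 500
Tboot = np.empty(B)
for b in range(B):
    g = Xc.T @ rng.standard_normal(n) / np.sqrt(n - 1)   # g ~ N(0, Sigma_hat)
    Tboot[b] = np.max(np.abs(g / sd))
critical = np.quantile(Tboot, 0.95)

print(T, critical, T > critical)
```

Because the bootstrap draws use the sample covariance directly, no sparsity or factor structure needs to be imposed on Sigma, which is the point the abstract emphasizes.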
Relative Effects of Three Questioning Strategies in Ill-Structured, Small Group Problem Solving
Byun, Hyunjung; Lee, Jung; Cerreto, Frank A.
2014-01-01
The purpose of this research is to investigate the relative effectiveness of using three different question-prompt strategies on promoting metacognitive skills and performance in ill-structured problem solving by examining the interplay between peer interaction and cognitive scaffolding. An ill-structured problem-solving task was given to three…
Solutions of the Noh Problem for Various Equations of State Using Lie Groups
International Nuclear Information System (INIS)
Axford, R.A.
1998-01-01
A method for developing invariant equations of state for which solutions of the Noh problem will exist is developed. The ideal gas equation of state is shown to be a special case of the general method. Explicit solutions of the Noh problem in planar, cylindrical and spherical geometry are determined for the Mie-Grüneisen and the stiff gas equations of state.
Mitigating the Insider Threat Using High-Dimensional Search and Modeling
National Research Council Canada - National Science Library
Van Den Berg, Eric; Uphadyaya, Shambhu; Ngo, Phi H; Muthukrishnan, Muthu; Palan, Rajago
2006-01-01
In this project, a system aimed at mitigating insider attacks was built, centered around a high-dimensional search engine for correlating the large number of monitoring streams necessary for detecting insider attacks...
Approximating high-dimensional dynamics by barycentric coordinates with linear programming
Energy Technology Data Exchange (ETDEWEB)
Hirata, Yoshito, E-mail: yoshito@sat.t.u-tokyo.ac.jp; Aihara, Kazuyuki; Suzuki, Hideyuki [Institute of Industrial Science, The University of Tokyo, 4-6-1 Komaba, Meguro-ku, Tokyo 153-8505 (Japan); Department of Mathematical Informatics, The University of Tokyo, Bunkyo-ku, Tokyo 113-8656 (Japan); CREST, JST, 4-1-8 Honcho, Kawaguchi, Saitama 332-0012 (Japan); Shiro, Masanori [Department of Mathematical Informatics, The University of Tokyo, Bunkyo-ku, Tokyo 113-8656 (Japan); Mathematical Neuroinformatics Group, Advanced Industrial Science and Technology, Tsukuba, Ibaraki 305-8568 (Japan); Takahashi, Nozomu; Mas, Paloma [Center for Research in Agricultural Genomics (CRAG), Consorci CSIC-IRTA-UAB-UB, Barcelona 08193 (Spain)
2015-01-15
The increasing development of novel methods and techniques facilitates the measurement of high-dimensional time series but challenges our ability to model and predict them accurately. The use of a general mathematical model requires the inclusion of many parameters, which are difficult to fit from the relatively short high-dimensional time series typically observed. Here, we propose a novel method to accurately model a high-dimensional time series. Our method extends the barycentric coordinates to high-dimensional phase space by employing linear programming and allowing for the approximation errors explicitly. The extension helps to produce free-running time-series predictions that preserve typical topological, dynamical, and/or geometric characteristics of the underlying attractors more accurately than the radial basis function model that is widely used. The method can be broadly applied, from helping to improve weather forecasting, to creating electronic instruments that sound more natural, and to comprehensively understanding complex biological data.
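The core construction, barycentric weights found by linear programming with an explicit approximation error, can be sketched with an off-the-shelf LP solver. This is a minimal illustration of the idea, not the authors' implementation; the points and query are hypothetical:

```python
import numpy as np
from scipy.optimize import linprog

# Find barycentric weights w >= 0 with sum(w) = 1 so that P^T w reproduces a
# query point q up to an explicit L1 error, minimised by linear programming.
P = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])  # known points
q = np.array([0.3, 0.6])                                        # query point
k, d = P.shape

# Variables: w (k weights), then e_plus and e_minus (d error slacks each).
c = np.concatenate([np.zeros(k), np.ones(2 * d)])   # minimise total |error|
A_eq = np.block([
    [P.T, np.eye(d), -np.eye(d)],                   # P^T w + e+ - e- = q
    [np.ones((1, k)), np.zeros((1, 2 * d))],        # weights sum to one
])
b_eq = np.concatenate([q, [1.0]])
res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * (k + 2 * d))

w = res.x[:k]
print(w, res.fun)   # q lies inside the hull of P, so the error is ~0
```

When the query point lies outside the convex hull of the known states, the slacks absorb the residual, which is the "allowing the approximation errors explicitly" aspect the abstract mentions.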
Approximating high-dimensional dynamics by barycentric coordinates with linear programming
International Nuclear Information System (INIS)
Hirata, Yoshito; Aihara, Kazuyuki; Suzuki, Hideyuki; Shiro, Masanori; Takahashi, Nozomu; Mas, Paloma
2015-01-01
The increasing development of novel methods and techniques facilitates the measurement of high-dimensional time series but challenges our ability to model and predict them accurately. The use of a general mathematical model requires the inclusion of many parameters, which are difficult to fit from the relatively short high-dimensional time series typically observed. Here, we propose a novel method to accurately model a high-dimensional time series. Our method extends the barycentric coordinates to high-dimensional phase space by employing linear programming and allowing for the approximation errors explicitly. The extension helps to produce free-running time-series predictions that preserve typical topological, dynamical, and/or geometric characteristics of the underlying attractors more accurately than the radial basis function model that is widely used. The method can be broadly applied, from helping to improve weather forecasting, to creating electronic instruments that sound more natural, and to comprehensively understanding complex biological data.
Approximating high-dimensional dynamics by barycentric coordinates with linear programming.
Hirata, Yoshito; Shiro, Masanori; Takahashi, Nozomu; Aihara, Kazuyuki; Suzuki, Hideyuki; Mas, Paloma
2015-01-01
The increasing development of novel methods and techniques facilitates the measurement of high-dimensional time series but challenges our ability to model and predict them accurately. The use of a general mathematical model requires the inclusion of many parameters, which are difficult to fit from the relatively short high-dimensional time series typically observed. Here, we propose a novel method to accurately model a high-dimensional time series. Our method extends the barycentric coordinates to high-dimensional phase space by employing linear programming and allowing for the approximation errors explicitly. The extension helps to produce free-running time-series predictions that preserve typical topological, dynamical, and/or geometric characteristics of the underlying attractors more accurately than the radial basis function model that is widely used. The method can be broadly applied, from helping to improve weather forecasting, to creating electronic instruments that sound more natural, and to comprehensively understanding complex biological data.
Multivariate statistical analysis a high-dimensional approach
Serdobolskii, V
2000-01-01
In the last few decades the accumulation of large amounts of information in numerous applications has stimulated an increased interest in multivariate analysis. Computer technologies allow one to use multi-dimensional and multi-parametric models successfully. At the same time, an interest arose in statistical analysis with a deficiency of sample data. Nevertheless, it is difficult to describe the recent state of affairs in applied multivariate methods as satisfactory. Unimprovable (dominating) statistical procedures are still unknown except for a few specific cases. The simplest problem of estimating the mean vector with minimum quadratic risk is unsolved, even for normal distributions. Commonly used standard linear multivariate procedures based on the inversion of sample covariance matrices can lead to unstable results or provide no solution, depending on the data. Programs included in standard statistical packages cannot process 'multi-collinear data' and there are no theoretical recommen ...
Counting and classifying attractors in high dimensional dynamical systems.
Bagley, R J; Glass, L
1996-12-07
Randomly connected Boolean networks have been used as mathematical models of neural, genetic, and immune systems. A key quantity of such networks is the number of basins of attraction in the state space. The number of basins of attraction changes as a function of the size of the network, its connectivity and its transition rules. In discrete networks, a simple count of the number of attractors does not reveal the combinatorial structure of the attractors. These points are illustrated in a reexamination of dynamics in a class of random Boolean networks considered previously by Kauffman. We also consider comparisons between dynamics in discrete networks and continuous analogues. A continuous analogue of a discrete network may have a different number of attractors for many different reasons. Some attractors in discrete networks may be associated with unstable dynamics, and several different attractors in a discrete network may be associated with a single attractor in the continuous case. Special problems in determining attractors in continuous systems arise when there is aperiodic dynamics associated with quasiperiodicity or deterministic chaos.
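Counting attractors by exhaustive state-space search is easy to reproduce on a toy random Boolean network. The sketch below uses a generic N-node, K-input network with random truth tables (the parameters are illustrative, not Kauffman's exact ensemble):

```python
import itertools
import random

random.seed(3)

# Toy random Boolean network: N nodes, each reading K randomly chosen inputs
# through a random truth table.
N, K = 6, 2
inputs = [random.sample(range(N), K) for _ in range(N)]
tables = [{bits: random.randint(0, 1) for bits in itertools.product((0, 1), repeat=K)}
          for _ in range(N)]

def step(state):
    """Synchronous update: every node applies its truth table to its inputs."""
    return tuple(tables[i][tuple(state[j] for j in inputs[i])] for i in range(N))

# Follow every initial state until a state repeats; the cycle entered is an
# attractor. Canonicalising each cycle lets us count distinct attractors.
attractors = set()
for state in itertools.product((0, 1), repeat=N):
    seen = {}
    while state not in seen:
        seen[state] = len(seen)
        state = step(state)
    cycle_start = seen[state]
    cycle = [s for s, t in seen.items() if t >= cycle_start]
    attractors.add(tuple(sorted(cycle)))   # canonical form: sorted state set

print(len(attractors), sorted(len(a) for a in attractors))
```

The number of distinct attractors and their cycle lengths are exactly the quantities the abstract discusses; for continuous analogues no such exhaustive enumeration is possible, which is where the subtleties it describes arise.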
Development of a coarse mesh code for the solution of two group static diffusion problems
International Nuclear Information System (INIS)
Barros, R.C. de.
1985-01-01
This new coarse mesh code, designed for the solution of 2- and 3-dimensional static diffusion problems, is based on an alternating direction method which consists of solving one-dimensional problems along each coordinate direction, with leakage terms for the remaining directions estimated from previous iterations. Four versions of this code have been developed: AD21 - 2D - 1/4, AD21 - 2D - 4/4, AD21 - 3D - 1/4 and AD21 - 3D - 4/4; these versions have been designed for 2- and 3-dimensional problems with or without 1/4 symmetry. (Author) [pt
Directory of Open Access Journals (Sweden)
Aurélie eVilenne
2015-11-01
Full Text Available Aims: Recent studies with animal models showed that the stimulant and sedative effects of alcohol change during the adolescent period. In humans, the stimulant effects of ethanol are most often indirectly recorded through the measurement of explicit and implicit alcohol effect expectancies. However, it is unknown how such implicit and explicit expectancies evolve with age in humans during adolescence. Methods: Adolescent (13-16 years old), young adult (17-18 years old) and adult (35-55 years old) participants were recruited. On the basis of their score on the Alcohol Use Disorder Identification Test (AUDIT), they were classified as non-problem (AUDIT ≤ 7) or problem (AUDIT ≥ 11) drinkers. The participants completed the Alcohol Expectancy Questionnaire (AEQ) and performed two unipolar Implicit Association Tests (IAT) to assess implicit associations between alcohol and the concepts of stimulation and sedation. Results: Problem drinkers from the three age groups reported significantly higher positive alcohol expectancies than non-problem drinkers on all AEQ subscales. Positive alcohol explicit expectancies also gradually decreased with age, with adolescent problem drinkers reporting especially high positive expectancies. This effect was statistically significant for all positive expectancies, with the exception of relaxation expectancies, which were only close to statistical significance. In contrast, stimulation and sedation alcohol implicit associations were not significantly different between problem and non-problem drinkers and did not change with age. Conclusions: These results indicate that explicit positive alcohol effect expectancies predict current alcohol consumption levels, especially in adolescents. Positive alcohol expectancies also gradually decrease with age in the three cross-sectional groups of adolescents, young adults and adults. This effect might be related to changes in the physiological response to alcohol.
Software Tools for Robust Analysis of High-Dimensional Data
Directory of Open Access Journals (Sweden)
Valentin Todorov
2014-06-01
Full Text Available The present work discusses robust multivariate methods specifically designed for high dimensions. Their implementation in R is presented and their application is illustrated on examples. The first group are algorithms for outlier detection, already introduced elsewhere and implemented in other packages. The value added of the new package is that all methods follow the same design pattern and thus can use the same graphical and diagnostic tools. The next topic covered is sparse principal components, including an object-oriented interface to the standard method proposed by Zou, Hastie, and Tibshirani (2006) and the robust one proposed by Croux, Filzmoser, and Fritz (2013). Robust partial least squares (see Hubert and Vanden Branden 2003) as well as partial least squares for discriminant analysis conclude the scope of the new package.
Robust and sparse correlation matrix estimation for the analysis of high-dimensional genomics data.
Serra, Angela; Coretto, Pietro; Fratello, Michele; Tagliaferri, Roberto; Stegle, Oliver
2018-02-15
Microarray technology can be used to study the expression of thousands of genes across a number of different experimental conditions, usually hundreds. The underlying principle is that genes sharing similar expression patterns, across different samples, can be part of the same co-expression system, or they may share the same biological functions. Groups of genes are usually identified based on cluster analysis. Clustering methods rely on the similarity matrix between genes. A common choice to measure similarity is to compute the sample correlation matrix. Dimensionality reduction is another popular data analysis task which is also based on covariance/correlation matrix estimates. Unfortunately, covariance/correlation matrix estimation suffers from the intrinsic noise present in high-dimensional data. Sources of noise are: sampling variations, the presence of outlying sample units, and the fact that in most cases the number of genes is much larger than the number of units. In this paper, we propose a robust correlation matrix estimator that is regularized based on adaptive thresholding. The resulting method jointly tames the effects of the high-dimensionality, and data contamination. Computations are easy to implement and do not require hand tunings. Both simulated and real data are analyzed. A Monte Carlo experiment shows that the proposed method is capable of remarkable performance. Our correlation metric is more robust to outliers compared with the existing alternatives in two gene expression datasets. It is also shown how the regularization allows spurious correlations to be detected and filtered automatically. The same regularization is also extended to other less robust correlation measures. Finally, we apply the ARACNE algorithm to the SyNTreN gene expression data. Sensitivity and specificity of the reconstructed network are compared with the gold standard. We show that ARACNE performs better when it takes the proposed correlation matrix estimator as input. The R
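The thresholding idea can be illustrated in a few lines. The sketch below uses a simple universal threshold on the sample correlation matrix; the paper's estimator is robust and entry-adaptive, which this toy version is not, and the data are synthetic:

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic data: 50 samples, 30 variables, with one genuinely correlated pair.
n, p = 50, 30
X = rng.standard_normal((n, p))
X[:, 1] = X[:, 0] + 0.3 * rng.standard_normal(n)

R = np.corrcoef(X, rowvar=False)

# Universal threshold of order sqrt(log p / n), a common choice in the
# thresholding literature; entries below it are set to zero, suppressing
# spurious correlations driven by sampling noise.
tau = np.sqrt(np.log(p) / n)
R_thr = np.where(np.abs(R) >= tau, R, 0.0)
np.fill_diagonal(R_thr, 1.0)

print(tau, R_thr[0, 1])
```

The genuinely correlated pair survives the threshold while most of the purely noise-driven entries are zeroed, which is the "automatic filtering of spurious correlations" behavior the abstract describes.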
Bayesian Inference of High-Dimensional Dynamical Ocean Models
Lin, J.; Lermusiaux, P. F. J.; Lolla, S. V. T.; Gupta, A.; Haley, P. J., Jr.
2015-12-01
This presentation addresses a holistic set of challenges in high-dimensional ocean Bayesian nonlinear estimation: i) predict the probability distribution functions (pdfs) of large nonlinear dynamical systems using stochastic partial differential equations (PDEs); ii) assimilate data using Bayes' law with these pdfs; iii) predict the future data that optimally reduce uncertainties; and iv) rank the known and learn the new model formulations themselves. Overall, we allow the joint inference of the state, equations, geometry, boundary conditions and initial conditions of dynamical models. Examples are provided for time-dependent fluid and ocean flows, including cavity, double-gyre and Strait flows with jets and eddies. The Bayesian model inference, based on limited observations, is illustrated first by the estimation of obstacle shapes and positions in fluid flows. Next, the Bayesian inference of biogeochemical reaction equations and of their states and parameters is presented, illustrating how PDE-based machine learning can rigorously guide the selection and discovery of complex ecosystem models. Finally, the inference of multiscale bottom gravity current dynamics is illustrated, motivated in part by classic overflows and dense water formation sites and their relevance to climate monitoring and dynamics. This is joint work with our MSEAS group at MIT.
Anderson, William L; Mitchell, Steven M; Osgood, Marcy P
2008-01-01
For the past 3 yr, faculty at the University of New Mexico, Department of Biochemistry and Molecular Biology have been using interactive online Problem-Based Learning (PBL) case discussions in our large-enrollment classes. We have developed an illustrative tracking method to monitor student use of problem-solving strategies to provide targeted help to groups and to individual students. This method of assessing performance has a high interrater reliability, and senior students, with training, can serve as reliable graders. We have been able to measure improvements in many students' problem-solving strategies, but, not unexpectedly, there is a population of students who consistently apply the same failing strategy when there is no faculty intervention. This new methodology provides an effective tool to direct faculty to constructively intercede in this area of student development.
Directory of Open Access Journals (Sweden)
N. V. Bezverkhniy
2015-01-01
Full Text Available The paper considers the possibility of building a one-way function in a small cancellation group. It uses the algorithm for solving the membership problem for a cyclic subgroup, also known as the discrete logarithm problem, and the algorithm for solving the word problem in this class of groups. Research is conducted using geometric methods of combinatorial group theory (the method of diagrams in groups). In public-channel exchange of information, one-way functions are used, for which direct calculation should be much less complicated than calculation of the inverse function. The paper considers the combination of two problems: discrete logarithm and conjugacy. This leads to the conjugate membership problem for a cyclic subgroup. The work proposes an algorithm based on this problem, which can be used as a basis for investigating the suitability of the corresponding one-way function for building a public key distribution scheme. The study used doughnut charts of word conjugacy, and for one special class of such charts a layer-based periodicity property has been proven. The presence of such a property obviously leads to a solution of the power conjugacy problem for words in the considered class of groups. Unfortunately, this study failed to show periodicity of an arbitrary doughnut chart, but for one of the two possible classes this periodicity has been proven. The process of building the one-way function considered in the paper was studied in terms of the possibility of calculating both the direct and inverse mappings; computational complexity was not considered. Thus, the following two tasks remain unresolved: determining the quality of the one-way function in the above public key distribution protocol, and completing the study of the periodicity of doughnut charts of word conjugacy, which would lead to a positive solution of the power conjugacy problem for words in the class of groups under consideration.
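The one-way asymmetry the construction relies on can be illustrated in the familiar modular-arithmetic setting of the discrete logarithm (a toy example with hypothetical small numbers, not the small cancellation group construction itself):

```python
# Toy discrete-logarithm one-way function: the forward map x -> g^x mod p is
# cheap, while recovering x from the result requires search.
p = 1_000_003          # a small prime modulus (hypothetical example)
g = 5                  # base
x = 271_828            # secret exponent

y = pow(g, x, p)       # forward direction: fast (square-and-multiply)

def brute_force_dlog(g, y, p):
    """Invert by exhaustive search; the cost grows with the size of p."""
    acc = 1
    for k in range(p):
        if acc == y:
            return k
        acc = acc * g % p
    return None

# For cryptographic-size moduli this search is infeasible; here p is tiny,
# so the inverse can still be found by enumeration.
print(y)
```

The paper's point is that the same easy-forward/hard-inverse asymmetry can potentially be obtained from the conjugate membership problem in small cancellation groups instead of modular arithmetic.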
Ticehurst, R L; Henry, R L
1989-02-01
Behavioural problems in preschool (1-4 years) children are a common cause of referral to health services. Parents of children presenting to the child development unit with behavioural problems (n = 18) were compared with a control group (n = 45). A questionnaire was utilized to examine the parents' expectations of the children's behaviours. As might be expected, the parents of children presenting to the Unit rated their children as having more difficult behaviours. These parents had unrealistic expectations, particularly for the 'negative' behaviours (disobedience, temper tantrums, defiance and whinging). However, they were able to anticipate normal age-related difficulties in some problem areas (dawdling during mealtimes, masturbating, not sharing toys and being jealous of one's siblings). Counselling should address the issue of matching the expectations of parents with the individual rates of development of their children.
Jeong, In Ju; Kim, Soo Jin
2017-04-01
The purpose of this study was to examine the effects of a group counseling program based on goal attainment theory on self-esteem, interpersonal relationships, and school adjustment of middle school students with emotional and behavioral problems. Forty-four middle school students with emotional and behavioral problems (22 in the experimental group and 22 in the control group) from G city participated in this study. Data were collected from July 30 to September 24, 2015. The experimental group received the 8-session program, scheduled once a week, with each session lasting 45 minutes. Outcome variables included self-esteem, interpersonal relationship, and school adjustment. There were significant increases in self-esteem (t=3.69, p=.001), interpersonal relationship (t=8.88, p<.001), and school adjustment in the experimental group compared to the control group. These results indicate that the group counseling program based on goal attainment theory is very effective in increasing self-esteem, interpersonal relationship, and school adjustment for middle school students with emotional and behavioral problems. Therefore, it is recommended that the group counseling program based on goal attainment theory be used as an effective psychiatric nursing intervention for mental health promotion and the prevention of mental illness in adolescents. © 2017 Korean Society of Nursing Science
de Jager, J; Wolters, H A; Pijnenborg, G H M
2016-01-01
Research has shown that young adults with psychotic disorders frequently have problems relating to sexuality, intimacy and relationships. Such problems are often neglected in clinical practice. To perform a study that explores, on the basis of focus groups, how issues such as sexuality, intimacy and relationships can be addressed as part of the treatment of adolescents suffering from a psychotic disorder. We created eight focus groups consisting of clients attending the department of psychotic disorders and caregivers who worked there. The meetings of each focus group were fully transcribed and analysed by means of Nvivo. Clients indicated they wanted to address the topics of sexuality, intimacy and relationships in a group setting. They expressed the wish to have mixed gender groups and decided that in the group discussions the main focus should be on the exchange of personal experiences. In our view, it is desirable that psychiatry should pay more attention to the subject of sexuality. By giving adolescents suffering from psychotic disorders the opportunity to discuss their experiences, problems and feelings of insecurity in a group setting and in a low-threshold environment, psychiatrists can greatly improve the quality of care that they provide for their patients.
International Nuclear Information System (INIS)
Tripathy, Rohit; Bilionis, Ilias; Gonzalez, Marcial
2016-01-01
Uncertainty quantification (UQ) tasks, such as model calibration, uncertainty propagation, and optimization under uncertainty, typically require several thousand evaluations of the underlying computer codes. To cope with the cost of simulations, one replaces the real response surface with a cheap surrogate based, e.g., on polynomial chaos expansions, neural networks, support vector machines, or Gaussian processes (GP). However, the number of simulations required to learn a generic multivariate response grows exponentially as the input dimension increases. This curse of dimensionality can only be addressed if the response exhibits some special structure that can be discovered and exploited. A wide range of physical responses exhibit a special structure known as an active subspace (AS). An AS is a linear manifold of the stochastic space characterized by maximal response variation. The idea is that one should first identify this low-dimensional manifold, project the high-dimensional input onto it, and then link the projection to the output. If the dimensionality of the AS is low enough, then learning the link function is a much easier problem than the original problem of learning a high-dimensional function. The classic approach to discovering the AS requires gradient information, a fact that severely limits its applicability. Furthermore, and partly because of its reliance on gradients, it is not able to handle noisy observations. The latter is an essential trait if one wants to be able to propagate uncertainty through stochastic simulators, e.g., through molecular dynamics codes. In this work, we develop a probabilistic version of AS which is gradient-free and robust to observational noise. Our approach relies on a novel Gaussian process regression with built-in dimensionality reduction. In particular, the AS is represented as an orthogonal projection matrix that serves as yet another covariance function hyper-parameter to be estimated from the data. To train the
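For contrast with the gradient-free approach proposed here, the classical gradient-based active subspace recipe is compact: estimate C = E[grad f grad f^T] by Monte Carlo and take the dominant eigenvectors. The sketch below uses an illustrative synthetic function whose response varies only along one direction w (all names and settings are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic ridge function f(x) = sin(w @ x): it varies only along w, so the
# active subspace is the one-dimensional span of w.
D = 10
w = np.zeros(D)
w[0], w[1] = 3.0, 4.0
w /= np.linalg.norm(w)

def grad_f(x):
    # f(x) = sin(w @ x)  =>  grad f(x) = cos(w @ x) * w
    return np.cos(w @ x) * w

# Monte Carlo estimate of C = E[grad f grad f^T]; its dominant eigenvectors
# span the active subspace (this is the gradient-dependent classic approach).
G = np.array([grad_f(rng.standard_normal(D)) for _ in range(500)])
C = G.T @ G / len(G)
eigvals, eigvecs = np.linalg.eigh(C)          # ascending eigenvalues
active_dir = eigvecs[:, -1]                   # top eigenvector

print(np.abs(active_dir @ w))                 # ~1: the direction w is recovered
```

The dependence on exact gradients of f is precisely the limitation motivating the gradient-free, noise-robust GP formulation developed in the abstract.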
Tripathy, Rohit; Bilionis, Ilias; Gonzalez, Marcial
2016-09-01
Uncertainty quantification (UQ) tasks, such as model calibration, uncertainty propagation, and optimization under uncertainty, typically require several thousand evaluations of the underlying computer codes. To cope with the cost of simulations, one replaces the real response surface with a cheap surrogate based, e.g., on polynomial chaos expansions, neural networks, support vector machines, or Gaussian processes (GP). However, the number of simulations required to learn a generic multivariate response grows exponentially as the input dimension increases. This curse of dimensionality can only be addressed if the response exhibits some special structure that can be discovered and exploited. A wide range of physical responses exhibit a special structure known as an active subspace (AS). An AS is a linear manifold of the stochastic space characterized by maximal response variation. The idea is that one should first identify this low-dimensional manifold, project the high-dimensional input onto it, and then link the projection to the output. If the dimensionality of the AS is low enough, then learning the link function is a much easier problem than the original problem of learning a high-dimensional function. The classic approach to discovering the AS requires gradient information, a fact that severely limits its applicability. Furthermore, and partly because of its reliance on gradients, it is not able to handle noisy observations. The latter is an essential trait if one wants to be able to propagate uncertainty through stochastic simulators, e.g., through molecular dynamics codes. In this work, we develop a probabilistic version of AS which is gradient-free and robust to observational noise. Our approach relies on a novel Gaussian process regression with built-in dimensionality reduction. In particular, the AS is represented as an orthogonal projection matrix that serves as yet another covariance function hyper-parameter to be estimated from the data. To train the
Energy Technology Data Exchange (ETDEWEB)
Tripathy, Rohit, E-mail: rtripath@purdue.edu; Bilionis, Ilias, E-mail: ibilion@purdue.edu; Gonzalez, Marcial, E-mail: marcial-gonzalez@purdue.edu
2016-09-15
Uncertainty quantification (UQ) tasks, such as model calibration, uncertainty propagation, and optimization under uncertainty, typically require several thousand evaluations of the underlying computer codes. To cope with the cost of simulations, one replaces the real response surface with a cheap surrogate based, e.g., on polynomial chaos expansions, neural networks, support vector machines, or Gaussian processes (GP). However, the number of simulations required to learn a generic multivariate response grows exponentially as the input dimension increases. This curse of dimensionality can only be addressed if the response exhibits some special structure that can be discovered and exploited. A wide range of physical responses exhibit a special structure known as an active subspace (AS). An AS is a linear manifold of the stochastic space characterized by maximal response variation. The idea is that one should first identify this low-dimensional manifold, project the high-dimensional input onto it, and then link the projection to the output. If the dimensionality of the AS is low enough, then learning the link function is a much easier problem than the original problem of learning a high-dimensional function. The classic approach to discovering the AS requires gradient information, a fact that severely limits its applicability. Furthermore, and partly because of its reliance on gradients, it is not able to handle noisy observations. The latter is an essential trait if one wants to be able to propagate uncertainty through stochastic simulators, e.g., through molecular dynamics codes. In this work, we develop a probabilistic version of AS which is gradient-free and robust to observational noise. Our approach relies on a novel Gaussian process regression with built-in dimensionality reduction. In particular, the AS is represented as an orthogonal projection matrix that serves as yet another covariance function hyper-parameter to be estimated from the data. To train the
A Combined group EA-PROMETHEE method for a supplier selection problem
Directory of Open Access Journals (Sweden)
Hamid Reza Rezaee Kelidbari
2016-07-01
Full Text Available One of the important decisions that impacts all of a firm's activities is the supplier selection problem. Since the 1950s, several works have addressed this problem by treating different aspects and instances. In this paper, a combined multiple criteria decision making (MCDM) technique (EA-PROMETHEE) has been applied to support proper decision making. To this aim, after reviewing the theoretical background regarding supplier selection, the extension analysis (EA) is used to determine the importance of criteria and PROMETHEE for the appraisal of suppliers based on those criteria. An empirical example illustrates the proposed approach.
A PLG (Professional Learning Group): How to Stimulate Learners' Engagement in Problem-Solving
Sheety, Alia; Rundell, Frida
2012-01-01
This paper aims to describe, discuss, and reflect on the use of PLGs (professional learning groups) in higher education as a practice for enhancing student learning and team building. It will use theories supporting group-learning processes, explore optimal social contexts that enhance team collaboration, and reflect on the practice of PLGs. The…
Social Reform Groups and the Legal System: Enforcement Problems. Discussion Paper No. 209-74.
Handler, Joel F.
During the last two decades, there has been a great increase in the use of litigation by social reform groups. This activity has been stimulated by the hospitality of the courts to the demands of social reform groups and the availability of subsidized young, activist lawyers. The paper examines the uses of the legal system by social reform groups…
DEFF Research Database (Denmark)
Zhou, Chunfang; Kolmos, Anette; Nielsen, Jens Frederik Dalsgaard
2012-01-01
in multiple ways in a PBL environment, such as formal and informal group discussions, regular supervisor meetings and sharing leadership. Furthermore, factors such as common goals, support of peers and openness stimulate motivation. However, the students think that a time schedule is a barrier to group...
ASPECTS OF MODERN ETHNIC SITUATION IN GEORGIA: PROBLEMS OF MINORITY GROUPS (BEZHTINTSY CASE)
Directory of Open Access Journals (Sweden)
M. Sh. Sheikhov
2012-01-01
Full Text Available The article examines the current socio-economic status of the Bezhtintsy, one of the ethnic groups of Dagestan who, after the collapse of the Soviet Union, found themselves living on their own lands in a "foreign" country. It describes the problems faced by the Bezhtintsy in Georgia and offers a way out.
Murray, Lynn M.
2012-01-01
Live-client projects are increasingly used in marketing coursework. However, students, instructors, and clients are often disappointed by the results. This paper reports an approach drawn from the problem-based learning, scaffolding, and team formation and coaching literatures that uses a series of workshops designed to guide students in…
The Effects of Group Monitoring on Fatigue-Related Einstellung during Mathematical Problem Solving
Frings, Daniel
2011-01-01
Fatigue resulting from sleep deficit can lead to decreased performance in a variety of cognitive domains and can result in potentially serious accidents. The present study aimed to test whether fatigue leads to increased Einstellung (low levels of cognitive flexibility) in a series of mathematical problem-solving tasks. Many situations involving…
Group composition and its effect on female and male problem-solving in science education
Harskamp, Egbert; Ding, Ning; Suhre, Cor
2008-01-01
Background: Cooperative learning may help students elaborate upon problem information through interpersonal discourse, and this may provoke a higher level of thinking. Interaction stimulates students to put forward and order their thoughts, and to understand the ideas or questions of their peers…
Jogdand, Sandip S; Naik, Jd
2014-07-01
The 'behaviour problems' have a major impact on a child's bodily and social development. The family provides emotional support to an individual and plays a major role in the formation of one's personality. The quality and nature of the parental nurturance that the child receives will profoundly influence his or her future development. Knowledge of the family factors associated with behaviour problems may be helpful to identify at-risk children. Objective: to study the family factors associated with behaviour problems amongst children of the 6-18 years age group. Setting: an adopted urban slum area of Govt. Medical College, Miraj, Dist. Sangli. Design: cross-sectional study. Methods: the sample size was calculated based upon the 40% prevalence obtained in a pilot study; a total of 600 children in the age group of 6-18 years residing in the urban slum area and their parents were interviewed with the help of a predesigned, pretested proforma. Statistical analysis: chi-square test and risk estimate with odds ratio. Results: our study reveals a significant association between the prevalence of behaviour problems and the absence of either or both biological parents and alcoholism in the parent or caretaker. Behaviour problems have a good prognosis if they are recognized early. The family has a great role in the prevention of behaviour problems in children, so parental counseling may be helpful.
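The analysis this record mentions (chi-square test plus a risk estimate with an odds ratio) reduces to simple arithmetic on a 2×2 table. A minimal sketch follows; the counts are purely hypothetical and are not taken from the study.

```python
def odds_ratio(a, b, c, d):
    """Odds ratio for a 2x2 table [[a, b], [c, d]] (exposed/unexposed x cases/controls)."""
    return (a * d) / (b * c)

def chi_square(a, b, c, d):
    """Pearson chi-square statistic (1 df, no continuity correction)."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# Hypothetical table: behaviour problems vs parental alcoholism
# (illustrative numbers only, not the study's data)
a, b, c, d = 20, 80, 10, 90
print(odds_ratio(a, b, c, d))             # 2.25
print(round(chi_square(a, b, c, d), 3))   # 3.922
```

An odds ratio above 1 with a chi-square statistic exceeding the 1-df critical value (3.84 at the 5% level) is the pattern reported as a "significant association".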
A Comparison of Methods for Estimating the Determinant of High-Dimensional Covariance Matrix
Hu, Zongliang; Dong, Kai; Dai, Wenlin; Tong, Tiejun
2017-09-27
The determinant of the covariance matrix for high-dimensional data plays an important role in statistical inference and decision. It has many real applications including statistical tests and information theory. Due to the statistical and computational challenges with high dimensionality, little work has been proposed in the literature for estimating the determinant of high-dimensional covariance matrix. In this paper, we estimate the determinant of the covariance matrix using some recent proposals for estimating high-dimensional covariance matrix. Specifically, we consider a total of eight covariance matrix estimation methods for comparison. Through extensive simulation studies, we explore and summarize some interesting comparison results among all compared methods. We also provide practical guidelines based on the sample size, the dimension, and the correlation of the data set for estimating the determinant of high-dimensional covariance matrix. Finally, from a perspective of the loss function, the comparison study in this paper may also serve as a proxy to assess the performance of the covariance matrix estimation.
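When the dimension exceeds the sample size, the sample covariance is singular and its log-determinant is undefined, which is what drives the estimators compared in this paper. A minimal NumPy sketch of one simple remedy, linear shrinkage toward a scaled identity, is shown below; it is an illustration of the problem, not necessarily one of the eight methods the paper compares.

```python
import numpy as np

def shrinkage_logdet(X, lam=0.1):
    """Log-determinant of a shrinkage covariance estimate.

    Shrinks the sample covariance S toward mu*I (mu = mean eigenvalue of S),
    which keeps the estimate positive definite even when p > n.
    """
    n, p = X.shape
    S = np.cov(X, rowvar=False)
    mu = np.trace(S) / p
    sigma = (1.0 - lam) * S + lam * mu * np.eye(p)
    sign, logdet = np.linalg.slogdet(sigma)
    assert sign > 0  # the shrinkage target guarantees positive definiteness
    return logdet

# p > n: the raw sample covariance is singular (log-determinant diverges),
# but the shrunk estimate has a finite log-determinant.
rng = np.random.default_rng(1)
X = rng.standard_normal((20, 50))
print(np.isfinite(shrinkage_logdet(X)))  # True
```

The shrinkage intensity `lam` is a tuning constant here; data-driven choices of it are exactly the kind of design decision the paper's comparison addresses.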
A Hybrid Semi-Supervised Anomaly Detection Model for High-Dimensional Data
Directory of Open Access Journals (Sweden)
Hongchao Song
2017-01-01
Full Text Available Anomaly detection, which aims to identify observations that deviate from a nominal sample, is a challenging task for high-dimensional data. Traditional distance-based anomaly detection methods compute the neighborhood distance between observations and suffer from the curse of dimensionality in high-dimensional space: the distances between any pair of samples become similar, so every sample may appear to be an outlier. In this paper, we propose a hybrid semi-supervised anomaly detection model for high-dimensional data that consists of two parts: a deep autoencoder (DAE) and an ensemble of k-nearest neighbor graph (K-NNG) based anomaly detectors. Benefiting from its ability of nonlinear mapping, the DAE is first trained to learn the intrinsic features of a high-dimensional dataset, so as to represent the high-dimensional data in a more compact subspace. Several nonparametric K-NNG-based anomaly detectors are then built from different subsets that are randomly sampled from the whole dataset. The final prediction is made by combining all the anomaly detectors. The performance of the proposed method is evaluated on several real-life datasets, and the results confirm that the proposed hybrid model improves the detection accuracy and reduces the computational complexity.
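The distance-based scoring stage of such a detector can be sketched in pure Python: score each point by its mean distance to its k nearest neighbours and flag the highest scores. This is a brute-force stand-in for the K-NNG detectors only; the deep-autoencoder compression stage is omitted, and the data are hypothetical.

```python
import math

def knn_scores(points, k=3):
    """Anomaly score = mean distance to the k nearest neighbours.

    Brute-force O(n^2) illustration; scores are computed on raw features
    rather than on autoencoder-compressed ones.
    """
    scores = []
    for i, p in enumerate(points):
        dists = sorted(math.dist(p, q) for j, q in enumerate(points) if j != i)
        scores.append(sum(dists[:k]) / k)
    return scores

# A tight cluster near the origin plus one distant observation:
data = [(0, 0), (0.1, 0), (0, 0.1), (0.1, 0.1), (5, 5)]
scores = knn_scores(data, k=2)
print(scores.index(max(scores)))  # 4: the far point gets the largest score
```

Running several such detectors on random subsets and combining their votes gives the ensemble behaviour the abstract describes.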
Maes, Marlies; Stevens, Gonneke W. J. M.; Verkuyten, Maykel
2014-01-01
Previous research has identified ethnic group identification as a moderator in the relationship between perceived ethnic discrimination and problem behaviors in ethnic minority children. However, little is known about the influence of religious and host national identification on this relationship. This study investigated the moderating role of…
Ojiambo, Deborah
2011-01-01
This pilot study investigated the impact of group activity play therapy (GAPT) on displaced orphans aged 10 to 12 years living in a large children's village in Uganda. Teachers and housemothers identified 60 preadolescents exhibiting clinical levels of internalizing and externalizing behavior problems. The participants' ethnicity was African and…
Skalická, Vera; Belsky, Jay; Stenseng, Frode; Wichstrøm, Lars
2015-01-01
In this Norwegian study, bidirectional relations between children's behavior problems and child-teacher conflict and closeness were examined, and the possibility of moderation of these associations by child-care group size was tested. Eight hundred and nineteen 4-year-old children were followed up in first grade. Results revealed reciprocal…
Khumsikiew, Jeerisuda; Donsamak, Sisira; Saeteaw, Manit
2015-01-01
Problem-based learning (PBL) is an alternative method of instruction that incorporates basic elements of cognitive learning theory. Colleges of pharmacy use PBL to support anticipated learning outcomes and practice competencies for pharmacy students. The purpose of this study was to implement and evaluate a model of small-group PBL for 5th year pharmacy…
Møller, Kim Malmbak Meltofte; Fast, Michael
2017-01-01
The purpose of this article is to discuss the relationship between learning, epistemology, and intersubjectivity in the context of problem-based learning and project-oriented work at a university level. It aims to show how the collaboration of students in a group over a long period of time can put emphasis on the knowledge-practice discussion, and…
Directory of Open Access Journals (Sweden)
Kateryna Novokhatska
2016-03-01
Full Text Available In recent years, materialized views (MVs) have been widely used to enhance database performance by storing pre-calculated results of resource-intensive queries in physical memory. In order to identify which queries may potentially be materialized, the database transaction log for a long period of time should be analyzed. The goal of the analysis is to distinguish resource-intensive and frequently used queries collected from the database log, and to optimize these queries by the implementation of MVs. In order to achieve greater efficiency of MVs, they were used not only for the optimization of single queries, but also for entire groups of queries that are similar in syntax and execution results. Thus, the problem stated in this article is the development of an approach for forming groups of queries with similar syntax around the most resource-intensive queries in order to identify the list of potential candidates for materialization. To solve this problem, we applied a categorical data clustering algorithm to the query grouping problem at the stage of database log analysis and the search for materialization candidates. In the current work, the CLOPE algorithm was modified to cover the introduced problem. Statistical and timing indicators were taken into account in order to form the clusters around the most resource-intensive queries. The application of the modified CLOPE algorithm decreased the computational complexity of clustering and enhanced the quality of the formed groups.
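The core of the unmodified CLOPE algorithm is small enough to sketch: clusters keep item histograms, and each transaction (here, the set of identifiers appearing in a query) joins the cluster whose profit term S(C)·|C|/W(C)^r increases most. This is a simplified single-pass version without the article's statistical and timing weights; the toy query groups are illustrative.

```python
def clope(transactions, r=2.0):
    """Greedy one-pass CLOPE-style clustering of item sets."""
    clusters = []  # each cluster: {"items": histogram, "n": count, "S": size sum}

    def gain(cluster, t):
        # Profit change from placing transaction t into `cluster`
        # (None means opening a new cluster for t alone).
        if cluster is None:
            return len(t) / (len(set(t)) ** r)
        S, n = cluster["S"], cluster["n"]
        W = len(cluster["items"])
        old = S * n / (W ** r) if n else 0.0
        W_new = len(set(cluster["items"]) | set(t))
        return (S + len(t)) * (n + 1) / (W_new ** r) - old

    for t in transactions:
        best, best_gain = None, gain(None, t)
        for c in clusters:
            g = gain(c, t)
            if g > best_gain:
                best, best_gain = c, g
        if best is None:
            best = {"items": {}, "n": 0, "S": 0}
            clusters.append(best)
        for item in t:
            best["items"][item] = best["items"].get(item, 0) + 1
        best["n"] += 1
        best["S"] += len(t)
    return clusters

# Two syntactic "families" of queries, encoded as item sets:
queries = [{"a", "b"}, {"a", "b", "c"}, {"d", "e"}, {"d", "e", "f"}]
print([c["n"] for c in clope(queries)])  # [2, 2]: similar queries group together
```

The repulsion parameter `r` controls how strongly overlapping item sets are pulled into the same cluster; larger values favour tighter clusters.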
Method of resonating groups in the Faddeev-Hahn equation formalism for three-body nuclear problem
Nasirov, M Z
2002-01-01
The Faddeev-Hahn equation formalism for the three-body nuclear problem is considered. To solve the equations, the method of resonating groups has been applied. Calculations of the tritium binding energy and the doublet nd-scattering length have been carried out. The results obtained show that the Faddeev-Hahn equation formalism is very simple and effective. (author)
Directory of Open Access Journals (Sweden)
Datta Susmita
2010-08-01
Full Text Available Abstract Background Generally speaking, different classifiers tend to work well for certain types of data and, conversely, it is usually not known a priori which algorithm will be optimal in any given classification application. In addition, for most classification problems, selecting the best performing classification algorithm amongst a number of competing algorithms is a difficult task for various reasons. For example, the order of performance may depend on the performance measure employed for such a comparison. In this work, we present a novel adaptive ensemble classifier constructed by combining bagging and rank aggregation that is capable of adaptively changing its performance depending on the type of data that is being classified. The attractive feature of the proposed classifier is its multi-objective nature, where the classification results can be simultaneously optimized with respect to several performance measures, for example, accuracy, sensitivity and specificity. We also show that our somewhat complex strategy has better predictive performance as judged on test samples than a more naive approach that attempts to directly identify the optimal classifier based on the training data performances of the individual classifiers. Results We illustrate the proposed method with two simulated and two real-data examples. In all cases, the ensemble classifier performs at the level of the best individual classifier comprising the ensemble or better. Conclusions For complex high-dimensional datasets resulting from present day high-throughput experiments, it may be wise to consider a number of classification algorithms combined with dimension reduction techniques rather than a fixed standard algorithm set a priori.
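The rank-aggregation step can be sketched independently of the bagging stage: rank each classifier under each performance measure, award Borda points, and pick the classifier with the largest total. The classifier names and scores below are hypothetical, and this is only one simple aggregation rule among those the paper's framework admits.

```python
def borda_aggregate(scores):
    """Aggregate classifier rankings across several performance measures.

    scores: {classifier: [measure1, measure2, ...]}, higher is better.
    Each measure contributes a Borda count (best rank earns most points);
    the classifier with the largest total wins.
    """
    names = list(scores)
    n_measures = len(next(iter(scores.values())))
    totals = {name: 0 for name in names}
    for j in range(n_measures):
        ranked = sorted(names, key=lambda name: scores[name][j], reverse=True)
        for pts, name in enumerate(reversed(ranked)):
            totals[name] += pts
    return max(totals, key=totals.get)

perf = {  # hypothetical accuracy / sensitivity / specificity values
    "svm":    [0.90, 0.80, 0.95],
    "forest": [0.92, 0.85, 0.90],
    "knn":    [0.85, 0.75, 0.85],
}
print(borda_aggregate(perf))  # forest: best on 2 of the 3 measures
```

Because all measures enter the aggregation simultaneously, no single metric (e.g. accuracy alone) dictates the winner, which is the multi-objective behaviour the abstract emphasizes.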
Niec, Larissa N; Barnett, Miya L; Prewett, Matthew S; Shanley Chatham, Jenelle R
2016-08-01
Although efficacious interventions exist for childhood conduct problems, a majority of families in need of services do not receive them. To address problems of treatment access and adherence, innovative adaptations of current interventions are needed. This randomized control trial investigated the relative efficacy of a novel format of parent-child interaction therapy (PCIT), a treatment for young children with conduct problems. Eighty-one families with 3- to 6-year-old children (71.6% boys, 85.2% White) with diagnoses of oppositional defiant or conduct disorder were randomized to individual PCIT (n = 42) or the novel format, Group PCIT. Parents completed standardized measures of children's conduct problems, parenting stress, and social support at intake, posttreatment, and 6-month follow-up. Therapist ratings, parent attendance, and homework completion provided measures of treatment adherence. Throughout treatment, parenting skills were assessed using the Dyadic Parent-Child Interaction Coding System. Parents in both group and individual PCIT reported significant improvements from intake to posttreatment and follow-up in their children's conduct problems and adaptive functioning, as well as significant decreases in parenting stress. Parents in both treatment conditions also showed significant improvements in their parenting skills. There were no interactions between time and treatment format. Contrary to expectation, parents in Group PCIT did not experience greater social support or treatment adherence. Group PCIT was not inferior to individual PCIT and may be a valuable format to reach more families in need of services. Future work should explore the efficiency and sustainability of Group PCIT in community settings. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Niec, Larissa N.; Barnett, Miya L.; Prewett, Matthew S.; Shanley, Jenelle
2016-01-01
Objective Although efficacious interventions exist for childhood conduct problems, a majority of families in need of services do not receive them. To address problems of treatment access and adherence, innovative adaptations of current interventions are needed. This randomized control trial investigated the relative efficacy of a novel format of parent-child interaction therapy (PCIT), a treatment for young children with conduct problems. Methods Eighty-one families with three- to six-year-old children (71.6% male; 85.2% Caucasian) with diagnoses of oppositional defiant or conduct disorder were randomized to individual PCIT (n = 42) or the novel format, group PCIT. Parents completed standardized measures of children’s conduct problems, parenting stress, and social support at intake, posttreatment, and six-month follow-up. Therapist ratings, parent attendance, and homework completion provided measures of treatment adherence. Throughout treatment, parenting skills were assessed using the Dyadic Parent-Child Interaction Coding System. Results Parents in both group and individual PCIT reported significant improvements from intake to posttreatment and follow-up in their children’s conduct problems and adaptive functioning, as well as significant decreases in parenting stress. Parents in both treatment conditions also showed significant improvements in their parenting skills. There were no interactions between time and treatment format. Contrary to expectation, parents in group PCIT did not experience greater social support or treatment adherence. Conclusions Group PCIT was not inferior to individual PCIT and may be a valuable format to reach more families in need of services. Future work should explore the efficiency and sustainability of group PCIT in community settings. PMID:27018531
Directory of Open Access Journals (Sweden)
Ke Zhang
2003-10-01
Full Text Available Abstract. This study investigated the relative benefits of peer-controlled and moderated online collaboration during group problem solving. Thirty-five self-selected groups of four or five students were randomly assigned to the two conditions, which used the same online collaborative tool to solve twelve problem scenarios in an undergraduate statistics course. A score for the correctness of the solutions and a reasoning score were analyzed. A survey was administered to reveal differences in students' related attitudes. Three conclusions were reached: 1. Groups assigned to moderated forums displayed significantly higher reasoning scores than those in the peer-controlled condition, but the moderation did not affect correctness of solutions. 2. Students in the moderated forums reported being more likely to choose to use an optional online forum for future collaborations. 3. Students who reported having no difficulty during collaboration reported being more likely to choose to use an optional online forum in the future.
THE PROBLEMS OF SOCIALIZATION OF CHILDREN IN A CHILDREN’S MUSICAL GROUP
Directory of Open Access Journals (Sweden)
Larysa Ostapenko
2017-07-01
Full Text Available The article studies the process of the socialization of children in a musical group. The author has studied diverse factors of the socialization of children and its types (spontaneous socialization, relatively controlled socialization, and socially controlled socialization). The author also characterizes the stimulation of creative activity and describes the need to be accepted by peers, which is realized through participation in children's music festivals. The notion of socialization is defined as a complex process of a child's personality development, especially during the school/teen age, whereby an individual acquires a personal identity and learns the norms, values, behavior, and social skills appropriate to his or her social position in the context of a musical group. Educational work conducted by teachers, family members, and society contributes to this process. School education in terms of a musical group consists of activities organised in order to educate personality traits through the organization of practical creative communication. Schoolchildren's interpersonal relations are always based on social relations. It is shown that the personality development in a children's musical group takes place in a social environment and through social communication. The key role here belongs to the motivation and the stimulation of the schoolchildren's creative activity. Creative communication in a children's musical group turns out to be a powerful inner stimulus for children to fulfil their abilities: it pushes a child towards self-assertion and the gaining of authority among peers. The article argues that pedagogical guidance of the creative process can be provided professionally only in a well-organised musical group.
International Nuclear Information System (INIS)
Zhang, Wuhong; Su, Ming; Wu, Ziwen; Lu, Meng; Huang, Bingwei; Chen, Lixiang
2013-01-01
Twisted photons enable the definition of a Hilbert space beyond two dimensions by orbital angular momentum (OAM) eigenstates. Here we propose a feasible entanglement concentration experiment to enhance the quality of high-dimensional entanglement shared by twisted photon pairs. Our approach starts from the full characterization of the entangled spiral bandwidth, and is then based on the careful selection of the Laguerre–Gaussian (LG) modes with specific radial and azimuthal indices p and ℓ. In particular, we demonstrate the possibility of high-dimensional entanglement concentration residing in the OAM subspace of up to 21 dimensions. By means of LabVIEW simulations with spatial light modulators, we show that the Shannon dimensionality could be employed to quantify the quality of the present concentration. Our scheme holds promise for quantum information applications defined in high-dimensional Hilbert space. (letter)
Linear stability theory as an early warning sign for transitions in high dimensional complex systems
International Nuclear Information System (INIS)
Piovani, Duccio; Grujić, Jelena; Jensen, Henrik Jeldtoft
2016-01-01
We analyse in detail a new approach to the monitoring and forecasting of the onset of transitions in high-dimensional complex systems by application to the Tangled Nature model of evolutionary ecology and to high-dimensional replicator systems with a stochastic element. A high-dimensional stability matrix is derived in the mean-field approximation to the stochastic dynamics. This allows us to determine the stability spectrum about the observed quasi-stable configurations. From the overlap of the instantaneous configuration vector of the full stochastic system with the eigenvectors of the unstable directions of the deterministic mean-field approximation, we are able to construct a good early-warning indicator of the transitions occurring intermittently. (paper)
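The core quantity behind such an early-warning indicator is the largest real part of the eigenvalues of the stability (Jacobian) matrix: a quasi-stable state loses stability as this value approaches zero from below. The matrix below is a toy 2×2 linear system, not the Tangled Nature mean-field Jacobian.

```python
import numpy as np

def leading_real_part(J):
    """Largest real part of the eigenvalues of a stability matrix J."""
    return max(np.linalg.eigvals(J).real)

# As the control parameter a grows toward 0, the fixed point of this
# 2x2 system approaches instability: the eigenvalues are a +/- i.
for a in (-1.0, -0.1, 0.5):
    J = np.array([[a, -1.0], [1.0, a]])
    print(a, leading_real_part(J))
```

Tracking this scalar over time, and comparing the system's state against the corresponding unstable eigenvectors, is the overlap construction the abstract describes.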
Fickler, Robert; Lapkiewicz, Radek; Huber, Marcus; Lavery, Martin P J; Padgett, Miles J; Zeilinger, Anton
2014-07-30
Photonics has become a mature field of quantum information science, where integrated optical circuits offer a way to scale the complexity of the set-up as well as the dimensionality of the quantum state. On photonic chips, paths are the natural way to encode information. To distribute those high-dimensional quantum states over large distances, transverse spatial modes, like orbital angular momentum possessing Laguerre Gauss modes, are favourable as flying information carriers. Here we demonstrate a quantum interface between these two vibrant photonic fields. We create three-dimensional path entanglement between two photons in a nonlinear crystal and use a mode sorter as the quantum interface to transfer the entanglement to the orbital angular momentum degree of freedom. Thus our results show a flexible way to create high-dimensional spatial mode entanglement. Moreover, they pave the way to implement broad complex quantum networks where high-dimensionally entangled states could be distributed over distant photonic chips.
Group theoretic approach for solving the problem of diffusion of a drug through a thin membrane
Abd-El-Malek, Mina B.; Kassem, Magda M.; Meky, Mohammed L. M.
2002-03-01
The transformation group theoretic approach is applied to study the diffusion process of a drug through a skin-like membrane which tends to partially absorb the drug. Two cases are considered for the diffusion coefficient. The application of one parameter group reduces the number of independent variables by one, and consequently the partial differential equation governing the diffusion process with the boundary and initial conditions is transformed into an ordinary differential equation with the corresponding conditions. The obtained differential equation is solved numerically using the shooting method, and the results are illustrated graphically and in tables.
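The numerical step the abstract mentions, solving the reduced ordinary differential equation by the shooting method, can be sketched on a model boundary value problem. The sketch below uses y'' = y with y(0) = 0, y(1) = 1 (whose exact unknown slope is y'(0) = 1/sinh 1) rather than the article's drug-diffusion equation; only the right-hand side would change.

```python
import math

def rk4(f, y, x, h):
    """One fourth-order Runge-Kutta step for the system y' = f(x, y)."""
    k1 = f(x, y)
    k2 = f(x + h / 2, [yi + h / 2 * ki for yi, ki in zip(y, k1)])
    k3 = f(x + h / 2, [yi + h / 2 * ki for yi, ki in zip(y, k2)])
    k4 = f(x + h, [yi + h * ki for yi, ki in zip(y, k3)])
    return [yi + h / 6 * (a + 2 * b + 2 * c + d)
            for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]

def shoot(slope, steps=100):
    """Integrate y'' = y, y(0) = 0 with trial slope y'(0); return y(1)."""
    f = lambda x, y: [y[1], y[0]]
    y, h = [0.0, slope], 1.0 / steps
    for i in range(steps):
        y = rk4(f, y, i * h, h)
    return y[0]

# Bisection on the unknown initial slope until the far boundary y(1) = 1 is hit.
lo, hi = 0.0, 2.0
for _ in range(50):
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if shoot(mid) < 1.0 else (lo, mid)
print(abs(mid - 1 / math.sinh(1)) < 1e-6)  # True: recovered y'(0)
```

Bisection works here because y(1) increases monotonically with the trial slope; for nonlinear diffusion coefficients a secant or Newton update on the slope is the usual refinement.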
The Effects of Problem-Focused Group Counseling for Early-Stage Gynecologic Cancer Patients.
Wenzel, Lari B.; And Others
1995-01-01
Compared the effect of a 5-week group counseling treatment to an information-only control condition for 37 women with early-stage gynecologic cancer. Women completed various measures related to mood, adjustment, and coping one week before treatment, at the last session, and at one month follow up. Differences are reported. (JBJ)
Skowron, Elizabeth A.
2004-01-01
This study focused on examining the cross-cultural validity of Bowen family systems theory (M. Bowen, 1978), namely differentiation of self for individuals of color. Ethnic minority men and women completed measures of differentiation of self, ethnic group belonging, and 3 indices of personal adjustment. Initial support for the cross-cultural…
A Hybrid Neutrosophic Group ANP-TOPSIS Framework for Supplier Selection Problems
Directory of Open Access Journals (Sweden)
Mohamed Abdel-Basset
2018-06-01
Full Text Available One of the most significant competitive strategies for organizations is sustainable supply chain management (SSCM). The vital part of the administration of a sustainable supply chain is sustainable supplier selection, which is a multi-criteria decision-making issue involving many conflicting criteria. The evaluation and selection of sustainable suppliers are difficult problems due to the vague, inconsistent and imprecise knowledge of decision makers. In the literature on supply chain management for measuring green performance, the requirement for a methodological analysis of how sustainability variables affect each other, and of how to consider vague, imprecise and inconsistent knowledge, is still unresolved. This research provides an integrated multi-criteria decision-making procedure for sustainable supplier selection problems (SSSPs). An integrated framework is presented via interval-valued neutrosophic sets to deal with the vague, imprecise and inconsistent information that usually exists in the real world. The analytic network process (ANP) is employed to calculate the weights of the selected criteria by considering their interdependencies. For ranking alternatives and avoiding additional comparisons of analytic network processes, the technique for order preference by similarity to ideal solution (TOPSIS) is used. The proposed framework is applied to analyze and select the optimal supplier. An actual case study of a dairy company in Egypt is examined within the proposed framework. A comparison with other existing methods is implemented to confirm the effectiveness and efficiency of the proposed approach.
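The TOPSIS ranking stage can be sketched in its classic crisp form: normalize the decision matrix, weight it, and score each alternative by its closeness to the ideal solution. The interval-valued neutrosophic extension and the ANP-derived weights from the article are not modelled; the supplier scores and weights below are illustrative.

```python
import math

def topsis(matrix, weights):
    """Closeness coefficients of alternatives (rows) under benefit criteria.

    matrix: rows = alternatives, columns = criteria, higher is better.
    Returns values in [0, 1]; larger means closer to the ideal solution.
    """
    n = len(matrix[0])
    # Vector-normalize each column, then apply the criterion weights.
    norms = [math.sqrt(sum(row[j] ** 2 for row in matrix)) for j in range(n)]
    V = [[w * row[j] / norms[j] for j, w in enumerate(weights)] for row in matrix]
    ideal = [max(col) for col in zip(*V)]   # best value per criterion
    anti = [min(col) for col in zip(*V)]    # worst value per criterion
    cc = []
    for row in V:
        d_pos = math.dist(row, ideal)
        d_neg = math.dist(row, anti)
        cc.append(d_neg / (d_pos + d_neg))
    return cc

# Three hypothetical suppliers scored on quality, delivery, sustainability:
scores = topsis([[7, 9, 8], [8, 7, 6], [9, 9, 9]], [0.5, 0.3, 0.2])
print(scores.index(max(scores)))  # 2: the dominating supplier ranks first
```

Cost-type criteria would be handled by inverting the ideal/anti-ideal roles for those columns.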
Maxwell Strata and Cut Locus in the Sub-Riemannian Problem on the Engel Group
Ardentov, Andrei A.; Sachkov, Yuri L.
2017-12-01
We consider the nilpotent left-invariant sub-Riemannian structure on the Engel group. This structure gives a fundamental local approximation of a generic rank 2 sub-Riemannian structure on a 4-manifold near a generic point (in particular, of the kinematic models of a car with a trailer). On the other hand, this is the simplest sub-Riemannian structure of step three. We describe the global structure of the cut locus (the set of points where geodesics lose their global optimality), the Maxwell set (the set of points that admit more than one minimizer), and the intersection of the cut locus with the caustic (the set of conjugate points along all geodesics). The group of symmetries of the cut locus is described: it is generated by a one-parameter group of dilations R+ and a discrete group of reflections Z2 × Z2 × Z2. The cut locus admits a stratification with 6 three-dimensional strata, 12 two-dimensional strata, and 2 one-dimensional strata. Three-dimensional strata of the cut locus are Maxwell strata of multiplicity 2 (for each point there are 2 minimizers). Two-dimensional strata of the cut locus consist of conjugate points. Finally, one-dimensional strata are Maxwell strata of infinite multiplicity; they consist of conjugate points as well. Projections of sub-Riemannian geodesics to the 2-dimensional plane of the distribution are Euler elasticae. For each point of the cut locus, we describe the Euler elasticae corresponding to minimizers coming to this point. Finally, we describe the structure of the optimal synthesis, i.e., the set of minimizers for each terminal point in the Engel group.
Directory of Open Access Journals (Sweden)
Antonio Costa
2014-07-01
Full Text Available Production processes in Cellular Manufacturing Systems (CMS) often involve groups of parts sharing the same technological requirements in terms of tooling and setup. The issue of scheduling such parts through a flow-shop production layout is known as the Flow-Shop Group Scheduling (FSGS) problem or, when setup times are sequence-dependent, the Flow-Shop Sequence-Dependent Group Scheduling (FSDGS) problem. This paper addresses the FSDGS issue, proposing a hybrid metaheuristic procedure integrating features from Genetic Algorithms (GAs) and Biased Random Sampling (BRS) search techniques with the aim of minimizing the total flow time, i.e., the sum of completion times of all jobs. A well-known benchmark of test cases, entailing problems with two, three, and six machines, is employed both for tuning the relevant parameters of the developed procedure and for assessing its performance against two metaheuristic algorithms recently presented in the literature. The obtained results and a properly arranged ANOVA analysis highlight the superiority of the proposed approach in tackling the scheduling problem under investigation.
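The objective being minimized, total flow time, follows from the standard completion-time recursion of a permutation flow shop: each operation starts when both its machine and its job are free. The sketch below evaluates that objective for a given job order; the group structure and sequence-dependent setups of the FSDGS problem are omitted, and the processing times are a made-up two-job example.

```python
def total_flow_time(perm, p):
    """Total flow time of a permutation flow shop.

    perm: job order; p[j][k] = processing time of job j on machine k.
    Recursion: C[j][k] = max(C[previous job][k], C[j][k-1]) + p[j][k].
    """
    machines = len(p[0])
    prev = [0] * machines   # completion times of the previously scheduled job
    total = 0
    for j in perm:
        curr = [0] * machines
        for k in range(machines):
            earlier = curr[k - 1] if k else 0   # same job on previous machine
            curr[k] = max(prev[k], earlier) + p[j][k]
        total += curr[-1]   # completion time on the last machine
        prev = curr
    return total

# Two jobs on two machines: completion times are 5 and 9, so flow time = 14.
p = [[3, 2], [1, 4]]
print(total_flow_time([0, 1], p))  # 14
```

A metaheuristic such as the article's GA/BRS hybrid searches over permutations (and group orders) using exactly this kind of evaluation as its fitness function; note that the reversed order `[1, 0]` already gives a smaller flow time of 12 here.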
Directory of Open Access Journals (Sweden)
Thenmozhi Srinivasan
2015-01-01
Full Text Available Techniques for clustering high-dimensional data are emerging in response to the challenges posed by noisy, poor-quality data. This paper develops a method that clusters such data using similarity-based PCM (SPCM) combined with ant colony optimization, a swarm intelligence technique that is effective in clustering non-spatial data without requiring the user to supply the number of clusters. The PCM is made similarity-based by incorporating the mountain method. Although this already yields efficient clustering, the result is further optimized using the ant colony algorithm. A scalable clustering technique is thus obtained, and its performance is evaluated on synthetic datasets.
Reduced basis ANOVA methods for partial differential equations with high-dimensional random inputs
Energy Technology Data Exchange (ETDEWEB)
Liao, Qifeng, E-mail: liaoqf@shanghaitech.edu.cn [School of Information Science and Technology, ShanghaiTech University, Shanghai 200031 (China); Lin, Guang, E-mail: guanglin@purdue.edu [Department of Mathematics & School of Mechanical Engineering, Purdue University, West Lafayette, IN 47907 (United States)
2016-07-15
In this paper we present a reduced basis ANOVA approach for partial differential equations (PDEs) with random inputs. The ANOVA method combined with stochastic collocation methods provides model reduction in high-dimensional parameter space through decomposing high-dimensional inputs into unions of low-dimensional inputs. In this work, to further reduce the computational cost, we investigate spatial low-rank structures in the ANOVA-collocation method, and develop efficient spatial model reduction techniques using hierarchically generated reduced bases. We present a general mathematical framework of the methodology, validate its accuracy and demonstrate its efficiency with numerical experiments.
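The core idea of decomposing a high-dimensional input into a union of low-dimensional terms can be illustrated with a first-order anchored (cut-HDMR) ANOVA expansion. This is a minimal sketch under that standard construction, not the paper's reduced basis collocation method; the test function and anchor point are hypothetical:

```python
def anchored_anova(f, anchor):
    """First-order anchored-ANOVA (cut-HDMR) expansion of f about an anchor point.

    Returns the constant term f0 = f(anchor) and one-dimensional components f_i,
    so that f(x) is approximated by f0 + sum_i f_i(x_i).
    """
    f0 = f(list(anchor))

    def make_component(i):
        def f_i(xi):
            x = list(anchor)
            x[i] = xi  # vary only coordinate i; all others stay at the anchor
            return f(x) - f0
        return f_i

    return f0, [make_component(i) for i in range(len(anchor))]


def first_order_approx(f, anchor, x):
    """Evaluate the first-order anchored-ANOVA approximation of f at x."""
    f0, components = anchored_anova(f, anchor)
    return f0 + sum(f_i(xi) for f_i, xi in zip(components, x))
```

For additive functions the first-order expansion is exact; interaction effects require the higher-order terms that the adaptive ANOVA methods truncate selectively.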
The validation and assessment of machine learning: a game of prediction from high-dimensional data
DEFF Research Database (Denmark)
Pers, Tune Hannes; Albrechtsen, A; Holst, C
2009-01-01
In applied statistics, tools from machine learning are popular for analyzing complex and high-dimensional data. However, few theoretical results are available that could guide the choice of an appropriate machine learning tool in a new application. Initial development of an overall strategy thus often...... the ideas, the game is applied to data from the Nugenob Study where the aim is to predict the fat oxidation capacity based on conventional factors and high-dimensional metabolomics data. Three players have chosen to use support vector machines, LASSO, and random forests, respectively....
Foot Problems in a Group of Patients with Rheumatoid Arthritis: An Unmet Need for Foot Care
Borman, Pinar; Ayhan, Figen; Tuncay, Figen; Sahin, Mehtap
2012-01-01
Objectives: The aim of this study was to evaluate foot involvement in a group of RA patients in regard to symptoms, type and frequency of deformities, location, radiological changes, and foot care. Patients and Methods: One hundred randomly selected rheumatoid arthritis (RA) patients were recruited to the study. Data about foot symptoms, duration and location of foot pain, pain intensity, access to services related to the foot, treatment, orthoses and assistive devices, and usefulness of therapie...
Kamp, Rachelle J A; van Berkel, Henk J M; Popeijus, Herman E; Leppink, Jimmie; Schmidt, Henk G; Dolmans, Diana H J M
2014-03-01
Even though peer process feedback is an often used tool to enhance the effectiveness of collaborative learning environments like PBL, the conditions under which it is best facilitated still need to be investigated. Therefore, this study investigated the effects of individual versus shared reflection and goal setting on students' individual contributions to the group and their academic achievement. In addition, the influence of prior knowledge on the effectiveness of peer feedback was studied. In this pretest-intervention-posttest study 242 first year students were divided into three conditions: condition 1 (individual reflection and goal setting), condition 2 (individual and shared reflection and goal setting), and condition 3 (control group). Results indicated that the quality of individual contributions to the tutorial group did not improve after receiving the peer feedback, nor did it differ between the three conditions. With regard to academic achievement, only males in conditions 1 and 2 showed better academic achievement compared with condition 3. However, there was no difference between both ways of reflection and goal setting with regard to achievement, indicating that both ways are equally effective. Nevertheless, it is still too early to conclude that peer feedback combined with reflection and goal setting is not effective in enhancing students' individual contributions. Students only had a limited number of opportunities to improve their contributions. Therefore, future research should investigate whether an increase in number of tutorial group meetings can enhance the effectiveness of peer feedback. In addition, the effect of quality of reflection and goal setting could be taken into consideration in future research.
DEFF Research Database (Denmark)
Lindekilde, Lasse
2012-01-01
There is a lack of consensus in the academic literature and among policy makers and practitioners on the definition of violent radicalisation, and current counter-radicalisation policy responses and procedures are informed by a weak and, at times, confused understanding of the motivational...... and structural factors underpinning such a process. The result is a variety of interventions across the EU, signalling a lack of consensus on the purposes of counter-radicalisation. In addition, indicators of success of counter-radicalisation policies are often unclear or unspecified. One consequence...... of this is that assessments of the effectiveness of counter-radicalisation measures and policy responses are either lacking or often methodologically questionable, impairing our understanding of the impacts of counter-radicalisation interventions on targeted communities. The article investigates problems of assessing...
The Territorial Trap and The Problem of Non-territorialized Groups
Directory of Open Access Journals (Sweden)
Mireille Marcia Karman
2016-12-01
Full Text Available This article aims to argue that territory is a historical concept rather than a constant one in explaining the political conception of the state and other political entities. In liberalism and political realism alike, territory has been one of the core concepts in the study of political science. This paper then elaborates the concept of territoriality and its problems in the era of globalization, and also describes the existence of territories of non-state actors in the private and public spheres. At the end of this article, I outline the possibility of a different reaction to the threat of non-state actors once the notion of territory is no longer taken for granted.
Landfors, Mattias; Philip, Philge; Rydén, Patrik; Stenberg, Per
2011-01-01
Genome-wide analysis of gene expression or protein binding patterns using different array or sequencing based technologies is now routinely performed to compare different populations, such as treatment and reference groups. It is often necessary to normalize the data obtained to remove technical variation introduced in the course of conducting experimental work, but standard normalization techniques are not capable of eliminating technical bias in cases where the distribution of the truly altered variables is skewed, i.e. when a large fraction of the variables are either positively or negatively affected by the treatment. However, several experiments are likely to generate such skewed distributions, including ChIP-chip experiments for the study of chromatin, gene expression experiments for the study of apoptosis, and SNP-studies of copy number variation in normal and tumour tissues. A preliminary study using spike-in array data established that the capacity of an experiment to identify altered variables and generate unbiased estimates of the fold change decreases as the fraction of altered variables and the skewness increases. We propose the following work-flow for analyzing high-dimensional experiments with regions of altered variables: (1) Pre-process raw data using one of the standard normalization techniques. (2) Investigate if the distribution of the altered variables is skewed. (3) If the distribution is not believed to be skewed, no additional normalization is needed. Otherwise, re-normalize the data using a novel HMM-assisted normalization procedure. (4) Perform downstream analysis. Here, ChIP-chip data and simulated data were used to evaluate the performance of the work-flow. It was found that skewed distributions can be detected by using the novel DSE-test (Detection of Skewed Experiments). Furthermore, applying the HMM-assisted normalization to experiments where the distribution of the truly altered variables is skewed results in considerably higher
Directory of Open Access Journals (Sweden)
F. C. Cooper
2013-04-01
Full Text Available The fluctuation-dissipation theorem (FDT has been proposed as a method of calculating the response of the earth's atmosphere to a forcing. For this problem the high dimensionality of the relevant data sets makes truncation necessary. Here we propose a method of truncation based upon the assumption that the response to a localised forcing is spatially localised, as an alternative to the standard method of choosing a number of the leading empirical orthogonal functions. For systems where this assumption holds, the response to any sufficiently small non-localised forcing may be estimated using a set of truncations that are chosen algorithmically. We test our algorithm using 36 and 72 variable versions of a stochastic Lorenz 95 system of ordinary differential equations. We find that, for long integrations, the bias in the response estimated by the FDT is reduced from ~75% of the true response to ~30%.
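The deterministic core of the Lorenz 95 test system used in this study is simple to write down. The following is a sketch of the standard Lorenz 95 tendencies with a basic integrator; the stochastic terms and the FDT response estimator of the paper are omitted:

```python
def lorenz95_tendency(x, forcing=8.0):
    """Lorenz 95 tendencies dx_i/dt = (x_{i+1} - x_{i-2}) x_{i-1} - x_i + F,
    with cyclic indexing over the n variables."""
    n = len(x)
    return [(x[(i + 1) % n] - x[(i - 2) % n]) * x[(i - 1) % n] - x[i] + forcing
            for i in range(n)]


def rk4_step(x, dt, forcing=8.0):
    """One fourth-order Runge-Kutta step of the Lorenz 95 system."""
    def add(a, b, s):
        return [ai + s * bi for ai, bi in zip(a, b)]

    k1 = lorenz95_tendency(x, forcing)
    k2 = lorenz95_tendency(add(x, k1, dt / 2), forcing)
    k3 = lorenz95_tendency(add(x, k2, dt / 2), forcing)
    k4 = lorenz95_tendency(add(x, k3, dt), forcing)
    return [xi + dt / 6 * (a + 2 * b + 2 * c + d)
            for xi, a, b, c, d in zip(x, k1, k2, k3, k4)]
```

A quick sanity check: the uniform state x_i = F is a fixed point of these equations, since the advection term vanishes and the damping cancels the forcing.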
One-dimensional transport code for one-group problems in plane geometry
International Nuclear Information System (INIS)
Bareiss, E.H.; Chamot, C.
1970-09-01
Equations and results are given for various methods of solution of the one-dimensional transport equation for one energy group in plane geometry with inelastic scattering and an isotropic source. After considerable investigation, a matrix method of solution was found to be faster and more stable than iteration procedures. A description of the code is included; it allows for up to 24 regions, 250 points, and 16 angles, provided that the product of the number of angles and the number of points is less than 600.
Rivolta, Davide; Lawson, Rebecca P; Palermo, Romina
2017-02-01
It has been estimated that one out of 40 people in the general population suffer from congenital prosopagnosia (CP), a neurodevelopmental disorder characterized by difficulty identifying people by their faces. CP involves impairment in recognizing faces, although the perception of non-face stimuli may also be impaired. Given that social interaction depends not only on face processing, but also on the processing of bodies, it is of theoretical importance to ascertain whether CP is also characterized by body perception impairments. Here, we tested 11 CPs and 11 matched control participants on the Body Identity Recognition Task (BIRT), a forced-choice match-to-sample task, using stimuli that require processing of body-specific, not clothing-specific, features. Results indicated that the group of CPs were as accurate as controls on the BIRT, which is in line with the lack of body perception complaints by CPs. However, the CPs were slower than controls, and when accuracy and response times were combined into inverse efficiency scores (IESs), the group of CPs were impaired, suggesting that the CPs could be using more effortful cognitive mechanisms to be as accurate as controls. In conclusion, our findings demonstrate that CP may not generally be limited to face processing difficulties, but may also extend to body perception.
Directory of Open Access Journals (Sweden)
Mazur Valerij Anatol'evich
2011-09-01
Full Text Available The question of a regulatory framework for the physical education of special medical group students, and of their physical condition in particular, is elaborated. It is found that the current program does not address this question, although the assessment of individual standards for the physical condition of students was envisaged in the programs of 1977 and 1982. The need for such an assessment is indicated by a large number of Ukrainian and foreign pediatricians and specialists in therapeutic physical culture. At the same time, standards for assessing these indicators have not been developed. This complicates the formation of positive motivation of students toward regular classes, does not promote their confidence in their own capabilities, and hinders monitoring of the effectiveness of exercise in its various forms. The findings suggest the need to define an optimal set of tests and functional tests to assess the physical condition of special medical group students with various diseases, and to develop appropriate standards for their evaluation.
Gregurek, R
1999-12-01
Analysis of countertransference problems in the treatment of a heterogeneous group of war veterans. The method used in this work was psychodynamic clinical observation and analysis of countertransference phenomena in group therapy. At the beginning of our work, we were faced with a regressive group, which behaved as if it had been re-born. The leading subject in the group was aggression and the need for hospitalization to protect the members and their environment from their violence. With the development of group processes, a feeling of helplessness and lack of perspective appeared, together with suicidal ideas, which, owing to the development of group cohesion and trust, could be openly discussed. With time, the group became a transitional object for its members, an object that gave them a feeling of safety but also a feeling of dependence. The role of the therapist is to support group members in becoming independent. The therapist's function lies in controlling, containing, and analyzing the destructive, regressive part and in encouraging the healthy parts of the patient. With the integration of a good therapeutic process, the healthy parts of the patient gain control over his or her regressive parts.
Guastello, Stephen J; Craven, Joanna; Zygowicz, Karen M; Bock, Benjamin R
2005-07-01
The process by which an initially leaderless group differentiates into one containing leadership and secondary role structures was examined using the swallowtail catastrophe model and principles of self-organization. The objectives were to identify the control variables in the process of leadership emergence in creative problem-solving groups and production groups. In the first of two experiments, groups of university students (total N = 114) played a creative problem-solving game. Participants later rated each other on leadership behavior, styles, and variables related to the process of conversation. A performance quality measure was also included. Control parameters in the swallowtail catastrophe model were identified through a combination of factor analysis and nonlinear regression. Leaders displayed a broad spectrum of behaviors in the general categories of Controlling the Conversation and Creativity in their role-play. In the second experiment, groups of university students (total N = 197) engaged in a laboratory work experiment that had a substantial production goal component. The same system of ratings and modeling strategy was used, along with a work production measure. Leaders in the production task emerged to the extent that they exhibited control over both the creative and production aspects of the task, they could keep tension low, and the externally imposed production goals were realistic.
Schulte, Fiona; Vannatta, Kathryn; Barrera, Maru
2014-02-01
The aim of this study was to explore the ability of a group social skills intervention program for childhood brain tumor survivors to effect two steps of the social information processing model: social problem solving and social performance. Participants were 15 survivors (eight men and seven women) aged 7-15 years. The intervention consisted of eight 2-h weekly sessions focused on social skills including friendship making. Social problem solving, using hypothetical scenarios, was assessed during sessions 1 and 8. Social performance was observed during intervention sessions 1, 4, and 8. Compared with session 1, significant increases were found in social performance: frequency of maintaining eye contact and social conversations with peers over the course of the intervention. No significant changes in social problem solving were noted. This pilot study is the first to report improvements related to group social skills intervention at the level of observed social performance over the course of intervention. The lack of change in social problem solving suggests that survivors may possess the social knowledge required for social situations but have difficulty enacting social behaviors. Copyright © 2013 John Wiley & Sons, Ltd.
EURLIB-LWR-45/16 and -15/5. Two broad group libraries for LWR-shielding problems
Energy Technology Data Exchange (ETDEWEB)
Herrnberger, V
1982-04-01
Specifications of the broad group cross section libraries EURLIB-LWR-45/16 and -15/5 are given. They are based on EURLIB-III data and produced for LWR shielding problems. The elements considered are H, C-12, O, Na, Al, Si, Ca, Cr, Mn, Fe, Ni, Zr, U-235, and U-238. The cross section libraries are available upon request from EIR, RSIC, NEA-CPL and IAEA-NDS. (author) Refs, figs, tabs
Marina A. Maznichenko; Nataliya I. Neskoromnykh
2016-01-01
The article presents the results of an aspect analysis of the current federal state educational standards of higher education for the enlarged group of specialties "Service and tourism". The conformity of these educational standards to the requirements of employers is analyzed, as are the requirements for learning outcomes and for the structure and terms of realization of undergraduate and graduate educational programs. The authors outline the key problems for each aspect, also identif...
Hesitant fuzzy soft sets with application in multicriteria group decision making problems.
Wang, Jian-qiang; Li, Xin-E; Chen, Xiao-hong
2015-01-01
Soft sets have been regarded as a useful mathematical tool to deal with uncertainty. In recent years, many scholars have shown an intense interest in soft sets and have extended standard soft sets to intuitionistic fuzzy soft sets, interval-valued fuzzy soft sets, and generalized fuzzy soft sets. In this paper, hesitant fuzzy soft sets are defined by combining fuzzy soft sets with hesitant fuzzy sets, and some operations on hesitant fuzzy soft sets based on the Archimedean t-norm and Archimedean t-conorm are defined. Besides, four aggregation operators, namely the HFSWA, HFSWG, GHFSWA, and GHFSWG operators, are given. Based on these operators, a multicriteria group decision making approach with hesitant fuzzy soft sets is also proposed. To demonstrate its accuracy and applicability, this approach is finally employed to calculate a numerical example.
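For the algebraic (product) t-norm/t-conorm pair, the weighted averaging step underlying operators of the HFSWA type can be sketched as follows. This is an illustrative sketch of hesitant fuzzy weighted averaging only; the soft-set bookkeeping and the generalized operators of the paper are omitted, and the membership values below are hypothetical:

```python
from itertools import product


def hfwa(hesitant_elements, weights):
    """Hesitant fuzzy weighted averaging under algebraic operations.

    hesitant_elements: list of hesitant fuzzy elements, each a list of
    possible membership degrees in [0, 1]; weights sum to 1.
    For every choice of one membership value per element, aggregate as
    1 - prod_i (1 - gamma_i)**w_i, collecting all distinct results.
    """
    values = set()
    for combo in product(*hesitant_elements):
        survival = 1.0
        for gamma, w in zip(combo, weights):
            survival *= (1.0 - gamma) ** w
        values.add(round(1.0 - survival, 10))  # round to merge float duplicates
    return sorted(values)
```

Aggregating two single-valued elements with equal weights, `hfwa([[0.5], [0.5]], [0.5, 0.5])`, returns the single value 0.5; elements with several candidate memberships produce a hesitant result with one value per combination.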
Oner, Yaşar Ali; Okutan, Salih Erkan; Artinyan, Elizabeth; Kocazeybek, Bekir
2005-04-01
Malaria is a parasitic infection caused by Plasmodium species and is seen especially in tropical and subtropical areas. We aimed to evaluate the effects of the infection in Afghanistan, which is endemic for malaria and suffered severe socio-economic losses after the war, and to compare these data with those recorded before the war. Blood samples were taken from 376 patients with suspected malaria who came to the health center established in Afghanistan by the medical group of Istanbul Medical Faculty in 2002. Blood samples were screened using the OPTIMAL Rapid Malaria Test and the Giemsa staining method. In 95 patients (25.3%) the diagnosis was malaria. In 65 patients (17.3%) the agent of the infection was P. falciparum, and in 30 patients (8%) the agents were other Plasmodium species.
Can We Train Machine Learning Methods to Outperform the High-dimensional Propensity Score Algorithm?
Karim, Mohammad Ehsanul; Pang, Menglan; Platt, Robert W
2018-03-01
The use of retrospective health care claims datasets is frequently criticized for the lack of complete information on potential confounders. Utilizing patient's health status-related information from claims datasets as surrogates or proxies for mismeasured and unobserved confounders, the high-dimensional propensity score algorithm enables us to reduce bias. Using a previously published cohort study of postmyocardial infarction statin use (1998-2012), we compare the performance of the algorithm with a number of popular machine learning approaches for confounder selection in high-dimensional covariate spaces: random forest, least absolute shrinkage and selection operator, and elastic net. Our results suggest that, when the data analysis is done with epidemiologic principles in mind, machine learning methods perform as well as the high-dimensional propensity score algorithm. Using a plasmode framework that mimicked the empirical data, we also showed that a hybrid of machine learning and high-dimensional propensity score algorithms generally perform slightly better than both in terms of mean squared error, when a bias-based analysis is used.
CSIR Research Space (South Africa)
Giovannini, D
2013-06-01
Full Text Available: QELS_Fundamental Science, San Jose, California, United States, 9-14 June 2013. Reconstruction of High-Dimensional States Entangled in Orbital Angular Momentum Using Mutually Unbiased Measurements. D. Giovannini, J. Romero, J. Leach, A...
Ferdosi, Bilkis J.; Buddelmeijer, Hugo; Trager, Scott; Wilkinson, Michael H.F.; Roerdink, Jos B.T.M.
2010-01-01
Data sets in astronomy are growing to enormous sizes. Modern astronomical surveys provide not only image data but also catalogues of millions of objects (stars, galaxies), each object with hundreds of associated parameters. Exploration of this very high-dimensional data space poses a huge challenge.
High-Dimensional Exploratory Item Factor Analysis by a Metropolis-Hastings Robbins-Monro Algorithm
Cai, Li
2010-01-01
A Metropolis-Hastings Robbins-Monro (MH-RM) algorithm for high-dimensional maximum marginal likelihood exploratory item factor analysis is proposed. The sequence of estimates from the MH-RM algorithm converges with probability one to the maximum likelihood solution. Details on the computer implementation of this algorithm are provided. The…
Estimating the effect of a variable in a high-dimensional regression model
DEFF Research Database (Denmark)
Jensen, Peter Sandholt; Wurtz, Allan
assume that the effect is identified in a high-dimensional linear model specified by unconditional moment restrictions. We consider properties of the following methods, which rely on lowdimensional models to infer the effect: Extreme bounds analysis, the minimum t-statistic over models, Sala...
Multi-Scale Factor Analysis of High-Dimensional Brain Signals
Ting, Chee-Ming; Ombao, Hernando; Salleh, Sh-Hussain
2017-01-01
In this paper, we develop an approach to modeling high-dimensional networks with a large number of nodes arranged in a hierarchical and modular structure. We propose a novel multi-scale factor analysis (MSFA) model which partitions the massive
Pricing and hedging high-dimensional American options : an irregular grid approach
Berridge, S.; Schumacher, H.
2002-01-01
We propose and test a new method for pricing American options in a high dimensional setting. The method is centred around the approximation of the associated variational inequality on an irregular grid. We approximate the partial differential operator on this grid by appealing to the SDE
Directory of Open Access Journals (Sweden)
N. S. Morozova
2015-01-01
Full Text Available The article considers a decentralized approach to the formation control problem for a group of agents, which simulate mobile autonomous robots. The agents use only local information limited by the covering range of their sensors. The agents have to build and maintain a formation that fits the defined target geometric formation structure with the desired accuracy while moving to the target point. At any point in time the number of agents in the group can change unexpectedly (for example, as a result of an agent failure or when a new agent joins the group). The aim of the article is to provide a base control rule that solves the formation control problem, and to develop modifications of it that provide correct behavior when the number of agents in the group is not equal to the size of the target geometric formation structure. The proposed base control rule, developed by the author, uses the method of virtual leaders. The coordinates of the virtual leaders, and also the priority to follow a specific leader, are calculated by each agent itself according to specific rules. The following results are presented in the article: the base control rule for solving the formation control problem, its modifications for the cases when the number of agents is greater or less than the size of the target geometric formation structure, and computer modeling results proving the efficiency of the modified control rules. The specific feature of the control rule developed by the author is that each agent itself calculates the virtual leaders, and each agent performs a dynamic choice of its place within the formation (there is no predefined one-to-one relation between agents and places within the geometric formation structure). The results provided in this article can be used in robotics for developing control algorithms for tasks which require preserving specific relative positions among the agents while moving. One of the
Su, Shaobing; Li, Xiaoming; Zhang, Liying; Lin, Danhua; Zhang, Chen; Zhou, Yuejiao
2014-01-01
HIV risk and mental health problems are prevalent among female sex workers (FSWs) in China. The purpose of this research was to study age group differences in HIV risk and mental health problems in this population. In the current study, we divided a sample of 1022 FSWs into three age groups (≤ 20 years, 21-34 years, and ≥ 35 years). Results showed that among the three groups (1) older FSWs (≥ 35 years) were likely to be socioeconomically disadvantaged (e.g., rural residency, little education, employment in low-paying venues, and low monthly income); (2) older FSWs reported the highest rates of inconsistent, ineffective condom use, and sexually transmitted diseases history; (3) younger FSWs (≤ 20 years) reported the highest level of depression, suicidal thoughts and suicide attempts, regular-partner violence, and substance use; (4) all health-related risks except casual-partner violence were more prevalent among older and younger FSWs than among FSWs aged 21-34 years; and (5) age had a significant effect on all health indicators except suicide attempts after controlling for several key demographic factors. These findings indicate the need for intervention efforts to address varying needs among FSWs in different age groups. Specific interventional efforts are needed to reduce older FSWs' exposure to HIV risk; meanwhile, more attention should be given to improve FSWs' mental health status, especially among younger FSWs.
Burgess, Annette; Roberts, Chris; Ayton, Tom; Mellis, Craig
2018-04-10
While Problem Based Learning (PBL) has long been established internationally, Team-based learning (TBL) is a relatively new pedagogy in medical curricula. Both PBL and TBL are designed to facilitate a learner-centred approach, where students, in interactive small groups, use peer-assisted learning to solve authentic, professionally relevant problems. Differences, however, exist between PBL and TBL in terms of preparation requirements, group numbers, learning strategies, and class structure. Although there are many similarities and some differences between PBL and TBL, both rely on constructivist learning theory to engage and motivate students in their learning. The aim of our study was to qualitatively explore students' perceptions of having their usual PBL classes run in TBL format. In 2014, two iterations in a hybrid PBL curriculum were converted to TBL format, with two PBL groups of 10 students each, being combined to form one TBL class of 20, split into four groups of five students. At the completion of two TBL sessions, all students were invited to attend one of two focus groups, with 14 attending. Thematic analysis was used to code and categorise the data into themes, with constructivist theory used as a conceptual framework to identify recurrent themes. Four key themes emerged; guided learning, problem solving, collaborative learning, and critical reflection. Although structured, students were attracted to the active and collaborative approach of TBL. They perceived the key advantages of TBL to include the smaller group size, the preparatory Readiness Assurance Testing process, facilitation by a clinician, an emphasis on basic science concepts, and immediate feedback. The competitiveness of TBL was seen as a spur to learning. These elements motivated students to prepare, promoted peer assisted teaching and learning, and focussed team discussion. An important advantage of PBL over TBL, was the opportunity for adequate clinical reasoning within the problem
Foot problems in a group of patients with rheumatoid arthritis: an unmet need for foot care.
Borman, Pinar; Ayhan, Figen; Tuncay, Figen; Sahin, Mehtap
2012-01-01
The aim of this study was to evaluate foot involvement in a group of RA patients in regard to symptoms, type and frequency of deformities, location, radiological changes, and foot care. One hundred randomly selected rheumatoid arthritis (RA) patients were recruited to the study. Data about foot symptoms, duration and location of foot pain, pain intensity, access to services related to the foot, treatment, orthoses and assistive devices, and usefulness of therapies were determined by questionnaire. Radiological changes were assessed according to the modified Larsen scoring system. The scores of the disease activity scale of 28 joints and the Health Assessment Questionnaire, indicating the functional status of RA patients, were collected from patient files. A total of 100 RA patients (90 female, 10 male) with a mean age of 52.5 ± 10.9 years were enrolled in the study. Eighty-nine of the 100 patients had experienced foot complaints/symptoms in the past or currently. Foot pain and foot symptoms were reported as the first site of involvement in 14 patients. Thirty-six patients currently had ankle pain, and the most common sites of foot symptoms were the ankle (36%) and forefoot (30%), followed by the hindfoot (17%) and midfoot (7%). Forty-nine of the patients described difficulty in performing their foot care. Insoles and orthopedic shoes were prescribed for 39 patients, but only 14 of them continued to use them. The main reasons for not wearing them were: not helpful in 17 patients (43%), made foot pain worse in 5 (12.8%), and did not fit in 3 (7.6%). Foot symptoms were reported to have decreased in 24% of the subjects after medical treatment, and 6 patients indicated that they had undergone foot surgery. Current foot pain was significantly associated with higher body mass index, longer disease duration, and duration of morning stiffness. The radiological scores did not correlate with duration of foot symptoms or current foot pain (p>0.05), but the total number of foot deformities was
Revisiting the Cooling Flow Problem in Galaxies, Groups, and Clusters of Galaxies
McDonald, M.; Gaspari, M.; McNamara, B. R.; Tremblay, G. R.
2018-05-01
We present a study of 107 galaxies, groups, and clusters spanning ∼3 orders of magnitude in mass, ∼5 orders of magnitude in central galaxy star formation rate (SFR), ∼4 orders of magnitude in the classical cooling rate ({\\dot{M}}cool}\\equiv {M}gas}(rsample, we measure the ICM cooling rate, {\\dot{M}}cool}, using archival Chandra X-ray data and acquire the SFR and systematic uncertainty in the SFR by combining over 330 estimates from dozens of literature sources. With these data, we estimate the efficiency with which the ICM cools and forms stars, finding {ε }cool}\\equiv {SFR}/{\\dot{M}}cool}=1.4 % +/- 0.4% for systems with {\\dot{M}}cool}> 30 M ⊙ yr‑1. For these systems, we measure a slope in the SFR–{\\dot{M}}cool} relation greater than unity, suggesting that the systems with the strongest cool cores are also cooling more efficiently. We propose that this may be related to, on average, higher black hole accretion rates in the strongest cool cores, which could influence the total amount (saturating near the Eddington rate) and dominant mode (mechanical versus radiative) of feedback. For systems with {\\dot{M}}cool}< 30 M ⊙ yr‑1, we find that the SFR and {\\dot{M}}cool} are uncorrelated and show that this is consistent with star formation being fueled at a low (but dominant) level by recycled ISM gas in these systems. We find an intrinsic log-normal scatter in SFR at a fixed {\\dot{M}}cool} of 0.52 ± 0.06 dex (1σ rms), suggesting that cooling is tightly self-regulated over very long timescales but can vary dramatically on short timescales. There is weak evidence that this scatter may be related to the feedback mechanism, with the scatter being minimized (∼0.4 dex) for systems for which the mechanical feedback power is within a factor of two of the cooling luminosity.
International legal problem in combating 'Islamic State' terrorist group in Syria
Directory of Open Access Journals (Sweden)
Stevanović Miroslav
2015-01-01
Full Text Available 'Islamic State of Iraq and Syria' (ISIS has occupied parts of internationally recognized states and exerts further territorial pretensions. ISIS, also, implements a repressive rule, through violations of human rights and humanitarian law, which may constitute international crimes. In facing the threat od ISIS, the perception of international terrorism is important since this group has the features of a territorial entity. So far, facing with the threat of ISIS has been reduced to a model that is adopted by the UN Security Council against the terrorist network Al-Qaida. An international coalition of states, led by the United States, has undertaken air strikes on positions ISIS, on several grounds: the responsibility to protect, the protection of national security, and at the request of Iraq. At the same time, the strikes are applied in Syria, which can not be accountable for the actions of ISIS and has not requested international assistance. International law does not allow actions which would aim to destroy or jeopardize the territorial integrity or political independence of any sovereign and independent state, which is acting in accordance with the principle of equal rights and self-determination of peoples, and is hence governed by a representative government. The UNSC resolution 2249 remains short of recommending international armed action under the aegis of UNSC, but represents a step forward in recognizing the responsibility of this body in facing ISIS, at least as far as the 'destruction of refuge' is concerned. The use of force in the territory of Syria, without the express authorization of the UNSC is illegal, because terrorism does not constitute grounds for the use of force against countries. But, it opens broader issues of responsibility for the development of ISIS and the humanitarian crisis in the Middle East, as well as the functioning of the system of collective security. Overcoming the current crisis UNSC implies not just a
Directory of Open Access Journals (Sweden)
Vladimir eKozunov
2015-04-01
Full Text Available Although MEG/EEG signals are highly variable between subjects, they allow characterizing systematic changes of cortical activity in both space and time. Traditionally a two-step procedure is used. The first step is a transition from sensor to source space by the means of solving an ill-posed inverse problem for each subject individually. The second is mapping of cortical regions consistently active across subjects. In practice the first step often leads to a set of active cortical regions whose location and timecourses display a great amount of interindividual variability hindering the subsequent group analysis.We propose Group Analysis Leads to Accuracy (GALA - a solution that combines the two steps into one. GALA takes advantage of individual variations of cortical geometry and sensor locations. It exploits the ensuing variability in electromagnetic forward model as a source of additional information. We assume that for different subjects functionally identical cortical regions are located in close proximity and partially overlap and their timecourses are correlated. This relaxed similarity constraint on the inverse solution can be expressed within a probabilistic framework, allowing for an iterative algorithm solving the inverse problem jointly for all subjects.A systematic simulation study showed that GALA, as compared with the standard min-norm approach, improves accuracy of true activity recovery, when accuracy is assessed both in terms of spatial proximity of the estimated and true activations and correct specification of spatial extent of the activated regions. This improvement obtained without using any noise normalization techniques for both solutions, preserved for a wide range of between-subject variations in both spatial and temporal features of regional activation. The corresponding activation timecourses exhibit significantly higher similarity across subjects. Similar results were obtained for a real MEG dataset of face
Sveen, Unni; Ostensjo, Sigrid; Laxe, Sara; Soberg, Helene L
2013-05-01
To describe problems in body functions, activities, and participation and the influence of environmental factors as experienced after mild traumatic brain injury (TBI), using the ICF framework. To compare our findings with the Brief and Comprehensive ICF Core Sets for TBI. Six focus-group interviews were performed with 17 participants (nine women, eight men, age ranged from 22 to 55 years) within the context of an outpatient rehabilitation programme for patients with mild TBI. The interviews were transcribed verbatim and analysed using the ICF. One-hundred and eight second-level categories derived from the interview text, showing a large diversity of TBI-related problems in functioning. Problems in cognitive and emotional functions, energy and drive, and in carrying out daily routine and work, were frequently reported. All ICF categories reported with high-to-moderate frequencies were present in the Brief ICF Core Set and 84% in the Comprehensive ICF Core Set. The reported environmental factors mainly concerned aspects of health and social security systems, social network and attitudes towards the injured person. This study confirms the diversity of problems and the environmental factors that have an impact on post-injury functioning of patients with mild TBI.
Mpofu, D J; Lanphear, J; Stewart, T; Das, M; Ridding, P; Dunn, E
1998-09-01
The Faculty of Medicine and Health Sciences (FMHS), United Arab Emirates (UAE) University is in a unique position to explore issues related to English language proficiency and medical student performance. All students entering the FMHS have English as a second language. This study focused on the issues of students' proficiency in English as measured by the TOEFL test, student background factors and interaction in problem-based learning (PBL) groups. Using a modification of Bales Interaction Process Analysis, four problem-based learning groups were observed over four thematic units, to measure the degree of student interaction within PBL groups and to compare this to individual TOEFL scores and key background variables. The students' contributions correlated highly with TOEFL test results in the giving of information (range r = 0.67-0.74). The female students adhered to interacting in English during group sessions, whereas the male students were more likely to revert to using Arabic in elaborating unclear phenomena (p TOEFL scores for the male students, but not for female students. Multivariate analysis was undertaken to analyse the relative contribution of the TOEFL, parental education and years of studying in English. The best predictor of students' contributions in PBL groups was identified as TOEFL scores. The study demonstrates the importance of facilitating a locally acceptable level of English proficiency prior to admission to the FMHS. However, it also highlights the importance of not focusing only on English proficiency but paying attention to additional factors in facilitating medical students in maximizing benefits from interactions in PBL settings.
Directory of Open Access Journals (Sweden)
Vitaliy Omelyanovich
2017-11-01
Full Text Available Background. Mental disorders prevention in specific professional groups is impossible without scientifically substantiated allocation of groups with increased neuropsychiatric and psychosomatic disorders risk. This fact indicates the need to study the gender, age and professional characteristics in law enforcement workers who already have problems with psychological adaptation. Methods and materials. The study involved 1630 law enforcement officers (1,301 men and 329 women who were evaluated with the Symptom Checklist-90-R (SCL-90-R. As the statistical methods were used the partial regression calculation coefficient η2, cohort calculation risk measures, φ*-total Fischer transformation method, and single-factor dispersion Fisher's analysis. Results. According to gender characteristics, the problems with psychological adaptation in men were significantly less pronounced than in women (φ*=1.79; p=0.37. These data were confirmed by the cohort calculation and risk measures results: men – 0.261, women – 0.349 (the psychological disadaptation risk in women was 1.3 times higher than men. There weren’t any statistically significant age differences between the representatives of both gender groups with psychological adaptation disturbances and healthy ones (φ* ≤1.19; p≥0.1. Among patients who suffered from psychosomatic diseases, were men over the age of 35 (φ* ≥2.28; p≤0.0001 and women over 26 years old (φ*= 2.16; p=0.014 prevailed. There were significantly fewer people among men with psychosomatic illnesses with 4-9 years of professional working experience than in a healthy group. On the contrary, there were significantly more patients in a law enforcement workers group with 10-15 years working experience than in the healthy one (φ*>1.73; p<0.0001. Conclusion. The risk of mental health problems in female police officers is much higher than in men. Disadaptation development is not related to the age and length of working
Bit-Table Based Biclustering and Frequent Closed Itemset Mining in High-Dimensional Binary Data
Directory of Open Access Journals (Sweden)
András Király
2014-01-01
Full Text Available During the last decade various algorithms have been developed and proposed for discovering overlapping clusters in high-dimensional data. The two most prominent application fields in this research, proposed independently, are frequent itemset mining (developed for market basket data and biclustering (applied to gene expression data analysis. The common limitation of both methodologies is the limited applicability for very large binary data sets. In this paper we propose a novel and efficient method to find both frequent closed itemsets and biclusters in high-dimensional binary data. The method is based on simple but very powerful matrix and vector multiplication approaches that ensure that all patterns can be discovered in a fast manner. The proposed algorithm has been implemented in the commonly used MATLAB environment and freely available for researchers.
Statistical Analysis for High-Dimensional Data : The Abel Symposium 2014
Bühlmann, Peter; Glad, Ingrid; Langaas, Mette; Richardson, Sylvia; Vannucci, Marina
2016-01-01
This book features research contributions from The Abel Symposium on Statistical Analysis for High Dimensional Data, held in Nyvågar, Lofoten, Norway, in May 2014. The focus of the symposium was on statistical and machine learning methodologies specifically developed for inference in “big data” situations, with particular reference to genomic applications. The contributors, who are among the most prominent researchers on the theory of statistics for high dimensional inference, present new theories and methods, as well as challenging applications and computational solutions. Specific themes include, among others, variable selection and screening, penalised regression, sparsity, thresholding, low dimensional structures, computational challenges, non-convex situations, learning graphical models, sparse covariance and precision matrices, semi- and non-parametric formulations, multiple testing, classification, factor models, clustering, and preselection. Highlighting cutting-edge research and casting light on...
Su, Yapeng; Shi, Qihui; Wei, Wei
2017-02-01
New insights on cellular heterogeneity in the last decade provoke the development of a variety of single cell omics tools at a lightning pace. The resultant high-dimensional single cell data generated by these tools require new theoretical approaches and analytical algorithms for effective visualization and interpretation. In this review, we briefly survey the state-of-the-art single cell proteomic tools with a particular focus on data acquisition and quantification, followed by an elaboration of a number of statistical and computational approaches developed to date for dissecting the high-dimensional single cell data. The underlying assumptions, unique features, and limitations of the analytical methods with the designated biological questions they seek to answer will be discussed. Particular attention will be given to those information theoretical approaches that are anchored in a set of first principles of physics and can yield detailed (and often surprising) predictions. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
A Shell Multi-dimensional Hierarchical Cubing Approach for High-Dimensional Cube
Zou, Shuzhi; Zhao, Li; Hu, Kongfa
The pre-computation of data cubes is critical for improving the response time of OLAP systems and accelerating data mining tasks in large data warehouses. However, as the sizes of data warehouses grow, the time it takes to perform this pre-computation becomes a significant performance bottleneck. In a high dimensional data warehouse, it might not be practical to build all these cuboids and their indices. In this paper, we propose a shell multi-dimensional hierarchical cubing algorithm, based on an extension of the previous minimal cubing approach. This method partitions the high dimensional data cube into low multi-dimensional hierarchical cube. Experimental results show that the proposed method is significantly more efficient than other existing cubing methods.
Distribution of high-dimensional entanglement via an intra-city free-space link.
Steinlechner, Fabian; Ecker, Sebastian; Fink, Matthias; Liu, Bo; Bavaresco, Jessica; Huber, Marcus; Scheidl, Thomas; Ursin, Rupert
2017-07-24
Quantum entanglement is a fundamental resource in quantum information processing and its distribution between distant parties is a key challenge in quantum communications. Increasing the dimensionality of entanglement has been shown to improve robustness and channel capacities in secure quantum communications. Here we report on the distribution of genuine high-dimensional entanglement via a 1.2-km-long free-space link across Vienna. We exploit hyperentanglement, that is, simultaneous entanglement in polarization and energy-time bases, to encode quantum information, and observe high-visibility interference for successive correlation measurements in each degree of freedom. These visibilities impose lower bounds on entanglement in each subspace individually and certify four-dimensional entanglement for the hyperentangled system. The high-fidelity transmission of high-dimensional entanglement under real-world atmospheric link conditions represents an important step towards long-distance quantum communications with more complex quantum systems and the implementation of advanced quantum experiments with satellite links.
Directory of Open Access Journals (Sweden)
Hassan Hashemi
2018-05-01
Full Text Available This study introduces a new decision model with multi-criteria analysis by a group of decision makers (DMs with intuitionistic fuzzy sets (IFSs. The presented model depends on a new integration of IFSs theory, ELECTRE and VIKOR along with grey relational analysis (GRA. To portray uncertain real-life situations and take account of complex decision problem, multi-criteria group decision-making (MCGDM model by totally unknown importance are introduced with IF-setting. Hence, a weighting method depended on Entropy and IFSs, is developed to present the weights of DMs and evaluation factors. A new ranking approach is provided for prioritizing the alternatives. To indicate the applicability of the presented new decision model, an industrial application for assessing contractors in the construction industry is given and discussed from the recent literature.
Xu, Chao; Fang, Jian; Shen, Hui; Wang, Yu-Ping; Deng, Hong-Wen
2018-01-25
Extreme phenotype sampling (EPS) is a broadly-used design to identify candidate genetic factors contributing to the variation of quantitative traits. By enriching the signals in extreme phenotypic samples, EPS can boost the association power compared to random sampling. Most existing statistical methods for EPS examine the genetic factors individually, despite many quantitative traits have multiple genetic factors underlying their variation. It is desirable to model the joint effects of genetic factors, which may increase the power and identify novel quantitative trait loci under EPS. The joint analysis of genetic data in high-dimensional situations requires specialized techniques, e.g., the least absolute shrinkage and selection operator (LASSO). Although there are extensive research and application related to LASSO, the statistical inference and testing for the sparse model under EPS remain unknown. We propose a novel sparse model (EPS-LASSO) with hypothesis test for high-dimensional regression under EPS based on a decorrelated score function. The comprehensive simulation shows EPS-LASSO outperforms existing methods with stable type I error and FDR control. EPS-LASSO can provide a consistent power for both low- and high-dimensional situations compared with the other methods dealing with high-dimensional situations. The power of EPS-LASSO is close to other low-dimensional methods when the causal effect sizes are small and is superior when the effects are large. Applying EPS-LASSO to a transcriptome-wide gene expression study for obesity reveals 10 significant body mass index associated genes. Our results indicate that EPS-LASSO is an effective method for EPS data analysis, which can account for correlated predictors. The source code is available at https://github.com/xu1912/EPSLASSO. hdeng2@tulane.edu. Supplementary data are available at Bioinformatics online. © The Author (2018). Published by Oxford University Press. All rights reserved. For Permissions, please
An Unbiased Distance-based Outlier Detection Approach for High-dimensional Data
DEFF Research Database (Denmark)
Nguyen, Hoang Vu; Gopalkrishnan, Vivekanand; Assent, Ira
2011-01-01
than a global property. Different from existing approaches, it is not grid-based and dimensionality unbiased. Thus, its performance is impervious to grid resolution as well as the curse of dimensionality. In addition, our approach ranks the outliers, allowing users to select the number of desired...... outliers, thus mitigating the issue of high false alarm rate. Extensive empirical studies on real datasets show that our approach efficiently and effectively detects outliers, even in high-dimensional spaces....
Controlling chaos in low and high dimensional systems with periodic parametric perturbations
International Nuclear Information System (INIS)
Mirus, K.A.; Sprott, J.C.
1998-06-01
The effect of applying a periodic perturbation to an accessible parameter of various chaotic systems is examined. Numerical results indicate that perturbation frequencies near the natural frequencies of the unstable periodic orbits of the chaotic systems can result in limit cycles for relatively small perturbations. Such perturbations can also control or significantly reduce the dimension of high-dimensional systems. Initial application to the control of fluctuations in a prototypical magnetic fusion plasma device will be reviewed
GAMLSS for high-dimensional data – a flexible approach based on boosting
Mayr, Andreas; Fenske, Nora; Hofner, Benjamin; Kneib, Thomas; Schmid, Matthias
2010-01-01
Generalized additive models for location, scale and shape (GAMLSS) are a popular semi-parametric modelling approach that, in contrast to conventional GAMs, regress not only the expected mean but every distribution parameter (e.g. location, scale and shape) to a set of covariates. Current fitting procedures for GAMLSS are infeasible for high-dimensional data setups and require variable selection based on (potentially problematic) information criteria. The present work describes a boosting algo...
Preface [HD3-2015: International meeting on high-dimensional data-driven science
International Nuclear Information System (INIS)
2016-01-01
A never-ending series of innovations in measurement technology and evolutions in information and communication technologies have led to the ongoing generation and accumulation of large quantities of high-dimensional data every day. While detailed data-centric approaches have been pursued in respective research fields, situations have been encountered where the same mathematical framework of high-dimensional data analysis can be found in a wide variety of seemingly unrelated research fields, such as estimation on the basis of undersampled Fourier transform in nuclear magnetic resonance spectroscopy in chemistry, in magnetic resonance imaging in medicine, and in astronomical interferometry in astronomy. In such situations, bringing diverse viewpoints together therefore becomes a driving force for the creation of innovative developments in various different research fields. This meeting focuses on “Sparse Modeling” (SpM) as a methodology for creation of innovative developments through the incorporation of a wide variety of viewpoints in various research fields. The objective of this meeting is to offer a forum where researchers with interest in SpM can assemble and exchange information on the latest results and newly established methodologies, and discuss future directions of the interdisciplinary studies for High-Dimensional Data-Driven science (HD 3 ). The meeting was held in Kyoto from 14-17 December 2015. We are pleased to publish 22 papers contributed by invited speakers in this volume of Journal of Physics: Conference Series. We hope that this volume will promote further development of High-Dimensional Data-Driven science. (paper)
Runcie, Daniel E; Mukherjee, Sayan
2013-07-01
Quantitative genetic studies that model complex, multivariate phenotypes are important for both evolutionary prediction and artificial selection. For example, changes in gene expression can provide insight into developmental and physiological mechanisms that link genotype and phenotype. However, classical analytical techniques are poorly suited to quantitative genetic studies of gene expression where the number of traits assayed per individual can reach many thousand. Here, we derive a Bayesian genetic sparse factor model for estimating the genetic covariance matrix (G-matrix) of high-dimensional traits, such as gene expression, in a mixed-effects model. The key idea of our model is that we need consider only G-matrices that are biologically plausible. An organism's entire phenotype is the result of processes that are modular and have limited complexity. This implies that the G-matrix will be highly structured. In particular, we assume that a limited number of intermediate traits (or factors, e.g., variations in development or physiology) control the variation in the high-dimensional phenotype, and that each of these intermediate traits is sparse - affecting only a few observed traits. The advantages of this approach are twofold. First, sparse factors are interpretable and provide biological insight into mechanisms underlying the genetic architecture. Second, enforcing sparsity helps prevent sampling errors from swamping out the true signal in high-dimensional data. We demonstrate the advantages of our model on simulated data and in an analysis of a published Drosophila melanogaster gene expression data set.
High-Dimensional Function Approximation With Neural Networks for Large Volumes of Data.
Andras, Peter
2018-02-01
Approximation of high-dimensional functions is a challenge for neural networks due to the curse of dimensionality. Often the data for which the approximated function is defined resides on a low-dimensional manifold and in principle the approximation of the function over this manifold should improve the approximation performance. It has been show that projecting the data manifold into a lower dimensional space, followed by the neural network approximation of the function over this space, provides a more precise approximation of the function than the approximation of the function with neural networks in the original data space. However, if the data volume is very large, the projection into the low-dimensional space has to be based on a limited sample of the data. Here, we investigate the nature of the approximation error of neural networks trained over the projection space. We show that such neural networks should have better approximation performance than neural networks trained on high-dimensional data even if the projection is based on a relatively sparse sample of the data manifold. We also find that it is preferable to use a uniformly distributed sparse sample of the data for the purpose of the generation of the low-dimensional projection. We illustrate these results considering the practical neural network approximation of a set of functions defined on high-dimensional data including real world data as well.
McParland, D; Phillips, C M; Brennan, L; Roche, H M; Gormley, I C
2017-12-10
The LIPGENE-SU.VI.MAX study, like many others, recorded high-dimensional continuous phenotypic data and categorical genotypic data. LIPGENE-SU.VI.MAX focuses on the need to account for both phenotypic and genetic factors when studying the metabolic syndrome (MetS), a complex disorder that can lead to higher risk of type 2 diabetes and cardiovascular disease. Interest lies in clustering the LIPGENE-SU.VI.MAX participants into homogeneous groups or sub-phenotypes, by jointly considering their phenotypic and genotypic data, and in determining which variables are discriminatory. A novel latent variable model that elegantly accommodates high dimensional, mixed data is developed to cluster LIPGENE-SU.VI.MAX participants using a Bayesian finite mixture model. A computationally efficient variable selection algorithm is incorporated, estimation is via a Gibbs sampling algorithm and an approximate BIC-MCMC criterion is developed to select the optimal model. Two clusters or sub-phenotypes ('healthy' and 'at risk') are uncovered. A small subset of variables is deemed discriminatory, which notably includes phenotypic and genotypic variables, highlighting the need to jointly consider both factors. Further, 7 years after the LIPGENE-SU.VI.MAX data were collected, participants underwent further analysis to diagnose presence or absence of the MetS. The two uncovered sub-phenotypes strongly correspond to the 7-year follow-up disease classification, highlighting the role of phenotypic and genotypic factors in the MetS and emphasising the potential utility of the clustering approach in early screening. Additionally, the ability of the proposed approach to define the uncertainty in sub-phenotype membership at the participant level is synonymous with the concepts of precision medicine and nutrition. Copyright © 2017 John Wiley & Sons, Ltd. Copyright © 2017 John Wiley & Sons, Ltd.
Directory of Open Access Journals (Sweden)
Xinshang You
2016-09-01
Full Text Available This paper proposes a novel approach to cope with the multi-criteria group decision-making problems. We give the pairwise comparisons based on the best-worst-method (BWM, which can decrease comparison times. Additionally, our comparison results are determined with the positive and negative aspects. In order to deal with the decision matrices effectively, we consider the elimination and choice translation reality (ELECTRE III method under the intuitionistic multiplicative preference relations environment. The ELECTRE III method is designed for a double-automatic system. Under a certain limitation, without bothering the decision-makers to reevaluate the alternatives, this system can adjust some special elements that have the most influence on the group’s satisfaction degree. Moreover, the proposed method is suitable for both the intuitionistic multiplicative preference relation and the interval valued fuzzy preference relations through the transformation formula. An illustrative example is followed to demonstrate the rationality and availability of the novel method.
On-chip generation of high-dimensional entangled quantum states and their coherent control.
Kues, Michael; Reimer, Christian; Roztocki, Piotr; Cortés, Luis Romero; Sciara, Stefania; Wetzel, Benjamin; Zhang, Yanbing; Cino, Alfonso; Chu, Sai T; Little, Brent E; Moss, David J; Caspani, Lucia; Azaña, José; Morandotti, Roberto
2017-06-28
Optical quantum states based on entangled photons are essential for solving questions in fundamental physics and are at the heart of quantum information science. Specifically, the realization of high-dimensional states (D-level quantum systems, that is, qudits, with D > 2) and their control are necessary for fundamental investigations of quantum mechanics, for increasing the sensitivity of quantum imaging schemes, for improving the robustness and key rate of quantum communication protocols, for enabling a richer variety of quantum simulations, and for achieving more efficient and error-tolerant quantum computation. Integrated photonics has recently become a leading platform for the compact, cost-efficient, and stable generation and processing of non-classical optical states. However, so far, integrated entangled quantum sources have been limited to qubits (D = 2). Here we demonstrate on-chip generation of entangled qudit states, where the photons are created in a coherent superposition of multiple high-purity frequency modes. In particular, we confirm the realization of a quantum system with at least one hundred dimensions, formed by two entangled qudits with D = 10. Furthermore, using state-of-the-art, yet off-the-shelf telecommunications components, we introduce a coherent manipulation platform with which to control frequency-entangled states, capable of performing deterministic high-dimensional gate operations. We validate this platform by measuring Bell inequality violations and performing quantum state tomography. Our work enables the generation and processing of high-dimensional quantum states in a single spatial mode.
Yue, Lei; Guan, Zailin; Saif, Ullah; Zhang, Fei; Wang, Hao
2016-01-01
Group scheduling is significant for efficient and cost-effective production systems. However, setup times arise between groups, which should be reduced by sequencing the groups efficiently. The current research focuses on a sequence-dependent group scheduling problem with the aim of simultaneously minimizing the makespan and the total weighted tardiness. In most production scheduling problems, the processing time of jobs is assumed to be fixed. However, the actual processing time of jobs may be reduced due to the "learning effect". The integration of sequence-dependent group scheduling with learning effects has rarely been considered in the literature. Therefore, this research considers a single-machine group scheduling problem with sequence-dependent setup times and learning effects simultaneously. A novel hybrid Pareto artificial bee colony algorithm (HPABC), incorporating some steps of the genetic algorithm, is proposed to obtain Pareto solutions. Furthermore, five different sizes of test problems (small, small-medium, medium, large-medium, large) are tested using the proposed HPABC. The Taguchi method is used to tune the effective parameters of HPABC for each problem category. The performance of HPABC is compared with three well-known multi-objective optimization algorithms: the improved strength Pareto evolutionary algorithm (SPEA2), the non-dominated sorting genetic algorithm II (NSGAII), and particle swarm optimization (PSO). Results indicate that HPABC outperforms SPEA2, NSGAII, and PSO, giving better Pareto optimal solutions in terms of diversity and quality for almost all instances of the different problem sizes.
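The Pareto comparisons underlying HPABC and its competitors reduce to a dominance test between objective vectors; a minimal Python sketch with invented schedule objectives (not the authors' implementation):

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization):
    no worse in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Return the non-dominated subset of a list of objective vectors."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

# Hypothetical schedules scored as (makespan, total weighted tardiness).
schedules = [(10, 7), (8, 9), (12, 4), (11, 8), (9, 9)]
print(pareto_front(schedules))   # [(10, 7), (8, 9), (12, 4)]
```

HPABC, SPEA2, and NSGAII all rank candidate schedules with exactly this kind of dominance check before applying their respective diversity mechanisms.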
Covariance Method of the Tunneling Radiation from High Dimensional Rotating Black Holes
Li, Hui-Ling; Han, Yi-Wen; Chen, Shuai-Ru; Ding, Cong
2018-04-01
In this paper, the Angheben-Nadalini-Vanzo-Zerbini (ANVZ) covariance method is used to study the tunneling radiation from the Kerr-Gödel black hole and the Myers-Perry black hole with two independent angular momenta. By solving the Hamilton-Jacobi equation and separating the variables, the radial equation of motion of a tunneling particle is obtained. Using the near-horizon approximation and the proper spatial distance, we calculate the tunneling rate and the temperature of the Hawking radiation. Thus, the ANVZ covariance method is extended to the study of tunneling radiation from high-dimensional black holes.
DEFF Research Database (Denmark)
Ding, Yunhong; Bacco, Davide; Dalgaard, Kjeld
2017-01-01
is intrinsically limited to 1 bit/photon. Here we propose and experimentally demonstrate, for the first time, a high-dimensional quantum key distribution protocol based on space division multiplexing in multicore fiber using silicon photonic integrated lightwave circuits. We successfully realized three mutually......-dimensional quantum states, and enables breaking the information efficiency limit of traditional quantum key distribution protocols. In addition, the silicon photonic circuits used in our work integrate variable optical attenuators, highly efficient multicore fiber couplers, and Mach-Zehnder interferometers, enabling...
High-dimensional chaos from self-sustained collisions of solitons
Energy Technology Data Exchange (ETDEWEB)
Yildirim, O. Ozgur, E-mail: donhee@seas.harvard.edu, E-mail: oozgury@gmail.com [Cavium, Inc., 600 Nickerson Rd., Marlborough, Massachusetts 01752 (United States); Ham, Donhee, E-mail: donhee@seas.harvard.edu, E-mail: oozgury@gmail.com [Harvard University, 33 Oxford St., Cambridge, Massachusetts 02138 (United States)
2014-06-16
We experimentally demonstrate chaos generation based on collisions of electrical solitons on a nonlinear transmission line. The nonlinear line creates solitons, and an amplifier connected to it provides gain to these solitons for their self-excitation and self-sustenance. Critically, the amplifier also provides a mechanism to enable and intensify collisions among solitons. These collisional interactions are of intrinsically nonlinear nature, modulating the phase and amplitude of solitons, thus causing chaos. This chaos generated by the exploitation of the nonlinear wave phenomena is inherently high-dimensional, which we also demonstrate.
Inferring biological tasks using Pareto analysis of high-dimensional data.
Hart, Yuval; Sheftel, Hila; Hausser, Jean; Szekely, Pablo; Ben-Moshe, Noa Bossel; Korem, Yael; Tendler, Avichai; Mayo, Avraham E; Alon, Uri
2015-03-01
We present the Pareto task inference method (ParTI; http://www.weizmann.ac.il/mcb/UriAlon/download/ParTI) for inferring biological tasks from high-dimensional biological data. Data are described as a polytope, and features maximally enriched closest to the vertices (or archetypes) allow identification of the tasks the vertices represent. We demonstrate that human breast tumors and mouse tissues are well described by tetrahedrons in gene expression space, with specific tumor types and biological functions enriched at each of the vertices, suggesting four key tasks.
A novel algorithm of artificial immune system for high-dimensional function numerical optimization
Institute of Scientific and Technical Information of China (English)
DU Haifeng; GONG Maoguo; JIAO Licheng; LIU Ruochen
2005-01-01
Based on the clonal selection theory and immune memory theory, a novel artificial immune system algorithm, the immune memory clonal programming algorithm (IMCPA), is put forward. Using Markov chain theory, IMCPA is proved to be convergent. Compared with some other evolutionary programming algorithms (such as the breeder genetic algorithm), IMCPA is shown to be an evolutionary strategy capable of solving complex machine learning tasks, such as high-dimensional function optimization; it maintains the diversity of the population, avoids premature convergence to some extent, and has a higher convergence speed.
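The clonal-selection loop (clone good antibodies, hypermutate the clones, keep a memory of the best) can be sketched as a toy optimizer; the population sizes and mutation schedule here are illustrative assumptions, not the published IMCPA:

```python
import random

def clonal_search(f, dim, iters=200, pop=20, clones=5, seed=1):
    """Toy clonal-selection optimizer (minimization): keep an elite
    'memory' half, clone it, hypermutate the clones with a mutation
    scale that decays over time, and retain the best individuals."""
    rng = random.Random(seed)
    P = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(pop)]
    best = min(P, key=f)
    for t in range(iters):
        P.sort(key=f)
        candidates = P[:pop // 2]                # immune memory (elite)
        for x in P[:pop // 2]:
            scale = 0.5 / (t + 1) ** 0.5         # decaying hypermutation
            for _ in range(clones):
                candidates.append([xi + rng.gauss(0, scale) for xi in x])
        P = sorted(candidates, key=f)[:pop]
        best = min(best, P[0], key=f)
    return best

sphere = lambda x: sum(v * v for v in x)   # classic high-dimensional test function
x_star = clonal_search(sphere, dim=10)
```

On the 10-dimensional sphere function this toy loop steadily drives the objective toward zero; the real IMCPA adds the memory and cloning operators whose convergence the authors prove via Markov chains.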
Computing and visualizing time-varying merge trees for high-dimensional data
Energy Technology Data Exchange (ETDEWEB)
Oesterling, Patrick [Univ. of Leipzig (Germany); Heine, Christian [Univ. of Kaiserslautern (Germany); Weber, Gunther H. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Morozov, Dmitry [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Scheuermann, Gerik [Univ. of Leipzig (Germany)
2017-06-03
We introduce a new method that identifies and tracks features in arbitrary dimensions using the merge tree -- a structure for identifying topological features based on thresholding in scalar fields. This method analyzes the evolution of features of the function by tracking changes in the merge tree and relates features by matching subtrees between consecutive time steps. Using the time-varying merge tree, we present a structural visualization of the changing function that illustrates both features and their temporal evolution. We demonstrate the utility of our approach by applying it to temporal cluster analysis of high-dimensional point clouds.
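The merge-tree construction can be illustrated in the simplest setting; a union-find sketch for a 1-D scalar field over a path graph (the paper's method handles arbitrary dimensions and additionally tracks the tree over time):

```python
def superlevel_persistence_1d(values):
    """Sweep a 1-D scalar field from high to low, maintaining superlevel-set
    components with union-find; when two components merge, the one with the
    lower peak dies, giving a (birth, death) persistence pair."""
    parent, peak, pairs = {}, {}, []
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path halving
            i = parent[i]
        return i
    for i in sorted(range(len(values)), key=lambda k: -values[k]):
        parent[i], peak[i] = i, values[i]
        for j in (i - 1, i + 1):
            if j in parent:                 # neighbour already swept in
                ri, rj = find(i), find(j)
                if ri == rj:
                    continue
                birth = min(peak[ri], peak[rj])
                if birth > values[i]:       # a genuine peak dies here
                    pairs.append((birth, values[i]))
                parent[ri] = rj
                peak[rj] = max(peak[ri], peak[rj])
    return pairs

# Peaks of height 3 and 5 separated by a saddle at height 1:
# the lower peak is born at 3 and merges (dies) at 1.
print(superlevel_persistence_1d([0, 3, 1, 5, 2]))   # [(3, 1)]
```

Each recorded pair corresponds to an edge of the merge tree; the paper's tracking step then matches such subtrees between consecutive time steps.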
Non-Asymptotic Oracle Inequalities for the High-Dimensional Cox Regression via Lasso.
Kong, Shengchun; Nan, Bin
2014-01-01
We consider finite sample properties of the regularized high-dimensional Cox regression via lasso. Existing literature focuses on linear models or generalized linear models with Lipschitz loss functions, where the empirical risk functions are summations of independent and identically distributed (iid) losses. The summands in the negative log partial likelihood function for censored survival data, however, are neither iid nor Lipschitz. We first approximate the negative log partial likelihood function by a sum of iid non-Lipschitz terms, then derive non-asymptotic oracle inequalities for the lasso-penalized Cox regression using pointwise arguments to tackle the difficulties caused by the lack of iid Lipschitz losses.
High-dimensional data: p >> n in mathematical statistics and bio-medical applications
Van De Geer, Sara A.; Van Houwelingen, Hans C.
2004-01-01
The workshop 'High-dimensional data: p >> n in mathematical statistics and bio-medical applications' was held at the Lorentz Center in Leiden from 9 to 20 September 2002. This special issue of Bernoulli contains a selection of papers presented at that workshop. The introduction of high-throughput micro-array technology to measure gene-expression levels and the publication of the pioneering paper by Golub et al. (1999) has brought to life a whole new branch of data analysis under the name of...
Directory of Open Access Journals (Sweden)
Lymn Joanne S
2008-06-01
Full Text Available Abstract Background Problem-based learning is recognised as promoting integration of knowledge and fostering a deeper approach to life-long learning, but is associated with significant resource implications. In order to encourage second year undergraduate medical students to integrate their pharmacological knowledge in a professionally relevant clinical context, with limited staff resources, we developed a novel clustered PBL approach. This paper utilises preliminary data from both the facilitator and student viewpoint to determine whether the use of this novel methodology is feasible with large groups of students. Methods Students were divided into 16 groups (20–21 students/group) and were allocated a PBL facilitator. Each group was then divided into seven subgroups, or clusters, of 2 or 3 students, with each cluster being allocated a specific case. Each cluster was then provided with more detailed clinical information and studied an individual and distinct case-study. An electronic questionnaire was used to evaluate both student and facilitator perception of this clustered PBL format, with each being asked to rate the content, structure, facilitator effectiveness, and their personal view of the wider learning experience. Results Despite initial misgivings, facilitators managed this more complex clustered PBL methodology effectively within the time constraints and reported that they enjoyed the process. They felt that the cases effectively illustrated medical concepts and fitted and reinforced the students' pharmacological knowledge, but were less convinced that the scenario motivated students to use additional resources or stimulated their interest in pharmacology. Student feedback was broadly similar to that of the facilitators, although they were more positive about the scenario stimulating the use of additional resources and an interest in pharmacology. Conclusion This clustered PBL methodology can be successfully used with larger groups of students.
Energy Technology Data Exchange (ETDEWEB)
Storm, Emma; Weniger, Christoph [GRAPPA, Institute of Physics, University of Amsterdam, Science Park 904, 1090 GL Amsterdam (Netherlands); Calore, Francesca, E-mail: e.m.storm@uva.nl, E-mail: c.weniger@uva.nl, E-mail: francesca.calore@lapth.cnrs.fr [LAPTh, CNRS, 9 Chemin de Bellevue, BP-110, Annecy-le-Vieux, 74941, Annecy Cedex (France)
2017-08-01
We present SkyFACT (Sky Factorization with Adaptive Constrained Templates), a new approach for studying, modeling and decomposing diffuse gamma-ray emission. Like most previous analyses, the approach relies on predictions from cosmic-ray propagation codes like GALPROP and DRAGON. However, in contrast to previous approaches, we account for the fact that models are not perfect and allow for a very large number (≳10^5) of nuisance parameters to parameterize these imperfections. We combine methods of image reconstruction and adaptive spatio-spectral template regression in one coherent hybrid approach. To this end, we use penalized Poisson likelihood regression, with regularization functions that are motivated by the maximum entropy method. We introduce methods to efficiently handle the high dimensionality of the convex optimization problem as well as the associated semi-sparse covariance matrix, using the L-BFGS-B algorithm and Cholesky factorization. We test the method both on synthetic data as well as on gamma-ray emission from the inner Galaxy, |ℓ| < 90° and |b| < 20°, as observed by the Fermi Large Area Telescope. We finally define a simple reference model that removes most of the residual emission from the inner Galaxy, based on conventional diffuse emission components as well as components for the Fermi bubbles, the Fermi Galactic center excess, and extended sources along the Galactic disk. Variants of this reference model can serve as basis for future studies of diffuse emission in and outside the Galactic disk.
Chiu, Mei Choi; Pun, Chi Seng; Wong, Hoi Ying
2017-08-01
Investors interested in the global financial market must analyze financial securities internationally. Making an optimal global investment decision involves processing a huge amount of data for a high-dimensional portfolio. This article investigates the big data challenges of two mean-variance optimal portfolios: continuous-time precommitment and constant-rebalancing strategies. We show that both optimized portfolios implemented with the traditional sample estimates converge to the worst performing portfolio when the portfolio size becomes large. The crux of the problem is the estimation error accumulated from the huge dimension of stock data. We then propose a linear programming optimal (LPO) portfolio framework, which applies a constrained ℓ1 minimization to the theoretical optimal control to mitigate the risk associated with the dimensionality issue. The resulting portfolio becomes a sparse portfolio that selects stocks with a data-driven procedure and hence offers a stable mean-variance portfolio in practice. When the number of observations becomes large, the LPO portfolio converges to the oracle optimal portfolio, which is free of estimation error, even though the number of stocks grows faster than the number of observations. Our numerical and empirical studies demonstrate the superiority of the proposed approach. © 2017 Society for Risk Analysis.
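A constrained ℓ1 minimization of this Dantzig-selector flavor can be cast as a linear program; a small sketch with scipy.optimize.linprog and invented toy inputs (the actual LPO estimator and its tuning are specified in the paper):

```python
import numpy as np
from scipy.optimize import linprog

def lpo_weights(Sigma, mu, delta):
    """Dantzig-selector-style sketch: minimize ||w||_1 subject to
    ||Sigma @ w - mu||_inf <= delta, as an LP with w = u - v, u, v >= 0."""
    p = len(mu)
    c = np.ones(2 * p)                        # sum(u) + sum(v) = ||w||_1
    A = np.hstack([Sigma, -Sigma])            # Sigma @ (u - v)
    res = linprog(c,
                  A_ub=np.vstack([A, -A]),
                  b_ub=np.concatenate([mu + delta, delta - mu]),
                  bounds=[(0, None)] * (2 * p))
    u, v = res.x[:p], res.x[p:]
    return u - v

# Toy inputs: with this diagonal Sigma the program zeroes out the assets
# whose estimated mean return is indistinguishable from noise.
Sigma = np.diag([1.0, 2.0, 4.0])
mu = np.array([0.5, 0.0, 0.0])
w = lpo_weights(Sigma, mu, delta=0.1)        # approx [0.4, 0.0, 0.0]
```

The ℓ1 objective is what produces the sparse, data-driven stock selection the abstract describes: coordinates whose constraints can be satisfied at zero stay at zero.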
Directory of Open Access Journals (Sweden)
Ming Chen
2015-11-01
Full Text Available In multi-criteria group decision-making (MCGDM), one of the most important problems is to determine the weights of criteria and experts. This paper presents two Min-Max models to optimize the point estimates of the weights. Since each expert generally holds a uniform viewpoint on the importance (weighted value) of each criterion when ranking the alternatives, the objective function in the first model minimizes the maximum variation between the actual score vector and the ideal one over all the alternatives, so that the optimal criteria weights are consistent in ranking all the alternatives for the same expert. The second model is designed to optimize the weights of experts such that the overall evaluation of each alternative incorporates the perspectives of as many experts as possible. Thus, the objective function in the second model minimizes the maximum variation between the actual vector of evaluations and the ideal one over all the experts, so that the optimal weights reduce the differences among the experts in evaluating the same alternative. For the constructed Min-Max models, another focus of this paper is the development of an efficient algorithm for the optimal weights. Several applications are employed to show the significance of the models and the algorithm. The numerical results make clear that the developed Min-Max models solve MCGDM problems, including those with incomplete score matrices, more effectively than the methods available in the literature. Specifically, by the proposed method, (1) the evaluation uniformity of each expert on the same criteria is guaranteed; (2) the overall evaluation of each alternative incorporates the judgements of as many experts as possible; and (3) the highest discrimination degree of the alternatives is obtained.
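A Min-Max weight model of this kind becomes a standard linear program once an auxiliary variable bounds the maximum deviation; a sketch with an invented score matrix (scipy's LP solver stands in for the authors' dedicated algorithm):

```python
import numpy as np
from scipy.optimize import linprog

def minmax_weights(S, t):
    """Min-Max sketch: choose criterion weights w (w >= 0, sum w = 1)
    minimizing the largest deviation |S @ w - t| over all alternatives."""
    m, n = S.shape
    # Variables: [w_1, ..., w_n, z]; objective: minimize z.
    c = np.zeros(n + 1)
    c[-1] = 1.0
    A_ub = np.vstack([np.hstack([S, -np.ones((m, 1))]),    # S w - z <= t
                      np.hstack([-S, -np.ones((m, 1))])])  # t - S w <= z
    b_ub = np.concatenate([t, -t])
    A_eq = np.hstack([np.ones((1, n)), np.zeros((1, 1))])  # sum(w) = 1
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * (n + 1))
    return res.x[:-1], res.x[-1]

S = np.array([[0.9, 0.1], [0.2, 0.8]])   # scores of 2 alternatives on 2 criteria
t = np.array([0.5, 0.5])                 # ideal overall scores
w, z = minmax_weights(S, t)              # w approx [0.5, 0.5], z approx 0
```

With these toy numbers both deviations vanish at equal weights, so the optimal maximum variation z is zero; in general z measures how far the expert's scores are from perfect consistency.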
Rumbach, Anna F
2013-11-01
To determine the anatomical and physiological nature of voice problems and their treatment in those group fitness instructors (GFIs) who have sought a medical diagnosis; the impact of voice disorders on quality of life and their contribution to activity limitations and participation restrictions; and the perceived attitudes and level of support from the industry at large in response to instructor's voice disorders and need for treatment. Prospective self-completion questionnaire design. Thirty-eight individuals (3 males and 35 females) currently active in the Australian fitness industry who had been diagnosed with a voice disorder completed an online self-completion questionnaire administered via SurveyMonkey. Laryngeal pathology included vocal fold nodules (N = 24), vocal fold cysts (N = 2), vocal fold hemorrhage (N = 1), and recurrent chronic laryngitis (N = 3). Eight individuals reported vocal strain and muscle tension dysphonia without concurrent vocal fold pathology. Treatment methods were variable, with 73.68% (N = 28) receiving voice therapy alone, 7.89% (N = 3) having voice therapy in combination with surgery, and 10.53% (N = 4) having voice therapy in conjunction with medication. Three individuals (7.89%) received no treatment for their voice disorder. During treatment, 82% of the cohort altered their teaching practices. Half of the cohort reported that their voice problems led to social withdrawal, decreased job satisfaction, and emotional distress. Greater than 65% also reported being dissatisfied with the level of industry and coworker support during the period of voice recovery. This study identifies that GFIs are susceptible to a number of voice disorders that impact their social and professional lives, and there is a need for more proactive training and advice on voice care for instructors, as well as those in management positions within the industry to address mixed approaches and opinions regarding the importance of voice care.
Ghosts in high dimensional non-linear dynamical systems: The example of the hypercycle
International Nuclear Information System (INIS)
Sardanyes, Josep
2009-01-01
Ghost-induced delayed transitions are analyzed in high-dimensional non-linear dynamical systems by means of the hypercycle model. The hypercycle is a network of catalytically-coupled self-replicating RNA-like macromolecules, and has been suggested to be involved in the transition from non-living to living matter in the context of early prebiotic evolution. It is demonstrated that, in the vicinity of the saddle-node bifurcation for symmetric hypercycles, the persistence time before extinction, T_ε, tends to infinity as n→∞ (where n is the number of units of the hypercycle), thus suggesting that increasing the number of hypercycle units entails a longer resilient time before extinction because of the ghost. Furthermore, the dynamics of three large hypercycle networks is studied numerically, focusing on the extinction dynamics associated with the ghosts. Such networks allow us to explore the properties of the ghosts living in high-dimensional phase spaces with n = 5, n = 10 and n = 15 dimensions. These hypercyclic networks, in agreement with other works, are shown to exhibit self-maintained oscillations governed by stable limit cycles. The bifurcation scenarios for these hypercycles are analyzed, as well as the effect of the phase space dimensionality on the delayed transition phenomena and on the scaling properties of the ghosts near the bifurcation threshold.
High-dimensional free-space optical communications based on orbital angular momentum coding
Zou, Li; Gu, Xiaofan; Wang, Le
2018-03-01
In this paper, we propose a high-dimensional free-space optical communication scheme using orbital angular momentum (OAM) coding. In the scheme, the transmitter encodes N bits of information by using a spatial light modulator to convert a Gaussian beam into a superposition of N OAM modes and a Gaussian mode; the receiver decodes the information through an OAM mode analyser consisting of a Mach-Zehnder interferometer with a rotating Dove prism, a photoelectric detector, and a computer carrying out the fast Fourier transform. The scheme realizes high-dimensional free-space optical communication and decodes the information quickly and accurately. We have verified the feasibility of the scheme by exploiting 8 (4) OAM modes and a Gaussian mode to implement a 256-ary (16-ary) coded free-space optical communication link transmitting a 256-gray-scale (16-gray-scale) picture. The results show that a zero bit error rate performance has been achieved.
The validation and assessment of machine learning: a game of prediction from high-dimensional data.
Directory of Open Access Journals (Sweden)
Tune H Pers
Full Text Available In applied statistics, tools from machine learning are popular for analyzing complex and high-dimensional data. However, few theoretical results are available that could guide to the appropriate machine learning tool in a new application. Initial development of an overall strategy thus often implies that multiple methods are tested and compared on the same set of data. This is particularly difficult in situations that are prone to over-fitting where the number of subjects is low compared to the number of potential predictors. The article presents a game which provides some grounds for conducting a fair model comparison. Each player selects a modeling strategy for predicting individual response from potential predictors. A strictly proper scoring rule, bootstrap cross-validation, and a set of rules are used to make the results obtained with different strategies comparable. To illustrate the ideas, the game is applied to data from the Nugenob Study where the aim is to predict the fat oxidation capacity based on conventional factors and high-dimensional metabolomics data. Three players have chosen to use support vector machines, LASSO, and random forests, respectively.
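The game's machinery, a strictly proper scoring rule combined with bootstrap cross-validation, can be sketched with two deliberately simple "players"; the strategies and data below are invented for illustration and are not the study's SVM/LASSO/random-forest entries:

```python
import random

def brier(prob, y):
    """Brier score: a strictly proper scoring rule for probability forecasts."""
    return (prob - y) ** 2

def bootstrap_cv(fit, data, rounds=100, seed=0):
    """Bootstrap cross-validation: train on a resample drawn with
    replacement, score on the observations left out of that resample."""
    rng = random.Random(seed)
    n, total, count = len(data), 0.0, 0
    for _ in range(rounds):
        picks = [rng.randrange(n) for _ in range(n)]
        train = [data[i] for i in picks]
        held_out = [data[i] for i in range(n) if i not in set(picks)]
        predict = fit(train)
        for x, y in held_out:
            total += brier(predict(x), y)
            count += 1
    return total / count

# Two hypothetical 'players': predict the training base rate vs. always 0.5.
base_rate = lambda train: (lambda x: sum(y for _, y in train) / len(train))
coin_flip = lambda train: (lambda x: 0.5)
data = [(i, 1 if i % 5 == 0 else 0) for i in range(50)]   # 20% positives
score_a = bootstrap_cv(base_rate, data)   # approx 0.16
score_b = bootstrap_cv(coin_flip, data)   # exactly 0.25
print(score_a < score_b)                  # True: the base-rate player wins
```

Because every strategy is scored on the same held-out observations with the same proper rule, lower mean score is a fair basis for comparison, which is the point of the game.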
High-dimensional quantum key distribution with the entangled single-photon-added coherent state
Energy Technology Data Exchange (ETDEWEB)
Wang, Yang [Zhengzhou Information Science and Technology Institute, Zhengzhou, 450001 (China); Synergetic Innovation Center of Quantum Information and Quantum Physics, University of Science and Technology of China, Hefei, Anhui 230026 (China); Bao, Wan-Su, E-mail: 2010thzz@sina.com [Zhengzhou Information Science and Technology Institute, Zhengzhou, 450001 (China); Synergetic Innovation Center of Quantum Information and Quantum Physics, University of Science and Technology of China, Hefei, Anhui 230026 (China); Bao, Hai-Ze; Zhou, Chun; Jiang, Mu-Sheng; Li, Hong-Wei [Zhengzhou Information Science and Technology Institute, Zhengzhou, 450001 (China); Synergetic Innovation Center of Quantum Information and Quantum Physics, University of Science and Technology of China, Hefei, Anhui 230026 (China)
2017-04-25
High-dimensional quantum key distribution (HD-QKD) can generate more secure bits for one detection event so that it can achieve long distance key distribution with a high secret key capacity. In this Letter, we present a decoy state HD-QKD scheme with the entangled single-photon-added coherent state (ESPACS) source. We present two tight formulas to estimate the single-photon fraction of postselected events and Eve's Holevo information and derive lower bounds on the secret key capacity and the secret key rate of our protocol. We also present finite-key analysis for our protocol by using the Chernoff bound. Our numerical results show that our protocol using one decoy state can perform better than that of previous HD-QKD protocol with the spontaneous parametric down conversion (SPDC) using two decoy states. Moreover, when considering finite resources, the advantage is more obvious. - Highlights: • Implement the single-photon-added coherent state source into the high-dimensional quantum key distribution. • Enhance both the secret key capacity and the secret key rate compared with previous schemes. • Show an excellent performance in view of statistical fluctuations.
A Feature Subset Selection Method Based On High-Dimensional Mutual Information
Directory of Open Access Journals (Sweden)
Chee Keong Kwoh
2011-04-01
Full Text Available Feature selection is an important step in building accurate classifiers and provides a better understanding of the data sets. In this paper, we propose a feature subset selection method based on high-dimensional mutual information. We also propose to use the entropy of the class attribute as a criterion to determine the appropriate subset of features when building classifiers. We prove that if the mutual information between a feature set X and the class attribute Y equals the entropy of Y, then X is a Markov blanket of Y. We show that in some cases it is infeasible to approximate the high-dimensional mutual information with algebraic combinations of pairwise mutual information in any form. In addition, exhaustive searches over all combinations of features are a prerequisite for finding the optimal feature subsets for classifying these kinds of data sets. We show that our approach outperforms existing filter feature subset selection methods for most of the 24 selected benchmark data sets.
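The claim that pairwise mutual information cannot, in general, recover the high-dimensional mutual information is exactly the XOR situation; a small discrete sketch (not the paper's selection algorithm):

```python
from collections import Counter
from math import log2

def entropy(labels):
    n = len(labels)
    return -sum(c / n * log2(c / n) for c in Counter(labels).values())

def mutual_info(features, labels):
    """I(X;Y) = H(Y) - H(Y|X) for a discrete sample; `features` holds
    tuples giving the joint value of the selected feature subset."""
    n = len(labels)
    h_cond = 0.0
    for x, cnt in Counter(features).items():
        ys = [y for f, y in zip(features, labels) if f == x]
        h_cond += cnt / n * entropy(ys)
    return entropy(labels) - h_cond

# Y = XOR of two binary features: each feature alone carries zero MI,
# but the pair determines Y, so the joint MI reaches H(Y) = 1 bit.
X = [(0, 0), (0, 1), (1, 0), (1, 1)]
Y = [a ^ b for a, b in X]
print(mutual_info([(a,) for a, _ in X], Y))   # 0.0
print(mutual_info(X, Y))                      # 1.0
```

Here I(X;Y) = H(Y), so by the paper's theorem the two-feature set is a Markov blanket of Y, even though every pairwise MI with Y is zero.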
Compound Structure-Independent Activity Prediction in High-Dimensional Target Space.
Balfer, Jenny; Hu, Ye; Bajorath, Jürgen
2014-08-01
Profiling of compound libraries against arrays of targets has become an important approach in pharmaceutical research. The prediction of multi-target compound activities also represents an attractive task for machine learning with potential for drug discovery applications. Herein, we have explored activity prediction in high-dimensional target space. Different types of models were derived to predict multi-target activities. The models included naïve Bayesian (NB) and support vector machine (SVM) classifiers based upon compound structure information and NB models derived on the basis of activity profiles, without considering compound structure. Because the latter approach can be applied to incomplete training data and principally depends on the feature independence assumption, SVM modeling was not applicable in this case. Furthermore, iterative hybrid NB models making use of both activity profiles and compound structure information were built. In high-dimensional target space, NB models utilizing activity profile data were found to yield more accurate activity predictions than structure-based NB and SVM models or hybrid models. An in-depth analysis of activity profile-based models revealed the presence of correlation effects across different targets and rationalized prediction accuracy. Taken together, the results indicate that activity profile information can be effectively used to predict the activity of test compounds against novel targets. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Quantum secret sharing based on modulated high-dimensional time-bin entanglement
International Nuclear Information System (INIS)
Takesue, Hiroki; Inoue, Kyo
2006-01-01
We propose a scheme for quantum secret sharing (QSS) that uses a modulated high-dimensional time-bin entanglement. By modulating the relative phase randomly by {0,π}, a sender with the entanglement source can randomly change the sign of the correlation of the measurement outcomes obtained by two distant recipients. The two recipients must cooperate if they are to obtain the sign of the correlation, which is used as a secret key. We show that our scheme is secure against intercept-and-resend (IR) and beam splitting attacks by an outside eavesdropper thanks to the nonorthogonality of high-dimensional time-bin entangled states. We also show that a cheating attempt based on an IR attack by one of the recipients can be detected by changing the dimension of the time-bin entanglement randomly and inserting two 'vacant' slots between the packets. Then, cheating attempts can be detected by monitoring the count rate in the vacant slots. The proposed scheme has better experimental feasibility than previously proposed entanglement-based QSS schemes
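The correlation-sign mechanism can be mimicked with a purely classical toy model in which the sender's phase bit flips the recipients' correlation; all quantum-optical detail (time bins, nonorthogonality, security) is deliberately ignored here:

```python
import random

def qss_round(phase_bit, rng):
    """Toy stand-in for the correlation-sign idea: the sender's phase choice
    (0 or pi, encoded as bit 0 or 1) flips the sign of the correlation
    between the recipients' outcomes, while each outcome alone is a fair coin."""
    a = rng.randrange(2)        # recipient A's measurement outcome
    b = a ^ phase_bit           # correlated (bit 0) or anti-correlated (bit 1)
    return a, b

rng = random.Random(7)
key = [rng.randrange(2) for _ in range(1000)]
rounds = [qss_round(k, rng) for k in key]
recovered = [a ^ b for a, b in rounds]   # requires BOTH recipients' outcomes
print(recovered == key)                  # True
```

Each recipient's outcome string alone is an unbiased coin sequence and reveals nothing; only the XOR of both strings reproduces the sender's key, which is the secret-sharing property the protocol enforces physically.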
Similarity measurement method of high-dimensional data based on normalized net lattice subspace
Institute of Scientific and Technical Information of China (English)
Li Wenfa; Wang Gongming; Li Ke; Huang Su
2017-01-01
The performance of conventional similarity measurement methods is affected seriously by the curse of dimensionality in high-dimensional data: differences along sparse and noisy dimensions account for a large proportion of the similarity, so that any two results appear dissimilar. A similarity measurement method for high-dimensional data based on a normalized net lattice subspace is proposed. The data range of each dimension is divided into several intervals, and the components in different dimensions are mapped onto the corresponding intervals. Only components in the same or adjacent intervals are used to calculate the similarity. To validate this method, three data types are used, and seven common similarity measurement methods are compared. The experimental results indicate that the relative difference of the method increases with the dimensionality and is approximately two or three orders of magnitude higher than that of the conventional methods. In addition, the similarity range of this method in different dimensions is [0, 1], which makes it suitable for similarity analysis after dimensionality reduction.
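A rough sketch of the interval rule, assuming a data range of [0, 1] and 10 intervals per dimension (the published normalization may differ in detail):

```python
def lattice_similarity(a, b, intervals=10, lo=0.0, hi=1.0):
    """Sketch of the net-lattice idea: split each dimension's range into
    equal intervals; a dimension contributes to the similarity only when
    the two components fall in the same or adjacent intervals."""
    width = (hi - lo) / intervals
    cell = lambda v: min(int((v - lo) / width), intervals - 1)
    score = 0.0
    for x, y in zip(a, b):
        if abs(cell(x) - cell(y)) <= 1:            # same or adjacent interval
            score += 1.0 - abs(x - y) / (hi - lo)  # closer -> larger credit
    return score / len(a)                          # normalized to [0, 1]

a = [0.12, 0.50, 0.95]
b = [0.18, 0.52, 0.10]   # last dimension is far apart, so it is ignored
print(round(lattice_similarity(a, b), 2))   # 0.64
```

Dimensions where the two vectors are far apart (typically the sparse, noisy ones) simply drop out of the sum, which is how the method keeps them from dominating the similarity.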
The cross-validated AUC for MCP-logistic regression with high-dimensional data.
Jiang, Dingfeng; Huang, Jian; Zhang, Ying
2013-10-01
We propose a cross-validated area under the receiver operating characteristic (ROC) curve (CV-AUC) criterion for tuning parameter selection for penalized methods in sparse, high-dimensional logistic regression models. We use this criterion in combination with the minimax concave penalty (MCP) method for variable selection. The CV-AUC criterion is specifically designed for optimizing the classification performance for binary outcome data. To implement the proposed approach, we derive an efficient coordinate descent algorithm to compute the MCP-logistic regression solution surface. Simulation studies are conducted to evaluate the finite sample performance of the proposed method and to compare it with existing methods including the Akaike information criterion (AIC), Bayesian information criterion (BIC) and extended BIC (EBIC). The model selected based on the CV-AUC criterion tends to have a larger predictive AUC and smaller classification error than those with tuning parameters selected using the AIC, BIC or EBIC. We illustrate the application of the MCP-logistic regression with the CV-AUC criterion on three microarray datasets from studies that attempt to identify genes related to cancers. Our simulation studies and data examples demonstrate that the CV-AUC is an attractive method for tuning parameter selection for penalized methods in high-dimensional logistic regression models.
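The criterion is built on the empirical AUC, which is just a rank statistic; a minimal sketch of the statistic itself (the MCP solution surface and the fold-averaging loop of the paper are omitted):

```python
import numpy as np

def auc(scores, labels):
    """Empirical AUC: P(score of a random positive > score of a random
    negative), ties counted half (the Mann-Whitney U scaled to [0, 1])."""
    scores, labels = np.asarray(scores, float), np.asarray(labels)
    pos, neg = scores[labels == 1], scores[labels == 0]
    diff = pos[:, None] - neg[None, :]            # all positive-negative pairs
    return ((diff > 0).sum() + 0.5 * (diff == 0).sum()) / diff.size

# A perfect ranking gives 1.0; one swapped pair out of four gives 0.75.
print(auc([0.9, 0.8, 0.3, 0.2], [1, 1, 0, 0]))   # 1.0
print(auc([0.9, 0.3, 0.8, 0.2], [1, 1, 0, 0]))   # 0.75
```

In the CV-AUC criterion this statistic is computed on each held-out fold for every value of the MCP tuning parameter, and the parameter with the largest averaged AUC is selected.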
Zhang, Yu; Wu, Jianxin; Cai, Jianfei
2016-05-01
In large-scale visual recognition and image retrieval tasks, feature vectors, such as Fisher vector (FV) or the vector of locally aggregated descriptors (VLAD), have achieved state-of-the-art results. However, the combination of the large numbers of examples and high-dimensional vectors necessitates dimensionality reduction, in order to reduce its storage and CPU costs to a reasonable range. In spite of the popularity of various feature compression methods, this paper shows that the feature (dimension) selection is a better choice for high-dimensional FV/VLAD than the feature (dimension) compression methods, e.g., product quantization. We show that strong correlation among the feature dimensions in the FV and the VLAD may not exist, which renders feature selection a natural choice. We also show that, many dimensions in FV/VLAD are noise. Throwing them away using feature selection is better than compressing them and useful dimensions altogether using feature compression methods. To choose features, we propose an efficient importance sorting algorithm considering both the supervised and unsupervised cases, for visual recognition and image retrieval, respectively. Combining with the 1-bit quantization, feature selection has achieved both higher accuracy and less computational cost than feature compression methods, such as product quantization, on the FV and the VLAD image representations.
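A select-then-quantize pipeline in the spirit of the paper can be sketched as follows; the variance-based importance proxy and the toy matrix are illustrative stand-ins for the supervised importance sorting and FV/VLAD vectors described in the abstract:

```python
import numpy as np

def select_and_binarize(X, k):
    """Sketch: rank dimensions by variance (an unsupervised importance
    proxy), keep the top k, then 1-bit quantize each kept dimension
    against its mean -- compressing every vector to k bits."""
    importance = X.var(axis=0)
    keep = np.argsort(importance)[::-1][:k]      # top-k informative dimensions
    kept = X[:, keep]
    return (kept > kept.mean(axis=0)).astype(np.uint8), keep

# Toy data: only the middle dimension carries signal; the others are noise.
X = np.array([[0.0,  5.0, 0.1],
              [0.1, -5.0, 0.0],
              [0.0,  5.0, 0.1]])
bits, keep = select_and_binarize(X, k=1)         # keeps dimension 1 only
```

Throwing the low-importance (noise) dimensions away before quantization is exactly the argument the authors make against compressing useful and noisy dimensions together.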
High-dimensional quantum key distribution with the entangled single-photon-added coherent state
International Nuclear Information System (INIS)
Wang, Yang; Bao, Wan-Su; Bao, Hai-Ze; Zhou, Chun; Jiang, Mu-Sheng; Li, Hong-Wei
2017-01-01
High-dimensional quantum key distribution (HD-QKD) can generate more secure bits per detection event, so it can achieve long-distance key distribution with a high secret key capacity. In this Letter, we present a decoy-state HD-QKD scheme with an entangled single-photon-added coherent state (ESPACS) source. We present two tight formulas to estimate the single-photon fraction of postselected events and Eve's Holevo information, and we derive lower bounds on the secret key capacity and the secret key rate of our protocol. We also present a finite-key analysis for our protocol using the Chernoff bound. Our numerical results show that our protocol using one decoy state can outperform a previous HD-QKD protocol based on spontaneous parametric down-conversion (SPDC) that uses two decoy states. Moreover, when considering finite resources, the advantage is more obvious. - Highlights: • Implements the single-photon-added coherent state source in high-dimensional quantum key distribution. • Enhances both the secret key capacity and the secret key rate compared with previous schemes. • Shows an excellent performance in view of statistical fluctuations.
High-Dimensional Single-Photon Quantum Gates: Concepts and Experiments.
Babazadeh, Amin; Erhard, Manuel; Wang, Feiran; Malik, Mehul; Nouroozi, Rahman; Krenn, Mario; Zeilinger, Anton
2017-11-03
Transformations on quantum states form a basic building block of every quantum information system. From photonic polarization to two-level atoms, complete sets of quantum gates for a variety of qubit systems are well known. For multilevel quantum systems beyond qubits, the situation is more challenging. The orbital angular momentum modes of photons comprise one such high-dimensional system for which generation and measurement techniques are well studied. However, arbitrary transformations for such quantum states are not known. Here we experimentally demonstrate a four-dimensional generalization of the Pauli X gate and all of its integer powers on single photons carrying orbital angular momentum. Together with the well-known Z gate, this forms the first complete set of high-dimensional quantum gates implemented experimentally. The concept of the X gate is based on independent access to quantum states with different parities and can thus be generalized to other photonic degrees of freedom and potentially also to other quantum systems.
Zhang, Bo; Chen, Zhen; Albert, Paul S
2012-01-01
High-dimensional biomarker data are often collected in epidemiological studies when assessing the association between biomarkers and human disease is of interest. We develop a latent class modeling approach for joint analysis of high-dimensional semicontinuous biomarker data and a binary disease outcome. To model the relationship between complex biomarker expression patterns and disease risk, we use latent risk classes to link the two modeling components. We characterize complex biomarker-specific differences through biomarker-specific random effects, so that different biomarkers can have different baseline (low-risk) values as well as different between-class differences. The proposed approach also accommodates data features that are common in environmental toxicology and other biomarker exposure data, including a large number of biomarkers, numerous zero values, and complex mean-variance relationships in the biomarker levels. A Monte Carlo EM (MCEM) algorithm is proposed for parameter estimation. Both the MCEM algorithm and model selection procedures are shown to work well in simulations and applications. In applying the proposed approach to an epidemiological study that examined the relationship between environmental polychlorinated biphenyl (PCB) exposure and the risk of endometriosis, we identified a highly significant overall effect of PCB concentrations on the risk of endometriosis.
Tao, Chenyang; Nichols, Thomas E; Hua, Xue; Ching, Christopher R K; Rolls, Edmund T; Thompson, Paul M; Feng, Jianfeng
2017-01-01
We propose a generalized reduced rank latent factor regression model (GRRLF) for the analysis of tensor field responses and high-dimensional covariates. The model is motivated by the need in imaging-genetic studies to identify genetic variants associated with brain imaging phenotypes, often in the form of high-dimensional tensor fields. GRRLF identifies the effective dimensionality of the data from its structure, and then jointly performs dimension reduction of the covariates, dynamic identification of latent factors, and nonparametric estimation of both covariate and latent response fields. After accounting for the latent and covariate effects, GRRLF performs a nonparametric test on the remaining factor of interest. GRRLF provides a better factorization of the signals than common solutions, and is less susceptible to overfitting because it exploits the effective dimensionality. Its generality and flexibility also allow various statistical models to be handled in a unified framework, with solutions computed efficiently. Within the field of neuroimaging, it improves sensitivity to weak signals and is a promising alternative to existing approaches. The operation of the framework is demonstrated with both synthetic datasets and a real-world neuroimaging example in which the effects of a set of genes on the structure of the brain were measured at the voxel level, and the results compared favorably with those from existing approaches. Copyright © 2016. Published by Elsevier Inc.
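The low-rank idea at the core of such models can be illustrated with classical reduced-rank regression, in which an ordinary least-squares fit is projected onto the top singular directions of the fitted values. This is only a minimal sketch of the rank-reduction step, not the full GRRLF estimator with latent factors and nonparametric fields.

```python
import numpy as np

def reduced_rank_regression(X, Y, rank):
    """Classical reduced-rank regression: OLS fit, then projection of the
    coefficient matrix onto the top right-singular directions of the
    fitted values X @ B_ols."""
    B_ols, *_ = np.linalg.lstsq(X, Y, rcond=None)
    _, _, Vt = np.linalg.svd(X @ B_ols, full_matrices=False)
    P = Vt[:rank].T @ Vt[:rank]        # rank-r projector in response space
    return B_ols @ P

# synthetic low-rank ground truth: B = A @ C has rank 2
rng = np.random.default_rng(0)
n, p, q, r = 200, 10, 15, 2
A, C = rng.normal(size=(p, r)), rng.normal(size=(r, q))
X = rng.normal(size=(n, p))
Y = X @ (A @ C) + 0.1 * rng.normal(size=(n, q))
B_hat = reduced_rank_regression(X, Y, rank=2)
```

Constraining the rank is what guards against overfitting when the response is a high-dimensional field: the estimator has roughly r(p + q) effective parameters instead of p·q.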
Challenges and Approaches to Statistical Design and Inference in High Dimensional Investigations
Garrett, Karen A.; Allison, David B.
2015-01-01
Advances in modern technologies have facilitated high-dimensional experiments (HDEs) that generate tremendous amounts of genomic, proteomic, and other “omic” data. HDEs involving whole-genome sequences and polymorphisms, expression levels of genes, protein abundance measurements, and combinations thereof have become a vanguard for new analytic approaches to the analysis of HDE data. Such situations demand creative approaches to the processes of statistical inference, estimation, prediction, classification, and study design. The novel and challenging biological questions asked from HDE data have resulted in many specialized analytic techniques being developed. This chapter discusses some of the unique statistical challenges facing investigators studying high-dimensional biology, and describes some approaches being developed by statistical scientists. We have included some focus on the increasing interest in questions involving testing multiple propositions simultaneously, appropriate inferential indicators for the types of questions biologists are interested in, and the need for replication of results across independent studies, investigators, and settings. A key consideration inherent throughout is the challenge in providing methods that a statistician judges to be sound and a biologist finds informative. PMID:19588106
Challenges and approaches to statistical design and inference in high-dimensional investigations.
Gadbury, Gary L; Garrett, Karen A; Allison, David B
2009-01-01
Advances in modern technologies have facilitated high-dimensional experiments (HDEs) that generate tremendous amounts of genomic, proteomic, and other "omic" data. HDEs involving whole-genome sequences and polymorphisms, expression levels of genes, protein abundance measurements, and combinations thereof have become a vanguard for new analytic approaches to the analysis of HDE data. Such situations demand creative approaches to the processes of statistical inference, estimation, prediction, classification, and study design. The novel and challenging biological questions asked from HDE data have resulted in many specialized analytic techniques being developed. This chapter discusses some of the unique statistical challenges facing investigators studying high-dimensional biology and describes some approaches being developed by statistical scientists. We have included some focus on the increasing interest in questions involving testing multiple propositions simultaneously, appropriate inferential indicators for the types of questions biologists are interested in, and the need for replication of results across independent studies, investigators, and settings. A key consideration inherent throughout is the challenge in providing methods that a statistician judges to be sound and a biologist finds informative.
Tikhonov, Mikhail; Monasson, Remi
2018-01-01
Much of our understanding of ecological and evolutionary mechanisms derives from analysis of low-dimensional models: with few interacting species, or few axes defining "fitness". It is not always clear to what extent the intuition derived from low-dimensional models applies to the complex, high-dimensional reality. For instance, most naturally occurring microbial communities are strikingly diverse, harboring a large number of coexisting species, each of which contributes to shaping the environment of others. Understanding the eco-evolutionary interplay in these systems is an important challenge, and an exciting new domain for statistical physics. Recent work identified a promising new platform for investigating highly diverse ecosystems, based on the classic resource competition model of MacArthur. Here, we describe how the same analytical framework can be used to study evolutionary questions. Our analysis illustrates how, at high dimension, the intuition promoted by a one-dimensional (scalar) notion of fitness can become misleading. Specifically, while the low-dimensional picture emphasizes organism cost or efficiency, we exhibit a regime where cost becomes irrelevant for survival, and link this observation to generic properties of high-dimensional geometry.
A New Ensemble Method with Feature Space Partitioning for High-Dimensional Data Classification
Directory of Open Access Journals (Sweden)
Yongjun Piao
2015-01-01
Ensemble data mining methods, also known as classifier combination, are often used to improve the performance of classification. Various classifier combination methods, such as bagging, boosting, and random forests, have been devised and have received considerable attention in the past. However, data dimensionality is increasing rapidly, and these methods are not suitable for direct application to high-dimensional datasets. In this paper, we propose an ensemble method for the classification of high-dimensional data, with each classifier constructed from a different set of features determined by partitioning the redundant features. In our method, the redundancy of features is used to divide the original feature space. Each generated feature subset is then used to train a support vector machine, and the results of the individual classifiers are combined by majority voting. The efficiency and effectiveness of our method are demonstrated through comparisons with other ensemble techniques, and the results show that it outperforms them.
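A minimal sketch of the partition-train-vote pipeline. Two simplifications are assumed here: features are dealt round-robin in order of average absolute correlation rather than by the paper's redundancy analysis, and a nearest-centroid classifier stands in for the SVM base learners.

```python
import numpy as np

def partition_features(X, n_groups):
    """Deal features round-robin in order of average absolute correlation,
    so redundant dimensions tend to land in different subsets (a
    simplification of the paper's redundancy-based partitioning)."""
    C = np.abs(np.corrcoef(X, rowvar=False))
    order = np.argsort(C.mean(0))[::-1]
    return [order[g::n_groups] for g in range(n_groups)]

class CentroidClassifier:
    """Nearest-class-centroid stand-in for the SVM base learners."""
    def fit(self, X, y):
        self.classes = np.unique(y)
        self.mu = np.array([X[y == c].mean(0) for c in self.classes])
        return self
    def predict(self, X):
        d = ((X[:, None, :] - self.mu[None, :, :]) ** 2).sum(-1)
        return self.classes[d.argmin(1)]

def ensemble_predict(X_train, y_train, X_test, n_groups=5):
    votes = []
    for idx in partition_features(X_train, n_groups):
        clf = CentroidClassifier().fit(X_train[:, idx], y_train)
        votes.append(clf.predict(X_test[:, idx]))
    votes = np.array(votes)
    # majority vote across the per-subset classifiers
    return np.array([np.bincount(col).argmax() for col in votes.T])

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 200))
y = np.tile([0, 1], 60)
X[y == 1, :20] += 1.5                 # 20 informative features
pred = ensemble_predict(X[:100], y[:100], X[100:])
```

Each base learner sees only 200/5 = 40 dimensions, which is the mechanism by which the ensemble sidesteps the high-dimensionality problem faced by a single classifier on all features.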
Vatcharavongvan, Pasitpon; Hepworth, Julie; Lim, Joanne; Marley, John
2014-02-01
This study explored the health needs and the familial and social problems of Thai migrants in a local community in Brisbane, Australia. Five focus groups with Thai migrants were conducted. The qualitative data were examined using thematic content analysis specifically designed for focus group data. Four themes were identified: (1) positive experiences in Australia, (2) physical health problems, (3) mental health problems, and (4) familial and social health problems. This study revealed key health needs related to chronic disease and mental health, major barriers to health service use, such as language skills, and facilitating factors, such as the Thai Temple. We concluded that because the health needs and the familial and social problems of Thai migrants are complex and culture-bound, the development of health and community services for Thai migrants needs to take account of the ways in which Thai culture both negatively affects health and offers positive solutions to problems.
Yu, Wenbao; Park, Taesung
2014-01-01
Motivation: It is common to seek an optimal combination of markers for disease classification and prediction when multiple markers are available. Many approaches based on the area under the receiver operating characteristic curve (AUC) have been proposed. Existing works based on the AUC in a high-dimensional context depend mainly on non-parametric, smooth approximations of the AUC, with no work using a parametric AUC-based approach for high-dimensional data. Results: We propose an AUC-based approach u...
Energy Technology Data Exchange (ETDEWEB)
Zagonel, Aldo A. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Systems Engineering & Analysis; Andersen, David F. [University in Albany, NY (United States). The Rockefeller College of Public Affairs & Policy
2007-03-01
Based upon participant observation in group model building and content analysis of the system dynamics literature, we postulate that modeling efforts have a dual nature. On one hand, the modeling process aims to create a useful representation of a real-world system. This must be done, however, while aligning the clients’ mental models around a shared view of the system. There is significant overlap and confusion between these two goals and how they play out on a practical level. This research clarifies these distinctions by establishing an ideal-type dichotomy. To highlight the differences, we created two straw men: “micro world” characterizes a model that represents reality and “boundary object” represents a socially negotiated model. Using this framework, the literature was examined, revealing evidence for several competing views on problem definition and model conceptualization. The results are summarized in the text of this article, substantiated with strikingly polarized citations, often from the same authors. We also introduce hypotheses for the duality across the remaining phases of the modeling process. Finally, understanding and appreciation of the differences between these ideal types can promote constructive debate on their balance in system dynamics theory and practice.
International Nuclear Information System (INIS)
Hong, Ser Gi; Lee, Deokjung
2015-01-01
A highly accurate S_4 eigenfunction-based nodal method has been developed to solve multi-group discrete ordinates neutral particle transport problems with linearly anisotropic scattering in slab geometry. The new method solves the even-parity form of the discrete ordinates transport equation with an arbitrary S_N-order angular quadrature using two sub-cell balance equations and the S_4 eigenfunctions of the within-group transport equation. The four eigenfunctions from the S_4 approximation are chosen as basis functions for the spatial expansion of the angular flux in each mesh. Constant and cubic polynomial approximations are adopted for the scattering source terms from other energy groups and for the fission source. A nodal method using the conventional polynomial expansion and the sub-cell balances was also developed, to be used for demonstrating the high accuracy of the new method. Using the new method, a multi-group eigenvalue problem has been solved as well as fixed-source problems. The numerical results for a one-group problem show that the new method has third-order accuracy as the mesh is refined and is much more accurate on large meshes than the diamond differencing method and the nodal method using sub-cell balances with a polynomial expansion of the angular flux. For multi-group problems, including an eigenvalue problem, it was demonstrated that the new method with the cubic polynomial approximation of the sources produces very accurate solutions even with large mesh sizes. (author)
High dimensional biological data retrieval optimization with NoSQL technology
2014-01-01
Background High-throughput transcriptomic data generated by microarray experiments is the most abundant and frequently stored kind of data currently used in translational medicine studies. Although microarray data is supported in data warehouses such as tranSMART, queries against relational databases for hundreds of different patient gene expression records are slow. Non-relational data models, such as the key-value model implemented in NoSQL databases, promise to be more performant solutions. Our motivation is to improve the performance of the tranSMART data warehouse with a view to supporting Next Generation Sequencing data. Results In this paper we introduce a new data model better suited to high-dimensional data storage and querying, optimized for database scalability and performance. We have designed a key-value pair data model to support faster queries over large-scale microarray data and implemented the model using HBase, an implementation of Google's BigTable storage system. An experimental performance comparison was carried out against the traditional relational data model implemented in both MySQL Cluster and MongoDB, using a large publicly available transcriptomic dataset taken from NCBI GEO concerning Multiple Myeloma. Our new key-value data model implemented on HBase exhibits an average 5.24-fold increase in high-dimensional biological data query performance compared to the relational model implemented on MySQL Cluster, and an average 6.47-fold increase over MongoDB. Conclusions The performance evaluation found that the new key-value data model, in particular its implementation in HBase, outperforms the relational model currently implemented in tranSMART. We propose that NoSQL technology holds great promise for large-scale data management, in particular for high-dimensional biological data such as that demonstrated in the performance evaluation described in this paper. We aim to use this new data
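The key-value layout can be illustrated with a tiny in-memory sketch (no HBase required): each row key concatenates dataset, patient, and probe identifiers, so a single expression value is one key-value pair and a patient's full profile is one prefix (range) scan over sorted keys. All identifiers below are hypothetical, and the exact row-key schema the authors used may differ.

```python
# In-memory stand-in for a sorted key-value store such as HBase.
store = {}

def put(dataset, patient, probe, value):
    """One gene-expression measurement = one key-value pair."""
    store[f"{dataset}|{patient}|{probe}"] = value

def scan(prefix):
    """Prefix scan over sorted row keys, as an HBase client would issue
    a range scan; a patient's whole profile is one such scan."""
    return {k: v for k, v in sorted(store.items()) if k.startswith(prefix)}

put("GSE0001", "P01", "TP53", 8.1)
put("GSE0001", "P01", "BRCA1", 6.4)
put("GSE0001", "P02", "TP53", 7.7)
profile = scan("GSE0001|P01|")
```

Because all of a patient's values are adjacent in key order, retrieving a profile avoids the multi-table joins that make the equivalent relational query slow.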
Geraci, Joseph; Dharsee, Moyez; Nuin, Paulo; Haslehurst, Alexandria; Koti, Madhuri; Feilotter, Harriet E; Evans, Ken
2014-03-01
We introduce a novel method for visualizing high dimensional data via a discrete dynamical system. This method provides a 2D representation of the relationship between subjects according to a set of variables without geometric projections, transformed axes or principal components. The algorithm exploits a memory-type mechanism inherent in a certain class of discrete dynamical systems collectively referred to as the chaos game that are closely related to iterative function systems. The goal of the algorithm was to create a human readable representation of high dimensional patient data that was capable of detecting unrevealed subclusters of patients from within anticipated classifications. This provides a mechanism to further pursue a more personalized exploration of pathology when used with medical data. For clustering and classification protocols, the dynamical system portion of the algorithm is designed to come after some feature selection filter and before some model evaluation (e.g. clustering accuracy) protocol. In the version given here, a univariate features selection step is performed (in practice more complex feature selection methods are used), a discrete dynamical system is driven by this reduced set of variables (which results in a set of 2D cluster models), these models are evaluated for their accuracy (according to a user-defined binary classification) and finally a visual representation of the top classification models are returned. Thus, in addition to the visualization component, this methodology can be used for both supervised and unsupervised machine learning as the top performing models are returned in the protocol we describe here. Butterfly, the algorithm we introduce and provide working code for, uses a discrete dynamical system to classify high dimensional data and provide a 2D representation of the relationship between subjects. We report results on three datasets (two in the article; one in the appendix) including a public lung cancer
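The dynamical-system idea can be illustrated with the classic four-corner chaos game: quartile-code each feature, then move a point halfway toward the corner selected by each successive feature, so that later features dominate the final position (the memory-type mechanism). This is a simplified sketch of the iterated-function-system embedding, not the Butterfly algorithm itself.

```python
import numpy as np

def chaos_game_embed(X):
    """Map each subject to a 2D point by playing the four-corner chaos
    game over its quartile-coded features. At every step the point moves
    halfway toward the corner chosen by the current feature, so the
    final position is dominated by the most recent features."""
    corners = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
    # quartile-code each feature across subjects: integer codes 0..3
    q = np.argsort(np.argsort(X, axis=0), axis=0) * 4 // len(X)
    pts = np.full((len(X), 2), 0.5)
    for j in range(X.shape[1]):
        pts = (pts + corners[q[:, j]]) / 2.0
    return pts

rng = np.random.default_rng(0)
X = rng.normal(size=(8, 6))
X[:4, -1] -= 10.0   # first four subjects: lowest quartiles on last feature
X[4:, -1] += 10.0   # last four subjects: highest quartiles
pts = chaos_game_embed(X)
```

Subjects that agree on the late features land in the same region of the unit square without any geometric projection or principal components, which is the property the authors exploit for cluster visualization.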
High dimensional biological data retrieval optimization with NoSQL technology.
Wang, Shicai; Pandis, Ioannis; Wu, Chao; He, Sijin; Johnson, David; Emam, Ibrahim; Guitton, Florian; Guo, Yike
2014-01-01
High-throughput transcriptomic data generated by microarray experiments is the most abundant and frequently stored kind of data currently used in translational medicine studies. Although microarray data is supported in data warehouses such as tranSMART, queries against relational databases for hundreds of different patient gene expression records are slow. Non-relational data models, such as the key-value model implemented in NoSQL databases, promise to be more performant solutions. Our motivation is to improve the performance of the tranSMART data warehouse with a view to supporting Next Generation Sequencing data. In this paper we introduce a new data model better suited to high-dimensional data storage and querying, optimized for database scalability and performance. We have designed a key-value pair data model to support faster queries over large-scale microarray data and implemented the model using HBase, an implementation of Google's BigTable storage system. An experimental performance comparison was carried out against the traditional relational data model implemented in both MySQL Cluster and MongoDB, using a large publicly available transcriptomic dataset taken from NCBI GEO concerning Multiple Myeloma. Our new key-value data model implemented on HBase exhibits an average 5.24-fold increase in high-dimensional biological data query performance compared to the relational model implemented on MySQL Cluster, and an average 6.47-fold increase over MongoDB. The performance evaluation found that the new key-value data model, in particular its implementation in HBase, outperforms the relational model currently implemented in tranSMART. We propose that NoSQL technology holds great promise for large-scale data management, in particular for high-dimensional biological data such as that demonstrated in the performance evaluation described in this paper. We aim to use this new data model as a basis for migrating
Penalized estimation for competing risks regression with applications to high-dimensional covariates
DEFF Research Database (Denmark)
Ambrogi, Federico; Scheike, Thomas H.
2016-01-01
... Research 19: (1), 29-51), the research regarding competing risks is less developed (Binder and others, 2009. Boosting for high-dimensional time-to-event data with competing risks. Bioinformatics 25: (7), 890-896). The aim of this work is to consider how to do penalized regression in the presence of competing events. The direct binomial regression model of Scheike and others (2008. Predicting cumulative incidence probability by direct binomial regression. Biometrika 95: (1), 205-220) is reformulated in a penalized framework to possibly fit a sparse regression model. The developed approach is easily implementable using existing high-performance software to do penalized regression. Results from simulation studies are presented together with an application to genomic data when the endpoint is progression-free survival. An R function is provided to perform regularized competing risks regression according ...
Saini, Harsh; Lal, Sunil Pranit; Naidu, Vimal Vikash; Pickering, Vincel Wince; Singh, Gurmeet; Tsunoda, Tatsuhiko; Sharma, Alok
2016-12-05
A high-dimensional feature space generally degrades classification performance in many applications. In this paper, we propose a strategy called gene masking, in which non-contributing dimensions are heuristically removed from the data to improve classification accuracy. Gene masking is implemented via a binary-encoded genetic algorithm that can be integrated seamlessly with classifiers during the training phase of classification to perform feature selection. It can also be used to discriminate between features that contribute most to the classification, thereby allowing researchers to isolate features that may have special significance. The technique was applied to publicly available datasets, where it substantially reduced the number of features used for classification while maintaining high accuracy. The proposed technique can be extremely useful for feature selection, as it heuristically removes non-contributing features to improve the performance of classifiers.
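A minimal sketch of gene masking as a binary-encoded genetic algorithm. A trivial nearest-centroid classifier and its training accuracy stand in for the wrapped classifiers and their evaluation, and the population sizes and rates are illustrative, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(0)

def fitness(mask, X, y):
    """Score a binary gene mask by the training accuracy of a simple
    centroid classifier on the unmasked features (stand-in for the
    classifiers the GA is wrapped around)."""
    if mask.sum() == 0:
        return 0.0
    Xm = X[:, mask.astype(bool)]
    mu0, mu1 = Xm[y == 0].mean(0), Xm[y == 1].mean(0)
    d0 = ((Xm - mu0) ** 2).sum(1)
    d1 = ((Xm - mu1) ** 2).sum(1)
    return ((d1 < d0).astype(int) == y).mean()

def gene_masking(X, y, pop=20, gens=30, p_mut=0.05):
    """Binary-encoded GA: selection of the top half, one-point
    crossover, bit-flip mutation."""
    n_feat = X.shape[1]
    P = (rng.random((pop, n_feat)) < 0.5).astype(int)
    for _ in range(gens):
        f = np.array([fitness(m, X, y) for m in P])
        top = P[np.argsort(f)[::-1][: pop // 2]]            # selection
        kids = top[rng.integers(0, len(top), pop - len(top))].copy()
        cut = rng.integers(1, n_feat)                       # crossover point
        kids[:, cut:] = top[rng.integers(0, len(top), len(kids)), cut:]
        kids ^= (rng.random(kids.shape) < p_mut)            # mutation
        P = np.vstack([top, kids])
    f = np.array([fitness(m, X, y) for m in P])
    return P[f.argmax()]

X = rng.normal(size=(80, 30))
y = np.repeat([0, 1], 40)
X[y == 1, :3] += 2.0       # 3 informative genes among 30
best = gene_masking(X, y)
```

The returned mask doubles as an interpretation tool: the genes left unmasked are exactly those the wrapped classifier found most useful.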
Energy Technology Data Exchange (ETDEWEB)
Tahira, Rabia; Ikram, Manzoor; Zubairy, M Suhail [Centre for Quantum Physics, COMSATS Institute of Information Technology, Islamabad (Pakistan); Bougouffa, Smail [Department of Physics, Faculty of Science, Taibah University, PO Box 30002, Madinah (Saudi Arabia)
2010-02-14
We investigate the phenomenon of sudden death of entanglement in a high-dimensional bipartite system subjected to dissipative environments with an arbitrary initial pure entangled state between two fields in the cavities. We find that in a vacuum reservoir, the presence of the state where one or more than one (two) photons in each cavity are present is a necessary condition for the sudden death of entanglement. Otherwise entanglement remains for infinite time and decays asymptotically with the decay of individual qubits. For pure two-qubit entangled states in a thermal environment, we observe that sudden death of entanglement always occurs. The sudden death time of the entangled states is related to the number of photons in the cavities, the temperature of the reservoir and the initial preparation of the entangled states.
International Nuclear Information System (INIS)
Tahira, Rabia; Ikram, Manzoor; Zubairy, M Suhail; Bougouffa, Smail
2010-01-01
We investigate the phenomenon of sudden death of entanglement in a high-dimensional bipartite system subjected to dissipative environments with an arbitrary initial pure entangled state between two fields in the cavities. We find that in a vacuum reservoir, the presence of the state where one or more than one (two) photons in each cavity are present is a necessary condition for the sudden death of entanglement. Otherwise entanglement remains for infinite time and decays asymptotically with the decay of individual qubits. For pure two-qubit entangled states in a thermal environment, we observe that sudden death of entanglement always occurs. The sudden death time of the entangled states is related to the number of photons in the cavities, the temperature of the reservoir and the initial preparation of the entangled states.
Time–energy high-dimensional one-side device-independent quantum key distribution
International Nuclear Information System (INIS)
Bao Hai-Ze; Bao Wan-Su; Wang Yang; Chen Rui-Ke; Ma Hong-Xin; Zhou Chun; Li Hong-Wei
2017-01-01
Compared with full device-independent quantum key distribution (DI-QKD), one-side device-independent QKD (1sDI-QKD) needs fewer requirements, which are much easier to meet. In this paper, by applying recently developed time–energy entropic uncertainty relations, we present a time–energy high-dimensional one-side device-independent quantum key distribution (HD-QKD) protocol and provide a security proof against coherent attacks. Besides, we connect the security with quantum steering. By numerical simulation, we obtain the secret key rate for different values of Alice's detection efficiency. The results show that our protocol can perform much better than the original 1sDI-QKD. Furthermore, we clarify the relation among the secret key rate, Alice's detection efficiency, and the dispersion coefficient. Finally, we briefly analyze its performance in the optical fiber channel. (paper)
Inference for feature selection using the Lasso with high-dimensional data
DEFF Research Database (Denmark)
Brink-Jensen, Kasper; Ekstrøm, Claus Thorn
2014-01-01
Penalized regression models such as the Lasso have proved useful for variable selection in many fields - especially for situations with high-dimensional data where the number of predictors far exceeds the number of observations. These methods identify and rank variables of importance but do not generally provide any inference on the selected variables. Thus, the variables selected might be the "most important" but need not be significant. We propose a significance test for the selection found by the Lasso. We introduce a procedure that computes inference and p-values for features chosen by the Lasso. This method rephrases the null hypothesis and uses a randomization approach which ensures that the error rate is controlled even for small samples. We demonstrate the ability of the algorithm to compute p-values of the expected magnitude with simulated data using a multitude of scenarios...
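The randomization idea can be sketched as follows: fit the Lasso on the observed response, then compare a feature's coefficient magnitude against its null distribution under permuted responses. This is only in the spirit of the paper's procedure (which rephrases the null hypothesis), not its exact algorithm; the Lasso solver below is a plain coordinate-descent implementation.

```python
import numpy as np

def lasso_cd(X, y, lam, sweeps=50):
    """Plain coordinate-descent Lasso with an incrementally updated
    residual (objective: 0.5*||y - Xw||^2 + lam*||w||_1)."""
    n, p = X.shape
    w = np.zeros(p)
    r = y.astype(float).copy()            # residual y - X @ w
    col_ss = (X ** 2).sum(0)
    for _ in range(sweeps):
        for j in range(p):
            rho = X[:, j] @ r + col_ss[j] * w[j]
            w_new = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_ss[j]
            r += X[:, j] * (w[j] - w_new)
            w[j] = w_new
    return w

def permutation_pvalue(X, y, j, lam, n_perm=50, seed=0):
    """Randomization p-value for feature j: how often does a Lasso fit on
    a permuted response give a coefficient at least as large in magnitude?"""
    rng = np.random.default_rng(seed)
    obs = abs(lasso_cd(X, y, lam)[j])
    null = [abs(lasso_cd(X, rng.permutation(y), lam)[j]) for _ in range(n_perm)]
    return (1 + sum(v >= obs for v in null)) / (n_perm + 1)

rng = np.random.default_rng(1)
X = rng.normal(size=(60, 20))
y = 2.0 * X[:, 0] + rng.normal(size=60)   # only feature 0 carries signal
p_signal = permutation_pvalue(X, y, 0, lam=10.0)
p_noise = permutation_pvalue(X, y, 5, lam=10.0)
```

Permuting the response breaks any feature-response association while preserving the correlation structure among the predictors, so the error rate is controlled even at small sample sizes.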
Diagonal Likelihood Ratio Test for Equality of Mean Vectors in High-Dimensional Data
Hu, Zongliang; Tong, Tiejun; Genton, Marc G.
2017-01-01
We propose a likelihood ratio test framework for testing normal mean vectors in high-dimensional data under two common scenarios: the one-sample test and the two-sample test with equal covariance matrices. We derive the test statistics under the assumption that the covariance matrices follow a diagonal matrix structure. In comparison with the diagonal Hotelling's tests, our proposed test statistics display some interesting characteristics. In particular, they are a summation of the log-transformed squared t-statistics rather than a direct summation of those components. More importantly, to derive the asymptotic normality of our test statistics under the null and local alternative hypotheses, we do not require the assumption that the covariance matrix follows a diagonal matrix structure. As a consequence, our proposed test methods are very flexible and can be widely applied in practice. Finally, simulation studies and a real data analysis are also conducted to demonstrate the advantages of our likelihood ratio test method.
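For the one-sample case the decomposition is easy to illustrate: under a diagonal covariance, the -2 log likelihood ratio for a normal mean splits into a sum over coordinates of n·log(1 + t_j²/(n-1)), i.e., a summation of log-transformed squared t-statistics. The sketch below shows only this statistic, not the paper's asymptotic calibration.

```python
import numpy as np

def diag_lrt_onesample(X, mu0=0.0):
    """One-sample diagonal LRT statistic: sum over coordinates of
    n * log(1 + t_j^2 / (n - 1)), where t_j is the usual one-sample
    t-statistic for coordinate j."""
    n = X.shape[0]
    t = (X.mean(0) - mu0) / (X.std(0, ddof=1) / np.sqrt(n))
    return float((n * np.log1p(t ** 2 / (n - 1))).sum()), t

rng = np.random.default_rng(0)
X_null = rng.normal(size=(50, 200))   # H0 true: all 200 means are zero
X_alt = X_null + 0.5                  # H1: every mean shifted by 0.5
stat0, _ = diag_lrt_onesample(X_null)
stat1, _ = diag_lrt_onesample(X_alt)
```

The log transform dampens each coordinate's contribution relative to a direct sum of squared t-statistics, which is the qualitative difference from the diagonal Hotelling-type tests noted in the abstract.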
Wang, Zhiping; Chen, Jinyu; Yu, Benli
2017-02-20
We investigate the two-dimensional (2D) and three-dimensional (3D) atom localization behaviors via spontaneously generated coherence in a microwave-driven four-level atomic system. Owing to the space-dependent atom-field interaction, it is found that the detection probability and precision of 2D and 3D atom localization can be significantly improved by adjusting the system parameters: the phase, the amplitude, and the initial population distribution. Interestingly, the atom can be localized in volumes that are substantially smaller than a cubic optical wavelength. Our scheme opens a promising way to achieve high-precision and high-efficiency atom localization, which has potential applications in high-dimensional atom nanolithography.
International Nuclear Information System (INIS)
Brooks, B.R.
1979-09-01
The Graphical Unitary Group Approach (GUGA) was cast into an extraordinarily powerful form by restructuring the Hamiltonian in terms of loop types. This restructuring allows the adoption of the loop-driven formulation which illuminates vast numbers of previously unappreciated relationships between otherwise distinct Hamiltonian matrix elements. The theoretical/methodological contributions made here include the development of the loop-driven formula generation algorithm, a solution of the upper walk problem used to develop a loop breakdown algorithm, the restriction of configuration space employed to the multireference interacting space, and the restructuring of the Hamiltonian in terms of loop types. Several other developments are presented and discussed. Among these developments are the use of new segment coefficients, improvements in the loop-driven algorithm, implicit generation of loops wholly within the external space adapted within the framework of the loop-driven methodology, and comparisons of the diagonalization tape method to the direct method. It is also shown how it is possible to implement the GUGA method without the time-consuming full (m^5) four-index transformation. A particularly promising new direction presented here involves the use of the GUGA methodology to obtain one-electron and two-electron density matrices. Once these are known, analytical gradients (first derivatives) of the CI potential energy are easily obtained. Several test calculations are examined in detail to illustrate the unique features of the method. Also included is a calculation on the asymmetric 2^1A' state of SO2 with 23,613 configurations to demonstrate methods for the diagonalization of very large matrices on a minicomputer. 6 figures, 6 tables
International Nuclear Information System (INIS)
Turinsky, P.J.; Al-Chalabi, R.M.K.; Engrand, P.; Sarsour, H.N.; Faure, F.X.; Guo, W.
1994-06-01
NESTLE is a FORTRAN77 code that solves the few-group neutron diffusion equation utilizing the Nodal Expansion Method (NEM). NESTLE can solve the eigenvalue (criticality), eigenvalue adjoint, external fixed-source steady-state, or external fixed-source or eigenvalue initiated transient problems. The code name NESTLE originates from the multi-problem solution capability, abbreviating Nodal Eigenvalue, Steady-state, Transient, Le core Evaluator. The eigenvalue problem allows criticality searches to be completed, and the external fixed-source steady-state problem can search to achieve a specified power level. Transient problems model delayed neutrons via precursor groups. Several core properties can be input as time dependent. Two or four energy groups can be utilized, with all energy groups being thermal groups (i.e. upscatter exists) if desired. Core geometries modelled include Cartesian and Hexagonal. Three-, two- and one-dimensional models can be utilized with various symmetries. The non-linear iterative strategy associated with the NEM method is employed. An advantage of the non-linear iterative strategy is that NESTLE can be utilized to solve either the nodal or Finite Difference Method representation of the few-group neutron diffusion equation.
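The eigenvalue (criticality) problem class that NESTLE addresses can be illustrated with a minimal sketch, assuming a one-group, one-dimensional slab reactor solved by finite differences and power iteration rather than NESTLE's nodal method; all cross-section values below are illustrative, not from any benchmark.

```python
import numpy as np

def keff_power_iteration(n=50, L=100.0, D=1.0, sig_a=0.07, nu_sig_f=0.075,
                         tol=1e-8, max_iter=500):
    """Solve -D phi'' + Sigma_a phi = (1/k) nu*Sigma_f phi on a slab of
    width L with zero-flux boundaries, via finite differences and power
    iteration on the fission source. Illustrative sketch, not NESTLE."""
    h = L / (n + 1)
    # Tridiagonal loss operator: leakage + absorption
    A = np.zeros((n, n))
    for i in range(n):
        A[i, i] = 2.0 * D / h**2 + sig_a
        if i > 0:
            A[i, i - 1] = -D / h**2
        if i < n - 1:
            A[i, i + 1] = -D / h**2
    phi = np.ones(n)
    k = 1.0
    for _ in range(max_iter):
        src = nu_sig_f * phi                    # fission source from old flux
        phi_new = np.linalg.solve(A, src / k)   # one inner diffusion solve
        k_new = k * phi_new.sum() / phi.sum()   # fission-source ratio update
        if abs(k_new - k) < tol:
            k, phi = k_new, phi_new
            break
        k, phi = k_new, phi_new / np.linalg.norm(phi_new)
    return k, phi
```

For these data the converged k agrees closely with the one-group analytic value nu_sig_f / (sig_a + D·(pi/L)²).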
Ruffolo, Mary C; Kuhn, Mary T; Evans, Mary E
2006-01-01
Building on the respective strengths of parent-led and professional-led groups, a parent-professional team leadership model for group interventions was developed and evaluated for families of youths with emotional and behavioral problems. The model was developed based on feedback from 26 parents in focus group sessions and recommendations from mental health professionals in staff meetings. Evaluations of an implementation of the model in a support, empowerment, and education group intervention (S.E.E. group) have demonstrated the usefulness of this approach in work with families of children with behavioral and emotional problems. This article discusses the challenges of instituting the model in an S.E.E. group. It explores how parents and professionals build the team leadership model and the strengths of this approach in working with parents of youths with serious emotional disturbances.
Croonen, E.A.; Harmsen, M.; Burgt, I. van der; Draaisma, J.M.T.; Noordam, C.; Essink, M.; Nijhuis-Van der Sanden, M.W.G.
2016-01-01
Studies from a patient perspective on motor performance problems in Noonan syndrome in daily life are lacking. The aims of this study were to provide insight into the motor performance problems that people with Noonan syndrome and/or their relatives experienced, the major consequences they suffered,
Lamb, S E; Pepper, J; Lall, R; Jørstad-Stein, E C; Clark, M D; Hill, L; Fereday-Smith, J
2009-09-14
The aim was to compare effectiveness of group versus individual sessions of physiotherapy in terms of symptoms, quality of life, and costs, and to investigate the effect of patient preference on uptake and outcome of treatment. A pragmatic, multi-centre randomised controlled trial in five British National Health Service physiotherapy departments. 174 women with stress and/or urge incontinence were randomised to receive treatment from a physiotherapist delivered in a group or individual setting over three weekly sessions. Outcomes were measured as Symptom Severity Index; Incontinence-related Quality of Life questionnaire; National Health Service costs, and out of pocket expenses. The majority of women expressed no preference (55%) or preference for individual treatment (36%). Treatment attendance was good, with similar attendance with both service delivery models. Overall, there were no statistically significant differences in symptom severity or quality of life outcomes between the models. Over 85% of women reported a subjective benefit of treatment, with a slightly higher rating in the individual compared with the group setting. When all health care costs were considered, average cost per patient was lower for group sessions (mean cost difference £52.91, 95% confidence interval £25.82 to £80.00). Indications are that whilst some women may have an initial preference for individual treatment, there are no substantial differences in the symptom, quality of life outcomes or non-attendance. Because of the significant difference in mean cost, group treatment is recommended. ISRCTN 16772662.
Macgowan, Mark J.; Wagner, Eric F.
2005-01-01
Group therapy is the most popular approach in the treatment of adolescent substance use problems. Recently, concerns have mounted about possible iatrogenic effects of group therapy based on studies on adolescents with conduct disorder. This paper reviews three possible contributors to response to group treatment among adolescents, and proposes a model of the relations among these variables, specifically in regard to how they independently and interactively contribute to outcomes among youth w...
Directory of Open Access Journals (Sweden)
Clark MD
2009-09-01
Full Text Available Abstract Background The aim was to compare effectiveness of group versus individual sessions of physiotherapy in terms of symptoms, quality of life, and costs, and to investigate the effect of patient preference on uptake and outcome of treatment. Methods A pragmatic, multi-centre randomised controlled trial in five British National Health Service physiotherapy departments. 174 women with stress and/or urge incontinence were randomised to receive treatment from a physiotherapist delivered in a group or individual setting over three weekly sessions. Outcomes were measured as Symptom Severity Index; Incontinence-related Quality of Life questionnaire; National Health Service costs, and out of pocket expenses. Results The majority of women expressed no preference (55%) or preference for individual treatment (36%). Treatment attendance was good, with similar attendance with both service delivery models. Overall, there were no statistically significant differences in symptom severity or quality of life outcomes between the models. Over 85% of women reported a subjective benefit of treatment, with a slightly higher rating in the individual compared with the group setting. When all health care costs were considered, average cost per patient was lower for group sessions (mean cost difference £52.91, 95% confidence interval £25.82 to £80.00). Conclusion Indications are that whilst some women may have an initial preference for individual treatment, there are no substantial differences in the symptom, quality of life outcomes or non-attendance. Because of the significant difference in mean cost, group treatment is recommended. Trial Registration Trial Registration number: ISRCTN 16772662
A new test for the mean vector in high-dimensional data
Directory of Open Access Journals (Sweden)
Knavoot Jiamwattanapong
2015-08-01
Full Text Available For the testing of the mean vector where the data are drawn from a multivariate normal population, the renowned Hotelling's T² test is no longer valid when the dimension of the data equals or exceeds the sample size. In this study, we consider the problem of testing the hypothesis H₀: μ = μ₀ and propose a new test based on the idea of keeping more information from the sample covariance matrix. The development of the statistic is based on Hotelling's T² distribution, and the new test has an invariance property under a group of scalar transformations. The asymptotic distribution is derived under the null hypothesis. The simulation results show that the proposed test performs well and is more powerful when the data dimension increases for a given sample size. An analysis of DNA microarray data with the new test is demonstrated.
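The failure of Hotelling's T² when the dimension reaches the sample size can be seen directly: the sample covariance matrix then has rank at most n − 1 and is singular, so the inverse the statistic requires does not exist. A small numpy illustration on synthetic data (not from the study):

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 20, 30                        # dimension exceeds sample size
X = rng.standard_normal((n, p))
S = np.cov(X, rowvar=False)          # p x p sample covariance matrix
rank = np.linalg.matrix_rank(S)      # at most n - 1 < p, so S is singular
# Hotelling's T^2 = n * (xbar - mu0)' S^{-1} (xbar - mu0) needs S^{-1},
# which does not exist here -- the motivation for the proposed test.
```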
Filippi, Anthony Matthew
For complex systems, sufficient a priori knowledge is often lacking about the mathematical or empirical relationship between cause and effect or between inputs and outputs of a given system. Automated machine learning may offer a useful solution in such cases. Coastal marine optical environments represent such a case, as the optical remote sensing inverse problem remains largely unsolved. A self-organizing, cybernetic mathematical modeling approach known as the group method of data handling (GMDH), a type of statistical learning network (SLN), was used to generate explicit spectral inversion models for optically shallow coastal waters. Optically shallow water light fields represent a particularly difficult challenge in oceanographic remote sensing. Several algorithm-input data treatment combinations were utilized in multiple experiments to automatically generate inverse solutions for various inherent optical property (IOP), bottom optical property (BOP), constituent concentration, and bottom depth estimations. The objective was to identify the optimal remote-sensing reflectance Rrs(lambda) inversion algorithm. The GMDH also has the potential of inductive discovery of physical hydro-optical laws. Simulated data were used to develop generalized, quasi-universal relationships. The Hydrolight numerical forward model, based on radiative transfer theory, was used to compute simulated above-water remote-sensing reflectance Rrs(lambda) pseudodata, matching the spectral channels and resolution of the experimental Naval Research Laboratory Ocean PHILLS (Portable Hyperspectral Imager for Low-Light Spectroscopy) sensor. The input-output pairs were used for GMDH and artificial neural network (ANN) model development, the latter of which was used as a baseline, or control, algorithm. Both types of models were applied to in situ and aircraft data. Also, in situ spectroradiometer-derived Rrs(lambda) were used as input to an optimization-based inversion procedure. Target variables
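The GMDH idea of self-organizing model selection can be sketched as a single layer, assuming the classic Ivakhnenko quadratic polynomial as the partial description; this is an illustrative reconstruction, not the dissertation's implementation:

```python
import numpy as np
from itertools import combinations

def gmdh_layer(X_tr, y_tr, X_va, y_va, keep=3):
    """One GMDH layer: for every pair of inputs fit the quadratic
    y ~ a0 + a1*xi + a2*xj + a3*xi*xj + a4*xi^2 + a5*xj^2 by least
    squares on the training split, rank candidates by validation error,
    and keep the best ones as inputs to the next layer."""
    candidates = []
    for i, j in combinations(range(X_tr.shape[1]), 2):
        def design(X):
            xi, xj = X[:, i], X[:, j]
            return np.column_stack([np.ones_like(xi), xi, xj,
                                    xi * xj, xi**2, xj**2])
        coef, *_ = np.linalg.lstsq(design(X_tr), y_tr, rcond=None)
        err = np.mean((design(X_va) @ coef - y_va) ** 2)   # validation MSE
        candidates.append((err, i, j, coef))
    candidates.sort(key=lambda c: c[0])
    return candidates[:keep]
```

Stacking such layers, each fed by the survivors of the previous one, gives the self-organizing network structure described in the abstract.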
Lamb, S. E. (Sallie E.); Pepper, Jo; Lall, Ranjit; Jørstad-Stein , Ellen C.; Clark, M. D. (Michael D.); Hill, Lesley; Fereday Smith, Jan
2009-01-01
Abstract Background The aim was to compare effectiveness of group versus individual sessions of physiotherapy in terms of symptoms, quality of life, and costs, and to investigate the effect of patient preference on uptake and outcome of treatment. Methods A pragmatic, multi-centre randomised controlled trial in five British National Health Service physiotherapy departments. 174 women with stress and/or urge incontinence were randomised to receive treatment from a physiotherapist delivered in ...
DEFF Research Database (Denmark)
Pham, Ninh Dang; Pagh, Rasmus
2012-01-01
Outlier mining in d-dimensional point sets is a fundamental and well studied data mining task due to its variety of applications. Most such applications arise in high-dimensional domains. A bottleneck of existing approaches is that implicit or explicit assessments on concepts of distance or nearest neighbor are deteriorated in high-dimensional data. Following up on the work of Kriegel et al. (KDD '08), we investigate the use of angle-based outlier factor in mining high-dimensional outliers. While their algorithm runs in cubic time (with a quadratic time heuristic), we propose a novel random projection-based technique that is able to estimate the angle-based outlier factor for all data points in time near-linear in the size of the data. Also, our approach is suitable to be performed in parallel environment to achieve a parallel speedup. We introduce a theoretical analysis of the quality...
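For reference, a naive implementation of the angle-based outlier factor of Kriegel et al. that the proposed random-projection technique approximates; this cubic-time version is the baseline, not the paper's near-linear algorithm (a low ABOF marks an outlier):

```python
import numpy as np
from itertools import combinations

def abof(points, p_idx):
    """Angle-based outlier factor of point p_idx: the variance, over all
    pairs (a, b) of other points, of the dot product of p->a and p->b
    weighted by the inverse product of their squared lengths. Outliers,
    seeing all other points in roughly one direction and far away, get
    a small variance; inliers get a large one."""
    p = points[p_idx]
    vals = []
    others = [x for k, x in enumerate(points) if k != p_idx]
    for a, b in combinations(others, 2):
        pa, pb = a - p, b - p
        na2, nb2 = pa @ pa, pb @ pb
        if na2 == 0 or nb2 == 0:
            continue                      # skip duplicate points
        vals.append((pa @ pb) / (na2 * nb2))
    return np.var(vals)
```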
Elastic SCAD as a novel penalization method for SVM classification tasks in high-dimensional data.
Becker, Natalia; Toedt, Grischa; Lichter, Peter; Benner, Axel
2011-05-09
Classification and variable selection play an important role in knowledge discovery in high-dimensional data. Although Support Vector Machine (SVM) algorithms are among the most powerful classification and prediction methods with a wide range of scientific applications, the SVM does not include automatic feature selection and therefore a number of feature selection procedures have been developed. Regularisation approaches extend SVM to a feature selection method in a flexible way using penalty functions like LASSO, SCAD and Elastic Net. We propose a novel penalty function for SVM classification tasks, Elastic SCAD, a combination of SCAD and ridge penalties which overcomes the limitations of each penalty alone. Since SVM models are extremely sensitive to the choice of tuning parameters, we adopted an interval search algorithm, which, in comparison to a fixed grid search, finds a global optimal solution more rapidly and precisely. Feature selection methods with combined penalties (Elastic Net and Elastic SCAD SVMs) are more robust to a change of the model complexity than methods using single penalties. Our simulation study showed that Elastic SCAD SVM outperformed LASSO (L1) and SCAD SVMs. Moreover, Elastic SCAD SVM provided sparser classifiers in terms of median number of features selected than Elastic Net SVM and often predicted better than Elastic Net in terms of misclassification error. Finally, we applied the penalization methods described above on four publicly available breast cancer data sets. Elastic SCAD SVM was the only method providing robust classifiers in sparse and non-sparse situations. The proposed Elastic SCAD SVM algorithm provides the advantages of the SCAD penalty and at the same time avoids sparsity limitations for non-sparse data. We were the first to demonstrate that the integration of the interval search algorithm and penalized SVM classification techniques provides fast solutions on the optimization of tuning parameters. The penalized SVM
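The SCAD penalty that Elastic SCAD combines with a ridge term can be written down directly (Fan and Li's piecewise form with the conventional a = 3.7); a minimal sketch in which the Elastic SCAD penalty on a coefficient w would be `scad(w, lam1) + lam2 * w**2`, with parameter names chosen here for illustration:

```python
import numpy as np

def scad(w, lam, a=3.7):
    """SCAD penalty: linear (LASSO-like) near zero, quadratic blending in
    the middle, constant beyond a*lam so large coefficients are not
    over-shrunk. Works elementwise on arrays or scalars."""
    w = np.abs(w)
    small = w <= lam
    mid = (w > lam) & (w <= a * lam)
    return np.where(small, lam * w,
           np.where(mid, -(w**2 - 2 * a * lam * w + lam**2) / (2 * (a - 1)),
                    (a + 1) * lam**2 / 2))
```

The three pieces join continuously at |w| = lam and |w| = a·lam, which is what makes the penalty well behaved for optimization.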
Bonini, Nicolao; Grecucci, Alessandro; Nicolè, Manuel; Savadori, Lucia
2018-06-01
A group of pathological gamblers and a group of problem gamblers (i.e., gamblers at risk of becoming pathological) were compared to healthy controls on their risk-taking propensity after prior losses. Each participant played both the Balloon Analogue Risk Taking task (BART) and a modified version of the same task, where individuals face five repeated predetermined early losses at the onset of the game. No significant difference in risk-taking was found between groups on the standard BART task, while significant differences emerged when comparing behaviors in the two tasks: both pathological gamblers and controls reduced their risk-taking tendency after prior losses in the modified BART compared to the standard BART, whereas problem gamblers showed no reduction in risk-taking after prior losses. We interpret these results as a sign of a reduced sensitivity to negative feedback in problem gamblers, which might help to explain their loss-chasing tendency.
Croonen, Ellen A; Harmsen, Mirjam; Van der Burgt, Ineke; Draaisma, Jos M; Noordam, Kees; Essink, Marlou; Nijhuis-van der Sanden, Maria W G
2016-09-01
Studies from a patient perspective on motor performance problems in Noonan syndrome in daily life are lacking. The aims of this study were to provide insight into the motor performance problems that people with Noonan syndrome and/or their relatives experienced, the major consequences they suffered, the benefits of interventions they experienced, and the experiences with healthcare professionals they mentioned. We interviewed 10 adults with Noonan syndrome (two were joined by their parent), and 23 mothers (five of whom had Noonan syndrome), nine fathers (one of whom had Noonan syndrome) and one cousin who reported on 28 children with Noonan syndrome. People with Noonan syndrome reported particular problems related to pain, decreased muscle strength, fatigue, and clumsiness, which had an evident impact on functioning in daily life. Most participants believed that problems with motor performance improved with exercise, appropriate physiotherapy guidance, and other supportive interventions. Nevertheless, people with Noonan syndrome and/or their relatives did not feel heard and supported and experienced no understanding of their problems by healthcare professionals. This was the first study from a patient perspective that described the motor performance problems in people with Noonan syndrome, the major consequences in daily life, the positive experiences of interventions and the miscommunication with healthcare professionals. To achieve optimal support, healthcare professionals, as well as people with Noonan syndrome and/or their relatives themselves, should be aware of these frequently presented problems with motor performance. Research on these different aspects is needed to better understand and support people with Noonan syndrome. © 2016 Wiley Periodicals, Inc.
Bonete, Saray; Calero, María Dolores; Fernández-Parra, Antonio
2015-05-01
Adults with Asperger syndrome show persistent difficulties in social situations which psychosocial treatments may address. Despite the multiple studies focusing on social skills interventions, only some have focused specifically on problem-solving skills and have not targeted workplace adaptation training in the adult population. This study describes preliminary data from a group format manual-based intervention, the Interpersonal Problem-Solving for Workplace Adaptation Programme, aimed at improving the cognitive and metacognitive process of social problem-solving skills focusing on typical social situations in the workplace based on mediation as the main strategy. A total of 50 adults with Asperger syndrome received the programme and were compared with a typically developing control group. The feasibility and effectiveness of the treatment were explored. Participants were assessed at pre-treatment and post-treatment on a task of social problem-solving skills and two secondary measures of socialisation and work profile using self- and caregiver-report. Using a variety of methods, the results showed that scores were significantly higher at post-treatment in the social problem-solving task and socialisation skills based on reports by parents. Differences in comparison to the control group had decreased after treatment. The treatment was acceptable to families and subject adherence was high. The Interpersonal Problem-Solving for Workplace Adaptation Programme appears to be a feasible training programme. © The Author(s) 2014.
Vadlin, Sofia; Åslund, Cecilia; Nilsson, Kent W
2018-04-01
The aims of this study were to investigate the long-term stability of problematic gaming among adolescents and whether problematic gaming at wave 1 (W1) was associated with problem gambling at wave 2 (W2), three years later. Data from the SALVe cohort, including adolescents in Västmanland born in 1997 and 1999, were accessed and analyzed in two waves (W2: N = 1576; 914 (58%) girls). At W1, the adolescents were 13 and 15 years old, and at W2, they were 16 and 18 years old. Adolescents self-rated on the Gaming Addiction Identification Test (GAIT), Problem Gambling Severity Index (PGSI), and gambling frequencies. Stability of gaming was determined using Gamma correlation, Spearman's rho, and McNemar. Logistic regression analysis and general linear model (GLM) analysis were performed and adjusted for sex, age, and ethnicity, frequency of gambling activities and gaming time at W1, with PGSI as the dependent variable, and GAIT as the independent variable, to investigate associations between problematic gaming and problem gambling. Problematic gaming was relatively stable over time, γ = 0.739, p ≤ .001, ρ = 0.555, p ≤ .001, and McNemar p ≤ .001. Furthermore, problematic gaming at W1 increased the probability of having problem gambling three years later, logistic regression OR = 1.886 (95% CI 1.125-3.161), p = .016, GLM F = 10.588, η² = 0.007, p = .001. Problematic gaming seems to be relatively stable over time. Although associations between problematic gaming and later problem gambling were found, the low explained variance indicates that problematic gaming is an unlikely predictor for problem gambling within this sample.
Prediction-Oriented Marker Selection (PROMISE): With Application to High-Dimensional Regression.
Kim, Soyeon; Baladandayuthapani, Veerabhadran; Lee, J Jack
2017-06-01
In personalized medicine, biomarkers are used to select therapies with the highest likelihood of success based on an individual patient's biomarker/genomic profile. Two goals are to choose important biomarkers that accurately predict treatment outcomes and to cull unimportant biomarkers to reduce the cost of biological and clinical verifications. These goals are challenging due to the high dimensionality of genomic data. Variable selection methods based on penalized regression (e.g., the lasso and elastic net) have yielded promising results. However, selecting the right amount of penalization is critical to simultaneously achieving these two goals. Standard approaches based on cross-validation (CV) typically provide high prediction accuracy with high true positive rates but at the cost of too many false positives. Alternatively, stability selection (SS) controls the number of false positives, but at the cost of yielding too few true positives. To circumvent these issues, we propose prediction-oriented marker selection (PROMISE), which combines SS with CV to conflate the advantages of both methods. Our application of PROMISE with the lasso and elastic net in data analysis shows that, compared to CV, PROMISE produces sparse solutions, few false positives, and small type I + type II error, and maintains good prediction accuracy, with a marginal decrease in the true positive rates. Compared to SS, PROMISE offers better prediction accuracy and true positive rates. In summary, PROMISE can be applied in many fields to select regularization parameters when the goals are to minimize false positives and maximize prediction accuracy.
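The stability-selection half of PROMISE can be illustrated generically: refit a sparse selector on many random subsamples and keep markers whose selection frequency exceeds a threshold. For brevity this sketch uses top-k univariate correlation screening as the base selector instead of the lasso or elastic net used in the paper; all parameter choices are illustrative.

```python
import numpy as np

def stability_select(X, y, k=5, n_sub=100, thresh=0.6, rng=None):
    """Stability-selection sketch: run a sparse selector (here, the k
    features most correlated with y) on n_sub half-size subsamples and
    return features chosen in at least `thresh` of the runs."""
    rng = rng or np.random.default_rng(0)
    n, p = X.shape
    counts = np.zeros(p)
    for _ in range(n_sub):
        idx = rng.choice(n, size=n // 2, replace=False)
        # |correlation| of y with each feature on this subsample
        r = np.abs(np.corrcoef(X[idx].T, y[idx])[-1, :-1])
        counts[np.argsort(r)[-k:]] += 1
    freq = counts / n_sub
    return np.flatnonzero(freq >= thresh), freq
```

PROMISE then combines such frequency-based selection with cross-validation so that the final regularization level keeps false positives low without sacrificing prediction accuracy.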
Diagonal Likelihood Ratio Test for Equality of Mean Vectors in High-Dimensional Data
Hu, Zongliang
2017-10-27
We propose a likelihood ratio test framework for testing normal mean vectors in high-dimensional data under two common scenarios: the one-sample test and the two-sample test with equal covariance matrices. We derive the test statistics under the assumption that the covariance matrices follow a diagonal matrix structure. In comparison with the diagonal Hotelling's tests, our proposed test statistics display some interesting characteristics. In particular, they are a summation of the log-transformed squared t-statistics rather than a direct summation of those components. More importantly, to derive the asymptotic normality of our test statistics under the null and local alternative hypotheses, we do not require the assumption that the covariance matrix follows a diagonal matrix structure. As a consequence, our proposed test methods are very flexible and can be widely applied in practice. Finally, simulation studies and a real data analysis are also conducted to demonstrate the advantages of our likelihood ratio test method.
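The "summation of log-transformed squared t-statistics" structure can be sketched for the one-sample case, assuming a diagonal covariance; the centering and scaling constants of the paper's asymptotic theory are omitted here:

```python
import numpy as np

def diag_lrt_statistic(X, mu0):
    """One-sample likelihood ratio statistic under a diagonal covariance:
    -2 log Lambda = n * sum_j log(1 + t_j^2 / (n - 1)), a sum of
    log-transformed squared marginal t-statistics rather than a direct
    sum of the t_j^2 themselves."""
    n, p = X.shape
    xbar = X.mean(axis=0)
    s2 = X.var(axis=0, ddof=1)
    t2 = n * (xbar - mu0) ** 2 / s2          # squared marginal t-statistics
    return n * np.sum(np.log1p(t2 / (n - 1)))
```

The log transform damps the influence of any single coordinate with an extreme t-statistic, which is one source of the robustness noted in the abstract.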
Biomarker identification and effect estimation on schizophrenia –a high dimensional data analysis
Directory of Open Access Journals (Sweden)
Yuanzhang eLi
2015-05-01
Full Text Available Biomarkers have been examined in schizophrenia research for decades. Schizophrenia is associated with elevated medical morbidity and mortality rates, as well as high personal and societal costs. The identification of biomarkers and alleles, which often have a small effect individually, may help to develop new diagnostic tests for early identification and treatment. Currently, there is not a commonly accepted statistical approach to identify predictive biomarkers from high dimensional data. We used the space Decomposition-Gradient-Regression (DGR) method to select biomarkers, which are associated with the risk of schizophrenia. Then, we used the gradient scores, generated from the selected biomarkers, as the prediction factor in regression to estimate their effects. We also used an alternative approach, classification and regression tree (CART), to compare the biomarkers selected by DGR and found about 70% of the selected biomarkers were the same. However, the advantage of DGR is that it can evaluate individual effects for each biomarker from their combined effect. In DGR analysis of serum specimens of US military service members with a diagnosis of schizophrenia from 1992 to 2005 and their controls, Alpha-1-Antitrypsin (AAT), Interleukin-6 receptor (IL-6r) and Connective Tissue Growth Factor (CTGF) were selected to identify schizophrenia for males; and Alpha-1-Antitrypsin (AAT), Apolipoprotein B (Apo B) and Sortilin were selected for females. If these findings from military subjects are replicated by other studies, they suggest the possibility of a novel biomarker panel as an adjunct to earlier diagnosis and initiation of treatment.
A sparse grid based method for generative dimensionality reduction of high-dimensional data
Bohn, Bastian; Garcke, Jochen; Griebel, Michael
2016-03-01
Generative dimensionality reduction methods play an important role in machine learning applications because they construct an explicit mapping from a low-dimensional space to the high-dimensional data space. We discuss a general framework to describe generative dimensionality reduction methods, where the main focus lies on a regularized principal manifold learning variant. Since most generative dimensionality reduction algorithms exploit the representer theorem for reproducing kernel Hilbert spaces, their computational costs grow at least quadratically in the number n of data. Instead, we introduce a grid-based discretization approach which automatically scales just linearly in n. To circumvent the curse of dimensionality of full tensor product grids, we use the concept of sparse grids. Furthermore, in real-world applications, some embedding directions are usually more important than others and it is reasonable to refine the underlying discretization space only in these directions. To this end, we employ a dimension-adaptive algorithm which is based on the ANOVA (analysis of variance) decomposition of a function. In particular, the reconstruction error is used to measure the quality of an embedding. As an application, the study of large simulation data from an engineering application in the automotive industry (car crash simulation) is performed.
International Nuclear Information System (INIS)
Snyder, Abigail C.; Jiao, Yu
2010-01-01
Neutron experiments at the Spallation Neutron Source (SNS) at Oak Ridge National Laboratory (ORNL) frequently generate large amounts of data (on the order of 10⁶ to 10¹² data points). Hence, traditional data analysis tools run on a single CPU take too long to be practical and scientists are unable to efficiently analyze all data generated by experiments. Our goal is to develop a scalable algorithm to efficiently compute high-dimensional integrals of arbitrary functions. This algorithm can then be used to integrate the four-dimensional integrals that arise as part of modeling intensity from the experiments at the SNS. Here, three different one-dimensional numerical integration solvers from the GNU Scientific Library were modified and implemented to solve four-dimensional integrals. The results of these solvers on a final integrand provided by scientists at the SNS can be compared to the results of other methods, such as quasi-Monte Carlo methods, computing the same integral. A parallelized version of the most efficient method can allow scientists the opportunity to more effectively analyze all experimental data.
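The nesting of one-dimensional rules into a four-dimensional integrator can be sketched with numpy's Gauss-Legendre nodes in place of the GSL solvers used in the report; the m⁴ growth in cost with the points-per-dimension m is what motivates the comparison with quasi-Monte Carlo methods:

```python
import numpy as np

def integrate_4d(f, lo, hi, m=8):
    """Integrate f(x1, x2, x3, x4) over [lo, hi]^4 by applying a 1-D
    m-point Gauss-Legendre rule along each dimension in turn. Cost is
    m**4 function evaluations."""
    x, w = np.polynomial.legendre.leggauss(m)
    # map nodes and weights from [-1, 1] to [lo, hi]
    x = 0.5 * (hi - lo) * x + 0.5 * (hi + lo)
    w = 0.5 * (hi - lo) * w
    total = 0.0
    for i in range(m):
        for j in range(m):
            for k in range(m):
                for l in range(m):
                    total += (w[i] * w[j] * w[k] * w[l]
                              * f(x[i], x[j], x[k], x[l]))
    return total
```

For example, the integral of x1·x2·x3·x4 over the unit hypercube is (1/2)⁴ = 0.0625, which this rule reproduces to machine precision since the integrand is a low-degree polynomial.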
Directory of Open Access Journals (Sweden)
Enkelejda Miho
2018-02-01
Full Text Available The adaptive immune system recognizes antigens via an immense array of antigen-binding antibodies and T-cell receptors, the immune repertoire. The interrogation of immune repertoires is of high relevance for understanding the adaptive immune response in disease and infection (e.g., autoimmunity, cancer, HIV). Adaptive immune receptor repertoire sequencing (AIRR-seq) has driven the quantitative and molecular-level profiling of immune repertoires, thereby revealing the high-dimensional complexity of the immune receptor sequence landscape. Several methods for the computational and statistical analysis of large-scale AIRR-seq data have been developed to resolve immune repertoire complexity and to understand the dynamics of adaptive immunity. Here, we review the current research on (i) diversity, (ii) clustering and network, (iii) phylogenetic, and (iv) machine learning methods applied to dissect, quantify, and compare the architecture, evolution, and specificity of immune repertoires. We summarize outstanding questions in computational immunology and propose future directions for systems immunology toward coupling AIRR-seq with the computational discovery of immunotherapeutics, vaccines, and immunodiagnostics.
Construction of high-dimensional neural network potentials using environment-dependent atom pairs.
Jose, K V Jovan; Artrith, Nongnuch; Behler, Jörg
2012-05-21
An accurate determination of the potential energy is the crucial step in computer simulations of chemical processes, but using electronic structure methods on-the-fly in molecular dynamics (MD) is computationally too demanding for many systems. Constructing more efficient interatomic potentials becomes intricate with increasing dimensionality of the potential-energy surface (PES), and for numerous systems the accuracy that can be achieved is still not satisfying and far from the reliability of first-principles calculations. Feed-forward neural networks (NNs) have a very flexible functional form, and in recent years they have been shown to be an accurate tool to construct efficient PESs. High-dimensional NN potentials based on environment-dependent atomic energy contributions have been presented for a number of materials. Still, these potentials may be improved by a more detailed structural description, e.g., in form of atom pairs, which directly reflect the atomic interactions and take the chemical environment into account. We present an implementation of an NN method based on atom pairs, and its accuracy and performance are compared to the atom-based NN approach using two very different systems, the methanol molecule and metallic copper. We find that both types of NN potentials provide an excellent description of both PESs, with the pair-based method yielding a slightly higher accuracy making it a competitive alternative for addressing complex systems in MD simulations.
Xia, Yin; Cai, Tianxi; Cai, T Tony
2018-01-01
Motivated by applications in genomics, we consider in this paper global and multiple testing for the comparisons of two high-dimensional linear regression models. A procedure for testing the equality of the two regression vectors globally is proposed and shown to be particularly powerful against sparse alternatives. We then introduce a multiple testing procedure for identifying unequal coordinates while controlling the false discovery rate and false discovery proportion. Theoretical justifications are provided to guarantee the validity of the proposed tests and optimality results are established under sparsity assumptions on the regression coefficients. The proposed testing procedures are easy to implement. Numerical properties of the procedures are investigated through simulation and data analysis. The results show that the proposed tests maintain the desired error rates under the null and have good power under the alternative at moderate sample sizes. The procedures are applied to the Framingham Offspring study to investigate the interactions between smoking and cardiovascular related genetic mutations important for an inflammation marker.
Energy Technology Data Exchange (ETDEWEB)
Dan Maljovec; Bei Wang; Valerio Pascucci; Peer-Timo Bremer; Michael Pernice; Robert Nourgaliev
2013-05-01
The next generation of methodologies for nuclear reactor Probabilistic Risk Assessment (PRA) explicitly accounts for the time element in modeling the probabilistic system evolution and uses numerical simulation tools to account for possible dependencies between failure events. The Monte-Carlo (MC) and the Dynamic Event Tree (DET) approaches belong to this new class of dynamic PRA methodologies. A challenge of dynamic PRA algorithms is the large amount of data they produce which may be difficult to visualize and analyze in order to extract useful information. We present a software tool that is designed to address these goals. We model a large-scale nuclear simulation dataset as a high-dimensional scalar function defined over a discrete sample of the domain. First, we provide structural analysis of such a function at multiple scales and provide insight into the relationship between the input parameters and the output. Second, we enable exploratory analysis for users, where we help the users to differentiate features from noise through multi-scale analysis on an interactive platform, based on domain knowledge and data characterization. Our analysis is performed by exploiting the topological and geometric properties of the domain, building statistical models based on its topological segmentations and providing interactive visual interfaces to facilitate such explorations. We provide a user’s guide to our software tool by highlighting its analysis and visualization capabilities, along with a use case involving dataset from a nuclear reactor safety simulation.
Schran, Christoph; Uhl, Felix; Behler, Jörg; Marx, Dominik
2018-03-01
The design of accurate helium-solute interaction potentials for the simulation of chemically complex molecules solvated in superfluid helium has long been a cumbersome task due to the rather weak but strongly anisotropic nature of the interactions. We show that this challenge can be met by using a combination of an effective pair potential for the He-He interactions and a flexible high-dimensional neural network potential (NNP) for describing the complex interaction between helium and the solute in a pairwise additive manner. This approach yields an excellent agreement with a mean absolute deviation as small as 0.04 kJ mol-1 for the interaction energy between helium and both hydronium and Zundel cations compared with coupled cluster reference calculations with an energetically converged basis set. The construction and improvement of the potential can be performed in a highly automated way, which opens the door for applications to a variety of reactive molecules to study the effect of solvation on the solute as well as the solute-induced structuring of the solvent. Furthermore, we show that this NNP approach yields very convincing agreement with the coupled cluster reference for properties like many-body spatial and radial distribution functions. This holds for the microsolvation of the protonated water monomer and dimer by a few helium atoms up to their solvation in bulk helium as obtained from path integral simulations at about 1 K.
Multi-Scale Factor Analysis of High-Dimensional Brain Signals
Ting, Chee-Ming
2017-05-18
In this paper, we develop an approach to modeling high-dimensional networks with a large number of nodes arranged in a hierarchical and modular structure. We propose a novel multi-scale factor analysis (MSFA) model which partitions the massive spatio-temporal data defined over the complex networks into a finite set of regional clusters. To achieve further dimension reduction, we represent the signals in each cluster by a small number of latent factors. The correlation matrix for all nodes in the network is approximated by lower-dimensional sub-structures derived from the cluster-specific factors. To estimate regional connectivity between numerous nodes (within each cluster), we apply principal components analysis (PCA) to produce factors which are derived as the optimal reconstruction of the observed signals under the squared loss. Then, we estimate global connectivity (between clusters or sub-networks) based on the factors across regions using the RV-coefficient as the cross-dependence measure. This gives a reliable and computationally efficient multi-scale analysis of both regional and global dependencies of the large networks. The proposed novel approach is applied to estimate brain connectivity networks using functional magnetic resonance imaging (fMRI) data. Results on resting-state fMRI reveal interesting modular and hierarchical organization of human brain networks during rest.
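The two-stage scheme described above — PCA factors within each cluster, then the RV-coefficient between clusters — can be sketched as follows. The simulated two-cluster data, the single shared latent signal, and the choice of three factors per cluster are illustrative assumptions, not the paper's fMRI setup:

```python
import numpy as np

def pca_factors(X, q):
    """First q principal-component score series of column-centered data X (n x p)."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:q].T                       # n x q latent factor time series

def rv_coefficient(X, Y):
    """RV coefficient (a matrix correlation in [0, 1]) between two data blocks
    with matched rows; used here as the cross-dependence measure."""
    Xc = X - X.mean(axis=0)
    Yc = Y - Y.mean(axis=0)
    Sxy = Xc.T @ Yc
    Sxx = Xc.T @ Xc
    Syy = Yc.T @ Yc
    num = np.trace(Sxy @ Sxy.T)
    den = np.sqrt(np.trace(Sxx @ Sxx) * np.trace(Syy @ Syy))
    return num / den

rng = np.random.default_rng(0)
common = rng.standard_normal((200, 1))          # latent signal shared by both clusters
A = common @ rng.standard_normal((1, 20)) + 0.1 * rng.standard_normal((200, 20))
B = common @ rng.standard_normal((1, 30)) + 0.1 * rng.standard_normal((200, 30))
fa, fb = pca_factors(A, 3), pca_factors(B, 3)   # regional dimension reduction
print(rv_coefficient(fa, fb))                   # global (between-cluster) dependence
```

Because the two clusters share a strong latent signal, the RV-coefficient between their factor sets comes out close to 1.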
Meng, Xi; Nguyen, Bao D; Ridge, Clark; Shaka, A J
2009-01-01
High-dimensional (HD) NMR spectra have poorer digital resolution than low-dimensional (LD) spectra, for a fixed amount of experiment time. This has led to "reduced-dimensionality" strategies, in which several LD projections of the HD NMR spectrum are acquired, each with higher digital resolution; an approximate HD spectrum is then inferred by some means. We propose a strategy that moves in the opposite direction, by adding more time dimensions to increase the information content of the data set, even if only a very sparse time grid is used in each dimension. The full HD time-domain data can be analyzed by the filter diagonalization method (FDM), yielding very narrow resonances along all of the frequency axes, even those with sparse sampling. Integrating over the added dimensions of HD FDM NMR spectra reconstitutes LD spectra with enhanced resolution, often more quickly than direct acquisition of the LD spectrum with a larger number of grid points in each of the fewer dimensions. If the extra dimensions do not appear in the final spectrum, and are used solely to boost information content, we propose the moniker hidden-dimension NMR. This work shows that HD peaks have unmistakable frequency signatures that can be detected as single HD objects by an appropriate algorithm, even though their patterns would be tricky for a human operator to visualize or recognize, and even if digital resolution in an HD FT spectrum is very coarse compared with natural line widths.
Feature Augmentation via Nonparametrics and Selection (FANS) in High-Dimensional Classification.
Fan, Jianqing; Feng, Yang; Jiang, Jiancheng; Tong, Xin
We propose a high dimensional classification method that involves nonparametric feature augmentation. Knowing that marginal density ratios are the most powerful univariate classifiers, we use the ratio estimates to transform the original feature measurements. Subsequently, penalized logistic regression is invoked, taking as input the newly transformed or augmented features. This procedure trains models equipped with local complexity and global simplicity, thereby avoiding the curse of dimensionality while creating a flexible nonlinear decision boundary. The resulting method is called Feature Augmentation via Nonparametrics and Selection (FANS). We motivate FANS by generalizing the Naive Bayes model, writing the log ratio of joint densities as a linear combination of those of marginal densities. It is related to generalized additive models, but has better interpretability and computability. Risk bounds are developed for FANS. In numerical analysis, FANS is compared with competing methods, so as to provide a guideline on its best application domain. Real data analysis demonstrates that FANS performs very competitively on benchmark email spam and gene expression data sets. Moreover, FANS is implemented by an extremely fast algorithm through parallel computing.
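A minimal sketch of the FANS recipe follows: kernel estimates of each feature's class-conditional marginal densities, a log-density-ratio transformation of the features, then L1-penalized logistic regression on the transformed features. This is an illustration under simplifying assumptions — the paper additionally uses sample splitting between the density-estimation and regression steps, omitted here for brevity, and the simulated data are purely synthetic:

```python
import numpy as np
from scipy.stats import gaussian_kde
from sklearn.linear_model import LogisticRegression

def fans_transform(X_train, y_train, X):
    """Replace each feature by its estimated log marginal density ratio
    log(f1_hat(x_j) / f0_hat(x_j)) -- the FANS feature augmentation.
    (FANS proper estimates densities on a held-out split; here the same
    sample is reused for simplicity.)"""
    Z = np.empty_like(X, dtype=float)
    eps = 1e-12                               # guard against log(0)
    for j in range(X.shape[1]):
        f1 = gaussian_kde(X_train[y_train == 1, j])
        f0 = gaussian_kde(X_train[y_train == 0, j])
        Z[:, j] = np.log(f1(X[:, j]) + eps) - np.log(f0(X[:, j]) + eps)
    return Z

rng = np.random.default_rng(0)
n = 300
y = rng.integers(0, 2, n)
X = rng.standard_normal((n, 5))
X[:, 0] += 1.5 * y                            # only the first feature is informative
Z = fans_transform(X, y, X)
clf = LogisticRegression(penalty="l1", solver="liblinear").fit(Z, y)
print(clf.score(Z, y))                        # training accuracy of the FANS classifier
```

The penalized regression supplies the "global simplicity" (feature selection), while the per-feature density ratios supply the "local complexity" (nonlinear univariate transforms).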
Multi-SOM: an Algorithm for High-Dimensional, Small Size Datasets
Directory of Open Access Journals (Sweden)
Shen Lu
2013-04-01
Full Text Available Since it takes time to do experiments in bioinformatics, biological datasets are sometimes small but with high dimensionality. From probability theory, in order to discover knowledge from a set of data, we have to have a sufficient number of samples. Otherwise, the error bounds can become too large to be useful. For the SOM (Self-Organizing Map) algorithm, the initial map is based on the training data. In order to avoid the bias caused by insufficient training data, in this paper we present an algorithm, called Multi-SOM. Multi-SOM builds a number of small self-organizing maps, instead of just one big map. Bayesian decision theory is used to make the final decision among similar neurons on different maps. In this way, we can better ensure a truly random initial weight vector set, the map size is less of a consideration, and errors tend to average out. In our experiments, as applied to microarray datasets, which are highly data-intensive and composed of genetics-related information, the precision of Multi-SOMs is 10.58% greater than that of SOMs, and their recall is 11.07% greater. Thus, the Multi-SOMs algorithm is practical.
Shapiro, Joan; And Others
1982-01-01
Compared the cost effectiveness of cognitive behavior group therapy, traditional process-oriented interpersonal group, and individual cognitive behavior therapy in dealing with depression and anxiety in a health maintenance organization population (N=44). Results suggest that cost considerations can become relatively important when decisions are…
Kuandykov A.A; Kassenkhan A.M; Mukazhanov N.K; Kozhamzharova D.K; Kalpeeva Zh.B; Sholpanbaev A.T
2013-01-01
This paper addresses the formalization of a problem domain containing business processes that are similar in structure. For research and formalization, a specific type of subject area was selected in which business processes with similar abstract structures occur at random times.
Passolunghi, Maria Chiara; Mammarella, Irene Cristina
2012-01-01
This study examines visual and spatial working memory skills in 35 third to fifth graders with both mathematics learning disabilities (MLD) and poor problem-solving skills and 35 of their peers with typical development (TD) on tasks involving both low and high attentional control. Results revealed that children with MLD, relative to TD children,…
Trach, Jessica; Lee, Matthew; Hymel, Shelley
2018-01-01
A substantial body of evidence verifies that social-emotional learning (SEL) can be effectively taught in schools and can reduce the prevalence and impact of emotional and behavioral problems (EBP) among children and youth. Although the positive effects of SEL on individual student's emotional, behavioral, and academic outcomes have been…
Directory of Open Access Journals (Sweden)
Jan Vavřina
2012-01-01
Full Text Available Each company is surrounded by a micro- and macro-environment that also affects its economic performance. These factors are not only individual accounting entries, but also analytical inputs such as internal company processes, cost management or short-term financial decisions and, specifically in the case of agriculture within the EU, also the public subsidy schemes implemented through the EU Common Agricultural Policy. Groups of agricultural producers are created as a response to current market dynamics and represent an opportunity for each agricultural enterprise regardless of its size. In this paper, the basis for agricultural cooperation is provided, traditional economic performance measures are presented, and their applicability is empirically verified on a sample of agricultural producers’ groups and wholesale entities. Wholesale entities are analysed by their business activity and performance features to assess whether they are a suitable peer group for comparing the economic performance of the examined agricultural producers’ groups. Since the economic performance of agricultural producers’ groups directly affects the economic performance of all participating entities, and vice versa, their economic performance measurement may involve specific constraints. From the structure and characteristics of agricultural producers’ groups it may be inferred that whilst common performance measurement techniques are applicable to the majority of companies, agricultural producers’ groups represent specific entities and therefore need an adjusted performance measurement approach.
International Nuclear Information System (INIS)
Antonov, N.V.; Borisenok, S.V.; Girina, V.I.
1996-01-01
Within the framework of the renormalization group approach to the theory of fully developed turbulence we consider the problem of possible IR relevant corrections to the Navier-Stokes equation. We formulate an exact criterion of the actual IR relevance of the corrections. In accordance with this criterion we verify the IR relevance for certain classes of composite operators. 17 refs., 2 tabs
Molenaar, Ivo W.
The technical problems involved in obtaining Bayesian model estimates for the regression parameters in m similar groups are studied. The available computer programs, BPREP (BASIC), and BAYREG, both written in FORTRAN, require an amount of computer processing that does not encourage regular use. These programs are analyzed so that the performance…
Eissa, Mourad Ali; Mostafa, Amaal Ahmed
2013-01-01
This study investigated the effect of using differentiated instruction by integrating multiple intelligences and learning styles on solving problems, achievement in, and attitudes towards math in six graders with learning disabilities in cooperative groups. A total of 60 students identified with LD were invited to participate. The sample was…
International Nuclear Information System (INIS)
Guerrieri, A.
2009-01-01
In this report the largest Lyapunov characteristic exponent of a high dimensional atmospheric global circulation model of intermediate complexity has been estimated numerically. A sensitivity analysis has been carried out by varying the equator-to-pole temperature difference, the space resolution and the value of some parameters employed by the model. Chaotic and non-chaotic regimes of circulation have been found.
Singaram, V S; Dolmans, D H J M; Lachman, N; van der Vleuten, C P M
2008-07-01
A key aspect of the success of a PBL curriculum is the effective implementation of its small group tutorials. Diversity among students participating in tutorials may affect the effectiveness of the tutorials and may require different implementation strategies. To determine how students from diverse backgrounds perceive the effectiveness of the processes and content of the PBL tutorials. This study also aims to explore the relationship between students' perceptions of their PBL tutorials and their gender, age, language, prior educational training, and secondary schooling. Data were survey results from 244 first-year student-respondents at the Nelson Mandela School of Medicine at the University of KwaZulu-Natal in South Africa. Exploratory factor analysis was conducted to verify scale constructs in the questionnaire. Relationships between independent and dependent variables were investigated in an analysis of variance. The average scores for the items measured varied between 3.3 and 3.8 (scale value 1 indicated negative regard and 5 indicated positive regard). Among process measures, approximately two-thirds of students felt that learning in a group was neither frustrating nor stressful and that they enjoyed learning how to work with students from different social and cultural backgrounds. Among content measures, 80% of the students felt that they learned to work successfully with students from different social and cultural groups and 77% felt that they benefited from the input of other group members. Mean ratings on these measures did not vary with students' gender, age, first language, prior educational training, and the types of schools they had previously attended. Medical students of the University of KwaZulu-Natal, regardless of their backgrounds, generally have positive perceptions of small group learning. These findings support previous studies in highlighting the role that small group tutorials can play in overcoming cultural barriers and promoting unity and
Evaluation of a new high-dimensional miRNA profiling platform
Directory of Open Access Journals (Sweden)
Lamblin Anne-Francoise
2009-08-01
Full Text Available Abstract Background MicroRNAs (miRNAs) are a class of approximately 22 nucleotide long, widely expressed RNA molecules that play important regulatory roles in eukaryotes. To investigate miRNA function, it is essential that methods to quantify their expression levels be available. Methods We evaluated a new miRNA profiling platform that utilizes Illumina's existing robust DASL chemistry as the basis for the assay. Using total RNA from five colon cancer patients and four cell lines, we evaluated the reproducibility of miRNA expression levels across replicates and with varying amounts of input RNA. The beta test version comprised 735 miRNA targets of Illumina's miRNA profiling application. Results Reproducibility between sample replicates within a plate was good (Spearman's correlation 0.91 to 0.98), as was the plate-to-plate reproducibility of replicates run on different days (Spearman's correlation 0.84 to 0.98). To determine whether quality data could be obtained from a broad range of input RNA, data obtained from amounts ranging from 25 ng to 800 ng were compared to those obtained at 200 ng. No effect across the range of RNA input was observed. Conclusion These results indicate that very small amounts of starting material are sufficient to allow sensitive miRNA profiling using the Illumina miRNA high-dimensional platform. Nonlinear biases were observed between replicates, indicating the need for abundance-dependent normalization. Overall, the performance characteristics of the Illumina miRNA profiling system were excellent.
Multivariate linear regression of high-dimensional fMRI data with multiple target variables.
Valente, Giancarlo; Castellanos, Agustin Lage; Vanacore, Gianluca; Formisano, Elia
2014-05-01
Multivariate regression is increasingly used to study the relation between fMRI spatial activation patterns and experimental stimuli or behavioral ratings. With linear models, informative brain locations are identified by mapping the model coefficients. This is a central aspect in neuroimaging, as it provides the sought-after link between the activity of neuronal populations and a subject's perception, cognition or behavior. Here, we show that mapping of informative brain locations using multivariate linear regression (MLR) may lead to incorrect conclusions and interpretations. MLR algorithms for high dimensional data are designed to deal with targets (stimuli or behavioral ratings, in fMRI) separately, and the predictive map of a model integrates information deriving from both neural activity patterns and experimental design. Not accounting explicitly for the presence of other targets whose associated activity spatially overlaps with that of interest may lead to predictive maps that are difficult to interpret. We propose a new model that can correctly identify the spatial patterns associated with a target while achieving good generalization. For each target, the training is based on an augmented dataset, which includes all remaining targets. The estimation on such datasets produces both maps and interaction coefficients, which are then used to generalize. The proposed formulation is independent of the regression algorithm employed. We validate this model on simulated fMRI data and on a publicly available dataset. Results indicate that our method achieves high spatial sensitivity and good generalization and that it helps disentangle specific neural effects from interaction with predictive maps associated with other targets. Copyright © 2013 Wiley Periodicals, Inc.
Gomez, Luis J; Yücel, Abdulkadir C; Hernandez-Garcia, Luis; Taylor, Stephan F; Michielssen, Eric
2015-01-01
A computational framework for uncertainty quantification in transcranial magnetic stimulation (TMS) is presented. The framework leverages high-dimensional model representations (HDMRs), which approximate observables (i.e., quantities of interest such as electric (E) fields induced inside targeted cortical regions) via series of iteratively constructed component functions involving only the most significant random variables (i.e., parameters that characterize the uncertainty in a TMS setup such as the position and orientation of TMS coils, as well as the size, shape, and conductivity of the head tissue). The component functions of HDMR expansions are approximated via a multielement probabilistic collocation (ME-PC) method. While approximating each component function, a quasi-static finite-difference simulator is used to compute observables at integration/collocation points dictated by the ME-PC method. The proposed framework requires far fewer simulations than traditional Monte Carlo methods for providing highly accurate statistical information (e.g., the mean and standard deviation) about the observables. The efficiency and accuracy of the proposed framework are demonstrated via its application to the statistical characterization of E-fields generated by TMS inside cortical regions of an MRI-derived realistic head model. Numerical results show that while uncertainties in tissue conductivities have negligible effects on TMS operation, variations in coil position/orientation and brain size significantly affect the induced E-fields. Our numerical results have several implications for the use of TMS during depression therapy: 1) uncertainty in the coil position and orientation may reduce the response rates of patients; 2) practitioners should favor targets on the crest of a gyrus to obtain maximal stimulation; and 3) an increasing scalp-to-cortex distance reduces the magnitude of E-fields on the surface and inside the cortex.
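The series expansion underlying this framework can be illustrated with a first-order cut-HDMR surrogate: the observable is approximated by a constant plus one-dimensional component functions obtained by varying one input at a time around a cut point. This is a sketch of the core HDMR idea only — the paper builds component functions with a multielement probabilistic collocation method rather than the simple grid interpolation used here, and the toy observable is an assumption for illustration:

```python
import numpy as np

def first_order_hdmr(f, cut, grids):
    """Build a first-order cut-HDMR surrogate of f around the cut point:
    f(x) ~ f0 + sum_i g_i(x_i), with each component function g_i tabulated
    by varying one input at a time and interpolated linearly."""
    f0 = f(cut)
    tables = []
    for i, g in enumerate(grids):
        vals = []
        for xi in g:
            x = cut.copy()
            x[i] = xi
            vals.append(f(x) - f0)            # g_i(x_i) = f(cut with x_i varied) - f0
        tables.append((g, np.array(vals)))

    def surrogate(x):
        return f0 + sum(np.interp(x[i], g, v) for i, (g, v) in enumerate(tables))
    return surrogate

# toy observable: nearly additive, with a mild pairwise interaction
f = lambda x: x[0] ** 2 + np.sin(x[1]) + 0.1 * x[0] * x[1]
cut = np.array([0.0, 0.0])
grids = [np.linspace(-1, 1, 21)] * 2
s = first_order_hdmr(f, cut, grids)
x = np.array([0.5, -0.3])
print(f(x), s(x))                              # surrogate close to the true value
```

The residual error at any point is exactly the neglected interaction term, which is why HDMR is efficient when only a few low-order interactions matter.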
From Ambiguities to Insights: Query-based Comparisons of High-Dimensional Data
Kowalski, Jeanne; Talbot, Conover; Tsai, Hua L.; Prasad, Nijaguna; Umbricht, Christopher; Zeiger, Martha A.
2007-11-01
Genomic technologies will revolutionize drug discovery and development; that much is universally agreed upon. The high dimension of data from such technologies has challenged available data analytic methods; that much is apparent. To date, large-scale data repositories have not been utilized in ways that permit their wealth of information to be efficiently processed for knowledge, presumably due in large part to inadequate analytical tools to address numerous comparisons of high-dimensional data. In candidate gene discovery, expression comparisons are often made between two features (e.g., cancerous versus normal), such that the enumeration of outcomes is manageable. With multiple features, the setting becomes more complex, in terms of comparing expression levels of tens of thousands of transcripts across hundreds of features. In this case, the number of outcomes, while enumerable, becomes rapidly large and unmanageable, and scientific inquiries become more abstract, such as "which one of these (compounds, stimuli, etc.) is not like the others?" We develop analytical tools that promote more extensive, efficient, and rigorous utilization of the public data resources generated by the massive support of genomic studies. Our work innovates by enabling access to such metadata with logically formulated scientific inquiries that define, compare and integrate query-comparison pair relations for analysis. We demonstrate our computational tool's potential to address an outstanding biomedical informatics issue of identifying reliable molecular markers in thyroid cancer. Our proposed query-based comparison (QBC) facilitates access to and efficient utilization of metadata through logically formed inquiries expressed as query-based comparisons by organizing and comparing results from biotechnologies to address applications in biomedicine.
Yu, Wenbao; Park, Taesung
2014-01-01
It is common to get an optimal combination of markers for disease classification and prediction when multiple markers are available. Many approaches based on the area under the receiver operating characteristic curve (AUC) have been proposed. Existing works based on AUC in a high-dimensional context depend mainly on a non-parametric, smooth approximation of AUC, with no work using a parametric AUC-based approach, for high-dimensional data. We propose an AUC-based approach using penalized regression (AucPR), which is a parametric method used for obtaining a linear combination for maximizing the AUC. To obtain the AUC maximizer in a high-dimensional context, we transform a classical parametric AUC maximizer, which is used in a low-dimensional context, into a regression framework and thus, apply the penalization regression approach directly. Two kinds of penalization, lasso and elastic net, are considered. The parametric approach can avoid some of the difficulties of a conventional non-parametric AUC-based approach, such as the lack of an appropriate concave objective function and a prudent choice of the smoothing parameter. We apply the proposed AucPR for gene selection and classification using four real microarray and synthetic datasets. Through numerical studies, AucPR is shown to perform better than the penalized logistic regression and the nonparametric AUC-based method, in the sense of AUC and sensitivity for a given specificity, particularly when there are many correlated genes. We propose a powerful parametric and easily-implementable linear classifier AucPR, for gene selection and disease prediction for high-dimensional data. AucPR is recommended for its good prediction performance. Besides gene expression microarray data, AucPR can be applied to other types of high-dimensional omics data, such as miRNA and protein data.
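The idea of casting AUC maximization as penalized regression can be illustrated with a simplified pairwise-difference sketch: regress all case-minus-control feature differences toward a positive target under a lasso penalty, so the fitted linear combination tends to score cases above controls. This is a loose illustration of the regression reformulation, not the paper's exact AucPR estimator, and the data, penalty level, and marker layout are synthetic assumptions:

```python
import numpy as np
from sklearn.linear_model import Lasso

def aucpr_sketch_fit(X_case, X_ctrl, alpha=0.01):
    """Fit a linear marker combination by lasso regression of all
    case-minus-control pairwise differences onto the target 1: pushing
    differences toward +1 pushes the empirical AUC toward 1."""
    D = (X_case[:, None, :] - X_ctrl[None, :, :]).reshape(-1, X_case.shape[1])
    model = Lasso(alpha=alpha, fit_intercept=False).fit(D, np.ones(len(D)))
    return model.coef_

def empirical_auc(w, X_case, X_ctrl):
    """Fraction of (case, control) pairs ranked correctly by the score X @ w."""
    s_case, s_ctrl = X_case @ w, X_ctrl @ w
    return np.mean(s_case[:, None] > s_ctrl[None, :])

rng = np.random.default_rng(0)
X_ctrl = rng.standard_normal((60, 10))
X_case = rng.standard_normal((60, 10))
X_case[:, :2] += 1.0                          # two informative markers, eight noise
w = aucpr_sketch_fit(X_case, X_ctrl)
print(empirical_auc(w, X_case, X_ctrl))       # well above the chance level of 0.5
```

The lasso penalty performs the gene selection: weights on the noise markers are shrunk toward zero while the informative markers carry the combination.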
Barker, David H; Swenson, Rebecca R; Brown, Larry K; Stanton, Bonita F; Vanable, Peter A; Carey, Michael P; Valois, Robert F; Diclemente, Ralph J; Salazar, Laura F; Romer, Daniel
2012-04-01
HIV-related stigma has been shown to impede HIV-antibody testing and safer sexual practices in adults. Less is known about its effects on prevention programs among at-risk youth. This study examined the longitudinal relationships between HIV-stigma and HIV-knowledge following completion of a validated group-based intervention. Data were provided by 1,654 African-American adolescents who participated in a large multi-city prevention trial (Project iMPACCS). Participants were randomly assigned to an empirically-validated skill-based intervention or a general health promotion control group. Both stigma and knowledge were assessed at baseline and post-intervention. Results suggested that adolescents participating in the intervention showed improvements in knowledge and decreases in stigma when compared to controls. Improvements in stigma appeared to be partly driven by improvements in knowledge. Higher baseline stigma was shown to reduce gains in knowledge in both the treatment and control groups. Results suggest that HIV-stigma can interfere with how youth identify with and internalize messages from group-based prevention trials.
Zhang, Tuohong; Tang, Shenglan; Jun, Gao; Whitehead, Margaret
2007-02-08
Large-scale Tuberculosis (TB) control programmes in China have been hailed a success. Concerns remain, however, about whether the programme is reaching all sections of the population, particularly poorer groups within rural communities, and whether there are hidden costs. This study takes a household perspective to investigate receipt of appropriate care and affordability of services for different socio-economic groups with TB symptoms in rural China. Secondary analysis of Chinese National Household Health Survey for 2003: 40,000 rural households containing 143,991 individuals, 2,308 identified as TB suspects. Main outcome measures were use of services and expenditure of TB suspects, by gender and socio-economic position, as indicated by household income, education, material assets, and insurance status. 37% of TB suspects did not seek any professional care, with low-income groups less likely to seek care than more affluent counterparts. Of those seeking care, only 35% received any of the recommended diagnostic tests. Of the 182 patients with a confirmed TB diagnosis, 104 (57%) received treatment at the recommended level, less likely if lacking health insurance or material assets. The burden of payment for services amounted to 45% of annual household income for the low-income group, 16% for the high-income group. Access to appropriate, affordable TB services is still problematic in some rural areas of China, and receipt of care and affordability declines with declining socio-economic position. These findings highlight the current shortcomings of the national TB control programme in China and the formidable challenge it faces if it is to reach all sections of the population, including the poor with the highest burden of disease.
Fjeldstad, Anette; Høglend, Per; Lorentzen, Steinar
2017-05-01
In this study, we compared the patterns of change in interpersonal problems between short-term and long-term psychodynamic group therapy. A total of 167 outpatients with mixed diagnoses were randomized to 20 or 80 weekly sessions of group therapy. Interpersonal problems were assessed with the Inventory of Interpersonal Problems at six time points during the 3-year study period. Using linear mixed models, change was linearly modelled in two steps. Earlier (within the first 6 months) and later (during the last 2.5 years) changes in five subscales were estimated. Contrary to what we expected, short-term therapy induced a significantly larger early change than long-term therapy on the cold subscale and there was a trend on the socially avoidant subscale, using a Bonferroni-adjusted alpha. There was no significant difference between short-term and long-term group therapy for improving problems in the areas cold, socially avoidant, nonassertive, exploitable, and overly nurturant over the 3 years.
Directory of Open Access Journals (Sweden)
Raftery Adrian E
2009-02-01
Full Text Available Abstract Background Microarray technology is increasingly used to identify potential biomarkers for cancer prognostics and diagnostics. Previously, we have developed the iterative Bayesian Model Averaging (BMA) algorithm for use in classification. Here, we extend the iterative BMA algorithm for application to survival analysis on high-dimensional microarray data. The main goal in applying survival analysis to microarray data is to determine a highly predictive model of patients' time to event (such as death, relapse, or metastasis) using a small number of selected genes. Our multivariate procedure combines the effectiveness of multiple contending models by calculating the weighted average of their posterior probability distributions. Our results demonstrate that our iterative BMA algorithm for survival analysis achieves high prediction accuracy while consistently selecting a small and cost-effective number of predictor genes. Results We applied the iterative BMA algorithm to two cancer datasets: breast cancer and diffuse large B-cell lymphoma (DLBCL) data. On the breast cancer data, the algorithm selected a total of 15 predictor genes across 84 contending models from the training data. The maximum likelihood estimates of the selected genes and the posterior probabilities of the selected models from the training data were used to divide patients in the test (or validation) dataset into high- and low-risk categories. Using the genes and models determined from the training data, we assigned patients from the test data into highly distinct risk groups (as indicated by a p-value of 7.26e-05 from the log-rank test). Moreover, we achieved comparable results using only the 5 top selected genes with 100% posterior probabilities. On the DLBCL data, our iterative BMA procedure selected a total of 25 genes across 3 contending models from the training data. Once again, we assigned the patients in the validation set to significantly distinct risk groups (p
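The posterior-probability-weighted averaging at the heart of BMA can be sketched with BIC-based approximate model weights. The BIC approximation to posterior model probability and the toy scores below are assumptions for illustration; they are not the iterative BMA algorithm itself, which also iterates over gene subsets:

```python
import numpy as np

def bma_weights(bics):
    """Approximate posterior model probabilities from BIC scores:
    weights proportional to exp(-BIC/2), normalized to sum to 1."""
    b = np.asarray(bics, dtype=float)
    w = np.exp(-(b - b.min()) / 2.0)          # shift by the minimum for stability
    return w / w.sum()

def bma_predict(preds, bics):
    """Model-averaged prediction: each contending model's prediction
    weighted by its (approximate) posterior probability."""
    return bma_weights(bics) @ np.asarray(preds, dtype=float)

# hypothetical per-model risk scores for two patients, and each model's BIC
preds = [[0.9, 0.2], [0.7, 0.4], [0.1, 0.8]]
bics = [100.0, 102.0, 120.0]
print(bma_weights(bics))                      # the third model contributes almost nothing
print(bma_predict(preds, bics))
```

Averaging over contending models in this way hedges against picking a single wrong model, which is the rationale the abstract gives for combining posterior distributions.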
Bullington, Jennifer; Cronqvist, Agneta
2018-03-01
In primary health care, efficacious treatment strategies are lacking for patients with psychosomatic health issues, whose most prominent symptoms accounting for consultation in primary care often cannot be related to any biological causes. The aim was to explore whether group supervision based on a specific phenomenological theory of psychosomatics could provide healthcare professionals treating patients with psychosomatic health issues within primary care with a deeper understanding of these conditions and stimulate profession-specific treatment strategies. Our research questions were as follows: (i) What is the healthcare professionals' understanding of psychosomatics before and after the intervention? (ii) What are the treatment strategies for this group of patients before and after the intervention? The study was an explorative qualitative intervention pilot study. Six participants from a primary healthcare setting in a medium-sized city in Sweden took part in the study. A supervision group was formed, based on a mix of professions, age, gender and years of clinical experience. Supervision consisted of one 75-minute meeting every month over the course of 6 months. Participants were interviewed before and after the supervision intervention. The study showed that two distinct categories emerged from the data. One category of healthcare professionals espoused a psycho-educative approach, while the other lacked a cohesive approach. The supervision improved the second category of healthcare professionals' understanding of psychosomatics. The psycho-educative group did not change their understanding of psychosomatics, although they felt strengthened in their approach by the supervision. Profession-specific strategies were not developed. This pilot study indicates that a relatively short supervision intervention can aid clinicians in their clinical encounters with these patients; however, further research is necessary to ascertain the value of the specific phenomenologically based
International Nuclear Information System (INIS)
Fedotovskii, V.S.
1988-02-01
The vibrations of tanks containing liquid and undeformable cylindrical or spherical inclusions are considered. It is shown that, for calculating the dynamic characteristics of such systems, it is advisable to use a continuum approach, i.e., to treat the heterogeneous media formed by a liquid and the inclusions suspended in it as homogeneous media with effective (vibro-rheological) properties. On the basis of the problem of vibrations of a tank containing liquid and localized inclusions, vibrations of rod assemblies are considered, and relationships are obtained for the added mass and the resistance coefficient that determine the dynamic characteristics of such systems. Vibrations of liquid-filled tanks containing spherical inclusions are also considered. The results obtained are used to calculate the dynamic characteristics of two-phase-flow pipelines in bubble and annular flow regimes. The theoretical relationships are compared with available experimental data.
Integrating high dimensional bi-directional parsing models for gene mention tagging.
Hsu, Chun-Nan; Chang, Yu-Ming; Kuo, Cheng-Ju; Lin, Yu-Shi; Huang, Han-Shen; Chung, I-Fang
2008-07-01
Tagging gene and gene product mentions in scientific text is an important initial step of literature mining. In this article, we describe in detail our gene mention tagger, which participated in the BioCreative 2 challenge, and analyze what contributes to its good performance. Our tagger is based on the conditional random fields (CRF) model, the most prevalent method for the gene mention tagging task in BioCreative 2. Our tagger is notable because it achieved the highest F-score among CRF-based methods and the second highest overall. Moreover, we obtained our results mostly by applying open-source packages, making it easy to duplicate them. We first describe in detail how we developed our CRF-based tagger. We designed a very high-dimensional feature set that includes most of the information that may be relevant. We trained bi-directional CRF models with the same set of features, one applying forward parsing and the other backward, and integrated the two models based on their output scores and dictionary filtering. One of the most prominent factors contributing to the good performance of our tagger is the integration of the additional backward parsing model. However, from the definition of CRF, it would appear that a CRF model is symmetric and that bi-directional parsing models should produce the same results. We show that, due to different feature settings, a CRF model can be asymmetric, and that the feature setting used by our tagger in BioCreative 2 not only produces different results but also gives the backward parsing model a slight but consistent advantage over the forward parsing model. To fully explore the potential of integrating bi-directional parsing models, we applied different asymmetric feature settings to generate many bi-directional parsing models and integrated them based on their output scores. Experimental results show that this integrated model can achieve an even higher F-score based solely on the training corpus for gene mention tagging. Data sets, programs and an on-line service of our gene
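The score-based integration of the two parsing directions can be sketched as follows. This is an illustration of the idea only, not the authors' implementation; the data structures and the stop-list style of dictionary filtering are assumptions made for the sketch.

```python
def integrate_bidirectional(forward_preds, backward_preds, dictionary=None):
    """Merge gene-mention predictions from forward- and backward-parsing
    models by keeping, for each sentence, the higher-scoring output.

    Each prediction dict maps sentence id -> (mentions, confidence score).
    An optional dictionary acts as a stop-list of known false positives.
    """
    merged = {}
    for sid in forward_preds:
        fwd_mentions, fwd_score = forward_preds[sid]
        bwd_mentions, bwd_score = backward_preds[sid]
        mentions = fwd_mentions if fwd_score >= bwd_score else bwd_mentions
        if dictionary is not None:
            # Dictionary filtering: drop candidates listed as false positives.
            mentions = [m for m in mentions if m not in dictionary]
        merged[sid] = mentions
    return merged
```

For example, if the forward model scores a sentence higher, its mentions win for that sentence, while the backward model's output is kept wherever it scores higher.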
Teodorovich, E. V.
2018-03-01
In order to find the shape of the energy spectrum within the framework of the model of stationary homogeneous isotropic turbulence, the renormalization-group equations, which reflect the Markovian nature of the mechanism of energy transfer along the wavenumber spectrum, are used in addition to dimensional considerations and the energy balance equation. The resulting formula for the spectrum depends on three parameters: the wavenumber that marks the upper boundary of the turbulent-energy production range, the spectral energy flux through this boundary, and the kinematic viscosity of the fluid.
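The abstract does not reproduce the resulting three-parameter formula. As a point of reference only (this is textbook background, not the paper's result), the dimensional considerations mentioned above fix the inertial-range behaviour of any such spectrum to the classical Kolmogorov form:

```latex
% Inertial-range spectrum required by dimensional analysis,
% with \varepsilon the spectral energy flux and C_K the Kolmogorov constant
E(k) = C_K \, \varepsilon^{2/3} k^{-5/3}
```

The three parameters cited in the abstract (the production-range boundary wavenumber, the flux through it, and the kinematic viscosity) then determine where this power-law range begins and where viscous dissipation cuts it off.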
Hassmiller Lich, Kristen; Urban, Jennifer Brown; Frerichs, Leah; Dave, Gaurav
2017-02-01
Group concept mapping (GCM) has been successfully employed in program planning and evaluation for over 25 years. The broader set of systems thinking methodologies (of which GCM is one), have only recently found their way into the field. We present an overview of systems thinking emerging from a system dynamics (SD) perspective, and illustrate the potential synergy between GCM and SD. As with GCM, participatory processes are frequently employed when building SD models; however, it can be challenging to engage a large and diverse group of stakeholders in the iterative cycles of divergent thinking and consensus building required, while maintaining a broad perspective on the issue being studied. GCM provides a compelling resource for overcoming this challenge, by richly engaging a diverse set of stakeholders in broad exploration, structuring, and prioritization. SD provides an opportunity to extend GCM findings by embedding constructs in a testable hypothesis (SD model) describing how system structure and changes in constructs affect outcomes over time. SD can be used to simulate the hypothesized dynamics inherent in GCM concept maps. We illustrate the potential of the marriage of these methodologies in a case study of BECOMING, a federally-funded program aimed at strengthening the cross-sector system of care for youth with severe emotional disturbances. Copyright © 2016 Elsevier Ltd. All rights reserved.
Trout, Joseph; Bland, Jared
2013-03-01
In this pilot project, one hour of lecture time was replaced with one hour of in-class assignments on which groups of students collaborated. These in-class assignments consisted of problems or projects selected for calculus-based introductory physics students. The first problem was at a level of difficulty that the majority of the students could complete with a small to moderate amount of difficulty. Each successive problem was more difficult, with the last problem at a level beyond the capabilities of the majority of the students and requiring some instructor intervention. The students were free to choose their own groups and were encouraged to interact and help each other understand. The success of the in-class exercises was measured using pre-tests and post-tests, each completed by every student independently. Statistics were also compiled on each student's attendance record and on the amount of time spent reading and studying, as reported by the student, as well as on the students' responses when asked whether they had sufficient time to complete the pre-test and post-test and whether they would have completed the test with the correct answers given more time. The pre-tests and post-tests were not used in computing the students' grades.
International Nuclear Information System (INIS)
Alcaras, J.A.C.; Ferreira, J.L.
1975-01-01
A derivation of an angular basis for the A-body problem, suitable for the K-harmonics method, is presented. These angular functions are obtained from homogeneous and harmonic polynomials, which are completely specified by labels associated with eigenvalues of the Casimir invariants of subgroups of the 3(A-1)-dimensional orthogonal group, among them the total angular momentum and its z-projection.
International Nuclear Information System (INIS)
Williams, M.M.R.
2003-01-01
A two group integral equation derived using transport theory, which describes the fuel distribution necessary for a flat thermal flux and minimum critical mass, is solved by the classical end-point method. This method has a number of advantages and in particular highlights the changing behaviour of the fissile mass distribution function in the neighbourhood of the core-reflector interface. We also show how the reflector thermal flux behaves and explain the origin of the maximum which arises when the critical size is less than that corresponding to minimum critical mass. A comparison is made with diffusion theory and the necessary and somewhat artificial presence of surface delta functions in the fuel distribution is shown to be analogous to the edge transients that arise naturally in transport theory
Directory of Open Access Journals (Sweden)
Lydia eMorris
2016-02-01
Full Text Available Background: Increasingly, research supports the utility of a transdiagnostic understanding of psychopathology. However, there is no consensus regarding the theoretical approach that best explains this. Transdiagnostic interventions can offer service delivery advantages; this is explored in the current review, focusing on group modalities and primary care settings. Objective: This review seeks to explore whether a Perceptual Control Theory (PCT) explanation of psychopathology across disorders is a valid one. Further, this review illustrates the process of developing a novel transdiagnostic intervention (the Take Control Course; TCC) from a PCT theory of functioning. Method: Narrative review. Results and Conclusions: Considerable evidence supports key tenets of PCT. Further, PCT offers a novel perspective regarding the mechanisms by which a number of familiar techniques, such as exposure and awareness, are effective. However, additional research is required to directly test the relative contribution of some PCT mechanisms predicted to underlie psychopathology. Directions for future research are considered.
Data-driven forecasting of high-dimensional chaotic systems with long short-term memory networks.
Vlachas, Pantelis R; Byeon, Wonmin; Wan, Zhong Y; Sapsis, Themistoklis P; Koumoutsakos, Petros
2018-05-01
We introduce a data-driven forecasting method for high-dimensional chaotic systems using long short-term memory (LSTM) recurrent neural networks. The proposed LSTM neural networks perform inference of high-dimensional dynamical systems in their reduced order space and are shown to be an effective set of nonlinear approximators of their attractor. We demonstrate the forecasting performance of the LSTM and compare it with Gaussian processes (GPs) in time series obtained from the Lorenz 96 system, the Kuramoto-Sivashinsky equation and a prototype climate model. The LSTM networks outperform the GPs in short-term forecasting accuracy in all applications considered. A hybrid architecture, extending the LSTM with a mean stochastic model (MSM-LSTM), is proposed to ensure convergence to the invariant measure. This novel hybrid method is fully data-driven and extends the forecasting capabilities of LSTM networks.
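The inference step of an LSTM is compact enough to sketch directly. The single-cell update below, written in numpy as an illustration only (the authors' reduced-order architecture and MSM hybrid are not reproduced, and the gate ordering and shapes are assumptions of this sketch), is the recurrence that such forecasters iterate over time:

```python
import numpy as np

def lstm_step(x, h, c, W, U, b):
    """One step of a standard LSTM cell.

    x: input vector (d,); h, c: hidden and cell states (n,)
    W: (4n, d) input weights; U: (4n, n) recurrent weights; b: (4n,) bias.
    Gates are stacked in the order [input, forget, output, candidate].
    """
    n = h.shape[0]
    z = W @ x + U @ h + b
    i = 1.0 / (1.0 + np.exp(-z[:n]))        # input gate
    f = 1.0 / (1.0 + np.exp(-z[n:2*n]))     # forget gate
    o = 1.0 / (1.0 + np.exp(-z[2*n:3*n]))   # output gate
    g = np.tanh(z[3*n:])                    # candidate cell update
    c_new = f * c + i * g                   # gated memory update
    h_new = o * np.tanh(c_new)              # exposed hidden state
    return h_new, c_new

# Rolling this cell forward over a sequence of reduced-order states yields
# the hidden trajectory from which a linear readout produces the forecast.
```

In a forecasting loop, the cell's prediction at one step is fed back as the input for the next, which is where the divergence-control role of the mean stochastic model in the hybrid becomes relevant.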
International Nuclear Information System (INIS)
Oganesian, A.G.
1998-01-01
A method is proposed for estimating unknown vacuum expectation values of high-dimensional operators. The method is based on the idea that the factorization hypothesis is self-consistent. Results are obtained for all vacuum expectation values of dimension-7 operators, and some estimates for dimension-10 operators are presented as well. The resulting values are used to compute corrections of higher dimensions to the Bjorken and Ellis-Jaffe sum rules
Nam, Julia EunJu; Mueller, Klaus
2013-02-01
Gaining a true appreciation of high-dimensional space remains difficult since all of the existing high-dimensional space exploration techniques serialize the space travel in some way. This is not so foreign to us since we, when traveling, also experience the world in a serial fashion. But we typically have access to a map to help with positioning, orientation, navigation, and trip planning. Here, we propose a multivariate data exploration tool that compares high-dimensional space navigation with a sightseeing trip. It decomposes this activity into five major tasks: 1) Identify the sights: use a map to identify the sights of interest and their location; 2) Plan the trip: connect the sights of interest along a specifiable path; 3) Go on the trip: travel along the route; 4) Hop off the bus: experience the location, look around, zoom into detail; and 5) Orient and localize: regain bearings in the map. We describe intuitive and interactive tools for all of these tasks, both global navigation within the map and local exploration of the data distributions. For the latter, we describe a polygonal touchpad interface which enables users to smoothly tilt the projection plane in high-dimensional space to produce multivariate scatterplots that best convey the data relationships under investigation. Motion parallax and illustrative motion trails aid in the perception of these transient patterns. We describe the use of our system within two applications: 1) the exploratory discovery of data configurations that best fit a personal preference in the presence of tradeoffs and 2) interactive cluster analysis via cluster sculpting in N-D.
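The plane-tilting interaction described above has a simple geometric core that can be sketched: project N-D points onto a 2-D plane spanned by two orthonormal vectors, then "tilt" the plane by rotating one spanning vector toward a third data axis and re-orthonormalizing. This is illustrative only; the paper's touchpad interface and rendering are not reproduced here.

```python
import numpy as np

def project(points, e1, e2):
    """Orthogonal projection of N-D points onto the plane span{e1, e2},
    yielding 2-D scatterplot coordinates."""
    basis = np.stack([e1, e2])      # (2, N)
    return points @ basis.T         # (m, 2)

def tilt(e1, e2, axis, angle):
    """Rotate e1 toward a chosen data axis and re-orthonormalize the pair,
    producing a smoothly tilted projection plane."""
    v = np.cos(angle) * e1 + np.sin(angle) * axis
    v /= np.linalg.norm(v)
    w = e2 - (e2 @ v) * v           # Gram-Schmidt against the new e1
    w /= np.linalg.norm(w)
    return v, w
```

Interpolating `angle` between 0 and a target value gives the animated transitions that make the motion parallax cues possible.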
Stocks, Jennifer Dugan; Taneja, Baldeo K; Baroldi, Paolo; Findling, Robert L
2012-04-01
To evaluate safety and tolerability of four doses of immediate-release molindone hydrochloride in children with attention-deficit/hyperactivity disorder (ADHD) and serious conduct problems. This open-label, parallel-group, dose-ranging, multicenter trial randomized children, aged 6-12 years, with ADHD and persistent, serious conduct problems to receive oral molindone thrice daily for 9-12 weeks in four treatment groups: Group 1-10 mg (5 mg if weight conduct problems. Secondary outcome measures included change in Nisonger Child Behavior Rating Form-Typical Intelligence Quotient (NCBRF-TIQ) Conduct Problem subscale scores, change in Clinical Global Impressions-Severity (CGI-S) and -Improvement (CGI-I) subscale scores from baseline to end point, and Swanson, Nolan, and Pelham rating scale-revised (SNAP-IV) ADHD-related subscale scores. The study randomized 78 children; 55 completed the study. Treatment with molindone was generally well tolerated, with no clinically meaningful changes in laboratory or physical examination findings. The most common treatment-related adverse events (AEs) included somnolence (n=9), weight increase (n=8), akathisia (n=4), sedation (n=4), and abdominal pain (n=4). Mean weight increased by 0.54 kg, and mean body mass index by 0.24 kg/m². The incidence of AEs and treatment-related AEs increased with increasing dose. NCBRF-TIQ subscale scores improved in all four treatment groups, with 34%, 34%, 32%, and 55% decreases from baseline in groups 1, 2, 3, and 4, respectively. CGI-S and SNAP-IV scores improved over time in all treatment groups, and CGI-I scores improved to the greatest degree in group 4. Molindone at doses of 5-20 mg/day (children weighing <30 kg) and 20-40 mg/day (≥ 30 kg) was well tolerated, and preliminary efficacy results suggest that molindone produces dose-related behavioral improvements over 9-12 weeks. Additional double-blind, placebo-controlled trials are needed to further investigate molindone in this pediatric population.
Kritz, Marlene; Gschwandtner, Manfred; Stefanov, Veronika; Hanbury, Allan; Samwald, Matthias
2013-06-26
There is a large body of research suggesting that medical professionals have unmet information needs during their daily routines. To investigate which online resources and tools different groups of European physicians use to gather medical information and to identify barriers that prevent the successful retrieval of medical information from the Internet. A detailed Web-based questionnaire was sent out to approximately 15,000 physicians across Europe and disseminated through partner websites. 500 European physicians of different levels of academic qualification and medical specialization were included in the analysis. Self-reported frequency of use of different types of online resources, perceived importance of search tools, and perceived search barriers were measured. Comparisons were made across different levels of qualification (qualified physicians vs physicians in training, medical specialists without professorships vs medical professors) and specialization (general practitioners vs specialists). Most participants were Internet-savvy, came from Austria (43%, 190/440) and Switzerland (31%, 137/440), were above 50 years old (56%, 239/430), stated high levels of medical work experience, had regular patient contact and were employed in nonacademic health care settings (41%, 177/432). All groups reported frequent use of general search engines and cited "restricted accessibility to good quality information" as a dominant barrier to finding medical information on the Internet. Physicians in training reported the most frequent use of Wikipedia (56%, 31/55). Specialists were more likely than general practitioners to use medical research databases (68%, 185/274 vs 27%, 24/88; χ²₂=44.905, Presources on the Internet and frequent reliance on general search engines and social media among physicians require further attention. Possible solutions may be increased governmental support for the development and popularization of user-tailored medical search tools and open
International Nuclear Information System (INIS)
Saxe, P.; Fox, D.J.; Schaefer, H.F. III; Handy, N.C.
1982-01-01
A new method for the approximate solution of Schroedinger's equation for many electron molecular systems is outlined. The new method is based on the unitary group approach (UGA) and exploits in particular the shape of loops appearing in Shavitt's graphical representation for the UGA. The method is cast in the form of a direct CI, makes use of Siegbahn's external space simplifications, and is suitable for very large configuration interaction (CI) wave functions. The ethylene molecule was chosen, as a prototype of unsaturated organic molecules, for the variational study of genuine many-body (i.e., >2-body) correlation effects. With a double zeta plus polarization basis set, the largest CI included all valence electron single and double excitations with respect to a 703-configuration natural orbital reference function. This variational calculation, involving 1 046 758 spin- and space-adapted ¹A_g configurations, was carried out on a minicomputer. Triple excitations are found to contribute 2.3% of the correlation energy and quadruple excitations 6.4%.
Bhadra, Anindya
2013-04-22
We describe a Bayesian technique to (a) perform a sparse joint selection of significant predictor variables and significant inverse covariance matrix elements of the response variables in a high-dimensional linear Gaussian sparse seemingly unrelated regression (SSUR) setting and (b) perform an association analysis between the high-dimensional sets of predictors and responses in such a setting. To search the high-dimensional model space, where both the number of predictors and the number of possibly correlated responses can be larger than the sample size, we demonstrate that a marginalization-based collapsed Gibbs sampler, in combination with spike and slab type of priors, offers a computationally feasible and efficient solution. As an example, we apply our method to an expression quantitative trait loci (eQTL) analysis on publicly available single nucleotide polymorphism (SNP) and gene expression data for humans where the primary interest lies in finding the significant associations between the sets of SNPs and possibly correlated genetic transcripts. Our method also allows for inference on the sparse interaction network of the transcripts (response variables) after accounting for the effect of the SNPs (predictor variables). We exploit properties of Gaussian graphical models to make statements concerning conditional independence of the responses. Our method compares favorably to existing Bayesian approaches developed for this purpose. © 2013, The International Biometric Society.
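The spike-and-slab Gibbs machinery can be caricatured in a far simpler setting than the one studied above. The sketch below (numpy only, single response, no SSUR structure, known noise variance, and all hyperparameters invented for illustration) samples inclusion indicators with the coefficient analytically integrated out, which is the same marginalization idea the collapsed sampler exploits:

```python
import numpy as np

def spike_slab_gibbs(X, y, n_iter=400, burn=100, sigma2=1.0,
                     tau2=25.0, pi=0.2, seed=0):
    """Toy spike-and-slab Gibbs sampler for sparse linear regression:
    point mass at zero (spike), N(0, tau2) slab, known noise variance.
    Returns posterior inclusion probabilities for each predictor."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    beta = np.zeros(p)
    gamma = np.zeros(p)
    keep = np.zeros(p)
    for it in range(n_iter):
        for j in range(p):
            xj = X[:, j]
            r = y - X @ beta + xj * beta[j]     # residual excluding j
            a = xj @ xj / sigma2 + 1.0 / tau2   # posterior precision of beta_j
            m = (xj @ r / sigma2) / a           # posterior mean of beta_j
            # Marginal likelihood ratio, slab vs spike (beta_j integrated out):
            # sqrt(1 / (tau2 * a)) * exp(m^2 * a / 2)
            log_ratio = 0.5 * (np.log(1.0 / (tau2 * a)) + m * m * a)
            odds = pi / (1.0 - pi) * np.exp(min(log_ratio, 700.0))
            p_incl = odds / (1.0 + odds)
            gamma[j] = rng.random() < p_incl
            beta[j] = rng.normal(m, np.sqrt(1.0 / a)) if gamma[j] else 0.0
        if it >= burn:
            keep += gamma
    return keep / (n_iter - burn)
```

On synthetic data with a strong sparse signal, the averaged indicators concentrate near 1 for the true predictors and near 0 for the rest; the paper's sampler additionally handles correlated multivariate responses through the inverse covariance structure.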
Pololi, Linda H; Evans, Arthur T
2015-01-01
To address a dearth of mentoring and to avoid the pitfalls of dyadic mentoring, the authors implemented and evaluated a novel collaborative group peer mentoring program in a large academic department of medicine. The mentoring program aimed to facilitate faculty in their career planning, and targeted either early-career or midcareer faculty in 5 cohorts over 4 years, from 2010 to 2014. Each cohort of 9-12 faculty participated in a yearlong program with foundations in adult learning, relationship formation, mindfulness, and culture change. Participants convened for an entire day, once a month. Sessions incorporated facilitated stepwise and values-based career planning, skill development, and reflective practice. Early-career faculty participated in an integrated writing program and midcareer faculty in leadership development. Overall attendance of the 51 participants was 96%, and only 3 of 51 faculty who completed the program left the medical school during the 4 years. All faculty completed a written detailed structured academic development plan. Participants experienced an enhanced, inclusive, and appreciative culture; clarified their own career goals, values, strengths and priorities; enhanced their enthusiasm for collaboration; and developed skills. The program results highlight the need for faculty to personally experience the power of forming deep relationships with their peers for fostering successful career development and vitality. The outcomes of faculty humanity, vitality, professionalism, relationships, appreciation of diversity, and creativity are essential to the multiple missions of academic medicine. © 2015 The Alliance for Continuing Education in the Health Professions, the Society for Academic Continuing Medical Education, and the Council on Continuing Medical Education, Association for Hospital Medical Education.
Directory of Open Access Journals (Sweden)
Akbar Hassanzadeh
2017-01-01
Full Text Available Objective. The current study is aimed at investigating the association between stressful life events and psychological problems in a large sample of Iranian adults. Method. In a cross-sectional large-scale community-based study, 4763 Iranian adults, living in Isfahan, Iran, were investigated. Grouped outcomes latent factor regression on latent predictors was used for modeling the association of psychological problems (depression, anxiety, and psychological distress), measured by the Hospital Anxiety and Depression Scale (HADS) and General Health Questionnaire (GHQ-12), as the grouped outcomes, and stressful life events, measured by a self-administered stressful life events (SLEs) questionnaire, as the latent predictors. Results. The results showed that the personal stressors domain has significant positive association with psychological distress (β=0.19), anxiety (β=0.25), depression (β=0.15), and their collective profile score (β=0.20), with greater associations in females (β=0.28) than in males (β=0.13) (all P<0.001). In addition, in the adjusted models, the regression coefficients for the association of social stressors domain and psychological problems profile score were 0.37, 0.35, and 0.46 in total sample, males, and females, respectively (P<0.001). Conclusion. Results of our study indicated that different stressors, particularly those socioeconomic related, have an effective impact on psychological problems. It is important to consider the social and cultural background of a population for managing the stressors as an effective approach for preventing and reducing the destructive burden of psychological problems.
Hassanzadeh, Akbar; Heidari, Zahra; Hassanzadeh Keshteli, Ammar; Afshar, Hamid
2017-01-01
Objective The current study is aimed at investigating the association between stressful life events and psychological problems in a large sample of Iranian adults. Method In a cross-sectional large-scale community-based study, 4763 Iranian adults, living in Isfahan, Iran, were investigated. Grouped outcomes latent factor regression on latent predictors was used for modeling the association of psychological problems (depression, anxiety, and psychological distress), measured by Hospital Anxiety and Depression Scale (HADS) and General Health Questionnaire (GHQ-12), as the grouped outcomes, and stressful life events, measured by a self-administered stressful life events (SLEs) questionnaire, as the latent predictors. Results The results showed that the personal stressors domain has significant positive association with psychological distress (β = 0.19), anxiety (β = 0.25), depression (β = 0.15), and their collective profile score (β = 0.20), with greater associations in females (β = 0.28) than in males (β = 0.13) (all P < 0.001). In addition, in the adjusted models, the regression coefficients for the association of social stressors domain and psychological problems profile score were 0.37, 0.35, and 0.46 in total sample, males, and females, respectively (P < 0.001). Conclusion Results of our study indicated that different stressors, particularly those socioeconomic related, have an effective impact on psychological problems. It is important to consider the social and cultural background of a population for managing the stressors as an effective approach for preventing and reducing the destructive burden of psychological problems. PMID:29312459
Talib, Imran; Belgacem, Fethi Bin Muhammad; Asif, Naseer Ahmad; Khalil, Hammad
2017-01-01
In this research article, we derive and analyze an efficient spectral method, based on the operational matrices of three-dimensional orthogonal Jacobi polynomials, to numerically solve a generalized class of multi-term, high-dimensional fractional-order partial differential equations with mixed partial derivatives. With the aid of the operational matrices, we transform the fractional-order problem into an easily solvable system of algebraic equations, whose solution yields the solution of the original problem. Some test problems are considered to confirm the accuracy and validity of the proposed numerical method. The convergence of the method is ensured by comparing our Matlab simulation results with exact solutions from the literature, yielding negligible errors. Moreover, comparative results discussed in the literature are extended and improved in this study.
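The operational-matrix idea itself can be shown in a deliberately reduced setting. The sketch below uses a monomial basis and a first-order ODE instead of three-dimensional Jacobi polynomials and a fractional PDE, so it illustrates the mechanism rather than the paper's method: differentiation becomes a matrix acting on coefficient vectors, and the differential problem becomes an algebraic system.

```python
import numpy as np

def derivative_matrix(K):
    """Operational matrix D of d/dx in the monomial basis {1, x, ..., x^K}:
    if u has coefficient vector c, then u' has coefficient vector D @ c."""
    D = np.zeros((K + 1, K + 1))
    for k in range(1, K + 1):
        D[k - 1, k] = k   # d/dx x^k = k x^(k-1)
    return D

def solve_ode(f_coeffs, u0, K):
    """Solve u'(x) = f(x), u(0) = u0, by turning the ODE into the algebraic
    system D c = f plus one extra row enforcing the initial condition."""
    D = derivative_matrix(K)
    A = np.zeros((K + 1, K + 1))
    b = np.zeros(K + 1)
    A[:K, :] = D[:K, :]                               # match derivative coefficients
    b[:K] = np.pad(f_coeffs, (0, K - len(f_coeffs)))[:K]
    A[K, 0] = 1.0                                     # u(0) = c_0
    b[K] = u0
    return np.linalg.solve(A, b)                      # coefficients of u
```

For instance, with f(x) = 2x and u(0) = 0 the solver returns the coefficients of u(x) = x². The paper's method follows the same pattern, but with operational matrices for fractional derivatives in a Jacobi basis.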
Duque, María Osley Garzón; Bernal, Diana Restrepo; Cardona, Doris Alejandra Segura; Vargas, Alejandra Valencia; Salas, Ivony Agudelo; Quintero, Lina Marcela Salazar
2014-01-01
To examine, from the point of view of a group of epidemiologists in training, their life experiences and work related to addressing mental health problems and mental health issues. An exploratory qualitative-descriptive study was conducted using ethnographic tools, non-participant observation, note-taking, and group interviews (FG). The participants mentioned that mental health and mental illness are poorly differentiated, both by them and by the community in general. They also said that they were not prepared to handle mental health problems and lacked the support of patient-care services, as mental health issues have not yet been clearly dimensioned by society. Epidemiology has its limitations: it focuses on knowledge of physical-biological aspects and on a quantitative approach, with poor integration of the qualitative approach, thus hindering the understanding of a phenomenon that exceeds the limits of a single research approach. This approach to issues of health and mental illness widens the view of knowledge beyond a single focus. It includes an understanding of the qualitative approach as an option to advance the knowledge and recognition of a public health problem overshadowed by stigma and the apathy of society. Copyright © 2014 Asociación Colombiana de Psiquiatría. Publicado por Elsevier España. All rights reserved.
DEFF Research Database (Denmark)
Müller, Emmanuel; Assent, Ira; Günnemann, Stephan
2011-01-01
comparative studies on the advantages and disadvantages of the different algorithms exist. Part of the underlying problem is the lack of available open source implementations that could be used by researchers to understand, compare, and extend subspace and projected clustering algorithms. In this work, we...
Multisymplectic Structure-Preserving in Simple Finite Element Method in High Dimensional Case
Institute of Scientific and Technical Information of China (English)
BAI Yong-Qiang; LIU Zhen; PEI Ming; ZHENG Zhu-Jun
2003-01-01
In this paper, we study a finite element scheme for some semi-linear elliptic boundary value problems in high-dimensional space. With a uniform mesh, we find that the numerical scheme derived from the finite element method can keep a preserved multisymplectic structure.
Directory of Open Access Journals (Sweden)
Rutkowska Katarzyna
2017-12-01
Full Text Available Purpose. One of the keys to identifying health problems from a holistic perspective is knowledge of the Type D (distressed) personality. Diagnosing this personality type among female football players may help sports psychologists, coaches, parents/caregivers, and all those engaged in training new entrants to the sport develop guidelines on how to address the problem. Methods. The study involved female footballers representing a Polish Ekstraliga football club, AZS-PSW Biała Podlaska, and was conducted with the use of the Polish adaptation of the DS14 scale. Results. In a group of 21 footballers, 7 (33.3%) were diagnosed with Type D personality. In addition, a negative correlation was noted between the level of satisfaction with playing football and one of the dimensions of Type D personality, negative emotionality. Conclusions. The results of the study may be applicable in formulating practical recommendations while preparing mental training programmes.
Furlong, Mairead; McGilloway, Sinead; Bywater, Tracey; Hutchings, Judy; Smith, Susan M; Donnelly, Michael
2013-03-07
Early-onset child conduct problems are common and costly. A large number of studies and some previous reviews have focused on behavioural and cognitive-behavioural group-based parenting interventions, but methodological limitations are commonplace and evidence for the effectiveness and cost-effectiveness of these programmes has been unclear. To assess the effectiveness and cost-effectiveness of behavioural and cognitive-behavioural group-based parenting programmes for improving child conduct problems, parental mental health and parenting skills. We searched the following databases between 23 and 31 January 2011: CENTRAL (2011, Issue 1), MEDLINE (1950 to current), EMBASE (1980 to current), CINAHL (1982 to current), PsycINFO (1872 to current), Social Science Citation Index (1956 to current), ASSIA (1987 to current), ERIC (1966 to current), Sociological Abstracts (1963 to current), Academic Search Premier (1970 to current), Econlit (1969 to current), PEDE (1980 to current), Dissertations and Theses Abstracts (1980 to present), NHS EED (searched 31 January 2011), HEED (searched 31 January 2011), DARE (searched 31 January 2011), HTA (searched 31 January 2011), mRCT (searched 29 January 2011). We searched the following parent training websites on 31 January 2011: Triple P Library, Incredible Years Library and Parent Management Training. We also searched the reference lists of studies and reviews. We included studies if: (1) they involved randomised controlled trials (RCTs) or quasi-randomised controlled trials of behavioural and cognitive-behavioural group-based parenting interventions for parents of children aged 3 to 12 years with conduct problems, and (2) incorporated an intervention group versus a waiting list, no treatment or standard treatment control group. We only included studies that used at least one standardised instrument to measure child conduct problems. Two authors independently assessed the risk of bias in the trials and the methodological quality of
Haroon, Munib
2013-03-07
This is a commentary on a Cochrane review, published in this issue of EBCH, first published as: Furlong M, McGilloway S, Bywater T, Hutchings J, Smith SM, Donnelly M. Behavioural and cognitive-behavioural group-based parenting programmes for early-onset conduct problems in children aged 3 to 12 years. Cochrane Database of Systematic Reviews 2012, Issue 2. Art. No.: CD008225. DOI: 10.1002/14651858.CD008225.pub2. Copyright © 2013 The Cochrane Collaboration. Published by John Wiley & Sons, Ltd.
Directory of Open Access Journals (Sweden)
Igor V. Karyakin
2017-01-01
Full Text Available From 19 to 24 September 2016, the VII International Conference of the Working Group on Raptors of Northern Eurasia, “Birds of prey of Northern Eurasia: problems and adaptation under modern conditions”, was held at the Sochi National Park. Materials for the conference were presented by 198 ornithologists from Russia, Ukraine, Belarus, Kazakhstan, Moldova, Turkmenistan, Austria, Great Britain, Hungary, Mongolia, Poland, Estonia and the USA, who published 148 articles in two collections, “Birds of prey of Northern Eurasia” and “Palearctic Harriers”.
Shahiri, Amirah Mohamed; Husain, Wahidah; Rashid, Nur'Aini Abd
2017-10-01
Huge amounts of data in educational datasets may cause problems in producing quality data. Recently, data mining approaches have been increasingly used by educational data mining researchers to analyze patterns in the data. However, many research studies have concentrated on selecting suitable learning algorithms instead of performing a feature selection process. As a result, these data suffer from computational complexity and require longer computational time for classification. The main objective of this research is to provide an overview of feature selection techniques that have been used to analyze the most significant features. This research then proposes a framework to improve the quality of students' datasets. The proposed framework uses filter- and wrapper-based techniques to support the prediction process in future studies.
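The filter idea mentioned in this abstract can be illustrated with a minimal sketch: rank features by absolute correlation with the outcome and keep the top ones, before any classifier is trained (a wrapper method would instead score candidate subsets with the classifier itself). The dataset, feature names and thresholds below are synthetic illustrations, not from the paper.

```python
import random

def pearson(xs, ys):
    # plain Pearson correlation coefficient
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    vx = sum((a - mx) ** 2 for a in xs) ** 0.5
    vy = sum((b - my) ** 2 for b in ys) ** 0.5
    return cov / (vx * vy) if vx and vy else 0.0

random.seed(5)
n = 200
grade = [random.gauss(70, 10) for _ in range(n)]            # outcome
features = {
    "attendance": [g / 10 + random.gauss(0, 1) for g in grade],   # informative
    "hours_study": [g / 20 + random.gauss(0, 3) for g in grade],  # weaker
    "shoe_size": [random.gauss(40, 2) for _ in range(n)],         # pure noise
}
# filter step: rank features by |correlation| with the outcome
ranked = sorted(features, key=lambda f: -abs(pearson(features[f], grade)))
print(ranked)
```

A wrapper would replace the correlation key with cross-validated accuracy of a classifier trained on each candidate subset, which is more expensive but model-aware.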
On estimation of the noise variance in high-dimensional linear models
Golubev, Yuri; Krymova, Ekaterina
2017-01-01
We consider the problem of recovering the unknown noise variance in the linear regression model. To estimate the nuisance (a vector of regression coefficients) we use a family of spectral regularisers of the maximum likelihood estimator. The noise estimation is based on the adaptive normalisation of the squared error. We derive the upper bound for the concentration of the proposed method around the ideal estimator (the case of zero nuisance).
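As a minimal sketch of the idea in this abstract: fit the regression with a regulariser (ridge is one member of the spectral family), then normalise the squared residuals adaptively by the effective residual degrees of freedom n − tr(H), where H is the hat matrix of the regularised fit. The one-regressor setting and all parameter values are illustrative, not the paper's estimator.

```python
import math
import random

random.seed(0)
n, sigma = 200, 0.5
x = [random.gauss(0, 1) for _ in range(n)]
y = [2.0 * xi + random.gauss(0, sigma) for xi in x]   # y = 2x + noise

lam = 1.0                                  # ridge penalty (illustrative)
sxx = sum(xi * xi for xi in x)
b_hat = sum(xi * yi for xi, yi in zip(x, y)) / (sxx + lam)

# for a single regressor the ridge hat matrix has trace sxx / (sxx + lam)
tr_h = sxx / (sxx + lam)
rss = sum((yi - b_hat * xi) ** 2 for xi, yi in zip(x, y))
sigma2_hat = rss / (n - tr_h)              # adaptively normalised squared error
print(round(math.sqrt(sigma2_hat), 2))
```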
Bönisch, A; Dorn, M; Ehlebracht-König, I
2012-01-01
To analyze the short-term efficacy of the Vocational Perspective programme for patients identified as having extensive work-related problems during rheumatology or orthopaedic inpatient rehabilitation. The primary objectives of the programme at the patient level are to convey information about the legal provisions regarding earning incapacity and occupational reintegration, to suggest strategies for dealing with one's own occupational situation, and to strengthen the motivation to stay employed. The programme is explicitly designed for patients who wish to retire or have applied for a pension. On the systemic level, the main goals are to facilitate doctor-patient communication and to increase rehabilitation teams' awareness of occupational problems. In a controlled quasi-experimental design, 359 subjects were consecutively assigned to either the control group (CG, n=177) or the intervention group (IG, n=182). The control group received standard care only, whereas the intervention group additionally participated in the 5-part Vocational Perspective programme. Evaluation criteria were assessed by questionnaire at the beginning (t1) and at the end (t2) of rehabilitation. Survey participation was 92.2% at t2. The socio-medically relevant knowledge status was objectively documented using a specially designed knowledge questionnaire. Aspects of treatment satisfaction were evaluated using individual items, and the subjective prognosis of gainful employment was assessed using the Subjective Prognosis of Gainful Employment (SPE) scale. Facilitation of communication between doctor and patient was operationalized at the patient level in terms of patient satisfaction with medical care, and increased awareness of the rehabilitation team was operationalized in terms of the rate of recommendations to apply for vocational reintegration (LTA) services at discharge. Emotional and functional parameters were exploratively analyzed (anxiety and depression using the IRES 3.1 scales, and
International Nuclear Information System (INIS)
Zhang, Liangwei; Lin, Jing; Karim, Ramin
2015-01-01
The accuracy of traditional anomaly detection techniques implemented on full-dimensional spaces degrades significantly as dimensionality increases, thereby hampering many real-world applications. This work proposes an approach to selecting a meaningful feature subspace and conducting anomaly detection in the corresponding subspace projection. The aim is to maintain the detection accuracy in high-dimensional circumstances. The suggested approach assesses the angle between all pairs of two lines for one specific anomaly candidate: the first line is connected by the relevant data point and the center of its adjacent points; the other line is one of the axis-parallel lines. Those dimensions which have a relatively small angle with the first line are then chosen to constitute the axis-parallel subspace for the candidate. Next, a normalized Mahalanobis distance is introduced to measure the local outlier-ness of an object in the subspace projection. To comprehensively compare the proposed algorithm with several existing anomaly detection techniques, we constructed artificial datasets with various high-dimensional settings and found the algorithm displayed superior accuracy. A further experiment on an industrial dataset demonstrated the applicability of the proposed algorithm in fault detection tasks and highlighted another of its merits, namely, to provide preliminary interpretation of abnormality through feature ordering in relevant subspaces. - Highlights: • An anomaly detection approach for high-dimensional reliability data is proposed. • The approach selects relevant subspaces by assessing vectorial angles. • The novel ABSAD approach displays superior accuracy over other alternatives. • Numerical illustration confirms its efficacy in fault detection applications
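The angle-based subspace selection described above can be sketched as follows: for each candidate point, take the line from the point to the centroid of its k nearest neighbours; an axis whose angle to that line is small has a large |cosine| |v_t|/‖v‖, and those axes form the candidate's relevant subspace, in which the point is scored by a variance-normalised distance. Function names, parameters and the variance floor are illustrative, not the paper's exact algorithm.

```python
import math
import random

def absad_score(data, idx, k=5, keep=2):
    p, d = data[idx], len(data[idx])
    neigh = sorted((j for j in range(len(data)) if j != idx),
                   key=lambda j: math.dist(data[j], p))[:k]
    centroid = [sum(data[j][t] for j in neigh) / k for t in range(d)]
    v = [p[t] - centroid[t] for t in range(d)]        # reference line
    norm = math.sqrt(sum(c * c for c in v)) or 1.0
    # small angle with axis t  <->  large |cos| = |v_t| / ||v||
    axes = sorted(range(d), key=lambda t: -abs(v[t]) / norm)[:keep]
    score = 0.0
    for t in axes:
        var = sum((data[j][t] - centroid[t]) ** 2 for j in neigh) / k
        score += (p[t] - centroid[t]) ** 2 / max(var, 0.1)  # variance floor
    return math.sqrt(score)

random.seed(1)
data = [[random.gauss(0, 1) for _ in range(10)] for _ in range(100)]
data.append([0.0] * 8 + [10.0, 10.0])                 # planted 2-d outlier
scores = [absad_score(data, i) for i in range(len(data))]
print(scores.index(max(scores)))                      # index of top-scoring point
```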
Garashchuk, Sophya; Rassolov, Vitaly A
2008-07-14
Semiclassical implementation of the quantum trajectory formalism [J. Chem. Phys. 120, 1181 (2004)] is further developed to give a stable long-time description of zero-point energy in anharmonic systems of high dimensionality. The method is based on a numerically cheap linearized quantum force approach; stabilizing terms compensating for the linearization errors are added into the time-evolution equations for the classical and nonclassical components of the momentum operator. The wave function normalization and energy are rigorously conserved. Numerical tests are performed for model systems of up to 40 degrees of freedom.
Benediktsson, J. A.; Swain, P. H.; Ersoy, O. K.
1993-01-01
Application of neural networks to classification of remote sensing data is discussed. Conventional two-layer backpropagation is found to give good results in classification of remote sensing data but is not efficient in training. A more efficient variant, based on conjugate-gradient optimization, is used for classification of multisource remote sensing and geographic data and very-high-dimensional data. The conjugate-gradient neural networks give excellent performance in classification of multisource data, but do not compare as well with statistical methods in classification of very-high-dimensional data.
Kobuke, Yuko
2017-01-01
In the revision of the pharmaceutical education model core curriculum, the "basic qualities required of a pharmacist" are clearly stated, and a method based on learning outcomes has been adopted. One of the 10 qualities (No. 7) is "practical ability in health and medical care in the community". In the large item "F. Pharmaceutical clinical" of the model core curriculum, "participation in home (visit) medical care and nursing care" is written under "participation in the health, medical care, and welfare of the community", and offering opportunities for home medical care education at university is an important problem. At our university, we launched a working group to create "home clinical cases for education" from an educational point of view, so that pharmacy students can learn home medical care, in collaboration with university faculty members and pharmacists who are practitioners of home care. Through the working group's activities, we would like to survey the present conditions and problems of home care education in pharmaceutical education and to examine the possibility of using "home clinical case studies" in home care education at university.
A Third-Order p-Laplacian Boundary Value Problem Solved by an SL(3,ℝ) Lie-Group Shooting Method
Directory of Open Access Journals (Sweden)
Chein-Shan Liu
2013-01-01
Full Text Available The boundary layer problem for power-law fluid can be recast to a third-order p-Laplacian boundary value problem (BVP). In this paper, we transform the third-order p-Laplacian into a new system which exhibits a Lie-symmetry SL(3,ℝ). Then, the closure property of the Lie-group is used to derive a linear transformation between the boundary values at two ends of a spatial interval. Hence, we can iteratively solve the missing left boundary conditions, which are determined by matching the right boundary conditions through a finer tuning of r∈[0,1]. The present SL(3,ℝ) Lie-group shooting method is easily implemented and is efficient to tackle the multiple solutions of the third-order p-Laplacian. When the missing left boundary values can be determined accurately, we can apply the fourth-order Runge-Kutta (RK4) method to obtain a quite accurate numerical solution of the p-Laplacian.
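The shooting idea itself (not the SL(3,ℝ) Lie-group machinery) can be sketched on a linear test problem: integrate a third-order ODE with RK4 from x = 0 and adjust the missing initial value u''(0) by the secant method until the right boundary condition u'(1) = 0 is met. The test equation u''' = −1 with u(0) = u'(0) = 0 is purely illustrative; its analytic missing value is u''(0) = 1/2.

```python
def rk4_shoot(s, h=0.01):
    # integrate the state (u, u', u'') over [0, 1]; return u'(1) for guess u''(0) = s
    def f(x, y):                         # right-hand side; x unused for this test ODE
        return (y[1], y[2], -1.0)
    y, x = (0.0, 0.0, s), 0.0            # u(0) = u'(0) = 0
    while x < 1.0 - 1e-12:
        k1 = f(x, y)
        k2 = f(x + h / 2, tuple(a + h / 2 * b for a, b in zip(y, k1)))
        k3 = f(x + h / 2, tuple(a + h / 2 * b for a, b in zip(y, k2)))
        k4 = f(x + h, tuple(a + h * b for a, b in zip(y, k3)))
        y = tuple(a + h / 6 * (p + 2 * q + 2 * r + t)
                  for a, p, q, r, t in zip(y, k1, k2, k3, k4))
        x += h
    return y[1]                          # u'(1)

s0, s1 = 0.0, 1.0                        # secant iteration on the guess u''(0)
for _ in range(20):
    g0, g1 = rk4_shoot(s0), rk4_shoot(s1)
    if abs(g1) < 1e-10 or g1 == g0:      # boundary condition met / stalled
        break
    s0, s1 = s1, s1 - g1 * (s1 - s0) / (g1 - g0)
print(round(s1, 3))                      # analytic value of the missing u''(0) is 1/2
```

For the nonlinear p-Laplacian, the same loop applies with the nonlinear right-hand side, but multiple solutions then demand careful bracketing of the initial guesses, which is where the Lie-group structure in the paper helps.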
Pastore, Valentina; Colombo, Katia; Maestroni, Deborah; Galbiati, Susanna; Villa, Federica; Recla, Monica; Locatelli, Federica; Strazzer, Sandra
2015-01-01
This study aims to describe psychological problems, self-esteem difficulties and body dissatisfaction in a sample of adolescents with acquired brain lesions and to compare them with an age- and gender-matched control group. In an experimental design, the psychological profile of 26 adolescents with brain lesions of traumatic or vascular aetiology, aged 12-18 years, was compared with that of 18 typically-developing subjects. Moreover, within the clinical group, patients with TBI were compared with patients with vascular lesions. The psychological and adaptive profile of the adolescents was assessed by a specific protocol, including the CBCL, VABS, RSES, EDI-2 and BES. Adolescents with brain lesions showed more marked psychological problems than their healthy peers; they also presented with a greater impairment of adaptive skills and a lower self-esteem. No significant differences were found between patients with traumatic lesions and patients with vascular lesions. Adolescents with acquired brain lesions were at higher risk of developing psychological and behavioural difficulties. Furthermore, in the clinical sample, some variables, such as long hospitalization and isolation from family and peers, were associated with a greater psychological burden than the aetiology of the brain damage.
González-Alcaide, Gregorio; Castelló-Cogollos, Lourdes; Castellano-Gómez, Miguel; Agullo-Calatayud, Víctor; Aleixandre-Benavent, Rafael; Alvarez, Francisco Javier; Valderrama-Zurián, Juan Carlos
2013-01-01
The research of alcohol consumption-related problems is a multidisciplinary field. The aim of this study is to analyze the worldwide scientific production in the area of alcohol-drinking and alcohol-related problems from 2005 to 2009. A MEDLINE and Scopus search on alcohol (alcohol-drinking and alcohol-related problems) published from 2005 to 2009 was carried out. Using bibliometric indicators, the distribution of the publications was determined within the journals that publish said articles, specialty of the journal (broad subject terms), article type, language of the publication, and country where the journal is published. Also, authorship characteristics were assessed (collaboration index and number of authors who have published more than 9 documents). The existing research groups were also determined. About 24,100 documents on alcohol, published in 3,862 journals, and authored by 69,640 authors were retrieved from MEDLINE and Scopus between the years 2005 and 2009. The collaboration index of the articles was 4.83 ± 3.7. The number of consolidated research groups in the field was identified as 383, with 1,933 authors. Documents on alcohol were published mainly in journals covering the field of "Substance-Related Disorders," 23.18%, followed by "Medicine," 8.7%, "Psychiatry," 6.17%, and "Gastroenterology," 5.25%. Research on alcohol is a consolidated field, with an average of 4,820 documents published each year between 2005 and 2009 in MEDLINE and Scopus. Alcohol-related publications have a marked multidisciplinary nature. Collaboration was common among alcohol researchers. There is an underrepresentation of alcohol-related publications in languages other than English and from developing countries, in MEDLINE and Scopus databases. Copyright © 2012 by the Research Society on Alcoholism.
Lo Coco, Gianluca; Mannino, Giuseppe; Salerno, Laura; Oieni, Veronica; Di Fratello, Carla; Profita, Gabriele; Gullo, Salvatore
2018-01-01
All versions of the Inventory of Interpersonal Problems (IIP) are broadly used to measure people's interpersonal functioning. The aims of the current study are: (a) to examine the psychometric properties and factor structure of the Italian version of the Inventory of Interpersonal Problems-short version (IIP-32); and (b) to evaluate its associations with core symptoms of different eating disorders. One thousand two hundred and twenty-three participants (n = 623 non-clinical and n = 600 clinical participants with eating disorders and obesity) filled out the Inventory of Interpersonal Problems-short version (IIP-32) along with measures of self-esteem (Rosenberg Self-Esteem Scale, RSES), psychological functioning (Outcome Questionnaire, OQ-45), and eating disorders (Eating Disorder Inventory, EDI-3). The present study examined the eight-factor structure of the IIP-32 with Confirmatory Factor Analysis (CFA) and Exploratory Structural Equation Modeling (ESEM). ESEM was also used to test the measurement invariance of the IIP-32 across clinical and non-clinical groups. It was found that the CFA had an unsatisfactory model fit, whereas the corresponding ESEM solution provided a better fit to the observed data. However, six target factor loadings tended to be modest, and ten items showed cross-loadings higher than 0.30. The configural and metric invariance as well as the scalar and partial strict invariance of the IIP-32 were supported across clinical and non-clinical groups. The internal consistency of the IIP-32 was acceptable and the construct validity was confirmed by significant correlations between the IIP-32, RSES, and OQ-45. Furthermore, overall interpersonal difficulties were consistently associated with core eating disorder symptoms, whereas interpersonal styles that reflect the inability to form close relationships, social awkwardness, the inability to be assertive, and a tendency to self-sacrifice were positively associated with general psychological maladjustment.
Shrinkage-based diagonal Hotelling’s tests for high-dimensional small sample size data
Dong, Kai
2015-09-16
DNA sequencing techniques bring novel tools and also statistical challenges to genetic research. In addition to detecting differentially expressed genes, testing the significance of gene sets or pathway analysis has been recognized as an equally important problem. Owing to the “large p, small n” paradigm, the traditional Hotelling’s T² test suffers from the singularity problem and therefore is not valid in this setting. In this paper, we propose a shrinkage-based diagonal Hotelling’s test for both one-sample and two-sample cases. We also suggest several different ways to derive the approximate null distribution under different scenarios of p and n for our proposed shrinkage-based test. Simulation studies show that the proposed method performs comparably to existing competitors when n is moderate or large, but it is better when n is small. In addition, we analyze four gene expression data sets and they demonstrate the advantage of our proposed shrinkage-based diagonal Hotelling’s test.
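A minimal one-sample sketch of the statistic described above: a diagonal Hotelling-type statistic replaces the sample covariance by its diagonal, so it stays defined when p > n, and the per-gene variances are shrunk towards their common mean to stabilise small-n estimates. The fixed shrinkage weight `alpha` is an illustration, not the paper's optimal choice, and the null calibration is omitted.

```python
import random

def diag_hotelling(X, mu0, alpha=0.5):
    # n observations, p variables; shrink each variance towards the pooled mean
    n, p = len(X), len(X[0])
    xbar = [sum(row[j] for row in X) / n for j in range(p)]
    s2 = [sum((row[j] - xbar[j]) ** 2 for row in X) / (n - 1) for j in range(p)]
    pooled = sum(s2) / p
    s2_shrunk = [alpha * pooled + (1 - alpha) * v for v in s2]
    return n * sum((xbar[j] - mu0[j]) ** 2 / s2_shrunk[j] for j in range(p))

random.seed(2)
n, p = 10, 200                        # the "large p, small n" regime
null = [[random.gauss(0, 1) for _ in range(p)] for _ in range(n)]
shift = [[random.gauss(0.8, 1) for _ in range(p)] for _ in range(n)]
mu0 = [0.0] * p
t_null, t_alt = diag_hotelling(null, mu0), diag_hotelling(shift, mu0)
print(t_null < t_alt)
```

The classical Hotelling statistic would require inverting a p×p sample covariance of rank at most n − 1, which is singular here; the diagonal form sidesteps that entirely.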
Sparse Learning of the Disease Severity Score for High-Dimensional Data
Directory of Open Access Journals (Sweden)
Ivan Stojkovic
2017-01-01
Full Text Available Learning disease severity scores automatically from collected measurements may aid in the quality of both healthcare and scientific understanding. Some steps in that direction have been taken and machine learning algorithms for extracting scoring functions from data have been proposed. Given the rapid increase in both quantity and diversity of data measured and stored, the large amount of information is becoming one of the challenges for learning algorithms. In this work, we investigated the setting in which the dimensionality of the measured variables is large. Learning the severity score in such cases raises the issue of which of the measured features are relevant. We have proposed a novel approach that combines desirable properties of existing formulations and compares favorably to alternatives in accuracy and especially in the robustness of the learned scoring function. The proposed formulation has a nonsmooth penalty that induces sparsity. This problem is solved by addressing a dual formulation which is smooth and allows an efficient optimization. The proposed approach might be used as an effective and reliable tool for both scoring function learning and biomarker discovery, as demonstrated by identifying a stable set of genes related to influenza symptoms’ severity, which are enriched in immune-related processes.
Shrinkage-based diagonal Hotelling’s tests for high-dimensional small sample size data
Dong, Kai; Pang, Herbert; Tong, Tiejun; Genton, Marc G.
2015-01-01
DNA sequencing techniques bring novel tools and also statistical challenges to genetic research. In addition to detecting differentially expressed genes, testing the significance of gene sets or pathway analysis has been recognized as an equally important problem. Owing to the “large p, small n” paradigm, the traditional Hotelling’s T² test suffers from the singularity problem and therefore is not valid in this setting. In this paper, we propose a shrinkage-based diagonal Hotelling’s test for both one-sample and two-sample cases. We also suggest several different ways to derive the approximate null distribution under different scenarios of p and n for our proposed shrinkage-based test. Simulation studies show that the proposed method performs comparably to existing competitors when n is moderate or large, but it is better when n is small. In addition, we analyze four gene expression data sets and they demonstrate the advantage of our proposed shrinkage-based diagonal Hotelling’s test.
Robust Learning of High-dimensional Biological Networks with Bayesian Networks
Nägele, Andreas; Dejori, Mathäus; Stetter, Martin
Structure learning of Bayesian networks applied to gene expression data has become a potentially useful method to estimate interactions between genes. However, the NP-hardness of Bayesian network structure learning renders the reconstruction of the full genetic network with thousands of genes unfeasible. Consequently, the maximal network size is usually restricted dramatically to a small set of genes (corresponding with variables in the Bayesian network). Although this feature reduction step makes structure learning computationally tractable, on the downside, the learned structure might be adversely affected due to the introduction of missing genes. Additionally, gene expression data are usually very sparse with respect to the number of samples, i.e., the number of genes is much greater than the number of different observations. Given these problems, learning robust network features from microarray data is a challenging task. This chapter presents several approaches tackling the robustness issue in order to obtain a more reliable estimation of learned network features.
International Nuclear Information System (INIS)
Tercariol, Cesar Augusto Sangaletti; Kiipper, Felipe de Moura; Martinez, Alexandre Souto
2007-01-01
Consider that the coordinates of N points are randomly generated along the edges of a d-dimensional hypercube (random point problem). The probability P^{(d,N)}_{m,n} that an arbitrary point is the mth nearest neighbour of its own nth nearest neighbour (a Cox probability) plays an important role in spatial statistics. Also, it has been useful in the description of physical processes in disordered media. Here we propose a simpler derivation of Cox probabilities, where we stress the role played by the system dimensionality d. In the limit d → ∞, the distances between pairs of points become independent (random link model) and closed analytical forms for the neighbourhood probabilities are obtained both for the thermodynamic limit and for finite-size systems. Breaking the distance symmetry constraint drives us to the random map model, for which the Cox probabilities are obtained for two cases: whether a point is its own nearest neighbour or not
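A Cox probability is easy to estimate by Monte Carlo, which makes the setup concrete: scatter N points uniformly in a d-dimensional hypercube and count how often a point is the mth nearest neighbour of its own nth nearest neighbour. For d = 1 and m = n = 1, the mutual-nearest-neighbour probability is known to approach 2/3 for many points; the sketch below (with illustrative N and trial counts) recovers that value approximately.

```python
import random

def cox_probability(d, N, m, n, trials=300, rng=random.Random(3)):
    hits = 0
    for _ in range(trials):
        pts = [[rng.random() for _ in range(d)] for _ in range(N)]
        # indices of all points sorted by squared distance to point a (self last)
        by_dist = lambda a: sorted(
            range(N),
            key=lambda j: float('inf') if j == a
            else sum((pts[a][t] - pts[j][t]) ** 2 for t in range(d)))
        i = 0                                  # focal point
        j = by_dist(i)[n - 1]                  # i's n-th nearest neighbour
        if by_dist(j)[m - 1] == i:             # is i the m-th neighbour of j?
            hits += 1
    return hits / trials

est = cox_probability(d=1, N=20, m=1, n=1)
print(round(est, 2))
```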
Integrative Modeling and Inference in High Dimensional Genomic and Metabolic Data
DEFF Research Database (Denmark)
Brink-Jensen, Kasper
in Manuscript I preserves the attributes of the compounds found in LC–MS samples while identifying genes highly associated with these. The main obstacles that must be overcome with this approach are dimension reduction and variable selection, here done with PARAFAC and LASSO respectively. One important drawback...... of the LASSO has been the lack of inference, the variables selected could potentially just be the most important from a set of non–important variables. Manuscript II addresses this problem with a permutation based significance test for the variables chosen by the LASSO. Once a set of relevant variables has......, particularly it scales to many lists and it provides an intuitive interpretation of the measure....
Directory of Open Access Journals (Sweden)
Omid Hamidi
2014-01-01
Full Text Available Microarray technology results in high-dimensional and low-sample-size data sets. Therefore, fitting sparse models is substantial because only a small number of influential genes can reliably be identified. A number of variable selection approaches have been proposed for high-dimensional time-to-event data based on Cox proportional hazards where censoring is present. The present study applied three sparse variable selection techniques, the Lasso, smoothly clipped absolute deviation, and the smooth integration of counting and absolute deviation, to gene expression survival time data using the additive risk model, which is adopted when the absolute effects of multiple predictors on the hazard function are of interest. The performances of the techniques were evaluated by time-dependent ROC curves and bootstrap .632+ prediction error curves. The genes selected by all methods were highly significant (P<0.001). The Lasso showed the maximum median area under the ROC curve over time (0.95), and smoothly clipped absolute deviation showed the lowest prediction error (0.105). It was observed that the genes selected by all methods improved the prediction of the purely clinical model, indicating the valuable information contained in the microarray features. It was therefore concluded that the approaches used can satisfactorily predict survival based on the selected gene expression measurements.
Energy Technology Data Exchange (ETDEWEB)
Zawadzka-Kazimierczuk, Anna; Kozminski, Wiktor [University of Warsaw, Faculty of Chemistry (Poland); Billeter, Martin, E-mail: martin.billeter@chem.gu.se [University of Gothenburg, Biophysics Group, Department of Chemistry and Molecular Biology (Sweden)
2012-09-15
While NMR studies of proteins typically aim at structure, dynamics or interactions, resonance assignments represent in almost all cases the initial step of the analysis. With increasing complexity of the NMR spectra, for example due to decreasing extent of ordered structure, this task often becomes both difficult and time-consuming, and the recording of high-dimensional data with high resolution may be essential. Random sampling of the evolution time space, combined with sparse multidimensional Fourier transform (SMFT), allows for efficient recording of very high dimensional spectra (≥4 dimensions) while maintaining high resolution. However, the nature of this data demands automation of the assignment process. Here we present the program TSAR (Tool for SMFT-based Assignment of Resonances), which exploits all advantages of SMFT input. Moreover, its flexibility allows processing data from any type of experiment that provides sequential connectivities. The algorithm was tested on several protein samples, including a disordered 81-residue fragment of the δ subunit of RNA polymerase from Bacillus subtilis containing various repetitive sequences. For our test examples, TSAR achieves a high percentage of assigned residues without any erroneous assignments.
Ren, Jie; He, Tao; Li, Ye; Liu, Sai; Du, Yinhao; Jiang, Yu; Wu, Cen
2017-05-16
Over the past decades, the prevalence of type 2 diabetes mellitus (T2D) has been steadily increasing around the world. Despite large efforts devoted to better understand the genetic basis of the disease, the identified susceptibility loci can only account for a small portion of the T2D heritability. Some of the existing approaches proposed for the high dimensional genetic data from the T2D case-control study are limited by analyzing a small number of SNPs at a time from a large pool of SNPs, by ignoring the correlations among SNPs and by adopting inefficient selection techniques. We propose a network constrained regularization method to select important SNPs by taking the linkage disequilibrium into account. To accommodate the case-control study, an iteratively reweighted least squares algorithm has been developed within the coordinate descent framework, where optimization of the regularized logistic loss function is performed with respect to one parameter at a time and iteratively cycles through all the parameters until convergence. In this article, a novel approach is developed to identify important SNPs more effectively through incorporating the interconnections among them in the regularized selection. A coordinate descent based iteratively reweighted least squares (IRLS) algorithm has been proposed. Both the simulation study and the analysis of the Nurses' Health Study, a case-control study of type 2 diabetes data with high dimensional SNP measurements, demonstrate the advantage of the network based approach over the competing alternatives.
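The optimisation loop described above can be sketched in miniature: logistic regression fitted by iteratively reweighted least squares, cycling through one coefficient at a time as in coordinate descent. A plain ridge penalty `lam` stands in here for the paper's network (linkage-disequilibrium) penalty, and the data are synthetic; none of this is the authors' implementation.

```python
import math
import random

def irls_logistic(X, y, lam=0.1, sweeps=50):
    n, p = len(X), len(X[0])
    beta = [0.0] * p
    for _ in range(sweeps):
        eta = [sum(X[i][j] * beta[j] for j in range(p)) for i in range(n)]
        mu = [1.0 / (1.0 + math.exp(-e)) for e in eta]
        w = [m * (1.0 - m) for m in mu]            # IRLS weights
        for j in range(p):                         # one coordinate at a time
            wxx = sum(w[i] * X[i][j] ** 2 for i in range(n))
            grad = sum(X[i][j] * (y[i] - mu[i]) for i in range(n))
            # fixed point satisfies the ridge-penalised score equation grad = lam * beta_j
            beta[j] = (grad + wxx * beta[j]) / (wxx + lam)
    return beta

random.seed(4)
true_b = [1.5, -2.0, 0.0]
X = [[random.gauss(0, 1) for _ in range(3)] for _ in range(300)]
prob = lambda row: 1.0 / (1.0 + math.exp(-sum(b * v for b, v in zip(true_b, row))))
y = [1 if random.random() < prob(row) else 0 for row in X]
beta = irls_logistic(X, y)
print([round(b, 2) for b in beta])
```

A network penalty would replace `lam * beta_j` with a graph-Laplacian term that couples each SNP's coefficient to those of its linked neighbours.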
Privacy-Preserving Distributed Linear Regression on High-Dimensional Data
Directory of Open Access Journals (Sweden)
Gascón Adrià
2017-10-01
Full Text Available We propose privacy-preserving protocols for computing linear regression models, in the setting where the training dataset is vertically distributed among several parties. Our main contribution is a hybrid multi-party computation protocol that combines Yao’s garbled circuits with tailored protocols for computing inner products. Like many machine learning tasks, building a linear regression model involves solving a system of linear equations. We conduct a comprehensive evaluation and comparison of different techniques for securely performing this task, including a new Conjugate Gradient Descent (CGD) algorithm. This algorithm is suitable for secure computation because it uses an efficient fixed-point representation of real numbers while maintaining accuracy and convergence rates comparable to what can be obtained with a classical solution using floating point numbers. Our technique improves on Nikolaenko et al.’s method for privacy-preserving ridge regression (S&P 2013), and can be used as a building block in other analyses. We implement a complete system and demonstrate that our approach is highly scalable, solving data analysis problems with one million records and one hundred features in less than one hour of total running time.
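The fixed-point representation mentioned in this abstract can be illustrated in isolation: reals are scaled by 2^f and stored as integers modulo a large prime, so secure protocols can operate over a finite field; multiplication doubles the scale and must be truncated back. The field size and fractional precision below are illustrative, not the paper's parameters.

```python
PRIME = 2 ** 61 - 1       # finite field modulus (illustrative)
F = 20                    # fractional bits of precision

def encode(x):
    return round(x * (1 << F)) % PRIME

def decode(a):
    if a > PRIME // 2:    # negative values wrap into the top half of the field
        a -= PRIME
    return a / (1 << F)

def fx_mul(a, b):
    # product carries scale 2^(2F); centre it, then truncate back by 2^F
    prod = a * b % PRIME
    if prod > PRIME // 2:
        prod -= PRIME
    return (prod >> F) % PRIME

a, b = encode(3.25), encode(-1.5)
print(decode((a + b) % PRIME))          # addition is exact: 1.75
print(decode(fx_mul(a, b)))             # product: -4.875
```

In an actual MPC protocol the truncation step cannot be done locally on shares and needs its own subprotocol; the sketch only shows the encoding arithmetic.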
He, Ling Yan; Wang, Tie-Jun; Wang, Chuan
2016-07-11
High-dimensional quantum systems provide a higher capacity of quantum channel, which exhibits potential applications in quantum information processing. However, high-dimensional universal quantum logic gates are difficult to achieve directly with only high-dimensional interaction between two quantum systems, and a large number of two-dimensional gates is required to build even a small high-dimensional quantum circuit. In this paper, we propose a scheme to implement a general controlled-flip (CF) gate where the high-dimensional single photon serves as the target qudit and stationary qubits work as the control logic qudit, by employing a three-level Λ-type system coupled with a whispering-gallery-mode microresonator. In our scheme, the required number of interactions between the photon and the solid-state system is greatly reduced compared with the traditional method, which decomposes the high-dimensional Hilbert space into 2-dimensional quantum spaces, and the scheme operates on a shorter temporal scale for experimental realization. Moreover, we discuss the performance and feasibility of our hybrid CF gate, concluding that it can be easily extended to a 2n-dimensional case and it is feasible with current technology.
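To make the gate's action concrete, here is the matrix of one common qudit generalisation of CNOT (the SUM gate), where the target level is shifted by the control level modulo d. This only illustrates the two-qudit computational-basis action of a controlled-flip-type gate, not the cavity-QED implementation the abstract describes.

```python
def controlled_flip(d):
    # |c, t> -> |c, (t + c) mod d>: identity block for c = 0, cyclic shifts otherwise
    dim = d * d
    U = [[0] * dim for _ in range(dim)]
    for c in range(d):              # control level
        for t in range(d):          # target level
            U[c * d + (t + c) % d][c * d + t] = 1
    return U

U = controlled_flip(3)
# each column holds exactly one 1, so U is a permutation matrix (hence unitary)
print(sum(U[r][c] for r in range(9) for c in range(9)))   # 9 nonzero entries
```

For d = 2 this reduces to the ordinary CNOT matrix, which is a quick sanity check on the construction.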
Directory of Open Access Journals (Sweden)
Ottavia Dipasquale
2015-02-01
Full Text Available High dimensional independent component analysis (ICA), compared to low dimensional ICA, allows performing a detailed parcellation of the resting state networks. The purpose of this study was to give further insight into functional connectivity (FC) in Alzheimer’s disease (AD) using high dimensional ICA. For this reason, we performed both low and high dimensional ICA analyses of resting state fMRI (rfMRI) data of 20 healthy controls and 21 AD patients, focusing on the primarily altered default mode network (DMN) and exploring the sensory motor network (SMN). As expected, results obtained at low dimensionality were in line with previous literature. Moreover, high dimensional results allowed us to observe both within-network disconnections and FC damage confined to some of the resting state sub-networks. Due to the higher sensitivity of the high dimensional ICA analysis, our results suggest that high-dimensional decomposition in sub-networks is very promising to better localize FC alterations in AD and that FC damage is not confined to the default mode network.
International Nuclear Information System (INIS)
Alvarenga, M.A.B.
1980-12-01
An analytical procedure to solve the neutron diffusion equation in two dimensions and two energy groups was developed. The response matrix method was used, coupled with an expansion of the neutron flux in finite Fourier series. A computer code 'MRF2D' was elaborated to implement the above-mentioned procedure for PWR reactor core calculations. Different core symmetry options are allowed by the code, which is also flexible enough to allow for improvements by means of algorithm optimization. The code performance was compared with that of a corner-mesh finite difference code named TVEDIM by using an International Atomic Energy Agency (IAEA) standard problem. The MRF2D code requires 12.7% less computer processing time to reach the same precision on the criticality eigenvalue. (Author) [pt
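For orientation, a criticality eigenvalue of the kind both codes compute can be sketched in the simplest possible setting: a one-group, one-dimensional finite-difference diffusion problem (the corner-mesh style of scheme MRF2D is compared against, not the response-matrix/Fourier method itself), solved by power iteration on the fission source. All cross-sections are illustrative; the analytic slab eigenvalue is k = νΣf/(Σa + D(π/L)²).

```python
import math

def thomas(a, b, c, d):
    # solve a tridiagonal system: a = sub-, b = main, c = super-diagonal
    n = len(b)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

D, sig_a, nu_sig_f, L, N = 1.0, 0.5, 0.6, 10.0, 100
h = L / N
n = N - 1                              # interior nodes; zero flux at both edges
a = [-D / h ** 2] * n                  # leakage coupling to the left neighbour
b = [2 * D / h ** 2 + sig_a] * n       # diffusion + absorption
c = [-D / h ** 2] * n                  # coupling to the right neighbour
phi, k = [1.0] * n, 1.0
for _ in range(200):                   # power iteration on the fission source
    src = [nu_sig_f * p / k for p in phi]
    phi = thomas(a, b, c, src)
    k = sum(nu_sig_f * p for p in phi) / sum(src)
    m = max(phi)
    phi = [p / m for p in phi]         # renormalise the flux shape
print(round(k, 4))                     # close to the analytic 0.6/(0.5 + (pi/10)^2) ~ 1.0022
```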