Effects of dependence in high-dimensional multiple testing problems
Kim, K.I.; van de Wiel, M.A.
2008-01-01
Background: We consider effects of dependence among variables of high-dimensional data in multiple hypothesis testing problems, in particular the False Discovery Rate (FDR) control procedures. Recent simulation studies consider only simple correlation structures among variables, which is hardly inspired by real data features.
Inverse Problems and High-Dimensional Estimation
Alquier, Pierre; Stoltz, Gilles
2011-01-01
The "Stats in the Chateau" summer school was held at the CRC chateau on the campus of HEC Paris, Jouy-en-Josas, France, from August 31 to September 4, 2009. This event was organized jointly by faculty members of three French academic institutions (ENSAE ParisTech, the Ecole Polytechnique ParisTech, and HEC Paris) which cooperate through a scientific foundation devoted to the decision sciences. The scientific content of the summer school was conveyed in two courses, one by Laurent Cavalier (Universite Aix-Marseille I) on "Ill-posed Inverse Problems", and one by
Effects of dependence in high-dimensional multiple testing problems
Directory of Open Access Journals (Sweden)
van de Wiel Mark A
2008-02-01
Background: We consider effects of dependence among variables of high-dimensional data in multiple hypothesis testing problems, in particular the False Discovery Rate (FDR) control procedures. Recent simulation studies consider only simple correlation structures among variables, which is hardly inspired by real data features. Our aim is to systematically study effects of several network features like sparsity and correlation strength by imposing dependence structures among variables using random correlation matrices. Results: We study the robustness against dependence of several FDR procedures that are popular in microarray studies, such as Benjamini-Hochberg FDR, Storey's q-value, SAM and resampling based FDR procedures. False Non-discovery Rates and estimates of the number of null hypotheses are computed from those methods and compared. Our simulation study shows that methods such as SAM and the q-value do not adequately control the FDR to the level claimed under dependence conditions. On the other hand, the adaptive Benjamini-Hochberg procedure seems to be most robust while remaining conservative. Finally, the estimates of the number of true null hypotheses under various dependence conditions are variable. Conclusion: We discuss a new method for efficient guided simulation of dependent data, which satisfies imposed network constraints as conditional independence structures. Our simulation set-up allows for a structural study of the effect of dependencies on multiple testing criteria and is useful for testing a potentially new method for π0 or FDR estimation in a dependency context.
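The step-up rule evaluated in this study can be sketched in a few lines. This is a minimal NumPy implementation of the standard (non-adaptive) Benjamini-Hochberg procedure, for illustration only; it is not the authors' code, and the function and variable names are mine.

```python
import numpy as np

def benjamini_hochberg(pvals, alpha=0.05):
    """Return a boolean mask of hypotheses rejected at FDR level alpha."""
    p = np.asarray(pvals, dtype=float)
    m = p.size
    order = np.argsort(p)
    # Step-up thresholds alpha * k / m for the k-th smallest p-value
    thresholds = alpha * np.arange(1, m + 1) / m
    below = p[order] <= thresholds
    rejected = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])    # largest index meeting the criterion
        rejected[order[: k + 1]] = True     # reject all smaller p-values too
    return rejected

pvals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.2, 0.5, 0.9]
print(benjamini_hochberg(pvals, alpha=0.05))
```

Under dependence, as the abstract notes, the adaptive variant (which first estimates the number of true nulls) behaves more robustly than this plain version.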
Sparse group lasso and high dimensional multinomial classification
DEFF Research Database (Denmark)
Vincent, Martin; Hansen, N.R.
2014-01-01
The sparse group lasso optimization problem is solved using a coordinate gradient descent algorithm. The algorithm is applicable to a broad class of convex loss functions. Convergence of the algorithm is established, and the algorithm is used to investigate the performance of the multinomial sparse group lasso classifier. On three different real data examples the multinomial group lasso clearly outperforms multinomial lasso in terms of achieved classification error rate and in terms of including fewer features for the classification. An implementation of the multinomial sparse group lasso...
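The penalty this classifier minimizes over can be sketched as follows: an l1 term plus group-wise Euclidean norms. This is a generic illustration of the sparse group lasso penalty, not the authors' implementation; the function name and the sqrt-of-group-size weighting convention are assumptions.

```python
import numpy as np

def sparse_group_lasso_penalty(beta, groups, lam=1.0, alpha=0.5):
    """lam * (alpha * ||beta||_1 + (1 - alpha) * sum_g sqrt(p_g) * ||beta_g||_2).

    `groups` assigns each coefficient index to a group label; alpha trades off
    within-group sparsity (l1) against whole-group selection (l2)."""
    beta = np.asarray(beta, dtype=float)
    groups = np.asarray(groups)
    l1 = np.abs(beta).sum()
    l2 = 0.0
    for g in np.unique(groups):
        bg = beta[groups == g]
        l2 += np.sqrt(bg.size) * np.linalg.norm(bg)
    return lam * (alpha * l1 + (1.0 - alpha) * l2)

# One active group of two coefficients, one group entirely zeroed out
print(sparse_group_lasso_penalty([3.0, 4.0, 0.0, 0.0], [0, 0, 1, 1], alpha=0.5))
```

Because whole groups can have zero l2 norm, the group term zeroes out entire blocks of coefficients, which is what lets the classifier include fewer features.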
Multiple Group Testing Procedures for Analysis of High-Dimensional Genomic Data
Directory of Open Access Journals (Sweden)
Hyoseok Ko
2016-12-01
In genetic association studies with high-dimensional genomic data, multiple group testing procedures are often required in order to identify disease/trait-related genes or genetic regions, where multiple genetic sites or variants are located within the same gene or genetic region. However, statistical testing procedures based on an individual test suffer from multiple testing issues such as the control of family-wise error rate and dependent tests. Moreover, detecting the few genes associated with a phenotype outcome among tens of thousands of genes is of main interest in genetic association studies. For this reason, regularization procedures, in which a phenotype outcome is regressed on all genomic markers and the regression coefficients are estimated from a penalized likelihood, have been considered a good alternative for the analysis of high-dimensional genomic data. However, the selection performance of regularization procedures has rarely been compared with that of statistical group testing procedures. In this article, we performed extensive simulation studies in which commonly used group testing procedures such as principal component analysis, Hotelling's T2 test, and the permutation test are compared with the group lasso (least absolute shrinkage and selection operator) in terms of true positive selection. We also applied all methods considered in the simulation studies to identify genes associated with ovarian cancer from over 20,000 genetic sites generated with the Illumina Infinium HumanMethylation27K Beadchip. We found a large discrepancy between the genes selected by the multiple group testing procedures and those selected by the group lasso.
Franck, I. M.; Koutsourelakis, P. S.
2017-01-01
This paper is concerned with the numerical solution of model-based, Bayesian inverse problems. We are particularly interested in cases where the cost of each likelihood evaluation (forward-model call) is expensive and the number of unknown (latent) variables is high. This is the setting in many problems in computational physics where forward models with nonlinear PDEs are used and the parameters to be calibrated involve spatio-temporally varying coefficients, which upon discretization give rise to a high-dimensional vector of unknowns. One of the consequences of the well-documented ill-posedness of inverse problems is the possibility of multiple solutions. While such information is contained in the posterior density in Bayesian formulations, the discovery of a single mode, let alone multiple, poses a formidable computational task. The goal of the present paper is two-fold. On one hand, we propose approximate, adaptive inference strategies using mixture densities to capture multi-modal posteriors. On the other, we extend our work in [1] with regard to effective dimensionality reduction techniques that reveal low-dimensional subspaces where the posterior variance is mostly concentrated. We validate the proposed model by employing Importance Sampling which confirms that the bias introduced is small and can be efficiently corrected if the analyst wishes to do so. We demonstrate the performance of the proposed strategy in nonlinear elastography where the identification of the mechanical properties of biological materials can inform non-invasive, medical diagnosis. The discovery of multiple modes (solutions) in such problems is critical in achieving the diagnostic objectives.
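The bias check mentioned above rests on self-normalized importance sampling: draws from an approximate posterior are reweighted by the ratio of target to approximation densities. A minimal one-dimensional sketch, with a deliberately mis-scaled Gaussian standing in for the approximate posterior; all densities here are toy choices of mine, not the paper's models.

```python
import numpy as np

rng = np.random.default_rng(0)

def target_logpdf(x):
    """Unnormalized log-density of the 'true' posterior (standard normal)."""
    return -0.5 * x ** 2

def proposal_logpdf(x):
    """Log-density of the approximation: a normal with the wrong scale 1.5."""
    return -0.5 * (x / 1.5) ** 2 - np.log(1.5)

# Sample from the approximation, then correct with normalized weights p/q
xs = rng.normal(0.0, 1.5, 200_000)
logw = target_logpdf(xs) - proposal_logpdf(xs)
w = np.exp(logw - logw.max())
w /= w.sum()

mean_est = float(np.sum(w * xs))        # corrected posterior mean (true value 0)
var_est = float(np.sum(w * xs ** 2))    # corrected second moment (true value 1)
print(mean_est, var_est)
```

If the approximation were exact, the weights would be uniform; their spread quantifies the bias that the authors report as small.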
A Comparison of Machine Learning Methods in a High-Dimensional Classification Problem
Directory of Open Access Journals (Sweden)
Zekić-Sušac Marijana
2014-09-01
Background: Large-dimensional data modelling often relies on variable reduction methods in the pre-processing and post-processing stages. However, such a reduction usually provides less information and yields lower model accuracy. Objectives: The aim of this paper is to assess the high-dimensional classification problem of recognizing entrepreneurial intentions of students by machine learning methods. Methods/Approach: Four methods were tested on the same dataset: artificial neural networks, CART classification trees, support vector machines, and k-nearest neighbour, in order to compare their efficiency in terms of classification accuracy. The performance of each method was compared on ten subsamples in a 10-fold cross-validation procedure, computing the sensitivity and specificity of each model. Results: The artificial neural network model based on a multilayer perceptron yielded a higher classification rate than the models produced by the other methods. A pairwise t-test showed a statistically significant difference between the artificial neural network and the k-nearest neighbour model, while the differences among the other methods were not statistically significant. Conclusions: The tested machine learning methods are able to learn fast and achieve high classification accuracy. However, further advancement can be assured by testing a few additional methodological refinements in machine learning methods.
The feature selection bias problem in relation to high-dimensional gene data.
Krawczuk, Jerzy; Łukaszuk, Tomasz
2016-01-01
Feature selection is a technique widely used in data mining. The aim is to select the best subset of features relevant to the problem being considered. In this paper, we consider feature selection for the classification of gene datasets. Gene data is usually composed of just a few dozen objects described by thousands of features. For this kind of data, it is easy to find a model that fits the learning data. However, it is not easy to find one that will also evaluate new data equally well. This overfitting issue is well known as regards classification and regression, but it also applies to feature selection. We address this problem and investigate its importance in an empirical study of four feature selection methods applied to seven high-dimensional gene datasets. We chose datasets that are well studied in the literature: colon cancer, leukemia and breast cancer. All the datasets are characterized by a significant number of features and the presence of exactly two decision classes. The feature selection methods used are ReliefF, minimum redundancy maximum relevance, support vector machine-recursive feature elimination and relaxed linear separability. Our main result reveals the existence of positive feature selection bias in all 28 experiments (7 datasets and 4 feature selection methods). Bias was calculated as the difference between validation and test accuracies and ranges from 2.6% to as much as 41.67%. The validation accuracy (biased accuracy) was calculated on the same dataset on which the feature selection was performed. The test accuracy was calculated for data that was not used for feature selection (by so-called external cross-validation). This work provides evidence that using the same dataset for feature selection and learning is not appropriate. We recommend using cross-validation for feature selection in order to reduce selection bias.
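The bias mechanism measured in this study can be reproduced on pure noise: selecting features on all rows (including the held-out ones) inflates the "validation" accuracy, while selection restricted to training rows does not. A small NumPy sketch with a nearest-centroid classifier; the protocol details and all names are illustrative, not the paper's setup.

```python
import numpy as np

# Pure-noise data: any systematic accuracy above 50% on held-out rows is bias.
rng = np.random.default_rng(0)
n, p, k = 40, 2000, 10
X = rng.standard_normal((n, p))
y = np.repeat([0, 1], n // 2)

def top_corr_features(X, y, k):
    """Indices of the k features with the largest |correlation| with y."""
    Xc, yc = X - X.mean(0), y - y.mean()
    c = np.abs(Xc.T @ yc) / (np.linalg.norm(Xc, axis=0) * np.linalg.norm(yc) + 1e-12)
    return np.argsort(c)[-k:]

def nearest_centroid_acc(Xtr, ytr, Xte, yte):
    """Accuracy of a nearest-centroid classifier on the test rows."""
    m0, m1 = Xtr[ytr == 0].mean(0), Xtr[ytr == 1].mean(0)
    pred = (np.linalg.norm(Xte - m1, axis=1) < np.linalg.norm(Xte - m0, axis=1)).astype(int)
    return float((pred == yte).mean())

feats_biased = top_corr_features(X, y, k)   # leak: selection sees every row
biased, clean = [], []
for _ in range(20):
    idx = rng.permutation(n)
    tr, te = idx[: n // 2], idx[n // 2:]
    feats_clean = top_corr_features(X[tr], y[tr], k)   # training rows only
    biased.append(nearest_centroid_acc(X[tr][:, feats_biased], y[tr], X[te][:, feats_biased], y[te]))
    clean.append(nearest_centroid_acc(X[tr][:, feats_clean], y[tr], X[te][:, feats_clean], y[te]))

acc_biased_mean, acc_clean_mean = float(np.mean(biased)), float(np.mean(clean))
print(acc_biased_mean, acc_clean_mean)
```

The leaky protocol reports well above chance on data that contains no signal at all, which is exactly the positive bias the paper quantifies on real gene datasets.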
Clustering high dimensional data
DEFF Research Database (Denmark)
Assent, Ira
2012-01-01
High-dimensional data, i.e., data described by a large number of attributes, pose specific challenges to clustering. The so-called ‘curse of dimensionality’, coined originally to describe the general increase in complexity of various computational problems as dimensionality increases, is known... for clustering are required. Consequently, recent research has focused on developing techniques and clustering algorithms specifically for high-dimensional data. Still, open research issues remain. Clustering is a data mining task devoted to the automatic grouping of data based on mutual similarity. Each cluster groups objects that are similar to one another, whereas dissimilar objects are assigned to different clusters, possibly separating out noise. In this manner, clusters describe the data structure in an unsupervised manner, i.e., without the need for class labels. A number of clustering paradigms exist...
High dimensional classifiers in the imbalanced case
DEFF Research Database (Denmark)
Bak, Britta Anker; Jensen, Jens Ledet
We consider the binary classification problem in the imbalanced case where the number of samples from the two groups differ. The classification problem is considered in the high dimensional case where the number of variables is much larger than the number of samples, and where the imbalance leads...
Srivastava, Ashok, N.; Akella, Ram; Diev, Vesselin; Kumaresan, Sakthi Preethi; McIntosh, Dawn M.; Pontikakis, Emmanuel D.; Xu, Zuobing; Zhang, Yi
2006-01-01
This paper describes the results of a significant research and development effort conducted at NASA Ames Research Center to develop new text mining techniques to discover anomalies in free-text reports regarding system health and safety of two aerospace systems. We discuss two problems of significant importance in the aviation industry. The first problem is that of automatic anomaly discovery about an aerospace system through the analysis of tens of thousands of free-text problem reports that are written about the system. The second problem that we address is that of automatic discovery of recurring anomalies, i.e., anomalies that may be described in different ways by different authors, at varying times and under varying conditions, but that are truly about the same part of the system. The intent of recurring anomaly identification is to determine project or system weakness or high-risk issues. The discovery of recurring anomalies is a key goal in building safe, reliable, and cost-effective aerospace systems. We address the anomaly discovery problem on thousands of free-text reports using two strategies: (1) as an unsupervised learning problem where an algorithm takes free-text reports as input and automatically groups them into different bins, where each bin corresponds to a different unknown anomaly category; and (2) as a supervised learning problem where the algorithm classifies the free-text reports into one of a number of known anomaly categories. We then discuss the application of these methods to the problem of discovering recurring anomalies. In fact, the special nature of recurring anomalies (very small cluster sizes) requires incorporating new methods and measures to enhance the original approach for anomaly detection.
Directory of Open Access Journals (Sweden)
Shouheng Tuo
Harmony Search (HS) and Teaching-Learning-Based Optimization (TLBO), as new swarm intelligent optimization algorithms, have received much attention in recent years. Both have shown outstanding performance for solving NP-hard optimization problems. However, they also suffer dramatic performance degradation on some complex high-dimensional optimization problems. Through extensive experiments, we find that HS and TLBO are strongly complementary. HS has strong global exploration power but low convergence speed. Conversely, TLBO converges much faster but is easily trapped in local search. In this work, we propose a hybrid search algorithm named HSTLBO that merges the two algorithms for synergistically solving complex optimization problems using a self-adaptive selection strategy. In HSTLBO, both HS and TLBO are modified with the aim of balancing the global exploration and exploitation abilities, where HS aims mainly to explore the unknown regions and TLBO aims to rapidly exploit high-precision solutions in the known regions. Our experimental results demonstrate better performance and faster speed than five state-of-the-art HS variants and show better exploration power than five good TLBO variants with similar run time, which illustrates that our method is promising for solving complex high-dimensional optimization problems. The experiment on portfolio optimization problems also demonstrates that HSTLBO is effective in solving complex real-world applications.
Tuo, Shouheng; Yong, Longquan; Deng, Fang'an; Li, Yanhai; Lin, Yong; Lu, Qiuju
2017-01-01
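The self-adaptive selection between an exploratory and an exploitative operator can be sketched as below. This is a toy stand-in, not the authors' HSTLBO: the HS-like move is a plain Gaussian jump, the TLBO-like move is a pull toward the best member, and the selection probability is nudged toward whichever operator last succeeded.

```python
import numpy as np

def hybrid_minimize(f, dim, iters=2000, seed=0):
    """Toy hybrid of an HS-like global move and a TLBO-like pull toward the
    best member, chosen with a self-adaptive probability. Illustration only."""
    rng = np.random.default_rng(seed)
    pop = rng.uniform(-5.0, 5.0, size=(20, dim))
    fit = np.array([f(x) for x in pop])
    p_explore = 0.5
    for _ in range(iters):
        best = pop[fit.argmin()]
        i = rng.integers(len(pop))
        explore = rng.random() < p_explore
        if explore:                                    # HS-like random jump
            cand = np.clip(pop[i] + rng.normal(0.0, 0.5, dim), -5.0, 5.0)
        else:                                          # TLBO-like pull toward best
            cand = pop[i] + rng.random(dim) * (best - pop[i])
        fc = f(cand)
        success = fc < fit[i]
        if success:
            pop[i], fit[i] = cand, fc
        # self-adaptation: reward whichever operator just succeeded
        step = 0.01 if success else -0.01
        p_explore = float(np.clip(p_explore + (step if explore else -step), 0.1, 0.9))
    return float(fit.min())

best_val = hybrid_minimize(lambda x: float(np.sum(x ** 2)), dim=3)
print(best_val)
```

Keeping the selection probability inside [0.1, 0.9] ensures neither operator is ever switched off entirely, mirroring the balance of exploration and exploitation the abstract describes.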
Fang, Yao-Hwei; Wang, Jie-Huei; Hsiung, Chao A
2017-11-15
Identification of single nucleotide polymorphism (SNP) interactions is an important and challenging topic in genome-wide association studies (GWAS). Many approaches have been applied to detecting whole-genome interactions. However, these approaches tend to miss causal interaction effects when the individual marginal effects are uncorrelated with the trait while their interaction effects are highly associated with it. A grouped variable selection technique, called two-stage grouped sure independence screening (TS-GSIS), is developed to study interactions that may not have marginal effects. The proposed TS-GSIS is shown to be very helpful in identifying not only causal SNP effects that are uncorrelated with the trait but also their corresponding SNP-SNP interaction effects. The benefits of TS-GSIS are improved detection of interaction effects, achieved by exploiting joint information among the SNPs and by determining the size of the candidate sets in the model. Simulation studies under various scenarios are performed to compare the performance of TS-GSIS with current approaches. We also apply our approach to a real rheumatoid arthritis (RA) dataset. Both the simulation and real data studies show that TS-GSIS performs very well in detecting SNP-SNP interactions. The R package is distributed through CRAN and is available at: https://cran.r-project.org/web/packages/TSGSIS/index.html. Contact: hsiung@nhri.org.tw. Supplementary data are available at Bioinformatics online.
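The screening idea underlying such procedures, and why marginal screening alone misses pure interactions, can be sketched as follows. This is a generic sure-independence-screening illustration on synthetic data, not the TSGSIS package.

```python
import numpy as np

def sis_screen(X, y, d):
    """Keep the d features most correlated (in absolute value) with y --
    the marginal screening idea behind sure independence screening."""
    Xc, yc = X - X.mean(0), y - y.mean()
    corr = np.abs(Xc.T @ yc) / (np.linalg.norm(Xc, axis=0) * np.linalg.norm(yc) + 1e-12)
    return np.argsort(corr)[::-1][:d]

rng = np.random.default_rng(1)
n, p = 500, 30
X = rng.standard_normal((n, p))
# A pure interaction: features 1 and 2 have no marginal effect on y at all
y = X[:, 1] * X[:, 2] + 0.1 * rng.standard_normal(n)

top_marginal = sis_screen(X, y, 5)          # marginal screening misses 1 and 2
# Screening pairwise products instead exposes the interaction signal
pairs = [(i, j) for i in range(p) for j in range(i + 1, p)]
prods = np.column_stack([X[:, i] * X[:, j] for i, j in pairs])
best_pair = pairs[int(sis_screen(prods, y, 1)[0])]
print(top_marginal, best_pair)
```

Grouping and two-stage refinements, as in TS-GSIS, are then needed to make this tractable and stable at genome-wide scale.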
Directory of Open Access Journals (Sweden)
Mohamed Amine Bouhlel
2016-01-01
In recent years, kriging has become one of the most popular methods in computer simulation and machine learning. Kriging models have been used successfully in many engineering applications to approximate expensive simulation models. When many input variables are used, however, kriging is inefficient, mainly due to the exorbitant computational time required for its construction. To handle high-dimensional problems (100+), a method was recently proposed that combines kriging with the Partial Least Squares technique, the so-called KPLS model. This method has shown interesting results in terms of the CPU time saved in building the model while maintaining sufficient accuracy, on both academic and industrial problems. However, KPLS has provided poor accuracy compared to conventional kriging on multimodal functions. To handle this issue, this paper proposes adding a new step during the construction of KPLS to improve its accuracy on multimodal functions. When exponential covariance functions are used, this step is based on a simple identification between the covariance function of KPLS and that of kriging. The developed method is validated using, in particular, a multimodal academic function known in the literature as the Griewank function, and we show the gain in terms of accuracy and computing time by comparison with KPLS and kriging.
CSIR Research Space (South Africa)
McLaren, M.
2012-07-01
High dimensional entanglement. M. McLaren, F.S. Roux & A. Forbes. 1. CSIR National Laser Centre, PO Box 395, Pretoria 0001; 2. School of Physics, University of Stellenbosch, Private Bag X1, 7602, Matieland; 3. School of Physics, University of KwaZulu...
High dimensional multiclass classification with applications to cancer diagnosis
DEFF Research Database (Denmark)
Vincent, Martin
Probabilistic classifiers are introduced and it is shown that the only regular linear probabilistic classifier with convex risk is multinomial regression. Penalized empirical risk minimization is introduced and used to construct supervised learning methods for probabilistic classifiers. A sparse group lasso penalized approach to high dimensional multinomial classification is presented. On different real data examples it is found that this approach clearly outperforms multinomial lasso in terms of error rate and features included in the model. An efficient coordinate descent algorithm is developed and its convergence is established. This algorithm is implemented in the msgl R package. Examples of high dimensional multiclass problems are studied, in particular examples of multiclass classification based on gene expression measurements. One such example is the clinically important problem...
Group Design Problems in Engineering Design Graphics.
Kelley, David
2001-01-01
Describes group design techniques used within the engineering design graphics sequence at Western Washington University. Engineering and design philosophies such as concurrent engineering place an emphasis on group collaboration for the solving of design problems. (Author/DDR)
High dimensional neurocomputing growth, appraisal and applications
Tripathi, Bipin Kumar
2015-01-01
The book presents a coherent understanding of computational intelligence from the perspective of what is known as "intelligent computing" with high-dimensional parameters. It critically discusses the central issues of high-dimensional neurocomputing, such as quantitative representation of signals, extending the dimensionality of neurons, supervised and unsupervised learning, and the design of higher order neurons. The strong point of the book is its clarity and the ability of the underlying theory to unify our understanding of high-dimensional computing where conventional methods fail. Plenty of application-oriented problems are presented for evaluating, monitoring and maintaining the stability of adaptive learning machines. The author has taken care to cover the breadth and depth of the subject, both in qualitative and quantitative ways. The book is intended to enlighten the scientific community, ranging from advanced undergraduates to engineers, scientists and seasoned researchers in computational intelligenc...
Understanding high-dimensional spaces
Skillicorn, David B
2012-01-01
High-dimensional spaces arise as a way of modelling datasets with many attributes. Such a dataset can be directly represented in a space spanned by its attributes, with each record represented as a point in the space with its position depending on its attribute values. Such spaces are not easy to work with because of their high dimensionality: our intuition about space is not reliable, and measures such as distance do not provide as clear information as we might expect. There are three main areas where complex high dimensionality and large datasets arise naturally: data collected by online ret
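The claim that distance measures lose contrast in high dimensions can be demonstrated in a few lines: the relative spread of distances from a query point to random data shrinks as dimensionality grows. The function name and the Gaussian data are illustrative choices of mine.

```python
import numpy as np

def distance_contrast(dim, n=500, seed=0):
    """Relative contrast (d_max - d_min) / d_min of Euclidean distances from
    the origin to n random Gaussian points; it shrinks as dim grows."""
    rng = np.random.default_rng(seed)
    d = np.linalg.norm(rng.standard_normal((n, dim)), axis=1)
    return float((d.max() - d.min()) / d.min())

# In 2 dimensions nearest and farthest neighbours differ enormously;
# in 1000 dimensions all points sit at nearly the same distance.
print(distance_contrast(2), distance_contrast(1000))
```

This concentration of distances is one concrete reason our low-dimensional intuition about "nearness" fails in such spaces.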
Heller, Patricia; Hollabaugh, Mark
1992-07-01
A supportive environment based on cooperative grouping was developed to foster students' learning of an effective problem-solving strategy. Experiments to adapt the technique of cooperative grouping to physics problem solving were carried out in two diverse settings: a large introductory course at a state university, and a small modern physics class at a community college. Groups were more likely to use an effective problem-solving strategy when given context-rich problems to solve than when given standard textbook problems. Well-functioning cooperative groups were found to result from specific structural and management procedures governing group members' interactions. Group size, the gender and ability composition of groups, seating arrangement, role assignment, textbook use, and group as well as individual testing were all found to contribute to the problem-solving performance of cooperative groups.
Hierarchical low-rank approximation for high dimensional approximation
Nouy, Anthony
2016-01-07
Tensor methods are among the most prominent tools for the numerical solution of high-dimensional problems where functions of multiple variables have to be approximated. Such high-dimensional approximation problems naturally arise in stochastic analysis and uncertainty quantification. In many practical situations, the approximation of high-dimensional functions is made computationally tractable by using rank-structured approximations. In this talk, we present algorithms for the approximation in hierarchical tensor format using statistical methods. Sparse representations in a given tensor format are obtained with adaptive or convex relaxation methods, with a selection of parameters using crossvalidation methods.
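The building block of such rank-structured approximations is the best low-rank approximation of a matricization, obtained from a truncated SVD (Eckart-Young). A minimal two-variable sketch on a function that is exactly rank 2; the example function is my choice, not from the talk.

```python
import numpy as np

def truncated_svd(A, rank):
    """Best rank-`rank` approximation of A in the Frobenius norm
    (Eckart-Young), via a truncated singular value decomposition."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return (U[:, :rank] * s[:rank]) @ Vt[:rank]

# A two-variable "function sample" of exact rank 2 on a 50 x 50 grid:
# f(x, y) = sin(x) sin(y) + 0.5 cos(x) cos(y)
x = np.linspace(0.0, np.pi, 50)
F = np.outer(np.sin(x), np.sin(x)) + 0.5 * np.outer(np.cos(x), np.cos(x))
rel_err = np.linalg.norm(F - truncated_svd(F, 2)) / np.linalg.norm(F)
print(rel_err)
```

Hierarchical tensor formats apply this compression recursively across groups of variables, which is what keeps high-dimensional approximation tractable.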
Group invariance in engineering boundary value problems
Seshadri, R
1985-01-01
From the table of contents: 9 Transformation of a Boundary Value Problem to an Initial Value Problem (9.0 Introduction; 9.1 Blasius Equation in Boundary Layer Flow; 9.2 Longitudinal Impact of Nonlinear Viscoplastic Rods; 9.3 Summary). 10 From Nonlinear to Linear Differential Equations Using Transformation Groups (10.1 From Nonlinear to Linear Differential Equations; 10.2 Application to Ordinary Differential Equations: Bernoulli's Equation; 10.3 Application to Partial Differential Equations: A Nonlinear Chemical Exchange Process; 10.4 Limitations of the Inspectional Group Method; 10.5 Summary). 11 Miscellaneous Topics (11.1 Reduction of Differential Equations to Algebraic Equations; 11.2 Reduction of Order of an Ordinary Differential Equation; 11.3 Transformation From Ordinary to Partial Differential Equations: Search for First Inte...).
High-dimensional covariance estimation with high-dimensional data
Pourahmadi, Mohsen
2013-01-01
Methods for estimating sparse and large covariance matrices Covariance and correlation matrices play fundamental roles in every aspect of the analysis of multivariate data collected from a variety of fields including business and economics, health care, engineering, and environmental and physical sciences. High-Dimensional Covariance Estimation provides accessible and comprehensive coverage of the classical and modern approaches for estimating covariance matrices as well as their applications to the rapidly developing areas lying at the intersection of statistics and mac
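One classical estimator in this area can be sketched in a few lines: hard-thresholding of the sample covariance, which zeroes small off-diagonal entries to recover sparsity. This is a generic illustration of the thresholding idea, not necessarily a method from this book; the function name and threshold are mine.

```python
import numpy as np

def threshold_cov(X, t):
    """Hard-thresholding estimator of a sparse covariance matrix: off-diagonal
    sample covariances smaller than t in absolute value are set to zero."""
    S = np.cov(X, rowvar=False)
    T = np.where(np.abs(S) >= t, S, 0.0)
    np.fill_diagonal(T, np.diag(S))   # variances are always kept
    return T

# Truly independent coordinates: every off-diagonal entry is pure noise,
# and thresholding recovers the diagonal structure of the true covariance.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 50))
T = threshold_cov(X, 0.4)
```

The choice of threshold trades off bias (zeroing real but small covariances) against variance (keeping noise), which is a central theme of the estimation theory the book covers.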
Glover, Jason; Man, Tsz-Kwong; Barkauskas, Donald A; Hall, David; Tello, Tanya; Sullivan, Mary Beth; Gorlick, Richard; Janeway, Katherine; Grier, Holcombe; Lau, Ching; Toretsky, Jeffrey A; Borinstein, Scott C; Khanna, Chand; Fan, Timothy M
2017-01-01
The prospective banking of osteosarcoma tissue samples to promote research endeavors has been realized through the establishment of a nationally centralized biospecimen repository, the Children's Oncology Group (COG) biospecimen bank located at the Biopathology Center (BPC)/Nationwide Children's Hospital in Columbus, Ohio. Although the physical inventory of osteosarcoma biospecimens is substantive (>15,000 sample specimens), the nature of these resources remains exhaustible. Despite judicious allocation of these high-value biospecimens for conducting sarcoma-related research, a deeper understanding of osteosarcoma biology, in particular metastases, remains unrealized. In addition, the identification and development of novel diagnostics and effective therapeutics remain elusive. The QuadW-COG Childhood Sarcoma Biostatistics and Annotation Office (CSBAO) has developed the High Dimensional Data (HDD) platform to complement the existing physical inventory and to promote in silico hypothesis testing in sarcoma biology. The HDD is a relational biologic database derived from matched osteosarcoma biospecimens in which diverse experimental readouts have been generated and digitally deposited. As proof-of-concept, we demonstrate that the HDD platform can be utilized to address previously unrealized biologic questions through the systematic juxtaposition of diverse datasets derived from shared biospecimens. The continued population of the HDD platform with high-value, high-throughput and mineable datasets allows a shared and reusable resource for researchers, both experimentalists and bioinformatics investigators, to propose and answer questions in silico that advance our understanding of osteosarcoma biology.
Likelihood ratio based verification in high dimensional spaces
Hendrikse, A.J.; Veldhuis, Raymond N.J.; Spreeuwers, Lieuwe Jan
The increase in the dimensionality of data sets often leads to problems during estimation, which are collectively denoted the curse of dimensionality. One of the problems of Second Order Statistics (SOS) estimation in high-dimensional data is that the resulting covariance matrices are not full rank, so their
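The rank deficiency mentioned here is easy to demonstrate: with n samples in p > n dimensions, the sample covariance matrix has rank at most n - 1. A small synthetic illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 20, 100                        # far fewer samples than dimensions
X = rng.standard_normal((n, p))
S = np.cov(X, rowvar=False)           # p x p sample covariance matrix
rank = np.linalg.matrix_rank(S)
print(rank)                           # at most n - 1 (mean centering costs one)
```

Because S is singular, it cannot be inverted directly, which is why likelihood-ratio statistics in such spaces require regularization or subspace methods.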
Sparse High Dimensional Models in Economics.
Fan, Jianqing; Lv, Jinchi; Qi, Lei
2011-09-01
This paper reviews the literature on sparse high-dimensional models and discusses some applications in economics and finance. Recent developments in theory, methods, and implementations of penalized least squares and penalized likelihood methods are highlighted. These variable selection methods have proved effective in high-dimensional sparse modeling. The limits of dimensionality that regularization methods can handle, the role of penalty functions, and their statistical properties are detailed. Some recent advances in ultra-high-dimensional sparse modeling are also briefly discussed.
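The penalized least squares approach surveyed above can be made concrete with a minimal LASSO sketch, solved by cyclic coordinate descent with soft-thresholding. The data, the dimensions, and the penalty level `lam` below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sparse regression with p comparable to n: only 3 of 50 coefficients are nonzero.
n, p = 40, 50
X = rng.normal(size=(n, p))
beta_true = np.zeros(p)
beta_true[:3] = [3.0, -2.0, 1.5]
y = X @ beta_true + 0.1 * rng.normal(size=n)

def lasso_cd(X, y, lam, sweeps=200):
    """Penalized least squares: min 0.5*||y - Xb||^2 + lam*||b||_1,
    solved by cyclic coordinate descent with soft-thresholding."""
    n, p = X.shape
    b = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0)
    r = y.copy()                       # running residual y - X b
    for _ in range(sweeps):
        for j in range(p):
            r += X[:, j] * b[j]        # remove coordinate j from the fit
            rho = X[:, j] @ r
            b[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_sq[j]
            r -= X[:, j] * b[j]        # add coordinate j back
    return b

b_hat = lasso_cd(X, y, lam=5.0)
```

With the penalty active, most of the 50 coefficients are driven exactly to zero while the large true coefficients survive (slightly shrunk), which is the variable selection behavior the review discusses.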
[Mental health problems in ethnic minority groups].
Kucharska, Justyna
2012-01-01
The aim of this paper is to provide an insight into the specificity of mental health issues as experienced by representatives of ethnic minority groups. A substantial body of evidence clearly indicates differences in the incidence of psychosis, affective disorders and suicidal tendencies in members of minority groups compared to the rest of the population. Relevant statistical data will be presented and examined from both a biological and a socio-cultural point of view. Hoffman's Social Deafferentation Hypothesis will be introduced as a possible explanation of the high incidence of psychotic disorders in immigrants. Subsequently, socio-cultural factors will receive attention. Acculturation and identity issues will be taken into account with regard to data suggesting that it is second-generation immigrants who suffer most from mental health disorders. The fact of being discriminated against and being exposed to negative social messages regarding one's group of reference will also be taken into consideration. Moreover, ethnic minorities will be compared on this dimension with other groups discriminated against, such as women and sexual minorities.
Irregular grid methods for pricing high-dimensional American options
Berridge, S.J.
2004-01-01
This thesis proposes and studies numerical methods for pricing high-dimensional American options; important examples being basket options, Bermudan swaptions and real options. Four new methods are presented and analysed, both in terms of their application to various test problems, and in terms of
High-dimensional model estimation and model selection
CERN. Geneva
2015-01-01
I will review concepts and algorithms from high-dimensional statistics for linear model estimation and model selection. I will particularly focus on the so-called p>>n setting where the number of variables p is much larger than the number of samples n. I will focus mostly on regularized statistical estimators that produce sparse models. Important examples include the LASSO and its matrix extension, the Graphical LASSO, and more recent non-convex methods such as the TREX. I will show the applicability of these estimators in a diverse range of scientific applications, such as sparse interaction graph recovery and high-dimensional classification and regression problems in genomics.
Group Development Phases as Working through Six Fundamental Human Problems.
Burnand, Gordon
1990-01-01
Following Bennis and Shepard's work, groups are thought to become preoccupied with problems of gaining reassurance about six basic human tasks in turn. One can show that these problems, called focal problems, have two forms, inclusive and narrowed, and that progressing through the problems requires three subphases. (Author/ABL)
A Novel High Dimensional and High Speed Data Streams Algorithm: HSDStream
Irshad Ahmed; Irfan Ahmed; Waseem Shahzad
2016-01-01
This paper presents a novel high-speed clustering scheme for high-dimensional data streams. Data stream clustering has gained importance in different applications, for example, network monitoring, intrusion detection, and real-time sensing. High-dimensional stream data are inherently more complex to cluster because the evolving nature of the stream and the high dimensionality make the task non-trivial. In order to tackle this problem, projected subspace within the high dimensions and l...
Quantifying Photonic High-Dimensional Entanglement
Martin, Anthony; Guerreiro, Thiago; Tiranov, Alexey; Designolle, Sébastien; Fröwis, Florian; Brunner, Nicolas; Huber, Marcus; Gisin, Nicolas
2017-03-01
High-dimensional entanglement offers promising perspectives in quantum information science. In practice, however, the main challenge is to devise efficient methods to characterize high-dimensional entanglement, based on the available experimental data which is usually rather limited. Here we report the characterization and certification of high-dimensional entanglement in photon pairs, encoded in temporal modes. Building upon recently developed theoretical methods, we certify an entanglement of formation of 2.09(7) ebits in a time-bin implementation, and 4.1(1) ebits in an energy-time implementation. These results are based on very limited sets of local measurements, which illustrates the practical relevance of these methods.
Asymptotically Honest Confidence Regions for High Dimensional
DEFF Research Database (Denmark)
Caner, Mehmet; Kock, Anders Bredahl
While variable selection and oracle inequalities for the estimation and prediction error have received considerable attention in the literature on high-dimensional models, very little work has been done in the area of testing and construction of confidence bands in high-dimensional models. However...... of the asymptotic covariance matrix of an increasing number of parameters which is robust against conditional heteroskedasticity. To our knowledge we are the first to do so. Next, we show that our confidence bands are honest over sparse high-dimensional sub vectors of the parameter space and that they contract...... at the optimal rate. All our results are valid in high-dimensional models. Our simulations reveal that the desparsified conservative Lasso estimates the parameters much more precisely than the desparsified Lasso, has much better size properties and produces confidence bands with markedly superior coverage rates....
Emergent Leadership in Children's Cooperative Problem Solving Groups
Sun, Jingjng; Anderson, Richard C.; Perry, Michelle; Lin, Tzu-Jung
2017-01-01
Social skills involved in leadership were examined in a problem-solving activity in which 252 Chinese 5th-graders worked in small groups on a spatial-reasoning puzzle. Results showed that students who engaged in peer-managed small-group discussions of stories prior to problem solving produced significantly better solutions and initiated…
Engineering two-photon high-dimensional states through quantum interference
CSIR Research Space (South Africa)
Zhang, YI
2016-02-01
…problems. An alternative strategy is to consider a smaller number of particles that exist in high-dimensional states. The spatial modes of light are one such candidate that provides access to high-dimensional quantum states, and thus they increase...
Optimal Feature Selection in High-Dimensional Discriminant Analysis.
Kolar, Mladen; Liu, Han
2015-02-01
We consider the high-dimensional discriminant analysis problem. For this problem, different methods have been proposed and justified by establishing exact convergence rates for the classification risk, as well as ℓ2 convergence results to the discriminative rule. However, a sharp theoretical analysis of the variable selection performance of these procedures has not been established, even though model interpretation is of fundamental importance in scientific data analysis. This paper bridges the gap by providing sharp sufficient conditions for consistent variable selection using sparse discriminant analysis (Mai et al., 2012). Through careful analysis, we establish rates of convergence that are significantly faster than the best known results and admit an optimal scaling of the sample size n, dimensionality p, and sparsity level s in the high-dimensional setting. The sufficient conditions are complemented by necessary information-theoretic limits on the variable selection problem in the context of high-dimensional discriminant analysis. Exploiting a numerical equivalence result, our method also establishes the optimal results for the ROAD estimator (Fan et al., 2012) and the sparse optimal scaling estimator (Clemmensen et al., 2011). Furthermore, we analyze an exhaustive search procedure, whose performance serves as a benchmark, and show that it is variable selection consistent under weaker conditions. Extensive simulations demonstrating the sharpness of the bounds are also provided.
High-dimensional multispectral image fusion: classification by neural network
He, Mingyi; Xia, Jiantao
2003-06-01
Advances in sensor technology for Earth observation make it possible to collect multispectral data of much higher dimensionality. Such high-dimensional data will make it possible to classify more classes. However, it will also have several impacts on processing technology. First, because of the huge volume of data, more processing power will be needed. Second, because of the high dimensionality and the limited number of training samples, it is very difficult for the Bayes method to estimate the parameters accurately, so the classification accuracy cannot be high enough. A neural network is an intelligent signal processing method. An MLFNN (Multi-Layer Feedforward Neural Network) learns directly from training samples and the probability model need not be estimated, so classification may be conducted through neural network fusion of multispectral images. The latent information about different classes can be extracted from training samples by an MLFNN. However, because of the huge data volume and high dimensionality, an MLFNN faces some serious difficulties: (1) there are many local minima in the error surface of an MLFNN; (2) over-fitting. These two difficulties depress the classification accuracy and generalization performance of an MLFNN. In order to overcome these difficulties, the author proposed the DPFNN (Double Parallel Feedforward Neural Network) to classify high-dimensional multispectral images. The model and learning algorithm of a DPFNN with strong generalization performance are proposed, with emphasis on the regularization of output weights and improvement of the generalization performance of the DPFNN. As a DPFNN is composed of an MLFNN and an SLFNN (Single-Layer Feedforward Neural Network), it has the advantages of both: (1) good nonlinear mapping capability; (2) high learning speed for linear-like problems. Experimental results with generated data, 64-band practical multispectral images and 220-band multispectral images show that the new
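The double-parallel structure described above (an MLFNN and an SLFNN sharing input and output) can be sketched as a forward pass in which a nonlinear hidden path and a direct linear path are summed at the output. The dimensions, weights, and activation below are illustrative assumptions; the paper's actual DPFNN model and learning algorithm are not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions: d input features (e.g. spectral bands), h hidden units, c classes.
d, h, c = 16, 8, 3
W1 = rng.normal(0, 0.1, (h, d))   # input -> hidden (MLFNN branch)
W2 = rng.normal(0, 0.1, (c, h))   # hidden -> output
W3 = rng.normal(0, 0.1, (c, d))   # direct input -> output (SLFNN branch)

def dpfnn_forward(x):
    """Double-parallel forward pass: nonlinear hidden path plus direct linear path."""
    hidden = np.tanh(W1 @ x)           # nonlinear mapping (MLFNN branch)
    logits = W2 @ hidden + W3 @ x      # sum of the two parallel branches
    e = np.exp(logits - logits.max())  # stable softmax for class probabilities
    return e / e.sum()

p = dpfnn_forward(rng.normal(size=d))
```

The direct `W3` path handles linear-like structure cheaply while the hidden path supplies the nonlinear mapping, which is the stated motivation for combining the two networks.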
Group Testing: Four Student Solutions to a Classic Optimization Problem
Teague, Daniel
2006-01-01
This article describes several creative solutions developed by calculus and modeling students to the classic optimization problem of testing in groups to find a small number of individuals who test positive in a large population.
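The students' own solutions are not reproduced here, but the classic Dorfman analysis behind the problem is easy to sketch: test pools of k samples, retest each member of any positive pool, and choose k to minimize the expected number of tests per person:

```python
def expected_tests_per_person(k, p):
    """Dorfman pooling: one pooled test per group of k, plus k individual
    retests whenever the pool is positive (probability 1 - (1-p)^k).
    Expected tests per person = 1/k + 1 - (1-p)^k."""
    return 1.0 / k + 1.0 - (1.0 - p) ** k

def best_group_size(p, k_max=100):
    """Group size minimizing expected tests per person, by direct search."""
    return min(range(2, k_max + 1), key=lambda k: expected_tests_per_person(k, p))

k_opt = best_group_size(0.01)
```

For a prevalence of 1%, the optimal pool size works out to 11, reducing testing to roughly one fifth of the one-test-per-person baseline; as prevalence rises, the optimal pool shrinks quickly.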
Group Planning and Task Efficiency with Complex Problems. Final Report.
Lawson, E. D.
One hundred eighty 4-man groups (90 of men and 90 of women) using 3 types of net (All-Channel, Wheel and Circle) under 3 conditions (Planning Period (PP), Rest Period (RP) and Control) were run in a single session with 5 complex problems to determine whether a single 2-minute planning period after solution of the first problem would result in…
Group Decision-Making on an Optimal Stopping Problem
Lee, Michael D.; Paradowski, Michael J.
2007-01-01
We consider group decision-making on an optimal stopping problem, for which large and stable individual differences have previously been established. In the problem, people are presented with a sequence of five random numbers between 0 and 100, one at a time, and are required to choose the maximum of the sequence, without being allowed to return…
The Off-line Group Seat Reservation Problem
DEFF Research Database (Denmark)
Clausen, Tommy; Hjorth, Allan Nordlunde; Nielsen, Morten
2010-01-01
In this paper we address the problem of assigning seats in a train for a group of people traveling together. We consider two variants of the problem. One is a special case of two-dimensional knapsack where we consider the train as having fixed size and the objective is to maximize the utilization...
An FPTAS for the fractional group Steiner tree problem
Directory of Open Access Journals (Sweden)
Slobodan Jelić
2015-10-01
This paper considers a linear relaxation of the cut-based integer programming formulation for the group Steiner tree problem (FGST). We combine the approach of Koufogiannakis and Young (2013) with the nearly-linear-time approximation scheme for the minimum cut problem of Christiano et al. (2011) in order to develop a fully polynomial time approximation scheme for the FGST problem. Our algorithm returns a solution to FGST whose objective function value is at most 1+6ε times the optimal, for ε ∈ (0, 1/6], in Õ(mk(m + n^{4/3}ε^{−16/3})/ε^2) time, where n, m and k are the numbers of vertices, edges and groups in the group Steiner tree instance, respectively. This algorithm has a better worst-case running time than the algorithm by Garg and Khandekar (2002) when the number of groups is sufficiently large.
Evaluating Clustering in Subspace Projections of High Dimensional Data
DEFF Research Database (Denmark)
Müller, Emmanuel; Günnemann, Stephan; Assent, Ira
2009-01-01
Clustering high dimensional data is an emerging research field. Subspace clustering or projected clustering groups similar objects in subspaces, i.e. projections, of the full space. In the past decade, several clustering paradigms have been developed in parallel, without thorough evaluation and co...... and create a common baseline for future developments and comparable evaluations in the field. For repeatability, all implementations, data sets and evaluation measures are available on our website....
Innovation, imitation, and problem-solving in a networked group.
Wisdom, Thomas N; Goldstone, Robert L
2011-04-01
We implemented a problem-solving task in which groups of participants simultaneously played a simple innovation game in a complex problem space, with score feedback provided after each of a number of rounds. Each participant in a group was allowed to view and imitate the guesses of others during the game. The results showed the use of social learning strategies previously studied in other species, and demonstrated benefits of social learning and nonlinear effects of group size on strategy and performance. Rather than simply encouraging conformity, groups provided information to each individual about the distribution of useful innovations in the problem space. Imitation facilitated innovation rather than displacing it, because the former allowed good solutions to be propagated and preserved for further cumulative innovations in the group. Participants generally improved their solutions through the use of fairly conservative strategies, such as changing only a small portion of one's solution at a time, and tending to imitate solutions similar to one's own. Changes in these strategies over time had the effect of making solutions increasingly entrenched, both at individual and group levels. These results showed evidence of nonlinear dynamics in the decentralization of innovation, the emergence of group phenomena from complex interactions of individual efforts, stigmergy in the use of social information, and dynamic tradeoffs between exploration and exploitation of solutions. These results also support the idea that innovation and creativity can be recognized at the group level even when group members are generally cautious and imitative.
High dimensional data driven statistical mechanics.
Adachi, Yoshitaka; Sadamatsu, Sunao
2014-11-01
In "3D4D materials science", there are five categories: (a) image acquisition, (b) processing, (c) analysis, (d) modelling, and (e) data sharing. This presentation highlights the core of these categories [1]. Analysis and modelling: A three-dimensional (3D) microstructure image contains topological features such as connectivity in addition to metric features. Such additional microstructural information seems to be useful for more precise property prediction. There are two ways for microstructure-based property prediction (Fig. 1A). One is modelling based on 3D image data, such as micromechanics or the crystal plasticity finite element method. The other is a machine learning approach driven by numerical microstructural features, such as an artificial neural network or a Bayesian estimation method. The key is to convert the 3D image data into numerals in order to apply the dataset to property prediction. As numerical features of microstructures, grain size, number density of particles, connectivity of particles, grain boundary connectivity, stacking degree, clustering, etc. should be taken into consideration. These microstructural features are the so-called "materials genome". Among the materials genome, we have to find the dominant factors that determine a focused property. The dominant factors are defined as "descriptor(s)" in high dimensional data driven statistical mechanics. Fig. 1. (a) A concept of 3D4D materials science. (b) Fully-automated serial sectioning 3D microscope "Genus_3D". (c) Materials Genome Archive (JSPS). Image acquisition: It is important for researchers to choose a 3D microscope from among various microscopes depending on the length scale of the focused microstructure. There has been a long-standing request to acquire 3D microstructure images more conveniently. Therefore a fully automated serial sectioning 3D optical microscope, "Genus_3D" (Fig. 1B), has been developed, and it is nowadays commercially available. A user can get a good
7th High Dimensional Probability Meeting
Mason, David; Reynaud-Bouret, Patricia; Rosinski, Jan
2016-01-01
This volume collects selected papers from the 7th High Dimensional Probability meeting held at the Institut d'Études Scientifiques de Cargèse (IESC) in Corsica, France. High Dimensional Probability (HDP) is an area of mathematics that includes the study of probability distributions and limit theorems in infinite-dimensional spaces such as Hilbert spaces and Banach spaces. The most remarkable feature of this area is that it has resulted in the creation of powerful new tools and perspectives, whose range of application has led to interactions with other subfields of mathematics, statistics, and computer science. These include random matrices, nonparametric statistics, empirical processes, statistical learning theory, concentration of measure phenomena, strong and weak approximations, functional estimation, combinatorial optimization, and random graphs. The contributions in this volume show that HDP theory continues to thrive and develop new tools, methods, techniques and perspectives to analyze random phenome...
Social and behavioral problems among five gambling severity groups.
Moghaddam, Jacquelene F; Yoon, Gihyun; Campos, Michael D; Fong, Timothy W
2015-12-15
Gambling has been associated with various social and behavioral problems, but previous analyses have been limited by sample bias regarding the range of gambling symptom severity and the role of antisocial personality disorder (ASPD). This study utilized a nationally representative data set and examined various characteristics of behavioral problems and ASPD among five gambling severity groups. Participants were 42,038 individuals who took part in the National Epidemiologic Survey on Alcohol and Related Conditions (NESARC) and provided information on social and behavioral problems, ASPD, and gambling. Using DSM-IV criteria, we derived five gambling groups from the total sample: non-gambling, low-risk, at-risk, problem, and pathological gambling. Associations between all problematic behaviors and nearly every gambling severity level were significant prior to adjustment for sociodemographic variables and ASPD. Following the adjustment, all significant associations persisted, with the exception of sexual coercion. In the adjusted model, the financially oriented behaviors had the strongest associations with gambling. All gambling severity levels were associated with an increased risk for a number of problematic behaviors and social problems in comparison to non-gamblers. Further examination of gambling problems in financial and criminal justice settings is recommended. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
Bayesian Analysis of High Dimensional Classification
Mukhopadhyay, Subhadeep; Liang, Faming
2009-12-01
Modern data mining and bioinformatics have presented an important playground for statistical learning techniques, where the number of input variables is possibly much larger than the sample size of the training data. In supervised learning, logistic regression or probit regression can be used to model a binary output and form perceptron classification rules based on Bayesian inference. In these cases, there is a lot of interest in searching for sparse models in the high-dimensional regression (or classification) setup. We first discuss two common challenges for analyzing high-dimensional data. The first is the curse of dimensionality: the complexity of many existing algorithms scales exponentially with the dimensionality of the space, by virtue of which the algorithms soon become computationally intractable and therefore inapplicable in many real applications. The second is multicollinearity among the predictors, which severely slows down the algorithms. In order to make Bayesian analysis operational in high dimensions, we propose a novel Hierarchical Stochastic Approximation Monte Carlo (HSAMC) algorithm, which overcomes the curse of dimensionality and the multicollinearity of predictors in high dimensions, and also possesses a self-adjusting mechanism to avoid local minima separated by high energy barriers. Models and methods are illustrated by simulations inspired by the field of genomics. Numerical results indicate that HSAMC can work as a general model selection sampler in high-dimensional complex model spaces.
48 Clothing Problems of Upper Middle Socio-Economic Group ...
African Journals Online (AJOL)
Nekky Umera
Another problem of this group is that they are highly involved in fashion. Fashion involvement is a consumer's perceived importance of fashion clothing (O'Cass, 2001). Tigert (1976) found that fashion involvement is composed of five dimensions of fashion adoption-related behavior: a) fashion innovativeness and time of ...
Wellness Incentives, Equity, and the 5 Groups Problem
2012-01-01
Wellness incentives are an increasingly popular means of encouraging participation in prevention programs, but they may not benefit all groups equally. To assist those planning, conducting, and evaluating incentive programs, I describe the impact of incentives on 5 groups: the “lucky ones,” the “yes-I-can” group, the “I'll-do-it-tomorrow” group, the “unlucky ones,” and the “leave-me-alone” group. The 5 groups problem concerns the question of when disparities in the capacity to use incentive programs constitute unfairness and how policymakers ought to respond. I outline 4 policy options: to continue to offer incentives universally, to offer them universally but with modifications, to offer targeted rather than universal programs, and to abandon incentive programs altogether. PMID:22095346
Group Work Tests for Context-Rich Problems
Meyer, Chris
2016-05-01
The group work test is an assessment strategy that promotes higher-order thinking skills for solving context-rich problems. With this format, teachers are able to pose challenging, nuanced questions on a test, while providing the support weaker students need to get started and show their understanding. The test begins with a group discussion phase, when students are given a "number-free" version of the problem. This phase allows students to digest the story-like problem, explore solution ideas, and alleviate some test anxiety. After 10-15 minutes of discussion, students inform the instructor of their readiness for the individual part of the test. What follows next is a pedagogical phase change from lively group discussion to quiet individual work. The group work test is a natural continuation of the group work in our daily physics classes and helps reinforce the importance of collaboration. This method has met with success at York Mills Collegiate Institute, in Toronto, Ontario, where it has been used consistently for unit tests and the final exam of the grade 12 university preparation physics course.
Epistemic Impact on Group Problem Solving for Different Science Majors
Mason, Andrew J
2016-01-01
Implementation of cognitive apprenticeship in an introductory physics lab group problem solving exercise may be mitigated by epistemic views toward physics of non-physics science majors. Quantitative pre-post data of the Force Concept Inventory (FCI) and Colorado Learning Attitudes About Science Survey (CLASS) of 39 students of a first-semester algebra-based introductory physics course, while describing typical results for a traditional-format course overall (g = +0.14), suggest differences in epistemic views between health science majors and life science majors which may correlate with differences in pre-post conceptual understanding. Audiovisual data of student lab groups working on a context-rich problem and students' written reflections described each group's typical dynamics and invoked epistemic games. We examined the effects of framework-based orientation (favored by biology majors) and performance-based orientation (favored by computer science, chemistry, and health science majors) on pre-post attitud...
Introduction to high-dimensional statistics
Giraud, Christophe
2015-01-01
Ever-greater computing technologies have given rise to an exponentially growing volume of data. Today massive data sets (with potentially thousands of variables) play an important role in almost every branch of modern human activity, including networks, finance, and genetics. However, analyzing such data has presented a challenge for statisticians and data analysts and has required the development of new statistical methods capable of separating the signal from the noise.Introduction to High-Dimensional Statistics is a concise guide to state-of-the-art models, techniques, and approaches for ha
Modeling high dimensional multichannel brain signals
Hu, Lechuan
2017-03-27
In this paper, our goal is to model functional and effective (directional) connectivity in networks of multichannel brain physiological signals (e.g., electroencephalograms, local field potentials). The primary challenges here are twofold: first, there are major statistical and computational difficulties in modeling and analyzing high-dimensional multichannel brain signals; second, there is no set of universally agreed measures for characterizing connectivity. To model multichannel brain signals, our approach is to fit a vector autoregressive (VAR) model of sufficiently high order so that complex lead-lag temporal dynamics between the channels can be accurately characterized. However, such a model contains a large number of parameters. Thus, we estimate the high-dimensional VAR parameter space by our proposed hybrid LASSLE method (LASSO+LSE), which imposes regularization in the first step (to control for sparsity) and constrained least squares estimation in the second step (to improve the bias and mean-squared error of the estimator). Then, to characterize connectivity between channels in a brain network, we use various measures but put an emphasis on partial directed coherence (PDC) in order to capture directional connectivity between channels. PDC is a directed frequency-specific measure that explains the extent to which the present oscillatory activity in a sender channel influences the future oscillatory activity in a specific receiver channel relative to all possible receivers in the network. Using the proposed modeling approach, we have gained some insights on learning in a rat engaged in a non-spatial memory task.
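The two-step hybrid idea (sparse selection followed by a least squares refit on the selected support) can be sketched on a toy VAR(1). Everything below — the network size, the ISTA solver, and the penalty `lam` — is an illustrative assumption, not the paper's actual LASSLE implementation:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate a sparse VAR(1): x_t = A x_{t-1} + noise (a tiny stand-in for a brain network).
p, T = 6, 400
A_true = np.zeros((p, p))
A_true[0, 1] = 0.5
A_true[2, 3] = -0.4
np.fill_diagonal(A_true, 0.3)
x = np.zeros((p, T))
for t in range(1, T):
    x[:, t] = A_true @ x[:, t - 1] + 0.1 * rng.normal(size=p)

X, Y = x[:, :-1].T, x[:, 1:].T       # regress x_t on x_{t-1}, one column of Y per channel

def lasso_ista(X, y, lam, steps=500):
    """Step 1: LASSO via proximal gradient (ISTA) to select a sparse support."""
    b = np.zeros(X.shape[1])
    eta = 1.0 / np.linalg.norm(X, 2) ** 2    # step size from the largest singular value
    for _ in range(steps):
        b = b - eta * (X.T @ (X @ b - y))
        b = np.sign(b) * np.maximum(np.abs(b) - eta * lam, 0.0)  # soft-threshold
    return b

A_hat = np.zeros((p, p))
for i in range(p):
    support = np.flatnonzero(np.abs(lasso_ista(X, Y[:, i], lam=0.5)) > 1e-6)
    if support.size:                          # Step 2: least squares refit on the support
        A_hat[i, support] = np.linalg.lstsq(X[:, support], Y[:, i], rcond=None)[0]
```

The refit in step 2 removes the shrinkage bias that the LASSO penalty introduces on the selected coefficients, which mirrors the stated motivation for the second (LSE) stage.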
The management of social problems talk in a support group
Directory of Open Access Journals (Sweden)
Andrezza Gomes Peretti
2013-01-01
The comprehension of the health-disease process from a multifactorial perspective has allowed important transformations in healthcare practices. In this article, we discuss the use of the support group as a resource for mental health care, analyzing how conversations about social issues are managed in this context. Based on contributions from the social constructionist movement, we analyzed the transcripts of the conversations developed in meetings of a support group offered to patients of a mental health outpatient clinic. The analysis of the process of meaning making indicates that the discourse of social influence on mental health is not legitimized, due to a predominant individualistic discourse, which psychologizes care and is centered on the emotional analysis of everyday problems. We argue that this mode of management imposes limits on the construction of the group as a device for promoting autonomy and encouraging processes of social transformation.
Solving problems with group work in problem-based learning: hold on to the philosophy.
Dolmans, D H; Wolfhagen, I H; van der Vleuten, C P; Wijnen, W H
2001-09-01
Problem-based learning (PBL) has gained a foothold within many schools in higher education as a response to the problems faced within traditional education. Working with PBL tutorial groups is assumed to have positive effects on student learning. Several studies provide empirical evidence that PBL stimulates cognitive effects and leads to restructuring of knowledge and enhanced intrinsic interest in the subject matter. However, staff members do not always experience the positive effects of group work which they had hoped for. When confronted with problems in group work, such as students who only maintain an appearance of being actively involved and students who let others do the work, teachers all too often implement solutions which can be characterized as teacher- directed rather than student-directed. Teachers tend to choose solutions which are familiar from their own experience during professional training, i.e. using the teacher-directed model. These solutions are not effective in improving group work and the negative experiences persist. It is argued that teachers should hold on to the underlying educational philosophy when solving problems arising from group work in PBL, by choosing actions which are consistent with the student-directed view of education in PBL.
Exact multilocal renormalization group and applications to disordered problems
Chauve, Pascal; Le Doussal, Pierre
2001-11-01
We develop a method, the exact multilocal renormalization group (EMRG), which applies to a broad set of theories. It is based on the systematic multilocal expansion of the Polchinski-Wilson exact renormalization group (ERG) equation together with a scheme to compute correlation functions. Integrating out explicitly the nonlocal interactions, we reduce the ERG equation obeyed by the full interaction functional to a flow equation for a function, its local part. This is done perturbatively around fixed points, but exactly to any given order in the local part. It is thus controlled, at variance with projection methods, e.g., derivative expansions or local potential approximations. Our EMRG method is well suited to problems such as the pinning of disordered elastic systems, previously described via the functional renormalization group (FRG) approach based on a hard cutoff scheme. Since it involves arbitrary cutoff functions, we explicitly verify universality to O(ε), where ε = 4 − D, both for the T = 0 FRG equation and for correlations. Extension to finite temperature T yields the finite-size (L) susceptibility fluctuations characterizing mesoscopic behavior, (Δχ)² ~ L^θ/T, where θ is the energy exponent. Finally, we obtain the universal scaling function to O(ε^{1/3}) which describes the ground state of a domain wall in a random field confined by a field gradient, compare with exact results and the variational method. Explicit two-loop exact RG equations are derived and the application to the FRG problem is sketched.
Resonating-group method for nuclear many-body problems
Energy Technology Data Exchange (ETDEWEB)
Tang, Y.C.; LeMere, M.; Thompson, D.R.
1977-01-01
The resonating-group method is a microscopic method which uses fully antisymmetric wave functions, treats correctly the motion of the total center of mass, and takes cluster correlation into consideration. In this review, the formulation of this method is discussed for various nuclear many-body problems, and a complex-generator-coordinate technique which has been employed to evaluate matrix elements required in resonating-group calculations is described. Several illustrative examples of bound-state, scattering, and reaction calculations, which serve to demonstrate the usefulness of this method, are presented. Finally, by utilization of the results of these calculations, the role played by the Pauli principle in nuclear scattering and reaction processes is discussed. 21 figures, 2 tables, 185 references.
Scalable Nearest Neighbor Algorithms for High Dimensional Data.
Muja, Marius; Lowe, David G
2014-11-01
For many computer vision and machine learning problems, large training sets are key for good performance. However, the most computationally expensive part of many computer vision and machine learning algorithms consists of finding nearest neighbor matches to high dimensional vectors that represent the training data. We propose new algorithms for approximate nearest neighbor matching and evaluate and compare them with previous algorithms. For matching high dimensional features, we find two algorithms to be the most efficient: the randomized k-d forest and a new algorithm proposed in this paper, the priority search k-means tree. We also propose a new algorithm for matching binary features by searching multiple hierarchical clustering trees and show it outperforms methods typically used in the literature. We show that the optimal nearest neighbor algorithm and its parameters depend on the data set characteristics and describe an automated configuration procedure for finding the best algorithm to search a particular data set. In order to scale to very large data sets that would otherwise not fit in the memory of a single machine, we propose a distributed nearest neighbor matching framework that can be used with any of the algorithms described in the paper. All this research has been released as an open source library called fast library for approximate nearest neighbors (FLANN), which has been incorporated into OpenCV and is now one of the most popular libraries for nearest neighbor matching.
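The k-d tree idea underlying the randomized k-d forest can be illustrated with a minimal pure-Python sketch. This is not FLANN's implementation: the names are invented for this example, and a single exact-search tree is shown rather than the randomized, approximate forest the paper proposes.

```python
import random

def build_kdtree(points, depth=0):
    """Recursively build a k-d tree: split on one coordinate axis per level."""
    if not points:
        return None
    axis = depth % len(points[0])
    points = sorted(points, key=lambda p: p[axis])
    mid = len(points) // 2
    return {
        "point": points[mid],
        "left": build_kdtree(points[:mid], depth + 1),
        "right": build_kdtree(points[mid + 1:], depth + 1),
    }

def sq_dist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def nn_search(node, query, depth=0, best=None):
    """Exact nearest-neighbour search with branch pruning."""
    if node is None:
        return best
    if best is None or sq_dist(query, node["point"]) < sq_dist(query, best):
        best = node["point"]
    axis = depth % len(query)
    diff = query[axis] - node["point"][axis]
    near, far = (node["left"], node["right"]) if diff < 0 else (node["right"], node["left"])
    best = nn_search(near, query, depth + 1, best)
    # Only descend into the far subtree if it could contain a closer point.
    if diff ** 2 < sq_dist(query, best):
        best = nn_search(far, query, depth + 1, best)
    return best

random.seed(0)
pts = [tuple(random.random() for _ in range(3)) for _ in range(200)]
tree = build_kdtree(pts)
q = (0.5, 0.5, 0.5)
found = nn_search(tree, q)
brute = min(pts, key=lambda p: sq_dist(p, q))
print(found == brute)
```

FLANN's randomized variant builds several such trees with randomized split dimensions and caps the number of leaves examined, trading exactness for speed in high dimensions.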
Applications of Asymptotic Sampling on High Dimensional Structural Dynamic Problems
DEFF Research Database (Denmark)
Sichani, Mahdi Teimouri; Nielsen, Søren R.K.; Bucher, Christian
2011-01-01
The paper presents the application of asymptotic sampling to various structural models subjected to random excitations. A detailed study on the effect of different distributions of the so-called support points is performed. This study shows that the distribution of the support points has considerable effect on the final estimations of the method, in particular on the coefficient of variation of the estimated failure probability. Based on these observations, a simple optimization algorithm is proposed which distributes the support points so that the coefficient of variation of the method is minimized. Next, the method is applied to different cases of linear and nonlinear systems with a large number of random variables representing the dynamic excitation. The results show that asymptotic sampling is capable of providing good approximations of low failure probability events for very high...
Solving the Cell Formation Problem in Group Technology
Directory of Open Access Journals (Sweden)
Prafulla Joglekar
2001-01-01
Over the last three decades, numerous algorithms have been proposed to solve the work-cell formation problem. For practicing manufacturing managers it would be nice to know which algorithm would be most effective and efficient for their specific situation. While several studies have attempted to fulfill this need, most have not resulted in any definitive recommendations, and a better methodology for evaluating cell formation algorithms is urgently needed. Prima facie, the methodology underlying Miltenburg and Zhang's (M&Z) (1991) evaluation of nine well-known cell formation algorithms seems very promising. The primary performance measure proposed by M&Z effectively captures the objectives of a good solution to a cell formation problem and is worthy of use in future studies. Unfortunately, a critical review of M&Z's work also reveals certain important flaws in their methodology. First, M&Z may not have duplicated each algorithm precisely as the developer(s) of that algorithm intended. Second, M&Z misrepresent Chandrasekharan and Rajagopalan's (C&R's) (1986) grouping efficiency measure. Third, M&Z's secondary performance measures lead them to unnecessarily ambivalent results. Fourth, several of M&Z's empirical conclusions can be theoretically deduced. It is hoped that future evaluations of cell formation algorithms will benefit from both the strengths and weaknesses of M&Z's work.
Modeling High-Dimensional Multichannel Brain Signals
Hu, Lechuan
2017-12-12
Our goal is to model and measure functional and effective (directional) connectivity in multichannel brain physiological signals (e.g., electroencephalograms, local field potentials). The difficulties from analyzing these data mainly come from two aspects: first, there are major statistical and computational challenges for modeling and analyzing high-dimensional multichannel brain signals; second, there is no set of universally agreed measures for characterizing connectivity. To model multichannel brain signals, our approach is to fit a vector autoregressive (VAR) model with potentially high lag order so that complex lead-lag temporal dynamics between the channels can be captured. Estimates of the VAR model will be obtained by our proposed hybrid LASSLE (LASSO + LSE) method which combines regularization (to control for sparsity) and least squares estimation (to improve bias and mean-squared error). Then we employ some measures of connectivity but put an emphasis on partial directed coherence (PDC) which can capture the directional connectivity between channels. PDC is a frequency-specific measure that explains the extent to which the present oscillatory activity in a sender channel influences the future oscillatory activity in a specific receiver channel relative to all possible receivers in the network. The proposed modeling approach provided key insights into potential functional relationships among simultaneously recorded sites during performance of a complex memory task. Specifically, this novel method was successful in quantifying patterns of effective connectivity across electrode locations, and in capturing how these patterns varied across trial epochs and trial types.
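As a simplified illustration of the first modeling step, the sketch below fits a 2-channel VAR(1) by plain least squares on simulated data. This is a hedged sketch only: it omits the LASSO regularization stage of the proposed hybrid LASSLE method, and the function names are invented for this example.

```python
import random

def var1_ols(series):
    """OLS estimate of A in X_t = A X_{t-1} + e for a 2-channel series.

    series: list of (x1, x2) tuples. Returns A_hat as a 2x2 nested list,
    computed as (Y'Z)(Z'Z)^{-1} with Z the lagged values.
    """
    Z = series[:-1]          # lagged predictors
    Y = series[1:]           # responses
    ZtZ = [[0.0, 0.0], [0.0, 0.0]]
    YtZ = [[0.0, 0.0], [0.0, 0.0]]
    for z, y in zip(Z, Y):
        for i in range(2):
            for j in range(2):
                ZtZ[i][j] += z[i] * z[j]
                YtZ[i][j] += y[i] * z[j]
    # Invert the 2x2 matrix Z'Z directly.
    det = ZtZ[0][0] * ZtZ[1][1] - ZtZ[0][1] * ZtZ[1][0]
    inv = [[ZtZ[1][1] / det, -ZtZ[0][1] / det],
           [-ZtZ[1][0] / det, ZtZ[0][0] / det]]
    return [[sum(YtZ[i][k] * inv[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# Simulate a VAR(1) with known coefficient matrix A and recover it.
random.seed(1)
A = [[0.5, 0.2], [0.0, 0.3]]
x = (0.0, 0.0)
data = []
for _ in range(5000):
    e = (random.gauss(0, 1), random.gauss(0, 1))
    x = (A[0][0] * x[0] + A[0][1] * x[1] + e[0],
         A[1][0] * x[0] + A[1][1] * x[1] + e[1])
    data.append(x)
A_hat = var1_ols(data)
print(all(abs(A_hat[i][j] - A[i][j]) < 0.1 for i in range(2) for j in range(2)))
```

With many channels and a high lag order the number of coefficients explodes, which is where the regularization (sparsity) component of LASSLE becomes necessary.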
Chen, L. X.; Wu, Q. P.
2012-10-01
Recently, Dada et al. reported on the experimental entanglement concentration and violation of generalized Bell inequalities with orbital angular momentum (OAM) [Nat. Phys. 7, 677 (2011)]. Here we demonstrate that high-dimensional entanglement concentration can be performed in arbitrary OAM subspaces with selectivity. Instead of violating the generalized Bell inequalities, the working principle of the present entanglement concentration is visualized by the biphoton OAM Klyshko picture, and its good performance is confirmed and quantified through the experimental Shannon dimensionalities after concentration.
Testing the mean matrix in high-dimensional transposable data.
Touloumis, Anestis; Tavaré, Simon; Marioni, John C
2015-03-01
The structural information in high-dimensional transposable data allows us to write the data recorded for each subject in a matrix such that both the rows and the columns correspond to variables of interest. One important problem is to test the null hypothesis that the mean matrix has a particular structure without ignoring the dependence structure among and/or between the row and column variables. To address this, we develop a generic and computationally inexpensive nonparametric testing procedure to assess the hypothesis that, in each predefined subset of columns (rows), the column (row) mean vector remains constant. In simulation studies, the proposed testing procedure seems to have good performance and, unlike simple practical approaches, it preserves the nominal size and remains powerful even if the row and/or column variables are not independent. Finally, we illustrate the use of the proposed methodology via two empirical examples from gene expression microarrays. © 2015, The International Biometric Society.
Class prediction for high-dimensional class-imbalanced data
Directory of Open Access Journals (Sweden)
Lusa Lara
2010-10-01
Background: The goal of class prediction studies is to develop rules to accurately predict the class membership of new samples. The rules are derived using the values of the variables available for each subject: the main characteristic of high-dimensional data is that the number of variables greatly exceeds the number of samples. Frequently the classifiers are developed using class-imbalanced data, i.e., data sets where the number of samples in each class is not equal. Standard classification methods used on class-imbalanced data often produce classifiers that do not accurately predict the minority class; the prediction is biased towards the majority class. In this paper we investigate whether high-dimensionality poses additional challenges when dealing with class-imbalanced prediction. We evaluate the performance of six types of classifiers on class-imbalanced data, using simulated data and a publicly available data set from a breast cancer gene-expression microarray study. We also investigate the effectiveness of some strategies that are available to overcome the effect of class imbalance. Results: Our results show that the evaluated classifiers are highly sensitive to class imbalance and that variable selection introduces an additional bias towards classification into the majority class. Most new samples are assigned to the majority class from the training set, unless the difference between the classes is very large. As a consequence, the class-specific predictive accuracies differ considerably. When the class imbalance is not too severe, down-sizing and asymmetric bagging embedding variable selection work well, while over-sampling does not. Variable normalization can further worsen the performance of the classifiers. Conclusions: Our results show that matching the prevalence of the classes in training and test set does not guarantee good performance of classifiers and that the problems related to classification with class
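The down-sizing strategy mentioned in the results can be sketched in a few lines of Python (an illustrative sketch, not the authors' code; `downsize` is a name invented here): the majority class is randomly subsampled so that both classes are equally represented before the classifier is trained.

```python
import random

def downsize(samples, labels, seed=0):
    """Down-size the majority class so all classes have equal counts."""
    rng = random.Random(seed)
    by_class = {}
    for s, l in zip(samples, labels):
        by_class.setdefault(l, []).append(s)
    n_min = min(len(v) for v in by_class.values())
    out_s, out_l = [], []
    for l, group in by_class.items():
        for s in rng.sample(group, n_min):  # random subsample of each class
            out_s.append(s)
            out_l.append(l)
    return out_s, out_l

X = [[i] for i in range(100)]
y = [0] * 90 + [1] * 10          # severe 9:1 class imbalance
Xb, yb = downsize(X, y)
print(yb.count(0), yb.count(1))  # 10 10
```

Asymmetric bagging extends this idea by repeating the subsampling many times and aggregating the resulting classifiers, so that no majority-class sample is permanently discarded.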
Scaffolding Cards: A Strategy for Facilitating Groups in Problem Solving
Toh, Pee Choon; Dindyal, Jaguthsing; Ho, Foo Him
2013-01-01
Problem solving task design is not only the design of a non-routine problem to be solved by the students. Our task design also requires a supporting document, the practical worksheet, which acts as a cognitive scaffold for the students in the initial stages of the problem solving process before they can internalize the metacognitive…
Global aspects of the renormalization group and the hierarchy problem
Patrascu, Andrei T.
2017-10-01
The discovery of the Higgs boson by the ATLAS and CMS collaborations allowed us to precisely determine its mass being 125.09 ± 0.24 GeV. This value is intriguing as it lies at the frontier between the regions of stability and meta-stability of the standard model vacuum. It is known that the hierarchy problem can be interpreted in terms of the near criticality between the two phases. The coefficient of the Higgs bilinear in the scalar potential, m2, is pushed by quantum corrections away from zero, towards the extremes of the interval [ - MPl2, MPl2 ] where MPl is the Planck mass. In this article, I show that demanding topological invariance for the renormalisation group allows us to extend the beta functions such that the particular value of the Higgs mass parameter observed in our universe regains naturalness. In holographic terms, invariance to changes of topology in the bulk is dual to a natural large hierarchy in the boundary quantum field theory. The demand of invariance to topology changes in the bulk appears to be strongly tied to the invariance of string theory to T-duality in the presence of H-fluxes.
Random Group Problem-Based Learning in Engineering Dynamics
Fleischfresser, Luciano
2014-01-01
Dynamics problem solving is highly specific to the problem at hand, and developing the general mental framework to become an effective problem solver requires ingenuity and creativity on top of a solid grounding in theoretical and conceptual knowledge. A blended approach with a prototype demo, problem-based learning, and an opinion questionnaire was used during the first semester of 2013. Students working in randomly selected teams had to interact with classmates while solving a randomly selected problem. The approach helps improve awareness of what is important to learn in this class while reducing grading load. It also provides a more rewarding contact time for both pupils and instructor.
A Cure for Variance Inflation in High Dimensional Kernel Principal Component Analysis
DEFF Research Database (Denmark)
Abrahamsen, Trine Julie; Hansen, Lars Kai
2011-01-01
Small sample high-dimensional principal component analysis (PCA) suffers from variance inflation and lack of generalizability. It has earlier been pointed out that a simple leave-one-out variance renormalization scheme can cure the problem. In this paper we generalize the cure in two directions...
Global communication schemes for the numerical solution of high-dimensional PDEs
DEFF Research Database (Denmark)
Hupp, Philipp; Heene, Mario; Jacob, Riko
2016-01-01
The numerical treatment of high-dimensional partial differential equations is among the most compute-hungry problems and is in urgent need of current and future high-performance computing (HPC) systems. It is thus also facing the grand challenges of exascale computing such as the requirement to red...
Engineering two-photon high-dimensional states through quantum interference.
Zhang, Yingwen; Roux, Filippus S; Konrad, Thomas; Agnew, Megan; Leach, Jonathan; Forbes, Andrew
2016-02-01
Many protocols in quantum science, for example, linear optical quantum computing, require access to large-scale entangled quantum states. Such systems can be realized through many-particle qubits, but this approach often suffers from scalability problems. An alternative strategy is to consider a lesser number of particles that exist in high-dimensional states. The spatial modes of light are one such candidate that provides access to high-dimensional quantum states, and thus they increase the storage and processing potential of quantum information systems. We demonstrate the controlled engineering of two-photon high-dimensional states entangled in their orbital angular momentum through Hong-Ou-Mandel interference. We prepare a large range of high-dimensional entangled states and implement precise quantum state filtering. We characterize the full quantum state before and after the filter, and are thus able to determine that only the antisymmetric component of the initial state remains. This work paves the way for high-dimensional processing and communication of multiphoton quantum states, for example, in teleportation beyond qubits.
[Preface to the population problems of minority groups in China].
Wakabayashi, K
1988-04-01
The population of minority groups in China totals 67,233,300 according to the 1982 census. This comprises about 6.7% of the whole population of China. These minority groups are grouped into 55 identifiably distinct groups and 879,201 unidentified non-Han people. The presence of minority groups is very important in terms of political and economic integration within China, as well as for military reasons. Most of these groups live near the national borders and in the areas where mining resources exist. It is important to realize that their population has been increasing since 1978. One reason for this is that family planning has not been strictly enforced for these groups. Another is that economic conditions in minority group areas have improved, so that infant death rates have plunged. As a further factor, since 1978, minority groups have been treated warmly by the government, with governmental policies much like affirmative action plans for blacks in the US. This policy is most advantageous in terms of school entrance, job-seeking, delivery of children, and promotion to positions of party leadership. Thus, many hidden minorities and the Han-Chinese have begun using the names of minority groups. Examination of the groups in greater detail, especially their kinship and customs, has become an issue of the first priority to help pave the way to a smooth modernization. (author's modified)
Communication on a problem solving task in cooperative learning groups
Sadler, Jo; Fawns, Rod
1992-12-01
There is some evidence from this study that reflectivity within cooperative learning groups develops over time. Preliminary observations suggest that Slavin's third and fourth levels of skills, those of reflection and reasoning and reconception and reformulation and Kempa and Ayob's higher levels of explanation and insight appear more advanced in groups strategically managed by teachers for such outcomes. Later analyses will permit more detailed accounts of the relationships between the teacher's management strategies, and reflection within groups of different gender composition.
Clothing Problems of Upper Middle Socio-Economic Group ...
African Journals Online (AJOL)
Recommendations for addressing the clothing problems include that, rather than ban imported clothing, Government should issue importation licenses for them and impose heavy taxes on them. This will help in making the desired imported clothing available for the women, reducing the smuggling of these textiles and as well ...
Energy Technology Data Exchange (ETDEWEB)
Li, Weixuan; Lin, Guang; Li, Bing
2016-09-01
A well-known challenge in uncertainty quantification (UQ) is the "curse of dimensionality". However, many high-dimensional UQ problems are essentially low-dimensional, because the randomness of the quantity of interest (QoI) is caused only by uncertain parameters varying within a low-dimensional subspace, known as the sufficient dimension reduction (SDR) subspace. Motivated by this observation, we propose and demonstrate in this paper an inverse regression-based UQ approach (IRUQ) for high-dimensional problems. Specifically, we use an inverse regression procedure to estimate the SDR subspace and then convert the original problem to a low-dimensional one, which can be efficiently solved by building a response surface model such as a polynomial chaos expansion. The novelty and advantages of the proposed approach are seen in its computational efficiency and practicality. Compared with Monte Carlo, the traditionally preferred approach for high-dimensional UQ, IRUQ with a comparable cost generally gives much more accurate solutions even for high-dimensional problems, and even when the dimension reduction is not exactly sufficient. Theoretically, IRUQ is proved to converge twice as fast as the approach it uses to seek the SDR subspace. For example, while a sliced inverse regression method converges to the SDR subspace at the rate of $O(n^{-1/2})$, the corresponding IRUQ converges at $O(n^{-1})$. IRUQ also provides several desired conveniences in practice. It is non-intrusive, requiring only a simulator to generate realizations of the QoI, and there is no need to compute the high-dimensional gradient of the QoI. Finally, error bars can be derived for the estimation results reported by IRUQ.
Group theory and its application to physical problems
Hamermesh, Morton
1989-01-01
This excellent text, long considered one of the best-written, most skillful expositions of group theory and its physical applications, is directed primarily to advanced undergraduate and graduate students in physics, especially quantum physics. No knowledge of group theory is assumed, but the reader is expected to be familiar with quantum mechanics.
Multivariate statistics high-dimensional and large-sample approximations
Fujikoshi, Yasunori; Shimizu, Ryoichi
2010-01-01
A comprehensive examination of high-dimensional analysis of multivariate methods and their real-world applications Multivariate Statistics: High-Dimensional and Large-Sample Approximations is the first book of its kind to explore how classical multivariate methods can be revised and used in place of conventional statistical tools. Written by prominent researchers in the field, the book focuses on high-dimensional and large-scale approximations and details the many basic multivariate methods used to achieve high levels of accuracy. The authors begin with a fundamental presentation of the basic
Progress in high-dimensional percolation and random graphs
Heydenreich, Markus
2017-01-01
This text presents an engaging exposition of the active field of high-dimensional percolation that will likely provide an impetus for future work. With over 90 exercises designed to enhance the reader’s understanding of the material, as well as many open problems, the book is aimed at graduate students and researchers who wish to enter the world of this rich topic. The text may also be useful in advanced courses and seminars, as well as for reference and individual study. Part I, consisting of 3 chapters, presents a general introduction to percolation, stating the main results, defining the central objects, and proving its main properties. No prior knowledge of percolation is assumed. Part II, consisting of Chapters 4–9, discusses mean-field critical behavior by describing the two main techniques used, namely, differential inequalities and the lace expansion. In Parts I and II, all results are proved, making this the first self-contained text discussing high-dimensional percolation. Part III, consist...
Inference for High-dimensional Differential Correlation Matrices.
Cai, T Tony; Zhang, Anru
2016-01-01
Motivated by differential co-expression analysis in genomics, we consider in this paper estimation and testing of high-dimensional differential correlation matrices. An adaptive thresholding procedure is introduced and theoretical guarantees are given. Minimax rate of convergence is established and the proposed estimator is shown to be adaptively rate-optimal over collections of paired correlation matrices with approximately sparse differences. Simulation results show that the procedure significantly outperforms two other natural methods that are based on separate estimation of the individual correlation matrices. The procedure is also illustrated through an analysis of a breast cancer dataset, which provides evidence at the gene co-expression level that several genes, of which a subset has been previously verified, are associated with the breast cancer. Hypothesis testing on the differential correlation matrices is also considered. A test, which is particularly well suited for testing against sparse alternatives, is introduced. In addition, other related problems, including estimation of a single sparse correlation matrix, estimation of the differential covariance matrices, and estimation of the differential cross-correlation matrices, are also discussed.
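The core idea of thresholding an approximately sparse difference of correlation matrices can be sketched as follows. This is a simplified sketch with a single global threshold `t`; the paper's procedure is adaptive, with entry-specific thresholds and theoretical guarantees, and `threshold_diff` is a name invented here.

```python
def threshold_diff(R1, R2, t):
    """Hard-threshold the entrywise difference of two correlation matrices.

    Keeps only entries whose difference exceeds t in magnitude, reflecting
    the assumption that D = R1 - R2 is approximately sparse.
    """
    n = len(R1)
    return [[R1[i][j] - R2[i][j]
             if abs(R1[i][j] - R2[i][j]) > t else 0.0
             for j in range(n)] for i in range(n)]

# Two toy correlation matrices: only the (0, 1) correlation changes a lot.
R1 = [[1.0, 0.8, 0.1],
      [0.8, 1.0, 0.0],
      [0.1, 0.0, 1.0]]
R2 = [[1.0, 0.2, 0.15],
      [0.2, 1.0, 0.05],
      [0.15, 0.05, 1.0]]
D = threshold_diff(R1, R2, t=0.3)
print(D[0][2] == 0.0 and abs(D[0][1] - 0.6) < 1e-9)  # only the large difference survives
```

Thresholding the difference directly, rather than estimating each correlation matrix separately and subtracting, is what allows the procedure to exploit sparsity of the difference even when the individual matrices are dense.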
Bayesian Variable Selection in High-dimensional Applications
V. Rockova (Veronika)
2013-01-01
Abstract: Advances in research technologies over the past few decades have encouraged the proliferation of massive datasets, revolutionizing statistical perspectives on high-dimensionality. High-throughput technologies have become pervasive in diverse scientific disciplines
Food Security Problems in Various Income Groups of Kenya | Olielo ...
African Journals Online (AJOL)
This research was carried out to assess consumption of foods and characteristics of various income groups and determine factors that cause food insecurity. ... food production as well as improved health status and sanitation to enable earning of high income to reduce poverty and consequently enhance food security and ...
Directory of Open Access Journals (Sweden)
L.V. Arun Shalin
2016-01-01
Clustering is a process of grouping elements together, designed in such a way that the elements assigned to similar data points in a cluster are more comparable to each other than the remaining data points in a cluster. During clustering, certain difficulties in dealing with high-dimensional data are ubiquitous and abundant. Prior work using anonymization methods for high-dimensional data spaces failed to address the problem of dimensionality reduction for non-binary databases. In this work we study methods for dimensionality reduction for non-binary databases. Analyzing the behavior of dimensionality reduction for non-binary databases yields performance improvement with the help of tag-based features. An effective multi-clustering anonymization approach called Discrete Component Task Specific Multi-Clustering (DCTSM) is presented for dimensionality reduction on non-binary databases. To start with, we present the analysis of attributes in the non-binary database, and cluster projection identifies the sparseness degree of dimensions. Additionally, with the quantum distribution on the multi-cluster dimension, a solution for the relevancy of attributes and redundancy on non-binary data spaces is provided, resulting in performance improvement on the basis of tag-based features. Multi-clustering tag-based feature reduction extracts individual features, which are correspondingly replaced by the equivalent feature clusters (i.e., tag clusters). During training, the DCTSM approach uses multi-clusters instead of individual tag features, and during decoding, individual features are replaced by the corresponding multi-clusters. To measure the effectiveness of the method, experiments are conducted on an existing anonymization method for high-dimensional data spaces and compared with the DCTSM approach using the Statlog German Credit Data Set. Improved tag feature extraction and minimum error rate compared to conventional anonymization
Hall, Kimberly R.; Rushing, Jeri Lynn; Khurshid, Ayesha
2011-01-01
Problem-focused interventions are considered to be one of the most effective group counseling strategies with adolescents. This article describes a problem-focused group counseling model, Solving Problems Together (SPT), that focuses on working with students who struggle with negative peer pressure. Adapted from the teaching philosophy of…
Brauer groups and obstruction problems moduli spaces and arithmetic
Hassett, Brendan; Várilly-Alvarado, Anthony; Viray, Bianca
2017-01-01
The contributions in this book explore various contexts in which the derived category of coherent sheaves on a variety determines some of its arithmetic. This setting provides new geometric tools for interpreting elements of the Brauer group. With a view towards future arithmetic applications, the book extends a number of powerful tools for analyzing rational points on elliptic curves, e.g., isogenies among curves, torsion points, modular curves, and the resulting descent techniques, as well as higher-dimensional varieties like K3 surfaces. Inspired by the rapid recent advances in our understanding of K3 surfaces, the book is intended to foster cross-pollination between the fields of complex algebraic geometry and number theory. Contributors: · Nicolas Addington · Benjamin Antieau · Kenneth Ascher · Asher Auel · Fedor Bogomolov · Jean-Louis Colliot-Thélène · Krishna Dasaratha · Brendan Hassett · Colin Ingalls · Martí Lahoz · Emanuele Macrì · Kelly McKinnie · Andrew Obus · Ekin Ozman · Raman...
Distributed Computation of the knn Graph for Large High-Dimensional Point Sets.
Plaku, Erion; Kavraki, Lydia E
2007-03-01
High-dimensional problems arising from robot motion planning, biology, data mining, and geographic information systems often require the computation of k nearest neighbor (knn) graphs. The knn graph of a data set is obtained by connecting each point to its k closest points. As the research in the above-mentioned fields progressively addresses problems of unprecedented complexity, the demand for computing knn graphs based on arbitrary distance metrics and large high-dimensional data sets increases, exceeding resources available to a single machine. In this work we efficiently distribute the computation of knn graphs for clusters of processors with message passing. Extensions to our distributed framework include the computation of graphs based on other proximity queries, such as approximate knn or range queries. Our experiments show nearly linear speedup with over one hundred processors and indicate that similar speedup can be obtained with several hundred processors.
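The knn graph itself is straightforward to define; a brute-force pure-Python sketch is below (illustrative only, not the authors' distributed framework). Because each row of the graph is computed independently, the outer loop is exactly what a distributed implementation shards across processors.

```python
import heapq

def knn_graph(points, k, dist):
    """Brute-force knn graph: connect each point to its k closest neighbours.

    Returns {index: [neighbour indices]}. Each row is independent, so a
    distributed version can assign disjoint ranges of i to different workers.
    """
    graph = {}
    for i, p in enumerate(points):
        graph[i] = heapq.nsmallest(
            k, (j for j in range(len(points)) if j != i),
            key=lambda j: dist(p, points[j]))
    return graph

def euclid2(a, b):
    """Squared Euclidean distance (any metric could be plugged in here)."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

pts = [(0, 0), (0, 1), (5, 5), (0, 2)]
g = knn_graph(pts, 2, euclid2)
print(g[0])  # [1, 3]: the two closest points to (0, 0)
```

The brute-force cost is quadratic in the number of points, which is why large data sets require both approximation (as in the proximity-query extensions mentioned above) and distribution across machines.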
Approximation of High-Dimensional Rank One Tensors
Bachmayr, Markus
2013-11-12
Many real world problems are high-dimensional in that their solution is a function which depends on many variables or parameters. This presents a computational challenge since traditional numerical techniques are built on model classes for functions based solely on smoothness. It is known that the approximation of smoothness classes of functions suffers from the so-called 'curse of dimensionality'. Avoiding this curse requires new model classes for real world functions that match applications. This has led to the introduction of notions such as sparsity, variable reduction, and reduced modeling. One theme that is particularly common is to assume a tensor structure for the target function. This paper investigates how well a rank one function f(x_1, …, x_d) = f_1(x_1) ⋯ f_d(x_d), defined on Ω = [0,1]^d, can be captured through point queries. It is shown that such a rank one function with component functions f_j in W^r_∞([0,1]) can be captured (in L_∞) to accuracy O(C(d,r)N^{-r}) from N well-chosen point evaluations. The constant C(d,r) scales like d^{dr}. The queries in our algorithms have two ingredients, a set of points built on the results from discrepancy theory and a second adaptive set of queries dependent on the information drawn from the first set. Under the assumption that a point z ∈ Ω with nonvanishing f(z) is known, the accuracy improves to O(dN^{-r}). © 2013 Springer Science+Business Media New York.
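The favorable regime where a point z with f(z) ≠ 0 is known simplifies nicely for d = 2: the rank-one identity f(x_1, x_2) = f(x_1, z_2) f(z_1, x_2) / f(z_1, z_2) recovers f everywhere from queries along just two axis-parallel lines through z, i.e. O(dN) evaluations. A hedged sketch of this identity (illustrative code only, not the paper's algorithm, which also handles unknown z and bounds the approximation error):

```python
def reconstruct(f, z, grid):
    """Reconstruct a rank-one f(x1, x2) = f1(x1) f2(x2) from point queries
    along the two axis-parallel lines through a point z with f(z) != 0,
    using f(x1, x2) = f(x1, z2) * f(z1, x2) / f(z1, z2).
    """
    fz = f(z[0], z[1])
    line1 = {x: f(x, z[1]) for x in grid}   # queries varying x1
    line2 = {x: f(z[0], x) for x in grid}   # queries varying x2
    return lambda x1, x2: line1[x1] * line2[x2] / fz

f = lambda x1, x2: (1 + x1) * (2 - x2)      # a rank-one test function
grid = [0.0, 0.25, 0.5, 0.75, 1.0]
f_hat = reconstruct(f, (0.5, 0.5), grid)
print(all(abs(f_hat(a, b) - f(a, b)) < 1e-12 for a in grid for b in grid))
```

On the grid the reconstruction is exact; between grid points the error is governed by the smoothness of the component functions, which is where the W^r_∞ assumption and the N^{-r} rate enter.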
Quality and efficiency in high dimensional Nearest neighbor search
Tao, Yufei
2009-01-01
Nearest neighbor (NN) search in high dimensional space is an important problem in many applications. Ideally, a practical solution (i) should be implementable in a relational database, and (ii) its query cost should grow sub-linearly with the dataset size, regardless of the data and query distributions. Despite the bulk of NN literature, no solution fulfills both requirements, except locality sensitive hashing (LSH). The existing LSH implementations are either rigorous or ad hoc. Rigorous-LSH ensures good quality of query results, but requires expensive space and query cost. Although adhoc-LSH is more efficient, it abandons quality control, i.e., the neighbor it outputs can be arbitrarily bad. As a result, currently no method is able to ensure both quality and efficiency simultaneously in practice. Motivated by this, we propose a new access method called the locality sensitive B-tree (LSB-tree) that enables fast high-dimensional NN search with excellent quality. The combination of several LSB-trees leads to a structure called the LSB-forest that ensures the same result quality as rigorous-LSH, but reduces its space and query cost dramatically. The LSB-forest also outperforms adhoc-LSH, even though the latter has no quality guarantee. Besides its appealing theoretical properties, the LSB-tree itself also serves as an effective index that consumes linear space, and supports efficient updates. Our extensive experiments confirm that the LSB-tree is faster than (i) the state of the art of exact NN search by two orders of magnitude, and (ii) the best (linear-space) method of approximate retrieval by an order of magnitude, and at the same time, returns neighbors with much better quality. © 2009 ACM.
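The hashing primitive behind LSH can be sketched in a few lines. Below is a generic random-hyperplane hash for angular distance, shown for intuition only: the LSB-tree itself maps LSH ideas onto a B-tree over Z-order values rather than using this exact scheme, and all names here are invented for the example.

```python
import random

def hyperplane_hashes(dim, n_bits, seed=0):
    """Draw n_bits random Gaussian hyperplanes through the origin."""
    rng = random.Random(seed)
    return [[rng.gauss(0, 1) for _ in range(dim)] for _ in range(n_bits)]

def lsh_key(v, planes):
    """One bit per hyperplane: which side of the plane the vector falls on.
    Vectors at a small angle agree on most bits, so they tend to share buckets."""
    return tuple(int(sum(p_i * v_i for p_i, v_i in zip(p, v)) >= 0.0)
                 for p in planes)

planes = hyperplane_hashes(dim=3, n_bits=8, seed=42)
a = (1.0, 0.9, 1.1)
neg_a = tuple(-x for x in a)
# A vector and its negation fall on opposite sides of every hyperplane,
# so their keys are bitwise complements.
print(all(x != y for x, y in zip(lsh_key(a, planes), lsh_key(neg_a, planes))))
```

A full LSH index concatenates several such keys per table and uses multiple tables; the quality/efficiency tension the abstract describes is precisely about how rigorously those parameters are chosen.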
Optimally splitting cases for training and testing high dimensional classifiers
Directory of Open Access Journals (Sweden)
Simon Richard M
2011-04-01
Full Text Available Abstract Background We consider the problem of designing a study to develop a predictive classifier from high dimensional data. A common study design is to split the sample into a training set and an independent test set, where the former is used to develop the classifier and the latter to evaluate its performance. In this paper we address the question of what proportion of the samples should be devoted to the training set. How does this proportion impact the mean squared error (MSE) of the prediction accuracy estimate? Results We develop a non-parametric algorithm for determining an optimal splitting proportion that can be applied with a specific dataset and classifier algorithm. We also perform a broad simulation study for the purpose of better understanding the factors that determine the best split proportions and to evaluate commonly used splitting strategies (1/2 training or 2/3 training) under a wide variety of conditions. These methods are based on a decomposition of the MSE into three intuitive component parts. Conclusions By applying these approaches to a number of synthetic and real microarray datasets we show that for linear classifiers the optimal proportion depends on the overall number of samples available and the degree of differential expression between the classes. The optimal proportion was found to depend on the full dataset size (n) and classification accuracy - with higher accuracy and smaller n resulting in more assigned to the training set. The commonly used strategy of allocating 2/3 of cases for training was close to optimal for reasonably sized datasets (n ≥ 100) with strong signals (i.e. 85% or greater full dataset accuracy). In general, we recommend use of our nonparametric resampling approach for determining the optimal split. This approach can be applied to any dataset, using any predictor development method, to determine the best split.
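The resampling idea can be sketched in a few lines. This is a simplified stand-in, not the authors' algorithm: a nearest-centroid classifier on synthetic two-class data, with the accuracy of the full-data classifier (computable here because the generator is known) as the target of the MSE:

```python
import numpy as np

rng = np.random.default_rng(1)

def make_data(n, d=50, delta=0.5):
    """Two-class Gaussian data; only the first 5 features are informative."""
    y = rng.integers(0, 2, n)
    x = rng.normal(size=(n, d))
    x[y == 1, :5] += delta
    return x, y

def centroid_accuracy(xtr, ytr, xte, yte):
    """Train a nearest-centroid rule and return its test accuracy."""
    c0, c1 = xtr[ytr == 0].mean(0), xtr[ytr == 1].mean(0)
    pred = (np.linalg.norm(xte - c1, axis=1)
            < np.linalg.norm(xte - c0, axis=1)).astype(int)
    return (pred == yte).mean()

def split_mse(x, y, p_train, reps=50):
    """MSE of the test-set accuracy estimate at training proportion p_train."""
    n = len(y)
    xa, ya = make_data(5000)             # large fresh pool approximates truth
    truth = centroid_accuracy(x, y, xa, ya)   # accuracy of the full-data rule
    errs = []
    for _ in range(reps):
        idx = rng.permutation(n)
        k = int(p_train * n)
        tr, te = idx[:k], idx[k:]
        est = centroid_accuracy(x[tr], y[tr], x[te], y[te])
        errs.append((est - truth) ** 2)
    return np.mean(errs)

x, y = make_data(100)
for p in (0.3, 0.5, 0.67, 0.8):
    print(p, round(split_mse(x, y, p), 4))
```

A small training fraction hurts the classifier (bias), a small test fraction makes the estimate noisy (variance); scanning p and picking the minimum MSE mirrors the trade-off the paper's decomposition formalizes.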
An Effective Parameter Screening Strategy for High Dimensional Watershed Models
Khare, Y. P.; Martinez, C. J.; Munoz-Carpena, R.
2014-12-01
Watershed simulation models can assess the impacts of natural and anthropogenic disturbances on natural systems. These models have become important tools for tackling a range of water resources problems through their implementation in the formulation and evaluation of Best Management Practices, Total Maximum Daily Loads, and Basin Management Action Plans. For accurate application, watershed models need to be thoroughly evaluated through global uncertainty and sensitivity analyses (UA/SA). However, due to the high dimensionality of these models such evaluation becomes extremely time- and resource-consuming. Parameter screening, the qualitative separation of important parameters, has been suggested as an essential step before applying rigorous evaluation techniques such as the Sobol' and Fourier Amplitude Sensitivity Test (FAST) methods in the UA/SA framework. The method of elementary effects (EE) (Morris, 1991) is one of the most widely used screening methodologies. Some of the common parameter sampling strategies for EE, e.g. Optimized Trajectories [OT] (Campolongo et al., 2007) and Modified Optimized Trajectories [MOT] (Ruano et al., 2012), suffer from inconsistencies in the generated parameter distributions, infeasible sample generation time, etc. In this work, we have formulated a new parameter sampling strategy - Sampling for Uniformity (SU) - for parameter screening which is based on the principles of the uniformity of the generated parameter distributions and the spread of the parameter sample. A rigorous multi-criteria evaluation (time, distribution, spread and screening efficiency) of OT, MOT, and SU indicated that SU is superior to the other sampling strategies. Comparison of the EE-based parameter importance rankings with those of Sobol' helped to quantify the qualitative nature of the EE parameter screening approach, reinforcing the fact that EE should be used only to reduce the resource burden required by FAST/Sobol' analyses, not to replace them.
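A bare-bones version of elementary effects screening makes the idea concrete: perturb one parameter at a time and average the absolute finite-difference effect per parameter. This uses a simple radial one-at-a-time design rather than the OT/MOT/SU trajectory designs evaluated above, and the toy model is invented:

```python
import numpy as np

rng = np.random.default_rng(2)

# mu* (Campolongo et al.'s mean absolute elementary effect) per parameter,
# from n_base one-at-a-time perturbations of random base points in [0,1]^p.
def morris_mu_star(model, n_params, n_base=30, delta=0.1):
    ees = np.zeros((n_base, n_params))
    for r in range(n_base):
        x = rng.uniform(0, 1 - delta, n_params)   # keep x + delta inside [0,1]
        fx = model(x)
        for i in range(n_params):
            xp = x.copy()
            xp[i] += delta
            ees[r, i] = (model(xp) - fx) / delta  # elementary effect of param i
    return np.abs(ees).mean(axis=0)

# Toy "watershed model": parameters 0 and 1 matter, 2 and 3 are inert.
model = lambda x: 5 * x[0] + 3 * x[1] ** 2 + 0.0 * x[2] + 0.0 * x[3]

mu = morris_mu_star(model, 4)
print(np.argsort(mu)[::-1])              # importance ranking, most important first
```

A screening run like this costs n_base·(p+1) model evaluations, versus the thousands typically needed for Sobol'/FAST, which is exactly why EE is used as a pre-filter.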
High Dimensional Modulation and MIMO Techniques for Access Networks
DEFF Research Database (Denmark)
Binti Othman, Maisara
Exploration of advanced modulation formats and multiplexing techniques for next generation optical access networks is of interest as a promising solution for delivering multiple services to end-users. This thesis addresses this from two different angles: high dimensionality carrierless amplitude phase (CAP) modulation and MIMO multiplexing techniques for wired-wireless access networks. Bit rates up to 1.59 Gbps with fiber-wireless transmission over 1 m air distance are demonstrated, increasing the capacity per wavelength of the femto-cell network. The results presented in this thesis demonstrate the feasibility of high dimensionality CAP in increasing the number of dimensions, which can potentially be utilized for multiple service allocation to different users. MIMO multiplexing techniques with OFDM provide scalability in increasing spectral efficiency and bit rates for RoF systems. High dimensional CAP and MIMO multiplexing techniques are two promising solutions for supporting wired and hybrid wired-wireless access networks.
Promoting Thinking, Problem-Solving and Reasoning during Small Group Discussions
Gillies, Robyn M.
2011-01-01
The study reports on the types of questioning strategies teachers use to promote thinking, problem-solving and reasoning during small group discussions. The study also reports on the types of discourses students use to problem-solve and reason during their small group discussions. An audiotape of one class lesson from the three teachers included…
A Mindfulness-Based Cognitive Psychoeducational Group Manual for Problem Gambling
Cormier, Abigail; McBride, Dawn Lorraine
2012-01-01
This project provides a comprehensive overview of the research literature on problem gambling in adults and includes a detailed mindfulness-based psychoeducational group manual for problem gambling, complete with an extensive group counselling consent form, assessment and screening protocols, 10 user-friendly lesson plans, templates for a…
Problem-Solving Strategies and Group Processes in Small Groups Learning Computer Programming.
Webb, Noreen M.; And Others
1986-01-01
Planning and debugging strategies and group processes predicting learning of computer programming were studied in 30 students aged 11 to 14. Students showed little advance planning. Factors associated with learning included high-level planning of program chunks, debugging of single statements, explaining, and verbalizing aloud while typing.…
Group relationships in early and late sessions and improvement in interpersonal problems.
Lo Coco, Gianluca; Gullo, Salvatore; Di Fratello, Carla; Giordano, Cecilia; Kivlighan, Dennis M
2016-07-01
Groups are more effective when positive bonds are established and interpersonal conflicts resolved in early sessions and work is accomplished in later sessions. Previous research has provided mixed support for this group development model. We performed a test of this theoretical perspective using group members' (actors) and aggregated group members' (partners) perceptions of positive bonding, positive working, and negative group relationships measured early and late in interpersonal growth groups. Participants were 325 Italian graduate students randomly (within semester) assigned to 1 of 16 interpersonal growth groups. Groups met for 9 weeks with experienced psychologists using Yalom and Leszcz's (2005) interpersonal process model. Outcome was assessed pre- and posttreatment using the Inventory of Interpersonal Problems, and group relationships were measured at Sessions 3 and 6 using the Group Questionnaire. As hypothesized, early measures of positive bonding and late measures of positive working, for both actors and partners, were positively related to improved interpersonal problems. Also as hypothesized, late measures of positive bonding and early measures of positive working, for both actors and partners, were negatively related to improved interpersonal problems. We also found that early actor and partner positive bonding and negative relationships interacted to predict changes in interpersonal problems. The findings are consistent with group development theory and suggest that group therapists focus on group-as-a-whole positive bonding relationships in early group sessions and on group-as-a-whole positive working relationships in later group sessions. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Peer Group Membership and a Sense of Belonging: Their Relationship to Adolescent Behavior Problems
Newman, Barbara M.; Lohman, Brenda J.; Newman, Philip R.
2007-01-01
This study explored three aspects of peer group membership in adolescence: peer group affiliation, the importance of group membership, and a sense of peer group belonging. Each is considered in relationship to adolescents' behavior problems as measured by the Achenbach Youth Self-Report. Participants included an ethnically and socioeconomically…
Non-intrusive low-rank separated approximation of high-dimensional stochastic models
Doostan, Alireza
2013-08-01
This work proposes a sampling-based (non-intrusive) approach within the context of low-rank separated representations to tackle the issue of curse-of-dimensionality associated with the solution of models, e.g., PDEs/ODEs, with high-dimensional random inputs. Under some conditions discussed in detail, the number of random realizations of the solution, required for a successful approximation, grows linearly with respect to the number of random inputs. The construction of the separated representation is achieved via a regularized alternating least-squares regression, together with an error indicator to estimate model parameters. The computational complexity of such a construction is quadratic in the number of random inputs. The performance of the method is investigated through its application to three numerical examples including two ODE problems with high-dimensional random inputs. © 2013 Elsevier B.V.
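The alternating least-squares construction can be illustrated on a toy rank-one problem: with all but one factor fixed, each update is an exact least-squares solve. The regularization and error indicator of the paper are omitted, and the separable test function is invented:

```python
import numpy as np

rng = np.random.default_rng(3)

# Rank-one ALS: minimize ||T - a∘b∘c||^2 by cycling closed-form updates.
def als_rank1(T, n_iter=50):
    a = rng.normal(size=T.shape[0])
    b = rng.normal(size=T.shape[1])
    c = rng.normal(size=T.shape[2])
    for _ in range(n_iter):
        # each line solves the least-squares problem with the other two fixed
        a = np.einsum('ijk,j,k->i', T, b, c) / ((b @ b) * (c @ c))
        b = np.einsum('ijk,i,k->j', T, a, c) / ((a @ a) * (c @ c))
        c = np.einsum('ijk,i,j->k', T, a, b) / ((a @ a) * (b @ b))
    return a, b, c

# An exactly separable "model output": T_ijk = f(x_i) g(y_j) h(z_k).
x, y, z = np.linspace(0, 1, 8), np.linspace(0, 1, 9), np.linspace(0, 1, 10)
T = np.einsum('i,j,k->ijk', np.exp(x), np.sin(y + 1), z + 2)

a, b, c = als_rank1(T)
approx = np.einsum('i,j,k->ijk', a, b, c)
print(np.abs(T - approx).max())          # near machine precision
```

In the paper the same alternating structure is applied rank by rank to regression on sampled solution realizations, with regularization to keep the ill-posed factor updates stable.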
Liu, Shuangyan; Joy, Mike
2011-01-01
This paper investigates the role of various student interactions with a learning forum in order to ascertain the existence of different group collaboration problems. A particular focus of interest has been learning forums, since forums have become broadly adopted tools to support online group collaboration. The types of collaboration problems were drawn from previous research that identified the main student-induced collaboration problems. A data set was collected for 87 undergraduates...
High-dimensional quantum channel estimation using classical light
CSIR Research Space (South Africa)
Mabena, Chemist M
2017-11-01
Full Text Available A method is proposed to characterize a high-dimensional quantum channel with the aid of classical light. It uses a single nonseparable input optical field that contains correlations between spatial modes and wavelength to determine the effect...
A hybridized K-means clustering approach for high dimensional ...
African Journals Online (AJOL)
Due to the incredible growth of high dimensional datasets, conventional database querying methods are inadequate to extract useful information, so researchers nowadays are forced to develop new techniques to meet the raised requirements. Such large expression data gives rise to a number of new computational challenges ...
Inference in High-dimensional Dynamic Panel Data Models
DEFF Research Database (Denmark)
Kock, Anders Bredahl; Tang, Haihan
error variance may be non-constant over time and depend on the covariates. Furthermore, our procedure allows for inference on high-dimensional subsets of the parameter vector of an increasing cardinality. We show that the confidence bands resulting from our procedure are asymptotically honest...
High-Dimensional Statistical Learning: Roots, Justifications, and Potential Machineries.
Zollanvari, Amin
2015-01-01
High-dimensional data generally refer to data in which the number of variables is larger than the sample size. Analyzing such datasets poses great challenges for classical statistical learning because the finite-sample performance of methods developed within classical statistical learning does not live up to classical asymptotic premises in which the sample size unboundedly grows for a fixed dimensionality of observations. Much work has been done in developing mathematical-statistical techniques for analyzing high-dimensional data. Despite remarkable progress in this field, many practitioners still utilize classical methods for analyzing such datasets. This state of affairs can be attributed, in part, to a lack of knowledge and, in part, to the ready-to-use computational and statistical software packages that are well developed for classical techniques. Moreover, many scientists working in a specific field of high-dimensional statistical learning are either not aware of other existing machineries in the field or are not willing to try them out. The primary goal in this work is to bring together various machineries of high-dimensional analysis, give an overview of the important results, and present the operating conditions upon which they are grounded. When appropriate, readers are referred to relevant review articles for more information on a specific subject.
The additive hazards model with high-dimensional regressors
DEFF Research Database (Denmark)
Martinussen, Torben
2009-01-01
This paper considers estimation and prediction in the Aalen additive hazards model in the case where the covariate vector is high-dimensional such as gene expression measurements. Some form of dimension reduction of the covariate space is needed to obtain useful statistical analyses. We study...
Reduced nonlinear prognostic model construction from high-dimensional data
Gavrilov, Andrey; Mukhin, Dmitry; Loskutov, Evgeny; Feigin, Alexander
2017-04-01
Construction of a data-driven model of evolution operator using universal approximating functions can only be statistically justified when the dimension of its phase space is small enough, especially in the case of short time series. At the same time in many applications real-measured data is high-dimensional, e.g. it is space-distributed and multivariate in climate science. Therefore it is necessary to use efficient dimensionality reduction methods which are also able to capture key dynamical properties of the system from observed data. To address this problem we present a Bayesian approach to an evolution operator construction which incorporates two key reduction steps. First, the data is decomposed into a set of certain empirical modes, such as standard empirical orthogonal functions or recently suggested nonlinear dynamical modes (NDMs) [1], and the reduced space of corresponding principal components (PCs) is obtained. Then, the model of evolution operator for PCs is constructed which maps a number of states in the past to the current state. The second step is to reduce this time-extended space in the past using appropriate decomposition methods. Such a reduction allows us to capture only the most significant spatio-temporal couplings. The functional form of the evolution operator includes separately linear, nonlinear (based on artificial neural networks) and stochastic terms. Explicit separation of the linear term from the nonlinear one allows us to more easily interpret degree of nonlinearity as well as to deal better with smooth PCs which can naturally occur in the decompositions like NDM, as they provide a time scale separation. Results of application of the proposed method to climate data are demonstrated and discussed. The study is supported by Government of Russian Federation (agreement #14.Z50.31.0033 with the Institute of Applied Physics of RAS). 1. Mukhin, D., Gavrilov, A., Feigin, A., Loskutov, E., & Kurths, J. (2015). Principal nonlinear dynamical
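The two reduction steps can be sketched with standard EOFs and a purely linear one-step evolution operator; the paper's nonlinear and stochastic terms, nonlinear dynamical modes, and Bayesian fitting are omitted, and the synthetic "space-distributed" data are invented:

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic multivariate series: one hidden oscillation observed in 40 channels.
t = np.arange(500)
modes = np.stack([np.sin(0.1 * t), np.cos(0.1 * t)])           # (2, 500)
proj = rng.normal(size=(40, 2))                                # spatial patterns
X = (proj @ modes).T + 0.05 * rng.normal(size=(500, 40))       # (time, space)

# Step 1: EOF/PC decomposition; keep the two leading principal components.
Xc = X - X.mean(0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
pcs = Xc @ Vt[:2].T                                            # (500, 2)

# Step 2: linear evolution operator A mapping the PC state at t to t+1.
A, *_ = np.linalg.lstsq(pcs[:-1], pcs[1:], rcond=None)
pred = pcs[:-1] @ A
err = np.sqrt(np.mean((pred - pcs[1:]) ** 2)) / pcs.std()
print(round(err, 3))                     # small relative one-step error
```

Because the hidden dynamics here are a rotation (sin/cos pair), a linear operator on two PCs already predicts well; the paper's point is that real climate data additionally need nonlinear and stochastic terms and a second reduction of the time-extended past state.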
Students' and tutors' perceptions of problems in PBL tutorial groups at a Brazilian medical school.
Zanolli, Mauricio B; Boshuizen, Henny P A; De Grave, Willem S
2002-01-01
There are few published studies that address the problem of dysfunctional tutorial groups. Most studies are restricted to student or faculty opinions separately and to specific aspects affecting the tutorial group function. This study examined teacher and student perceptions of the frequency and importance of problems observed in tutorial groups in a new PBL program. Tutors (n=30) and students in the second (n=75) and third (n=53) year completed a questionnaire at the beginning of the 1999 academic year. The questionnaire had 33 items grouped as seven "factors" related to tutor performance, feedback, assessment, educational resources, student performance, educational problems and external factors. The most important problems identified were related to tutors (mainly in training aspects) and students (mainly in problem discussion). Problems related to students and to feedback (quality) were the most frequent. There were statistically significant differences between tutors' and students' (higher) perceptions and between second and third year (higher) students' perceptions of different factors. (1) Marilia Medical School (FAMEMA) has problems in the tutorial group function mainly related to the contributions of students and tutors. (2) Students' and tutors' opinions, as well as those of second and third year students, differ, and therefore all need to be consulted to solve tutorial group problems. (3) It is necessary to develop a better student training program and also to improve the tutor training program. (4) There is a need for continued evaluation of problem-based learning at FAMEMA. We must look at perceptions of students from all years.
2013-01-01
Background Problem-based learning (PBL) involves discussions among students who resolve loosely-structured problems to facilitate learning. In the PBL curriculum, faculty tutors are employed as facilitators for small groups of students. Because of lack of time and staff shortages, the effectiveness of tutorless PBL has been discussed as an alternative option. Methods Sessions in which tutored and tutorless PBL groups were mixed were conducted with 1st-year medical students, who experienced both tutored and tutorless groups alternately in the two sessions of a year. To examine the effectiveness of tutored and tutorless PBL, written examination scores (WES) and self-contentment scores (SCS) were statistically analysed. Results WES averages did not significantly differ between the tutored and tutorless groups; however, a significantly greater variation was observed in WES in the tutorless group. SCS averages tended to be higher in the tutored than in the tutorless PBL groups. Conclusions Students in the tutorless PBL groups performed well in their written examinations, whereas those in the tutored PBL groups achieved this and also reported better self-contentment with their learning experience. Tutorless PBL sessions were considered to be comparable to tutored PBL sessions, at least in the early stages. PMID:24289490
Larger groups are more successful in innovative problem solving in house sparrows.
Liker, András; Bókony, Veronika
2009-05-12
Group living offers well-known benefits to animals, such as better predator avoidance and increased foraging success. An important additional, but so far neglected, advantage is that groups may cope more effectively with unfamiliar situations through faster innovations of new solutions by some group members. We tested this hypothesis experimentally by presenting a new foraging task of opening a familiar feeder in an unfamiliar way to house sparrows in small and large groups (2 versus 6 birds). Group size had strong effects on problem solving: sparrows performed 4 times more and 11 times faster openings in large than in small groups, and all members of large groups profited by getting food sooner (7 times on average). Independently from group size, urban groups were more successful than rural groups. The disproportionately higher success in large groups was not a mere consequence of higher number of attempts, but was also related to a higher effectiveness of problem solving (3 times higher proportion of successful birds). The analyses of the birds' behavior suggest that the latter was not explained by either reduced investment in antipredator vigilance or reduced neophobia in large groups. Instead, larger groups may contain more diverse individuals with different skills and experiences, which may increase the chance of solving the task by some group members. Increased success in problem solving may promote group living in animals and may help them to adapt quickly to new situations in rapidly-changing environments.
Wikispaces (Wikis) and Group Problem Solving (GPS) sessions in Physics classes
Mohottala, Hashini
2013-03-01
We report the combined use of Wikispaces (Wikis) and Group Problem Solving (GPS) sessions conducted in introductory and upper level physics classes. This method gradually evolved from the combined use of Wikis and Just in Time Teaching (JiTT) practiced over the past years. As a part of this new teaching method, some essay-type problems, parallel to the chapter under discussion, were posted on the Wikis at the beginning of each week, and students were encouraged to visit the pages and work through the steps without providing numerical final answers. At the end of each week students were evaluated on their problem-solving skills, opening up more opportunity for peer interaction by putting them into small groups and letting them solve one selected problem. A class of 30 students is divided into 6 groups, and four lengthy essay problems are discussed in all - each group is assigned one problem to solve. The problem numbers are drawn in a raffle, and the groups are excited to find out what they get each week. The skills required to solve a problem are gained from the weekly Wiki exercises. Wiki provides a user-friendly platform to make this effort a success. GPS sessions help the professor identify struggling students earlier and help them before it is too late.
Chan, Zenobia C Y
2013-08-01
To explore students' attitudes towards problem-based learning, creativity and critical thinking, and their relevance to nursing education and clinical practice. Critical thinking and creativity are crucial in nursing education. The teaching approach of problem-based learning can help to reduce the difficulties of nurturing problem-solving skills. However, there is little in the literature on how to improve the effectiveness of a problem-based learning lesson by designing appropriate and innovative activities such as composing songs, writing poems and using role plays. Exploratory qualitative study. A sample of 100 students participated in seven semi-structured focus groups, of which two were innovative groups and five were standard groups, adopting three activities in problem-based learning, namely composing songs, writing poems and performing role plays. The data were analysed using thematic analysis. Three themes were extracted from the conversations: 'students' perceptions of problem-based learning', 'students' perceptions of creative thinking' and 'students' perceptions of critical thinking'. Participants generally agreed that critical thinking is more important than creativity in problem-based learning and clinical practice. Participants in the innovative groups perceived a significantly closer relationship between critical thinking and nursing care, and between creativity and nursing care, than the standard groups. Both standard and innovative groups agreed that problem-based learning could significantly increase their critical thinking and problem-solving skills. Further, by composing songs, writing poems and using role plays, the innovative groups had significantly increased their awareness of the relationship among critical thinking, creativity and nursing care. Nursing educators should include more types of creative activities than is typically done in conventional problem-based learning classes. The results could help nurse educators design an appropriate
Inferring gene regulatory relationships with a high-dimensional robust approach.
Zang, Yangguang; Zhao, Qing; Zhang, Qingzhao; Li, Yang; Zhang, Sanguo; Ma, Shuangge
2017-07-01
Gene expression (GE) levels have important biological and clinical implications. They are regulated by copy number alterations (CNAs). Modeling the regulatory relationships between GEs and CNAs facilitates understanding disease biology and can also have values in translational medicine. The expression level of a gene can be regulated by its cis-acting as well as trans-acting CNAs, and the set of trans-acting CNAs is usually not known, which poses a high-dimensional selection and estimation problem. Most of the existing studies share a common limitation in that they cannot accommodate long-tailed distributions or contamination of GE data. In this study, we develop a high-dimensional robust regression approach to infer the regulatory relationships between GEs and CNAs. A high-dimensional regression model is used to accommodate the effects of both cis-acting and trans-acting CNAs. A density power divergence loss function is used to accommodate long-tailed GE distributions and contamination. Penalization is adopted for regularized estimation and selection of relevant CNAs. The proposed approach is effectively realized using a coordinate descent algorithm. Simulation shows that it has competitive performance compared to the nonrobust benchmark and the robust LAD (least absolute deviation) approach. We analyze TCGA (The Cancer Genome Atlas) data on cutaneous melanoma and study GE-CNA regulations in the RAP (regulation of apoptosis) pathway, which further demonstrates the satisfactory performance of the proposed approach. © 2017 WILEY PERIODICALS, INC.
Sun, Hokeun; Wang, Shuang
2013-05-30
The matched case-control designs are commonly used to control for potential confounding factors in genetic epidemiology studies, especially epigenetic studies with DNA methylation. Compared with unmatched case-control studies with high-dimensional genomic or epigenetic data, there have been few variable selection methods for matched sets. In an earlier paper, we proposed the penalized logistic regression model for the analysis of unmatched DNA methylation data using a network-based penalty. However, for popularly applied matched designs in epigenetic studies that compare DNA methylation between tumor and adjacent non-tumor tissues or between pre-treatment and post-treatment conditions, applying ordinary logistic regression ignoring matching is known to bring serious bias in estimation. In this paper, we developed a penalized conditional logistic model using the network-based penalty that encourages a grouping effect of (1) linked Cytosine-phosphate-Guanine (CpG) sites within a gene or (2) linked genes within a genetic pathway for analysis of matched DNA methylation data. In our simulation studies, we demonstrated the superiority of the conditional logistic model over the unconditional logistic model in high-dimensional variable selection problems for matched case-control data. We further investigated the benefits of utilizing biological group or graph information for matched case-control data. We applied the proposed method to a genome-wide DNA methylation study on hepatocellular carcinoma (HCC) in which we investigated the DNA methylation levels of tumor and adjacent non-tumor tissues from HCC patients by using the Illumina Infinium HumanMethylation27 BeadChip. Several new CpG sites and genes known to be related to HCC were identified but were missed by the standard method in the original paper. Copyright © 2012 John Wiley & Sons, Ltd.
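For 1:1 matched pairs, the conditional likelihood depends only on the case-minus-control covariate difference, so fitting reduces to intercept-free logistic regression on those differences. A hedged sketch of that core (a plain ridge penalty stands in for the paper's network-based penalty; the matched "methylation" data are synthetic):

```python
import numpy as np

rng = np.random.default_rng(5)

# Conditional log-likelihood for a pair with difference d = x_case - x_control:
#   log P(first member is the case) = log sigmoid(beta . d).
def fit_clogit_ridge(D, lam=0.1, lr=0.1, n_iter=2000):
    """D: (n_pairs, p) matrix of case-minus-control differences."""
    n, p = D.shape
    beta = np.zeros(p)
    for _ in range(n_iter):
        prob = 1.0 / (1.0 + np.exp(-D @ beta))
        grad = D.T @ (1.0 - prob) / n - lam * beta   # penalized score
        beta += lr * grad                            # gradient ascent
    return beta

# Synthetic matched data: only the first 3 of 30 "CpG sites" carry signal.
true_beta = np.zeros(30)
true_beta[:3] = 1.5
D = rng.normal(size=(200, 30))
# orient each pair so every row is case minus control under the model:
flip = rng.uniform(size=200) > 1.0 / (1.0 + np.exp(-D @ true_beta))
D[flip] *= -1

beta = fit_clogit_ridge(D)
print(np.argsort(np.abs(beta))[::-1][:5])            # top sites by |beta|
```

The paper replaces the ridge term with a network penalty that ties together linked CpG sites or linked genes, which is what produces the grouping effect in selection.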
Structural analysis of high-dimensional basins of attraction
Martiniani, Stefano; Schrenk, K. Julian; Stevenson, Jacob D.; Wales, David J.; Frenkel, Daan
2016-09-01
We propose an efficient Monte Carlo method for the computation of the volumes of high-dimensional bodies with arbitrary shape. We start with a region of known volume within the interior of the manifold and then use the multistate Bennett acceptance-ratio method to compute the dimensionless free-energy difference between a series of equilibrium simulations performed within this object. The method produces results that are in excellent agreement with thermodynamic integration, as well as a direct estimate of the associated statistical uncertainties. The histogram method also allows us to directly obtain an estimate of the interior radial probability density profile, thus yielding useful insight into the structural properties of such a high-dimensional body. We illustrate the method by analyzing the effect of structural disorder on the basins of attraction of mechanically stable packings of soft repulsive spheres.
Dimensionality reduction for registration of high-dimensional data sets.
Xu, Min; Chen, Hao; Varshney, Pramod K
2013-08-01
Registration of two high-dimensional data sets often involves dimensionality reduction to yield a single-band image from each data set followed by pairwise image registration. We develop a new application-specific algorithm for dimensionality reduction of high-dimensional data sets such that the weighted harmonic mean of Cramér-Rao lower bounds for the estimation of the transformation parameters for registration is minimized. The performance of the proposed dimensionality reduction algorithm is evaluated using three remote sensing data sets. The experimental results using a mutual information-based pairwise registration technique demonstrate that our proposed dimensionality reduction algorithm combines the original data sets to obtain the image pair with more texture, resulting in improved image registration.
HSM: Heterogeneous Subspace Mining in High Dimensional Data
DEFF Research Database (Denmark)
Müller, Emmanuel; Assent, Ira; Seidl, Thomas
2009-01-01
Heterogeneous data, i.e. data with both categorical and continuous values, is common in many databases. However, most data mining algorithms assume either continuous or categorical attributes, but not both. In high dimensional data, phenomena due to the "curse of dimensionality" pose additional challenges. Usually, due to locally varying relevance of attributes, patterns do not show across the full set of attributes. In this paper we propose HSM, which defines a new pattern model for heterogeneous high dimensional data. It allows data mining in arbitrary subsets of the attributes that are relevant for the respective patterns. Based on this model we propose an efficient algorithm, which is aware of the heterogeneity of the attributes. We extend an indexing structure for continuous attributes such that HSM indexing adapts to different attribute types. In our experiments we show that HSM efficiently mines...
Analysis of chaos in high-dimensional wind power system
Wang, Cong; Zhang, Hongli; Fan, Wenhui; Ma, Ping
2018-01-01
A comprehensive analysis of the chaos of a high-dimensional wind power system is performed in this study. A high-dimensional wind power system is more complex than most power systems. An 11-dimensional wind power system proposed by Huang, which has not been analyzed in previous studies, is investigated. When the system is affected by external disturbances, including single-parameter and periodic disturbances, or when its parameters change, the chaotic dynamics of the wind power system are analyzed and chaotic parameter ranges are obtained. The existence of chaos is confirmed by calculation and analysis of the Lyapunov exponents of all state variables and of the state variable sequence diagram. Theoretical analysis and numerical simulations show that chaos will occur in the wind power system when parameter variations and external disturbances reach a certain degree.
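The Lyapunov-exponent diagnostic used above can be illustrated on a textbook system (the one-dimensional logistic map, not the 11-dimensional wind power model): a positive largest exponent means nearby trajectories diverge exponentially, i.e. chaos.

```python
import numpy as np

# Largest Lyapunov exponent of the logistic map x -> r*x*(1-x),
# computed as the trajectory average of log|f'(x)| = log|r*(1-2x)|.
def lyapunov_logistic(r, x0=0.4, n=100_000, burn=1000):
    x = x0
    for _ in range(burn):                 # discard the transient
        x = r * x * (1 - x)
    s = 0.0
    for _ in range(n):
        x = r * x * (1 - x)
        s += np.log(abs(r * (1 - 2 * x)))
    return s / n

print(round(lyapunov_logistic(3.2), 3))   # negative: periodic regime
print(round(lyapunov_logistic(4.0), 3))   # positive, near ln 2: chaotic regime
```

For a high-dimensional system like the wind power model, the same average is taken along the trajectory of the full Jacobian (e.g. with QR re-orthonormalization) to obtain the whole Lyapunov spectrum rather than a single exponent.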
Machine-learned cluster identification in high-dimensional data.
Ultsch, Alfred; Lötsch, Jörn
2017-02-01
High-dimensional biomedical data are frequently clustered to identify subgroup structures pointing at distinct disease subtypes. It is crucial that the cluster algorithm used works correctly. However, by imposing a predefined shape on the clusters, classical algorithms occasionally suggest a cluster structure in homogeneously distributed data or assign data points to incorrect clusters. We analyzed whether this can be avoided by using emergent self-organizing feature maps (ESOM). Data sets with different degrees of complexity were submitted to ESOM analysis with large numbers of neurons, using an interactive R-based bioinformatics tool. On top of the trained ESOM, the distance structure in the high-dimensional feature space was visualized in the form of a so-called U-matrix. Clustering results were compared with those provided by classical common cluster algorithms including single linkage, Ward and k-means. Ward clustering imposed cluster structures on cluster-less "golf ball", "cuboid" and "S-shaped" data sets that contained no structure at all (random data). Ward clustering also imposed structures on permuted real-world data sets. By contrast, the ESOM/U-matrix approach correctly found that these data contain no cluster structure. Moreover, ESOM/U-matrix correctly identified clusters in biomedical data truly containing subgroups, and it was always correct in cluster structure identification in further canonical artificial data. Using intentionally simple data sets, it is shown that popular clustering algorithms typically used for biomedical data sets may fail to cluster data correctly, suggesting that they are also likely to perform erroneously on high-dimensional biomedical data. The present analyses emphasized that generally established classical hierarchical clustering algorithms carry a considerable tendency to produce erroneous results; by contrast, unsupervised machine-learned analysis of cluster structures, applied using the ESOM/U-matrix method, is a ...
Parsimonious description for predicting high-dimensional dynamics
Yoshito Hirata; Tomoya Takeuchi; Shunsuke Horai; Hideyuki Suzuki; Kazuyuki Aihara
2015-01-01
When we observe a system, we often cannot observe all its variables and may have access to only a limited set of measurements. Under such circumstances, delay coordinates, vectors made of successive measurements, are useful to reconstruct the state of the whole system. Although the method of delay coordinates is theoretically supported for high-dimensional dynamical systems, in practice there is a limitation because the computation for higher-dimensional delay coordinates becomes more expensive. Here, ...
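The delay-coordinate construction is easy to make concrete: each reconstructed state vector stacks the current measurement with lagged copies of itself. A minimal sketch (an illustration of the standard embedding, not the authors' parsimonious predictor):

```python
def delay_coordinates(series, dim, tau=1):
    """Return delay vectors (x[t], x[t-tau], ..., x[t-(dim-1)*tau])
    for every time t at which all lags are available."""
    start = (dim - 1) * tau
    return [tuple(series[t - k * tau] for k in range(dim))
            for t in range(start, len(series))]

# Each vector pairs a measurement with its dim-1 predecessors:
vectors = delay_coordinates([0, 1, 2, 3, 4, 5], dim=3, tau=1)
# vectors == [(2, 1, 0), (3, 2, 1), (4, 3, 2), (5, 4, 3)]
```

The cost the abstract mentions grows with `dim`, since every reconstructed state lives in a `dim`-dimensional space.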
Bayesian Visual Analytics: Interactive Visualization for High Dimensional Data
Han, Chao
2012-01-01
In light of advancements made in data collection techniques over the past two decades, data mining has become common practice to summarize large, high dimensional datasets, in hopes of discovering noteworthy data structures. However, one concern is that most data mining approaches rely upon strict criteria that may mask information in data that analysts may find useful. We propose a new approach called Bayesian Visual Analytics (BaVA) which merges Bayesian Statistics with Visual Analytics to ...
RRT+ : Fast Planning for High-Dimensional Configuration Spaces
Xanthidis, Marios; Rekleitis, Ioannis; O'Kane, Jason M.
2016-01-01
In this paper we propose a new family of RRT-based algorithms, named RRT+, that find solutions faster in high-dimensional configuration spaces than other existing RRT variants by searching for paths in lower-dimensional subspaces of the configuration space. The method can be easily applied to complex hyper-redundant systems and can be adapted by other RRT-based planners. We introduce RRT+ and develop some variants, called PrioritizedRRT+, PrioritizedRRT+-Connect, and Prioritize...
Oracle inequalities for high-dimensional panel data models
DEFF Research Database (Denmark)
Kock, Anders Bredahl
This paper is concerned with high-dimensional panel data models where the number of regressors can be much larger than the sample size. Under the assumption that the true parameter vector is sparse we establish finite sample upper bounds on the estimation error of the Lasso under two different se...... results by simulations and apply the methods to search for covariates explaining growth in the G8 countries....
A solution to the collective action problem in between-group conflict with within-group inequality.
Gavrilets, Sergey; Fortunato, Laura
2014-03-26
Conflict with conspecifics from neighbouring groups over territory, mating opportunities and other resources is observed in many social organisms, including humans. Here we investigate the evolutionary origins of social instincts, as shaped by selection resulting from between-group conflict in the presence of a collective action problem. We focus on the effects of the differences between individuals on the evolutionary dynamics. Our theoretical models predict that high-rank individuals, who are able to usurp a disproportional share of resources in within-group interactions, will act seemingly altruistically in between-group conflict, expending more effort and often having lower reproductive success than their low-rank group-mates. Similar behaviour is expected for individuals with higher motivation, higher strengths or lower costs, or for individuals in a leadership position. Our theory also provides an evolutionary foundation for classical equity theory, and it has implications for the origin of coercive leadership and for reproductive skew theory.
High-dimensional quantum cloning and applications to quantum hacking.
Bouchard, Frédéric; Fickler, Robert; Boyd, Robert W; Karimi, Ebrahim
2017-02-01
Attempts at cloning a quantum system result in the introduction of imperfections in the state of the copies. This is a consequence of the no-cloning theorem, which is a fundamental law of quantum physics and the backbone of security for quantum communications. Although perfect copies are prohibited, a quantum state may be copied with maximal accuracy via various optimal cloning schemes. Optimal quantum cloning, which lies at the border of the physical limit imposed by the no-signaling theorem and the Heisenberg uncertainty principle, has been experimentally realized for low-dimensional photonic states. However, an increase in the dimensionality of quantum systems is greatly beneficial to quantum computation and communication protocols. Nonetheless, no experimental demonstration of optimal cloning machines has hitherto been shown for high-dimensional quantum systems. We perform optimal cloning of high-dimensional photonic states by means of the symmetrization method. We show the universality of our technique by conducting cloning of numerous arbitrary input states and fully characterize our cloning machine by performing quantum state tomography on cloned photons. In addition, a cloning attack on a Bennett and Brassard (BB84) quantum key distribution protocol is experimentally demonstrated to reveal the robustness of high-dimensional states in quantum cryptography.
National Research Council Canada - National Science Library
Crino, John
2002-01-01
.... This dissertation applies and extends some of Colletti's (1999) seminal work in group theory and metaheuristics in order to solve the theater distribution vehicle routing and scheduling problem (TDVRSP...
Directory of Open Access Journals (Sweden)
Patimat Alieva
2012-01-01
Full Text Available The article considers and analyses the main problems concerning the realization of the potential of socially vulnerable groups of the population. The problem of developing women's and youth employment becomes especially acute and topical in an outlying district with a labour-surplus labour market.
EigenPrism: inference for high dimensional signal-to-noise ratios.
Janson, Lucas; Barber, Rina Foygel; Candès, Emmanuel
2017-09-01
Consider the following three important problems in statistical inference, namely, constructing confidence intervals for (1) the error of a high-dimensional (p > n) regression estimator, (2) the linear regression noise level, and (3) the genetic signal-to-noise ratio of a continuous-valued trait (related to the heritability). All three problems turn out to be closely related to the little-studied problem of performing inference on the ℓ2-norm of the signal in high-dimensional linear regression. We derive a novel procedure for this, which is asymptotically correct when the covariates are multivariate Gaussian and produces valid confidence intervals in finite samples as well. The procedure, called EigenPrism, is computationally fast and makes no assumptions on coefficient sparsity or knowledge of the noise level. We investigate the width of the EigenPrism confidence intervals, including a comparison with a Bayesian setting in which our interval is just 5% wider than the Bayes credible interval. We are then able to unify the three aforementioned problems by showing that the EigenPrism procedure with only minor modifications is able to make important contributions to all three. We also investigate the robustness of coverage and find that the method applies in practice and in finite samples much more widely than just the case of multivariate Gaussian covariates. Finally, we apply EigenPrism to a genetic dataset to estimate the genetic signal-to-noise ratio for a number of continuous phenotypes.
Uniqueness theorems for variational problems by the method of transformation groups
Reichel, Wolfgang
2004-01-01
A classical problem in the calculus of variations is the investigation of critical points of functionals {\\cal L} on normed spaces V. The present work addresses the question: Under what conditions on the functional {\\cal L} and the underlying space V does {\\cal L} have at most one critical point? A sufficient condition for uniqueness is given: the presence of a "variational sub-symmetry", i.e., a one-parameter group G of transformations of V, which strictly reduces the values of {\\cal L}. The "method of transformation groups" is applied to second-order elliptic boundary value problems on Riemannian manifolds. Further applications include problems of geometric analysis and elasticity.
Mixture drug-count response model for the high-dimensional drug combinatory effect on myopathy.
Wang, Xueying; Zhang, Pengyue; Chiang, Chien-Wei; Wu, Hengyi; Shen, Li; Ning, Xia; Zeng, Donglin; Wang, Lei; Quinney, Sara K; Feng, Weixing; Li, Lang
2018-02-20
Drug-drug interactions (DDIs) are a common cause of adverse drug events (ADEs). The electronic medical record (EMR) database and the FDA's adverse event reporting system (FAERS) database are the major data sources for mining and testing ADE-associated DDI signals. Most DDI data mining methods focus on pair-wise drug interactions, and methods to detect high-dimensional DDIs in medical databases are lacking. In this paper, we propose 2 novel mixture drug-count response models for detecting high-dimensional drug combinations that induce myopathy. The "count" indicates the number of drugs in a combination. One model is called the fixed-probability mixture drug-count response model with a maximum risk threshold (FMDRM-MRT). The other is called the count-dependent probability mixture drug-count response model with a maximum risk threshold (CMDRM-MRT), in which the mixture probability is count-dependent. Compared with the previous mixture drug-count response model (MDRM) developed by our group, these 2 new models show a better likelihood in detecting high-dimensional drug combinatory effects on myopathy. CMDRM-MRT identified and validated 54, 374, 637, 442, and 131 two-way to six-way drug interactions, respectively, which induce myopathy in both the EMR and FAERS databases. We further demonstrate that FAERS data capture a much higher maximum myopathy risk than EMR data do. The consistency of the 2 mixture models' parameters and local false discovery rate estimates is evaluated through statistical simulation studies. Copyright © 2017 John Wiley & Sons, Ltd.
Efficient and accurate nearest neighbor and closest pair search in high-dimensional space
Tao, Yufei
2010-07-01
Nearest Neighbor (NN) search in high-dimensional space is an important problem in many applications. From the database perspective, a good solution needs to have two properties: (i) it can be easily incorporated in a relational database, and (ii) its query cost should increase sublinearly with the dataset size, regardless of the data and query distributions. Locality-Sensitive Hashing (LSH) is a well-known methodology fulfilling both requirements, but its current implementations either incur expensive space and query cost, or abandon its theoretical guarantee on the quality of query results. Motivated by this, we improve LSH by proposing an access method called the Locality-Sensitive B-tree (LSB-tree) to enable fast, accurate, high-dimensional NN search in relational databases. The combination of several LSB-trees forms an LSB-forest that has strong quality guarantees, but improves dramatically the efficiency of the previous LSH implementation having the same guarantees. In practice, the LSB-tree itself is also an effective index which consumes linear space, supports efficient updates, and provides accurate query results. In our experiments, the LSB-tree was faster than: (i) iDistance (a famous technique for exact NN search) by two orders of magnitude, and (ii) MedRank (a recent approximate method with nontrivial quality guarantees) by one order of magnitude, and meanwhile returned much better results. As a second step, we extend our LSB technique to solve another classic problem, called Closest Pair (CP) search, in high-dimensional space. The long-term challenge for this problem has been to achieve subquadratic running time at very high dimensionalities, which most of the existing solutions fail to do. We show that, using an LSB-forest, CP search can be accomplished in (worst-case) time significantly lower than the quadratic complexity, yet still ensuring very good quality. In practice, accurate answers can be found using just two LSB-trees, thus giving a substantial
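The LSB-tree builds on locality-sensitive hashing. The core LSH idea for Euclidean NN search, hashing with random projections so that nearby points tend to collide, can be sketched as follows (a bare-bones illustration with our own function names, not the LSB-tree itself):

```python
import random

def make_hash(dim, w=4.0, seed=0):
    """One LSH function for Euclidean distance:
    h(v) = floor((a.v + b) / w) with Gaussian a and random offset b."""
    rng = random.Random(seed)
    a = [rng.gauss(0.0, 1.0) for _ in range(dim)]
    b = rng.uniform(0.0, w)
    return lambda v: int((sum(ai * vi for ai, vi in zip(a, v)) + b) // w)

def build_table(points, hashes):
    """Bucket every point by its tuple of hash values."""
    table = {}
    for i, p in enumerate(points):
        table.setdefault(tuple(h(p) for h in hashes), []).append(i)
    return table

def query(q, points, table, hashes):
    """Return the index of the closest candidate in q's bucket;
    fall back to brute force over all points if the bucket is empty."""
    cand = table.get(tuple(h(q) for h in hashes), range(len(points)))
    return min(cand, key=lambda i: sum((a - b) ** 2
                                       for a, b in zip(points[i], q)))
```

Real LSH implementations use many tables and tuned parameters; the LSB-tree additionally linearizes the buckets so the index can live inside a relational database.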
Problem Based Learning as a Shared Musical Journey--Group Dynamics, Communication and Creativity
Lindvang, Charlotte; Beck, Bolette
2015-01-01
The focus of this paper is how we can facilitate problem based learning (PBL) more creatively. We take a closer look upon the connection between creative processes and social communication in the PBL group including how difficulties in the social interplay may hinder creativity. The paper draws on group dynamic theory, and points out the…
Hye Ha, Eun
2006-01-01
The purpose of this study was to develop a cognitive behavioral group therapy (CBT) for depressed mothers of children between 5-12 years old, with behavior problems and to examine the effectiveness of the program. The CBT group met 8 times in total (2-hour weekly sessions for 8 weeks), followed by a booster session 3 months after the program was…
Age Difference and Face-Saving in an Inter-Generational Problem-Based Learning Group
Robinson, Leslie
2016-01-01
This study used grounded theory methodology to investigate whether learning in a problem-based learning (PBL) group was influenced by student demographic diversity. Data comprised observations, in the form of video footage, of one first-year PBL group carried out over the period of an academic year, along with student interviews. Using the…
The Influence of Homophilous Interactions on Diversity Effects in Group Problem-Solving.
Estévez-Mujica, Claudia P; Acero, Andrés; Jiménez-Leal, William; Garcia-Diaz, César
2018-01-01
Diversity researchers increasingly call for further studies of group micro-processes and dynamics to understand the paradoxical effects of diversity on group performance. In this study, based on analyses of in-group, networked, homophilous interactions, we aim to explain further the effects of diversity on group performance in a parallel problem-solving task, both experimentally and computationally. We developed a 'whodunit' problem-solving experiment with 116 participants assigned to different-sized groups. Experimental results show that low diversity and high homophily levels are associated with lower performance, while the effects of group size are not significant. To investigate this further, we developed an agent-based computational model (ABM), through which we inspected (a) the effect of different homophily and diversity strengths on performance, and (b) the robustness of such effects across group size variations. Overall, modeling results were consistent with our experimental findings, and revealed that the strength of homophily can drive diversity towards a positive or negative impact on performance. We also observed that increasing group size has a very marginal effect. Our work contributes to a better understanding of the implications of diversity in group problem-solving by providing an integration of both experimental and computational perspectives in the analysis of group processes.
Students' engagement with their group in a problem-based learning curriculum.
McHarg, J; Kay, E J; Coombes, L R
2012-02-01
In a new enquiry-based learning dental curriculum, problem-based learning (PBL) was chosen as a central methodology because it promotes a collaborative and constructive approach to learning. However, inevitably, some groups function worse than others. This study explores the relationship between group functionality and individuals' results on knowledge-based assessment. It also sought to establish whether using the Belbin team role theory could improve group functionality. Students completed the Belbin team role inventory that assigns individuals to a team role type and were allocated to either an ideal Belbin group or a control group. To evaluate the functionality of the groups, Macgowan's group engagement measure was completed after 18 and 31 weeks for each student by their group facilitator. The scores were summed and averaged giving a group engagement score for each group. Relationships between group engagement, individual performance in assessment in weeks 18 and 31 and Belbin and non-Belbin teams were investigated. Individual group engagement scores and performance in the knowledge tests had a statistically significant positive relationship despite the small number of students involved (62). However, no correlation was shown between Belbin groups and group engagement scores. Those students who engaged most with the PBL process performed markedly better in assessments of knowledge. Using Belbin's team role theory to place students in PBL groups in an effort to increase group functionality had no effect when compared with non-Belbin control groups. © 2011 John Wiley & Sons A/S.
A Bi-Level Optimization Model for Grouping Constrained Storage Location Assignment Problems.
Xie, Jing; Mei, Yi; Ernst, Andreas T; Li, Xiaodong; Song, Andy
2016-12-23
In this paper, a novel bi-level grouping optimization (BIGO) model is proposed for solving the storage location assignment problem with grouping constraint (SLAP-GC). A major challenge in this problem is the grouping constraint, which restricts the number of groups each product can have and the locations of items in the same group. SLAP-GC consists of two subproblems: how to group the items, and how to assign the groups to locations. It is an arduous task to solve the two subproblems simultaneously. To overcome this difficulty, we propose BIGO, which optimizes item grouping in the upper level and uses the lower-level optimization to evaluate each item grouping. Sophisticated fitness evaluation and search operators are designed for both the upper- and lower-level optimization so that the feasibility of solutions can be guaranteed and the search can focus on promising areas of the search space. Based on the BIGO model, a multistart random search method and a tabu algorithm are proposed. The experimental results on the real-world dataset validate the efficacy of the BIGO model and the advantage of the tabu method over the random search method.
Variable kernel density estimation in high-dimensional feature spaces
CSIR Research Space (South Africa)
Van der Walt, Christiaan M
2017-02-01
Full Text Available ... the KDE is non-parametric, since no parametric distribution is imposed on the estimate; instead, the estimated distribution is defined by the sum of the kernel functions centred on the data points. KDEs thus require the selection of two design parameters... has become feasible – understanding and modelling high-dimensional data has thus become a crucial activity, especially in the field of machine learning. Since non-parametric density estimators are data-driven and do not require or impose a pre...
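The two design parameters referred to are, in the classical formulation, the kernel and its bandwidth. A minimal fixed-bandwidth univariate Gaussian KDE (illustrative only; the paper's variable-kernel estimator adapts the bandwidth locally):

```python
import math

def kde(x, data, h):
    """Gaussian kernel density estimate at x: the average of
    normal kernels (bandwidth h) centred on the data points."""
    const = 1.0 / (h * math.sqrt(2.0 * math.pi))
    return sum(const * math.exp(-0.5 * ((x - d) / h) ** 2)
               for d in data) / len(data)
```

With a single data point the estimate is just one Gaussian bump, so `kde(0.0, [0.0], 1.0)` equals the standard normal density at zero.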
The additive hazards model with high-dimensional regressors
DEFF Research Database (Denmark)
Martinussen, Torben; Scheike, Thomas
2009-01-01
This paper considers estimation and prediction in the Aalen additive hazards model in the case where the covariate vector is high-dimensional such as gene expression measurements. Some form of dimension reduction of the covariate space is needed to obtain useful statistical analyses. We study...... the partial least squares regression method. It turns out that it is naturally adapted to this setting via the so-called Krylov sequence. The resulting PLS estimator is shown to be consistent provided that the number of terms included is taken to be equal to the number of relevant components in the regression...
On spectral distribution of high dimensional covariation matrices
DEFF Research Database (Denmark)
Heinrich, Claudio; Podolskij, Mark
In this paper we present the asymptotic theory for spectral distributions of high-dimensional covariation matrices of Brownian diffusions. More specifically, we consider N-dimensional Itô integrals with time-varying matrix-valued integrands. We observe n equidistant high-frequency data points...... of the underlying Brownian diffusion and we assume that N/n → c ∈ (0, ∞). We show that under a certain mixed spectral moment condition the spectral distribution of the empirical covariation matrix converges in distribution almost surely. Our proof relies on the method of moments and applications of graph theory....
Single-Machine Group Scheduling Problems with Variable Job Processing Times
Directory of Open Access Journals (Sweden)
Ping Ji
2015-01-01
Full Text Available This paper considers two resource-constrained single-machine group scheduling problems. These problems involve variable job processing times (general position-dependent learning effects and deteriorating jobs); that is, the processing time of a job is a function of its starting time and its position in the group, and each group's setup time is a positive, strictly decreasing, continuous function of the amount of resource consumed. Polynomial-time algorithms are proposed to optimally solve the makespan minimization problem under the constraint that the total resource consumption does not exceed a given limit, and the total resource consumption minimization problem under the constraint that the makespan does not exceed a given limit, respectively.
Sample size requirements for training high-dimensional risk predictors.
Dobbin, Kevin K; Song, Xiao
2013-09-01
A common objective of biomarker studies is to develop a predictor of patient survival outcome. Determining the number of samples required to train a predictor from survival data is important for designing such studies. Existing sample size methods for training studies use parametric models for the high-dimensional data and cannot handle a right-censored dependent variable. We present a new training sample size method that is non-parametric with respect to the high-dimensional vectors, and is developed for a right-censored response. The method can be applied to any prediction algorithm that satisfies a set of conditions. The sample size is chosen so that the expected performance of the predictor is within a user-defined tolerance of optimal. The central method is based on a pilot dataset. To quantify uncertainty, a method to construct a confidence interval for the tolerance is developed. Adequacy of the size of the pilot dataset is discussed. An alternative model-based version of our method for estimating the tolerance when no adequate pilot dataset is available is presented. The model-based method requires a covariance matrix be specified, but we show that the identity covariance matrix provides adequate sample size when the user specifies three key quantities. Application of the sample size method to two microarray datasets is discussed.
Mapping morphological shape as a high-dimensional functional curve.
Fu, Guifang; Huang, Mian; Bo, Wenhao; Hao, Han; Wu, Rongling
2017-01-06
Detecting how genes regulate biological shape has become a multidisciplinary research interest because of its wide application in many disciplines. Despite its fundamental importance, the challenges of accurately extracting information from an image, statistically modeling the high-dimensional shape and meticulously locating shape quantitative trait loci (QTL) affect the progress of this research. In this article, we propose a novel integrated framework that incorporates shape analysis, statistical curve modeling and genetic mapping to detect significant QTLs regulating variation of biological shape traits. After quantifying morphological shape via a radius centroid contour approach, each shape, as a phenotype, was characterized as a high-dimensional curve, varying as angle θ runs clockwise with the first point starting from angle zero. We then modeled the dynamic trajectories of three mean curves and variation patterns as functions of θ. Our framework led to the detection of a few significant QTLs regulating the variation of leaf shape collected from a natural population of poplar, Populus szechuanica var. tibetica. This population, distributed at altitudes of 2000-4500 m above sea level, is an evolutionarily important plant species. This is the first work in the quantitative genetic shape mapping area that emphasizes a sense of 'function' instead of decomposing the shape into a few discrete principal components, as the majority of shape studies do. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
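The radius-centroid-contour idea reduces a closed 2-D outline to a one-dimensional function: the distance from the shape's centroid to its boundary, sampled as the angle θ sweeps the circle. A minimal nearest-angle sketch (our simplification; the paper's exact parametrization may differ):

```python
import math

def radius_contour(points, n_angles=8):
    """Sample the distance from the centroid of a closed outline
    to its boundary as a function of angle theta."""
    cx = sum(p[0] for p in points) / len(points)
    cy = sum(p[1] for p in points) / len(points)
    # angle (in [0, 2*pi)) and radius of every boundary point
    polar = [(math.atan2(y - cy, x - cx) % (2.0 * math.pi),
              math.hypot(x - cx, y - cy)) for x, y in points]
    curve = []
    for k in range(n_angles):
        theta = 2.0 * math.pi * k / n_angles
        # pick the boundary point whose angle is circularly closest
        ang, r = min(polar, key=lambda ar: min(abs(ar[0] - theta),
                                               2.0 * math.pi - abs(ar[0] - theta)))
        curve.append(r)
    return curve
```

For a circle of radius 2, the resulting curve is constant at 2; shapes only become interesting once the radius varies with θ, which is exactly the variation the QTL mapping targets.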
High-dimensional camera shake removal with given depth map.
Yue, Tao; Suo, Jinli; Dai, Qionghai
2014-06-01
Camera motion blur is drastically nonuniform for large depth-range scenes; the nonuniformity caused by camera translation is depth-dependent, whereas that caused by camera rotation is not. To restore blurry images of large depth-range scenes deteriorated by arbitrary camera motion, we build an image blur model considering the 6 degrees of freedom (DoF) of camera motion with a given scene depth map. To make this 6D depth-aware model tractable, we propose a novel parametrization strategy to reduce the number of variables and an effective method to estimate the high-dimensional camera motion as well. The number of variables is reduced by a temporal sampling motion function, which describes the 6-DoF camera motion by sampling the camera trajectory uniformly in the time domain. To effectively estimate the high-dimensional camera motion parameters, we construct the probabilistic motion density function (PMDF) to describe the probability distribution of camera poses during exposure, and apply it as a unified constraint to guide the convergence of the iterative deblurring algorithm. Specifically, PMDF is computed through a back projection from 2D local blur kernels to 6D camera motion parameter space and robust voting. We conduct a series of experiments on both synthetic and real captured data, and validate that our method achieves better performance than existing uniform and nonuniform methods on large depth-range scenes.
Elucidating high-dimensional cancer hallmark annotation via enriched ontology.
Yan, Shankai; Wong, Ka-Chun
2017-09-01
Cancer hallmark annotation is a promising technique that could discover novel knowledge about cancer from the biomedical literature. The automated annotation of cancer hallmarks could reveal relevant cancer transformation processes in the literature or extract the articles that correspond to the cancer hallmark of interest. It acts as a complementary approach that can retrieve knowledge from massive text information, advancing numerous focused studies in cancer research. Nonetheless, the high-dimensional nature of cancer hallmark annotation imposes a unique challenge. To address the curse of dimensionality, we compared multiple cancer hallmark annotation methods on 1580 PubMed abstracts. Based on the insights, a novel approach, UDT-RF, which makes use of ontological features is proposed. It expands the feature space via the Medical Subject Headings (MeSH) ontology graph and utilizes novel feature selections for elucidating the high-dimensional cancer hallmark annotation space. To demonstrate its effectiveness, state-of-the-art methods are compared and evaluated by a multitude of performance metrics, revealing the full performance spectrum on the full set of cancer hallmarks. Several case studies are conducted, demonstrating how the proposed approach could reveal novel insights into cancers. https://github.com/cskyan/chmannot. Copyright © 2017 Elsevier Inc. All rights reserved.
An Adaptive ANOVA-based PCKF for High-Dimensional Nonlinear Inverse Modeling
Energy Technology Data Exchange (ETDEWEB)
LI, Weixuan; Lin, Guang; Zhang, Dongxiao
2014-02-01
The probabilistic collocation-based Kalman filter (PCKF) is a recently developed approach for solving inverse problems. It resembles the ensemble Kalman filter (EnKF) in every aspect, except that it represents and propagates model uncertainty by polynomial chaos expansion (PCE) instead of an ensemble of model realizations. Previous studies have shown PCKF to be a more efficient alternative to EnKF for many data assimilation problems. However, the accuracy and efficiency of PCKF depend on an appropriate truncation of the PCE series. Having more polynomial chaos bases in the expansion helps to capture uncertainty more accurately but increases computational cost. Selection of bases is particularly important for high-dimensional stochastic problems because the number of polynomial chaos bases required to represent model uncertainty grows dramatically as the number of input parameters (random dimensions) increases. In classic PCKF algorithms, the PCE bases are pre-set based on users' experience. Also, for sequential data assimilation problems, the bases kept in the PCE expression remain unchanged in different Kalman filter loops, which could limit the accuracy and computational efficiency of classic PCKF algorithms. To address this issue, we present a new algorithm that adaptively selects PCE bases for different problems and automatically adjusts the number of bases in different Kalman filter loops. The algorithm is based on adaptive functional ANOVA (analysis of variance) decomposition, which approximates a high-dimensional function with the summation of a set of low-dimensional functions. Thus, instead of expanding the original model into PCE, we implement the PCE expansion on these low-dimensional functions, which is much less costly. We also propose a new adaptive criterion for ANOVA that is more suited for solving inverse problems. The new algorithm was tested with different examples and demonstrated great effectiveness in comparison with non-adaptive PCKF and EnKF.
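The functional ANOVA decomposition at the heart of the algorithm splits a multivariate function into a grand mean plus low-dimensional component functions. A first-order sketch on a uniform grid for two inputs (our illustration; the paper's adaptive version operates on PCE expansions, not grids):

```python
def anova_decompose_on_grid(f, grid):
    """First-order functional ANOVA on a uniform grid:
    f(x1, x2) ~ f0 + f1(x1) + f2(x2) (+ interaction remainder),
    where f0 is the grand mean and f1, f2 are mean-centred
    main effects, each tabulated over the grid."""
    n = len(grid)
    vals = [[f(a, b) for b in grid] for a in grid]
    f0 = sum(sum(row) for row in vals) / n ** 2
    f1 = [sum(vals[i]) / n - f0 for i in range(n)]
    f2 = [sum(vals[i][j] for i in range(n)) / n - f0 for j in range(n)]
    return f0, f1, f2
```

For an additive function the two main effects reconstruct f exactly, so the interaction remainder is zero; the adaptive criterion in the paper decides which higher-order components are worth keeping.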
An adaptive ANOVA-based PCKF for high-dimensional nonlinear inverse modeling
Energy Technology Data Exchange (ETDEWEB)
Li, Weixuan, E-mail: weixuan.li@usc.edu [Sonny Astani Department of Civil and Environmental Engineering, University of Southern California, Los Angeles, CA 90089 (United States); Lin, Guang, E-mail: guang.lin@pnnl.gov [Pacific Northwest National Laboratory, Richland, WA 99352 (United States); Zhang, Dongxiao, E-mail: dxz@pku.edu.cn [Department of Energy and Resources Engineering, College of Engineering, Peking University, Beijing 100871 (China)
2014-02-01
The probabilistic collocation-based Kalman filter (PCKF) is a recently developed approach for solving inverse problems. It resembles the ensemble Kalman filter (EnKF) in every aspect—except that it represents and propagates model uncertainty by polynomial chaos expansion (PCE) instead of an ensemble of model realizations. Previous studies have shown PCKF to be a more efficient alternative to EnKF for many data assimilation problems. However, the accuracy and efficiency of PCKF depend on an appropriate truncation of the PCE series. Having more polynomial chaos basis functions in the expansion helps to capture uncertainty more accurately but increases computational cost. Selection of basis functions is particularly important for high-dimensional stochastic problems because the number of polynomial chaos basis functions required to represent model uncertainty grows dramatically as the number of input parameters (random dimensions) increases. In classic PCKF algorithms, the PCE basis functions are pre-set based on users' experience. Also, for sequential data assimilation problems, the basis functions kept in the PCE expression remain unchanged across Kalman filter loops, which could limit the accuracy and computational efficiency of classic PCKF algorithms. To address this issue, we present a new algorithm that adaptively selects PCE basis functions for different problems and automatically adjusts the number of basis functions in different Kalman filter loops. The algorithm is based on adaptive functional ANOVA (analysis of variance) decomposition, which approximates a high-dimensional function with the summation of a set of low-dimensional functions. Thus, instead of expanding the original model into PCE, we implement the PCE expansion on these low-dimensional functions, which is much less costly. We also propose a new adaptive criterion for ANOVA that is better suited to solving inverse problems. The new algorithm was tested with different examples and demonstrated great effectiveness in comparison with non-adaptive PCKF and EnKF.
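The PCE machinery this entry builds on can be illustrated in miniature. The sketch below (plain NumPy; the function name `pce_coeffs` and the quadrature order are illustrative choices, not taken from the paper) computes Hermite polynomial chaos coefficients of a scalar uncertainty by Gauss-Hermite quadrature; for Y = exp(ξ) with ξ standard normal, the coefficients are known in closed form, c_k = e^(1/2)/k!.

```python
import numpy as np
from math import factorial

def pce_coeffs(g, order, nquad=40):
    """Hermite PCE coefficients of Y = g(xi), xi ~ N(0, 1):
    c_k = E[g(xi) * He_k(xi)] / k!, with probabilists' Hermite polynomials He_k."""
    x, w = np.polynomial.hermite_e.hermegauss(nquad)  # Gauss rule for weight exp(-x^2/2)
    w = w / np.sqrt(2 * np.pi)                        # renormalize to the N(0, 1) density
    return np.array([(w * g(x) * np.polynomial.hermite_e.hermeval(x, [0] * k + [1])).sum()
                     / factorial(k) for k in range(order + 1)])

# Y = exp(xi) has the classical expansion with coefficients e^(1/2) / k!
c = pce_coeffs(np.exp, order=4)
```

Truncating this series after a few terms is exactly the trade-off the abstract describes: more basis functions capture the tail of the expansion, at higher cost.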
DEFF Research Database (Denmark)
Zhou, Chunfang; Kolmos, Anette; Nielsen, Jens Frederik Dalsgaard
2012-01-01
In this paper, we explore how engineering students are motivated to develop group creativity in a Problem and Project- Based Learning (PBL) environment. Theoretically, we take a social cultural approach to group creativity and emphasize the influences of a learning environment on student motivation...... in multiple ways in a PBL environment, such as formal and informal group discussions, regular supervisor meetings and sharing leadership. Furthermore, factors such as common goals, support of peers and openness stimulate motivation. However, the students think that a time schedule is a barrier to group...
Directory of Open Access Journals (Sweden)
Sergio Rojas-Galeano
2008-03-01
Full Text Available The analysis of complex proteomic and genomic profiles involves the identification of significant markers within a set of hundreds or even thousands of variables that represent a high-dimensional problem space. The occurrence of noise, redundancy or combinatorial interactions in the profile makes the selection of relevant variables harder. Here we propose a method to select variables based on estimated relevance to hidden patterns. Our method combines a weighted-kernel discriminant with an iterative stochastic probability estimation algorithm to discover the relevance distribution over the set of variables. We verified the ability of our method to select predefined relevant variables in synthetic proteome-like data and then assessed its performance on biological high-dimensional problems. Experiments were run on serum proteomic datasets of infectious diseases. The resulting variable subsets achieved classification accuracies of 99% on Human African Trypanosomiasis, 91% on Tuberculosis, and 91% on Malaria serum proteomic profiles with fewer than 20% of variables selected. Our method scaled up to dimensionalities of much higher orders of magnitude, as shown with gene expression microarray datasets in which we obtained classification accuracies close to 90% with fewer than 1% of the total number of variables. Our method consistently found relevant variables attaining high classification accuracies across synthetic and biological datasets. Notably, it yielded very compact subsets compared to the original number of variables, which should simplify downstream biological experimentation.
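The entry's weighted-kernel discriminant with stochastic relevance estimation is not spelled out here; as a much simpler stand-in, a univariate relevance score already captures the flavor of ranking thousands of variables by their association with hidden class structure. A minimal sketch (all names and the synthetic proteome-like data are illustrative):

```python
import numpy as np

def relevance_scores(X, y):
    """Rank variables by an absolute two-sample t-like statistic:
    larger values suggest stronger association with the class labels."""
    a, b = X[y == 0], X[y == 1]
    se = np.sqrt(a.var(axis=0) / len(a) + b.var(axis=0) / len(b)) + 1e-12
    return np.abs(a.mean(axis=0) - b.mean(axis=0)) / se

# Synthetic proteome-like data: 500 variables, only the first 5 informative
rng = np.random.default_rng(0)
X = rng.standard_normal((80, 500))
y = np.repeat([0, 1], 40)
X[y == 1, :5] += 2.0                     # planted class difference
top = np.argsort(relevance_scores(X, y))[::-1][:5]
```

Unlike the paper's method, this score ignores redundancy and combinatorial interactions between variables, which is precisely the limitation the authors set out to address.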
The Subspace Voyager: Exploring High-Dimensional Data along a Continuum of Salient 3D Subspaces.
Wang, Bing; Mueller, Klaus
2018-02-01
Analyzing high-dimensional data and finding hidden patterns is a difficult problem and has attracted numerous research efforts. Automated methods can be useful to some extent, but bringing the data analyst into the loop via interactive visual tools can help the discovery process tremendously. An inherent problem in this effort is that humans lack the mental capacity to truly understand spaces exceeding three spatial dimensions. To keep within this limitation, we describe a framework that decomposes a high-dimensional data space into a continuum of generalized 3D subspaces. Analysts can then explore these 3D subspaces individually via the familiar trackball interface while using additional facilities to smoothly transition to adjacent subspaces for expanded space comprehension. Since the number of such subspaces suffers from combinatorial explosion, we provide a set of data-driven subspace selection and navigation tools which can guide users to interesting subspaces and views. A subspace trail map allows users to manage the explored subspaces, keep their bearings, and return to interesting subspaces and views. Both the trackball and the trail map are embedded into a word cloud of attribute labels which aid in navigation. We demonstrate our system via several use cases in a diverse set of application areas: cluster analysis and refinement, information discovery, and supervised training of classifiers. We also report on a user study that evaluates the usability of the various interactions our system provides.
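The core geometric operation, projecting high-dimensional points onto a 3D subspace for trackball-style viewing, can be sketched as follows. A random orthonormal basis stands in for the paper's guided subspace selection; the names are illustrative:

```python
import numpy as np

def subspace_view(X, seed=0):
    """Project n high-dimensional points onto a random orthonormal 3D subspace,
    yielding coordinates suitable for trackball-style 3D exploration."""
    rng = np.random.default_rng(seed)
    # QR factorization of a random p-by-3 matrix gives an orthonormal 3D basis
    Q, _ = np.linalg.qr(rng.standard_normal((X.shape[1], 3)))
    return X @ Q, Q

coords, basis = subspace_view(np.random.default_rng(1).standard_normal((100, 20)))
```

Smoothly rotating one basis vector toward a new attribute axis would give the "transition to adjacent subspaces" the abstract describes.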
McEvoy, Peter M; Burgess, Melissa M; Nathan, Paula
2014-03-01
Cognitive behavioural therapy (CBT) is efficacious, but there remains individual variability in outcomes. Patients' interpersonal problems may affect treatment outcomes, either directly or through a relationship mediated by the helping alliance. Interpersonal problems may affect alliance and outcomes differentially in individual and group (CBGT) treatments. The main aim of this study was to investigate the relationship between interpersonal problems, alliance, dropout and outcomes for a clinical sample receiving either individual or group CBT for anxiety or depression in a community clinic. Patients receiving individual CBT (N=84) or CBGT (N=115) completed measures of interpersonal problems, alliance, and disorder-specific symptoms at the commencement and completion of CBT. In CBGT, higher pre-treatment interpersonal problems were associated with an increased risk of dropout and poorer outcomes. This relationship was not mediated by alliance. In individual CBT, those who reported higher alliance were more likely to complete treatment, although alliance was not associated with symptom change, and interpersonal problems were not related to attrition or outcome. Allocation to group and individual therapy was non-random, so selection bias may have influenced these results. Some analyses were only powered to detect large effects. Helping alliance ratings were high, so range restriction may have obscured the relationship between helping alliance, attrition and outcomes. Pre-treatment interpersonal problems increase the risk of dropout and predict poorer outcomes in CBGT, but not in individual CBT, and this relationship is not mediated by helping alliance. Stronger alliance is associated with treatment completion in individual, but not group, CBT. Copyright © 2014 Elsevier B.V. All rights reserved.
Gustafsson, Peter; Jonsson, Gunnar; Enghag, Margareta
2015-01-01
The problem-solving process is investigated for five groups of students when solving context-rich problems in an introductory physics course included in an engineering programme. Through transcripts of their conversation, the paths in the problem-solving process have been traced and related to a general problem-solving model. All groups exhibit…
A Hardy Inequality with Remainder Terms in the Heisenberg Group and the Weighted Eigenvalue Problem
Directory of Open Access Journals (Sweden)
Zixia Yuan
2007-12-01
Full Text Available Based on properties of vector fields, we prove Hardy inequalities with remainder terms in the Heisenberg group and a compact embedding in weighted Sobolev spaces. The best constants in Hardy inequalities are determined. Then we discuss the existence of solutions for the nonlinear eigenvalue problems in the Heisenberg group with weights for the p-sub-Laplacian. The asymptotic behaviour, simplicity, and isolation of the first eigenvalue are also considered.
A Hardy Inequality with Remainder Terms in the Heisenberg Group and the Weighted Eigenvalue Problem
Directory of Open Access Journals (Sweden)
Dou Jingbo
2007-01-01
Full Text Available Based on properties of vector fields, we prove Hardy inequalities with remainder terms in the Heisenberg group and a compact embedding in weighted Sobolev spaces. The best constants in Hardy inequalities are determined. Then we discuss the existence of solutions for the nonlinear eigenvalue problems in the Heisenberg group with weights for the p-sub-Laplacian. The asymptotic behaviour, simplicity, and isolation of the first eigenvalue are also considered.
Bennett, J B; Lehman, W E
1998-09-01
While job-related alcohol use may be associated with problems for drinkers, less is known about the effects of employee drinking on co-workers. We hypothesized that either exposure to co-worker drinking or the presence of a drinking climate would positively correlate with reports of stress and other problems. Following previous research, we also predicted that work group cohesion (or team orientation) would buffer against such problems. Two random samples of municipal employees (Ns = 909 and 1,068) completed anonymous surveys. These assessed individual drinking, co-worker drinking, task-oriented group cohesion, the direct reports of negative consequences due to co-worker substance use, and five problem indicators: job stress, job withdrawal, health problems, and performance (work accidents and absences). In each sample, drinking climate correlated with stress and withdrawal more so than did reports of individual drinking. Drinking climate and individual job stress were negatively associated with cohesion. ANCOVA results indicated that drinking climate combined with low cohesion resulted in increased vulnerability for all five problems. Moreover, cohesion appeared to attenuate the negative impact of exposure to drinking norms. As many as 40% of employees report at least one negative consequence associated with co-worker substance use (alcohol and drugs). Because teamwork may buffer negative effects of drinking climate on co-workers, workplace prevention efforts might be enhanced through a focus on the social environment. These efforts would include team-building and discussions of the impact of co-worker drinking on employee productivity.
Group Problem Solving as a Different Participatory Approach to Citizenship Education
Guérin, Laurence
2017-01-01
Purpose: The main goal of this article is to define and justify group problem solving as an approach to citizenship education. It is demonstrated that the choice of theoretical framework of democracy has consequences for the chosen learning goals, educational approach and learning activities. The framework used here is an epistemic theory…
Choi, Youngsoo; Ro, Heejung
2012-01-01
The development of positive attitudes in team-based work is important in management education. This study investigates hospitality students' attitudes toward group projects by examining instructional factors and team problems. Specifically, we examine how the students' perceptions of project appropriateness, instructors' support, and evaluation…
Within-Group Effect-Size Benchmarks for Problem-Solving Therapy for Depression in Adults
Rubin, Allen; Yu, Miao
2017-01-01
This article provides benchmark data on within-group effect sizes from published randomized clinical trials that supported the efficacy of problem-solving therapy (PST) for depression among adults. Benchmarks are broken down by type of depression (major or minor), type of outcome measure (interview or self-report scale), whether PST was provided…
Fan, Xitao; Wang, Lin
The Monte Carlo study compared the performance of predictive discriminant analysis (PDA) and that of logistic regression (LR) for the two-group classification problem. Prior probabilities were used for classification, but the cost of misclassification was assumed to be equal. The study used a fully crossed three-factor experimental design (with…
Improving Problem-Based Learning in Creative Communities through Effective Group Evaluation
West, Richard E.; Williams, Greg; Williams, David
2013-01-01
In this case study, we researched one cohort from the Center for Animation, a higher education teaching environment that has successfully fostered group creativity and learning outcomes through problem-based learning. Through live and videotaped observations of the interactions of this community over 18 months, in addition to focused interviews…
Saving Face: Managing Rapport in a Problem-Based Learning Group
Robinson, Leslie; Harris, Ann; Burton, Rob
2015-01-01
This qualitative study investigated the complex social aspects of communication required for students to participate effectively in Problem-Based Learning and explored how these dynamics are managed. The longitudinal study of a group of first-year undergraduates examined interactions using Rapport Management as a framework to analyse communication…
Directory of Open Access Journals (Sweden)
Aisling T. O'Donnell
2015-08-01
Full Text Available Previous research has demonstrated that the unemployed suffer increased psychological and physical health problems compared to their employed counterparts. Further, unemployment leads to an unwanted new social identity that is stigmatizing, and stigma is known to be a stressor causing psychological and physical health problems. However, it is not yet known whether being stigmatized as an unemployed group member is associated with psychological and physical health in this group. The current study tested the impact of anticipated stigma on psychological distress and physical health problems, operationalized as somatic symptoms, in a volunteer sample of unemployed people. Results revealed that anticipated stigma had a direct effect on both psychological distress and somatic symptoms, such that greater anticipated stigma significantly predicted higher levels of both. Moreover, the direct effect on somatic symptoms became non-significant when psychological distress was taken into account. Thus, to the extent that unemployed participants anticipated experiencing greater stigma, they also reported increased psychological distress, and this psychological distress predicted increased somatic symptoms. Our findings complement and extend the existing literature on the relationships between stigmatized identities, psychological distress and physical health problems, particularly in relation to the unemployed group. This group is important to consider both theoretically, given the unwanted and transient nature of the identity compared to other stigmatized identities, but also practically, as the findings indicate a need to orient to the perceived valence of the unemployed identity and its effects on psychological and physical health.
Evaluation of a group-based social skills training for children with problem behavior
van Vugt, E.S.; Deković, M.; Prinzie, P.; Stams, G.J.J.M.; Asscher, J.J.
2012-01-01
This study evaluated a group-based training program in social skills targeting reduction of problem behaviors in N = 161 children between 7 and 13 years of age. The effects of the intervention were tested in a quasi-experimental study, with a follow-up assessment 12 months after an optional
Student Perceptions about Group Based Problem Solving Process in Online and In-Class Settings
Directory of Open Access Journals (Sweden)
Salih Birişçi
2013-12-01
Full Text Available The aim of this study, through qualitative measures, was to systematically examine the perspectives and meanings formed by the teachers about teaching and learning. This study took place in the spring semester of the 2010-2011 academic year at Artvin Çoruh University. The sample size was 27. The participants were divided into two groups: Group 1, with group work taking place in an online setting (D1, N=12), and Group 2, with group work taking place in a classroom setting (D2, N=12). After six weeks of implementation, semi-structured interviews were conducted in order to determine students' thoughts on problem solving and group work and their attitudes towards mathematics. Positive change in students' attitudes towards mathematics occurred in both the D1 and D2 groups. The students stated that they developed increased interest in mathematics and found it enjoyable. Key Words: Problem solving, group study, online learning, student views
Energy Efficient MAC Scheme for Wireless Sensor Networks with High-Dimensional Data Aggregate
Directory of Open Access Journals (Sweden)
Seokhoon Kim
2015-01-01
Full Text Available This paper presents a novel and sustainable medium access control (MAC) scheme for wireless sensor network (WSN) systems that process high-dimensional aggregated data. Based on a preamble signal and buffer threshold analysis, it maximizes the energy efficiency of the wireless sensor devices which have limited energy resources. The proposed group management MAC (GM-MAC) approach not only sets the buffer threshold value of a sensor device to be reciprocal to the preamble signal but also sets a transmittable group value to each sensor device by using the preamble signal of the sink node. The primary difference between the previous and the proposed approach is that existing state-of-the-art schemes use duty cycle and sleep mode to save energy consumption of individual sensor devices, whereas the proposed scheme employs the group management MAC scheme for sensor devices to maximize the overall energy efficiency of the whole WSN systems by minimizing the energy consumption of sensor devices located near the sink node. Performance evaluations show that the proposed scheme outperforms the previous schemes in terms of active time of sensor devices, transmission delay, control overhead, and energy consumption. Therefore, the proposed scheme is suitable for sensor devices in a variety of wireless sensor networking environments with high-dimensional data aggregate.
Applying recursive numerical integration techniques for solving high dimensional integrals
Energy Technology Data Exchange (ETDEWEB)
Ammon, Andreas [IVU Traffic Technologies AG, Berlin (Germany); Genz, Alan [Washington State Univ., Pullman, WA (United States). Dept. of Mathematics; Hartung, Tobias [King' s College, London (United Kingdom). Dept. of Mathematics; Jansen, Karl; Volmer, Julia [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany). John von Neumann-Inst. fuer Computing NIC; Leoevey, Hernan [Humboldt Univ. Berlin (Germany). Inst. fuer Mathematik
2016-11-15
The error scaling for Markov-Chain Monte Carlo techniques (MCMC) with N samples behaves like 1/√(N). This scaling makes it often very time intensive to reduce the error of computed observables, in particular for applications in lattice QCD. It is therefore highly desirable to have alternative methods at hand which show an improved error scaling. One candidate for such an alternative integration technique is the method of recursive numerical integration (RNI). The basic idea of this method is to use an efficient low-dimensional quadrature rule (usually of Gaussian type) and apply it iteratively to integrate over high-dimensional observables and Boltzmann weights. We present the application of such an algorithm to the topological rotor and the anharmonic oscillator and compare the error scaling to MCMC results. In particular, we demonstrate that the RNI technique shows an error scaling in the number of integration points m that is at least exponential.
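The recursive idea, applying an efficient 1D quadrature rule one dimension at a time, can be sketched as follows. This is a plain NumPy illustration of the principle, not the authors' lattice-QCD implementation, and the integrand is an illustrative toy Boltzmann factor:

```python
import numpy as np

def recursive_integrate(f, dim, n=8, point=()):
    """Integrate f over [-1, 1]^dim by recursively applying a 1D
    n-point Gauss-Legendre rule, one dimension at a time."""
    if dim == 0:
        return f(np.array(point))
    nodes, weights = np.polynomial.legendre.leggauss(n)
    return sum(w * recursive_integrate(f, dim - 1, n, point + (x,))
               for x, w in zip(nodes, weights))

# Toy 3-dimensional Gaussian weight, integrated with only 8 points per dimension
val = recursive_integrate(lambda x: np.exp(-x @ x), dim=3)
```

For smooth integrands like this one, the Gaussian rule converges exponentially in the number of points per dimension, which is the error behaviour the abstract contrasts with the 1/√N scaling of MCMC.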
Building high dimensional imaging database for content based image search
Sun, Qinpei; Sun, Jianyong; Ling, Tonghui; Wang, Mingqing; Yang, Yuanyuan; Zhang, Jianguo
2016-03-01
In medical imaging informatics, content-based image retrieval (CBIR) techniques are employed to aid radiologists in the retrieval of images with similar image contents. CBIR uses visual contents, normally called image features, to search images from large-scale image databases according to users' requests in the form of a query image. However, most current CBIR systems require a distance computation over image feature vectors to perform a query, and these distance computations can become time-consuming as the number of image features grows, which limits the usability of such systems. In this presentation, we propose a novel framework which uses a high-dimensional database to index the image features to improve the accuracy and retrieval speed of CBIR in an integrated RIS/PACS.
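For contrast with the proposed indexing, the brute-force query that such a system aims to avoid looks roughly like this (illustrative names and synthetic 64-dimensional feature vectors):

```python
import numpy as np

def query_top_k(query_vec, feature_db, k=5):
    """Brute-force CBIR query: rank every database feature vector by
    Euclidean distance to the query and return the k closest indices."""
    dists = np.linalg.norm(feature_db - query_vec, axis=1)
    return np.argsort(dists)[:k]

rng = np.random.default_rng(0)
db = rng.standard_normal((1000, 64))          # 1000 images, 64-dim features
q = db[123] + 0.01 * rng.standard_normal(64)  # query image similar to image 123
hits = query_top_k(q, db, k=3)
```

The cost of this scan grows linearly with both database size and feature dimensionality, which is why an index over the feature space pays off at scale.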
Experimental High-Dimensional Einstein-Podolsky-Rosen Steering
Zeng, Qiang; Wang, Bo; Li, Pengyun; Zhang, Xiangdong
2018-01-01
Steering nonlocality is a fundamental property of quantum mechanics and has been widely demonstrated in systems with qubits. Recently, theoretical works have shown that the high-dimensional (HD) steering effect exhibits novel and important features, such as noise suppression, which appear promising for potential application in quantum information processing (QIP). However, experimental observation of these HD properties has remained a great challenge to date. In this work, we demonstrate the HD steering effect by encoding with orbital angular momentum photons for the first time. More importantly, we have quantitatively certified the noise-suppression phenomenon in the HD steering effect by introducing a tunable isotropic noise. We believe our results represent a significant advance in the study of nonlocal steering and have direct benefits for QIP applications with superior capacity and reliability.
Technical Report: Scalable Parallel Algorithms for High Dimensional Numerical Integration
Energy Technology Data Exchange (ETDEWEB)
Masalma, Yahya [Universidad del Turabo; Jiao, Yu [ORNL
2010-10-01
We implemented a scalable parallel quasi-Monte Carlo algorithm for high-dimensional numerical integration over tera-scale data points. The implemented algorithm uses Sobol quasi-sequences to generate random samples. The Sobol sequence was used to avoid clustering effects in the generated random samples and to produce low-discrepancy samples which cover the entire integration domain. The performance of the algorithm was tested. The obtained results demonstrate the scalability and accuracy of the implemented algorithm. The implemented algorithm could be used in different applications where a huge data volume is generated and numerical integration is required. We suggest using a hybrid MPI and OpenMP programming model to improve the performance of the algorithm. If the mixed model is used, attention should be paid to scalability and accuracy.
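The low-discrepancy idea can be sketched without external libraries. The snippet below substitutes a Halton point set, a simpler relative of the Sobol sequence the report uses, to estimate a high-dimensional integral; all names are illustrative:

```python
def van_der_corput(i, base):
    """i-th van der Corput value in the given base (digit reversal in [0, 1))."""
    f, r = 1.0, 0.0
    while i > 0:
        f /= base
        r += f * (i % base)
        i //= base
    return r

def qmc_integrate(f, dim, n=4096):
    """Quasi-Monte Carlo estimate of the integral of f over [0, 1]^dim
    using a Halton point set (one prime base per dimension)."""
    primes = [2, 3, 5, 7, 11, 13][:dim]
    return sum(f([van_der_corput(i, p) for p in primes])
               for i in range(1, n + 1)) / n

est = qmc_integrate(lambda x: x[0] * x[1] * x[2], dim=3)  # exact value: 1/8
```

Because the points fill the unit cube evenly rather than clustering, the error decays close to O(1/n) for smooth integrands, versus O(1/√n) for plain Monte Carlo.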
Interactive Java Tools for Exploring High-dimensional Data
Directory of Open Access Journals (Sweden)
James W. Bradley
2000-12-01
Full Text Available The World Wide Web (WWW is a new mechanism for providing information. At this point, the majority of the information on the WWW is static, which means it is incapable of responding to user input. Text, images, and video are examples of static information that can easily be included in a WWW page. With the advent of the Java programming language, it is now possible to embed dynamic information in the form of interactive programs called applets. Therefore, it is not only possible to transfer raw data over the WWW, but we can also now provide interactive graphics for displaying and exploring data in the context of a WWW page. In this paper, we will describe the use of Java applets that have been developed for the interactive display of high dimensional data on the WWW.
GD-RDA: A New Regularized Discriminant Analysis for High-Dimensional Data.
Zhou, Yan; Zhang, Baoxue; Li, Gaorong; Tong, Tiejun; Wan, Xiang
2017-11-01
High-throughput techniques bring novel tools and also statistical challenges to genomic research. Identifying the type of disease to which a new patient belongs has been recognized as an important problem. For high-dimensional small sample size data, the classical discriminant methods suffer from the singularity problem and are, therefore, no longer applicable in practice. In this article, we propose a geometric diagonalization method for the regularized discriminant analysis. We then consider a bias correction to further improve the proposed method. Simulation studies show that the proposed method performs better than, or at least as well as, the existing methods in a wide range of settings. A microarray dataset and an RNA-seq dataset are also analyzed and they demonstrate the superiority of the proposed method over the existing competitors, especially when the number of samples is small or the number of genes is large. Finally, we have developed an R package called "GDRDA" which is available upon request.
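The geometric diagonalization of GD-RDA is not reproduced here, but the underlying trick, restricting the covariance estimate to its diagonal so the discriminant stays well-defined when features outnumber samples, can be sketched as follows (an illustrative class and synthetic data, not the authors' GDRDA package):

```python
import numpy as np

class DiagonalLDA:
    """Discriminant analysis with a shared diagonal covariance estimate,
    which avoids the singularity problem of the full-covariance estimate
    when the number of features exceeds the number of samples."""
    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.means_ = np.array([X[y == c].mean(axis=0) for c in self.classes_])
        idx = np.searchsorted(self.classes_, y)
        self.var_ = (X - self.means_[idx]).var(axis=0) + 1e-8  # regularizing floor
        return self

    def predict(self, X):
        # score = squared distance to each class mean, per-feature variance weighted
        d2 = (((X[:, None, :] - self.means_[None]) ** 2) / self.var_).sum(axis=2)
        return self.classes_[np.argmin(d2, axis=1)]

# p >> n toy data: 40 samples, 200 features, mean shift in the first 10 features
rng = np.random.default_rng(0)
Xtr = rng.standard_normal((40, 200)); ytr = np.repeat([0, 1], 20)
Xtr[ytr == 1, :10] += 2.0
Xte = rng.standard_normal((40, 200)); yte = np.repeat([0, 1], 20)
Xte[yte == 1, :10] += 2.0
acc = (DiagonalLDA().fit(Xtr, ytr).predict(Xte) == yte).mean()
```

A full-covariance LDA would need to invert a singular 200×200 matrix estimated from 40 samples; the diagonal restriction sidesteps that entirely.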
Nakano, Takashi; Otsuka, Makoto; Yoshimoto, Junichiro; Doya, Kenji
2015-01-01
A theoretical framework of reinforcement learning plays an important role in understanding action selection in animals. Spiking neural networks provide a theoretically grounded means to test computational hypotheses on neurally plausible algorithms of reinforcement learning through numerical simulation. However, most of these models cannot handle observations which are noisy, or occurred in the past, even though these are inevitable and constraining features of learning in real environments. This class of problem is formally known as partially observable reinforcement learning (PORL) problems. It provides a generalization of reinforcement learning to partially observable domains. In addition, observations in the real world tend to be rich and high-dimensional. In this work, we use a spiking neural network model to approximate the free energy of a restricted Boltzmann machine and apply it to the solution of PORL problems with high-dimensional observations. Our spiking network model solves maze tasks with perceptually ambiguous high-dimensional observations without knowledge of the true environment. An extended model with working memory also solves history-dependent tasks. The way spiking neural networks handle PORL problems may provide a glimpse into the underlying laws of neural information processing which can only be discovered through such a top-down approach.
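The quantity the spiking network approximates, the free energy of a restricted Boltzmann machine, has a simple closed form that can be checked against brute-force marginalization over the hidden units (variable names here are illustrative):

```python
import numpy as np
import itertools

def rbm_free_energy(v, W, b, c):
    """Free energy of a binary RBM: F(v) = -b.v - sum_j log(1 + exp(c_j + (vW)_j)),
    obtained by summing exp(-E(v, h)) analytically over all hidden states h."""
    return -v @ b - np.sum(np.logaddexp(0.0, c + v @ W))

# Tiny RBM: 4 visible units, 3 hidden units
rng = np.random.default_rng(0)
W = rng.standard_normal((4, 3))
b = rng.standard_normal(4)
c = rng.standard_normal(3)
v = rng.integers(0, 2, 4).astype(float)

# Explicit marginalization: sum exp(-E(v, h)) over all 2^3 hidden configurations
Z_v = sum(np.exp(b @ v + c @ np.array(h) + v @ W @ np.array(h))
          for h in itertools.product([0, 1], repeat=3))
```

Because the exact sum is exponential in the number of hidden units, approximating this free energy (as the spiking network does) is what makes large models tractable.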
Bridging Creativity and Group by Elements of Problem-Based Learning (PBL)
DEFF Research Database (Denmark)
Zhou, Chunfang
2015-01-01
As recent studies have discussed Problem-Based Learning (PBL) as a popular model for fostering creativity, this paper aims to provide a theoretical framework bridging creativity and the student group context through elements of PBL. According to the literature review, these elements at least include group...... learning, problem solving, interdisciplinary learning, project management and facilitation. The five elements make PBL a suitable learning environment for developing individual creativity and for stimulating the interplay of individual creativity and group creativity. Thus, a theoretical model...... will be built for understanding how PBL influences creativity through its elements, which underpins a systematic view of PBL and creativity and indicates both theoretical and practical significance....
Caillaud, Sabine; Bonnot, Virginie; Ratiu, Eugenia; Krauth-Gruber, Silvia
2016-06-01
This study explores the way groups cope with collective responsibility for ecological problems. The social representations approach was adopted, and the collective symbolic coping model was used as a frame of analysis, integrating collective emotions to enhance the understanding of coping processes. The original feature of this study is that the analysis is at group level. Seven focus groups were conducted with French students. An original use of focus groups was proposed: Discussions were structured to induce feelings of collective responsibility and enable observation of how groups cope with such feelings at various levels (social knowledge; social identities; group dynamics). Two analyses were conducted: Qualitative analysis of participants' use of various kinds of knowledge, social categories and the group dynamics, and lexicometric analysis to reveal how emotions varied during the different discussion phases. Results showed that groups' emotional states moved from negative to positive: They used specific social categories and resorted to shared stereotypes to cope with collective responsibility and maintain the integrity of their worldview. Only then did debate become possible again; it was anchored in the nature-culture dichotomy such that groups switched from group-based to system-based emotions. © 2015 The British Psychological Society.
Problem Based Learning as a Shared Musical Journey – Group Dynamics, Communication and Creativity
Directory of Open Access Journals (Sweden)
Charlotte Lindvang
2015-06-01
Full Text Available The focus of this paper is how we can facilitate problem-based learning (PBL) more creatively. We take a closer look at the connection between creative processes and social communication in the PBL group, including how difficulties in the social interplay may hinder creativity. The paper draws on group dynamic theory and points out the importance of building a reflexive milieu in the group. Musical concepts are used to illustrate the communicative and creative aspects of PBL, and the paper uses the analogy between improvising together and doing project work together. We also discuss the role of the supervisor in a PBL group process. Further, we argue that creativity is rooted deep in our consciousness and connected to our ability to work with a flexible mind. In order to enhance the cohesion as well as the creativity of the group, a model of music listening as a concrete intervention tool in PBL processes is proposed.
Smart sampling and incremental function learning for very large high dimensional data.
Loyola R, Diego G; Pedergnana, Mattia; Gimeno García, Sebastián
2016-06-01
Very large high dimensional data are common nowadays and they impose new challenges to data-driven and data-intensive algorithms. Computational Intelligence techniques have the potential to provide powerful tools for addressing these challenges, but the current literature focuses mainly on handling scalability issues related to data volume in terms of sample size for classification tasks. This work presents a systematic and comprehensive approach for optimally handling regression tasks with very large high dimensional data. The proposed approach is based on smart sampling techniques for minimizing the number of samples to be generated by using an iterative approach that creates new sample sets until the input and output space of the function to be approximated are optimally covered. Incremental function learning takes place in each sampling iteration: the new samples are used to fine-tune the regression results of the function learning algorithm. The accuracy and confidence levels of the resulting approximation function are assessed using the probably approximately correct computation framework. The smart sampling and incremental function learning techniques can be easily used in practical applications and scale well in the case of extremely large data. The feasibility and good results of the proposed techniques are demonstrated using benchmark functions as well as functions from real-world problems. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.
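The sample-fit-validate loop described above can be sketched in one dimension; a polynomial surrogate and uniform sampling stand in for the paper's more elaborate machinery, and all names and tolerances are illustrative:

```python
import numpy as np

def incremental_learn(f, lo, hi, deg=7, tol=1e-4, batch=25, max_iter=40, seed=0):
    """Sample f in batches, fit a polynomial surrogate, and stop once the
    error on freshly drawn validation points falls below tol."""
    rng = np.random.default_rng(seed)
    xs = rng.uniform(lo, hi, batch)
    for _ in range(max_iter):
        coef = np.polyfit(xs, f(xs), deg)
        xv = rng.uniform(lo, hi, batch)                  # fresh validation draw
        err = np.max(np.abs(np.polyval(coef, xv) - f(xv)))
        if err < tol:
            return coef, err, xs.size
        xs = np.concatenate([xs, xv])                    # recycle as training data
    return coef, err, xs.size

coef, err, n_used = incremental_learn(np.sin, 0.0, 3.0)
```

Stopping as soon as held-out error is acceptable is what keeps the number of (possibly expensive) function evaluations to a minimum, the central goal of the smart-sampling scheme.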
Selecting Optimal Feature Set in High-Dimensional Data by Swarm Search
Directory of Open Access Journals (Sweden)
Simon Fong
2013-01-01
Full Text Available Selecting the right set of features from data of high dimensionality for inducing an accurate classification model is a tough computational challenge. It is almost an NP-hard problem, as the combinations of features escalate exponentially as the number of features increases. Unfortunately in data mining, as well as in other engineering applications and bioinformatics, some data are described by a long array of features. Many feature subset selection algorithms have been proposed in the past, but not all of them are effective. Since exhaustively trying every possible combination of features by brute force takes seemingly forever, stochastic optimization may be a solution. In this paper, we propose a new feature selection scheme called Swarm Search to find an optimal feature set by using metaheuristics. The advantage of Swarm Search is its flexibility in integrating any classifier into its fitness function and plugging in any metaheuristic algorithm to facilitate heuristic search. Simulation experiments are carried out by testing the Swarm Search over some high-dimensional datasets, with different classification algorithms and various metaheuristic algorithms. The comparative experiment results show that Swarm Search is able to attain relatively low error rates in classification without shrinking the size of the feature subset to its minimum.
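A minimal sketch of the wrapper idea behind Swarm Search: a swarm of binary feature masks, a classifier wrapped inside the fitness function, and a metaheuristic update that drifts particles toward the current best mask. Everything here (the nearest-centroid classifier, the synthetic data, the update rule and its rates) is an illustrative stand-in, not the paper's implementation.

```python
import random

def make_data(n=120, n_features=8, seed=1):
    # Synthetic binary task: only features 0 and 1 carry class signal,
    # the rest are pure noise.
    rng = random.Random(seed)
    data = []
    for _ in range(n):
        label = rng.randint(0, 1)
        row = [rng.gauss(label * 2.0, 0.5) if j < 2 else rng.gauss(0, 1)
               for j in range(n_features)]
        data.append((row, label))
    return data

def fitness(mask, data):
    # Wrapper fitness: accuracy of a nearest-centroid classifier
    # restricted to the selected features, minus a small size penalty.
    feats = [j for j, m in enumerate(mask) if m]
    if not feats:
        return 0.0
    cents = {}
    for lab in (0, 1):
        rows = [r for r, l in data if l == lab]
        cents[lab] = [sum(r[j] for r in rows) / len(rows) for j in feats]
    correct = 0
    for row, lab in data:
        d = {l: sum((row[j] - c[i]) ** 2 for i, j in enumerate(feats))
             for l, c in cents.items()}
        correct += (min(d, key=d.get) == lab)
    return correct / len(data) - 0.01 * len(feats)

def swarm_search(data, n_features=8, n_particles=12, iters=30, seed=2):
    rng = random.Random(seed)
    swarm = [[rng.randint(0, 1) for _ in range(n_features)]
             for _ in range(n_particles)]
    best = list(max(swarm, key=lambda m: fitness(m, data)))
    for _ in range(iters):
        for p in swarm:
            for j in range(n_features):
                if rng.random() < 0.3:      # drift toward the global best
                    p[j] = best[j]
                elif rng.random() < 0.1:    # random exploration
                    p[j] = 1 - p[j]
        cand = max(swarm, key=lambda m: fitness(m, data))
        if fitness(cand, data) > fitness(best, data):
            best = list(cand)
    return best

data = make_data()
best = swarm_search(data)
```

Because the classifier sits inside `fitness`, swapping in a different classifier or a different metaheuristic update only touches one function each, which is the flexibility the abstract emphasizes.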
Using High-Dimensional Image Models to Perform Highly Undetectable Steganography
Pevný, Tomáš; Filler, Tomáš; Bas, Patrick
This paper presents a complete methodology for designing practical and highly undetectable stegosystems for real digital media. The main design principle is to minimize a suitably defined distortion by means of an efficient coding algorithm. The distortion is defined as a weighted difference of extended state-of-the-art feature vectors already used in steganalysis. This allows us to "preserve" the model used by the steganalyst and thus remain undetectable even for large payloads. This framework can be efficiently implemented even when the dimensionality of the feature set used by the embedder is larger than 10^7. The high-dimensional model is necessary to avoid known security weaknesses. Although high-dimensional models might be a problem in steganalysis, we explain why they are acceptable in steganography. As an example, we introduce HUGO, a new embedding algorithm for spatial-domain digital images, and we contrast its performance with LSB matching. On the BOWS2 image database, and in contrast with LSB matching, HUGO allows the embedder to hide a 7× longer message at the same level of security.
An Efficient High Dimensional Cluster Method and its Application in Global Climate Sets
Directory of Open Access Journals (Sweden)
Ke Li
2007-10-01
Full Text Available Because of the development of modern-day satellites and other data acquisition systems, global climate research often involves overwhelming volume and complexity of high dimensional datasets. As a data preprocessing and analysis method, the clustering method is playing an increasingly important role in this research. In this paper, we propose a spatial clustering algorithm that, to some extent, mitigates the curse of dimensionality in high dimensional clustering. The similarity measure of our algorithm is based on the number of top-k nearest neighbors that two grids share. The neighbors of each grid are computed from the time series associated with each grid, and computing the nearest neighbors of an object is the most time-consuming step. Following Tobler's "First Law of Geography," we add a spatial window constraint upon each grid to restrict the number of grids considered, which greatly improves the efficiency of our algorithm. We apply this algorithm to a 100-year global climate dataset and partition the global surface into sub-areas under various spatial granularities. Experiments indicate that our spatial clustering algorithm works well.
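The two ingredients named above, a shared-nearest-neighbour similarity between grids and a spatial window that prunes candidates, can be sketched directly in stdlib Python. This is a toy illustration, not the paper's code; the window size, k, and the example grids are made up.

```python
def knn_within_window(series, coords, k=3, window=2.0):
    # For each grid cell, find the k nearest neighbours (Euclidean
    # distance between the cells' time series), restricted to cells
    # whose spatial distance lies within the window (Tobler's law).
    n = len(series)
    neigh = []
    for i in range(n):
        cand = []
        for j in range(n):
            if i == j:
                continue
            dx = coords[i][0] - coords[j][0]
            dy = coords[i][1] - coords[j][1]
            if (dx * dx + dy * dy) ** 0.5 > window:
                continue  # outside the spatial window: never considered
            dist = sum((a - b) ** 2 for a, b in zip(series[i], series[j]))
            cand.append((dist, j))
        cand.sort()
        neigh.append({j for _, j in cand[:k]})
    return neigh

def snn_similarity(neigh, i, j):
    # Similarity = number of top-k nearest neighbours two grids share.
    return len(neigh[i] & neigh[j])

# Tiny example: three nearby grids with similar climate series and
# one distant grid that the window excludes entirely.
series = [[0.0, 0.0], [0.1, 0.1], [0.2, 0.2], [5.0, 5.0]]
coords = [(0, 0), (1, 0), (2, 0), (10, 0)]
neigh = knn_within_window(series, coords, k=3, window=3.0)
```

The window check runs before any time-series distance is computed, which is exactly why it removes the dominant cost of the neighbour search.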
Directory of Open Access Journals (Sweden)
Mr. Evgeny M. Klimenko
2016-06-01
Full Text Available The article considers the problem of intellectuals as a social class, features of the national intellectuals, and the problems that the national intellectuals of indigenous ethnic groups of the Khabarovsk region faced in the 1990s. These are the problems of preserving national culture and national language, and the difficulties that arose for representatives of the autochthonous population in obtaining a national education. The article investigates the transformation process of national culture that took place after the Soviet culture had given way to the global culture. In writing this paper the author used unpublished archival documentation. The research was made with the assistance of the Ministry of Education and Science of the Russian Federation, contract No. 14.Z56.16.5304-MK, project subject: "Regional model of transformation of indigenous small ethnos culture in the conditions of socialist modernization of the Russian Far East in the second half of 1930-1970s".
National Research Council Canada - National Science Library
藤田, 幸一
2015-01-01
There is a tendency that the success of some of the village-level savings groups in Laos inevitably causes an excess funds problem, because the activity of a saving group is confined to a small village territory...
Reinke, Charles M; De la Mata Luque, Teofilo M; Su, Mehmet F; Sinclair, Michael B; El-Kady, Ihab
2011-06-01
The problem of designing electromagnetic metamaterials is complicated by the pseudo-infinite parameter space governing such materials. We present a general solution based on group theory for the design and optimization of the electromagnetic properties of metamaterials. Using this framework, the fundamental properties of a metamaterial design, such as anisotropy or magnetic or electrical resonances, can be elucidated based on the symmetry class into which the unit cell falls. This provides a methodology for the inverse problem of design of the electromagnetic properties of a metamaterial. We also present simulations of a zia metamaterial that provides greater design flexibility for tuning the resonant properties of the device than a structure based on a simple split-ring resonator. The power of this zia element is demonstrated by creating bianisotropic, chiral, and biaxial designs using the inverse group-theory procedure outlined in this paper.
The effects of homogeneous small groups on the efficacy of problem-based learning
Directory of Open Access Journals (Sweden)
JAVAD KOJURI
2013-07-01
Full Text Available Introduction: Problem-based learning (PBL) as a learning style has gained a special position at different levels of education systems, and many different approaches, such as tutor education, proper scenario presentation, etc., are used to increase its efficiency. However, the role of homogeneous groups in facilitating teamwork has never been studied. The purpose of this study is to examine the effect of selective group allocation on PBL efficiency. Methods: In this semi-experimental double-blinded study, 40 students of medicine during their externship in the radiology department were divided into two equivalent groups based on their grade point averages. The same topics and the same instructors were chosen for both groups. In the control group, the students were randomly divided into four subgroups, each with five members. The subgroups in the study group, on the other hand, were homogenized based on their grade point averages. Results: The students' rate of learning of the theoretical topics and their performance in reporting and interpreting radiographs were measured at the beginning and at the end of the study in both groups by two questionnaires with Cronbach's alphas of 0.87 and 0.85. All students were male, with a mean age of 23.7 ± 1.19 years. Age, grade point average in the last semester, and the mean pre- and post-test scores in both groups showed a normal pattern of distribution (p>0.05). The learning and performance scores in each group at the beginning and at the end of the course showed a statistically significant difference, with p values of 0.011 and 0.03, respectively. Conclusion: Homogenizing the PBL groups, with allocation of more competent students to each group, plays a complementary tutor role and boosts the level of learning by enhancing group dynamics.
How students perceive problem-based learning (PBL) group tutorials at a Swedish Medical College
Szabó, Zoltán; Harangi, Márta; Nylander, Eva; Ljungman, Anders; Theodorsson, Annette; Ahn, Henrik; Davidsson, Bo
2015-01-01
Introduction: student perception of problem-based learning (PBL) group tutorials was investigated at a Swedish University Medical College 27 years after the introduction of PBL into the curriculum. Methods: a survey questionnaire comprising 43 questions answered on a Likert-type scale, together with one open question was used. The questionnaire was distributed to all 821 students taking part in the Linköping University medical program at the beginning of the Spring Term 2013. The results were...
A randomized trial of group parent training: reducing child conduct problems in real-world settings.
Kjøbli, John; Hukkelberg, Silje; Ogden, Terje
2013-03-01
Group-based Parent Management Training, the Oregon model (PMTO, 12 sessions) was delivered by the regular staff of municipal child and family services. PMTO is based on social interaction learning theory and promotes positive parenting skills in parents of children with conduct problems. This study examined the effectiveness of the group-based training intervention in real-world settings both immediately following and six months after termination of the intervention. One hundred thirty-seven children (3-12 years) and their parents participated in this study. The families were randomly assigned to group-based training or a comparison group. Data were collected from parents and teachers. The caregiver assessments of parenting practices and child conduct problems and the caregiver- and teacher-reported social competence revealed immediate and significant intervention effects. Short- and long-term beneficial effects were reported by parents, although no follow-up effects were evident on teacher reports. These effectiveness findings and the potential for increasing the number of families served support the further dissemination and implementation of group-based parent training. Copyright © 2012 Elsevier Ltd. All rights reserved.
A qualitative numerical study of high dimensional dynamical systems
Albers, David James
Since Poincare, the father of modern mathematical dynamical systems, much effort has been exerted to achieve a qualitative understanding of the physical world via a qualitative understanding of the functions we use to model the physical world. In this thesis, we construct a numerical framework suitable for a qualitative, statistical study of dynamical systems using the space of artificial neural networks. We analyze the dynamics along intervals in parameter space, separating the set of neural networks into roughly four regions: the fixed point to the first bifurcation; the route to chaos; the chaotic region; and a transition region between chaos and finite-state neural networks. The study is primarily with respect to high-dimensional dynamical systems. We make the following general conclusions as the dimension of the dynamical system is increased: the probability of the first bifurcation being of type Neimark-Sacker is greater than ninety percent; the most probable route to chaos is via a cascade of bifurcations of high-period periodic orbits, quasi-periodic orbits, and 2-tori; there exists an interval of parameter space such that hyperbolicity is violated on a countable, Lebesgue measure 0, "increasingly dense" subset; chaos is much more likely to persist with respect to parameter perturbation in the chaotic region of parameter space as the dimension is increased; moreover, as the number of positive Lyapunov exponents is increased, the likelihood that any significant portion of these positive exponents can be perturbed away decreases with increasing dimension. The maximum Kaplan-Yorke dimension and the maximum number of positive Lyapunov exponents increase linearly with dimension. The probability of a dynamical system being chaotic increases exponentially with dimension. The results with respect to the first bifurcation and the route to chaos comment on previous results of Newhouse, Ruelle, Takens, Broer, Chenciner, and Iooss.
Moreover, results regarding the high-dimensional
Kavakiotis, Ioannis; Samaras, Patroklos; Triantafyllidis, Alexandros; Vlahavas, Ioannis
2017-11-01
Single Nucleotide Polymorphisms (SNPs) are nowadays becoming the marker of choice for biological analyses involving a wide range of applications of great medical, biological, economic and environmental interest. Classification tasks, i.e. the assignment of individuals to groups of origin based on their (multi-locus) genotypes, are performed in many fields, such as forensic investigations, discrimination between wild and/or farmed populations, and others. These tasks should be performed with a small number of loci, for computational as well as biological reasons. Thus, feature selection should precede classification tasks, especially for Single Nucleotide Polymorphism (SNP) datasets, where the number of features can amount to hundreds of thousands or millions. In this paper, we present a novel data mining approach, called FIFS - Frequent Item Feature Selection, based on the use of frequent items for selection of the most informative markers from population genomic data. It is a modular method, consisting of two main components. The first one identifies the most frequent and unique genotypes for each sampled population. The second one selects the most appropriate among them, in order to create the informative SNP subsets to be returned. The proposed method (FIFS) was tested on a real dataset, which comprised a comprehensive coverage of pig breed types present in Britain. This dataset consisted of 446 individuals divided into 14 sub-populations, genotyped at 59,436 SNPs. Our method outperforms the state-of-the-art and baseline methods in every case. More specifically, our method surpassed the assignment accuracy threshold of 95% needing only half the number of SNPs selected by other methods (FIFS: 28 SNPs; Delta: 70 SNPs; Pairwise FST: 70 SNPs; In: 100 SNPs). Conclusion: Our approach successfully deals with the problem of informative marker selection in high dimensional genomic datasets. It offers better results compared to existing approaches and can aid biologists
Grošelj, Petra; Zadnik Stirn, Lidija
2015-09-15
Environmental management problems can be dealt with by combining participatory methods, which make it possible to include various stakeholders in a decision-making process, and multi-criteria methods, which offer a formal model for structuring and solving a problem. This paper proposes a three-phase decision making approach based on the analytic network process and SWOT (strengths, weaknesses, opportunities and threats) analysis. The approach enables inclusion of various stakeholders or groups of stakeholders in particular stages of decision making. The structure of the proposed approach is composed of a network consisting of an objective cluster, a cluster of strategic goals, a cluster of SWOT factors and a cluster of alternatives. The application of the suggested approach is applied to a management problem of Pohorje, a mountainous area in Slovenia. Stakeholders from sectors that are important for Pohorje (forestry, agriculture, tourism and nature protection agencies) who can offer a wide range of expert knowledge were included in the decision-making process. The results identify the alternative of "sustainable development" as the most appropriate for development of Pohorje. The application in the paper offers an example of employing the new approach to an environmental management problem. This can also be applied to decision-making problems in various other fields. Copyright © 2015 Elsevier Ltd. All rights reserved.
Locke, Kenneth D; Sayegh, Liliane; Penberthy, J Kim; Weber, Charlotte; Haentjens, Katherine; Turecki, Gustavo
2017-06-01
We assessed severely and persistently depressed patients' interpersonal self-efficacy, problems, and goals, plus changes in interpersonal functioning and depression during 20 weeks of group therapy. Outpatients (32 female, 26 male, mean age = 45 years) completed interpersonal circumplex measures of goals, efficacy, and problems before completing 20 weeks of manualized group therapy, during which we regularly assessed depression and interpersonal style. Compared to normative samples, patients lacked interpersonal agency, including less self-efficacy for expressive/assertive actions; stronger motives to avoid conflict, scorn, and humiliation; and more problems with being too submissive, inhibited, and accommodating. Behavioral Activation and especially Cognitive Behavioral Analysis System of Psychotherapy interventions produced improvements in depression and interpersonal agency, with increases in "agentic and communal" efficacy predicting subsequent decreases in depression. While severely and persistently depressed patients were prone to express maladaptive interpersonal dispositions, over the course of group therapy, they showed increasingly agentic and beneficial patterns of cognitions, motives, and behaviors. © 2016 Wiley Periodicals, Inc.
Bayesian Subset Modeling for High-Dimensional Generalized Linear Models
Liang, Faming
2013-06-01
This article presents a new prior setting for high-dimensional generalized linear models, which leads to a Bayesian subset regression (BSR) with the maximum a posteriori model approximately equivalent to the minimum extended Bayesian information criterion model. The consistency of the resulting posterior is established under mild conditions. Further, a variable screening procedure is proposed based on the marginal inclusion probability, which shares the same properties of sure screening and consistency with the existing sure independence screening (SIS) and iterative sure independence screening (ISIS) procedures. However, since the proposed procedure makes use of joint information from all predictors, it generally outperforms SIS and ISIS in real applications. This article also makes extensive comparisons of BSR with the popular penalized likelihood methods, including Lasso, elastic net, SIS, and ISIS. The numerical results indicate that BSR can generally outperform the penalized likelihood methods. The models selected by BSR tend to be sparser and, more importantly, of higher prediction ability. In addition, the performance of the penalized likelihood methods tends to deteriorate as the number of predictors increases, while this is not significant for BSR. Supplementary materials for this article are available online. © 2013 American Statistical Association.
Experimental Design of Formulations Utilizing High Dimensional Model Representation.
Li, Genyuan; Bastian, Caleb; Welsh, William; Rabitz, Herschel
2015-07-23
Many applications involve formulations or mixtures where large numbers of components are available to choose from, but a final composition with only a few components is sought. Finding suitable binary or ternary mixtures from all the permissible components often relies on simplex-lattice sampling in traditional design of experiments (DoE), which requires performing a large number of experiments even for just tens of permissible components. The effort rises very rapidly with increasing numbers of components and can readily become impractical. This paper proposes constructing a single model for a mixture containing all permissible components from just a modest number of experiments. Yet the model is capable of satisfactorily predicting the performance for full as well as all possible binary and ternary component mixtures. To achieve this goal, we utilize biased random sampling combined with high dimensional model representation (HDMR) to replace DoE simplex-lattice design. Compared with DoE, the required number of experiments is significantly reduced, especially when the number of permissible components is large. This study is illustrated with a solubility model for solvent mixture screening.
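To illustrate the HDMR side of the approach, here is a toy stdlib-Python estimate of the zeroth- and first-order HDMR components from random samples (the paper's method additionally uses biased sampling and handles mixture constraints; none of that is reproduced here, and the test function is an illustrative assumption).

```python
import random

def first_order_hdmr(f, dim, n_samples=20000, bins=10, seed=0):
    # Estimate the zeroth- and first-order HDMR components of f on
    # [0,1]^dim from random samples: f0 is the overall mean, and
    # f_i(x_i) is the conditional mean of f given x_i, minus f0,
    # approximated here by binning each input axis.
    rng = random.Random(seed)
    xs = [[rng.random() for _ in range(dim)] for _ in range(n_samples)]
    ys = [f(x) for x in xs]
    f0 = sum(ys) / len(ys)
    comps = []
    for i in range(dim):
        sums = [0.0] * bins
        counts = [0] * bins
        for x, y in zip(xs, ys):
            b = min(int(x[i] * bins), bins - 1)
            sums[b] += y
            counts[b] += 1
        comps.append([s / c - f0 for s, c in zip(sums, counts)])
    return f0, comps

# Additive test function: first-order HDMR captures it exactly, so the
# component for the inactive third input should be near zero.
f0, comps = first_order_hdmr(lambda x: 2 * x[0] + x[1], dim=3)
```

The same random sample set is reused for every component, which is the economy the abstract points to: one modest experiment set yields the whole first-order model.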
The literary uses of high-dimensional space
Directory of Open Access Journals (Sweden)
Ted Underwood
2015-12-01
Full Text Available Debates over “Big Data” shed more heat than light in the humanities, because the term ascribes new importance to statistical methods without explaining how those methods have changed. What we badly need instead is a conversation about the substantive innovations that have made statistical modeling useful for disciplines where, in the past, it truly wasn’t. These innovations are partly technical, but more fundamentally expressed in what Leo Breiman calls a new “culture” of statistical modeling. Where 20th-century methods often required humanists to squeeze our unstructured texts, sounds, or images into some special-purpose data model, new methods can handle unstructured evidence more directly by modeling it in a high-dimensional space. This opens a range of research opportunities that humanists have barely begun to discuss. To date, topic modeling has received most attention, but in the long run, supervised predictive models may be even more important. I sketch their potential by describing how Jordan Sellers and I have begun to model poetic distinction in the long 19th century—revealing an arc of gradual change much longer than received literary histories would lead us to expect.
Efficient Smoothed Concomitant Lasso Estimation for High Dimensional Regression
Ndiaye, Eugene; Fercoq, Olivier; Gramfort, Alexandre; Leclère, Vincent; Salmon, Joseph
2017-10-01
In high dimensional settings, sparse structures are crucial for efficiency, in terms of memory, computation and performance. It is customary to consider an ℓ1 penalty to enforce sparsity in such scenarios. Sparsity-enforcing methods, the Lasso being a canonical example, are popular candidates to address high dimension. For efficiency, they rely on tuning a parameter that trades data fitting against sparsity. For the Lasso theory to hold, this tuning parameter should be proportional to the noise level, yet the latter is often unknown in practice. A possible remedy is to jointly optimize over the regression parameter as well as over the noise level. This has been considered under several names in the literature: Scaled Lasso, Square-root Lasso, and Concomitant Lasso estimation, for instance, and could be of interest for uncertainty quantification. In this work, after illustrating numerical difficulties with the Concomitant Lasso formulation, we propose a modification we coined the Smoothed Concomitant Lasso, aimed at increasing numerical stability. We propose an efficient and accurate solver with a computational cost no more expensive than that of the Lasso. We leverage standard ingredients behind the success of fast Lasso solvers: a coordinate descent algorithm, combined with safe screening rules, to achieve speed efficiency by eliminating irrelevant features early.
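The joint optimization over the regression vector and the noise level can be sketched as a simple alternating scheme: coordinate descent on beta for fixed sigma, then a closed-form sigma update clipped at a lower bound sigma0 (the "smoothing"). This naive stdlib-Python version ignores the paper's safe screening rules and efficient solver; it is only a conceptual sketch, and all constants and the test data are illustrative.

```python
import random

def soft(a, t):
    # Soft-thresholding operator for the l1 penalty.
    return (a - t) if a > t else (a + t) if a < -t else 0.0

def smoothed_concomitant_lasso(X, y, lam=0.1, sigma0=1e-3, iters=200):
    # Alternating minimisation sketch of
    #   min_{beta, sigma >= sigma0}  ||y - X beta||^2 / (2 n sigma)
    #                                + sigma / 2 + lam * ||beta||_1
    # beta-step: coordinate descent; sigma-step: closed form,
    # clipped at sigma0 (the smoothing that names the method).
    n, p = len(X), len(X[0])
    beta = [0.0] * p
    sigma = 1.0
    for _ in range(iters):
        for j in range(p):
            # partial residual excluding feature j
            r = [y[i] - sum(X[i][k] * beta[k] for k in range(p) if k != j)
                 for i in range(n)]
            rho = sum(X[i][j] * r[i] for i in range(n))
            zj = sum(X[i][j] ** 2 for i in range(n))
            beta[j] = soft(rho, n * sigma * lam) / zj
        resid = [y[i] - sum(X[i][k] * beta[k] for k in range(p))
                 for i in range(n)]
        sigma = max(sigma0, (sum(e * e for e in resid) / n) ** 0.5)
    return beta, sigma

# Toy data: y depends on the first of four features, noise sd = 0.5,
# so the estimated sigma should land near the true noise level.
rng = random.Random(3)
X = [[rng.gauss(0, 1) for _ in range(4)] for _ in range(60)]
y = [3 * X[i][0] + rng.gauss(0, 0.5) for i in range(60)]
beta, sigma = smoothed_concomitant_lasso(X, y)
```

Note how the effective threshold `n * sigma * lam` shrinks together with the noise estimate, which is the self-tuning property that motivates the concomitant formulation.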
An unsupervised feature extraction method for high dimensional image data compaction
Ghassemian, Hassan; Landgrebe, David
1987-01-01
A new on-line unsupervised feature extraction method for high-dimensional remotely sensed image data compaction is presented. This method can be utilized to solve the problem of data redundancy in scene representation by satellite-borne high resolution multispectral sensors. The algorithm first partitions the observation space into an exhaustive set of disjoint objects. Then, pixels that belong to an object are characterized by an object feature. Finally, the set of object features is used for data transmission and classification. The example results show that the performance with the compacted features provides a slight improvement in classification accuracy rather than any degradation. Also, the information extraction method does not need to be preceded by data decompaction.
The Effects of Feature Optimization on High-Dimensional Essay Data
Directory of Open Access Journals (Sweden)
Bong-Jun Yi
2015-01-01
Full Text Available Current machine learning (ML) based automated essay scoring (AES) systems have employed various and vast numbers of features, which have been proven to be useful in improving the performance of the AES. However, the high-dimensional feature space is not properly represented, due to the large volume of features extracted from the limited training data. As a result, this problem gives rise to poor performance and increased training time for the system. In this paper, we experiment and analyze the effects of feature optimization, including normalization, discretization, and feature selection techniques for different ML algorithms, while taking into consideration the size of the feature space and the performance of the AES. Accordingly, we show that appropriate feature optimization techniques can reduce the dimensions of features, thus contributing to the efficient training and performance improvement of AES.
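A minimal stdlib-Python sketch of the three feature-optimization steps named above: normalization, discretization, and a filter-style feature selection (scored here by mutual information). The concrete choices (min-max scaling, equal-width bins, MI ranking) are illustrative assumptions, not the paper's exact pipeline.

```python
import math
import random
from collections import Counter

def normalize(col):
    # Min-max normalisation of one feature column to [0, 1].
    lo, hi = min(col), max(col)
    return [(v - lo) / (hi - lo) if hi > lo else 0.0 for v in col]

def discretize(col, bins=4):
    # Equal-width discretisation of a normalised column.
    return [min(int(v * bins), bins - 1) for v in col]

def mutual_information(xs, ys):
    # MI (in nats) between a discretised feature and the class labels.
    n = len(xs)
    pxy = Counter(zip(xs, ys))
    px, py = Counter(xs), Counter(ys)
    return sum((c / n) * math.log((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

def select_features(columns, labels, top=2):
    # Filter-style selection: keep the `top` features with highest MI
    # after normalisation and discretisation.
    scored = [(mutual_information(discretize(normalize(c)), labels), j)
              for j, c in enumerate(columns)]
    scored.sort(reverse=True)
    return [j for _, j in scored[:top]]

# Toy data: columns 0 and 2 track the label, column 1 is noise.
rng = random.Random(0)
labels = [rng.randint(0, 1) for _ in range(200)]
sig1 = [l + rng.gauss(0, 0.1) for l in labels]
noise = [rng.gauss(0, 1) for _ in labels]
sig2 = [2 * l + rng.gauss(0, 0.1) for l in labels]
selected = select_features([sig1, noise, sig2], labels, top=2)
```

Because the filter scores each feature independently of any learner, it is cheap enough to run over a very large essay-feature space before training begins.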
Parabolic Theory as a High-Dimensional Limit of Elliptic Theory
Davey, Blair
2017-10-01
The aim of this article is to show how certain parabolic theorems follow from their elliptic counterparts. This technique is demonstrated through new proofs of five important theorems in parabolic unique continuation and the regularity theory of parabolic equations and geometric flows. Specifically, we give new proofs of an L² Carleman estimate for the heat operator, and the monotonicity formulas for the frequency function associated to the heat operator, the two-phase free boundary problem, the flow of harmonic maps, and the mean curvature flow. The proofs rely only on the underlying elliptic theorems and limiting procedures belonging essentially to probability theory. In particular, each parabolic theorem is proved by taking a high-dimensional limit of the related elliptic result.
Robust Hessian Locally Linear Embedding Techniques for High-Dimensional Data
Directory of Open Access Journals (Sweden)
Xianglei Xing
2016-05-01
Full Text Available Recently, manifold learning has received extensive interest in the pattern recognition community. Despite their appealing properties, most manifold learning algorithms are not robust in practical applications. In this paper, we address this problem in the context of the Hessian locally linear embedding (HLLE) algorithm and propose a more robust method, called RHLLE, which aims to be robust against both outliers and noise in the data. Specifically, we first propose a fast outlier detection method for high-dimensional datasets. Then, we employ a local smoothing method to reduce noise. Furthermore, we reformulate the original HLLE algorithm by using the truncation function from differentiable manifolds. In the reformulated framework, we explicitly introduce a weighted global functional to further reduce the undesirable effect of outliers and noise on the embedding result. Experiments on synthetic as well as real datasets demonstrate the effectiveness of our proposed algorithm.
Uncertainty dimensions of information behaviour in a group based problem solving context
DEFF Research Database (Denmark)
Hyldegård, Jette
2009-01-01
This paper presents a study of uncertainty dimensions of information behaviour in a group based problem solving context. After a presentation of the cognitive uncertainty dimension underlying Kuhlthau's ISP-model, uncertainty factors associated with personality, the work task situation and social… members' experiences of uncertainty differ from the individual information seeker in Kuhlthau's ISP-model, and how this experience may be related to personal, work task and social factors. A number of methods have been employed to collect data on each group member during the assignment process…: a demographic survey, a personality test, 3 process surveys, 3 diaries and 3 interviews. It was found that group members' experiences of uncertainty did not correspond with the ISP-model in that other factors beyond the mere information searching process seemed to intermingle with the complex process…
Representing potential energy surfaces by high-dimensional neural network potentials.
Behler, J
2014-05-07
The development of interatomic potentials employing artificial neural networks has seen tremendous progress in recent years. While until recently the applicability of neural network potentials (NNPs) has been restricted to low-dimensional systems, this limitation has now been overcome and high-dimensional NNPs can be used in large-scale molecular dynamics simulations of thousands of atoms. NNPs are constructed by adjusting a set of parameters using data from electronic structure calculations, and in many cases energies and forces can be obtained with very high accuracy. Therefore, NNP-based simulation results are often very close to those gained by a direct application of first-principles methods. In this review, the basic methodology of high-dimensional NNPs will be presented with a special focus on the scope and the remaining limitations of this approach. The development of NNPs requires substantial computational effort as typically thousands of reference calculations are required. Still, if the problem to be studied involves very large systems or long simulation times this overhead is regained quickly. Further, the method is still limited to systems containing about three or four chemical elements due to the rapidly increasing complexity of the configuration space, although many atoms of each species can be present. Due to the ability of NNPs to describe even extremely complex atomic configurations with excellent accuracy irrespective of the nature of the atomic interactions, they represent a general and therefore widely applicable technique, e.g. for addressing problems in materials science, for investigating properties of interfaces, and for studying solvation processes.
Directory of Open Access Journals (Sweden)
Alessandra Turini Bolsoni-Silva
2010-01-01
Full Text Available Negative parental practices may influence the onset and maintenance of externalizing behavior problems, and positive parenting seems to improve children's social skills and reduce behavior problems. The objective of the present study was to describe the effects of an intervention designed to foster parents' social skills related to upbringing practices in order to reduce externalizing problems in children aged 4 to 6 years. Thirteen mothers and two caretaker grandmothers took part in the study, with an average of four participants per group. To assess intervention effects, we used a repeated-measures design with control, pre-, and post-intervention assessments. Instruments used were: (a) an interview schedule that evaluates the social interactions between parents and children functionally, considering each pair of the child's and parent's behaviors as context for one another; (b) a Social Skills Inventory; and (c) the Child Behavior Checklist (CBCL). The intervention was effective in improving parents' general social skills, decreasing negative parental practices and decreasing child behavior problems.
Ihm, Jung-Joon; An, So-Youn; Seo, Deog-Gyu
2017-06-01
The aim of this study was to determine whether the personality types of dental students and their group dynamics were linked to their problem-based learning (PBL) performance. The Myers-Briggs Type Indicator (MBTI) instrument was used with 263 dental students enrolled in Seoul National University School of Dentistry from 2011 to 2013; the students had participated in PBL in their first year. A four-session PBL setting was designed to analyze how individual personality types and the diversity of their small groups were associated with PBL performance. Overall, the results showed that the most prominent personality type with respect to PBL performance was Judging. As a group became more diverse in its constituent personality characteristics, there was a tendency for the group to rank higher in PBL performance. In particular, the overperforming groups clustered around three major profiles: Extraverted Intuitive Thinking Judging (ENTJ), Introverted Sensing Thinking Judging (ISTJ), and Extraverted Sensing Thinking Judging (ESTJ). Personality analysis would be beneficial for dental faculty members in understanding the extent to which cooperative learning would work smoothly, especially when considering group personalities.
Group interaction in problem-based learning tutorials: a systematic review.
Azer, S A; Azer, D
2015-11-01
This review aimed to identify studies on group interaction in problem-based learning (PBL), to elucidate the methods used, and to examine the factors affecting group interaction and the relationship between interaction and students' learning. PubMed, EMBASE, PsycINFO and HighWire were searched (January 1999 to June 2013) using a combination of pre-specified search terms. The search words were also used in searching nine journals in dental and medical education, and edited research books on PBL were searched as well. Both qualitative and descriptive studies of group interaction were selected and critically appraised. Finally, 42 of 10,606 papers were included (35 journal articles and seven from research books). The materials used in assessing group interaction varied depending on the methodology design. Forty-three percent of the studies used video recording to evaluate group interaction; other studies used indirect approaches such as focus groups, interviews and questionnaires. Factors affecting group interaction included students' and tutors' perceptions, the tutor's subject-matter expertise, the training of students, and the tutor's handling of group dynamics. There was no conclusive evidence about the impact of interaction in PBL on learning. Most studies were from medicine (64%), and 35 papers were published in the last 10 years. The majority of studies were conducted in Europe, North America and Asia. Although there has been a progressive increase in publications on PBL group interaction during the last 10 years, there are knowledge gaps and deficiencies in this area; most studies lack a solid theoretical basis and are descriptive. There is a deficiency in the literature in this area from dentistry and other allied health disciplines. © 2014 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
A Discrete Group Search Optimizer for Hybrid Flowshop Scheduling Problem with Random Breakdown
Directory of Open Access Journals (Sweden)
Zhe Cui
2014-01-01
Full Text Available Scheduling problems have been discussed in the literature extensively under the assumption that machines are permanently available without any breakdown. However, in real manufacturing environments, machines are inevitably unavailable at times for many reasons. In this paper, the authors introduce the hybrid flowshop scheduling problem with random breakdown (RBHFS) together with a discrete group search optimizer algorithm (DGSO). In particular, two working cases, the preempt-resume case and the preempt-repeat case, are considered under random breakdown. The proposed DGSO algorithm adopts a vector representation and several discrete operators, such as insert, swap, differential evolution, destruction, and construction, in the producers, scroungers, and rangers phases. In addition, an orthogonal test is applied to configure the adjustable parameters in the DGSO algorithm. The computational results in both cases indicate that the proposed algorithm significantly improves performance compared with other high-performing algorithms in the literature.
Simplified neutrosophic sets and their applications in multi-criteria group decision-making problems
Peng, Juan-juan; Wang, Jian-qiang; Wang, Jing; Zhang, Hong-yu; Chen, Xiao-hong
2016-07-01
As a variation of fuzzy sets and intuitionistic fuzzy sets, neutrosophic sets have been developed to represent uncertain, imprecise, incomplete and inconsistent information that exists in the real world. Simplified neutrosophic sets (SNSs) have been proposed for the main purpose of addressing issues with a set of specific numbers. However, there are certain problems regarding the existing operations of SNSs, as well as their aggregation operators and the comparison methods. Therefore, this paper defines the novel operations of simplified neutrosophic numbers (SNNs) and develops a comparison method based on the related research of intuitionistic fuzzy numbers. On the basis of these operations and the comparison method, some SNN aggregation operators are proposed. Additionally, an approach for multi-criteria group decision-making (MCGDM) problems is explored by applying these aggregation operators. Finally, an example to illustrate the applicability of the proposed method is provided and a comparison with some other methods is made.
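The aggregate-then-score pipeline the abstract describes can be sketched as follows. This is an illustrative weighted arithmetic aggregation and a score function commonly seen in the simplified/single-valued neutrosophic literature, not necessarily the exact operators defined in this paper; all names are illustrative.

```python
from math import prod

def snn_weighted_average(snns, weights):
    """Weighted arithmetic aggregation of simplified neutrosophic numbers.

    Each SNN is a triple (t, i, f): truth, indeterminacy and falsity
    membership degrees, all in [0, 1]. Weights should sum to 1.
    """
    t = 1.0 - prod((1.0 - a[0]) ** w for a, w in zip(snns, weights))
    i = prod(a[1] ** w for a, w in zip(snns, weights))
    f = prod(a[2] ** w for a, w in zip(snns, weights))
    return (t, i, f)

def snn_score(snn):
    """Illustrative score function: higher means a better alternative."""
    t, i, f = snn
    return (2.0 + t - i - f) / 3.0
```

A useful sanity check on this aggregation is idempotency: averaging identical SNNs returns the same SNN, after which the score function induces a total order for ranking alternatives.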
A modified interactive procedure to solve multi-objective group decision making problem
Directory of Open Access Journals (Sweden)
Mohammad Izadikhah
2014-08-01
Full Text Available Multi-objective optimization and multiple criteria decision making involve designing the best alternative by simultaneously considering incommensurable and conflicting objectives. One of the first interactive procedures for solving multiple criteria decision making problems is the STEM method. In this paper we propose a modified interactive procedure, based on the STEM method, that calculates a weight vector of objectives which emphasizes that more important objectives should be closer to the ideal one. We use the AHP and TOPSIS methods to find these weights and develop a multi-objective group decision making procedure. The presented method thereby tries to increase the satisfactoriness of the obtained solution. Finally, a numerical example is given to illustrate the main results developed in this paper.
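The TOPSIS step used above, ranking alternatives by closeness to the ideal solution, can be sketched as follows. This is a minimal sketch that assumes all criteria are benefit criteria and that the weight vector is already given (the AHP step that produces it is omitted); the function name is illustrative.

```python
import numpy as np

def topsis_closeness(D, w):
    """Closeness coefficients for decision matrix D (alternatives x criteria).

    Vector-normalize each column, apply weights, then measure Euclidean
    distance to the ideal (column-wise best) and anti-ideal (worst) points.
    """
    N = D / np.linalg.norm(D, axis=0)   # vector normalization per criterion
    V = N * w                           # weighted normalized matrix
    ideal, anti = V.max(axis=0), V.min(axis=0)
    d_pos = np.linalg.norm(V - ideal, axis=1)
    d_neg = np.linalg.norm(V - anti, axis=1)
    return d_neg / (d_pos + d_neg)      # closer to 1 = closer to ideal
```

An alternative that dominates on every criterion gets closeness 1, the dominated one gets 0; symmetric trade-offs score 0.5.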
Win, Ni Ni; Nadarajah, Vishna Devi V; Win, Daw Khin
2015-01-01
Problem-based learning (PBL) is usually conducted in small-group learning sessions with approximately eight students per facilitator. In this study, we implemented a modified version of PBL involving collaborative groups in an undergraduate chiropractic program and assessed its pedagogical effectiveness. This study was conducted at the International Medical University, Kuala Lumpur, Malaysia, and involved the 2012 chiropractic student cohort. Six PBL cases were provided to chiropractic students: three PBL cases for which learning resources were provided and another three for which learning resources were not provided. Group discussions were not continuously supervised, since only one facilitator was present. The students' perceptions of PBL in collaborative groups were assessed with a questionnaire that was divided into three domains: motivation, cognitive skills, and perceived pressure to work. Thirty of the 31 students (97%) participated in the study. PBL in collaborative groups was significantly associated with positive responses regarding students' motivation, cognitive skills, and perceived pressure to work, and the provision of learning resources further increased motivation and cognitive skills compared with PBL without learning resources.
The role of self-help group in social inclusion of adults with mental health problems
Meglič, Maruša
2014-01-01
Mental health is the foundation of human activity at all levels, a part of health, and a basic human need. In my thesis I researched the importance of self-help groups for individuals with mental health problems in maintaining mental health, supporting the process of social inclusion, and reducing the effects of stigma. In the first part I explain the terms and concepts of mental health in relation to mental illness and mental disorders, and I cover the terms of stigma and the function of prejudices in the process of stig...
Li, Yanming; Nan, Bin; Zhu, Ji
2015-06-01
We propose a multivariate sparse group lasso variable selection and estimation method for data with high-dimensional predictors as well as high-dimensional response variables. The method is carried out through a penalized multivariate multiple linear regression model with an arbitrary group structure for the regression coefficient matrix. It suits many biology studies well in detecting associations between multiple traits and multiple predictors, with each trait and each predictor embedded in some biological functional groups such as genes, pathways or brain regions. The method is able to effectively remove unimportant groups as well as unimportant individual coefficients within important groups, particularly for large p small n problems, and is flexible in handling various complex group structures such as overlapping or nested or multilevel hierarchical structures. The method is evaluated through extensive simulations with comparisons to the conventional lasso and group lasso methods, and is applied to an eQTL association study. © 2015, The International Biometric Society.
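The core mechanism described above, removing whole groups of coefficients while shrinking the ones that survive, rests on the group soft-thresholding (proximal) operator. The sketch below shows that operator for non-overlapping groups; it is a minimal illustration, not the authors' full multivariate estimation algorithm, and the function name is illustrative.

```python
import numpy as np

def group_soft_threshold(b, groups, lam):
    """Proximal operator of the group lasso penalty lam * sum_g ||b_g||_2.

    Groups whose l2 norm does not exceed lam are zeroed out entirely
    (group-level selection); surviving groups are shrunk toward zero.
    """
    out = np.zeros_like(b, dtype=float)
    for g in groups:
        norm = np.linalg.norm(b[g])
        if norm > lam:
            out[g] = (1.0 - lam / norm) * b[g]
    return out
```

In a sparse group lasso, this group-level step is combined with an elementwise soft-threshold, which is what allows unimportant individual coefficients within important groups to be removed as well.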
3D overlapped grouping GA for optimum 2D guillotine cutting stock problem
Directory of Open Access Journals (Sweden)
Maged R. Rostom
2014-09-01
Full Text Available The cutting stock problem (CSP) is one of the significant optimization problems in operations research and has gained a lot of attention for increasing efficiency in industrial engineering, logistics and manufacturing. In this paper, new methodologies for optimally solving the cutting stock problem are presented. A modification is proposed to existing heuristic methods through a new hybrid 3D overlapped grouping genetic algorithm (GA) for nesting two-dimensional rectangular shapes. The objective is to minimize the wastage of sheet material, thereby maximizing material utilization, and to minimize setup time. The model and its results are compared with a real-life case study from a steel workshop in a bus manufacturing factory. The effectiveness of the proposed approach is shown by comparison and shop testing of the optimized cutting schedules. The results reveal its superiority in terms of waste minimization compared with the current cutting schedules. The whole procedure can be completed in a reasonable amount of time by the developed optimization program.
Directory of Open Access Journals (Sweden)
P. Brysiewicz
2002-09-01
Full Text Available Problem-based Learning is a learner-centered approach to education which encourages student participation and group work in the learning process. This method of self-directed learning is facilitated by the use of small-group discussions. This being the case, it is important for groups to function effectively in order for this learning to occur. These small groups are guided by a facilitator and utilize real-life problems from the clinical settings.
Using clinical experience in discussion within problem-based learning groups.
O'Neill, Paul; Duplock, Amanda; Willis, Sarah
2006-11-01
A key principle in problem-based learning (PBL) is the student linking learning from different sources to enrich understanding. We have explored how medical students based in a clinical environment use clinical experience within PBL groups. We recorded the discussion of 12 third-year groups, which were meeting for the second time on a PBL case, where students report back on the learning objectives. Discussions covering five separate PBL paper cases were recorded. Analysis of the transcripts was based on constant comparative method using a coding framework. The range of discussion segments of clinical experience was 2-15, with 9 of 12 groups having at least five separate segments. Our initial coding framework covered 10 categories, of which the most common were: a specific patient encounter (19%); an experience in the community (15%); and a personal health experience (15%). Students often used emotive phrases with 37 examples in the clinical experience segments compared with 9 from the longer non-clinical discussion. Most clinical descriptions triggered further discussion with almost half leading to some related medical topic. The discussion segments were subsequently coded into; 'confirming' (40); 'extending' (40); and 'disconfirming' (16) the understanding of the group for that topic. Discussion of clinical experience encouraged students to connect to the affective aspects of learning. It helped students to bridge between the tutorial and real clinical contexts. A clinical experience was often a powerful pivotal point, which confirmed, extended or refuted what was being discussed.
Two representations of a high-dimensional perceptual space.
Victor, Jonathan D; Rizvi, Syed M; Conte, Mary M
2017-08-01
A perceptual space is a mental workspace of points in a sensory domain that supports similarity and difference judgments and enables further processing such as classification and naming. Perceptual spaces are present across sensory modalities; examples include colors, faces, auditory textures, and odors. Color is perhaps the best-studied perceptual space, but it is atypical in two respects. First, the dimensions of color space are directly linked to the three cone absorption spectra, but the dimensions of generic perceptual spaces are not as readily traceable to single-neuron properties. Second, generic perceptual spaces have more than three dimensions. This is important because representing each distinguishable point in a high-dimensional space by a separate neuron or population is unwieldy; combinatorial strategies may be needed to overcome this hurdle. To study the representation of a complex perceptual space, we focused on a well-characterized 10-dimensional domain of visual textures. Within this domain, we determine perceptual distances in a threshold task (segmentation) and a suprathreshold task (border salience comparison). In N=4 human observers, we find both quantitative and qualitative differences between these sets of measurements. Quantitatively, observers' segmentation thresholds were inconsistent with their uncertainty determined from border salience comparisons. Qualitatively, segmentation thresholds suggested that distances are determined by a coordinate representation with Euclidean geometry. Border salience comparisons, in contrast, indicated a global curvature of the space, and that distances are determined by activity patterns across broadly tuned elements. Thus, our results indicate two representations of this perceptual space, and suggest that they use differing combinatorial strategies. To move from sensory signals to decisions and actions, the brain carries out a sequence of transformations. An important stage in this process is the
A fast PC algorithm for high dimensional causal discovery with multi-core PCs.
Le, Thuc; Hoang, Tao; Li, Jiuyong; Liu, Lin; Liu, Huawen; Hu, Shu
2016-07-14
Discovering causal relationships from observational data is a crucial problem with applications in many research areas. The PC algorithm is the state-of-the-art constraint-based method for causal discovery. However, the runtime of the PC algorithm is, in the worst case, exponential in the number of nodes (variables), and thus it is inefficient when applied to high-dimensional data, e.g. gene expression datasets. On another note, the advancement of computer hardware in the last decade has resulted in the widespread availability of multi-core personal computers. There is a significant motivation for designing a parallelised PC algorithm that is suitable for personal computers and does not require end users' parallel computing knowledge beyond their competency in using the PC algorithm. In this paper, we develop parallel-PC, a fast and memory-efficient PC algorithm using the parallel computing technique. We apply our method to a range of synthetic and real-world high-dimensional datasets. Experimental results on a dataset from the DREAM 5 challenge show that the original PC algorithm could not produce any results after running more than 24 hours; meanwhile, our parallel-PC algorithm managed to finish within around 12 hours with a 4-core CPU computer, and less than 6 hours with an 8-core CPU computer. Furthermore, we integrate parallel-PC into a causal inference method for inferring miRNA-mRNA regulatory relationships. The experimental results show that parallel-PC helps improve both the efficiency and accuracy of the causal inference algorithm.
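The parallelization idea, distributing the many independence tests of the PC skeleton phase across workers, can be sketched at the simplest level: the order-0 tests, here approximated by marginal correlation magnitude. The real parallel-PC algorithm performs proper conditional-independence tests at increasing orders and uses process-level parallelism; the names and the threshold-based "test" below are illustrative assumptions.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor
from itertools import combinations

def _abs_corr(data, i, j):
    # Order-0 "independence test": magnitude of the marginal correlation.
    return i, j, abs(np.corrcoef(data[:, i], data[:, j])[0, 1])

def skeleton_level0(data, threshold=0.1, workers=4):
    """Run all pairwise tests concurrently; keep edges whose correlation
    magnitude exceeds the threshold (i.e. pairs not judged independent)."""
    pairs = list(combinations(range(data.shape[1]), 2))
    with ThreadPoolExecutor(max_workers=workers) as ex:
        results = ex.map(lambda p: _abs_corr(data, *p), pairs)
    return {(i, j) for i, j, r in results if r > threshold}
```

Because the tests at a given order are independent of one another, they parallelize embarrassingly well, which is the source of the speed-ups reported in the abstract.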
2D-EM clustering approach for high-dimensional data through folding feature vectors.
Sharma, Alok; Kamola, Piotr J; Tsunoda, Tatsuhiko
2017-12-28
Clustering methods are becoming widely utilized in biomedical research, where the volume and complexity of data is rapidly increasing. Unsupervised clustering of patient information can reveal distinct phenotype groups with different underlying mechanisms, risk prognoses and treatment responses. However, biological datasets are usually characterized by a combination of low sample number and very high dimensionality, something that is not adequately addressed by current algorithms. While the performance of existing methods is satisfactory for low-dimensional data, an increasing number of features results in either deterioration of accuracy or inability to cluster. To tackle these challenges, new methodologies designed specifically for such data are needed. We present 2D-EM, a clustering algorithm designed for small-sample-size, high-dimensional datasets. To employ information corresponding to the data distribution and facilitate visualization, each sample is folded into its two-dimensional (2D) matrix form (or feature matrix). The maximum likelihood estimate is then obtained using a modified expectation-maximization (EM) algorithm. The 2D-EM methodology was benchmarked against several existing clustering methods using 6 medically relevant transcriptome datasets. The percentage improvement of Rand score and adjusted Rand index compared to the best performing alternative method is up to 21.9% and 155.6%, respectively. To present the general utility of the 2D-EM method we also employed 2 methylome datasets, again showing superior performance relative to established methods. The 2D-EM algorithm was able to reproduce the groups in transcriptome and methylome data with high accuracy. This builds confidence in the method's ability to uncover novel disease subtypes in new datasets. The design of the 2D-EM algorithm enables it to handle a diverse set of challenging biomedical datasets and cluster with higher accuracy than established methods. MATLAB implementation of the tool can be
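The folding step described above, turning each length-d feature vector into a 2D feature matrix before the EM stage, can be sketched as follows. The zero-padding convention for when d is not divisible by the chosen number of rows is an assumption for illustration, as is the function name.

```python
import numpy as np

def fold_feature_vector(x, n_rows):
    """Fold a 1-D feature vector into an (n_rows x n_cols) matrix,
    zero-padding the tail when len(x) is not a multiple of n_rows."""
    n_cols = int(np.ceil(len(x) / n_rows))
    padded = np.zeros(n_rows * n_cols)
    padded[: len(x)] = x
    return padded.reshape(n_rows, n_cols)
```

Each folded sample can then be treated as a small matrix-valued observation, which is what lets the modified EM step exploit structure across both axes of the feature matrix.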
Observational analysis of near-peer and faculty tutoring in problem-based learning groups.
Cianciolo, Anna T; Kidd, Bryan; Murray, Sean
2016-07-01
Near-peer and faculty staff tutors may facilitate problem-based learning (PBL) through different means. Near-peer tutors are thought to compensate for their lack of subject matter expertise with greater adeptness at group facilitation and a better understanding of their learners. However, theoretical explanations of tutor effectiveness have been developed largely from recollections of tutor practices gathered through student evaluation surveys, focus groups and interviews. A closer look at what happens during PBL sessions tutored by near-peers and faculty members seems warranted to augment theory from a grounded perspective. We conducted an observational study to explore interactional practices during PBL tutorials at our medical school, at which near-peer tutoring of Year 2 students is an established practice. Between October 2014 and May 2015, video-recordings were made of nine purposively sampled tutor groups using three tutor types (near-peer, clinical faculty and basic science faculty staff) across three systems-based units. An investigator team comprising a Year 2 student, a Year 4 student and a behavioural scientist independently analysed the videos until their observations reached saturation and then met face to face to discuss their detailed field notes. Through constant comparison, narratives of tutor practices and group dynamics were generated for each of the nine tutor groups, representing the collective impressions of the members of the investigator team. Variation was greater within than across tutor types. Tutors' practices idiosyncratically and sometimes substantially diverged from PBL principles, yet all tutors attempted to convey authority or 'insider' status with respect to the short- and long-term goals of medical education. Students prompted these status demonstrations by expressing gratitude, asking questions and exhibiting analogous status demonstrations themselves. Understanding the socio-cognitive nature of tutoring from a grounded
Energy Technology Data Exchange (ETDEWEB)
Hyman, J.; Beyer, W.; Louck, J.; Metropolis, N.
1996-07-01
This is the final report of a three-year, Laboratory-Directed Research and Development (LDRD) project at the Los Alamos National Laboratory (LANL). Group theoretical methods are a powerful tool both in their applications to mathematics and to physics. The broad goal of this project was to use such methods to develop the implications of group (symmetry) structures underlying models of physical systems, as well as to broaden the understanding of simple models of chaotic systems. The main thrust was to develop further the complex mathematics that enters into many-particle quantum systems with special emphasis on the new directions in applied mathematics that have emerged and continue to surface in these studies. In this area, significant advances in understanding the role of SU(2) 3nj-coefficients in SU(3) theory have been made and in using combinatoric techniques in the study of generalized Schur functions, discovered during this project. In the context of chaos, the study of maps of the interval and the associated theory of words has led to significant discoveries in Galois group theory, to the classification of fixed points, and to the solution of a problem in the classification of DNA sequences.
Problem-based learning: facilitating multiple small teams in a large group setting.
Hyams, Jennifer H; Raidal, Sharanne L
2013-01-01
Problem-based learning (PBL) is often described as resource demanding due to the high staff-to-student ratio required in a traditional PBL tutorial class where there is commonly one facilitator to every 5-16 students. The veterinary science program at Charles Sturt University, Australia, has developed a method of group facilitation which readily allows one or two staff members to facilitate up to 30 students at any one time while maintaining the benefits of a small PBL team of six students. Multi-team facilitation affords obvious financial and logistic advantages, but there are also important pedagogical benefits derived from uniform facilitation across multiple groups, enhanced discussion and debate between groups, and the development of self-facilitation skills in students. There are few disadvantages to the roaming facilitator model, provided that several requirements are addressed. These requirements include a suitable venue, large whiteboards, a structured approach to support student engagement with each disclosure, a detailed facilitator guide, and an open, collaborative, and communicative environment.
Visualizing High-Dimensional Structures by Dimension Ordering and Filtering using Subspace Analysis
Ferdosi, Bilkis J.; Roerdink, Jos B. T. M.
2011-01-01
High-dimensional data visualization is receiving increasing interest because of the growing abundance of high-dimensional datasets. To understand such datasets, visualization of the structures present in the data, such as clusters, can be an invaluable tool. Structures may be present in the full
New moves-preventing weight-related problems in adolescent girls a group-randomized study.
Neumark-Sztainer, Dianne R; Friend, Sarah E; Flattum, Colleen F; Hannan, Peter J; Story, Mary T; Bauer, Katherine W; Feldman, Shira B; Petrich, Christine A
2010-11-01
Weight-related problems are prevalent in adolescent girls. To evaluate New Moves, a school-based program aimed at preventing weight-related problems in adolescent girls. School-based group-randomized controlled design. 356 girls (mean age=15.8±1.2 years) from six intervention and six control high schools. More than 75% of the girls were racial/ethnic minorities and 46% were overweight or obese. Data were collected in 2007-2009 and analyzed in 2009-2010. An all-girls physical education class, supplemented with nutrition and self-empowerment components, individual sessions using motivational interviewing, lunch meetings, and parent outreach. Percentage body fat, BMI, physical activity, sedentary activity, dietary intake, eating patterns, unhealthy weight control behaviors, and body/self-image. New Moves did not lead to significant changes in the girls' percentage body fat or BMI but improvements were seen for sedentary activity, eating patterns, unhealthy weight control behaviors, and body/self-image. For example, in comparison to control girls, at 9-month follow-up, intervention girls decreased their sedentary behaviors by approximately one 30-minute block a day (p=0.050); girls increased their portion control behaviors (p=0.014); the percentage of girls using unhealthy weight control behaviors decreased by 13.7% (p=0.021); and improvements were seen in body image (p=0.045) and self-worth (p=0.031). Additionally, intervention girls reported more support by friends, teachers, and families for healthy eating and physical activity. New Moves provides a model for addressing the broad spectrum of weight-related problems among adolescent girls. Further work is needed to enhance the effectiveness of interventions to improve weight status of youth. Copyright © 2010 American Journal of Preventive Medicine. Published by Elsevier Inc. All rights reserved.
Urbanoski, Karen; van Mierlo, Trevor; Cunningham, John
2016-08-22
This study contributes to emerging literature on online health networks by modeling communication patterns between members of a moderated online support group for problem drinking. Using social network analysis, we described members' patterns of joint participation in threads, parsing out the role of site moderators, and explored differences in member characteristics by network position. Posts made to the online support group of Alcohol Help Centre during 2013 were structured as a two-mode network of members (n = 205) connected via threads (n = 506). Metrics included degree centrality, clique membership, and tie strength. The network consisted of one component and no cliques of members, although most made few posts and a small number communicated only with the site's moderators. Highly active members were older and tended to have started posting prior to 2013. The distribution of members across threads varied from threads containing posts by one member to others that connected multiple members. Moderators accounted for sizable proportions of the connectivity between both members and threads. After 5 years of operation, the AHC online support group appears to be fairly cohesive and stable, in the sense that there were no isolated subnetworks comprised of specific types of members or devoted to specific topics. Participation and connectedness at the member-level was varied, however, and tended to be low on average. The moderators were among the most central in the network, although there were also members who emerged as central and dedicated contributors to the online discussions across topics. Study findings highlight a number of areas for consideration by online support group developers and managers.
Robust and sparse correlation matrix estimation for the analysis of high-dimensional genomics data.
Serra, Angela; Coretto, Pietro; Fratello, Michele; Tagliaferri, Roberto
2017-10-12
Microarray technology can be used to study the expression of thousands of genes across a number of different experimental conditions, usually hundreds. The underlying principle is that genes sharing similar expression patterns across different samples can be part of the same co-expression system, or they may share the same biological functions. Groups of genes are usually identified based on cluster analysis. Clustering methods rely on the similarity matrix between genes; a common choice is to compute the sample correlation matrix. Dimensionality reduction is another popular data analysis task which is also based on covariance/correlation matrix estimates. Unfortunately, covariance/correlation matrix estimation suffers from the intrinsic noise present in high-dimensional data. Sources of noise are sampling variation, the presence of outlying sample units, and the fact that in most cases the number of units is much smaller than the number of genes. In this paper we propose a robust correlation matrix estimator that is regularized based on adaptive thresholding. The resulting method jointly tames the effects of high dimensionality and data contamination. Computations are easy to implement and do not require hand tuning. Both simulated and real data are analysed. A Monte Carlo experiment shows that the proposed method is capable of remarkable performance. Our correlation metric is more robust to outliers compared with the existing alternatives in two gene expression data sets. It is also shown how the regularization allows spurious correlations to be automatically detected and filtered. The same regularization is also extended to other less robust correlation measures. Finally, we apply the ARACNE algorithm on the SyNTreN gene expression data. Sensitivity and specificity of the reconstructed network is compared with the gold standard. We show that ARACNE performs better when it takes the proposed correlation matrix estimator as input. The R
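A bare-bones version of the regularization idea, thresholding small sample correlations to zero, can be sketched as below. Note this sketch uses a fixed hard threshold and the ordinary Pearson correlation; the paper's estimator is both robust (to outlying units) and adaptive (entrywise thresholds), which this illustration is not.

```python
import numpy as np

def thresholded_correlation(X, tau):
    """Sample correlation of X (samples x variables), with entries whose
    magnitude falls below tau set to zero; the diagonal is kept at 1."""
    R = np.corrcoef(X, rowvar=False)
    R_t = np.where(np.abs(R) >= tau, R, 0.0)
    np.fill_diagonal(R_t, 1.0)
    return R_t
```

Zeroing weak entries suppresses the spurious correlations that pure sampling noise produces when the number of variables far exceeds the number of samples, at the cost of also dropping genuinely weak dependencies below the cut-off.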
Ren, Chao; He, Xiaohai; Nguyen, Truong Q
2017-01-01
Single image super-resolution (SR) is very important in many computer vision systems. However, since SR is a highly ill-posed problem, its performance mainly relies on prior knowledge. Among these priors, the non-local total variation (NLTV) prior is very popular and has been thoroughly studied in recent years. Nevertheless, technical challenges remain. Because NLTV exploits only a fixed, non-shifted target patch in the patch search process, a lack of similar patches is inevitable in some cases. Thus, the non-local similarity cannot be fully characterized, and the effectiveness of NLTV cannot be ensured. Based on the motivation that more accurate non-local similar patches can be found by using shifted target patches, a novel multishifted similar-patch search (MSPS) strategy is proposed. With this strategy, NLTV is extended into a newly proposed super-high-dimensional NLTV (SHNLTV) prior to fully exploit the underlying non-local similarity. However, as SHNLTV is very high-dimensional, applying it directly to SR is very difficult. To solve this problem, a novel statistics-based dimension reduction strategy is proposed and applied to SHNLTV. Thus, SHNLTV becomes a more computationally effective prior that we call adaptive high-dimensional non-local total variation (AHNLTV). In AHNLTV, a novel joint weight strategy that fully exploits the potential of the MSPS-based non-local similarity is proposed. To further boost the performance of AHNLTV, the adaptive geometric duality (AGD) prior is also incorporated. Finally, an efficient split Bregman iteration-based algorithm is developed to solve the AHNLTV-AGD-driven minimization problem. Extensive experiments validate that the proposed method achieves better results than many state-of-the-art SR methods in terms of both objective and subjective quality.
Statistical Machine Learning for Structured and High Dimensional Data
2014-09-17
...data analysis problems. In particular, we have been working with data from the Kepler telescope for finding exoplanets orbiting distant stars. The
Brysiewicz, P; Cassimjee, R; McInerney, P
2002-11-01
Problem-based Learning is a learner-centered approach to education which encourages student participation and group work in the learning process. This method of self-directed learning is facilitated by the use of small-group discussions. This being the case, it is important for groups to function effectively in order for this learning to occur. These small groups are guided by a facilitator and utilize real-life problems from the clinical settings. An exploratory survey using open-ended questionnaires was conducted amongst senior nursing students at the University of Natal, and this focussed on their experiences of group work. The students described their best and worst experiences as a member of a group, as well as what they found most irritating and most appreciated about the group work. The students also highlighted what they expected from the group and in turn what they were willing to do for the group.
McEvoy, Peter M; Burgess, Melissa M; Nathan, Paula
2013-09-05
Interpersonal functioning is a key determinant of psychological well-being, and interpersonal problems (IPs) are common among individuals with psychiatric disorders. However, IPs are rarely formally assessed in clinical practice or within cognitive behavior therapy research trials as predictors of treatment attrition and outcome. The main aim of this study was to investigate the relationship between IPs, depressogenic cognitions, and treatment outcome in a large clinical sample receiving cognitive behavioral group therapy (CBGT) for depression in a community clinic. Patients (N=144) referred for treatment completed measures of IPs, negative cognitions, depression symptoms, and quality of life (QoL) before and at the completion of a 12-week manualized CBGT protocol. Two IPs at pre-treatment, 'finding it hard to be supportive of others' and 'not being open about problems,' were associated with higher attrition. Pre-treatment IPs also predicted higher post-treatment depression symptoms (but not QoL) after controlling for pre-treatment symptoms, negative cognitions, demographics, and comorbidity. In particular, 'difficulty being assertive' and a 'tendency to subjugate one's needs' were associated with higher post-treatment depression symptoms. Changes in IPs did not predict post-treatment depression symptoms or QoL when controlling for changes in negative cognitions, pre-treatment symptoms, demographics, and comorbidity. In contrast, changes in negative cognitions predicted both post-treatment depression and QoL, even after controlling for changes in IPs and the other covariates. Limitations include the correlational design and potential attrition bias; generalizability to other disorders and treatments needs to be evaluated. Pre-treatment IPs may increase risk of dropout and predict poorer outcomes, but changes in negative cognitions during treatment were most strongly associated with improvement in symptoms and QoL during CBGT. Copyright © 2013 Elsevier B.V. All rights reserved.
Outlier Detection via Online Oversampling in High Dimensional space
Kavya M Menon; G. Sakthi
2014-01-01
Anomaly detection is an important topic in data mining. Many applications, such as intrusion or credit card fraud detection, require efficient methods to identify deviated data instances. However, most anomaly detection methods are implemented in batch mode and thus cannot be easily extended to large-scale problems without sacrificing computation and memory requirements. This paper proposes an online oversampling principal component analysis (osPCA) algorithm and aims to detect the presen...
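The oversampling idea behind osPCA can be illustrated with a small pure-Python sketch (an illustrative reconstruction, not the authors' implementation; the duplication count and the 1 − |cos| score are assumptions): duplicate a target instance several times and measure how much the leading principal direction, obtained by power iteration, rotates. An outlier perturbs the direction far more than an inlier does.

```python
def leading_direction(X, iters=200):
    """First principal direction of row data X via power iteration (pure Python)."""
    n, d = len(X), len(X[0])
    mean = [sum(r[j] for r in X) / n for j in range(d)]
    # Sample covariance matrix (divided by n for simplicity).
    C = [[sum((r[a] - mean[a]) * (r[b] - mean[b]) for r in X) / n
          for b in range(d)] for a in range(d)]
    v = [1.0] * d
    for _ in range(iters):
        w = [sum(C[a][b] * v[b] for b in range(d)) for a in range(d)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v

def outlier_score(X, i, dup=10):
    """osPCA-style score: duplicate point i `dup` times and report how much
    the leading principal direction rotates (1 - |cosine similarity|)."""
    v0 = leading_direction(X)
    v1 = leading_direction(X + [X[i]] * dup)
    cos = abs(sum(a * b for a, b in zip(v0, v1)))
    return 1.0 - cos
```

For points spread along one axis plus a single off-axis outlier, the outlier's score is much larger than an inlier's, which is the signal the online variant tracks incrementally.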
Crone, Mathilde R.; Bekkema, Nienke; Wiefferink, Carin H.; Reijneveld, Sijmen A.
2010-01-01
Objective: To assess the effectiveness of child health care professionals (CHP) in identifying psychosocial problems among children originating from industrialized and nonindustrialized countries, and to assess whether parental concerns enhance CHP problem-identification. Study design: During routine
Maes, Marlies; Stevens, Gonneke W. J. M.; Verkuijten, Maykel
2014-01-01
Previous research has identified ethnic group identification as a moderator in the relationship between perceived ethnic discrimination and problem behaviors in ethnic minority children. However, little is known about the influence of religious and host national identification on this relationship.
Zimring, James C; Welniak, Lis; Semple, John W; Ness, Paul M; Slichter, Sherrill J; Spitalnik, Steven L
2011-02-01
In April 2010, a working group sponsored by the National Heart, Lung, and Blood Institute was assembled to identify research strategies to improve our understanding of alloimmunization caused by the transfusion of allogeneic blood components and to evaluate potential approaches to both reduce its occurrence and manage its effects. Significant sequelae of alloimmunization were discussed and identified, including difficulties in maintaining chronic transfusion of red blood cells and platelets, hemolytic disease of the newborn, neonatal alloimmune thrombocytopenia, and rejection of transplanted cells and tissues. The discussions resulted in a consensus that identified key areas of future research and developmental areas, including genetic and epigenetic recipient factors that regulate alloimmunization, biochemical specifics of transfused products that affect alloimmunization, and novel technologies for high-throughput genotyping to facilitate extensive and efficient antigen matching between donor and recipient. Additional areas of importance included analysis of unappreciated medical sequelae of alloimmunization, such as cellular immunity and its effect upon transplant and autoimmunity. In addition, support for research infrastructure was discussed, with an emphasis on encouraging collaboration and synergy of animal models biology and human clinical research. Finally, training future investigators was identified as an area of importance. In aggregate, this communication provides a synopsis of the opinions of the working group on the above issues and presents both a list of suggested priorities and the rationale for the topics of focus. The areas of research identified in this report represent potential fertile ground for the medical advancement of preventing and managing alloimmunization in its different forms and mitigating the clinical problems it presents to multiple patient populations. © 2011 American Association of Blood Banks.
Das Carlo, Mandira; Swadi, Harith; Mpofu, Debbie
2003-01-01
The popularization of problem-based learning (PBL) has drawn attention to the motivational and cognitive skills necessary for medical students in group learning. This study identifies the effect of motivational and cognitive factors on the productivity of PBL tutorial groups. A self-administered questionnaire was completed by 115 students at the end of PBL tutorials for 4 themes. The questionnaire explored student perceptions of the effects of motivation, cohesion, sponging, withdrawal, interaction, and elaboration on group productivity. We further analyzed (a) differences in perceptions between male and female students, (b) the effect of "problems," and (c) the effect of student progress over time on group productivity. There were linear relations between a tutorial group's success and the factors studied. Significant differences were noted between male and female student groups. Students and tutors need to recognize symptoms of ineffective PBL groups. Our study emphasizes the need to take cultural issues into account in setting ground rules for PBL tutorials.
Mohammadi, M; B Ghalebaghi; MF Ghaleh Bandi; E Amintehrani; Sh Khodaie; Sh Shoaee; MR Ashrafi
2007-01-01
Objective: To describe sleep patterns and sleep problems among preschool- and school-aged children in a primary care setting in Iran. Material & Methods: This cross-sectional study was conducted in two primary care pediatric clinics in Tehran from March 2006 to September 2006. Findings: Sleep patterns of 215 children were studied (101 in the preschool age group, 2-6 years old, and 114 in the primary school age group, 7-12 years old). Sleep problems were common in the study group, as follows: bed...
Caetano, Raul; Mills, Britain A
2011-07-01
The "prevention paradox," a notion that most alcohol-related problems are generated by nonheavy drinkers, has significant relevance to public health policy and prevention efforts. The extent of the paradox has driven debate over the type of balance that should be struck between alcohol policies targeting a select group of high-risk drinkers versus more global approaches that target the population at-large. This paper examines the notion that most alcohol problems among 4 Hispanic national groups in the United States are attributable to moderate drinkers. A general population survey employing a multistage cluster sample design, with face-to-face interviews in respondents' homes was conducted in 5 metropolitan areas of the United States. Study participants included a total of 2,773 current drinkers 18 years and older. Alcohol consumed in the past year (bottom 90% vs. top 10%), binge drinking (binge vs. no binge), and a 4-way grouping defined by volume and binge criteria were used. Alcohol-related harms included 14 social and dependence problems. Drinkers at the bottom 90% of the distribution are responsible for 56 to 73% of all social problems, and for 55 to 73% of all dependence-related problems reported, depending on Hispanic national group. Binge drinkers are responsible for the majority of the social problems (53 to 75%) and dependence-related problems (59 to 73%), also depending on Hispanic national group. Binge drinkers at the bottom 90% of the distribution are responsible for a larger proportion of all social and dependence-related problems reported than those at the top 10% of the volume distribution. Cuban Americans are an exception. The prevention paradox holds when using volume-based risk groupings and disappears when using a binge-drinking risk grouping. Binge drinkers who drink moderately on an average account for more harms than those who drink heavily across all groups, with exception of Cuban Americans. Copyright © 2011 by the Research Society on
Variance inflation in high dimensional Support Vector Machines
DEFF Research Database (Denmark)
Abrahamsen, Trine Julie; Hansen, Lars Kai
2013-01-01
is not the full input space. Hence, when applying the model to future data the model is effectively blind to the missed orthogonal subspace. This can lead to an inflated variance of hidden variables estimated in the training set and when the model is applied to test data we may find that the hidden variables...... follow a different probability law with less variance. While the problem and basic means to reconstruct and deflate are well understood in unsupervised learning, the case of supervised learning is less well understood. We here investigate the effect of variance inflation in supervised learning including...
Cooperation and Conflict: Faction Problem of Western Medicine Group in Modern China
Directory of Open Access Journals (Sweden)
Jeongeun JO
2016-08-01
Medicine Group doctors for China to timely respond to the rapidly increased demand. However, a conflict over the promotion of hygiene administration and the unification and organization of medical education did not end. This conflict deepened as the Nanjing nationalist government promoted sanitary administration. It was the Britain - America faction who seized a chance of victory, because figures from the Britain - America faction held important positions in the hygiene department. Of course, some related to the National Medical and Pharmaceutical Association of China were also involved in the hygiene department; however, most took charge of simple technical tasks, not having a significant impact on hygiene administration. To solve the problem of factions within the Western Medicine Group, the Britain - America faction or the Germany - Japan faction had to arrange the education system with a strong power, or to organize a new association mixing the two factions, as in the Chinese faction (zhonghuapai). But the effort of the Britain - America faction to unify the systems of medical schools did not reach the Germany - Japan faction's medical schools. Additionally, from 1928, executives of the two Chinese medical associations discussed their merger; however, they could not agree because of the practitioners' interests involved. Substantially, the conflict between factions of the Western Medicine Group continued even until the mid-1930s. This implies that the Chinese government of the time lacked the capacity to unite and organize the medical community.
Cooperation and Conflict: Faction Problem of Western Medicine Group in Modern China.
Jo, Jeongeun
2016-08-01
for China to timely respond to the rapidly increased demand. However, a conflict over the promotion of hygiene administration and the unification and organization of medical education did not end. This conflict deepened as the Nanjing nationalist government promoted sanitary administration. It was the Britain - America faction who seized a chance of victory, because figures from the Britain - America faction held important positions in the hygiene department. Of course, some related to the National Medical and Pharmaceutical Association of China were also involved in the hygiene department; however, most took charge of simple technical tasks, not having a significant impact on hygiene administration. To solve the problem of factions within the Western Medicine Group, the Britain - America faction or the Germany - Japan faction had to arrange the education system with a strong power, or to organize a new association mixing the two factions, as in the Chinese faction (zhonghuapai). But the effort of the Britain - America faction to unify the systems of medical schools did not reach the Germany - Japan faction's medical schools. Additionally, from 1928, executives of the two Chinese medical associations discussed their merger; however, they could not agree because of the practitioners' interests involved. Substantially, the conflict between factions of the Western Medicine Group continued even until the mid-1930s. This implies that the Chinese government of the time lacked the capacity to unite and organize the medical community.
Directory of Open Access Journals (Sweden)
Amalia Putri Wijayanti
2016-05-01
This study aimed to compare the Group Investigation model with Multiple Intelligences-based Problem Based Learning in measuring problem-solving ability. It was a quasi-experimental study with a non-equivalent group design. The subjects were grade XI social sciences (IPS) students of SMA Negeri 1 Kota Batu in the second semester of the 2015/2016 academic year. Problem-solving ability was assessed using pretest and posttest instruments that had been validated and tested for reliability. Analysis used two-way ANOVA with SPSS 16.0 for Windows. The results showed a difference in problem-solving ability between Group Investigation and Multiple Intelligences-based Problem Based Learning: the average learning outcome for Group Investigation was 4.2 points higher than for Problem Based Learning. It can be concluded that using the Multiple Intelligences-based Group Investigation model can encourage students to improve their learning outcomes.
Simulation-based hypothesis testing of high dimensional means under covariance heterogeneity.
Chang, Jinyuan; Zheng, Chao; Zhou, Wen-Xin; Zhou, Wen
2017-12-01
In this article, we study the problem of testing the mean vectors of high-dimensional data in both one-sample and two-sample cases. The proposed testing procedures employ maximum-type statistics and parametric bootstrap techniques to compute the critical values. Unlike existing tests that rely heavily on structural conditions on the unknown covariance matrices, the proposed tests allow general covariance structures of the data and therefore enjoy wide applicability in practice. To enhance the power of the tests against sparse alternatives, we further propose two-step procedures with a preliminary feature-screening step. Theoretical properties of the proposed tests are investigated. Through extensive numerical experiments on synthetic data sets and a human acute lymphoblastic leukemia gene expression data set, we illustrate the performance of the new tests and how they may assist in detecting disease-associated gene sets. The proposed methods have been implemented in an R package, HDtest, available on CRAN. © 2017, The International Biometric Society.
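The maximum-type statistic with bootstrap critical values can be sketched in a few lines of pure Python. This is an illustrative variant using a Gaussian multiplier bootstrap in the same spirit as the paper's parametric bootstrap, not the HDtest implementation; the statistic, bootstrap size, and level are assumptions for the sketch.

```python
import random

def max_mean_test(X, n_boot=500, alpha=0.05, seed=0):
    """One-sample max-type test of H0: mean vector = 0.
    Statistic: T = max_j |sqrt(n) * mean_j|; critical value from a
    Gaussian multiplier bootstrap on the centered data."""
    rng = random.Random(seed)
    n, p = len(X), len(X[0])
    means = [sum(row[j] for row in X) / n for j in range(p)]
    T = max(abs(n ** 0.5 * m) for m in means)
    centered = [[X[i][j] - means[j] for j in range(p)] for i in range(n)]
    boot = []
    for _ in range(n_boot):
        e = [rng.gauss(0.0, 1.0) for _ in range(n)]  # multiplier weights
        Tb = max(abs(sum(e[i] * centered[i][j] for i in range(n)) / n ** 0.5)
                 for j in range(p))
        boot.append(Tb)
    boot.sort()
    crit = boot[int((1 - alpha) * n_boot) - 1]  # empirical (1-alpha) quantile
    return T, crit, T > crit
```

With a clear mean shift in one coordinate the test rejects; with zero means it does not, regardless of the (here unstructured) covariance.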
Städler, Nicolas; Dondelinger, Frank; Hill, Steven M; Akbani, Rehan; Lu, Yiling; Mills, Gordon B; Mukherjee, Sach
2017-09-15
Molecular pathways and networks play a key role in basic and disease biology. An emerging notion is that networks encoding patterns of molecular interplay may themselves differ between contexts, such as cell type, tissue or disease (sub)type. However, while statistical testing of differences in mean expression levels has been extensively studied, testing of network differences remains challenging. Furthermore, since network differences could provide important and biologically interpretable information to identify molecular subgroups, there is a need to consider the unsupervised task of learning subgroups and networks that define them. This is a nontrivial clustering problem, with neither subgroups nor subgroup-specific networks known at the outset. We leverage recent ideas from high-dimensional statistics for testing and clustering in the network biology setting. The methods we describe can be applied directly to most continuous molecular measurements, and networks do not need to be specified beforehand. We illustrate the ideas and methods in a case study using protein data from The Cancer Genome Atlas (TCGA). This provides evidence that patterns of interplay between signalling proteins differ significantly between cancer types. Furthermore, we show how the proposed approaches can be used to learn subtypes and the molecular networks that define them. The approach is available as the Bioconductor package nethet. Contact: staedler.n@gmail.com or sach.mukherjee@dzne.de. Supplementary data are available at Bioinformatics online.
A multistage mathematical approach to automated clustering of high-dimensional noisy data
Friedman, Alexander; Keselman, Michael D.; Gibb, Leif G.; Graybiel, Ann M.
2015-01-01
A critical problem faced in many scientific fields is the adequate separation of data derived from individual sources. Often, such datasets require analysis of multiple features in a highly multidimensional space, with overlap of features and sources. The datasets generated by simultaneous recording from hundreds of neurons emitting phasic action potentials have produced the challenge of separating the recorded signals into independent data subsets (clusters) corresponding to individual signal-generating neurons. Mathematical methods have been developed over the past three decades to achieve such spike clustering, but a complete solution with fully automated cluster identification has not been achieved. We propose here a fully automated mathematical approach that identifies clusters in multidimensional space through recursion, which combats the multidimensionality of the data. Recursion is paired with an approach to dimensional evaluation, in which each dimension of a dataset is examined for its informational importance for clustering. The dimensions offering greater informational importance are given added weight during recursive clustering. To combat strong background activity, our algorithm takes an iterative approach of data filtering according to a signal-to-noise ratio metric. The algorithm finds cluster cores, which are thereafter expanded to include complete clusters. This mathematical approach can be extended from its prototype context of spike sorting to other datasets that suffer from high dimensionality and background activity. PMID:25831512
Anomaly Detection in Large Sets of High-Dimensional Symbol Sequences
Budalakoti, Suratna; Srivastava, Ashok N.; Akella, Ram; Turkov, Eugene
2006-01-01
This paper addresses the problem of detecting and describing anomalies in large sets of high-dimensional symbol sequences. The approach taken uses unsupervised clustering of sequences using the normalized longest common subsequence (LCS) as a similarity measure, followed by detailed analysis of outliers to detect anomalies. As the LCS measure is expensive to compute, the first part of the paper discusses existing algorithms, such as the Hunt-Szymanski algorithm, that have low time-complexity. We then discuss why these algorithms often do not work well in practice and present a new hybrid algorithm for computing the LCS that, in our tests, outperforms the Hunt-Szymanski algorithm by a factor of five. The second part of the paper presents new algorithms for outlier analysis that provide comprehensible indicators as to why a particular sequence was deemed to be an outlier. The algorithms provide a coherent description to an analyst of the anomalies in the sequence, compared to more normal sequences. The algorithms we present are general and domain-independent, so we discuss applications in related areas such as anomaly detection.
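The normalized LCS similarity at the core of the clustering step can be sketched in a few lines of Python. This uses the standard O(mn) dynamic program rather than the Hunt-Szymanski-style algorithms the paper analyzes, and the square-root normalization is one common convention, assumed here rather than taken from the paper.

```python
def lcs_length(a, b):
    """Length of the longest common subsequence via dynamic programming."""
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if a[i - 1] == b[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[m][n]

def normalized_lcs(a, b):
    """Similarity in [0, 1]; identical sequences score 1.0."""
    if not a or not b:
        return 0.0
    return lcs_length(a, b) / (len(a) * len(b)) ** 0.5
```

The quadratic cost per pair is exactly why the paper invests in faster LCS algorithms before clustering large sequence sets.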
CMA – a comprehensive Bioconductor package for supervised classification with high dimensional data
Slawski, M; Daumer, M; Boulesteix, A-L
2008-01-01
Background For the last eight years, microarray-based classification has been a major topic in statistics, bioinformatics and biomedicine research. Traditional methods often yield unsatisfactory results or may even be inapplicable in the so-called "p ≫ n" setting where the number of predictors p by far exceeds the number of observations n, hence the term "ill-posed problem". Careful model selection and evaluation satisfying accepted good-practice standards is a very complex task for statisticians without experience in this area or for scientists with limited statistical background. The multiplicity of available methods for class prediction based on high-dimensional data is an additional practical challenge for inexperienced researchers. Results In this article, we introduce a new Bioconductor package called CMA (standing for "Classification for MicroArrays") for automatically performing variable selection, parameter tuning, classifier construction, and unbiased evaluation of the constructed classifiers using a large number of usual methods. Without much time and effort, users are provided with an overview of the unbiased accuracy of most top-performing classifiers. Furthermore, the standardized evaluation framework underlying CMA can also be beneficial in statistical research for comparison purposes, for instance if a new classifier has to be compared to existing approaches. Conclusion CMA is a user-friendly comprehensive package for classifier construction and evaluation implementing most usual approaches. It is freely available from the Bioconductor website. PMID:18925941
Defining and evaluating classification algorithm for high-dimensional data based on latent topics.
Directory of Open Access Journals (Sweden)
Le Luo
Full Text Available Automatic text categorization is one of the key techniques in information retrieval and the data mining field. Classification is usually time-consuming when the training dataset is large and high-dimensional. Many methods have been proposed to solve this problem, but few can achieve satisfactory efficiency. In this paper, we present a method which combines the Latent Dirichlet Allocation (LDA) algorithm and the Support Vector Machine (SVM). LDA is first used to generate a reduced-dimensional representation of topics as features in the vector space model. It reduces the number of features dramatically while keeping the necessary semantic information. The SVM is then employed to classify the data based on the generated features. We evaluate the algorithm on the 20 Newsgroups and Reuters-21578 datasets, respectively. The experimental results show that classification based on our proposed LDA+SVM model achieves high performance in terms of precision, recall and F1 measure. Further, it can achieve this within a much shorter time frame. Our process improves greatly upon the previous work in this field and displays strong potential to achieve a streamlined classification process for a wide range of applications.
A hyper-spherical adaptive sparse-grid method for high-dimensional discontinuity detection
Energy Technology Data Exchange (ETDEWEB)
Zhang, Guannan [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Webster, Clayton G. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Gunzburger, Max D. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Burkardt, John V. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)
2014-03-01
This work proposes and analyzes a hyper-spherical adaptive hierarchical sparse-grid method for detecting jump discontinuities of functions in high-dimensional spaces. The method is motivated by the theoretical and computational inefficiencies of well-known adaptive sparse-grid methods for discontinuity detection. Our novel approach constructs a function representation of the discontinuity hyper-surface of an N-dimensional discontinuous quantity of interest, by virtue of a hyper-spherical transformation. Then, a sparse-grid approximation of the transformed function is built in the hyper-spherical coordinate system, whose value at each point is estimated by solving a one-dimensional discontinuity detection problem. Due to the smoothness of the hyper-surface, the new technique can identify jump discontinuities with significantly reduced computational cost, compared to existing methods. Moreover, hierarchical acceleration techniques are also incorporated to further reduce the overall complexity. Rigorous error estimates and complexity analyses of the new method are provided, as are several numerical examples that illustrate the effectiveness of the approach.
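The one-dimensional subproblem at the heart of the method, locating a jump along a ray, can be sketched with a simple bisection. This is an illustrative reduction only: it assumes exactly one jump in the bracket, whereas the paper's hierarchical sparse-grid machinery handles the full high-dimensional surface.

```python
def locate_jump(f, a, b, tol=1e-8):
    """Bisection search for a single jump discontinuity of f on [a, b].
    At each step, keep the half-interval across which f changes the most,
    shrinking the bracket around the jump location."""
    while b - a > tol:
        m = 0.5 * (a + b)
        if abs(f(m) - f(a)) >= abs(f(b) - f(m)):
            b = m  # the jump lies in the left half
        else:
            a = m  # the jump lies in the right half
    return 0.5 * (a + b)
```

About 27 halvings bracket the jump to 1e-8 on a unit interval, which is the cheap 1D primitive the sparse-grid construction calls at many points on the hyper-sphere.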
Free-energy calculations along a high-dimensional fragmented path with constrained dynamics.
Chen, Changjun; Huang, Yanzhao; Xiao, Yi
2012-09-01
Free-energy calculations for high-dimensional systems, such as peptides or proteins, always suffer from a serious sampling problem in a huge conformational space. For such systems, path-based free-energy methods, such as thermodynamic integration or free-energy perturbation, are good choices. However, both of them need sufficient sampling along a predefined transition path, which can only be controlled using restrained or constrained dynamics. Constrained simulations produce more reasonable free-energy profiles than restrained simulations. But calculations of standard constrained dynamics require an explicit expression of reaction coordinates as a function of Cartesian coordinates of all related atoms, which may be difficult to find for the complex transition of biomolecules. In this paper, we propose a practical solution: (1) We use restrained dynamics to define an optimized transition path, divide it into small fragments, and define a virtual reaction coordinate to denote a position along the path. (2) We use constrained dynamics to perform a formal free-energy calculation for each fragment and collect the values together to provide the entire free-energy profile. This method avoids the requirement to explicitly define reaction coordinates in Cartesian coordinates and provides a novel strategy to perform free-energy calculations for biomolecules along any complex transition path.
Feature selection for high-dimensional integrated data
Zheng, Charles
2012-04-26
Motivated by the problem of identifying correlations between genes or features of two related biological systems, we propose a model of feature selection in which only a subset of the predictors Xt are dependent on the multidimensional variate Y, and the remainder of the predictors constitute a “noise set” Xu independent of Y. Using Monte Carlo simulations, we investigated the relative performance of two methods: thresholding and singular-value decomposition, in combination with stochastic optimization to determine “empirical bounds” on the small-sample accuracy of an asymptotic approximation. We demonstrate the utility of the thresholding and SVD feature selection methods with respect to a recent infant intestinal gene expression and metagenomics dataset.
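The thresholding method can be illustrated with a minimal sketch: score each predictor by its absolute Pearson correlation with a response and keep those above a cutoff. This is a simplified univariate-Y version for illustration; the paper treats a multidimensional Y and pairs thresholding with SVD.

```python
def pearson(x, y):
    """Pearson correlation of two equal-length sequences (0.0 if degenerate)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    if sxx == 0 or syy == 0:
        return 0.0  # constant column: no linear association
    return sxy / (sxx * syy) ** 0.5

def threshold_select(X_cols, y, tau):
    """Indices of predictors whose |correlation with y| exceeds tau."""
    return [j for j, col in enumerate(X_cols) if abs(pearson(col, y)) > tau]
```

Predictors genuinely tied to y survive the cutoff while near-uncorrelated "noise set" columns are screened out; the threshold tau trades false positives against missed signals.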
A simple new filter for nonlinear high-dimensional data assimilation
Tödter, Julian; Kirchgessner, Paul; Ahrens, Bodo
2015-04-01
performance with a realistic ensemble size. The results confirm that, in principle, it can be applied successfully and as simple as the ETKF in high-dimensional problems without further modifications of the algorithm, even though it is only based on the particle weights. This proves that the suggested method constitutes a useful filter for nonlinear, high-dimensional data assimilation, and is able to overcome the curse of dimensionality even in deterministic systems.
Mapping the human DC lineage through the integration of high-dimensional techniques
See, Peter; Dutertre, Charles-Antoine; Chen, Jinmiao; Günther, Patrick; McGovern, Naomi; Irac, Sergio Erdal; Gunawan, Merry; Beyer, Marc; Händler, Kristian; Duan, Kaibo; Sumatoh, Hermi Rizal Bin; Ruffin, Nicolas; Jouve, Mabel; Gea-Mallorquí, Ester; Hennekam, Raoul C. M.; Lim, Tony; Yip, Chan Chung; Wen, Ming; Malleret, Benoit; Low, Ivy; Shadan, Nurhidaya Binte; Fen, Charlene Foong Shu; Tay, Alicia; Lum, Josephine; Zolezzi, Francesca; Larbi, Anis; Poidinger, Michael; Chan, Jerry K. Y.; Chen, Qingfeng; Rénia, Laurent; Haniffa, Muzlifah; Benaroch, Philippe; Schlitzer, Andreas; Schultze, Joachim L.; Newell, Evan W.; Ginhoux, Florent
2017-01-01
Dendritic cells (DC) are professional antigen-presenting cells that orchestrate immune responses. The human DC population comprises two main functionally specialized lineages, whose origins and differentiation pathways remain incompletely defined. Here, we combine two high-dimensional
Approximating high-dimensional dynamics by barycentric coordinates with linear programming.
Hirata, Yoshito; Shiro, Masanori; Takahashi, Nozomu; Aihara, Kazuyuki; Suzuki, Hideyuki; Mas, Paloma
2015-01-01
The increasing development of novel methods and techniques facilitates the measurement of high-dimensional time series but challenges our ability for accurate modeling and prediction. The use of a general mathematical model requires the inclusion of many parameters, which are difficult to fit from the relatively short high-dimensional time series observed. Here, we propose a novel method to accurately model a high-dimensional time series. Our method extends barycentric coordinates to high-dimensional phase space by employing linear programming, while allowing for approximation errors explicitly. The extension helps to produce free-running time-series predictions that preserve typical topological, dynamical, and/or geometric characteristics of the underlying attractors more accurately than the widely used radial basis function model. The method can be broadly applied, from helping to improve weather forecasting, to creating electronic instruments that sound more natural, and to comprehensively understanding complex biological data.
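The barycentric coordinates being extended can be shown in their classical two-dimensional form: the weights that express a point as an affine combination of triangle vertices, solved here exactly by Cramer's rule. This is only the textbook base case; the paper's linear-programming formulation generalizes it to many vertices in high-dimensional phase space with explicit approximation errors.

```python
def barycentric_2d(p, tri):
    """Barycentric coordinates (w0, w1, w2) of point p in triangle tri, so
    that p = w0*v0 + w1*v1 + w2*v2 with w0 + w1 + w2 = 1 (Cramer's rule)."""
    (x0, y0), (x1, y1), (x2, y2) = tri
    det = (y1 - y2) * (x0 - x2) + (x2 - x1) * (y0 - y2)
    w0 = ((y1 - y2) * (p[0] - x2) + (x2 - x1) * (p[1] - y2)) / det
    w1 = ((y2 - y0) * (p[0] - x2) + (x0 - x2) * (p[1] - y2)) / det
    return w0, w1, 1.0 - w0 - w1
```

Weights inside [0, 1] mean the point lies inside the simplex; predictions built from such convex combinations inherit the geometry of the observed states, which is what keeps free-running forecasts on the attractor.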
Experimental ladder proof of Hardy's nonlocality for high-dimensional quantum systems
Chen, Lixiang; Zhang, Wuhong; Wu, Ziwen; Wang, Jikang; Fickler, Robert; Karimi, Ebrahim
2017-08-01
Recent years have witnessed a rapidly growing interest in high-dimensional quantum entanglement for fundamental studies as well as towards novel applications. Therefore, the ability to verify entanglement between physical qudits, d -dimensional quantum systems, is of crucial importance. To show nonclassicality, Hardy's paradox represents "the best version of Bell's theorem" without using inequalities. However, so far it has only been tested experimentally for bidimensional vector spaces. Here, we formulate a theoretical framework to demonstrate the ladder proof of Hardy's paradox for arbitrary high-dimensional systems. Furthermore, we experimentally demonstrate the ladder proof by taking advantage of the orbital angular momentum of high-dimensionally entangled photon pairs. We perform the ladder proof of Hardy's paradox for dimensions 3 and 4, both with the ladder up to the third step. Our paper paves the way towards a deeper understanding of the nature of high-dimensionally entangled quantum states and may find applications in quantum information science.
Characterization of high-dimensional entangled systems via mutually unbiased measurements
CSIR Research Space (South Africa)
Giovannini, D
2013-04-01
Full Text Available Mutually unbiased bases (MUBs) play a key role in many protocols in quantum science, such as quantum key distribution. However, defining MUBs for arbitrary high-dimensional systems is theoretically difficult, and measurements in such bases can...
Mitigating the Insider Threat Using High-Dimensional Search and Modeling
National Research Council Canada - National Science Library
Van Den Berg, Eric; Uphadyaya, Shambhu; Ngo, Phi H; Muthukrishnan, Muthu; Palan, Rajago
2006-01-01
In this project a system was built aimed at mitigating insider attacks centered around a high-dimensional search engine for correlating the large number of monitoring streams necessary for detecting insider attacks...
Projection Bank: From High-dimensional Data to Medium-length Binary Codes
Liu, Li; Yu, Mengyang; Shao, Ling
2015-01-01
Recently, very high-dimensional feature representations, e.g., Fisher Vector, have achieved excellent performance for visual recognition and retrieval. However, these lengthy representations always cause extremely heavy computational and storage costs and even become unfeasible in some large-scale applications. A few existing techniques can transfer very high-dimensional data into binary codes, but they still require the reduced code length to be relatively long to maintain acceptable accurac...
Jiang, Tiefeng; Yang, Fan
2013-01-01
For random samples of size $n$ obtained from $p$-variate normal distributions, we consider the classical likelihood ratio tests (LRT) for their means and covariance matrices in the high-dimensional setting. These test statistics have been extensively studied in multivariate analysis, and their limiting distributions under the null hypothesis were proved to be chi-square distributions as $n$ goes to infinity and $p$ remains fixed. In this paper, we consider the high-dimensional case where both...
Zhu, Yinchu
2017-01-01
Economic modeling in a data-rich environment is often challenging. To allow for enough flexibility and to model heterogeneity, models might have parameters with dimensionality growing with (or even much larger than) the sample size of the data. Learning these high-dimensional parameters requires new methodologies and theories. We consider three important high-dimensional models and propose novel methods for estimation and inference. Empirical applications in economics and finance are also stu...
Swank, Jacqueline M.; Shin, Sang Min
2015-01-01
This research study focused on the use of a garden group counseling intervention to address the self-esteem of children with emotional and behavioral problems. The researchers found higher self-esteem among participants (N = 31) following the gardening group. Additionally, participants discussed feeling calm and happy and learning to work…
Mulcahy, Robert Sean
2010-01-01
Learners inevitably enter adult technical training classrooms--indeed, in all classrooms--with different levels of expertise on the subject matter. When the diversity of expertise is wide and the course makes use of small group problem solving, instructors have a choice about how to group learners: they may distribute learners with greater…
Directory of Open Access Journals (Sweden)
V. Kuznetsova
2013-04-01
Full Text Available The follow-up study was designed to assess and compare the effects of sensitivity to reward, sensitivity to punishment and family environment on internalizing and externalizing problems in a community sample of 477 children and adolescents aged 3-18 (50% female). The level of problem behavior at Time 1 in all age groups was the best predictor of the corresponding type of problem level at Time 2; the residual variance in problem behavior was also predicted by sensitivity to reinforcement. Family factors contributed to change in externalizing problems and hyperactivity in preschool and middle-childhood children; living in an urban environment was a significant factor for peer problems. The study showed that individual differences interact with family factors in the process of development, and the family environment can strengthen or mitigate the influence of biological factors on children's and adolescents' adjustment.
Multivariate statistical analysis a high-dimensional approach
Serdobolskii, V
2000-01-01
In the last few decades the accumulation of large amounts of information in numerous applications has stimulated an increased interest in multivariate analysis. Computer technologies allow one to use multi-dimensional and multi-parametric models successfully. At the same time, an interest arose in statistical analysis with a deficiency of sample data. Nevertheless, it is difficult to describe the recent state of affairs in applied multivariate methods as satisfactory. Unimprovable (dominating) statistical procedures are still unknown except for a few specific cases. The simplest problem of estimating the mean vector with minimum quadratic risk is unsolved, even for normal distributions. Commonly used standard linear multivariate procedures based on the inversion of sample covariance matrices can lead to unstable results or provide no solution depending on the data. Programs included in standard statistical packages cannot process 'multi-collinear data' and there are no theoretical recommendations ...
High-dimensional chaotic and attractor systems a comprehensive introduction
Ivancevic, Vladimir G
2007-01-01
This is a graduate–level monographic textbook devoted to understanding, prediction and control of high–dimensional chaotic and attractor systems of real life. The objective of the book is to provide the serious reader with a serious scientific tool that will enable the actual performance of competitive research in high–dimensional chaotic and attractor dynamics. The book has nine Chapters. The first Chapter gives a textbook-like introduction into the low-dimensional attractors and chaos. This Chapter has an inspirational character, similar to other books on nonlinear dynamics and deterministic chaos. The second Chapter deals with Smale’s topological transformations of stretching, squeezing and folding (of the system’s phase–space), developed for the purpose of chaos theory. The third Chapter is devoted to Poincaré's 3-body problem and basic techniques of chaos control, mostly of Ott-Grebogi-Yorke type. The fourth Chapter is a review of both Landau’s and topological phase transition theory, as w...
Sala, Giovanni; Gobet, Fernand
2017-12-01
It has been proposed that playing chess enables children to improve their ability in mathematics. These claims have been recently evaluated in a meta-analysis (Sala & Gobet, 2016, Educational Research Review, 18, 46-57), which indicated a significant effect in favor of the groups playing chess. However, the meta-analysis also showed that most of the reviewed studies used a poor experimental design (in particular, they lacked an active control group). We ran two experiments that used a three-group design including both an active and a passive control group, with a focus on mathematical ability. In the first experiment (N = 233), a group of third and fourth graders was taught chess for 25 hours and tested on mathematical problem-solving tasks. Participants also filled in a questionnaire assessing their meta-cognitive ability for mathematics problems. The group playing chess was compared to an active control group (playing checkers) and a passive control group. The three groups showed no statistically significant difference in mathematical problem-solving or metacognitive abilities in the posttest. The second experiment (N = 52) broadly used the same design, but the Oriental game of Go replaced checkers in the active control group. While the chess-treated group and the passive control group slightly outperformed the active control group with mathematical problem solving, the differences were not statistically significant. No differences were found with respect to metacognitive ability. These results suggest that the effects (if any) of chess instruction, when rigorously tested, are modest and that such interventions should not replace the traditional curriculum in mathematics.
Software Tools for Robust Analysis of High-Dimensional Data
Directory of Open Access Journals (Sweden)
Valentin Todorov
2014-06-01
Full Text Available The present work discusses robust multivariate methods specifically designed for high dimensions. Their implementation in R is presented and their application is illustrated on examples. The first group are algorithms for outlier detection, already introduced elsewhere and implemented in other packages. The value added of the new package is that all methods follow the same design pattern and thus can use the same graphical and diagnostic tools. The next topic covered is sparse principal components, including an object-oriented interface to the standard method proposed by Zou, Hastie, and Tibshirani (2006) and the robust one proposed by Croux, Filzmoser, and Fritz (2013). Robust partial least squares (see Hubert and Vanden Branden 2003) as well as partial least squares for discriminant analysis conclude the scope of the new package.
Directory of Open Access Journals (Sweden)
Aurélie eVilenne
2015-11-01
Full Text Available Aims: Recent studies with animal models showed that the stimulant and sedative effects of alcohol change during the adolescent period. In humans, the stimulant effects of ethanol are most often indirectly recorded through the measurement of explicit and implicit alcohol effect expectancies. However, it is unknown how such implicit and explicit expectancies evolve with age in humans during adolescence. Methods: Adolescent (13-16 years old), young adult (17-18 years old) and adult (35-55 years old) participants were recruited. On the basis of their score on the Alcohol Use Disorder Identification Test (AUDIT), they were classified as non-problem (AUDIT ≤ 7) or problem (AUDIT ≥ 11) drinkers. The participants completed the Alcohol Expectancy Questionnaire (AEQ) and performed two unipolar Implicit Association Tests (IATs) to assess implicit associations between alcohol and the concepts of stimulation and sedation. Results: Problem drinkers from the three age groups reported significantly higher positive alcohol expectancies than non-problem drinkers on all AEQ subscales. Positive explicit alcohol expectancies also gradually decreased with age, with adolescent problem drinkers reporting especially high positive expectancies. This effect was statistically significant for all positive expectancies, with the exception of relaxation expectancies, which were only close to statistical significance. In contrast, stimulation and sedation implicit alcohol associations were not significantly different between problem and non-problem drinkers and did not change with age. Conclusions: These results indicate that explicit positive alcohol effect expectancies predict current alcohol consumption levels, especially in adolescents. Positive alcohol expectancies also gradually decrease with age across the three cross-sectional groups of adolescents, young adults and adults. This effect might be related to changes in the physiological response to alcohol.
Scalable collaborative targeted learning for high-dimensional data.
Ju, Cheng; Gruber, Susan; Lendle, Samuel D; Chambaz, Antoine; Franklin, Jessica M; Wyss, Richard; Schneeweiss, Sebastian; van der Laan, Mark J
2017-01-01
Robust inference of a low-dimensional parameter in a large semi-parametric model relies on external estimators of infinite-dimensional features of the distribution of the data. Typically, only one of the latter is optimized for the sake of constructing a well-behaved estimator of the low-dimensional parameter of interest. Optimizing more than one of them for the sake of achieving a better bias-variance trade-off in the estimation of the parameter of interest is the core idea driving the general template of the collaborative targeted minimum loss-based estimation procedure. The original instantiation of the collaborative targeted minimum loss-based estimation template can be presented as a greedy forward stepwise collaborative targeted minimum loss-based estimation algorithm. It does not scale well when the number p of covariates increases drastically. This motivates the introduction of a novel instantiation of the collaborative targeted minimum loss-based estimation template where the covariates are pre-ordered. Its time complexity is [Formula: see text] as opposed to the original [Formula: see text], a remarkable gain. We propose two pre-ordering strategies and suggest a rule of thumb to develop other meaningful strategies. Because it is usually unclear a priori which pre-ordering strategy to choose, we also introduce another instantiation called SL-C-TMLE algorithm that enables the data-driven choice of the better pre-ordering strategy given the problem at hand. Its time complexity is [Formula: see text] as well. The computational burden and relative performance of these algorithms were compared in simulation studies involving fully synthetic data or partially synthetic data based on a real world large electronic health database; and in analyses of three real, large electronic health databases. In all analyses involving electronic health databases, the greedy collaborative targeted minimum loss-based estimation algorithm is unacceptably slow. Simulation studies
Bayesian Inference of High-Dimensional Dynamical Ocean Models
Lin, J.; Lermusiaux, P. F. J.; Lolla, S. V. T.; Gupta, A.; Haley, P. J., Jr.
2015-12-01
This presentation addresses a holistic set of challenges in high-dimensional ocean Bayesian nonlinear estimation: i) predict the probability distribution functions (pdfs) of large nonlinear dynamical systems using stochastic partial differential equations (PDEs); ii) assimilate data using Bayes' law with these pdfs; iii) predict the future data that optimally reduce uncertainties; and iv) rank the known and learn the new model formulations themselves. Overall, we allow the joint inference of the state, equations, geometry, boundary conditions and initial conditions of dynamical models. Examples are provided for time-dependent fluid and ocean flows, including cavity, double-gyre and Strait flows with jets and eddies. The Bayesian model inference, based on limited observations, is illustrated first by the estimation of obstacle shapes and positions in fluid flows. Next, the Bayesian inference of biogeochemical reaction equations and of their states and parameters is presented, illustrating how PDE-based machine learning can rigorously guide the selection and discovery of complex ecosystem models. Finally, the inference of multiscale bottom gravity current dynamics is illustrated, motivated in part by classic overflows and dense water formation sites and their relevance to climate monitoring and dynamics. This is joint work with our MSEAS group at MIT.
Yuan, Xiaoru; Ren, Donghao; Wang, Zuchao; Guo, Cong
2013-12-01
For high-dimensional data, this work proposes two novel visual exploration methods to gain insights into the data aspect and the dimension aspect of the data. The first is a Dimension Projection Matrix, as an extension of a scatterplot matrix. In the matrix, each row or column represents a group of dimensions, and each cell shows a dimension projection (such as MDS) of the data with the corresponding dimensions. The second is a Dimension Projection Tree, where every node is either a dimension projection plot or a Dimension Projection Matrix. Nodes are connected with links and each child node in the tree covers a subset of the parent node's dimensions or a subset of the parent node's data items. While the tree nodes visualize the subspaces of dimensions or subsets of the data items under exploration, the matrix nodes enable cross-comparison between different combinations of subspaces. Both the Dimension Projection Matrix and the Dimension Projection Tree can be constructed algorithmically through automation, or manually through user interaction. Our implementation enables interactions such as drilling down to explore different levels of the data, merging or splitting the subspaces to adjust the matrix, and applying brushing to select data clusters. Our method enables simultaneously exploring data correlation and dimension correlation for data with high dimensions.
Energy Technology Data Exchange (ETDEWEB)
Tripathy, Rohit, E-mail: rtripath@purdue.edu; Bilionis, Ilias, E-mail: ibilion@purdue.edu; Gonzalez, Marcial, E-mail: marcial-gonzalez@purdue.edu
2016-09-15
Uncertainty quantification (UQ) tasks, such as model calibration, uncertainty propagation, and optimization under uncertainty, typically require several thousand evaluations of the underlying computer codes. To cope with the cost of simulations, one replaces the real response surface with a cheap surrogate based, e.g., on polynomial chaos expansions, neural networks, support vector machines, or Gaussian processes (GP). However, the number of simulations required to learn a generic multivariate response grows exponentially as the input dimension increases. This curse of dimensionality can only be addressed if the response exhibits some special structure that can be discovered and exploited. A wide range of physical responses exhibit a special structure known as an active subspace (AS). An AS is a linear manifold of the stochastic space characterized by maximal response variation. The idea is that one should first identify this low dimensional manifold, project the high-dimensional input onto it, and then link the projection to the output. If the dimensionality of the AS is low enough, then learning the link function is a much easier problem than the original problem of learning a high-dimensional function. The classic approach to discovering the AS requires gradient information, a fact that severely limits its applicability. Furthermore, and partly because of its reliance on gradients, it is not able to handle noisy observations. The latter is an essential trait if one wants to be able to propagate uncertainty through stochastic simulators, e.g., through molecular dynamics codes. In this work, we develop a probabilistic version of AS which is gradient-free and robust to observational noise. Our approach relies on a novel Gaussian process regression with built-in dimensionality reduction. In particular, the AS is represented as an orthogonal projection matrix that serves as yet another covariance function hyper-parameter to be estimated from the data. To train the
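For contrast with the gradient-free approach proposed in the abstract, the classic gradient-based active-subspace recovery it improves upon can be sketched in a few lines: average the gradient outer products over input samples and take the dominant eigenvector. The ridge function f(x) = (w·x)² and all parameters below are toy assumptions, not from the paper.

```python
# Classic (gradient-based) active-subspace discovery: estimate
# C = E[grad f(x) grad f(x)^T] by Monte Carlo and extract its dominant
# eigenvector with power iteration. For the toy ridge function
# f(x) = (w.x)^2 the gradient is always parallel to w, so the recovered
# direction should align with w. Pure stdlib.
import random

def grad_f(x, w):
    """Gradient of f(x) = (w.x)^2, which varies only along w."""
    s = sum(wi * xi for wi, xi in zip(w, x))
    return [2.0 * s * wi for wi in w]

def dominant_eigvec(C, iters=100):
    """Power iteration on a small symmetric matrix C."""
    v = [1.0] * len(C)
    for _ in range(iters):
        v = [sum(C[i][j] * v[j] for j in range(len(C))) for i in range(len(C))]
        norm = sum(vi * vi for vi in v) ** 0.5
        v = [vi / norm for vi in v]
    return v

random.seed(0)
w = [0.6, 0.8]                                  # true active direction (unit)
d = len(w)
C = [[0.0] * d for _ in range(d)]
for _ in range(500):                            # Monte Carlo estimate of E[g g^T]
    x = [random.gauss(0, 1) for _ in range(d)]
    g = grad_f(x, w)
    for i in range(d):
        for j in range(d):
            C[i][j] += g[i] * g[j] / 500.0

v = dominant_eigvec(C)
print(v)  # aligned with +/- w
```

The probabilistic AS of the paper dispenses with `grad_f` entirely, which is what makes it applicable to noisy, gradient-free simulators.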
Directory of Open Access Journals (Sweden)
Narissara Eiamkanitchat
2015-01-01
Full Text Available Neurofuzzy methods capable of selecting a handful of informative features are very useful in the analysis of high-dimensional datasets. A neurofuzzy classification scheme that can create proper linguistic features and simultaneously select informative features for a high-dimensional dataset is presented and applied to the diffuse large B-cell lymphomas (DLBCL) microarray classification problem. The classification scheme is the combination of an embedded linguistic feature creation and tuning algorithm, feature selection, and rule-based classification in one neural network framework. The adjustable linguistic features are embedded in the network structure via fuzzy membership functions. The network performs the classification task on the high-dimensional DLBCL microarray dataset either by direct calculation or by the rule-based approach. Ten-fold cross validation is applied to ensure the validity of the results. Very good results from both direct calculation and logical rules are achieved. The results show that the network can select a small set of informative features in this high-dimensional dataset. Compared with other previously proposed methods, our method yields better classification performance.
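The notion of a linguistic feature can be sketched independently of the full neurofuzzy network: a raw value is re-expressed as degrees of membership in fuzzy terms such as low / medium / high via Gaussian membership functions. The centres and widths below are arbitrary illustrative choices; in the scheme described above they are embedded in the network and tuned during training.

```python
# Sketch of the "linguistic feature" idea only: one raw expression value
# becomes a tuple of (low, medium, high) membership degrees computed by
# Gaussian membership functions. Centres and widths are toy values.
import math

def gaussian_mf(x, centre, sigma):
    """Degree of membership of x in a fuzzy set centred at `centre`."""
    return math.exp(-((x - centre) ** 2) / (2.0 * sigma ** 2))

def linguistic_features(x, centres=(-1.0, 0.0, 1.0), sigma=0.5):
    """Map one raw value to (low, medium, high) membership degrees."""
    return tuple(gaussian_mf(x, c, sigma) for c in centres)

low, med, high = linguistic_features(0.9)
print(low, med, high)   # `high` dominates for x = 0.9
```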
Aflakseir, Abdulaziz; Zarei, Masoumeh
2013-01-01
Background Studies have shown that individuals with fertility problems experience psychosocial problems. The use of various coping strategies seems to have different impacts on women with infertility stress. The aim of this study was to examine the role of coping strategies (active-avoidance, passive-avoidance, active confronting and meaning based) in predicting infertility stress among a group of women seeking infertility treatment in Shiraz. Methods One hundred twenty infertile women were r...
Groupthink: Effects of Cohesiveness and Problem-Solving Procedures on Group Decision Making.
Callaway, Michael R.; Esser, James K.
1984-01-01
Tested Janis' groupthink formulation with 126 students by manipulating group cohesiveness and adequacy of decision procedures in a factorial design. Results showed highest quality decisions were produced by groups of intermediate cohesiveness. Highly cohesive groups without adequate decision procedures (the groupthink condition) tended to make the…
Idowu, Yewande; Easton, Graham
2016-01-01
Objectives To evaluate the perception of medical students of the new approach to problem-based learning which involves students writing their own problem-based learning cases based on their recent clinical attachment, and team assessment. Design Focus group interviews with students using purposive sampling. Transcripts of the audio recordings were analysed using thematic analysis. Setting Imperial College School of Medicine, London. Participants Medical students in the second year of the MBBS course, who attended the problem-based learning case writing session. Main outcome measures To elicit the students’ views about problem-based learning case writing and team assessment. Results The following broad themes emerged: effect of group dynamics on the process; importance of defining the tutor’s role; role of summative assessment; feedback as a learning tool and the skills developed during the process. Conclusions Overall the students found the new approach, writing problem-based learning cases based on patients seen during their clinical attachments, useful in helping them to gain a better understanding about the problem-based learning process, promoting creativity and reinforcing the importance of team work and peer assessment which are vital professional skills. Further tutor development and guidance for students about the new approach was found to be important in ensuring it is a good learning experience. We hope this evaluation will be of use to other institutions considering introducing students’ case writing to problem-based learning. PMID:26981255
Idowu, Yewande; Muir, Elizabeth; Easton, Graham
2016-03-01
To evaluate the perception of medical students of the new approach to problem-based learning which involves students writing their own problem-based learning cases based on their recent clinical attachment, and team assessment. Focus group interviews with students using purposive sampling. Transcripts of the audio recordings were analysed using thematic analysis. Imperial College School of Medicine, London. Medical students in the second year of the MBBS course, who attended the problem-based learning case writing session. To elicit the students' views about problem-based learning case writing and team assessment. The following broad themes emerged: effect of group dynamics on the process; importance of defining the tutor's role; role of summative assessment; feedback as a learning tool and the skills developed during the process. Overall the students found the new approach, writing problem-based learning cases based on patients seen during their clinical attachments, useful in helping them to gain a better understanding about the problem-based learning process, promoting creativity and reinforcing the importance of team work and peer assessment which are vital professional skills. Further tutor development and guidance for students about the new approach was found to be important in ensuring it is a good learning experience. We hope this evaluation will be of use to other institutions considering introducing students' case writing to problem-based learning.
Directory of Open Access Journals (Sweden)
N. V. Bezverkhniy
2015-01-01
Full Text Available The paper considers the possibility for building a one-way function in the small cancellation group. Thus, it uses the algorithm to solve the problem for a cyclic subgroup, also known as a discrete logarithm problem, and the algorithm to solve the word problem in this class of groups.Research is conducted using geometric methods of combinatorial group theory (the method of diagrams in groups.In public channel exchange of information are used one-way functions, direct calculation of which should be much less complicated than the calculation of the inverse function. The paper considers the combination of two problems: discrete logarithms and conjugacy. This leads to the problem of conjugate membership for a cyclic subgroup. The work proposes an algorithm based on this problem, which can be used as a basis in investigation of the appropriate one-way function for its fitness to build a public key distribution scheme.The study used doughnut charts of word conjugacy, and for one special class of such charts has been proven a property of the layer-based periodicity. The presence of such properties is obviously leads to a solution of the power conjugacy of words in the considered class of groups. Unfortunately, this study failed to show any periodicity of a doughnut chart, but for one of two possible classes this periodicity has been proven.The building process of one-way function considered in the paper was studied in terms of possibility to calculate both direct and inverse mappings. The computational complexity was not considered. Thus, the following two tasks were yet unresolved: determining the quality of one-way function in the above protocol of the public key distribution and completing the study of the periodicity of doughnut charts of word conjugacy, leading to a positive solution of the power conjugacy of words in the class groups under consideration.
1999-02-15
Normal sleep is required for optimal functioning. Normal wakefulness should be effortless and free of unintended sleep episodes. Problem sleepiness is common and occurs when the quantity of sleep is inadequate because of primary sleep disorders, other medical conditions or lifestyle factors. Medications and substances that disturb sleep, such as caffeine and nicotine, or those that have sedating side effects, may also cause problem sleepiness. This condition can lead to impairment in attention, performance problems at work and school, and potentially dangerous situations when the patient is driving or undertaking other safety-sensitive tasks. However, problem sleepiness is generally correctable when it is recognized. Asking a patient and his or her bed partner about the likelihood of drowsiness or of falling asleep during specific activities, as well as questions that uncover factors contributing to the sleepiness, helps the physician to recognize the disorder. Accurate diagnosis of specific sleep disorders may require evaluation by a specialist. The primary care physician is in an ideal position to identify signs and symptoms of problem sleepiness and initiate appropriate care of the patient, including educating the patient about the dangers of functioning while impaired by sleepiness.
Dyson, Rachel; Robertson, Gail C; Wong, Maria M
2015-06-01
Internalizing problems in adolescence encompass behaviors directed inward at the self (Colman, Wadsworth, Croudace, & Jones, 2007). Several predictors have been linked to internalizing problems including antisocial and prosocial peers (Cartwright, 2007; Chung, 2010). Effortful control, a component of self-regulation, is one factor that could mediate the relationship between peer behaviors and individual outcomes. This study assessed the relationship between peer behaviors, effortful control, and adolescent internalizing problems. Participants were 151 middle school adolescents (M = 12.16 years old) who completed self-report questionnaires regarding behaviors of their peers, perceptions of effortful control, and experiences of internalizing problems. Structural equation modeling (SEM) yielded a significant negative relationship between antisocial peers and effortful control, and a significant positive relationship between prosocial peers and effortful control. In addition, effortful control significantly mediated the relationship between prosocial peers and internalizing problems, but not for antisocial peers. Implications for interventions related to adolescent health were discussed. Copyright © 2015 The Foundation for Professionals in Services for Adolescents. Published by Elsevier Ltd. All rights reserved.
Awate, Suyash P; Yushkevich, Paul; Song, Zhuang; Licht, Daniel; Gee, James C
2009-01-01
The paper presents a novel statistical framework for cortical folding pattern analysis that relies on a rich multivariate descriptor of folding patterns in a region of interest (ROI). The ROI-based approach avoids problems faced by spatial-normalization-based approaches stemming from the severe deficiency of homologous features between typical human cerebral cortices. Unlike typical ROI-based methods that summarize folding complexity or shape by a single number, the proposed descriptor unifies complexity and shape of the surface in a high-dimensional space. In this way, the proposed framework couples the reliability of ROI-based analysis with the richness of the novel cortical folding pattern descriptor. Furthermore, the descriptor can easily incorporate additional variables, e.g. cortical thickness. The paper proposes a novel application of a nonparametric permutation-based approach for statistical hypothesis testing for any multivariate high-dimensional descriptor. While the proposed framework has a rigorous theoretical underpinning, it is straightforward to implement. The framework is validated via simulated and clinical data. The paper is the first to quantitatively evaluate cortical folding in neonates with complex congenital heart disease.
A Combined group EA-PROMETHEE method for a supplier selection problem
Directory of Open Access Journals (Sweden)
Hamid Reza Rezaee Kelidbari
2016-07-01
Full Text Available One of the important decisions which impacts all firms’ activities is the supplier selection problem. Since the 1950s, several works have addressed this problem by treating different aspects and instances. In this paper, a combined multiple criteria decision making (MCDM technique (EA-PROMETHEE has been applied to implement a proper decision making. To this aim, after reviewing the theoretical background regarding to supplier selection, the extension analysis (EA is used to determine the importance of criteria and PROMETHEE for appraisal of suppliers based on the criteria. An empirical example illustrated the proposed approach.
Rigidity problem for lattices in solvable Lie groups
Indian Academy of Sciences (India)
... group as a lattice. Now we may give an example of a rigid lattice T in a solvable Lie group G of (I)-type. This lattice will be constructed with the help of a weakly rigid lattice in a splittable Lie group of (I)-type from example 2.8 and a superrigid lattice from the corollary above (example 8.3).
A Group Theoretic Approach to Metaheuristic Local Search for Partitioning Problems
2005-05-01
Fraleigh (1976) or Herstein (1975). Colletti (1999) provides a robust treatment of group theory from the perspective of metaheuristics. Section 2.2 ... operation (Fraleigh 1976, Herstein 1975). 2.1.2 Subgroups: Let G be a group with H ⊆ G. If H is also a group under the operation of G, H is a subgroup of ... subgroup of G that contains g (Fraleigh 1976). The group K = ⟨g, h, j⟩ is the smallest subgroup that contains g, h, and j. 2.1.5 The External Direct
A Comparison of Methods for Estimating the Determinant of High-Dimensional Covariance Matrix.
Hu, Zongliang; Dong, Kai; Dai, Wenlin; Tong, Tiejun
2017-09-21
The determinant of the covariance matrix for high-dimensional data plays an important role in statistical inference and decision making. It has many real applications, including statistical tests and information theory. Due to the statistical and computational challenges of high dimensionality, little work has been done on estimating the determinant of a high-dimensional covariance matrix. In this paper, we estimate the determinant of the covariance matrix using some recent proposals for estimating the high-dimensional covariance matrix itself. Specifically, we consider a total of eight covariance matrix estimation methods for comparison. Through extensive simulation studies, we explore and summarize some interesting comparison results among the compared methods. We also provide practical guidelines, based on the sample size, the dimension, and the correlation of the data set, for estimating the determinant of a high-dimensional covariance matrix. Finally, from the perspective of the loss function, the comparison study in this paper may also serve as a proxy for assessing the performance of covariance matrix estimation.
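The core difficulty the paper addresses can be seen in a few lines: when p > n the sample covariance is rank-deficient, so a plug-in determinant estimate degenerates, while even a simple linear shrinkage estimator (used here as a stand-in for the eight methods compared, with an arbitrary shrinkage intensity) yields a finite log-determinant:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 50, 100                          # fewer samples than dimensions
X = rng.standard_normal((n, p))         # true covariance is the identity (log-det 0)

S = np.cov(X, rowvar=False)             # sample covariance: rank-deficient for p > n
# slogdet of the (numerically) singular S is meaningless, so shrinkage is needed
alpha = 0.2                             # illustrative shrinkage intensity
target = np.trace(S) / p * np.eye(p)    # scaled-identity shrinkage target
S_shrunk = (1 - alpha) * S + alpha * target
sign, logdet = np.linalg.slogdet(S_shrunk)
```

The shrunk estimator is positive definite by construction, so `sign` is 1 and `logdet` is finite; how close it gets to the true log-determinant is exactly what the paper's simulations compare across estimators.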
A Hybrid Semi-Supervised Anomaly Detection Model for High-Dimensional Data
Directory of Open Access Journals (Sweden)
Hongchao Song
2017-01-01
Full Text Available Anomaly detection, which aims to identify observations that deviate from a nominal sample, is a challenging task for high-dimensional data. Traditional distance-based anomaly detection methods compute neighborhood distances between observations and suffer from the curse of dimensionality in high-dimensional space; for example, the distances between any pair of samples are similar and each sample may appear to be an outlier. In this paper, we propose a hybrid semi-supervised anomaly detection model for high-dimensional data that consists of two parts: a deep autoencoder (DAE) and an ensemble k-nearest neighbor graphs (K-NNG) based anomaly detector. Benefiting from its capacity for nonlinear mapping, the DAE is first trained to learn the intrinsic features of a high-dimensional dataset and represent the data in a more compact subspace. Several nonparametric KNN-based anomaly detectors are then built from different subsets that are randomly sampled from the whole dataset. The final prediction is made by combining all the anomaly detectors. The performance of the proposed method is evaluated on several real-life datasets, and the results confirm that the proposed hybrid model improves detection accuracy and reduces computational complexity.
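A minimal sketch of the hybrid scheme, with a PCA projection standing in for the trained deep autoencoder and plain NumPy k-NN detectors built on random subsamples; all dimensions, seeds, and parameters here are illustrative, not the paper's:

```python
import numpy as np

def knn_scores(train, queries, k=5):
    # anomaly score = mean distance to the k nearest training points
    d = np.linalg.norm(queries[:, None, :] - train[None, :, :], axis=2)
    d.sort(axis=1)
    return d[:, :k].mean(axis=1)

rng = np.random.default_rng(1)
X = rng.standard_normal((300, 50))        # nominal high-dimensional sample
outlier = 6.0 * np.ones((1, 50))          # one clear anomaly

# crude stand-in for the trained DAE: project onto the top principal components
mean = X.mean(axis=0)
_, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
encode = lambda Z: (Z - mean) @ Vt[:5].T

# ensemble of detectors, each built on a random subsample of the data
queries = np.vstack([X[:5], outlier])
scores = np.zeros(len(queries))
for seed in range(10):
    idx = np.random.default_rng(seed).choice(len(X), 100, replace=False)
    scores += knn_scores(encode(X[idx]), encode(queries))
scores /= 10
```

The outlier's averaged score stands well above those of the nominal points, which is the signal the ensemble thresholds on.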
Fostering Student Engagement: Creative Problem-Solving in Small Group Facilitations
Samson, Patricia L.
2015-01-01
Creative Problem-Solving (CPS) can be a transformative teaching methodology that supports a dialogical learning atmosphere, transcends the traditional classroom, and inspires excellence in students by linking real-life experiences with the curriculum. It supports a sense of inquiry that incorporates both experiential learning and the…
Group composition and its effect on female and male problem-solving in science education
Harskamp, Egbert; Ding, Ning; Suhre, Cor
2008-01-01
Background: Cooperative learning may help students elaborate upon problem information through interpersonal discourse, and this may provoke a higher level of thinking. Interaction stimulates students to put forward and order their thoughts, and to understand the ideas or questions of their peer
The Effects of Group Monitoring on Fatigue-Related Einstellung during Mathematical Problem Solving
Frings, Daniel
2011-01-01
Fatigue resulting from sleep deficit can lead to decreased performance in a variety of cognitive domains and can result in potentially serious accidents. The present study aimed to test whether fatigue leads to increased Einstellung (low levels of cognitive flexibility) in a series of mathematical problem-solving tasks. Many situations involving…
Murray, Lynn M.
2012-01-01
Live-client projects are increasingly used in marketing coursework. However, students, instructors, and clients are often disappointed by the results. This paper reports an approach drawn from the problem-based learning, scaffolding, and team formation and coaching literatures that uses a series of workshops designed to guide students in…
Barreto, Jose; Reilly, John; Brown, David; Frost. Laura; Coticone, Sulekha Rao; Dubetz, Terry Ann; Beharry, Zanna; Davis-McGibony, C. Michele; Ramoutar, Ria; Rudd, Gillian
2014-01-01
New technological developments have minimized training, hardware expense, and distribution problems for the production and use of instructional videos, and any science instructor can now make instructional videos for their classes. We created short "Khan style" videos for the topic of buffers in biochemistry and assigned them as…
The Discursive Construction of Group Cohesion in Problem-Based Learning Tutorials
Hendry, Gillian; Wiggins, Sally; Anderson, Tony
2016-01-01
Research has shown that educators may be reluctant to implement group work in their teaching due to concerns about students partaking in off-task behaviours. However, such off-task interactions have been shown to promote motivation, trust, and rapport-building. This paper details a study in which student groups were video recorded as they engaged…
Bastiaanssen, I.L.W.; Delsing, M.J.M.H.; Kroes, G.; Engels, R.C.M.E.; Veerman, J.W.
2014-01-01
Group care workers in residential youth care are considered important in influencing behavioral development of children. In spite of this, their role has largely been neglected in research on residential care. The aim of the current study was twofold. First, longitudinal changes in group care worker
Efficient characterization of high-dimensional parameter spaces for systems biology
Directory of Open Access Journals (Sweden)
Hafner Marc
2011-09-01
Full Text Available Abstract Background A biological system's robustness to mutations and its evolution are influenced by the structure of its viable space, the region of its space of biochemical parameters where it can exert its function. In systems with a large number of biochemical parameters, viable regions with potentially complex geometries fill a tiny fraction of the whole parameter space. This hampers explorations of the viable space based on "brute force" or Gaussian sampling. Results We here propose a novel algorithm to characterize viable spaces efficiently. The algorithm combines global and local explorations of a parameter space. The global exploration involves an out-of-equilibrium adaptive Metropolis Monte Carlo method aimed at identifying poorly connected viable regions. The local exploration then samples these regions in detail by a method we call multiple ellipsoid-based sampling. Our algorithm efficiently explores the nonconvex and poorly connected viable regions of different test problems. Most importantly, its computational effort scales linearly with the number of dimensions, in contrast to "brute force" sampling, which shows an exponential dependence on the number of dimensions. We also apply this algorithm to a simplified model of a biochemical oscillator with positive and negative feedback loops. A detailed characterization of the model's viable space captures well-known structural properties of circadian oscillators. Concretely, we find that model topologies with an essential negative feedback loop and a nonessential positive feedback loop provide the most robust fixed-period oscillations. Moreover, the connectedness of the model's viable space suggests that biochemical oscillators with varying topologies can evolve from one another. Conclusions Our algorithm permits an efficient analysis of high-dimensional, nonconvex, and poorly connected viable spaces characteristic of complex biological circuitry. It allows a systematic use of robustness as
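The contrast between "brute force" sampling and a local Metropolis-style exploration can be illustrated on a toy viable region, here a thin spherical shell; the accept-if-viable rule below is a drastic simplification of the paper's out-of-equilibrium adaptive method, kept only to show why local moves find viable points far more often than uniform draws:

```python
import numpy as np

def viable(x):
    # toy viability constraint: a thin spherical shell in parameter space
    r = np.linalg.norm(x)
    return 2.0 < r < 2.5

rng = np.random.default_rng(0)
d = 10

# brute force: uniform draws in [-3, 3]^d almost never hit the shell
hits = sum(viable(rng.uniform(-3, 3, d)) for _ in range(2000))

# local exploration: start from a known viable point, take small random steps
# and accept a step only if it stays viable
x = np.full(d, 2.2 / np.sqrt(d))      # norm 2.2, inside the shell
chain_hits = 0
for _ in range(2000):
    prop = x + rng.normal(0, 0.1, d)
    if viable(prop):
        x = prop
    chain_hits += viable(x)
```

The chain stays viable at every step by construction, while the uniform sampler's hit count collapses as the dimension grows, which is the motivation for the paper's algorithm.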
Chapman, Benjamin P; Weiss, Alexander; Duberstein, Paul R
2016-12-01
Statistical learning theory (SLT) is the statistical formulation of machine learning theory, a body of analytic methods common in "big data" problems. Regression-based SLT algorithms seek to maximize predictive accuracy for some outcome, given a large pool of potential predictors, without overfitting the sample. Research goals in psychology may sometimes call for high-dimensional regression. One example is criterion-keyed scale construction, where a scale with maximal predictive validity must be built from a large item pool. Using this as a working example, we first introduce a core principle of SLT methods: minimization of expected prediction error (EPE). Minimizing EPE is fundamentally different from maximizing the within-sample likelihood, and hinges on building a predictive model of sufficient complexity to predict the outcome well, without undue complexity leading to overfitting. We describe how such models are built and refined via cross-validation. We then illustrate how three common SLT algorithms (supervised principal components, regularization, and boosting) can be used to construct a criterion-keyed scale predicting all-cause mortality, using a large personality item pool within a population cohort. Each algorithm illustrates a different approach to minimizing EPE. Finally, we consider broader applications of SLT predictive algorithms, both as supportive analytic tools for conventional methods and as primary analytic tools in discovery-phase research. We conclude that despite their differences from the classic null-hypothesis testing approach, or perhaps because of them, SLT methods may hold value as a statistically rigorous approach to exploratory regression. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
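The EPE principle can be sketched with ridge regression as the regularization example: a near-zero penalty fits the training sample almost perfectly yet predicts poorly out of sample, so the penalty is chosen to minimize the cross-validated error. The setup below is an illustrative simulation, not the article's mortality analysis:

```python
import numpy as np

rng = np.random.default_rng(2)
n, p = 60, 200                          # far more "items" than subjects
beta = np.zeros(p); beta[:5] = 1.0      # only a few items truly predict the outcome
X = rng.standard_normal((n, p))
y = X @ beta + rng.standard_normal(n)

def cv_error(lam, folds=5):
    # k-fold cross-validation estimate of expected prediction error for ridge
    idx = np.arange(n); err = 0.0
    for f in range(folds):
        test = idx % folds == f
        w = np.linalg.solve(X[~test].T @ X[~test] + lam * np.eye(p),
                            X[~test].T @ y[~test])
        err += np.mean((X[test] @ w - y[test]) ** 2)
    return err / folds

# near-zero penalty: within-sample error collapses, but EPE (via CV) stays high
w0 = np.linalg.solve(X.T @ X + 1e-4 * np.eye(p), X.T @ y)
train_mse = np.mean((X @ w0 - y) ** 2)

lams = [1e-4, 1e-2, 1.0, 10.0, 100.0]
errors = [cv_error(l) for l in lams]
best = lams[int(np.argmin(errors))]
```

The gap between `train_mse` and the cross-validated errors is exactly the overfitting that EPE minimization guards against.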
Competing Risks Data Analysis with High-dimensional Covariates: An Application in Bladder Cancer
Directory of Open Access Journals (Sweden)
Leili Tapak
2015-06-01
Full Text Available Analysis of microarray data is associated with the methodological problems of high dimension and small sample size. Various methods have been used for variable selection in high-dimension, small-sample-size cases with a single survival endpoint. However, little effort has been directed toward addressing competing risks, where there is more than one failure risk. This study compared three typical variable selection techniques, including Lasso, elastic net, and likelihood-based boosting, for high-dimensional time-to-event data with competing risks. The performance of these methods was evaluated via a simulation study by analyzing a real dataset related to bladder cancer patients using time-dependent receiver operating characteristic (ROC) curves and bootstrap .632+ prediction error curves. The elastic net penalization method was shown to outperform Lasso and boosting. Based on the elastic net, 33 genes out of 1381 genes related to bladder cancer were selected. By fitting to the Fine and Gray model, eight genes were highly significant (P < 0.001). Among them, expression of RTN4, SON, IGF1R, SNRPE, PTGR1, PLEK, and ETFDH was associated with a decrease in survival time, whereas SMARCAD1 expression was associated with an increase in survival time. This study indicates that the elastic net has a higher capacity than Lasso and boosting for the prediction of survival time in bladder cancer patients. Moreover, genes selected by all methods improved the predictive power of the model based on only clinical variables, indicating the value of the information contained in the microarray features.
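A sketch of elastic net estimation via proximal gradient descent on a sparse high-dimensional problem; an ordinary linear model stands in for the Fine and Gray competing-risks model, and all dimensions and penalty values are illustrative:

```python
import numpy as np

def elastic_net(X, y, lam=0.1, alpha=0.5, iters=500):
    """Proximal gradient descent for
    (1/2n)||y - Xb||^2 + lam*(alpha*||b||_1 + (1-alpha)/2*||b||_2^2)."""
    n, p = X.shape
    b = np.zeros(p)
    # step size = 1 / Lipschitz constant of the smooth part
    step = 1.0 / (np.linalg.norm(X, 2) ** 2 / n + lam * (1 - alpha))
    for _ in range(iters):
        grad = X.T @ (X @ b - y) / n + lam * (1 - alpha) * b
        z = b - step * grad
        # soft-thresholding handles the l1 part and produces exact zeros
        b = np.sign(z) * np.maximum(np.abs(z) - step * lam * alpha, 0.0)
    return b

rng = np.random.default_rng(3)
n, p = 100, 300
beta = np.zeros(p); beta[:3] = 2.0      # sparse truth, e.g. a few relevant genes
X = rng.standard_normal((n, p))
y = X @ beta + 0.5 * rng.standard_normal(n)
b = elastic_net(X, y)
```

The mixed penalty recovers the few true predictors with large coefficients while keeping the remaining hundreds near zero, which is the selection behavior the study exploits.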
Srikrajang, Jantila; Pochamarn, Chomnard; Chittreecheur, Jittaporn; Apisarnthanarak, Anucha; Danchaivijitr, Somwang
2005-12-01
To examine the effect of an education program and a problem-solving work group on nursing practices for the prevention of needlestick and sharps injury. A quasi-experimental study design with a control group was conducted in the emergency and labor rooms at Sermngam Hospital, Lampang. All healthcare workers (HCWs) in the emergency and labor rooms were randomly assigned to the experimental or the control group from April 17, 2002 to September 3, 2002. Data collection included demographics, a participatory problem-solving plan, and a safety nursing practice observation recording form. The present study was divided into a two-month observation period, followed by a one-month intervention period and a two-month post-intervention observation period. Interventions included education and posters to promote safe nursing practices, peer reminders to avoid unsafe nursing practices, providing devices for recapping needles and small-sized trays to facilitate one-handed recapping, and making a hole in the lid of a sharps container. Nursing practices on the prevention of needlestick and sharps injury were prospectively monitored. Twelve HCWs (12/24; 50%) were randomly assigned to the experimental group and 12 (12/24; 50%) to the control group. There was no difference with respect to demographics and safety nursing practices on the prevention of needlestick and sharps injury during the pre-intervention period between the groups. Compared to the pre-intervention period, significant improvements in safety nursing practices for all nursing practice categories were observed in the experimental group after the intervention (P=0.001). Compared to the control group, all safety nursing practice categories were performed more often in the experimental group (P=0.001). The education program and problem-solving work group on nursing practices to prevent needlestick and sharps injury were effective and should be considered as an intervention to reduce needlestick and sharps injury in emergency and labor
Method of resonating groups in the Faddeev-Hahn equation formalism for three-body nuclear problem
Nasirov, M Z
2002-01-01
The Faddeev-Hahn equation formalism for the three-body nuclear problem is considered. For the solution of the equations, the method of resonating groups has been applied. Calculations of the tritium binding energy and the doublet nd-scattering length have been carried out. The results obtained show that the Faddeev-Hahn equation formalism is very simple and effective. (author)
Caetano, Raul; Vaeth, Patrice A. C.; Rodriguez, Lori A.
2012-01-01
The purpose of this study was to examine the association between acculturation, birthplace, and alcohol-related social problems across Hispanic national groups. A total of 5,224 Hispanic adults (18+ years) were interviewed using a multistage cluster sample design in Miami, New York, Philadelphia, Houston, and Los Angeles. Multivariate analysis…
Directory of Open Access Journals (Sweden)
Kateryna Novokhatska
2016-03-01
Full Text Available In recent years, materialized views (MVs) have been widely used to enhance database performance by storing pre-calculated results of resource-intensive queries in physical memory. In order to identify which queries may potentially be materialized, the database transaction log over a long period of time should be analyzed. The goal of the analysis is to distinguish resource-intensive and frequently used queries collected from the database log, and to optimize these queries through the implementation of MVs. In order to achieve greater efficiency, MVs were used not only for the optimization of single queries, but also for entire groups of queries that are similar in syntax and execution results. Thus, the problem stated in this article is the development of an approach that allows forming groups of queries with similar syntax around the most resource-intensive queries, in order to identify the list of potential candidates for materialization. For solving this problem, we have applied a categorical data clustering algorithm to the query grouping problem at the step of database log analysis and the search for materialization candidates. In the current work, the CLOPE algorithm was modified to cover the introduced problem. Statistical and timing indicators were taken into account in order to form the clusters around the most resource-intensive queries. Application of the modified CLOPE algorithm decreased the computational complexity of clustering and enhanced the quality of the formed groups.
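A compact sketch of CLOPE-style clustering applied to queries represented as token sets. The profit criterion S·N/W^r follows the standard CLOPE formulation; the statistical and timing weighting of the article's modified algorithm is omitted for brevity, and the example queries are invented:

```python
from collections import Counter

def clope(transactions, r=2.0, passes=3):
    """CLOPE-style clustering of transactions (here: sets of query tokens).
    Each cluster tracks S (total item occurrences), W (distinct items) and
    N (size); a transaction joins the cluster maximizing the gain in S*N/W**r."""
    clusters = []                        # each: {'occ': Counter, 'n': int}
    label = [None] * len(transactions)

    def gain(cl, t):
        s, w, n = sum(cl['occ'].values()), len(cl['occ']), cl['n']
        old = s * n / w ** r if n else 0.0
        s2, w2 = s + len(t), len(set(cl['occ']) | set(t))
        return s2 * (n + 1) / w2 ** r - old

    for _ in range(passes):
        for i, t in enumerate(transactions):
            if label[i] is not None:     # take t out of its current cluster
                cl = clusters[label[i]]
                cl['occ'].subtract(t)
                cl['occ'] = +cl['occ']   # drop zero counts
                cl['n'] -= 1
            best = None
            best_g = len(t) / len(set(t)) ** r   # gain of opening a new cluster
            for j, cl in enumerate(clusters):
                g = gain(cl, t)
                if g > best_g:
                    best, best_g = j, g
            if best is None:
                clusters.append({'occ': Counter(t), 'n': 1})
                best = len(clusters) - 1
            else:
                clusters[best]['occ'].update(t)
                clusters[best]['n'] += 1
            label[i] = best
    return label
```

Queries sharing most of their tokens end up in the same cluster, so each cluster can be examined for a materialization candidate.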
Khumsikiew, Jeerisuda; Donsamak, Sisira; Saeteaw, Manit
2015-01-01
Problem-based Learning (PBL) is an alternate method of instruction that incorporates basic elements of cognitive learning theory. Colleges of pharmacy use PBL to aid anticipated learning outcomes and practice competencies for pharmacy student. The purpose of this study were to implement and evaluate a model of small group PBL for 5th year pharmacy…
Skalická, Vera; Belsky, Jay; Stenseng, Frode; Wichstrøm, Lars
2015-01-01
In this Norwegian study, bidirectional relations between children's behavior problems and child-teacher conflict and closeness were examined, and the possibility of moderation of these associations by child-care group size was tested. Eight hundred and nineteen 4-year-old children were followed up in first grade. Results revealed reciprocal…
Maes, Marlies; Stevens, Gonneke W. J. M.; Verkuyten, Maykel
2014-01-01
Previous research has identified ethnic group identification as a moderator in the relationship between perceived ethnic discrimination and problem behaviors in ethnic minority children. However, little is known about the influence of religious and host national identification on this relationship. This study investigated the moderating role of…
Fickler, Robert; Lapkiewicz, Radek; Huber, Marcus; Lavery, Martin P J; Padgett, Miles J; Zeilinger, Anton
2014-07-30
Photonics has become a mature field of quantum information science, where integrated optical circuits offer a way to scale the complexity of the set-up as well as the dimensionality of the quantum state. On photonic chips, paths are the natural way to encode information. To distribute those high-dimensional quantum states over large distances, transverse spatial modes, like orbital angular momentum possessing Laguerre Gauss modes, are favourable as flying information carriers. Here we demonstrate a quantum interface between these two vibrant photonic fields. We create three-dimensional path entanglement between two photons in a nonlinear crystal and use a mode sorter as the quantum interface to transfer the entanglement to the orbital angular momentum degree of freedom. Thus our results show a flexible way to create high-dimensional spatial mode entanglement. Moreover, they pave the way to implement broad complex quantum networks where high-dimensionally entangled states could be distributed over distant photonic chips.
Chibanda, Dixon; Shetty, Avinash K; Tshimanga, Mufuta; Woelk, Godfrey; Stranix-Chibanda, Lynda; Rusakaniko, Simbarashe
2014-01-01
Postnatal depression (PND) is a major problem in low- and middle-income countries (LMICs). A total of 210 postpartum mothers attending primary care urban clinics were screened for PND at 6 weeks postpartum using the Edinburgh Postnatal Depression Scale (EPDS) and Diagnostic and Statistical Manual of Mental Disorders (Fourth Edition; DSM-IV) criteria for major depression. The HIV prevalence was 14.8%. Of the 210 enrolled postpartum mothers, 64 (33%) met DSM-IV criteria for depression. Using trained peer counselors, mothers with PND (n = 58) were randomly assigned to either group problem-solving therapy (PST, n = 30) or amitriptyline (n = 28). Of the 58 mothers with PND, 49 (85%) completed 6 weeks of group PST (n = 27) or pharmacotherapy (n = 22). At baseline, the mean EPDS score for participants randomized to group PST was 17.3 (standard deviation [SD] 3.7), while the group randomized to amitriptyline had a mean EPDS score of 17.9 (SD 3.9; P = .581). At 6 weeks postintervention, the drop in mean EPDS score was greater in the PST group (8.22, SD 3.6) compared to the amitriptyline group (10.7, SD 2.7; P = .0097). Group PST using peer counselors is feasible, acceptable, and more effective compared to pharmacotherapy in the treatment of PND. Group PST could be integrated into maternal and child health clinics and preventing mother-to-child transmission of HIV programs in LMICs.
A Discrete Group Search Optimizer for Hybrid Flowshop Scheduling Problem with Random Breakdown
National Research Council Canada - National Science Library
Cui, Zhe; Gu, Xingsheng
2014-01-01
...) together with a discrete group search optimizer algorithm (DGSO). In particular, two different working cases, the preempt-resume case and the preempt-repeat case, are considered under random breakdown...
A Group Theoretic Tabu Search Approach to the Traveling Salesman Problem
National Research Council Canada - National Science Library
Hall, Shane
2000-01-01
.... This research demonstrates a Group Theoretic Tabu Search (GTTS) Java algorithm for the TSP. The tabu search metaheuristic continuously finds near-optimal solutions to the TSP under various different implementations...
National Research Council Canada - National Science Library
Micetic, John
2004-01-01
... I) for lubricating hydroelectric turbines and associated governor systems. Products now being supplied by the lubrication industry for the same purpose are based on hydro-cracked paraffinic oils (Group II...
Niec, Larissa N; Barnett, Miya L; Prewett, Matthew S; Shanley Chatham, Jenelle R
2016-08-01
Although efficacious interventions exist for childhood conduct problems, a majority of families in need of services do not receive them. To address problems of treatment access and adherence, innovative adaptations of current interventions are needed. This randomized control trial investigated the relative efficacy of a novel format of parent-child interaction therapy (PCIT), a treatment for young children with conduct problems. Eighty-one families with 3- to 6-year-old children (71.6% boys, 85.2% White) with diagnoses of oppositional defiant or conduct disorder were randomized to individual PCIT (n = 42) or the novel format, Group PCIT. Parents completed standardized measures of children's conduct problems, parenting stress, and social support at intake, posttreatment, and 6-month follow-up. Therapist ratings, parent attendance, and homework completion provided measures of treatment adherence. Throughout treatment, parenting skills were assessed using the Dyadic Parent-Child Interaction Coding System. Parents in both group and individual PCIT reported significant improvements from intake to posttreatment and follow-up in their children's conduct problems and adaptive functioning, as well as significant decreases in parenting stress. Parents in both treatment conditions also showed significant improvements in their parenting skills. There were no interactions between time and treatment format. Contrary to expectation, parents in Group PCIT did not experience greater social support or treatment adherence. Group PCIT was not inferior to individual PCIT and may be a valuable format to reach more families in need of services. Future work should explore the efficiency and sustainability of Group PCIT in community settings. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
DEFF Research Database (Denmark)
Zhou, Chunfang; Kolmos, Anette
2013-01-01
Recent studies regard Problem and Project Based Learning (PBL) as providing a learning environment which fosters both individual and group creativity. This paper focuses on the question: In a PBL environment, how do students perceive the interplay between individual and group creativity......? Empirically, qualitative interviews were carried out with 53 students (12 groups) in Computer Science, Electronic Systems, Architecture and Design, and Medialogy at Aalborg University, Denmark. The data analysis shows that there are three aspects to the influences of a PBL environment on the interplay between...... creativity in the PBL environment....
The validation and assessment of machine learning: a game of prediction from high-dimensional data
DEFF Research Database (Denmark)
Pers, Tune Hannes; Albrechtsen, A; Holst, C
2009-01-01
In applied statistics, tools from machine learning are popular for analyzing complex and high-dimensional data. However, few theoretical results are available that could guide to the appropriate machine learning tool in a new application. Initial development of an overall strategy thus often...... the ideas, the game is applied to data from the Nugenob Study where the aim is to predict the fat oxidation capacity based on conventional factors and high-dimensional metabolomics data. Three players have chosen to use support vector machines, LASSO, and random forests, respectively....
Reduced basis ANOVA methods for partial differential equations with high-dimensional random inputs
Energy Technology Data Exchange (ETDEWEB)
Liao, Qifeng, E-mail: liaoqf@shanghaitech.edu.cn [School of Information Science and Technology, ShanghaiTech University, Shanghai 200031 (China); Lin, Guang, E-mail: guanglin@purdue.edu [Department of Mathematics & School of Mechanical Engineering, Purdue University, West Lafayette, IN 47907 (United States)
2016-07-15
In this paper we present a reduced basis ANOVA approach for partial differential equations (PDEs) with random inputs. The ANOVA method combined with stochastic collocation methods provides model reduction in high-dimensional parameter space through decomposing high-dimensional inputs into unions of low-dimensional inputs. In this work, to further reduce the computational cost, we investigate spatial low-rank structures in the ANOVA-collocation method, and develop efficient spatial model reduction techniques using hierarchically generated reduced bases. We present a general mathematical framework of the methodology, validate its accuracy and demonstrate its efficiency with numerical experiments.
Jeong, In Ju; Kim, Soo Jin
2017-04-01
The purpose of this study was to examine the effects of a group counseling program based on goal attainment theory on self-esteem, interpersonal relationships, and school adjustment of middle school students with emotional and behavioral problems. Forty-four middle school students with emotional and behavioral problems (22 in the experimental group and 22 in the control group) from G city participated in this study. Data were collected from July 30 to September 24, 2015. The experimental group received the 8-session program, scheduled once a week, with each session lasting 45 minutes. Outcome variables included self-esteem, interpersonal relationship, and school adjustment. There were significant increases in self-esteem (t=3.69, p=.001), interpersonal relationship (t=8.88, p<.001), and school adjustment. The group counseling program based on goal attainment theory is very effective in increasing self-esteem, interpersonal relationship, and school adjustment for middle school students with emotional and behavioral problems. Therefore, it is recommended that the group counseling program based on goal attainment theory be used as an effective psychiatric nursing intervention for mental health promotion and the prevention of mental illness in adolescents.
He, Yong; Sun, Li
2015-05-01
In this paper, we introduce a group scheduling model with general deteriorating jobs and learning effects in which deteriorating jobs and learning effects are both considered simultaneously. This means that the actual processing time of a job depends not only on the processing time of the jobs already processed, but also on its scheduled position. In our model, the group setup times are general linear functions of their starting times and the jobs in the same group have general position-dependent learning effects and time-dependent deterioration. The objective of scheduling problems is to minimise the makespan and the sum of completion times, respectively. We show that the problems remain solvable in polynomial time under the proposed model.
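The flavor of such models can be sketched with simple illustrative functional forms; these are placeholders chosen for readability, not the paper's general linear setup and learning functions. Group setup time is taken proportional to the group's start time, and a job's actual processing time combines position-based learning (r**a with a < 0) and time-dependent deterioration (b·t):

```python
def makespan(groups, a=-0.2, b=0.05, theta=0.1):
    """Makespan under toy group-scheduling dynamics: the setup of a group
    starting at time t costs theta * t, and a job with base time p in
    position r of its group, started at time t, takes p * r**a + b * t
    (position-dependent learning for a < 0, time-dependent deterioration)."""
    t = 0.0
    for group in groups:
        t += theta * t                  # group setup: linear in its start time
        for r, p in enumerate(group, start=1):
            t += p * r ** a + b * t     # actual processing time of this job
    return t
```

Learning (a < 0) shortens later positions, so stronger learning reduces the makespan and, within a group, placing longer jobs later pays off; ordering rules of this kind are what make such problems polynomially solvable.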
Directory of Open Access Journals (Sweden)
Ke Zhang
2003-10-01
Full Text Available Abstract. This study investigated the relative benefits of peer-controlled and moderated online collaboration during group problem solving. Thirty-five self-selected groups of four or five students were randomly assigned to the two conditions, which used the same online collaborative tool to solve twelve problem scenarios in an undergraduate statistics course. A score for the correctness of the solutions and a reasoning score were analyzed. A survey was administered to reveal differences in students' related attitudes. Three conclusions were reached: 1. Groups assigned to moderated forums displayed significantly higher reasoning scores than those in the peer-controlled condition, but the moderation did not affect correctness of solutions. 2. Students in the moderated forums reported being more likely to choose to use an optional online forum for future collaborations. 3. Students who reported having no difficulty during collaboration reported being more likely to choose to use an optional online forum in the future.
THE PROBLEMS OF SOCIALIZATION OF CHILDREN IN A CHILDREN’S MUSICAL GROUP
Directory of Open Access Journals (Sweden)
Larysa Ostapenko
2017-07-01
Full Text Available The article studies the process of the socialization of children in a musical group. The author has studied diverse factors of the socialization of children and its types (spontaneous socialization, relatively controlled socialization, and socially controlled socialization. The author has also characterized the stimulation of creative activity and described the need to be accepted by peers, which is realized through participation in children's music festivals. The notion of socialization was defined as a complex process of a child's personality development, especially during the school/teen age, whereby an individual acquires a personal identity and learns the norms, values, behavior, and social skills appropriate to his or her social position in the context of a musical group. Educational work conducted by teachers, family members, and society contributes to this process. School education within a musical group consists of activities organised to develop personality traits through practical creative communication. Schoolchildren's interpersonal relations are always based on social relations. It is shown that personality development in a children's musical group takes place in a social environment and through social communication. The key role here belongs to the motivation and stimulation of the schoolchildren's creative activity. Creative communication in a children's musical group turns out to be a powerful inner stimulus for children to fulfil their abilities. It pushes a child toward self-assertion and gaining authority among peers. The article argues that pedagogical guidance of the creative process can be provided professionally only in a well-organised musical group.
Directory of Open Access Journals (Sweden)
F. C. Cooper
2013-04-01
Full Text Available The fluctuation-dissipation theorem (FDT has been proposed as a method of calculating the response of the earth's atmosphere to a forcing. For this problem the high dimensionality of the relevant data sets makes truncation necessary. Here we propose a method of truncation based upon the assumption that the response to a localised forcing is spatially localised, as an alternative to the standard method of choosing a number of the leading empirical orthogonal functions. For systems where this assumption holds, the response to any sufficiently small non-localised forcing may be estimated using a set of truncations that are chosen algorithmically. We test our algorithm using 36 and 72 variable versions of a stochastic Lorenz 95 system of ordinary differential equations. We find that, for long integrations, the bias in the response estimated by the FDT is reduced from ~75% of the true response to ~30%.
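The truncation study above rests on the quasi-Gaussian FDT estimator, which builds a linear response operator from lagged covariances of the model trajectory. A minimal sketch in Python (the Euler-Maruyama integrator, parameter values, and discrete quadrature below are illustrative assumptions, not the authors' code):

```python
import numpy as np

def lorenz96_tendency(x, forcing=8.0):
    """Lorenz '96 tendencies: dx_k/dt = (x_{k+1} - x_{k-2}) x_{k-1} - x_k + F."""
    return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + forcing

def integrate(n=36, steps=20_000, dt=0.005, noise=0.5, seed=0):
    """Euler-Maruyama integration of a stochastic Lorenz '96 system."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(n)
    traj = np.empty((steps, n))
    for t in range(steps):
        x = x + dt * lorenz96_tendency(x) + np.sqrt(dt) * noise * rng.standard_normal(n)
        traj[t] = x
    return traj

def fdt_operator(traj, max_lag=200):
    """Quasi-Gaussian FDT operator: sum over lags of C(lag) @ C(0)^{-1}.
    Multiply by the timestep dt to approximate the integral over lag time."""
    a = traj - traj.mean(axis=0)
    c0_inv = np.linalg.inv(a.T @ a / len(a))
    L = np.zeros((traj.shape[1], traj.shape[1]))
    for lag in range(max_lag):
        c_lag = a[lag:].T @ a[:len(a) - lag] / (len(a) - lag)  # C(lag)_{ij} = <x_i(t+lag) x_j(t)>
        L += c_lag @ c0_inv
    return L
```

The estimated response to a small constant forcing `df` is then approximately `dt * L @ df`; the paper's alternative is to truncate this calculation to spatially localised subsets of variables rather than to a number of leading EOFs.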
Personalized Education; Solving a Group Formation and Scheduling Problem for Educational Content
Bahargam, Sanaz; Erdos, Dóra; Bestavros, Azer; Terzi, Evimaria
2015-01-01
Whether teaching in a classroom or a Massive Online Open Course it is crucial to present the material in a way that benefits the audience as a whole. We identify two important tasks to solve towards this objective; (1) group students so that they can maximally benefit from peer interaction and (2) find an optimal schedule of the educational…
Talking through the Problems: A Study of Discourse in Peer-Led Small Groups
Repice, Michelle D.; Sawyer, R. Keith; Hogrebe, Mark C.; Brown, Patrick L.; Luesse, Sarah B.; Gealy, Daniel J.; Frey, Regina F.
2016-01-01
Increasingly, studies are investigating the factors that influence student discourse in science courses, and specifically the mechanisms and discourse processes within small groups, to better understand the learning that takes place as students work together. This paper contributes to a growing body of research by analyzing how students engage in…
Generation of high-dimensional energy-time-entangled photon pairs
Zhang, Da; Zhang, Yiqi; Li, Xinghua; Zhang, Dan; Cheng, Lin; Li, Changbiao; Zhang, Yanpeng
2017-11-01
High-dimensional entangled photon pairs have many excellent properties compared to two-dimensional entangled two-photon states, such as greater information capacity, stronger nonlocality, and higher security. Traditionally, the degree of freedom that can produce high-dimensional entanglement mainly consists of angular momentum and energy time. In this paper, we propose a type of high-dimensional energy-time-entangled qudit, which is different from the traditional model with an extended propagation path. In addition, our method mainly focuses on the generation with multiple frequency modes, while two- and three-dimensional frequency-entangled qudits are examined as examples in detail through the linear or nonlinear optical response of the medium. The generation of high-dimensional energy-time-entangled states can be verified by coincidence counts in the damped Rabi oscillation regime, where the paired Stokes-anti-Stokes wave packet is determined by the structure of resonances in the third-order nonlinearity. Finally, we extend the dimension to N in the sequential-cascade mode. Our results have potential applications in quantum communication and quantum computation.
Characterization of differentially expressed genes using high-dimensional co-expression networks
DEFF Research Database (Denmark)
Coelho Goncalves de Abreu, Gabriel; Labouriau, Rodrigo S.
2010-01-01
We present a technique to characterize differentially expressed genes in terms of their position in a high-dimensional co-expression network. The set-up of Gaussian graphical models is used to construct representations of the co-expression network in such a way that redundancy and the propagation...
Penalized estimation for competing risks regression with applications to high-dimensional covariates
DEFF Research Database (Denmark)
Ambrogi, Federico; Scheike, Thomas H.
2016-01-01
High-dimensional regression has become an increasingly important topic for many research fields. For example, biomedical research generates an increasing amount of data to characterize patients' bio-profiles (e.g. from a genomic high-throughput assay). The increasing complexity in the characterization of patients' bio-profiles is added to the complexity related to the prolonged follow-up of patients with the registration of the occurrence of possible adverse events. This information may offer useful insight into disease dynamics and in identifying subsets of patients with worse prognosis and better response to the therapy. Although in the last years the number of contributions for coping with high and ultra-high-dimensional data in standard survival analysis has increased (Witten and Tibshirani, 2010. Survival analysis with high-dimensional covariates. Statistical Methods in Medical...
Ferdosi, Bilkis J.; Buddelmeijer, Hugo; Trager, Scott; Wilkinson, Michael H.F.; Roerdink, Jos B.T.M.
2010-01-01
Data sets in astronomy are growing to enormous sizes. Modern astronomical surveys provide not only image data but also catalogues of millions of objects (stars, galaxies), each object with hundreds of associated parameters. Exploration of this very high-dimensional data space poses a huge challenge.
High-Dimensional Exploratory Item Factor Analysis by a Metropolis-Hastings Robbins-Monro Algorithm
Cai, Li
2010-01-01
A Metropolis-Hastings Robbins-Monro (MH-RM) algorithm for high-dimensional maximum marginal likelihood exploratory item factor analysis is proposed. The sequence of estimates from the MH-RM algorithm converges with probability one to the maximum likelihood solution. Details on the computer implementation of this algorithm are provided. The…
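The Robbins-Monro half of the MH-RM algorithm is a stochastic approximation recursion with decaying gains. Setting the item factor analysis machinery aside, the core update can be illustrated on a toy root-finding problem (everything below is a generic sketch, not Cai's implementation):

```python
import numpy as np

def robbins_monro(noisy_grad, theta0, n_iter=5000, seed=0):
    """Robbins-Monro recursion theta_{k+1} = theta_k - gamma_k * g_k with
    gains gamma_k = 1/k, which satisfy sum(gamma) = inf and sum(gamma^2) < inf."""
    rng = np.random.default_rng(seed)
    theta = float(theta0)
    for k in range(1, n_iter + 1):
        theta -= (1.0 / k) * noisy_grad(theta, rng)
    return theta

# Toy example: find the root of E[g(theta)] where g(theta) = theta - 2 + noise.
noisy = lambda theta, rng: (theta - 2.0) + rng.standard_normal()
```

In MH-RM the noisy gradient is the complete-data log-likelihood gradient evaluated at Metropolis-Hastings imputations of the latent factor scores; the decaying gains average out the Monte Carlo noise so the sequence converges to the maximum likelihood solution.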
General report on legal problems in radiation protection. Working group 4
Energy Technology Data Exchange (ETDEWEB)
NONE
1995-12-31
The report analyses the legal implications of the ICRP recommendations in Publication 60, issued in 1991, and especially how some of their less definite concepts are given regulatory form. The preparation of the new EC directive on radiation protection and the attempt by the IAEA and NEA to integrate the ICRP radiological protection principles with the nuclear safety principles are also examined. A special paragraph deals with the long-debated question of exemptions. The report then analyses the right to be informed and the obligation to inform in the field of radiation protection of the public, highlighting the different approaches in the regulatory systems developed during the past years at Community level and in the US. The problems of coordination between the provisions of the EC and EURATOM treaties on environmental protection and radiation protection, respectively, are then considered, partly with a view to the possible merging of these provisions into a single Treaty. Lastly, some considerations are developed concerning the different possible approaches to compensation for potentially radiation-induced diseases. 27 refs., 1 tab.
Blood lead levels in a group of children: the potential risk factors and health problems.
AbuShady, Mones M; Fathy, Hanan A; Fathy, Gihan A; Fatah, Samer Abd El; Ali, Alaa; Abbas, Mohamed A
To investigate blood lead levels in schoolchildren in two areas of Egypt in order to understand current lead pollution exposure and its risk factors, aiming to improve prevention policies. This was a cross-sectional study in children (n=400) aged 6-12 years recruited from two areas in Egypt (industrial and urban). Blood lead levels were measured using an atomic absorption method. Detailed questionnaires on sources of lead exposure and history of school performance and any behavioral changes were obtained. The mean blood lead level in the urban area (Dokki) was 5.45±3.90 μg/dL, while that in the industrial area (Helwan) was 10.37±7.94 μg/dL, a statistically significant difference between the two areas. Fewer children in Dokki had blood lead levels ≥10 μg/dL than in Helwan, where the proportion was 42%. A significant association was found between abnormal behavior and pallor and blood lead levels ≥10 μg/dL, when compared with children with blood lead levels below 10 μg/dL. Lead remains a public health problem in Egypt. High blood lead levels were significantly associated with bad health habits and housing with increased exposure, as well as with abnormal behavior and pallor. Copyright © 2017 Sociedade Brasileira de Pediatria. Published by Elsevier Editora Ltda. All rights reserved.
Comparison of clustering methods for high-dimensional single-cell flow and mass cytometry data.
Weber, Lukas M; Robinson, Mark D
2016-12-01
Recent technological developments in high-dimensional flow cytometry and mass cytometry (CyTOF) have made it possible to detect expression levels of dozens of protein markers in thousands of cells per second, allowing cell populations to be characterized in unprecedented detail. Traditional data analysis by "manual gating" can be inefficient and unreliable in these high-dimensional settings, which has led to the development of a large number of automated analysis methods. Methods designed for unsupervised analysis use specialized clustering algorithms to detect and define cell populations for further downstream analysis. Here, we have performed an up-to-date, extensible performance comparison of clustering methods for high-dimensional flow and mass cytometry data. We evaluated methods using several publicly available data sets from experiments in immunology, containing both major and rare cell populations, with cell population identities from expert manual gating as the reference standard. Several methods performed well, including FlowSOM, X-shift, PhenoGraph, Rclusterpp, and flowMeans. Among these, FlowSOM had extremely fast runtimes, making this method well-suited for interactive, exploratory analysis of large, high-dimensional data sets on a standard laptop or desktop computer. These results extend previously published comparisons by focusing on high-dimensional data and including new methods developed for CyTOF data. R scripts to reproduce all analyses are available from GitHub (https://github.com/lmweber/cytometry-clustering-comparison), and pre-processed data files are available from FlowRepository (FR-FCM-ZZPH), allowing our comparisons to be extended to include new clustering methods and reference data sets. © 2016 The Authors. Cytometry Part A published by Wiley Periodicals, Inc. on behalf of ISAC.
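Comparisons of this kind score each automated clustering against the manual-gating reference partition, commonly with the adjusted Rand index computed from the contingency table of the two partitions. A self-contained sketch of that computation (a generic illustration; the paper's evaluation uses its own R scripts):

```python
import numpy as np

def comb2(x):
    """Number of unordered pairs: C(x, 2), vectorized."""
    return x * (x - 1) / 2.0

def adjusted_rand_index(labels_true, labels_pred):
    """Adjusted Rand index between a reference partition (e.g. manual gating)
    and an automated clustering, from the contingency table of label counts."""
    t = np.unique(labels_true, return_inverse=True)[1]
    p = np.unique(labels_pred, return_inverse=True)[1]
    table = np.zeros((t.max() + 1, p.max() + 1))
    np.add.at(table, (t, p), 1)
    index = comb2(table).sum()
    sum_a = comb2(table.sum(axis=1)).sum()
    sum_b = comb2(table.sum(axis=0)).sum()
    expected = sum_a * sum_b / comb2(len(t))
    max_index = (sum_a + sum_b) / 2.0
    return (index - expected) / (max_index - expected)
```

A permutation of cluster labels leaves the index unchanged, and chance-level agreement scores near zero, which is what makes the ARI suitable for comparing clusterings with different numbers of clusters.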
SNP interaction detection with Random Forests in high-dimensional genetic data.
Winham, Stacey J; Colby, Colin L; Freimuth, Robert R; Wang, Xin; de Andrade, Mariza; Huebner, Marianne; Biernacka, Joanna M
2012-07-15
Identifying variants associated with complex human traits in high-dimensional data is a central goal of genome-wide association studies. However, complicated etiologies such as gene-gene interactions are ignored by the univariate analysis usually applied in these studies. Random Forests (RF) are a popular data-mining technique that can accommodate a large number of predictor variables and allow for complex models with interactions. RF analysis produces measures of variable importance that can be used to rank the predictor variables. Thus, single nucleotide polymorphism (SNP) analysis using RFs is gaining popularity as a potential filter approach that considers interactions in high-dimensional data. However, the impact of data dimensionality on the power of RF to identify interactions has not been thoroughly explored. We investigate the ability of rankings from variable importance measures to detect gene-gene interaction effects and their potential effectiveness as filters compared to p-values from univariate logistic regression, particularly as the data becomes increasingly high-dimensional. RF effectively identifies interactions in low dimensional data. As the total number of predictor variables increases, probability of detection declines more rapidly for interacting SNPs than for non-interacting SNPs, indicating that in high-dimensional data the RF variable importance measures are capturing marginal effects rather than capturing the effects of interactions. While RF remains a promising data-mining technique that extends univariate methods to condition on multiple variables simultaneously, RF variable importance measures fail to detect interaction effects in high-dimensional data in the absence of a strong marginal component, and therefore may not be useful as a filter technique that allows for interaction effects in genome-wide data.
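The filter approach under study ranks SNPs by an RF variable importance measure and keeps the top-ranked ones for follow-up. A sketch using scikit-learn's RandomForestClassifier as a stand-in (the simulation design, effect sizes, and helper names below are illustrative assumptions, not the authors' protocol):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def simulate_snps(n=500, p=50, seed=0):
    """Simulate genotypes (0/1/2) where SNP 0 and SNP 1 jointly drive case status."""
    rng = np.random.default_rng(seed)
    X = rng.integers(0, 3, size=(n, p))
    logit = 1.2 * (X[:, 0] > 0) * (X[:, 1] > 0) - 0.6
    y = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))
    return X, y.astype(int)

def rf_importance_ranking(X, y, n_trees=500, seed=0):
    """Rank SNPs (best first) by Gini variable importance from a Random Forest."""
    rf = RandomForestClassifier(n_estimators=n_trees, random_state=seed)
    rf.fit(X, y)
    return np.argsort(rf.feature_importances_)[::-1]
```

The paper's point is the behaviour of this ranking as `p` grows: with thousands of noise SNPs and a purely interacting pair with little marginal effect, the interacting SNPs slide down the ranking, so the filter retains them mainly through whatever marginal component they carry.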
Directory of Open Access Journals (Sweden)
Mazur Valerij Anatol'evich
2011-09-01
Full Text Available The question of the regulatory framework for the physical education of special medical group students, and of their physical condition in particular, is elaborated. It is found that the current program does not address this question, although the assessment of individual standards for students' physical condition was envisaged in the programs of 1977 and 1982. The need for such an assessment is indicated by a large number of Ukrainian and foreign pediatricians and specialists in therapeutic physical culture. At the same time, standards for assessing these indicators have not been developed. This complicates the formation of students' positive motivation for regular classes and does not promote their self-confidence, their sense of their own capabilities, or effective monitoring of exercise in its various forms. The findings suggest the need to define an optimal set of tests and functional tests to assess the physical condition of special medical group students with various diseases, and to develop appropriate evaluation standards for these indicators.
Hesitant Fuzzy Soft Sets with Application in Multicriteria Group Decision Making Problems
Directory of Open Access Journals (Sweden)
Jian-qiang Wang
2015-01-01
Full Text Available Soft sets have been regarded as a useful mathematical tool for dealing with uncertainty. In recent years, many scholars have shown intense interest in soft sets and have extended standard soft sets to intuitionistic fuzzy soft sets, interval-valued fuzzy soft sets, and generalized fuzzy soft sets. In this paper, hesitant fuzzy soft sets are defined by combining fuzzy soft sets with hesitant fuzzy sets, and some operations on hesitant fuzzy soft sets based on the Archimedean t-norm and Archimedean t-conorm are defined. Four aggregation operators, namely the HFSWA, HFSWG, GHFSWA, and GHFSWG operators, are given. Based on these operators, a multicriteria group decision making approach with hesitant fuzzy soft sets is proposed. To demonstrate its accuracy and applicability, the approach is finally employed on a numerical example.
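For the algebraic (product) member of the Archimedean family, a hesitant fuzzy weighted averaging operator combines one membership value from each hesitant element via 1 - prod((1 - g_i)^{w_i}). A small sketch of that aggregation (function name and rounding convention are assumptions; the paper's HFSWA operator additionally ranges over soft-set parameters):

```python
from itertools import product

def hfwa(hfes, weights):
    """Hesitant fuzzy weighted averaging under the algebraic t-conorm:
    the union, over all choices (g_1, ..., g_n) with g_i in h_i, of
    1 - prod_i (1 - g_i)^{w_i}.  `hfes` is a list of hesitant fuzzy
    elements (lists of membership degrees); `weights` sum to 1."""
    assert abs(sum(weights) - 1.0) < 1e-9
    out = set()
    for combo in product(*hfes):
        val = 1.0
        for g, w in zip(combo, weights):
            val *= (1.0 - g) ** w
        out.add(round(1.0 - val, 6))  # round to merge float duplicates
    return sorted(out)
```

The result is itself a hesitant fuzzy element; the decision approach then scores each alternative by aggregating its criterion values this way and comparing the aggregated elements.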
Directory of Open Access Journals (Sweden)
N. S. Morozova
2015-01-01
Full Text Available The article considers a decentralized approach to the formation control of a group of agents, which simulate mobile autonomous robots. The agents use only local information limited by the covering range of their sensors. The agents have to build and maintain a formation that fits the defined target geometric formation structure with the desired accuracy while moving to the target point. At any point in time the number of agents in the group can change unexpectedly (for example, as a result of an agent failure or if a new agent joins the group. The aim of the article is to provide the base control rule, which solves the formation control problem, and to develop its modifications, which provide correct behavior in case the number of agents in the group is not equal to the size of the target geometric formation structure. The proposed base control rule, developed by the author, uses the method of virtual leaders. The coordinates of the virtual leaders, and also the priority to follow a specific leader, are calculated by each agent itself according to specific rules. The following results are presented in the article: the base control rule for solving the formation control problem, its modifications for the cases when the number of agents is greater or less than the size of the target geometric formation structure, and computer modeling results proving the efficiency of the modified control rules. The specific feature of the control rule developed by the author is that each agent itself calculates the virtual leaders and each agent performs a dynamic choice of its place within the formation (there is no predefined one-to-one relation between agents and places within the geometric formation structure. The results provided in this article can be used in robotics for developing control algorithms for tasks that require preserving specific relative positions among the agents while moving. One of the
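Stripped of the dynamic slot assignment and sensing-range constraints described above, the virtual-leader idea reduces to each agent steering toward the leader position plus its slot's offset in the target formation. A toy discrete-time sketch under these simplifying assumptions (the gain, formation shape, and function names are invented for illustration):

```python
import numpy as np

def step(positions, leader, offsets, gain=0.2):
    """One discrete control step: each agent moves a fraction `gain` of the
    way toward its virtual-leader target (leader position + its slot offset)."""
    targets = leader + offsets
    return positions + gain * (targets - positions)

def simulate(n_steps=200, seed=0):
    """Drive four agents from random start positions into a unit-square
    formation anchored at the target point (10, 10)."""
    rng = np.random.default_rng(seed)
    offsets = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
    positions = rng.uniform(-5, 5, size=(4, 2))
    leader = np.array([10.0, 10.0])
    for _ in range(n_steps):
        positions = step(positions, leader, offsets)
    return positions, leader + offsets
```

In the paper's setting the offsets are not pre-assigned: each agent computes the virtual leaders and picks its slot dynamically, which is what lets the formation absorb agents joining or failing mid-run.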
Directory of Open Access Journals (Sweden)
Rosario García González
2009-12-01
INTRODUCTION: diabetes mellitus can affect working activities, and different aspects of working activities can hinder the accomplishment of diabetes treatment requirements. Nevertheless, there are not enough papers on the matter, and the purpose of this study was to identify possible difficulties faced by a group of people with diabetes in carrying out working activities. OBJECTIVES: to determine the frequency of working difficulties acknowledged by the group, to characterize the difficulties found, and to identify opinions on the relationship between diabetes and working activities. METHODS: the universe comprised patients admitted to the day service of the Diabetes Care Center from September to December 2006, and the sample included those engaged in working activities at the time and those who retired in 2005. Surveys and group discussions were used to collect information. Quantitative data were processed using descriptive statistics, and data from the group discussions were processed using an inductive exploratory qualitative method. RESULTS: 25% of workers and 35% of retirees recognized some interference between diabetes and working activities. The problems identified were difficulties in meeting dietary requirements (75%), the extension of the working day (21.6%), physical efforts (42.2%), and a lack of information among managers to understand the diabetes situation. All retirees identified diabetes mellitus as the reason for their decision to retire from work. CONCLUSIONS: problems in facing working life were present among both workers and retirees. The major problems identified included dietary requirements, physical efforts, and the extension of working days. The group studied considered that more information for managers and practical guidance for people with diabetes mellitus could decrease the problems faced.
Koufogiannakis, Denise; Buckingham, Jeanette; Alibhai, Arif; Rayner, David
2005-09-01
Librarians at the University of Alberta have been involved with teaching undergraduate medical and dental education for several years. After 1 year of increased librarian involvement at the problem-based learning (PBL), small-group level, informal feedback from faculty and students suggested that librarians' participation in PBL groups was beneficial. There was, however, no real evidence to support this claim or justify the high demand on librarians' time. The study aimed to determine whether having a librarian present in the small-group, problem-based learning modules for first-year medical and dental students results in an improved understanding of evidence-based medicine concepts, the nature of medical literature, and information access skills. One hundred and sixty-four first-year medical and dental students participated in the study. There were a total of 18 PBL groups, each with approximately nine students and one faculty tutor. Six librarians participated and were assigned randomly to the six intervention groups. Students were given pre- and post-tests at the outset and upon completion of the 6-week course. Post-test scores showed that there was a small positive librarian impact, but final exam scores showed no impact. There was also no difference in attitudes or comfort levels between students who had a librarian in their group and those who did not. Impact was not sufficient to warrant continued participation of librarians in PBL. In future instruction, librarians at the John W. Scott Health Sciences Library will continue to teach at the larger group level.
Boot, Walter R; Simons, Daniel J; Stothart, Cary; Stutts, Cassie
2013-07-01
To draw causal conclusions about the efficacy of a psychological intervention, researchers must compare the treatment condition with a control group that accounts for improvements caused by factors other than the treatment. Using an active control helps to control for the possibility that improvement by the experimental group resulted from a placebo effect. Although active control groups are superior to "no-contact" controls, only when the active control group has the same expectation of improvement as the experimental group can we attribute differential improvements to the potency of the treatment. Despite the need to match expectations between treatment and control groups, almost no psychological interventions do so. This failure to control for expectations is not a minor omission-it is a fundamental design flaw that potentially undermines any causal inference. We illustrate these principles with a detailed example from the video-game-training literature showing how the use of an active control group does not eliminate expectation differences. The problem permeates other interventions as well, including those targeting mental health, cognition, and educational achievement. Fortunately, measuring expectations and adopting alternative experimental designs makes it possible to control for placebo effects, thereby increasing confidence in the causal efficacy of psychological interventions. © The Author(s) 2013.
Su, Shaobing; Li, Xiaoming; Zhang, Liying; Lin, Danhua; Zhang, Chen; Zhou, Yuejiao
2014-01-01
HIV risk and mental health problems are prevalent among female sex workers (FSWs) in China. The purpose of this research was to study age group differences in HIV risk and mental health problems in this population. In the current study, we divided a sample of 1022 FSWs into three age groups (≤ 20 years, 21-34 years, and ≥ 35 years). Results showed that among the three groups (1) older FSWs (≥ 35 years) were likely to be socioeconomically disadvantaged (e.g., rural residency, little education, employment in low-paying venues, and low monthly income); (2) older FSWs reported the highest rates of inconsistent, ineffective condom use, and sexually transmitted diseases history; (3) younger FSWs (≤ 20 years) reported the highest level of depression, suicidal thoughts and suicide attempts, regular-partner violence, and substance use; (4) all health-related risks except casual-partner violence were more prevalent among older and younger FSWs than among FSWs aged 21-34 years; and (5) age had a significant effect on all health indicators except suicide attempts after controlling for several key demographic factors. These findings indicate the need for intervention efforts to address varying needs among FSWs in different age groups. Specific interventional efforts are needed to reduce older FSWs' exposure to HIV risk; meanwhile, more attention should be given to improve FSWs' mental health status, especially among younger FSWs.
Directory of Open Access Journals (Sweden)
Vladimir eKozunov
2015-04-01
Full Text Available Although MEG/EEG signals are highly variable between subjects, they allow characterizing systematic changes of cortical activity in both space and time. Traditionally a two-step procedure is used. The first step is a transition from sensor to source space by means of solving an ill-posed inverse problem for each subject individually. The second is mapping of cortical regions consistently active across subjects. In practice the first step often leads to a set of active cortical regions whose locations and timecourses display a great amount of interindividual variability, hindering the subsequent group analysis. We propose Group Analysis Leads to Accuracy (GALA), a solution that combines the two steps into one. GALA takes advantage of individual variations of cortical geometry and sensor locations and exploits the ensuing variability in the electromagnetic forward model as a source of additional information. We assume that for different subjects functionally identical cortical regions are located in close proximity and partially overlap, and that their timecourses are correlated. This relaxed similarity constraint on the inverse solution can be expressed within a probabilistic framework, allowing for an iterative algorithm solving the inverse problem jointly for all subjects. A systematic simulation study showed that GALA, as compared with the standard min-norm approach, improves the accuracy of true activity recovery, when accuracy is assessed both in terms of spatial proximity of the estimated and true activations and in terms of correct specification of the spatial extent of the activated regions. This improvement, obtained without using any noise normalization techniques for either solution, is preserved for a wide range of between-subject variations in both spatial and temporal features of regional activation. The corresponding activation timecourses exhibit significantly higher similarity across subjects. Similar results were obtained for a real MEG dataset of face
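The min-norm baseline that GALA is compared against solves each subject's underdetermined inverse problem with Tikhonov-regularised minimum-norm estimation. A one-function sketch (the regularisation constant and dimensions below are illustrative assumptions):

```python
import numpy as np

def min_norm_inverse(G, y, lam=1e-2):
    """Regularized minimum-norm solution of the underdetermined system y = G s,
    where G is the (sensors x sources) forward model and y the sensor data:
    s_hat = G^T (G G^T + lam * I)^{-1} y."""
    m = G.shape[0]
    return G.T @ np.linalg.solve(G @ G.T + lam * np.eye(m), y)
```

GALA's departure from this baseline is that the inverse problem is not solved per subject: the iterative algorithm couples all subjects' source estimates through the similarity prior, so each subject's forward model constrains the others.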
Compressively Characterizing High-Dimensional Entangled States with Complementary, Random Filtering
Directory of Open Access Journals (Sweden)
Gregory A. Howland
2016-05-01
Full Text Available The resources needed to conventionally characterize a quantum system are overwhelmingly large for high-dimensional systems. This obstacle may be overcome by abandoning traditional cornerstones of quantum measurement, such as general quantum states, strong projective measurement, and assumption-free characterization. Following this reasoning, we demonstrate an efficient technique for characterizing high-dimensional, spatial entanglement with one set of measurements. We recover sharp distributions with local, random filtering of the same ensemble in momentum followed by position—something the uncertainty principle forbids for projective measurements. Exploiting the expectation that entangled signals are highly correlated, we use fewer than 5000 measurements to characterize a 65,536-dimensional state. Finally, we use entropic inequalities to witness entanglement without a density matrix. Our method represents the sea change unfolding in quantum measurement, where methods influenced by the information theory and signal-processing communities replace unscalable, brute-force techniques—a progression previously followed by classical sensing.
Distribution of high-dimensional entanglement via an intra-city free-space link
Steinlechner, Fabian; Ecker, Sebastian; Fink, Matthias; Liu, Bo; Bavaresco, Jessica; Huber, Marcus; Scheidl, Thomas; Ursin, Rupert
2017-07-01
Quantum entanglement is a fundamental resource in quantum information processing and its distribution between distant parties is a key challenge in quantum communications. Increasing the dimensionality of entanglement has been shown to improve robustness and channel capacities in secure quantum communications. Here we report on the distribution of genuine high-dimensional entanglement via a 1.2-km-long free-space link across Vienna. We exploit hyperentanglement, that is, simultaneous entanglement in polarization and energy-time bases, to encode quantum information, and observe high-visibility interference for successive correlation measurements in each degree of freedom. These visibilities impose lower bounds on entanglement in each subspace individually and certify four-dimensional entanglement for the hyperentangled system. The high-fidelity transmission of high-dimensional entanglement under real-world atmospheric link conditions represents an important step towards long-distance quantum communications with more complex quantum systems and the implementation of advanced quantum experiments with satellite links.
High-dimensional quantum state transfer through a quantum spin chain
Qin, Wei; Wang, Chuan; Long, Gui Lu
2013-01-01
In this paper, we investigate a high-dimensional quantum state transfer protocol. An arbitrary unknown high-dimensional state can be transferred with high fidelity between two remote registers through an XX coupling spin chain of arbitrary length. The evolution of the state transfer is determined by the natural dynamics of the chain without external modulation and coupling strength engineering. As a consequence, entanglement distribution with a high efficiency can be achieved. Also the strong field and high spin quantum number can partly counteract the effect of finite temperature to ensure the high fidelity of the protocol when the quantum data bus is in the thermal equilibrium state under an external magnetic field.
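In the single-excitation sector, the XX chain's dynamics reduce to a tridiagonal hopping Hamiltonian, and the transfer fidelity is the modulus of one matrix element of its propagator. A numerical sketch under that standard reduction (uniform couplings assumed; the paper's protocol details are not reproduced):

```python
import numpy as np

def transfer_fidelity(n_sites, J, t):
    """Amplitude for an excitation prepared at site 1 to be found at site n
    after time t, in the single-excitation sector of a uniform XX chain
    (hbar = 1).  H is the tridiagonal nearest-neighbour hopping matrix."""
    H = J * (np.diag(np.ones(n_sites - 1), 1) + np.diag(np.ones(n_sites - 1), -1))
    evals, evecs = np.linalg.eigh(H)
    U = evecs @ np.diag(np.exp(-1j * evals * t)) @ evecs.conj().T  # e^{-iHt}
    return abs(U[-1, 0])
```

For two sites the amplitude is |sin(Jt)|, so perfect transfer occurs at t = pi/(2J); for longer uniform chains the peak fidelity is below 1, which is why protocols rely on the chain's natural dispersion rather than on engineered couplings, as the abstract notes.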
Statistical Analysis for High-Dimensional Data : The Abel Symposium 2014
Bühlmann, Peter; Glad, Ingrid; Langaas, Mette; Richardson, Sylvia; Vannucci, Marina
2016-01-01
This book features research contributions from The Abel Symposium on Statistical Analysis for High Dimensional Data, held in Nyvågar, Lofoten, Norway, in May 2014. The focus of the symposium was on statistical and machine learning methodologies specifically developed for inference in “big data” situations, with particular reference to genomic applications. The contributors, who are among the most prominent researchers on the theory of statistics for high dimensional inference, present new theories and methods, as well as challenging applications and computational solutions. Specific themes include, among others, variable selection and screening, penalised regression, sparsity, thresholding, low dimensional structures, computational challenges, non-convex situations, learning graphical models, sparse covariance and precision matrices, semi- and non-parametric formulations, multiple testing, classification, factor models, clustering, and preselection. Highlighting cutting-edge research and casting light on...
Su, Yapeng; Shi, Qihui; Wei, Wei
2017-02-01
New insights into cellular heterogeneity in the last decade have provoked the development of a variety of single-cell omics tools at a lightning pace. The resultant high-dimensional single cell data generated by these tools require new theoretical approaches and analytical algorithms for effective visualization and interpretation. In this review, we briefly survey the state-of-the-art single cell proteomic tools with a particular focus on data acquisition and quantification, followed by an elaboration of a number of statistical and computational approaches developed to date for dissecting the high-dimensional single cell data. The underlying assumptions, unique features, and limitations of the analytical methods with the designated biological questions they seek to answer will be discussed. Particular attention will be given to those information theoretical approaches that are anchored in a set of first principles of physics and can yield detailed (and often surprising) predictions. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Estimating and testing high-dimensional mediation effects in epigenetic studies.
Zhang, Haixiang; Zheng, Yinan; Zhang, Zhou; Gao, Tao; Joyce, Brian; Yoon, Grace; Zhang, Wei; Schwartz, Joel; Just, Allan; Colicino, Elena; Vokonas, Pantel; Zhao, Lihui; Lv, Jinchi; Baccarelli, Andrea; Hou, Lifang; Liu, Lei
2016-10-15
High-dimensional DNA methylation markers may mediate pathways linking environmental exposures with health outcomes. However, there is a lack of analytical methods to identify significant mediators for high-dimensional mediation analysis. Based on sure independence screening and minimax concave penalty techniques, we use a joint significance test for the mediation effect. We demonstrate its practical performance using Monte Carlo simulation studies and apply this method to investigate the extent to which DNA methylation markers mediate the causal pathway from smoking to reduced lung function in the Normative Aging Study. We identify 2 CpGs with significant mediation effects. R package, source code, and simulation study are available at https://github.com/YinanZheng/HIMA Contact: lei.liu@northwestern.edu. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
Bit-Table Based Biclustering and Frequent Closed Itemset Mining in High-Dimensional Binary Data
Directory of Open Access Journals (Sweden)
András Király
2014-01-01
Full Text Available During the last decade various algorithms have been developed and proposed for discovering overlapping clusters in high-dimensional data. The two most prominent application fields in this research, proposed independently, are frequent itemset mining (developed for market basket data) and biclustering (applied to gene expression data analysis). The common limitation of both methodologies is their limited applicability to very large binary data sets. In this paper we propose a novel and efficient method to find both frequent closed itemsets and biclusters in high-dimensional binary data. The method is based on simple but very powerful matrix and vector multiplication approaches that ensure that all patterns can be discovered in a fast manner. The proposed algorithm has been implemented in the commonly used MATLAB environment and is freely available to researchers.
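The bit-table idea can be illustrated with a small sketch (not the authors' MATLAB implementation; the packing scheme and helper names are hypothetical): each binary row is encoded as an integer, so the items shared by a set of rows (the seed of a bicluster, or the closure of an itemset) fall out of a single bitwise AND per row.

```python
# Illustrative sketch: pack 0/1 rows of a binary matrix into Python integer
# "bit-tables" so that row intersections become bitwise AND operations.

def pack_rows(matrix):
    """Encode each 0/1 row as an integer (column j -> bit j)."""
    return [sum(bit << j for j, bit in enumerate(row)) for row in matrix]

def common_items(bit_rows, row_ids):
    """Intersect the chosen rows; the surviving bits form the closed itemset."""
    acc = ~0  # all bits set
    for i in row_ids:
        acc &= bit_rows[i]
    return [j for j in range(64) if (acc >> j) & 1]

data = [
    [1, 1, 0, 1],
    [1, 1, 1, 0],
    [0, 1, 1, 1],
]
rows = pack_rows(data)
shared = common_items(rows, [0, 1])  # columns present in both rows 0 and 1
```

Here `shared` is `[0, 1]`: rows 0 and 1 agree on columns 0 and 1, which is exactly the bicluster seed a frequent-closed-itemset miner would report for that row pair.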
Efficient uncertainty quantification methodologies for high-dimensional climate land models
Energy Technology Data Exchange (ETDEWEB)
Sargsyan, Khachik [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Safta, Cosmin [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Berry, Robert Dan [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Ray, Jaideep [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Debusschere, Bert J. [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Najm, Habib N. [Sandia National Lab. (SNL-CA), Livermore, CA (United States)
2011-11-01
In this report, we proposed, examined and implemented approaches for performing efficient uncertainty quantification (UQ) in climate land models. Specifically, we applied a Bayesian compressive sensing framework to polynomial chaos spectral expansions, enhanced it with an iterative basis-reduction algorithm, and investigated the results on test models as well as on the Community Land Model (CLM). Furthermore, we discussed the construction of efficient quadrature rules for forward propagation of uncertainties from a high-dimensional, constrained input space to output quantities of interest. The work lays the groundwork for efficient forward UQ for high-dimensional, strongly non-linear and computationally costly climate models. Moreover, to investigate parameter inference approaches, we applied two variants of the Markov chain Monte Carlo (MCMC) method to a soil moisture dynamics submodel of the CLM. The evaluation of these algorithms gave us a good foundation for further building out the Bayesian calibration framework towards the goal of robust component-wise calibration.
Extreme Learning Machines on High Dimensional and Large Data Applications: A Survey
Directory of Open Access Journals (Sweden)
Jiuwen Cao
2015-01-01
Full Text Available Extreme learning machine (ELM has been developed for single hidden layer feedforward neural networks (SLFNs. In ELM algorithm, the connections between the input layer and the hidden neurons are randomly assigned and remain unchanged during the learning process. The output connections are then tuned via minimizing the cost function through a linear system. The computational burden of ELM has been significantly reduced as the only cost is solving a linear system. The low computational complexity attracted a great deal of attention from the research community, especially for high dimensional and large data applications. This paper provides an up-to-date survey on the recent developments of ELM and its applications in high dimensional and large data. Comprehensive reviews on image processing, video processing, medical signal processing, and other popular large data applications with ELM are presented in the paper.
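The two-step training scheme described above, random fixed hidden weights followed by a linear solve for the output weights, can be sketched as follows (a minimal illustrative ELM for regression, not a reference implementation; the architecture and parameter values are arbitrary):

```python
import numpy as np

# Minimal ELM sketch: random hidden layer, output weights by least squares.
rng = np.random.default_rng(0)

def elm_train(X, T, n_hidden=50):
    """Random input weights stay fixed; only the linear output layer is solved."""
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)                      # hidden-layer activations
    beta, *_ = np.linalg.lstsq(H, T, rcond=None)  # the only "training" cost
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

X = rng.uniform(-1, 1, size=(200, 2))
T = np.sin(X[:, 0]) + X[:, 1] ** 2              # toy regression target
W, b, beta = elm_train(X, T)
err = np.mean((elm_predict(X, W, b, beta) - T) ** 2)
```

The entire computational burden is the single `lstsq` call, which is what gives ELM its low complexity relative to iterative back-propagation.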
Atom-centered symmetry functions for constructing high-dimensional neural network potentials
Behler, Jörg
2011-02-01
Neural networks offer an unbiased and numerically very accurate approach to represent high-dimensional ab initio potential-energy surfaces. Once constructed, neural network potentials can provide the energies and forces many orders of magnitude faster than electronic structure calculations, and thus enable molecular dynamics simulations of large systems. However, Cartesian coordinates are not a good choice to represent the atomic positions, and a transformation to symmetry functions is required. Using simple benchmark systems, the properties of several types of symmetry functions suitable for the construction of high-dimensional neural network potential-energy surfaces are discussed in detail. The symmetry functions are general and can be applied to all types of systems such as molecules, crystalline and amorphous solids, and liquids.
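A minimal sketch of one such atom-centered radial symmetry function, assuming the standard cosine cutoff form (the parameter values `eta`, `r_s`, `r_c` below are arbitrary choices, not recommendations):

```python
import numpy as np

# Sketch of a Behler-Parrinello-type radial symmetry function:
#   G_i = sum_j exp(-eta * (R_ij - R_s)^2) * fc(R_ij),
# where fc is a cosine cutoff that goes smoothly to zero at R_c.

def cutoff(r, r_c):
    return np.where(r < r_c, 0.5 * (np.cos(np.pi * r / r_c) + 1.0), 0.0)

def radial_G(positions, i, eta=1.0, r_s=0.0, r_c=6.0):
    """Radial symmetry function for atom i (rotation/translation invariant)."""
    r_ij = np.linalg.norm(positions - positions[i], axis=1)
    r_ij = np.delete(r_ij, i)                   # exclude the central atom
    return float(np.sum(np.exp(-eta * (r_ij - r_s) ** 2) * cutoff(r_ij, r_c)))

atoms = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.5, 0.0]])
g0 = radial_G(atoms, 0)
```

Because only interatomic distances enter, translating or rotating the whole structure leaves `g0` unchanged, which is exactly why Cartesian coordinates are replaced by such functions as network inputs.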
Mpofu, D J; Lanphear, J; Stewart, T; Das, M; Ridding, P; Dunn, E
1998-09-01
The Faculty of Medicine and Health Sciences (FMHS), United Arab Emirates (UAE) University is in a unique position to explore issues related to English language proficiency and medical student performance. All students entering the FMHS have English as a second language. This study focused on the issues of students' proficiency in English as measured by the TOEFL test, student background factors and interaction in problem-based learning (PBL) groups. Using a modification of Bales Interaction Process Analysis, four problem-based learning groups were observed over four thematic units, to measure the degree of student interaction within PBL groups and to compare this to individual TOEFL scores and key background variables. The students' contributions correlated highly with TOEFL test results in the giving of information (range r = 0.67-0.74). The female students adhered to interacting in English during group sessions, whereas the male students were more likely to revert to using Arabic in elaborating unclear phenomena (p TOEFL scores for the male students, but not for female students. Multivariate analysis was undertaken to analyse the relative contribution of the TOEFL, parental education and years of studying in English. The best predictor of students' contributions in PBL groups was identified as TOEFL scores. The study demonstrates the importance of facilitating a locally acceptable level of English proficiency prior to admission to the FMHS. However, it also highlights the importance of not focusing only on English proficiency but paying attention to additional factors in facilitating medical students in maximizing benefits from interactions in PBL settings.
An Unbiased Distance-based Outlier Detection Approach for High-dimensional Data
DEFF Research Database (Denmark)
Nguyen, Hoang Vu; Gopalkrishnan, Vivekanand; Assent, Ira
2011-01-01
than a global property. Different from existing approaches, it is not grid-based and is dimensionality-unbiased. Thus, its performance is impervious to grid resolution as well as to the curse of dimensionality. In addition, our approach ranks the outliers, allowing users to select the number of desired outliers, thus mitigating the issue of a high false alarm rate. Extensive empirical studies on real datasets show that our approach efficiently and effectively detects outliers, even in high-dimensional spaces.
Xu, Chao; Fang, Jian; Shen, Hui; Wang, Yu-Ping; Deng, Hong-Wen
2018-01-25
Extreme phenotype sampling (EPS) is a broadly used design to identify candidate genetic factors contributing to the variation of quantitative traits. By enriching the signals in extreme phenotypic samples, EPS can boost the association power compared to random sampling. Most existing statistical methods for EPS examine the genetic factors individually, although many quantitative traits have multiple genetic factors underlying their variation. It is desirable to model the joint effects of genetic factors, which may increase the power and identify novel quantitative trait loci under EPS. The joint analysis of genetic data in high-dimensional situations requires specialized techniques, e.g., the least absolute shrinkage and selection operator (LASSO). Although there is extensive research and application related to LASSO, the statistical inference and testing for the sparse model under EPS remain unknown. We propose a novel sparse model (EPS-LASSO) with a hypothesis test for high-dimensional regression under EPS based on a decorrelated score function. Comprehensive simulation shows that EPS-LASSO outperforms existing methods with stable type I error and FDR control. EPS-LASSO can provide consistent power for both low- and high-dimensional situations compared with the other methods dealing with high-dimensional situations. The power of EPS-LASSO is close to that of other low-dimensional methods when the causal effect sizes are small and is superior when the effects are large. Applying EPS-LASSO to a transcriptome-wide gene expression study of obesity reveals 10 significant body mass index associated genes. Our results indicate that EPS-LASSO is an effective method for EPS data analysis, which can account for correlated predictors. The source code is available at https://github.com/xu1912/EPSLASSO. hdeng2@tulane.edu. Supplementary data are available at Bioinformatics online.
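The design can be illustrated with a toy sketch (not the authors' EPS-LASSO code: it combines extreme-phenotype subsampling with a plain coordinate-descent LASSO and omits the decorrelated score test; all data and parameters are synthetic):

```python
import numpy as np

rng = np.random.default_rng(1)

def soft(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_cd(X, y, lam, n_iter=200):
    """Plain coordinate descent for 0.5*||y - Xb||^2 + n*lam*||b||_1."""
    n, p = X.shape
    beta = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0)
    for _ in range(n_iter):
        for j in range(p):
            r = y - X @ beta + X[:, j] * beta[j]   # partial residual
            beta[j] = soft(X[:, j] @ r, n * lam) / col_sq[j]
    return beta

# Simulate a trait driven by 2 of 20 genetic factors.
X = rng.normal(size=(400, 20))
y = 2.0 * X[:, 0] - 1.5 * X[:, 3] + rng.normal(scale=0.5, size=400)

# Extreme phenotype sampling: keep only the upper and lower deciles.
idx = np.argsort(y)
eps = np.r_[idx[:40], idx[-40:]]
beta = lasso_cd(X[eps], y[eps], lam=0.1)
support = set(np.flatnonzero(np.abs(beta) > 0.2))
```

Even with only 80 of the 400 samples, the tail-enriched fit recovers the two causal factors, which is the signal-boosting effect the abstract describes.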
Nikooienejad, Amir; Wang, Wenyi; Johnson, Valen E.
2017-01-01
Variable selection in high-dimensional cancer genomic studies has become very popular in the past decade, owing to the interest in discovering significant genes pertinent to a specific cancer type. Censored survival data are the main data structure in such studies, and performing variable selection for this data type requires specialized methodology. With recent developments in computational power, Bayesian methods have become more attractive in the context of variable selection. In this article we i...
Kameda, Tatsuya; Tsukasaki, Takafumi; Hastie, Reid; Berg, Nathan
2011-01-01
We introduce a game theory model of individual decisions to cooperate by contributing personal resources to group decisions versus by free riding on the contributions of other members. In contrast to most public-goods games that assume group returns are linear in individual contributions, the present model assumes decreasing marginal group production as a function of aggregate individual contributions. This diminishing marginal returns assumption is more realistic and generates starkly different predictions compared to the linear model. One important implication is that, under most conditions, there exist equilibria where some, but not all, members of a group contribute, even with completely self-interested motives. An agent-based simulation confirmed the individual and group advantages of the equilibria in which behavioral asymmetry emerges from a game structure that is a priori perfectly symmetric for all agents (all agents have the same payoff function and action space but take different actions in equilibria). A behavioral experiment demonstrated that cooperators and free riders coexist in a stable manner in groups performing with the nonlinear production function. A collateral result demonstrated that, compared to a dictatorial decision scheme guided by the best member in a group, the majority/plurality decision rules can pool information effectively and produce greater individual net welfare at equilibrium, even if free riding is not sanctioned. This is an original proof that cooperation in ad hoc decision-making groups can be understood in terms of self-interested motivations and that, despite the free-rider problem, majority/plurality decision rules can function robustly as simple, efficient social decision heuristics.
Onishchenko, G G
2008-01-01
In 2006, presiding over the Group of Eight Summit for the first time, Russia proposed the issue of countering infectious diseases as one of the priority issues. In addition to the realization of the priority National Health Project, which is to a large degree dedicated to the immunoprophylaxis of infectious diseases as well as the prevention and treatment of HIV-infection/AIDS and hepatitis B and C, a meeting of the Presidium of the Russian Federation State Council presided over by President V. V. Putin, dedicated to the problem of HIV-infection epidemic spread, was held on April 21; the meeting resulted in the formation of a Governmental Commission on the problems of HIV-infection/AIDS. On July 16, the leaders of the Group of Eight, during their meeting in Saint Petersburg, discussed and validated the Declaration on counteraction with infectious diseases, reflecting the position of the leaders on the entire complex of problems connected with the spread of infectious diseases, and determining the main principles of the global strategy of countering epidemics under the threats associated with the appearance of new infections, such as avian influenza, HIV-infection/AIDS, tuberculosis, and malaria. While preparing for the Summit, Russia made a range of suggestions aimed mostly at reinforcing the ability to control infectious diseases in Eastern Europe and Central Asia. Practically all of Russia's initiatives were supported by the partners, which was also reflected in the concluding document of the Summit. Following Russian initiatives, the Group of Eight intends to increase the effectiveness of international efforts on the prevention and elimination of the consequences of natural disasters, including the use of fast response teams. To provide Russia's contribution to this initiative, modernized specialized antiepidemic teams will be used. Taking into consideration the present-day financial participation of the Russian Federation in the realization of
Transformation of a high-dimensional color space for material classification.
Liu, Huajian; Lee, Sang-Heon; Chahl, Javaan Singh
2017-04-01
Images in red-green-blue (RGB) color space need to be transformed to other color spaces for image processing or analysis. For example, the well-known hue-saturation-intensity (HSI) color space, which separates hue from saturation and intensity and is similar to the color perception of humans, can aid many computer vision applications. For high-dimensional images, such as multispectral or hyperspectral images, transforming images to a color space that can separate hue from saturation and intensity would be useful; however, the related works are limited. Some methods can interpret a set of high-dimensional images in terms of hue, saturation, and intensity, but these methods need to reduce the dimension of the original images to three images and then map them to the trichromatic RGB color space. Generally, dimension reduction can cause loss or distortion of the original data, and, therefore, the transformed color spaces may not be suitable for material classification in critical conditions. This paper describes a method that can transform high-dimensional images to a color space called hyper-hue-saturation-intensity (HHSI), which is analogous to HSI in high dimensions. The transformation does not need dimension reduction and therefore preserves the original information. Experimental results indicate that the hyper-hue is independent of saturation and intensity and is more suitable for material classification of proximal or remote sensing images captured in a natural environment where illumination usually cannot be controlled.
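The separation of intensity, saturation, and a hue-like direction generalizes naturally to n spectral bands. A hedged sketch of the idea (the published HHSI transform may define the components differently; this only illustrates the geometric decomposition):

```python
import numpy as np

# Illustrative decomposition of a spectral vector into: intensity (component
# along the achromatic "gray" axis), saturation (distance from that axis),
# and a unit "hyper-hue" direction orthogonal to it.

def hhsi(pixel):
    pixel = np.asarray(pixel, dtype=float)
    n = pixel.size
    gray = np.ones(n) / np.sqrt(n)          # achromatic (equal-energy) axis
    intensity = pixel @ gray
    chroma = pixel - intensity * gray       # residual orthogonal to gray axis
    saturation = np.linalg.norm(chroma)
    hue = chroma / saturation if saturation > 0 else np.zeros(n)
    return hue, saturation, intensity

hue, sat, inten = hhsi([0.2, 0.4, 0.6, 0.8, 1.0])
```

Scaling a pixel by a constant (a change of illumination level) scales intensity and saturation but leaves the hue direction unchanged, which is the property that makes a hue-like component attractive for material classification under uncontrolled lighting.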
Designing Progressive and Interactive Analytics Processes for High-Dimensional Data Analysis.
Turkay, Cagatay; Kaya, Erdem; Balcisoy, Selim; Hauser, Helwig
2017-01-01
In interactive data analysis processes, the dialogue between the human and the computer is the enabling mechanism that can lead to actionable observations about the phenomena being investigated. It is of paramount importance that this dialogue is not interrupted by slow computational mechanisms that do not consider any known temporal human-computer interaction characteristics that prioritize the perceptual and cognitive capabilities of the users. In cases where the analysis involves an integrated computational method, for instance to reduce the dimensionality of the data or to perform clustering, such non-optimal processes are often likely. To remedy this, progressive computations, where results are iteratively improved, are getting increasing interest in visual analytics. In this paper, we present techniques and design considerations to incorporate progressive methods within interactive analysis processes that involve high-dimensional data. We define methodologies to facilitate processes that adhere to the perceptual characteristics of users and describe how online algorithms can be incorporated within these. A set of design recommendations and corresponding methods to support analysts in accomplishing high-dimensional data analysis tasks are then presented. Our arguments and decisions here are informed by observations gathered over a series of analysis sessions with analysts from finance. We document observations and recommendations from this study and present evidence on how our approach contributes to the efficiency and productivity of interactive visual analysis sessions involving high-dimensional data.
High-Dimensional Function Approximation With Neural Networks for Large Volumes of Data.
Andras, Peter
2018-02-01
Approximation of high-dimensional functions is a challenge for neural networks due to the curse of dimensionality. Often the data for which the approximated function is defined resides on a low-dimensional manifold, and in principle the approximation of the function over this manifold should improve the approximation performance. It has been shown that projecting the data manifold into a lower-dimensional space, followed by the neural network approximation of the function over this space, provides a more precise approximation of the function than the approximation of the function with neural networks in the original data space. However, if the data volume is very large, the projection into the low-dimensional space has to be based on a limited sample of the data. Here, we investigate the nature of the approximation error of neural networks trained over the projection space. We show that such neural networks should have better approximation performance than neural networks trained on high-dimensional data, even if the projection is based on a relatively sparse sample of the data manifold. We also find that it is preferable to use a uniformly distributed sparse sample of the data for the purpose of generating the low-dimensional projection. We illustrate these results by considering the practical neural network approximation of a set of functions defined on high-dimensional data, including real-world data.
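The projection-from-a-sparse-sample step can be illustrated with a linear toy case: a low-dimensional structure embedded in a high-dimensional space is recovered from a small uniform sample (PCA stands in here for the projection; the paper's setting is more general and nonlinear):

```python
import numpy as np

# Sketch: build a low-dimensional projection from a sparse uniform sample of
# high-dimensional data that actually lives on a 2-D linear subspace.
rng = np.random.default_rng(2)

latent = rng.uniform(-1, 1, size=(5000, 2))   # intrinsic 2-D coordinates
A = rng.normal(size=(2, 50))
data = latent @ A                             # embedded in 50 dimensions

# Projection estimated from only 100 of the 5000 points.
sample = data[rng.choice(len(data), size=100, replace=False)]
_, s, Vt = np.linalg.svd(sample - sample.mean(0), full_matrices=False)
P = Vt[:2].T                                  # 50 -> 2 projection matrix

projected = data @ P   # a network would now be trained on this 2-D space
```

Because the data lie exactly on a 2-D subspace, the sparse-sample projection loses nothing (`data` is reconstructed from `projected` up to floating point); with a curved manifold the sample density then controls the residual projection error the abstract analyzes.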
Lee, Jenny Hyunjung; McDonnell, Kevin T; Zelenyuk, Alla; Imre, Dan; Mueller, Klaus
2014-03-01
Although the Euclidean distance does well in measuring data distances within high-dimensional clusters, it does poorly when it comes to gauging intercluster distances. This significantly impacts the quality of global, low-dimensional space embedding procedures such as the popular multidimensional scaling (MDS), where one can often observe nonintuitive layouts. We were inspired by the perceptual processes evoked in the method of parallel coordinates, which enables users to visually aggregate the data by the patterns the polylines exhibit across the dimension axes. We call the path of such a polyline its structure and suggest a metric that captures this structure directly in high-dimensional space. This allows us to better gauge the distances of spatially distant data constellations and so achieve data aggregations in MDS plots that are more cognizant of existing high-dimensional structure similarities. Our biscale framework distinguishes far-distances from near-distances. The coarser scale uses the structural similarity metric to separate data aggregates obtained by prior classification or clustering, while the finer scale employs the appropriate Euclidean distance.
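One simple way to capture polyline "structure" directly in high-dimensional space is sketched below, using the vector of slopes between adjacent axes (an illustrative stand-in for the paper's metric, not its actual definition):

```python
import numpy as np

# Sketch of a "structure" metric: in parallel coordinates a data point is a
# polyline across the axes; its shape can be summarized by the successive
# differences between adjacent dimensions. Two points with parallel polylines
# then have structural distance zero even when far apart in Euclidean terms.

def structure(x):
    return np.diff(np.asarray(x, dtype=float))

def structural_distance(x, y):
    return float(np.linalg.norm(structure(x) - structure(y)))

a = [1.0, 3.0, 2.0, 4.0]
b = [6.0, 8.0, 7.0, 9.0]        # same shape as a, shifted by 5
c = [1.0, 1.0, 1.0, 1.0]        # flat polyline
d_ab = structural_distance(a, b)
d_ac = structural_distance(a, c)
```

Here `d_ab` is 0 although `a` and `b` are Euclidean-distant, while the flat polyline `c` is structurally far from `a`; this is the kind of shape-aware far-distance the coarser scale of the biscale framework exploits.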
High-dimensional decoy-state quantum key distribution over multicore telecommunication fibers
Cañas, G.; Vera, N.; Cariñe, J.; González, P.; Cardenas, J.; Connolly, P. W. R.; Przysiezna, A.; Gómez, E. S.; Figueroa, M.; Vallone, G.; Villoresi, P.; da Silva, T. Ferreira; Xavier, G. B.; Lima, G.
2017-08-01
Multiplexing is a strategy to augment the transmission capacity of a communication system. It consists of combining multiple signals over the same data channel and it has been very successful in classical communications. However, the use of enhanced channels has only reached limited practicality in quantum communications (QC) as it requires the manipulation of quantum systems of higher dimensions. Considerable effort is being made towards QC using high-dimensional quantum systems encoded into the transverse momentum of single photons, but so far no approach has been proven to be fully compatible with the existing telecommunication fibers. Here we overcome such a challenge and demonstrate a secure high-dimensional decoy-state quantum key distribution session over a 300-m-long multicore optical fiber. The high-dimensional quantum states are defined in terms of the transverse core modes available for the photon transmission over the fiber, and theoretical analyses show that positive secret key rates can be achieved through metropolitan distances.
The A_y problem in refined resonating group model calculations for p-³He scattering
Reiss, C
2003-01-01
We report on a microscopic Refined Resonating Group Model (RRGM) calculation of scattering of p off ³He employing the Argonne v₁₄ and the Bonn nucleon-nucleon potentials without three-nucleon forces at low energies up to 30 MeV. The calculated phase shifts verify the well-known proton analyzing power A_y problem. We demonstrate that with corrected ³P₂ phase shifts experimental differential cross-section and analyzing power data can be explained.
Directory of Open Access Journals (Sweden)
Yu-Feng Sun
2016-04-01
Full Text Available The fireworks algorithm (FA) is a new parallel diffuse optimization algorithm that simulates the fireworks explosion phenomenon, balancing global exploration and local search by adjusting the explosion mode of the firework bombs. By introducing the grouping strategy of the shuffled frog leaping algorithm (SFLA), an improved FA-SFLA hybrid algorithm is put forward, which can effectively help the FA jump out of local optima and accelerate the global search. The simulation results show that the hybrid algorithm greatly improves the accuracy and convergence speed for solving function optimization problems.
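The explosion-and-selection loop of the basic FA can be sketched as follows (a toy illustration on a 2-D test function, without the SFLA grouping strategy; amplitude rule, spark counts, and all parameters are simplifications, not the published algorithm):

```python
import numpy as np

# Toy fireworks-style search: each "firework" explodes into sparks scattered
# around it; better fireworks get smaller amplitudes (local search), worse
# ones larger amplitudes (global exploration). Best points survive.
rng = np.random.default_rng(4)

def sphere(x):
    return float(np.sum(x ** 2))

def fireworks_step(pop, f, n_sparks=20):
    fits = np.array([f(x) for x in pop])
    # amplitude grows with (fitness - best): worse fireworks explode wider
    amps = 0.1 + (fits - fits.min()) / (fits.max() - fits.min() + 1e-12)
    sparks = [x + rng.uniform(-a, a, size=x.shape)
              for x, a in zip(pop, amps) for _ in range(n_sparks)]
    allpts = list(pop) + sparks
    allpts.sort(key=f)
    return allpts[:len(pop)]            # keep the best as the next fireworks

pop = [rng.uniform(-5, 5, size=2) for _ in range(5)]
for _ in range(150):
    pop = fireworks_step(pop, sphere)
best = min(sphere(x) for x in pop)
```

Because the selected set always contains the previous fireworks, the best value never worsens; the minimum of the sphere function is approached as the best firework's explosion amplitude shrinks toward the floor value.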
McParland, D; Phillips, C M; Brennan, L; Roche, H M; Gormley, I C
2017-12-10
The LIPGENE-SU.VI.MAX study, like many others, recorded high-dimensional continuous phenotypic data and categorical genotypic data. LIPGENE-SU.VI.MAX focuses on the need to account for both phenotypic and genetic factors when studying the metabolic syndrome (MetS), a complex disorder that can lead to higher risk of type 2 diabetes and cardiovascular disease. Interest lies in clustering the LIPGENE-SU.VI.MAX participants into homogeneous groups or sub-phenotypes, by jointly considering their phenotypic and genotypic data, and in determining which variables are discriminatory. A novel latent variable model that elegantly accommodates high-dimensional, mixed data is developed to cluster LIPGENE-SU.VI.MAX participants using a Bayesian finite mixture model. A computationally efficient variable selection algorithm is incorporated, estimation is via a Gibbs sampling algorithm, and an approximate BIC-MCMC criterion is developed to select the optimal model. Two clusters or sub-phenotypes ('healthy' and 'at risk') are uncovered. A small subset of variables is deemed discriminatory, which notably includes both phenotypic and genotypic variables, highlighting the need to jointly consider both factors. Further, 7 years after the LIPGENE-SU.VI.MAX data were collected, participants underwent further analysis to diagnose presence or absence of the MetS. The two uncovered sub-phenotypes strongly correspond to the 7-year follow-up disease classification, highlighting the role of phenotypic and genotypic factors in the MetS and emphasising the potential utility of the clustering approach in early screening. Additionally, the ability of the proposed approach to quantify the uncertainty in sub-phenotype membership at the participant level is synonymous with the concepts of precision medicine and nutrition. Copyright © 2017 John Wiley & Sons, Ltd.
Fisher, Charles K; Mehta, Pankaj
2015-06-01
Feature selection, identifying a subset of variables that are relevant for predicting a response, is an important and challenging component of many methods in statistics and machine learning. Feature selection is especially difficult and computationally intensive when the number of variables approaches or exceeds the number of samples, as is often the case for many genomic datasets. Here, we introduce a new approach, the Bayesian Ising Approximation (BIA), to rapidly calculate posterior probabilities for feature relevance in L2-penalized linear regression. In the regime where the regression problem is strongly regularized by the prior, we show that computing the marginal posterior probabilities for features is equivalent to computing the magnetizations of an Ising model with weak couplings. Using a mean field approximation, we show it is possible to rapidly compute the feature selection path described by the posterior probabilities as a function of the L2 penalty. We present simulations and analytical results illustrating the accuracy of the BIA on some simple regression problems. Finally, we demonstrate the applicability of the BIA to high-dimensional regression by analyzing a gene expression dataset with nearly 30,000 features. These results also highlight the impact of correlations between features on Bayesian feature selection. An implementation of the BIA in C++, along with data for reproducing our gene expression analyses, is freely available at http://physics.bu.edu/∼pankajm/BIACode. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
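The weak-coupling mean-field computation mentioned above can be illustrated generically: solve m_i = tanh(h_i + Σ_j J_ij m_j) by fixed-point iteration (the fields and couplings below are synthetic, not derived from any regression problem, so this shows the numerical step only, not the BIA mapping itself):

```python
import numpy as np

# Mean-field magnetizations of an Ising model with weak couplings J and
# fields h, via fixed-point iteration of m = tanh(h + J m). With weak J the
# map is a contraction, so the iteration converges geometrically.
rng = np.random.default_rng(3)

def mean_field_magnetizations(h, J, n_iter=200):
    m = np.zeros_like(h)
    for _ in range(n_iter):
        m = np.tanh(h + J @ m)
    return m

p = 10
h = rng.normal(scale=0.5, size=p)
J = rng.normal(scale=0.05 / p, size=(p, p))
J = (J + J.T) / 2                  # symmetric couplings
np.fill_diagonal(J, 0.0)           # no self-coupling
m = mean_field_magnetizations(h, J)
```

In the BIA setting each magnetization m_i plays the role of a (rescaled) marginal posterior relevance probability for feature i; here the result simply satisfies the self-consistency equation.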
Directory of Open Access Journals (Sweden)
Liangtian He
2014-01-01
Full Text Available We propose a new effective algorithm for recovering a group sparse signal from very limited observations or measured data. Since a better reconstruction quality can be achieved by encoding structural information beyond sparsity, the commonly employed l2,1-regularization, which incorporates the prior grouping information, performs better than plain l1-regularized models, as expected. In this paper we make further use of the prior grouping information, as well as possibly other prior information, by considering a weighted l2,1 model. Specifically, we propose a multistage convex relaxation procedure to alternately estimate the weights and solve the resulting weighted problem. The weight estimation step makes better use of the prior grouping information and is implemented based on iterative support detection (Wang and Yin, 2010). Comprehensive numerical experiments show that our approach brings significant recovery enhancements compared with the plain l2,1 model, solved via the alternating direction method (ADM) (Deng et al., 2013), in both noiseless and noisy environments.
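The block shrinkage behind a weighted l2,1 penalty can be sketched via its proximal operator (the group definitions, weights, and parameter values below are illustrative, and this is only one building block of the paper's multistage procedure):

```python
import numpy as np

# Proximal operator of lam * sum_g w_g * ||x_g||_2: each predefined group of
# coefficients is shrunk toward zero as a block, with the per-group weight
# w_g controlling the shrinkage strength (larger weight -> stronger shrinkage).

def group_soft_threshold(x, groups, lam, weights):
    x = np.asarray(x, dtype=float)
    out = np.zeros_like(x)
    for g, w in zip(groups, weights):
        block = x[g]
        norm = np.linalg.norm(block)
        if norm > lam * w:                       # group survives, shrunk
            out[g] = (1.0 - lam * w / norm) * block
        # otherwise the whole group is set to zero at once
    return out

x = np.array([3.0, 4.0, 0.1, -0.1])
groups = [[0, 1], [2, 3]]
z = group_soft_threshold(x, groups, lam=1.0, weights=[1.0, 1.0])
```

The first group (norm 5) is merely shrunk, while the second (norm ≈ 0.14) is killed as a block; a reweighted scheme would lower the weight on groups flagged by support detection so they are shrunk less in the next stage.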
Grady, R; Gouldsborough, I; Sheader, E; Speake, T
2009-11-01
Problem-based learning (PBL) in medical and dental curricula is now well established, as such courses are seen to equip students with valuable transferable skills (e.g. problem-solving or team-working abilities), in addition to knowledge acquisition. However, it is often assumed that students improve in such skills without actually providing direct opportunity for practice, and without giving students feedback on their performance. 'The Manchester Dental Programme' (TMDP) was developed at The University of Manchester, UK as a 5-year, integrated enquiry-led curriculum. The existing PBL course was redesigned to include a unique, additional PBL session ('Session 4') that incorporated an activity for the group to complete, based on the subject material covered during student self-study. A summative mark was awarded for each activity that reflected the teamwork, organisational and overall capabilities of the groups. This paper describes the different types of activities developed for the Session 4 and presents an analysis of the perceptions of the students and staff involved. The student response to the Session 4 activities, obtained via questionnaires, was extremely positive, with the majority finding them fun, yet challenging, and 'worthwhile'. The activities were perceived to enhance subject understanding; develop students' problem-solving skills; allow the application of knowledge to new situations, and helped to identify gaps in knowledge to direct further study. Staff found the activities innovative and exciting learning tools for the students. The Session 4 activities described here are useful educational resources that could be adapted for other PBL courses in a wide variety of subject areas.
Frank, A
1998-01-01
This article describes the main themes, funding needs, and policy options of the Working Group on the Environment in US-China Relations that was created in November 1996. Meetings are chaired by members of the Council of Foreign Relations and the Carnegie Endowment for International Peace. The 40+ member Working Group is coordinated by the Environmental Change and Security Project and the Woodrow Wilson Center's Asia Program. It offers a forum for discussion of environmental and foreign policy concerns. The aims are to identify important environmental and sustainable development issues related to US and Chinese interests; to develop creative strategies for government and nongovernment projects between the US and China; and to discuss strategies for using environmental issues for building improved relations between countries. Monthly meetings focus on energy issues, water quantity and quality, funds for environmental protection, and biodiversity issues. The group meetings emphasize the themes of multilateral cooperation, local Chinese environmental issues of significance to the US, and obstacles to cooperation on US-led projects within China. Improved relations may be achieved by articulation of a coherent China policy with explicit goals and guidelines, provision of funding, and linking local environmental problems with global ones. The US should support private business in marketing environmental technology and assist in the development of policy changes in the energy and water sectors in China. China needs improved irrigation techniques and comprehensive watershed management plans.
On-chip generation of high-dimensional entangled quantum states and their coherent control.
Kues, Michael; Reimer, Christian; Roztocki, Piotr; Cortés, Luis Romero; Sciara, Stefania; Wetzel, Benjamin; Zhang, Yanbing; Cino, Alfonso; Chu, Sai T; Little, Brent E; Moss, David J; Caspani, Lucia; Azaña, José; Morandotti, Roberto
2017-06-28
Optical quantum states based on entangled photons are essential for solving questions in fundamental physics and are at the heart of quantum information science. Specifically, the realization of high-dimensional states (D-level quantum systems, that is, qudits, with D > 2) and their control are necessary for fundamental investigations of quantum mechanics, for increasing the sensitivity of quantum imaging schemes, for improving the robustness and key rate of quantum communication protocols, for enabling a richer variety of quantum simulations, and for achieving more efficient and error-tolerant quantum computation. Integrated photonics has recently become a leading platform for the compact, cost-efficient, and stable generation and processing of non-classical optical states. However, so far, integrated entangled quantum sources have been limited to qubits (D = 2). Here we demonstrate on-chip generation of entangled qudit states, where the photons are created in a coherent superposition of multiple high-purity frequency modes. In particular, we confirm the realization of a quantum system with at least one hundred dimensions, formed by two entangled qudits with D = 10. Furthermore, using state-of-the-art, yet off-the-shelf telecommunications components, we introduce a coherent manipulation platform with which to control frequency-entangled states, capable of performing deterministic high-dimensional gate operations. We validate this platform by measuring Bell inequality violations and performing quantum state tomography. Our work enables the generation and processing of high-dimensional quantum states in a single spatial mode.
Sparse redundancy analysis of high-dimensional genetic and genomic data.
Csala, Attila; Voorbraak, Frans P J M; Zwinderman, Aeilko H; Hof, Michel H
2017-10-15
Recent technological developments have enabled integrated genetic and genomic data analysis approaches, where multiple omics datasets from various biological levels are combined and used to describe (disease) phenotypic variations. The main goal is to explain and ultimately predict phenotypic variations by understanding their genetic basis and the interaction of the associated genetic factors. Understanding the underlying genetic mechanisms of phenotypic variations is therefore an ever-increasing research interest in biomedical sciences. In many situations, we have a set of variables that can be considered to be the outcome variables and a set that can be considered to be explanatory variables. Redundancy analysis (RDA) is an analytic method to deal with this type of directionality. Unfortunately, current implementations of RDA cannot deal optimally with the high dimensionality of omics data (p≫n). The existing theoretical framework, based on Ridge penalization, is suboptimal, since it includes all variables in the analysis. As a solution, we propose to use Elastic Net penalization in an iterative RDA framework to obtain a sparse solution. We propose sparse redundancy analysis (sRDA) for high-dimensional omics data analysis. We conducted simulation studies with our software implementation of sRDA to assess its reliability. Both the analysis of simulated data and the analysis of 485,512 methylation markers and 18,424 gene-expression values measured in a set of 55 patients with Marfan syndrome show that sRDA is able to deal with the usual high dimensionality of omics data. Availability: http://uva.csala.me/rda. Contact: a.csala@amc.uva.nl. Supplementary data are available at Bioinformatics online.
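The iterative sparse RDA scheme outlined in this abstract can be sketched in a few lines. This is a hedged illustration under strong assumptions (a single latent component, and the Elastic Net regression step collapsed to univariate soft-thresholding with a ridge-style shrink), not the authors' sRDA implementation:

```python
def soft(z, t):
    """Soft-thresholding operator used by lasso/elastic-net style updates."""
    return (z - t) if z > t else (z + t) if z < -t else 0.0

def srda_component(X, Y, lam=0.1, ridge=0.1, n_iter=50):
    """One sparse latent component linking explanatory block X to outcome block Y."""
    n, p, q = len(X), len(X[0]), len(Y[0])
    alpha = [1.0] * p                         # weights on the explanatory variables
    for _ in range(n_iter):
        xi = [sum(X[i][j] * alpha[j] for j in range(p)) for i in range(n)]
        # outcome weights: covariance of each Y column with the latent score
        beta = [sum(Y[i][k] * xi[i] for i in range(n)) for k in range(q)]
        nb = sum(b * b for b in beta) ** 0.5 or 1.0
        beta = [b / nb for b in beta]
        eta = [sum(Y[i][k] * beta[k] for k in range(q)) for i in range(n)]
        # sparse update: soft-thresholded covariances with a ridge-style shrink
        alpha = [soft(sum(X[i][j] * eta[i] for i in range(n)), lam) / (1.0 + ridge)
                 for j in range(p)]
        na = sum(a * a for a in alpha) ** 0.5 or 1.0
        alpha = [a / na for a in alpha]
    return alpha, beta

# Third explanatory variable is noise; its weight is thresholded to zero.
X = [[1.0, 1.0, 0.1], [2.0, 2.0, -0.1], [3.0, 3.0, 0.05], [4.0, 4.0, 0.0]]
Y = [[1.0], [2.0], [3.0], [4.0]]
alpha, beta = srda_component(X, Y)
```

With one outcome column the outcome weight normalizes to 1, and the noise feature's weight becomes exactly zero, which is the sparsity behaviour the abstract contrasts with Ridge-based RDA.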
Shiqing Wang; Limin Su
2013-01-01
During the last few years, a great deal of attention has been focused on the Lasso and the Dantzig selector in high-dimensional linear regression, where the number of variables can be much larger than the sample size. Under a sparsity scenario, the authors (see, e.g., Bickel et al., 2009; Bunea et al., 2007; Candès and Tao, 2007; Donoho et al., 2006; Koltchinskii, 2009; Meinshausen and Yu, 2009; Rosenbaum and Tsybakov, 2010; Tsybakov, 2006; van de Geer, 2008) a...
Efficient Estimation of First Passage Probability of High-Dimensional Nonlinear Systems
DEFF Research Database (Denmark)
Sichani, Mahdi Teimouri; Nielsen, Søren R.K.; Bucher, Christian
2011-01-01
An efficient method for estimating low first-passage probabilities of high-dimensional nonlinear systems based on asymptotic estimation of low probabilities is presented. The method does not require any a priori knowledge of the system, i.e. it is a black-box method, and has very low requirements ... the failure probabilities of three well-known nonlinear systems are estimated. Next, a reduced degree-of-freedom model of a wind turbine is developed and is exposed to a turbulent wind field. The model incorporates very high dimensions and strong nonlinearities simultaneously. The failure probability...
CSIR Research Space (South Africa)
Giovannini, D
2013-06-01
Full Text Available: QELS_Fundamental Science, San Jose, California, United States, 9-14 June 2013. Reconstruction of High-Dimensional States Entangled in Orbital Angular Momentum Using Mutually Unbiased Measurements. D. Giovannini, J. Romero, J. Leach, A. Dudley, A. Forbes and M. J. Padgett (School of Physics and Astronomy, SUPA, University of Glasgow, Glasgow G12 8QQ, United Kingdom; Department of Physics, SUPA, University of Strathclyde, Glasgow G4 0NG, United Kingdom; School of Engineering...)
Computing and visualizing time-varying merge trees for high-dimensional data
Energy Technology Data Exchange (ETDEWEB)
Oesterling, Patrick [Univ. of Leipzig (Germany); Heine, Christian [Univ. of Kaiserslautern (Germany); Weber, Gunther H. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Morozov, Dmitry [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Scheuermann, Gerik [Univ. of Leipzig (Germany)
2017-06-03
We introduce a new method that identifies and tracks features in arbitrary dimensions using the merge tree -- a structure for identifying topological features based on thresholding in scalar fields. This method analyzes the evolution of features of the function by tracking changes in the merge tree and relates features by matching subtrees between consecutive time steps. Using the time-varying merge tree, we present a structural visualization of the changing function that illustrates both features and their temporal evolution. We demonstrate the utility of our approach by applying it to temporal cluster analysis of high-dimensional point clouds.
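As a rough illustration of the data structure this abstract builds on, a merge tree of sublevel sets can be computed with a value sweep plus union-find. The sketch below is an assumption on my part, not the authors' code: it handles only a 1-D scalar field and omits the subtree matching between consecutive time steps:

```python
def merge_tree_1d(values):
    """Sweep values in increasing order; union-find tracks sublevel-set components.

    Returns (births, merges): the function values at which components are born
    (local minima) and at which distinct components join (saddles)."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    parent = {}

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]     # path halving
            i = parent[i]
        return i

    births, merges = [], []
    for i in order:
        parent[i] = i
        roots = {find(j) for j in (i - 1, i + 1) if j in parent}
        if not roots:
            births.append(values[i])          # a new component is born here
        elif len(roots) > 1:
            merges.append(values[i])          # two components merge here
        for r in roots:
            parent[r] = i                     # attach neighbours to the new root
    return births, merges

# Two basins (minima at 0.0 and 1.0) merging at the saddle value 2.0:
births, merges = merge_tree_1d([0.0, 2.0, 1.0])
```

The paper's contribution is then to match subtrees of such trees across time steps; that tracking layer is not shown here.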
High-dimensional, massive sample-size Cox proportional hazards regression for survival analysis.
Mittal, Sushil; Madigan, David; Burd, Randall S; Suchard, Marc A
2014-04-01
Survival analysis endures as an old, yet active research field with applications that spread across many domains. Continuing improvements in data acquisition techniques pose constant challenges in applying existing survival analysis methods to these emerging data sets. In this paper, we present tools for fitting regularized Cox survival analysis models on high-dimensional, massive sample-size (HDMSS) data using a variant of the cyclic coordinate descent optimization technique tailored for the sparsity that HDMSS data often present. Experiments on two real data examples demonstrate that efficient analyses of HDMSS data using these tools result in improved predictive performance and calibration.
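The cyclic coordinate descent optimization mentioned above can be illustrated on the simpler L1-penalized least-squares objective, as a hedged stand-in for the penalized Cox partial likelihood the paper actually optimizes; each pass updates one coefficient at a time via soft-thresholding, which is why sparse inputs make the updates cheap:

```python
def soft_threshold(z, t):
    return (z - t) if z > t else (z + t) if z < -t else 0.0

def lasso_cd(X, y, lam, n_iter=100):
    """Cyclic coordinate descent for 0.5*||y - X b||^2 + lam*||b||_1."""
    n, p = len(X), len(X[0])
    beta = [0.0] * p
    for _ in range(n_iter):
        for j in range(p):
            # partial residuals with feature j excluded
            r = [y[i] - sum(X[i][k] * beta[k] for k in range(p) if k != j)
                 for i in range(n)]
            rho = sum(X[i][j] * r[i] for i in range(n))
            norm = sum(X[i][j] ** 2 for i in range(n))
            beta[j] = soft_threshold(rho, lam) / norm if norm else 0.0
    return beta

# y is driven by the first feature only; the noise feature is zeroed out.
X = [[1.0, 0.1], [2.0, -0.2], [3.0, 0.05], [4.0, 0.1]]
y = [2.0, 4.0, 6.0, 8.0]
beta = lasso_cd(X, y, lam=0.5)
```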
High-dimensional nonlinear diffusion stochastic processes modelling for engineering applications
Mamontov, Yevgeny
2001-01-01
This book is the first one devoted to high-dimensional (or large-scale) diffusion stochastic processes (DSPs) with nonlinear coefficients. These processes are closely associated with nonlinear Ito's stochastic ordinary differential equations (ISODEs) and with the space-discretized versions of nonlinear Ito's stochastic partial integro-differential equations. The latter models include Ito's stochastic partial differential equations (ISPDEs). The book presents a new analytical treatment which can serve as the basis of a combined, analytical-numerical approach to greater computational efficiency.
Stochastic Neural Network Approach for Learning High-Dimensional Free Energy Surfaces
Schneider, Elia; Dai, Luke; Topper, Robert Q.; Drechsel-Grau, Christof; Tuckerman, Mark E.
2017-10-01
The generation of free energy landscapes corresponding to conformational equilibria in complex molecular systems remains a significant computational challenge. Adding to this challenge is the need to represent, store, and manipulate the often high-dimensional surfaces that result from rare-event sampling approaches employed to compute them. In this Letter, we propose the use of artificial neural networks as a solution to these issues. Using specific examples, we discuss network training using enhanced-sampling methods and the use of the networks in the calculation of ensemble averages.
Non-Asymptotic Oracle Inequalities for the High-Dimensional Cox Regression via Lasso.
Kong, Shengchun; Nan, Bin
2014-01-01
We consider finite sample properties of the regularized high-dimensional Cox regression via lasso. Existing literature focuses on linear models or generalized linear models with Lipschitz loss functions, where the empirical risk functions are the summations of independent and identically distributed (iid) losses. The summands in the negative log partial likelihood function for censored survival data, however, are neither iid nor Lipschitz. We first approximate the negative log partial likelihood function by a sum of iid non-Lipschitz terms, then derive the non-asymptotic oracle inequalities for the lasso penalized Cox regression using pointwise arguments to tackle the difficulties caused by lacking iid Lipschitz losses.
Inferring biological tasks using Pareto analysis of high-dimensional data.
Hart, Yuval; Sheftel, Hila; Hausser, Jean; Szekely, Pablo; Ben-Moshe, Noa Bossel; Korem, Yael; Tendler, Avichai; Mayo, Avraham E; Alon, Uri
2015-03-01
We present the Pareto task inference method (ParTI; http://www.weizmann.ac.il/mcb/UriAlon/download/ParTI) for inferring biological tasks from high-dimensional biological data. Data are described as a polytope, and features maximally enriched closest to the vertices (or archetypes) allow identification of the tasks the vertices represent. We demonstrate that human breast tumors and mouse tissues are well described by tetrahedrons in gene expression space, with specific tumor types and biological functions enriched at each of the vertices, suggesting four key tasks.
Ceotto, Michele; Di Liberto, Giovanni; Conte, Riccardo
2017-07-01
A new semiclassical "divide-and-conquer" method is presented with the aim of demonstrating that quantum dynamics simulations of high dimensional molecular systems are doable. The method is first tested by calculating the quantum vibrational power spectra of water, methane, and benzene—three molecules of increasing dimensionality for which benchmark quantum results are available—and then applied to C60, a system characterized by 174 vibrational degrees of freedom. Results show that the approach can accurately account for quantum anharmonicities, purely quantum features like overtones, and the removal of degeneracy when the molecular symmetry is broken.
Yue, Lei; Guan, Zailin; Saif, Ullah; Zhang, Fei; Wang, Hao
2016-01-01
Group scheduling is important for efficient and cost-effective production systems. However, setup times exist between groups, and they can be reduced by sequencing the groups efficiently. The present research focuses on a sequence-dependent group scheduling problem with the aim of minimizing the makespan and the total weighted tardiness simultaneously. In most production scheduling problems, the processing time of jobs is assumed to be fixed. However, the actual processing time of jobs may be reduced due to the "learning effect". The integration of sequence-dependent group scheduling with learning effects has rarely been considered in the literature. Therefore, the current research considers a single-machine group scheduling problem with sequence-dependent setup times and learning effects simultaneously. A novel hybrid Pareto artificial bee colony algorithm (HPABC), incorporating steps of a genetic algorithm, is proposed to obtain Pareto solutions for this problem. Furthermore, five sizes of test problems (small, small-medium, medium, large-medium, large) are solved using the proposed HPABC. The Taguchi method is used to tune the effective parameters of the proposed HPABC for each problem category. The performance of HPABC is compared with three well-known multi-objective optimization algorithms: the improved strength Pareto evolutionary algorithm (SPEA2), the non-dominated sorting genetic algorithm II (NSGAII), and particle swarm optimization (PSO). Results indicate that HPABC outperforms SPEA2, NSGAII and PSO and gives better Pareto optimal solutions in terms of diversity and quality for almost all instances of the different problem sizes.
Di Liberto, Giovanni; Conte, Riccardo; Ceotto, Michele
2018-01-07
We extensively describe our recently established "divide-and-conquer" semiclassical method [M. Ceotto, G. Di Liberto, and R. Conte, Phys. Rev. Lett. 119, 010401 (2017)] and propose a new implementation of it to increase the accuracy of results. The technique permits us to perform spectroscopic calculations of high-dimensional systems by dividing the full-dimensional problem into a set of smaller dimensional ones. The partition procedure, originally based on a dynamical analysis of the Hessian matrix, is here more rigorously achieved through a hierarchical subspace-separation criterion based on Liouville's theorem. Comparisons of calculated vibrational frequencies to exact quantum ones for a set of molecules including benzene show that the new implementation performs better than the original one and that, on average, the loss in accuracy with respect to full-dimensional semiclassical calculations is reduced to only 10 wavenumbers. Furthermore, by investigating the challenging Zundel cation, we also demonstrate that the "divide-and-conquer" approach allows us to deal with complex strongly anharmonic molecular systems. Overall the method very much helps the assignment and physical interpretation of experimental IR spectra by providing accurate vibrational fundamentals and overtones decomposed into reduced dimensionality spectra.
Storm, Emma; Weniger, Christoph; Calore, Francesca
2017-08-01
We present SkyFACT (Sky Factorization with Adaptive Constrained Templates), a new approach for studying, modeling and decomposing diffuse gamma-ray emission. Like most previous analyses, the approach relies on predictions from cosmic-ray propagation codes like GALPROP and DRAGON. However, in contrast to previous approaches, we account for the fact that models are not perfect and allow for a very large number (≳ 10^5) of nuisance parameters to parameterize these imperfections. We combine methods of image reconstruction and adaptive spatio-spectral template regression in one coherent hybrid approach. To this end, we use penalized Poisson likelihood regression, with regularization functions that are motivated by the maximum entropy method. We introduce methods to efficiently handle the high dimensionality of the convex optimization problem as well as the associated semi-sparse covariance matrix, using the L-BFGS-B algorithm and Cholesky factorization. We test the method both on synthetic data as well as on gamma-ray emission from the inner Galaxy, |l| < 90° and |b| < 20°, as observed by the Fermi Large Area Telescope. We finally define a simple reference model that removes most of the residual emission from the inner Galaxy, based on conventional diffuse emission components as well as components for the Fermi bubbles, the Fermi Galactic center excess, and extended sources along the Galactic disk. Variants of this reference model can serve as a basis for future studies of diffuse emission in and outside the Galactic disk.
Chiu, Mei Choi; Pun, Chi Seng; Wong, Hoi Ying
2017-08-01
Investors interested in the global financial market must analyze financial securities internationally. Making an optimal global investment decision involves processing a huge amount of data for a high-dimensional portfolio. This article investigates the big data challenges of two mean-variance optimal portfolios: continuous-time precommitment and constant-rebalancing strategies. We show that both optimized portfolios implemented with the traditional sample estimates converge to the worst performing portfolio when the portfolio size becomes large. The crux of the problem is the estimation error accumulated from the huge dimension of stock data. We then propose a linear programming optimal (LPO) portfolio framework, which applies a constrained ℓ1 minimization to the theoretical optimal control to mitigate the risk associated with the dimensionality issue. The resulting portfolio becomes a sparse portfolio that selects stocks with a data-driven procedure and hence offers a stable mean-variance portfolio in practice. When the number of observations becomes large, the LPO portfolio converges to the oracle optimal portfolio, which is free of estimation error, even though the number of stocks grows faster than the number of observations. Our numerical and empirical studies demonstrate the superiority of the proposed approach. © 2017 Society for Risk Analysis.
Dorn, M; Bönisch, A; Ehlebracht-König, I
2011-02-01
The treatment programme "Vocational Perspective" was developed for patients with health-related social problems, e.g. long-term sick leave, job loss due to disability, job insecurity and psychosocial disabilities. It aims to impart patient-oriented information on the social system, legal rights, earning capacity and occupational reintegration, as well as early feedback of the sociomedical assessment by the physicians. Participants in in-patient rehabilitation are supported in dealing with their occupational situation; motivation to stay employed is strengthened. The group programme contains five psychoeducative modules and an additional sociomedical "ward round". The aim of the study was to examine the acceptance of the newly developed sociomedical vocational therapy module. A total of 179 patients participated in 21 "Vocational Perspective" seminars within the scope of a controlled quasi-experimental trial. In the experimental group, data on acceptance of the treatment were assessed by questionnaire at the end of the intervention. Experiences with implementation of the programme are described to complete the patient-related perspective. The identification of a demand for work-related interventions in medical rehabilitation appears successful: sociodemographic and socioeconomic parameters of the sample indicated high risk from a sociomedical perspective (poor education, high unemployment rates and long-term sick leave). Self-ratings revealed considerable suffering among the participants, e.g. due to the occupational situation, anxiety and depression, and confirmed high interest in work-related issues. The patients showed quite high acceptance of the programme (regarding the importance of the seminar, comprehensibility, usefulness of information, group atmosphere, and mode and extent of the programme). 82.7% of the participants would recommend the programme to other people with work-related problems. Altogether, the experiences during the
Tikhonov, Mikhail; Monasson, Remi
2018-01-01
Much of our understanding of ecological and evolutionary mechanisms derives from analysis of low-dimensional models: with few interacting species, or few axes defining "fitness". It is not always clear to what extent the intuition derived from low-dimensional models applies to the complex, high-dimensional reality. For instance, most naturally occurring microbial communities are strikingly diverse, harboring a large number of coexisting species, each of which contributes to shaping the environment of others. Understanding the eco-evolutionary interplay in these systems is an important challenge, and an exciting new domain for statistical physics. Recent work identified a promising new platform for investigating highly diverse ecosystems, based on the classic resource competition model of MacArthur. Here, we describe how the same analytical framework can be used to study evolutionary questions. Our analysis illustrates how, at high dimension, the intuition promoted by a one-dimensional (scalar) notion of fitness can become misleading. Specifically, while the low-dimensional picture emphasizes organism cost or efficiency, we exhibit a regime where cost becomes irrelevant for survival, and link this observation to generic properties of high-dimensional geometry.
Nguyen, Lan Huong; Holmes, Susan
2017-09-13
Detecting patterns in high-dimensional multivariate datasets is non-trivial. Clustering and dimensionality reduction techniques often help in discerning inherent structures. In biological datasets such as microbial community composition or gene expression data, observations can be generated from a continuous process, often unknown. Estimating data points' 'natural ordering' and their corresponding uncertainties can help researchers draw insights about the mechanisms involved. We introduce a Bayesian Unidimensional Scaling (BUDS) technique which extracts dominant sources of variation in high-dimensional datasets and produces their visual data summaries, facilitating the exploration of a hidden continuum. The method maps multivariate data points to latent one-dimensional coordinates along their underlying trajectory, and provides estimated uncertainty bounds. By statistically modeling dissimilarities and applying a DiSTATIS registration method to their posterior samples, we are able to incorporate visualizations of uncertainties in the estimated data trajectory across different regions using confidence contours for individual data points. We also illustrate the estimated overall data density across different areas by including density clouds. One-dimensional coordinates recovered by BUDS help researchers discover sample attributes or covariates that are factors driving the main variability in a dataset. We demonstrated the usefulness and accuracy of BUDS on a set of published microbiome 16S, RNA-seq, and roll-call data. Our method effectively recovers and visualizes natural orderings present in datasets. Automatic visualization tools for data exploration and analysis are available at: https://nlhuong.shinyapps.io/visTrajectory/.
A New Ensemble Method with Feature Space Partitioning for High-Dimensional Data Classification
Directory of Open Access Journals (Sweden)
Yongjun Piao
2015-01-01
Full Text Available Ensemble data mining methods, also known as classifier combination, are often used to improve the performance of classification. Various classifier combination methods such as bagging, boosting, and random forest have been devised and have received considerable attention in the past. However, data dimensionality is increasing rapidly, and these methods are not suitable for direct application to high-dimensional datasets. In this paper, we propose an ensemble method for the classification of high-dimensional data, with each classifier constructed from a different set of features determined by partitioning of redundant features. In our method, the redundancy of features is considered to divide the original feature space. Then, each generated feature subset is trained by a support vector machine, and the results of each classifier are combined by majority voting. The efficiency and effectiveness of our method are demonstrated through comparisons with other ensemble techniques, and the results show that our method outperforms other methods.
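The pipeline described in this abstract (partition features by redundancy, train one classifier per subset, combine by majority voting) might be sketched as follows. For a dependency-free example, a nearest-centroid classifier stands in for the paper's support vector machines, and the redundancy-based partition is reduced to round-robin assignment of features ordered by their maximum absolute correlation; both simplifications are assumptions, not the authors' procedure:

```python
from statistics import mean

def pearson(a, b):
    ma, mb = mean(a), mean(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = (sum((x - ma) ** 2 for x in a) * sum((y - mb) ** 2 for y in b)) ** 0.5
    return num / den if den else 0.0

def partition_features(X, n_parts):
    """Round-robin assign features, ordered by redundancy, to n_parts subsets,
    so that highly correlated features tend to land in different subsets."""
    p = len(X[0])
    cols = [[row[j] for row in X] for j in range(p)]
    redundancy = [max((abs(pearson(cols[j], cols[k])) for k in range(p) if k != j),
                      default=0.0) for j in range(p)]
    order = sorted(range(p), key=lambda j: -redundancy[j])
    return [order[i::n_parts] for i in range(n_parts)]

def centroid_classifier(X, y, feats):
    """Nearest-centroid predictor restricted to the given feature subset."""
    labels = sorted(set(y))
    cents = {c: [mean(X[i][j] for i in range(len(X)) if y[i] == c) for j in feats]
             for c in labels}
    def predict(x):
        return min(labels, key=lambda c: sum((x[j] - m) ** 2
                                             for j, m in zip(feats, cents[c])))
    return predict

def ensemble_predict(X, y, x_new, n_parts=2):
    votes = [centroid_classifier(X, y, f)(x_new) for f in partition_features(X, n_parts)]
    return max(set(votes), key=votes.count)   # majority voting

# Toy data: features 1 and 3 are identical (fully redundant),
# so the partition places them in different subsets.
X = [[0.0, 0.0, 1.0, 0.0], [1.0, 0.0, 0.0, 0.0],
     [5.0, 5.0, 4.0, 5.0], [4.0, 5.0, 5.0, 5.0]]
y = [0, 0, 1, 1]
```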
High-Dimensional Single-Photon Quantum Gates: Concepts and Experiments
Babazadeh, Amin; Erhard, Manuel; Wang, Feiran; Malik, Mehul; Nouroozi, Rahman; Krenn, Mario; Zeilinger, Anton
2017-11-01
Transformations on quantum states form a basic building block of every quantum information system. From photonic polarization to two-level atoms, complete sets of quantum gates for a variety of qubit systems are well known. For multilevel quantum systems beyond qubits, the situation is more challenging. The orbital angular momentum modes of photons comprise one such high-dimensional system for which generation and measurement techniques are well studied. However, arbitrary transformations for such quantum states are not known. Here we experimentally demonstrate a four-dimensional generalization of the Pauli X gate and all of its integer powers on single photons carrying orbital angular momentum. Together with the well-known Z gate, this forms the first complete set of high-dimensional quantum gates implemented experimentally. The concept of the X gate is based on independent access to quantum states with different parities and can thus be generalized to other photonic degrees of freedom and potentially also to other quantum systems.
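For intuition, the d-dimensional generalization of the Pauli X gate is the cyclic shift on computational basis states, and the generalized Z gate applies d-th-root-of-unity phases. A minimal sketch (dimension 4 as in the experiment, plain lists rather than any quantum library, and only the matrix algebra, not the optical realization):

```python
import cmath

D = 4  # dimension demonstrated in the experiment described above

def x_gate(d):
    """Generalized Pauli X: cyclic shift |k> -> |k+1 mod d> (column k has a 1 in row k+1)."""
    return [[1.0 if j == (k + 1) % d else 0.0 for k in range(d)] for j in range(d)]

def z_gate(d):
    """Generalized Pauli Z: diagonal phases w**k with w = exp(2*pi*i/d)."""
    w = cmath.exp(2j * cmath.pi / d)
    return [[w ** k if j == k else 0.0 for k in range(d)] for j in range(d)]

def matmul(a, b):
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

def matpow(a, p):
    n = len(a)
    out = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    for _ in range(p):
        out = matmul(out, a)
    return out

X, Z = x_gate(D), z_gate(D)
# X has order D: applying the cyclic shift D times is the identity,
# so X and its integer powers give the D distinct shift gates.
assert all(abs(matpow(X, D)[i][j] - (1.0 if i == j else 0.0)) < 1e-9
           for i in range(D) for j in range(D))
```

The pair also satisfies the Weyl commutation relation Z X = w X Z, the d-level analogue of the qubit anticommutation of Pauli X and Z.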
High-dimensional quantum key distribution with the entangled single-photon-added coherent state
Energy Technology Data Exchange (ETDEWEB)
Wang, Yang [Zhengzhou Information Science and Technology Institute, Zhengzhou, 450001 (China); Synergetic Innovation Center of Quantum Information and Quantum Physics, University of Science and Technology of China, Hefei, Anhui 230026 (China); Bao, Wan-Su, E-mail: 2010thzz@sina.com [Zhengzhou Information Science and Technology Institute, Zhengzhou, 450001 (China); Synergetic Innovation Center of Quantum Information and Quantum Physics, University of Science and Technology of China, Hefei, Anhui 230026 (China); Bao, Hai-Ze; Zhou, Chun; Jiang, Mu-Sheng; Li, Hong-Wei [Zhengzhou Information Science and Technology Institute, Zhengzhou, 450001 (China); Synergetic Innovation Center of Quantum Information and Quantum Physics, University of Science and Technology of China, Hefei, Anhui 230026 (China)
2017-04-25
High-dimensional quantum key distribution (HD-QKD) can generate more secure bits per detection event, so it can achieve long-distance key distribution with a high secret key capacity. In this Letter, we present a decoy-state HD-QKD scheme with the entangled single-photon-added coherent state (ESPACS) source. We present two tight formulas to estimate the single-photon fraction of postselected events and Eve's Holevo information, and derive lower bounds on the secret key capacity and the secret key rate of our protocol. We also present a finite-key analysis for our protocol using the Chernoff bound. Our numerical results show that our protocol using one decoy state can perform better than the previous HD-QKD protocol based on spontaneous parametric down-conversion (SPDC) using two decoy states. Moreover, when considering finite resources, the advantage is more obvious. Highlights: implement the single-photon-added coherent state source in high-dimensional quantum key distribution; enhance both the secret key capacity and the secret key rate compared with previous schemes; show an excellent performance in view of statistical fluctuations.
Pang, Herbert; Jung, Sin-Ho
2013-01-01
A variety of prediction methods are used to relate high-dimensional genome data with a clinical outcome using a prediction model. Once a prediction model is developed from a data set, it should be validated using a resampling method or an independent data set. Although the existing prediction methods have been intensively evaluated by many investigators, there has not been a comprehensive study investigating the performance of the validation methods, especially with a survival clinical outcome. Understanding the properties of the various validation methods can allow researchers to perform more powerful validations while controlling for type I error. In addition, a sample size calculation strategy based on these validation methods is lacking. We conduct extensive simulations to examine the statistical properties of these validation strategies. In both simulations and a real data example, we have found that 10-fold cross-validation with permutation gave the best power while controlling type I error close to the nominal level. Based on this, we have also developed a sample size calculation method that will be used to design a validation study with a user-chosen combination of prediction. Microarray and genome-wide association study data are used as illustrations. The power calculation method in this presentation can be used for the design of any biomedical studies involving high-dimensional data and survival outcomes. PMID:23471879
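The winning strategy above, k-fold cross-validation combined with a label-permutation test, can be sketched generically. The toy threshold classifier, the choice k = 4, and the small dataset below are illustrative assumptions; the study itself uses 10-fold cross-validation with real prediction models and survival outcomes:

```python
import random

def fit(xs, ys):
    """Toy stand-in predictor: threshold at the training mean; the high side
    gets the majority training label found above the threshold."""
    t = sum(xs) / len(xs)
    hi = [ys[i] for i in range(len(xs)) if xs[i] > t]
    hi_label = max(set(hi), key=hi.count) if hi else 1
    return t, hi_label

def score(model, xs, ys):
    t, hi_label = model
    preds = [hi_label if v > t else 1 - hi_label for v in xs]
    return sum(p == truth for p, truth in zip(preds, ys)) / len(ys)

def kfold_score(x, y, k):
    """Mean held-out accuracy over k interleaved folds."""
    idx = list(range(len(x)))
    folds = [idx[i::k] for i in range(k)]
    accs = []
    for fold in folds:
        train = [i for i in idx if i not in fold]
        model = fit([x[i] for i in train], [y[i] for i in train])
        accs.append(score(model, [x[i] for i in fold], [y[i] for i in fold]))
    return sum(accs) / k

def permutation_pvalue(x, y, k, n_perm=200, seed=0):
    """Share of label permutations whose CV score matches or beats the observed one."""
    rng = random.Random(seed)
    observed = kfold_score(x, y, k)
    hits = 0
    for _ in range(n_perm):
        y_perm = y[:]
        rng.shuffle(y_perm)                  # destroy any real x-y association
        hits += kfold_score(x, y_perm, k) >= observed
    return (hits + 1) / (n_perm + 1)         # permutation p-value of the CV score

# Clearly separated classes: perfect CV accuracy and a small permutation p-value.
x = [0.0, 1.0, 2.0, 3.0, 10.0, 11.0, 12.0, 13.0]
y = [0, 0, 0, 0, 1, 1, 1, 1]
acc = kfold_score(x, y, k=4)
p = permutation_pvalue(x, y, k=4)
```

The permutation layer is what controls type I error: under shuffled labels the CV score rarely reaches the observed value, so a genuinely predictive model yields a small p-value.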
Yue, Mu; Li, Jialiang
2017-05-18
Motivated by risk prediction studies with ultra-high-dimensional biomarkers, we propose a novel improvement screening methodology. Accurate risk prediction can be quite useful for patient treatment selection, prevention strategy or disease management in evidence-based medicine. The question of how to choose new markers in addition to the conventional ones is especially important. In the past decade, a number of new measures for quantifying the added value from the new markers were proposed, among which the integrated discrimination improvement (IDI) and net reclassification improvement (NRI) stand out. Meanwhile, C-statistics are routinely used to quantify the capacity of the estimated risk score in discriminating among subjects with different event times. In this paper, we will examine these improvement statistics as well as the norm-based approach for evaluating the incremental values of new markers and compare these four measures by analyzing ultra-high-dimensional censored survival data. In particular, we consider Cox proportional hazards models with varying coefficients. All measures perform very well in simulations and we illustrate our methods in an application to a lung cancer study.
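The integrated discrimination improvement mentioned above has a simple empirical form: the gain in mean risk separation between events and non-events when new markers are added. The sketch below computes it for a toy binary-outcome example; the survival-data version the paper analyzes requires additional time-dependent weighting, which is omitted here.

```python
import numpy as np

def idi(risk_old, risk_new, event):
    """Integrated discrimination improvement: change in the mean risk
    separation between events and non-events after adding new markers."""
    event = np.asarray(event, dtype=bool)
    sep_new = risk_new[event].mean() - risk_new[~event].mean()
    sep_old = risk_old[event].mean() - risk_old[~event].mean()
    return sep_new - sep_old

# Toy example (values invented): the updated model pushes events toward
# higher risk and non-events toward lower risk.
event    = np.array([1, 1, 0, 0])
risk_old = np.array([0.6, 0.5, 0.4, 0.3])
risk_new = np.array([0.8, 0.7, 0.3, 0.2])
print(idi(risk_old, risk_new, event))  # ≈ 0.3 (separation 0.5 vs 0.2)
```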
Sauerbrei, Willi; Boulesteix, Anne-Laure; Binder, Harald
2011-11-01
Multivariable regression models can link a potentially large number of variables to various kinds of outcomes, such as continuous, binary, or time-to-event endpoints. Selection of important variables and selection of the functional form for continuous covariates are key parts of building such models but are notoriously difficult due to several reasons. Multicollinearity between predictors and the limited amount of information in the data can make the stability of the selected models a serious issue. For applications with a moderate number of variables, resampling-based techniques have been developed for diagnosing and improving multivariable regression models. Deriving models for high-dimensional molecular data has led to the need for adapting these techniques to settings where the number of variables is much larger than the number of observations. Three studies with a time-to-event outcome, of which one has high-dimensional data, are used to illustrate several techniques. Investigations at the covariate level and at the predictor level are seen to provide considerable insight into model stability and performance. While some areas are indicated where resampling techniques for model building still need further refinement, our case studies illustrate that these techniques can already be recommended for wider use.
Hierarchical classification of microorganisms based on high-dimensional phenotypic data.
Tafintseva, Valeria; Vigneau, Evelyne; Shapaval, Volha; Cariou, Véronique; Qannari, El Mostafa; Kohler, Achim
2017-11-09
The classification of microorganisms by high-dimensional phenotyping methods such as FTIR spectroscopy is often a complicated process due to the complexity of microbial phylogenetic taxonomy. A hierarchical structure developed for such data can often facilitate the classification analysis. The hierarchical tree structure can either be imposed on a given set of phenotypic data by integrating the phylogenetic taxonomic structure or set up by revealing the inherent clusters in the phenotypic data. In this study, we wanted to compare different approaches to hierarchical classification of microorganisms based on high-dimensional phenotypic data. A set of 19 different species of moulds (filamentous fungi) obtained from the mycological strain collection of the Norwegian Veterinary Institute (Oslo, Norway) is used for the study. Hierarchical cluster analysis is performed for setting up the classification trees. Classification algorithms such as Artificial Neural Networks (ANN), Partial Least Squares Discriminant Analysis (PLSDA), and Random Forest (RF) are used and compared. The two methods ANN and RF outperformed all the other approaches even though they did not utilize the predefined hierarchical structure. To our knowledge, the Random Forest approach is used here for the first time to classify microorganisms by FTIR spectroscopy. This article is protected by copyright. All rights reserved.
Quality metrics in high-dimensional data visualization: an overview and systematization.
Bertini, Enrico; Tatu, Andrada; Keim, Daniel
2011-12-01
In this paper, we present a systematization of techniques that use quality metrics to help in the visual exploration of meaningful patterns in high-dimensional data. In a number of recent papers, different quality metrics are proposed to automate the demanding search through large spaces of alternative visualizations (e.g., alternative projections or ordering), allowing the user to concentrate on the most promising visualizations suggested by the quality metrics. Over the last decade, this approach has witnessed a remarkable development but few reflections exist on how these methods are related to each other and how the approach can be developed further. For this purpose, we provide an overview of approaches that use quality metrics in high-dimensional data visualization and propose a systematization based on a thorough literature review. We carefully analyze the papers and derive a set of factors for discriminating the quality metrics, visualization techniques, and the process itself. The process is described through a reworked version of the well-known information visualization pipeline. We demonstrate the usefulness of our model by applying it to several existing approaches that use quality metrics, and we provide reflections on implications of our model for future research. © 2010 IEEE
Prediction of Incident Diabetes in the Jackson Heart Study Using High-Dimensional Machine Learning.
Directory of Open Access Journals (Sweden)
Ramon Casanova
Full Text Available Statistical models to predict incident diabetes are often based on limited variables. Here we pursued two main goals: (1) investigate the relative performance of a machine learning method such as Random Forests (RF) for detecting incident diabetes in a high-dimensional setting defined by a large set of observational data, and (2) uncover potential predictors of diabetes. The Jackson Heart Study collected data at baseline and in two follow-up visits from 5,301 African Americans. We excluded those with baseline diabetes and no follow-up, leaving 3,633 individuals for analyses. Over a mean 8-year follow-up, 584 participants developed diabetes. The full RF model evaluated 93 variables including demographic, anthropometric, blood biomarker, medical history, and echocardiogram data. We also used RF metrics of variable importance to rank variables according to their contribution to diabetes prediction. We implemented other models based on logistic regression and RF where features were preselected. The RF full model performance was similar (AUC = 0.82) to that of more parsimonious models. The top-ranked variables according to RF included hemoglobin A1C, fasting plasma glucose, waist circumference, adiponectin, c-reactive protein, triglycerides, leptin, left ventricular mass, high-density lipoprotein cholesterol, and aldosterone. This work shows the potential of RF for incident diabetes prediction while dealing with high-dimensional data.
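The RF-plus-importance-ranking workflow the abstract describes can be sketched with scikit-learn on simulated data; the variables and settings below are invented for illustration and are not the Jackson Heart Study data.

```python
# Random Forest prediction with variable-importance ranking, the two
# steps the abstract combines (simulated predictors; index 0 carries
# the outcome signal).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.RandomState(0)
n = 500
X = rng.randn(n, 20)                                  # 20 candidate predictors
y = (X[:, 0] + 0.3 * rng.randn(n) > 0).astype(int)    # feature 0 drives outcome

rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
# Rank predictors by their contribution to the forest's splits.
ranking = np.argsort(rf.feature_importances_)[::-1]
print("top-ranked predictor index:", ranking[0])
```

On real observational data the same `feature_importances_` attribute yields the kind of ranking (A1C, fasting glucose, ...) reported in the abstract.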
Directory of Open Access Journals (Sweden)
Lymn Joanne S
2008-06-01
Full Text Available Abstract Background Problem-based learning is recognised as promoting integration of knowledge and fostering a deeper approach to life-long learning, but is associated with significant resource implications. In order to encourage second year undergraduate medical students to integrate their pharmacological knowledge in a professionally relevant clinical context, with limited staff resources, we developed a novel clustered PBL approach. This paper utilises preliminary data from both the facilitator and student viewpoint to determine whether the use of this novel methodology is feasible with large groups of students. Methods Students were divided into 16 groups (20–21 students/group) and were allocated a PBL facilitator. Each group was then divided into seven subgroups, or clusters, of 2 or 3 students, with each cluster being allocated a specific case. Each cluster was then provided with more detailed clinical information and studied an individual and distinct case-study. An electronic questionnaire was used to evaluate both student and facilitator perception of this clustered PBL format, with each being asked to rate the content, structure, facilitator effectiveness, and their personal view of the wider learning experience. Results Despite initial misgivings, facilitators managed this more complex clustered PBL methodology effectively within the time constraints and reported that they enjoyed the process. They felt that the cases effectively illustrated medical concepts and fitted and reinforced the students' pharmacological knowledge, but were less convinced that the scenario motivated students to use additional resources or stimulated their interest in pharmacology. Student feedback was broadly similar to that of the facilitators; although they were more positive about the scenario stimulating the use of additional resources and an interest in pharmacology. Conclusion This clustered PBL methodology can be successfully used with larger groups of
Stores, Rebecca; Stores, Gregory
2004-01-01
Background: The study concerns the unknown value of group instruction for mothers of young children with Down syndrome (DS) in preventing or minimizing sleep problems. Method: (1) Children with DS were randomly allocated to an Instruction group (given basic information about children's sleep) and a Control group for later comparison including…
Concave 1-norm group selection.
Jiang, Dingfeng; Huang, Jian
2015-04-01
Grouping structures arise naturally in many high-dimensional problems. Incorporation of such information can improve model fitting and variable selection. Existing group selection methods, such as the group Lasso, require correctly specified group membership. However, in practice it can be difficult to correctly specify group membership of all variables. Thus, it is important to develop group selection methods that are robust against group mis-specification. Also, it is desirable to select groups as well as individual variables in many applications. We propose a class of concave 1-norm group penalties that is robust to group mis-specification and can perform bi-level selection. A coordinate descent algorithm is developed to calculate solutions of the proposed group selection method. Theoretical convergence of the algorithm is proved under certain regularity conditions. Comparison with other methods suggests the proposed method is the most robust approach under membership mis-specification. Simulation studies and real data application indicate that the 1-norm concave group selection approach achieves better control of false discovery rates. An R package grppenalty implementing the proposed method is available at CRAN. © Published by Oxford University Press 2014. This work is written by (a) US Government employee(s) and is in the public domain in the US.
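A minimal concrete instance of a concave selection operator is the univariate MCP ("firm") threshold shown below. It is a standard building block of concave-penalized coordinate descent, included here for intuition; it is not necessarily the exact penalty family of the paper.

```python
import numpy as np

def firm_threshold(z, lam, gamma):
    """Univariate MCP ("firm") thresholding operator (gamma > 1).
    Small inputs are zeroed like the lasso; large inputs pass through
    unshrunk, avoiding the lasso's constant bias on strong signals."""
    az = abs(z)
    if az <= lam:
        return 0.0
    if az <= gamma * lam:
        return np.sign(z) * (az - lam) / (1.0 - 1.0 / gamma)
    return float(z)

print(firm_threshold(0.5, 1.0, 3.0))  # weak signal removed -> 0.0
print(firm_threshold(2.0, 1.0, 3.0))  # partial shrinkage  -> ≈ 1.5
print(firm_threshold(4.0, 1.0, 3.0))  # strong signal kept -> 4.0
```

In the concave group penalties the abstract describes, an operator of this kind acts on group-level norms while 1-norm composition still allows individual coefficients within a selected group to be zeroed, which is what makes bi-level selection possible.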
Rumbach, Anna F
2013-11-01
To determine the anatomical and physiological nature of voice problems and their treatment in those group fitness instructors (GFIs) who have sought a medical diagnosis; the impact of voice disorders on quality of life and their contribution to activity limitations and participation restrictions; and the perceived attitudes and level of support from the industry at large in response to instructors' voice disorders and need for treatment. Prospective self-completion questionnaire design. Thirty-eight individuals (3 males and 35 females) currently active in the Australian fitness industry who had been diagnosed with a voice disorder completed an online self-completion questionnaire administered via SurveyMonkey. Laryngeal pathology included vocal fold nodules (N = 24), vocal fold cysts (N = 2), vocal fold hemorrhage (N = 1), and recurrent chronic laryngitis (N = 3). Eight individuals reported vocal strain and muscle tension dysphonia without concurrent vocal fold pathology. Treatment methods were variable, with 73.68% (N = 28) receiving voice therapy alone, 7.89% (N = 3) having voice therapy in combination with surgery, and 10.53% (N = 4) having voice therapy in conjunction with medication. Three individuals (7.89%) received no treatment for their voice disorder. During treatment, 82% of the cohort altered their teaching practices. Half of the cohort reported that their voice problems led to social withdrawal, decreased job satisfaction, and emotional distress. Greater than 65% also reported being dissatisfied with the level of industry and coworker support during the period of voice recovery. This study identifies that GFIs are susceptible to a number of voice disorders that impact their social and professional lives, and there is a need for more proactive training and advice on voice care for instructors, as well as those in management positions within the industry to address mixed approaches and opinions regarding the importance of voice care. Copyright © 2013
Schoenfeld-Tacher, Regina; Bright, Janice M; McConnell, Sherry L; Marley, Wanda S; Kogan, Lori R
2005-01-01
The objective of this investigation was to ascertain whether and how the introduction of a new technology (WebCT) influenced faculty teaching styles while facilitating small group problem-based learning (PBL) sessions in a professional veterinary medical (PVM) program. The following questions guided the study: (1) How does the use of technology affect faculty teaching behaviors? (2) Do the facilitators' interactions with WebCT technology change over the course of one semester? (3) What is the perceived impact of WebCT on facilitators' role in PBL? The study employed a combination of qualitative (case study) and semi-quantitative (survey) methods to explore these issues. Nine clinical sciences faculty members, leading a total of six PBL groups, were observed over the course of an academic semester for a total of 20 instructional sessions. The qualitative data gathered by observing faculty as they facilitated PBL sessions yielded three major themes: (1) How do PBL facilitators adapt to the addition of WebCT technology? (2) Does this technology affect teaching? and (3) How do PBL facilitators interact with their students and each other over the course of a semester? No direct evidence was found to suggest that use of WebCT affected teaching behaviors (e.g., student-centered vs. teacher-centered instruction). However, all facilitators showed a moderate increase in comfort with the technology during the semester, and one participant showed remarkable gains in technology skills. The teaching theme provided insight into how facilitators foster learning in a PBL setting as compared to a traditional lecture. A high degree of variability in teaching styles was observed, but individuals' styles tended to remain stable over the course of the semester. Nevertheless, all facilitators interacted similarly with students, in a more caring and approachable manner, when compared to the classroom or clinic atmospheres.
High dimensional biological data retrieval optimization with NoSQL technology.
Wang, Shicai; Pandis, Ioannis; Wu, Chao; He, Sijin; Johnson, David; Emam, Ibrahim; Guitton, Florian; Guo, Yike
2014-01-01
High-throughput transcriptomic data generated by microarray experiments is the most abundant and frequently stored kind of data currently used in translational medicine studies. Although microarray data is supported in data warehouses such as tranSMART, queries retrieving hundreds of different patients' gene expression records from relational databases perform poorly. Non-relational data models, such as the key-value model implemented in NoSQL databases, hold promise to be more performant solutions. Our motivation is to improve the performance of the tranSMART data warehouse with a view to supporting Next Generation Sequencing data. In this paper we introduce a new data model better suited for high-dimensional data storage and querying, optimized for database scalability and performance. We have designed a key-value pair data model to support faster queries over large-scale microarray data and implemented the model using HBase, an implementation of Google's BigTable storage system. An experimental performance comparison was carried out against the traditional relational data model implemented in both MySQL Cluster and MongoDB, using a large publicly available transcriptomic data set taken from NCBI GEO concerning Multiple Myeloma. Our new key-value data model implemented on HBase exhibits an average 5.24-fold increase in high-dimensional biological data query performance compared to the relational model implemented on MySQL Cluster, and an average 6.47-fold increase on query performance on MongoDB. The performance evaluation found that the new key-value data model, in particular its implementation in HBase, outperforms the relational model currently implemented in tranSMART. We propose that NoSQL technology holds great promise for large-scale data management, in particular for high-dimensional biological data such as that demonstrated in the performance evaluation described in this paper. We aim to use this new data model as a basis for migrating
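The key-value idea can be sketched with a plain Python dict standing in for HBase: a composite row key groups a patient's expression values under a common prefix, so one prefix scan retrieves a full profile instead of a slow relational join. The row-key layout and identifiers below are invented for illustration and may differ from the paper's actual schema.

```python
# Key-value layout sketch for expression data (plain dict in place of
# HBase; row keys and probe IDs are illustrative only).
store = {}

def put_expression(patient_id, probe_id, value):
    # Composite row key: one entry per (patient, probe) pair, so a scan
    # over a patient prefix returns that patient's whole profile.
    store[f"{patient_id}:{probe_id}"] = value

def get_patient_profile(patient_id):
    prefix = f"{patient_id}:"
    return {k[len(prefix):]: v for k, v in store.items() if k.startswith(prefix)}

put_expression("P001", "ILMN_1343291", 8.91)
put_expression("P001", "ILMN_1343295", 7.02)
put_expression("P002", "ILMN_1343291", 9.14)
print(get_patient_profile("P001"))
```

In HBase the same pattern is a prefix scan over sorted row keys, which is what gives the key-value model its query-time advantage over joins across normalized tables.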
Decorrelation of the True and Estimated Classifier Errors in High-Dimensional Settings
Directory of Open Access Journals (Sweden)
Hua Jianping
2007-01-01
Full Text Available The aim of many microarray experiments is to build discriminatory diagnosis and prognosis models. Given the huge number of features and the small number of examples, model validity which refers to the precision of error estimation is a critical issue. Previous studies have addressed this issue via the deviation distribution (estimated error minus true error), in particular the deterioration of cross-validation precision in high-dimensional settings where feature selection is used to mitigate the peaking phenomenon (overfitting). Because classifier design is based upon random samples, both the true and estimated errors are sample-dependent random variables, and one would expect a loss of precision if the estimated and true errors are not well correlated, so that natural questions arise as to the degree of correlation and the manner in which lack of correlation impacts error estimation. We demonstrate the effect of correlation on error precision via a decomposition of the variance of the deviation distribution, observe that the correlation is often severely decreased in high-dimensional settings, and show that the effect of high dimensionality on error estimation tends to result more from its decorrelating effects than from its impact on the variance of the estimated error. We consider the correlation between the true and estimated errors under different experimental conditions using both synthetic and real data, several feature-selection methods, different classification rules, and three error estimators commonly used (leave-one-out cross-validation, k-fold cross-validation, and the .632 bootstrap). Moreover, three scenarios are considered: (1) feature selection, (2) known-feature set, and (3) all features. Only the first is of practical interest; however, the other two are needed for comparison purposes. We will observe that the true and estimated errors tend to be much more correlated in the case of a known feature set than with either feature selection
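The variance decomposition referred to in the abstract can be stated compactly. Writing $\hat\varepsilon$ for the estimated error and $\varepsilon$ for the true error (notation ours, not necessarily the paper's):

```latex
\operatorname{Var}(\hat\varepsilon - \varepsilon)
  = \operatorname{Var}(\hat\varepsilon) + \operatorname{Var}(\varepsilon)
  - 2\,\rho\,\sigma_{\hat\varepsilon}\,\sigma_{\varepsilon}
```

so even with both variances held fixed, a drop in the correlation $\rho$ inflates the spread of the deviation distribution, which is exactly the decorrelation effect the study isolates.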
Directory of Open Access Journals (Sweden)
Wyper Russell B
2012-09-01
Full Text Available Abstract Background The purpose of this exploratory study is to pilot a biopsychosocial instrument called the Perceived Impact of Problem Profile (PIPP) on a cohort of landmine/Unexploded Ordnance (UXO) victims with lower limb disability versus a cohort of persons with similar disability due to other trauma or medical causes. The aim is to provide greater understanding of the psychosocial impact of landmine/UXO injury to inform victim assistance interventions within Lao PDR. Methods This study employs a mixed methods design, which involved piloting the PIPP instrument through an interviewer administered questionnaire and demographic questionnaire. Fifty-one participants were interviewed in both urban and rural locations within Lao PDR. Results An analysis of the data reveals significant differences in perceived impact for pain, anxiety and how recently the injury/illness occurred. Both groups complained of high levels of anxiety and depression; landmine/UXO victims who complained of anxiety and depression reported a much greater impact on life satisfaction and mood. Conclusion The perceived impact of the disability is greatest on psychosocial factors for both cohorts, but especially in landmine/UXO victims, emphasising the need to focus on improving psychosocial interventions for landmine/UXO victims within victim assistance programmes in Lao PDR.
Inference for feature selection using the Lasso with high-dimensional data
DEFF Research Database (Denmark)
Brink-Jensen, Kasper; Ekstrøm, Claus Thorn
2014-01-01
Penalized regression models such as the Lasso have proved useful for variable selection in many fields, especially for situations with high-dimensional data where the number of predictors far exceeds the number of observations. These methods identify and rank variables of importance but do not generally provide any inference for the selected variables. Thus, the variables selected might be the "most important" but need not be significant. We propose a significance test for the selection found by the Lasso. We introduce a procedure that computes inference and p-values for the chosen features, and evaluate it in simulation settings that involve various effect strengths and correlation between predictors. The algorithm is also applied to a prostate cancer dataset that has been analyzed in recent papers on the subject. The proposed method is found to provide a powerful way to make inference for feature selection even for small samples.
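A generic permutation-based stand-in for such a significance test can be sketched as follows. This is not the authors' procedure, just an illustration of attaching a p-value to Lasso selection on simulated data; all settings are invented.

```python
# Permutation null for the strongest Lasso coefficient: refit on
# outcome permutations and compare against the observed statistic.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.RandomState(0)
n, p = 80, 200                        # more predictors than observations
X = rng.randn(n, p)
y = 2.0 * X[:, 0] + rng.randn(n)      # only feature 0 is truly relevant

def max_abs_coef(X, y):
    return np.max(np.abs(Lasso(alpha=0.3).fit(X, y).coef_))

observed = max_abs_coef(X, y)
null = [max_abs_coef(X, rng.permutation(y)) for _ in range(50)]
pvalue = (1 + sum(s >= observed for s in null)) / (1 + len(null))
print(f"p-value for the strongest selected feature: {pvalue:.3f}")
```

The point the abstract makes survives the toy setup: selection alone says nothing about significance, and a null distribution is needed to attach a p-value to the selected feature.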
Bilionis, Ilias; Gonzalez, Marcial
2016-01-01
The prohibitive cost of performing Uncertainty Quantification (UQ) tasks with a very large number of input parameters can be addressed, if the response exhibits some special structure that can be discovered and exploited. Several physical responses exhibit a special structure known as an active subspace (AS), a linear manifold of the stochastic space characterized by maximal response variation. The idea is that one should first identify this low dimensional manifold, project the high-dimensional input onto it, and then link the projection to the output. In this work, we develop a probabilistic version of AS which is gradient-free and robust to observational noise. Our approach relies on a novel Gaussian process regression with built-in dimensionality reduction with the AS represented as an orthogonal projection matrix that serves as yet another covariance function hyper-parameter to be estimated from the data. To train the model, we design a two-step maximum likelihood optimization procedure that ensures the ...
PyDREAM: High-dimensional parameter inference for biological models in Python.
Shockley, Erin M; Vrugt, Jasper A; Lopez, Carlos F
2017-10-04
Biological models contain many parameters whose values are difficult to measure directly via experimentation and therefore require calibration against experimental data. Markov chain Monte Carlo (MCMC) methods are suitable to estimate multivariate posterior model parameter distributions, but these methods may exhibit slow or premature convergence in high-dimensional search spaces. Here, we present PyDREAM, a Python implementation of the (Multiple-Try) Differential Evolution Adaptive Metropolis (DREAM(ZS)) algorithm developed by Vrugt and ter Braak (2008) and Laloy and Vrugt (2012). PyDREAM achieves excellent performance for complex, parameter-rich models and takes full advantage of distributed computing resources, facilitating parameter inference and uncertainty estimation of CPU-intensive biological models. PyDREAM is freely available under the GNU GPLv3 license from the Lopez lab GitHub repository at http://github.com/LoLab-VU/PyDREAM. c.lopez@vanderbilt.edu. Supplementary data are available at Bioinformatics online.
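For intuition about the calibration task PyDREAM addresses, the sketch below runs a minimal random-walk Metropolis sampler on simulated data. PyDREAM itself uses the far more sophisticated adaptive DREAM(ZS) scheme with multiple chains, so this is illustrative only; the model and settings are invented.

```python
# Minimal random-walk Metropolis MCMC: calibrate one parameter (mu)
# against simulated data, the kind of task PyDREAM scales up to many
# parameters with adaptive multi-chain proposals.
import numpy as np

rng = np.random.RandomState(0)
data = rng.normal(loc=3.0, scale=1.0, size=200)   # simulated observations

def log_posterior(mu):
    # Gaussian likelihood with known sigma = 1 and a flat prior on mu.
    return -0.5 * np.sum((data - mu) ** 2)

samples, mu = [], 0.0
for _ in range(5000):
    prop = mu + 0.2 * rng.randn()                 # symmetric proposal
    if np.log(rng.rand()) < log_posterior(prop) - log_posterior(mu):
        mu = prop                                 # Metropolis accept
    samples.append(mu)

posterior = np.array(samples[1000:])              # discard burn-in
print(f"posterior mean of mu: {posterior.mean():.2f}")
```

In high-dimensional parameter spaces a fixed-width random walk like this mixes poorly, which is precisely the slow-convergence problem the abstract says DREAM(ZS) is designed to mitigate.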
DEFF Research Database (Denmark)
Ding, Yunhong; Bacco, Davide; Dalgaard, Kjeld
2017-01-01
Quantum key distribution provides an efficient means to exchange information in an unconditionally secure way. Historically, quantum key distribution protocols have been based on binary signal formats, such as two polarization states, and the transmitted information efficiency of the quantum key is intrinsically limited to 1 bit/photon. Here we propose and experimentally demonstrate, for the first time, a high-dimensional quantum key distribution protocol based on space division multiplexing in multicore fiber using silicon photonic integrated lightwave circuits. We successfully realized three mutually [...]-dimensional quantum states, which enables breaking the information efficiency limit of traditional quantum key distribution protocols. In addition, the silicon photonic circuits used in our work integrate variable optical attenuators, highly efficient multicore fiber couplers, and Mach-Zehnder interferometers, enabling...
High-dimensional single-cell analysis reveals the immune signature of narcolepsy.
Hartmann, Felix J; Bernard-Valnet, Raphaël; Quériault, Clémence; Mrdjen, Dunja; Weber, Lukas M; Galli, Edoardo; Krieg, Carsten; Robinson, Mark D; Nguyen, Xuan-Hung; Dauvilliers, Yves; Liblau, Roland S; Becher, Burkhard
2016-11-14
Narcolepsy type 1 is a devastating neurological sleep disorder resulting from the destruction of orexin-producing neurons in the central nervous system (CNS). Despite its striking association with the HLA-DQB1*06:02 allele, the autoimmune etiology of narcolepsy has remained largely hypothetical. Here, we compared peripheral mononucleated cells from narcolepsy patients with HLA-DQB1*06:02-matched healthy controls using high-dimensional mass cytometry in combination with algorithm-guided data analysis. Narcolepsy patients displayed multifaceted immune activation in CD4 + and CD8 + T cells dominated by elevated levels of B cell-supporting cytokines. Additionally, T cells from narcolepsy patients showed increased production of the proinflammatory cytokines IL-2 and TNF. Although it remains to be established whether these changes are primary to an autoimmune process in narcolepsy or secondary to orexin deficiency, these findings are indicative of inflammatory processes in the pathogenesis of this enigmatic disease. © 2016 Hartmann et al.
High-Dimensional Disorder-Driven Phenomena in Weyl Semimetals, Semiconductors and Related Systems
Syzranov, S V
2016-01-01
It is commonly believed that a non-interacting disordered electronic system can undergo only the Anderson metal-insulator transition. It has been suggested, however, that a broad class of systems can display disorder-driven transitions distinct from Anderson localisation that have manifestations in the disorder-averaged density of states, conductivity and other observables. Such transitions have received particular attention in the context of recently discovered 3D Weyl and Dirac materials but have also been predicted in cold-atom systems with long-range interactions, quantum kicked rotors and all sufficiently high-dimensional systems. Moreover, such systems exhibit unconventional behaviour of Lifshitz tails, energy-level statistics and ballistic-transport properties. Here we review recent progress and the status of results on non-Anderson disorder-driven transitions and related phenomena.
Wang, Zhiping; Chen, Jinyu; Yu, Benli
2017-02-20
We investigate the two-dimensional (2D) and three-dimensional (3D) atom localization behaviors via spontaneously generated coherence in a microwave-driven four-level atomic system. Owing to the space-dependent atom-field interaction, it is found that the detecting probability and precision of 2D and 3D atom localization behaviors can be significantly improved via adjusting the system parameters, the phase, amplitude, and initial population distribution. Interestingly, the atom can be localized in volumes that are substantially smaller than a cubic optical wavelength. Our scheme opens a promising way to achieve high-precision and high-efficiency atom localization, which provides some potential applications in high-dimensional atom nanolithography.
Rupp, Matthias; Schneider, Petra; Schneider, Gisbert
2009-11-15
Measuring the (dis)similarity of molecules is important for many cheminformatics applications like compound ranking, clustering, and property prediction. In this work, we focus on real-valued vector representations of molecules (as opposed to the binary spaces of fingerprints). We demonstrate the influence which the choice of (dis)similarity measure can have on results, and provide recommendations for such choices. We review the mathematical concepts used to measure (dis)similarity in vector spaces, namely norms, metrics, inner products, and similarity coefficients, as well as the relationships between them, employing (dis)similarity measures commonly used in cheminformatics as examples. We present several phenomena (empty space phenomenon, sphere volume related phenomena, distance concentration) in high-dimensional descriptor spaces which are not encountered in two and three dimensions. These phenomena are theoretically characterized and illustrated on both artificial and real (bioactivity) data. 2009 Wiley Periodicals, Inc.
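The distance-concentration phenomenon mentioned above is easy to reproduce numerically: for uniform random points, the relative contrast between the farthest and nearest neighbor of a query point shrinks as the dimension grows. The sketch below uses synthetic data, not molecular descriptors.

```python
# Distance concentration: relative contrast (dmax - dmin) / dmin of
# distances from one point to all others, for growing dimensionality.
import numpy as np

def relative_contrast(dim, n_points=200, seed=0):
    rng = np.random.RandomState(seed)
    X = rng.rand(n_points, dim)                      # uniform random points
    d = np.linalg.norm(X[1:] - X[0], axis=1)         # distances to first point
    return (d.max() - d.min()) / d.min()

for dim in (2, 10, 100, 1000):
    print(dim, round(relative_contrast(dim), 3))     # contrast shrinks with dim
```

Vanishing contrast is why nearest-neighbor rankings, and hence many (dis)similarity-based cheminformatics workflows, behave qualitatively differently in high-dimensional descriptor spaces than in two or three dimensions.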
High-dimensional neural-network potentials for multicomponent systems: Applications to zinc oxide
Artrith, Nongnuch; Morawietz, Tobias; Behler, Jörg
2011-04-01
Artificial neural networks represent an accurate and efficient tool to construct high-dimensional potential-energy surfaces based on first-principles data. However, so far the main drawback of this method has been the limitation to a single atomic species. We present a generalization to compounds of arbitrary chemical composition, which now enables simulations of a wide range of systems containing large numbers of atoms. The required incorporation of long-range interactions is achieved by combining the numerical accuracy of neural networks with an electrostatic term based on environment-dependent charges. Using zinc oxide as a benchmark system we show that the neural network potential-energy surface is in excellent agreement with density-functional theory reference calculations, while the evaluation is many orders of magnitude faster.
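The generalization described above, short-range atomic neural-network energies plus an electrostatic term over environment-dependent charges, can be sketched as follows; the symbols are illustrative and not necessarily the paper's exact notation:

```latex
E_{\mathrm{tot}} = \sum_{i=1}^{N} E_i^{\mathrm{short}}\!\left(\mathbf{G}_i\right)
  \;+\; \frac{1}{2} \sum_{i \neq j} \frac{q_i\, q_j}{r_{ij}}
```

where $\mathbf{G}_i$ are local-environment descriptors feeding the atomic network of species $i$, and the charges $q_i$ are themselves predicted from the local environment, so long-range electrostatics enter without sacrificing the evaluation speed that makes the method orders of magnitude faster than DFT.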
High-dimensional neural network potentials for metal surfaces: A prototype study for copper
Artrith, Nongnuch; Behler, Jörg
2012-01-01
The atomic environments at metal surfaces differ strongly from the bulk, and, in particular, in case of reconstructions or imperfections at “real surfaces,” very complicated atomic configurations can be present. This structural complexity poses a significant challenge for the development of accurate interatomic potentials suitable for large-scale molecular dynamics simulations. In recent years, artificial neural networks (NN) have become a promising new method for the construction of potential-energy surfaces for difficult systems. In the present work, we explore the applicability of such high-dimensional NN potentials to metal surfaces using copper as a benchmark system. A detailed analysis of the properties of bulk copper and of a wide range of surface structures shows that NN potentials can provide results of almost density functional theory (DFT) quality at a small fraction of the computational costs.
Buchanan-Pascall, Sarah; Gray, Kylie M; Gordon, Michael; Melvin, Glenn A
2017-07-11
This systematic review and meta-analysis evaluates the efficacy of parent training group interventions to treat child externalizing and/or internalizing problems. A search identified 21 randomized controlled trials of parent group interventions aimed at ameliorating child externalizing and/or internalizing problems in children aged 4-12 years. Random effects meta-analyses yielded significant pooled treatment effect size (g) estimates for child externalizing (g = -0.38) and internalizing problems (g = -0.18). Child anxiety symptoms or internalizing problems evident in children with externalizing behavior problems did not change significantly following intervention. Study quality was a statistically significant moderator of treatment response for child externalizing problems, however hours of planned parent group treatment and treatment recipient were not. Findings support the use of parent group interventions as an effective treatment for reducing externalizing problems in children aged 4-12 years. While the effect on internalizing symptoms was statistically significant, its magnitude was limited, indicating a need for further investigation.
Niec, LN; Barnett, ML; Prewett, MS; Chatham, JRS
2016-01-01
Although efficacious interventions exist for childhood conduct problems, a majority of families in need of services do not receive them. To address problems of treatment access and adherence, innovative adaptations of current interventions are needed. This randomized controlled trial investigated the relative efficacy of a novel format of parent-child interaction therapy (PCIT), a treatment for young children with conduct problems. Eighty-one families with 3- to 6-year-old children (71.6% boys, 8...
Energy Technology Data Exchange (ETDEWEB)
Brooks, B.R.
1979-09-01
The Graphical Unitary Group Approach (GUGA) was cast into an extraordinarily powerful form by restructuring the Hamiltonian in terms of loop types. This restructuring allows the adoption of the loop-driven formulation which illuminates vast numbers of previously unappreciated relationships between otherwise distinct Hamiltonian matrix elements. The theoretical/methodological contributions made here include the development of the loop-driven formula generation algorithm, a solution of the upper walk problem used to develop a loop breakdown algorithm, the restriction of configuration space employed to the multireference interacting space, and the restructuring of the Hamiltonian in terms of loop types. Several other developments are presented and discussed. Among these developments are the use of new segment coefficients, improvements in the loop-driven algorithm, implicit generation of loops wholly within the external space adapted within the framework of the loop-driven methodology, and comparisons of the diagonalization tape method to the direct method. It is also shown how it is possible to implement the GUGA method without the time-consuming full m⁵ four-index transformation. A particularly promising new direction presented here involves the use of the GUGA methodology to obtain one-electron and two-electron density matrices. Once these are known, analytical gradients (first derivatives) of the CI potential energy are easily obtained. Several test calculations are examined in detail to illustrate the unique features of the method. Also included is a calculation on the asymmetric 2¹A′ state of SO₂ with 23,613 configurations to demonstrate methods for the diagonalization of very large matrices on a minicomputer. 6 figures, 6 tables.
Directory of Open Access Journals (Sweden)
Clark MD
2009-09-01
Full Text Available Abstract Background The aim was to compare effectiveness of group versus individual sessions of physiotherapy in terms of symptoms, quality of life, and costs, and to investigate the effect of patient preference on uptake and outcome of treatment. Methods A pragmatic, multi-centre randomised controlled trial in five British National Health Service physiotherapy departments. 174 women with stress and/or urge incontinence were randomised to receive treatment from a physiotherapist delivered in a group or individual setting over three weekly sessions. Outcomes were measured as Symptom Severity Index; Incontinence-related Quality of Life questionnaire; National Health Service costs, and out-of-pocket expenses. Results The majority of women expressed no preference (55%) or a preference for individual treatment (36%). Treatment attendance was good, with similar attendance under both service delivery models. Overall, there were no statistically significant differences in symptom severity or quality of life outcomes between the models. Over 85% of women reported a subjective benefit of treatment, with a slightly higher rating in the individual compared with the group setting. When all health care costs were considered, the average cost per patient was lower for group sessions (mean cost difference £52.91; 95% confidence interval £25.82 to £80.00). Conclusion Indications are that whilst some women may have an initial preference for individual treatment, there are no substantial differences in symptom or quality of life outcomes, or in non-attendance. Because of the significant difference in mean cost, group treatment is recommended. Trial Registration Trial registration number: ISRCTN 16772662
Hall, David; Buzwell, Simone
2013-01-01
The increase in popularity of group work in higher education has been accompanied by an increase in the frequency of reports of students not contributing equally to work within the groups. The effect of these so-called "free-riders" on other students can make group work an unpleasant experience for some. Of most…
Bonini, Nicolao; Grecucci, Alessandro; Nicolè, Manuel; Savadori, Lucia
2017-08-02
A group of pathological gamblers and a group of problem gamblers (i.e., gamblers at risk of becoming pathological) were compared to healthy controls on their risk-taking propensity after prior losses. Each participant played both the Balloon Analogue Risk Taking task (BART) and a modified version of the same task, where individuals face five repeated predetermined early losses at the onset of the game. No significant difference in risk-taking was found between groups on the standard BART task, while significant differences emerged when comparing behaviors in the two tasks: both pathological gamblers and controls reduced their risk-taking tendency after prior losses in the modified BART compared to the standard BART, whereas problem gamblers showed no reduction in risk-taking after prior losses. We interpret these results as a sign of a reduced sensitivity to negative feedback in problem gamblers, which might help explain their loss-chasing tendency.
Directory of Open Access Journals (Sweden)
Shiqing Wang
2013-01-01
Full Text Available During the last few years, a great deal of attention has been focused on the Lasso and the Dantzig selector in high-dimensional linear regression, where the number of variables can be much larger than the sample size. Under a sparsity scenario, several authors (see, e.g., Bickel et al., 2009, Bunea et al., 2007, Candès and Tao, 2007, Donoho et al., 2006, Koltchinskii, 2009, Meinshausen and Yu, 2009, Rosenbaum and Tsybakov, 2010, Tsybakov, 2006, van de Geer, 2008, and Zhang and Huang, 2008) discussed the relations between the Lasso and the Dantzig selector and derived sparsity oracle inequalities for the prediction risk and bounds on the estimation loss. In this paper, we point out that some of these authors overemphasize the role of certain sparsity conditions, and that assumptions based on such sparsity conditions may lead to poor results. We give better assumptions and methods that avoid using the sparsity condition. In comparison with the results of Bickel et al. (2009), more precise oracle inequalities for the prediction risk and bounds on the estimation loss are derived when the number of variables can be much larger than the sample size.
Mapping the human DC lineage through the integration of high-dimensional techniques.
See, Peter; Dutertre, Charles-Antoine; Chen, Jinmiao; Günther, Patrick; McGovern, Naomi; Irac, Sergio Erdal; Gunawan, Merry; Beyer, Marc; Händler, Kristian; Duan, Kaibo; Sumatoh, Hermi Rizal Bin; Ruffin, Nicolas; Jouve, Mabel; Gea-Mallorquí, Ester; Hennekam, Raoul C M; Lim, Tony; Yip, Chan Chung; Wen, Ming; Malleret, Benoit; Low, Ivy; Shadan, Nurhidaya Binte; Fen, Charlene Foong Shu; Tay, Alicia; Lum, Josephine; Zolezzi, Francesca; Larbi, Anis; Poidinger, Michael; Chan, Jerry K Y; Chen, Qingfeng; Rénia, Laurent; Haniffa, Muzlifah; Benaroch, Philippe; Schlitzer, Andreas; Schultze, Joachim L; Newell, Evan W; Ginhoux, Florent
2017-06-09
Dendritic cells (DC) are professional antigen-presenting cells that orchestrate immune responses. The human DC population comprises two main functionally specialized lineages, whose origins and differentiation pathways remain incompletely defined. Here, we combine two high-dimensional technologies, single-cell messenger RNA sequencing (scmRNAseq) and cytometry by time-of-flight (CyTOF), to identify human blood CD123+CD33+CD45RA+ DC precursors (pre-DC). Pre-DC share surface markers with plasmacytoid DC (pDC) but have distinct functional properties that were previously attributed to pDC. Tracing the differentiation of DC from the bone marrow to the peripheral blood revealed that the pre-DC compartment contains distinct lineage-committed subpopulations, including one early uncommitted CD123high pre-DC subset and two CD45RA+CD123low lineage-committed subsets exhibiting functional differences. The discovery of multiple committed pre-DC populations opens promising new avenues for the therapeutic exploitation of DC subset-specific targeting. Copyright © 2017, American Association for the Advancement of Science.
Huang, Yen-Tsung; Pan, Wen-Chi
2016-06-01
Causal mediation modeling has become a popular approach for studying the effect of an exposure on an outcome through a mediator. However, current methods are not applicable to the setting with a large number of mediators. We propose a testing procedure for mediation effects of high-dimensional continuous mediators. We characterize the marginal mediation effect, the multivariate component-wise mediation effects, and the L2 norm of the component-wise effects, and develop a Monte-Carlo procedure for evaluating their statistical significance. To accommodate the setting with a large number of mediators and a small sample size, we further propose a transformation model using the spectral decomposition. Under the transformation model, mediation effects can be estimated using a series of regression models with a univariate transformed mediator, and examined by our proposed testing procedure. Extensive simulation studies are conducted to assess the performance of our methods for continuous and dichotomous outcomes. We apply the methods to analyze genomic data investigating the effect of microRNA miR-223 on a dichotomous survival status of patients with glioblastoma multiforme (GBM). We identify nine gene ontology sets with expression values that significantly mediate the effect of miR-223 on GBM survival. © 2015, The International Biometric Society.
Multi-Scale Factor Analysis of High-Dimensional Brain Signals
Ting, Chee-Ming
2017-05-18
In this paper, we develop an approach to modeling high-dimensional networks with a large number of nodes arranged in a hierarchical and modular structure. We propose a novel multi-scale factor analysis (MSFA) model which partitions the massive spatio-temporal data defined over the complex networks into a finite set of regional clusters. To achieve further dimension reduction, we represent the signals in each cluster by a small number of latent factors. The correlation matrix for all nodes in the network is approximated by lower-dimensional sub-structures derived from the cluster-specific factors. To estimate regional connectivity between numerous nodes (within each cluster), we apply principal components analysis (PCA) to produce factors which are derived as the optimal reconstruction of the observed signals under the squared loss. Then, we estimate global connectivity (between clusters or sub-networks) based on the factors across regions using the RV-coefficient as the cross-dependence measure. This gives a reliable and computationally efficient multi-scale analysis of both regional and global dependencies of the large networks. The proposed novel approach is applied to estimate brain connectivity networks using functional magnetic resonance imaging (fMRI) data. Results on resting-state fMRI reveal interesting modular and hierarchical organization of human brain networks during rest.
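As a rough illustration of the RV-coefficient this abstract uses as a cross-dependence measure (a sketch only, not the authors' implementation), the coefficient between two data matrices observed on the same samples can be computed directly from their centered Gram matrices:

```python
import math

def _center_columns(X):
    # subtract each column mean (X is a list of rows)
    n, p = len(X), len(X[0])
    means = [sum(row[j] for row in X) / n for j in range(p)]
    return [[row[j] - means[j] for j in range(p)] for row in X]

def _gram(X):
    # XX^T: n x n matrix of inner products between samples (rows)
    return [[sum(a * b for a, b in zip(ri, rj)) for rj in X] for ri in X]

def _trace_prod(A, B):
    # trace(A B) for square matrices of equal size
    n = len(A)
    return sum(A[i][k] * B[k][i] for i in range(n) for k in range(n))

def rv_coefficient(X, Y):
    """RV coefficient: tr(Sx Sy) / sqrt(tr(Sx Sx) tr(Sy Sy)), with
    Sx = Xc Xc^T for the column-centered Xc. Ranges from 0
    (unrelated sample configurations) to 1 (identical)."""
    Sx = _gram(_center_columns(X))
    Sy = _gram(_center_columns(Y))
    return _trace_prod(Sx, Sy) / math.sqrt(
        _trace_prod(Sx, Sx) * _trace_prod(Sy, Sy))
```

In the MSFA setting, X and Y would be the factor scores of two clusters; the toy version above works for any two matrices sharing a sample dimension.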
High-Dimensional Neural Network Potentials for Organic Reactions and an Improved Training Algorithm.
Gastegger, Michael; Marquetand, Philipp
2015-05-12
Artificial neural networks (NNs) represent a relatively recent approach for the prediction of molecular potential energies, suitable for simulations of large molecules and long time scales. By using NNs to fit electronic structure data, it is possible to obtain empirical potentials of high accuracy combined with the computational efficiency of conventional force fields. However, as opposed to the latter, changing bonding patterns and unusual coordination geometries can be described due to the underlying flexible functional form of the NNs. One of the most promising approaches in this field is the high-dimensional neural network (HDNN) method, which is especially adapted to the prediction of molecular properties. While HDNNs have been mostly used to model solid state systems and surface interactions, we present here the first application of the HDNN approach to an organic reaction, the Claisen rearrangement of allyl vinyl ether to 4-pentenal. To construct the corresponding HDNN potential, a new training algorithm is introduced. This algorithm is termed "element-decoupled" global extended Kalman filter (ED-GEKF) and is based on the decoupled Kalman filter. Using a metadynamics trajectory computed with density functional theory as reference data, we show that the ED-GEKF exhibits superior performance, in terms of both accuracy and training speed, compared to other variants of the Kalman filter hitherto employed in HDNN training. In addition, the effect of including forces during ED-GEKF training on the resulting potentials was studied.
Spanning high-dimensional expression space using ribosome-binding site combinatorics.
Zelcbuch, Lior; Antonovsky, Niv; Bar-Even, Arren; Levin-Karp, Ayelet; Barenholz, Uri; Dayagi, Michal; Liebermeister, Wolfram; Flamholz, Avi; Noor, Elad; Amram, Shira; Brandis, Alexander; Bareia, Tasneem; Yofe, Ido; Jubran, Halim; Milo, Ron
2013-05-01
Protein levels are a dominant factor shaping natural and synthetic biological systems. Although proper functioning of metabolic pathways relies on precise control of enzyme levels, the experimental ability to balance the levels of many genes in parallel is a major outstanding challenge. Here, we introduce a rapid and modular method to span the expression space of several proteins in parallel. By combinatorially pairing genes with a compact set of ribosome-binding sites, we modulate protein abundance by several orders of magnitude. We demonstrate our strategy by using a synthetic operon containing fluorescent proteins to span a 3D color space. Using the same approach, we modulate a recombinant carotenoid biosynthesis pathway in Escherichia coli to reveal a diversity of phenotypes, each characterized by a distinct carotenoid accumulation profile. In a single combinatorial assembly, we achieve a yield of the industrially valuable compound astaxanthin 4-fold higher than previously reported. The methodology presented here provides an efficient tool for exploring a high-dimensional expression space to locate desirable phenotypes.
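The combinatorial gene-RBS pairing described above can be sketched in a few lines; the gene and strength names below are purely illustrative (the carotenoid gene symbols are stand-ins, not the paper's actual constructs):

```python
from itertools import product

# Hypothetical three-gene operon and a compact library of
# ribosome-binding sites (RBSs) of graded strength; names illustrative.
genes = ["crtE", "crtB", "crtI"]
rbs_library = ["weak", "medium", "strong"]

# Pair every gene with every RBS choice: the design space spans
# len(rbs_library) ** len(genes) combinations in a single assembly.
designs = [dict(zip(genes, choice))
           for choice in product(rbs_library, repeat=len(genes))]
```

With three sites per gene, three genes already give 27 distinct expression profiles, which is how a compact RBS set spans a high-dimensional expression space.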
A novel algorithm for simultaneous SNP selection in high-dimensional genome-wide association studies
Directory of Open Access Journals (Sweden)
Zuber Verena
2012-10-01
Full Text Available Abstract Background Identification of causal SNPs in most genome wide association studies relies on approaches that consider each SNP individually. However, there is a strong correlation structure among SNPs that needs to be taken into account. Hence, increasingly modern computationally expensive regression methods are employed for SNP selection that consider all markers simultaneously and thus incorporate dependencies among SNPs. Results We develop a novel multivariate algorithm for large scale SNP selection using CAR score regression, a promising new approach for prioritizing biomarkers. Specifically, we propose a computationally efficient procedure for shrinkage estimation of CAR scores from high-dimensional data. Subsequently, we conduct a comprehensive comparison study including five advanced regression approaches (boosting, lasso, NEG, MCP, and CAR score) and a univariate approach (marginal correlation) to determine the effectiveness in finding true causal SNPs. Conclusions Simultaneous SNP selection is a challenging task. We demonstrate that our CAR score-based algorithm consistently outperforms all competing approaches, both uni- and multivariate, in terms of correctly recovered causal SNPs and SNP ranking. An R package implementing the approach as well as R code to reproduce the complete study presented here is available from http://strimmerlab.org/software/care/.
Xia, Yin; Cai, Tianxi; Cai, T Tony
2018-01-01
Motivated by applications in genomics, we consider in this paper global and multiple testing for the comparisons of two high-dimensional linear regression models. A procedure for testing the equality of the two regression vectors globally is proposed and shown to be particularly powerful against sparse alternatives. We then introduce a multiple testing procedure for identifying unequal coordinates while controlling the false discovery rate and false discovery proportion. Theoretical justifications are provided to guarantee the validity of the proposed tests and optimality results are established under sparsity assumptions on the regression coefficients. The proposed testing procedures are easy to implement. Numerical properties of the procedures are investigated through simulation and data analysis. The results show that the proposed tests maintain the desired error rates under the null and have good power under the alternative at moderate sample sizes. The procedures are applied to the Framingham Offspring study to investigate the interactions between smoking and cardiovascular related genetic mutations important for an inflammation marker.
Biomarker identification and effect estimation on schizophrenia –a high dimensional data analysis
Directory of Open Access Journals (Sweden)
Yuanzhang eLi
2015-05-01
Full Text Available Biomarkers have been examined in schizophrenia research for decades. Schizophrenia patients face elevated medical morbidity and mortality rates, as well as substantial personal and societal costs. The identification of biomarkers and alleles, which often have a small effect individually, may help to develop new diagnostic tests for early identification and treatment. Currently, there is not a commonly accepted statistical approach to identify predictive biomarkers from high dimensional data. We used the space Decomposition-Gradient-Regression (DGR) method to select biomarkers which are associated with the risk of schizophrenia. Then, we used the gradient scores, generated from the selected biomarkers, as the prediction factor in regression to estimate their effects. We also used an alternative approach, classification and regression tree (CART), to compare the biomarkers selected by DGR, and found that about 70% of the selected biomarkers were the same. However, the advantage of DGR is that it can evaluate individual effects for each biomarker from their combined effect. In a DGR analysis of serum specimens of US military service members with a diagnosis of schizophrenia from 1992 to 2005 and their controls, Alpha-1-Antitrypsin (AAT), Interleukin-6 receptor (IL-6r) and Connective Tissue Growth Factor (CTGF) were selected to identify schizophrenia for males; and Alpha-1-Antitrypsin (AAT), Apolipoprotein B (Apo B) and Sortilin were selected for females. If these findings from military subjects are replicated by other studies, they suggest the possibility of a novel biomarker panel as an adjunct to earlier diagnosis and initiation of treatment.
Diagonal Likelihood Ratio Test for Equality of Mean Vectors in High-Dimensional Data
Hu, Zongliang
2017-10-27
We propose a likelihood ratio test framework for testing normal mean vectors in high-dimensional data under two common scenarios: the one-sample test and the two-sample test with equal covariance matrices. We derive the test statistics under the assumption that the covariance matrices follow a diagonal matrix structure. In comparison with the diagonal Hotelling's tests, our proposed test statistics display some interesting characteristics. In particular, they are a summation of the log-transformed squared t-statistics rather than a direct summation of those components. More importantly, to derive the asymptotic normality of our test statistics under the null and local alternative hypotheses, we do not require the assumption that the covariance matrix follows a diagonal matrix structure. As a consequence, our proposed test methods are very flexible and can be widely applied in practice. Finally, simulation studies and a real data analysis are also conducted to demonstrate the advantages of our likelihood ratio test method.
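The general shape of a "summation of log-transformed squared t-statistics" can be sketched as follows; this is a minimal illustration of the idea, not the authors' exact statistic (their centering and scaling constants are omitted):

```python
import math

def pooled_t(x, y):
    # classical equal-variance two-sample t-statistic for one feature
    nx, ny = len(x), len(y)
    mx, my = sum(x) / nx, sum(y) / ny
    vx = sum((v - mx) ** 2 for v in x) / (nx - 1)
    vy = sum((v - my) ** 2 for v in y) / (ny - 1)
    sp2 = ((nx - 1) * vx + (ny - 1) * vy) / (nx + ny - 2)
    return (mx - my) / math.sqrt(sp2 * (1 / nx + 1 / ny))

def diag_log_t_statistic(A, B):
    """Sum over features of log(1 + t_j^2), a diagonal
    (feature-by-feature) two-sample statistic. A and B are lists of
    samples, each sample a list of p feature values."""
    p = len(A[0])
    total = 0.0
    for j in range(p):
        t = pooled_t([s[j] for s in A], [s[j] for s in B])
        total += math.log1p(t * t)
    return total
```

Because each term depends on its own feature only, no off-diagonal covariance estimate is needed, which is what makes diagonal statistics workable when p exceeds n.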
SPRING: a kinetic interface for visualizing high dimensional single-cell expression data.
Weinreb, Caleb; Wolock, Samuel; Klein, Allon
2017-12-07
Single-cell gene expression profiling technologies can map the cell states in a tissue or organism. As these technologies become more common, there is a need for computational tools to explore the data they produce. In particular, visualizing continuous gene expression topologies can be improved, since current tools tend to fragment gene expression continua or capture only limited features of complex population topologies. Force-directed layouts of k-nearest-neighbor graphs can visualize continuous gene expression topologies in a manner that preserves high-dimensional relationships and captures complex population topologies. We describe SPRING, a pipeline for data filtering, normalization and visualization using force-directed layouts, and show that it reveals more detailed biological relationships than existing approaches when applied to branching gene expression trajectories from hematopoietic progenitor cells and cells of the upper airway epithelium. Visualizations from SPRING are also more reproducible than those of stochastic visualization methods such as tSNE, a state-of-the-art tool. We provide SPRING as an interactive web-tool with an easy-to-use GUI. https://kleintools.hms.harvard.edu/tools/spring.html, https://github.com/AllonKleinLab/SPRING/. calebsw@gmail.com, allon_klein@hms.harvard.edu.
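The k-nearest-neighbor graph underlying such force-directed layouts can be built by brute force, as in this sketch (the layout step itself, which SPRING performs in the browser, is not shown):

```python
def knn_edges(points, k):
    """Undirected edge set of the k-nearest-neighbor graph over a list
    of points (each a tuple of coordinates), via brute-force distances."""
    def d2(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q))
    edges = set()
    for i, p in enumerate(points):
        # indices of the k nearest other points
        others = sorted((j for j in range(len(points)) if j != i),
                        key=lambda j: d2(p, points[j]))
        for j in others[:k]:
            edges.add((min(i, j), max(i, j)))
    return edges
```

For real single-cell data the points would be cells in a reduced (e.g. PCA) space and an approximate-neighbor index would replace the brute-force scan.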
Energy Technology Data Exchange (ETDEWEB)
Snyder, Abigail C. [University of Pittsburgh; Jiao, Yu [ORNL
2010-10-01
Neutron experiments at the Spallation Neutron Source (SNS) at Oak Ridge National Laboratory (ORNL) frequently generate large amounts of data (on the order of 10⁶ to 10¹² data points). Hence, traditional data analysis tools run on a single CPU take too long to be practical and scientists are unable to efficiently analyze all data generated by experiments. Our goal is to develop a scalable algorithm to efficiently compute high-dimensional integrals of arbitrary functions. This algorithm can then be used to integrate the four-dimensional integrals that arise as part of modeling intensity from the experiments at the SNS. Here, three different one-dimensional numerical integration solvers from the GNU Scientific Library were modified and implemented to solve four-dimensional integrals. The results of these solvers on a final integrand provided by scientists at the SNS can be compared to the results of other methods, such as quasi-Monte Carlo methods, computing the same integral. A parallelized version of the most efficient method can allow scientists the opportunity to more effectively analyze all experimental data.
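Nesting 1D solvers into a 4D integrator, as described above, can be sketched with a composite Simpson rule standing in for the GSL routines (an illustration of the nesting idea only, not the SNS implementation):

```python
def simpson(f, a, b, n=10):
    # composite Simpson's rule on [a, b]; n must be even
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += f(a + i * h) * (4 if i % 2 else 2)
    return s * h / 3

def integrate_4d(f, bounds, n=10):
    """Nest four 1D quadratures to integrate f(x, y, z, w) over a box.
    Cost grows as (n + 1)**4 inner evaluations, which is why the
    parallel and quasi-Monte Carlo alternatives matter at scale."""
    (ax, bx), (ay, by), (az, bz), (aw, bw) = bounds
    return simpson(
        lambda x: simpson(
            lambda y: simpson(
                lambda z: simpson(
                    lambda w: f(x, y, z, w), aw, bw, n),
                az, bz, n),
            ay, by, n),
        ax, bx, n)
```

Since Simpson's rule is exact for cubics, the separable test integrand x·y·z·w over the unit hypercube recovers the exact value 1/16.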
Mwangi, Benson; Soares, Jair C; Hasan, Khader M
2014-10-30
Neuroimaging machine learning studies have largely utilized supervised algorithms, meaning they require both neuroimaging scan data and corresponding target variables (e.g. healthy vs. diseased) to be successfully 'trained' for a prediction task. Noticeably, this approach may not be optimal or possible when the global structure of the data is not well known and the researcher does not have an a priori model to fit the data. We set out to investigate the utility of an unsupervised machine learning technique, t-distributed stochastic neighbour embedding (t-SNE), in identifying 'unseen' sample population patterns that may exist in high-dimensional neuroimaging data. Multimodal neuroimaging scans from 92 healthy subjects were pre-processed using atlas-based methods, integrated and input into the t-SNE algorithm. Patterns and clusters discovered by the algorithm were visualized using a 2D scatter plot and further analyzed using the K-means clustering algorithm. t-SNE was evaluated against classical principal component analysis. Remarkably, based on unlabelled multimodal scan data, t-SNE separated study subjects into two very distinct clusters which corresponded to subjects' gender labels (cluster silhouette index value=0.79). The resulting clusters were used to develop an unsupervised minimum distance clustering model which identified 93.5% of subjects' gender. Notably, from a neuropsychiatric perspective this method may allow discovery of data-driven disease phenotypes or sub-types of treatment responders. Copyright © 2014 Elsevier B.V. All rights reserved.
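The K-means step applied to the 2D embedding can be sketched as follows (a toy version with a deterministic initialization; the t-SNE embedding itself is assumed given and not reimplemented here):

```python
def kmeans(points, k, iters=20):
    """Minimal K-means with a deterministic (toy) initialization:
    the first k points serve as starting centroids."""
    def d2(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q))
    cents = [list(p) for p in points[:k]]
    labels = [0] * len(points)
    for _ in range(iters):
        # assignment step: each point joins its nearest centroid
        labels = [min(range(k), key=lambda c: d2(p, cents[c]))
                  for p in points]
        # update step: centroids move to their cluster means
        for c in range(k):
            members = [p for p, lab in zip(points, labels) if lab == c]
            if members:  # keep old centroid if the cluster emptied
                cents[c] = [sum(col) / len(members) for col in zip(*members)]
    return labels, cents
```

In the study's workflow, `points` would be the 2D t-SNE coordinates of the 92 subjects and k = 2 would recover the two gender-aligned clusters.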
Evolutionary fields can explain patterns of high-dimensional complexity in ecology.
Wilsenach, James; Landi, Pietro; Hui, Cang
2017-04-01
One of the properties that make ecological systems so unique is the range of complex behavioral patterns that can be exhibited by even the simplest communities with only a few species. Much of this complexity is commonly attributed to stochastic factors that have very high-degrees of freedom. Orthodox study of the evolution of these simple networks has generally been limited in its ability to explain complexity, since it restricts evolutionary adaptation to an inertia-free process with few degrees of freedom in which only gradual, moderately complex behaviors are possible. We propose a model inspired by particle-mediated field phenomena in classical physics in combination with fundamental concepts in adaptation, which suggests that small but high-dimensional chaotic dynamics near to the adaptive trait optimum could help explain complex properties shared by most ecological datasets, such as aperiodicity and pink, fractal noise spectra. By examining a simple predator-prey model and appealing to real ecological data, we show that this type of complexity could be easily confused for or confounded by stochasticity, especially when spurred on or amplified by stochastic factors that share variational and spectral properties with the underlying dynamics.
Schran, Christoph; Uhl, Felix; Behler, Jörg; Marx, Dominik
2018-03-01
The design of accurate helium-solute interaction potentials for the simulation of chemically complex molecules solvated in superfluid helium has long been a cumbersome task due to the rather weak but strongly anisotropic nature of the interactions. We show that this challenge can be met by using a combination of an effective pair potential for the He-He interactions and a flexible high-dimensional neural network potential (NNP) for describing the complex interaction between helium and the solute in a pairwise additive manner. This approach yields an excellent agreement with a mean absolute deviation as small as 0.04 kJ mol⁻¹ for the interaction energy between helium and both hydronium and Zundel cations compared with coupled cluster reference calculations with an energetically converged basis set. The construction and improvement of the potential can be performed in a highly automated way, which opens the door for applications to a variety of reactive molecules to study the effect of solvation on the solute as well as the solute-induced structuring of the solvent. Furthermore, we show that this NNP approach yields very convincing agreement with the coupled cluster reference for properties like many-body spatial and radial distribution functions. This holds for the microsolvation of the protonated water monomer and dimer by a few helium atoms up to their solvation in bulk helium as obtained from path integral simulations at about 1 K.
Hou, Jiayi; Archer, Kellie J
2015-02-01
Abstract An ordinal scale is commonly used to measure health status and disease-related outcomes in hospital settings as well as in translational medical research. In addition, repeated measurements are common in clinical practice for tracking and monitoring the progression of complex diseases. Classical methodology based on statistical inference, in particular ordinal modeling, has contributed to the analysis of data in which the response categories are ordered and the number of covariates (p) remains smaller than the sample size (n). With the emergence of genomic technologies being increasingly applied for more accurate diagnosis and prognosis, high-dimensional data, where the number of covariates (p) is much larger than the number of samples (n), are generated. To meet these emerging needs, we introduce our proposed model, a two-stage algorithm: first, we extend the generalized monotone incremental forward stagewise (GMIFS) method to the cumulative logit ordinal model; second, we combine the GMIFS procedure with the classical mixed-effects model for classifying disease status over the course of disease progression. We demonstrate the efficiency and accuracy of the proposed models in classification using a time-course microarray dataset collected from the Inflammation and the Host Response to Injury study.
PCA leverage: outlier detection for high-dimensional functional magnetic resonance imaging data.
Mejia, Amanda F; Nebel, Mary Beth; Eloyan, Ani; Caffo, Brian; Lindquist, Martin A
2017-07-01
Outlier detection for high-dimensional (HD) data is a popular topic in modern statistical research. However, one source of HD data that has received relatively little attention is functional magnetic resonance images (fMRI), which consist of hundreds of thousands of measurements sampled at hundreds of time points. At a time when the availability of fMRI data is rapidly growing, primarily through large, publicly available grassroots datasets, automated quality control and outlier detection methods are greatly needed. We propose principal components analysis (PCA) leverage and demonstrate how it can be used to identify outlying time points in an fMRI run. Furthermore, PCA leverage is a measure of the influence of each observation on the estimation of principal components, which are often of interest in fMRI data. We also propose an alternative measure, PCA robust distance, which is less sensitive to outliers and has controllable statistical properties. The proposed methods are validated through simulation studies and are shown to be highly accurate. We also conduct a reliability study using resting-state fMRI data from the Autism Brain Imaging Data Exchange and find that removal of outliers using the proposed methods results in more reliable estimation of subject-level resting-state networks using independent components analysis. © The Author 2017. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
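A rank-one sketch of the PCA-leverage idea (illustrative only; the paper's method keeps several components and adds a robust-distance variant) computes each observation's squared score on the leading principal component, normalized across observations:

```python
import math

def top_pc_leverage(X, iters=200):
    """Leverage of each observation (row) on the leading principal
    component: squared PC score, normalized to sum to one."""
    n, p = len(X), len(X[0])
    means = [sum(row[j] for row in X) / n for j in range(p)]
    Xc = [[row[j] - means[j] for j in range(p)] for row in X]
    # power iteration on the p x p scatter matrix: v <- normalize(X^T X v)
    v = [1.0] + [0.0] * (p - 1)
    for _ in range(iters):
        t = [sum(x * c for x, c in zip(row, v)) for row in Xc]          # X v
        w = [sum(Xc[i][j] * t[i] for i in range(n)) for j in range(p)]  # X^T X v
        norm = math.sqrt(sum(c * c for c in w))
        v = [c / norm for c in w]
    scores = [sum(x * c for x, c in zip(row, v)) for row in Xc]
    total = sum(s * s for s in scores)
    return [s * s / total for s in scores]
```

An outlying time point dominates the leading component and therefore receives a disproportionately large leverage value, which is the signal used for flagging.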
Construction of high-dimensional neural network potentials using environment-dependent atom pairs.
Jose, K V Jovan; Artrith, Nongnuch; Behler, Jörg
2012-05-21
An accurate determination of the potential energy is the crucial step in computer simulations of chemical processes, but using electronic structure methods on-the-fly in molecular dynamics (MD) is computationally too demanding for many systems. Constructing more efficient interatomic potentials becomes intricate with increasing dimensionality of the potential-energy surface (PES), and for numerous systems the accuracy that can be achieved is still not satisfying and far from the reliability of first-principles calculations. Feed-forward neural networks (NNs) have a very flexible functional form, and in recent years they have been shown to be an accurate tool to construct efficient PESs. High-dimensional NN potentials based on environment-dependent atomic energy contributions have been presented for a number of materials. Still, these potentials may be improved by a more detailed structural description, e.g., in form of atom pairs, which directly reflect the atomic interactions and take the chemical environment into account. We present an implementation of an NN method based on atom pairs, and its accuracy and performance are compared to the atom-based NN approach using two very different systems, the methanol molecule and metallic copper. We find that both types of NN potentials provide an excellent description of both PESs, with the pair-based method yielding a slightly higher accuracy making it a competitive alternative for addressing complex systems in MD simulations.
Relating high dimensional stochastic complex systems to low-dimensional intermittency
Diaz-Ruelas, Alvaro; Jensen, Henrik Jeldtoft; Piovani, Duccio; Robledo, Alberto
2017-02-01
We evaluate the implication and outlook of an unanticipated simplification in the macroscopic behavior of two high-dimensional stochastic models: the Replicator Model with Mutations and the Tangled Nature Model (TaNa) of evolutionary ecology. This simplification consists of the apparent display of low-dimensional dynamics in the non-stationary intermittent time evolution of the model on a coarse-grained scale. Evolution on this time scale spans generations of individuals, rather than single reproduction, death or mutation events. While a local one-dimensional map close to a tangent bifurcation can be derived from a mean-field version of the TaNa model, a nonlinear dynamical model consisting of successive tangent bifurcations generates time evolution patterns resembling those of the full TaNa model. To advance the interpretation of this finding, here we consider parallel results on a game-theoretic version of the TaNa model that in discrete time yields a coupled map lattice. This in turn is represented, à la Langevin, by a one-dimensional nonlinear map. Among various kinds of behaviours we obtain intermittent evolution associated with tangent bifurcations. We discuss our results.
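The intermittency mechanism near a tangent bifurcation that the abstract invokes is easy to demonstrate with the normal-form map x → x + x² + ε (an illustrative sketch, not the TaNa model itself): for small ε > 0 the orbit creeps through a narrow channel near x = 0, producing a long laminar phase whose duration scales like 1/√ε before the fast escape.

```python
# Iterate the tangent-bifurcation normal form and count how many steps the
# orbit needs to traverse the laminar channel and escape past x_exit.
def channel_time(eps, x0=-1.0, x_exit=1.0, max_iter=1_000_000):
    x, t = x0, 0
    while x < x_exit and t < max_iter:
        x = x + x * x + eps   # normal form just past the tangency
        t += 1
    return t
```

Shrinking ε by a factor of 100 lengthens the laminar phase by roughly a factor of 10, consistent with the 1/√ε scaling of type-I intermittency.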
Bonete, Saray; Calero, María Dolores; Fernández-Parra, Antonio
2015-05-01
Adults with Asperger syndrome show persistent difficulties in social situations which psychosocial treatments may address. Despite the multiple studies focusing on social skills interventions, only some have focused specifically on problem-solving skills, and these have not targeted workplace adaptation training in the adult population. This study describes preliminary data from a group-format, manual-based intervention, the Interpersonal Problem-Solving for Workplace Adaptation Programme, aimed at improving the cognitive and metacognitive process of social problem-solving skills, focusing on typical social situations in the workplace and based on mediation as the main strategy. A total of 50 adults with Asperger syndrome received the programme and were compared with a control group with typical development. The feasibility and effectiveness of the treatment were explored. Participants were assessed at pre-treatment and post-treatment on a task of social problem-solving skills and on two secondary measures of socialisation and work profile using self- and caregiver-report. Using a variety of methods, the results showed that scores were significantly higher at post-treatment on the social problem-solving task and on socialisation skills based on reports by parents. Differences in comparison to the control group had decreased after treatment. The treatment was acceptable to families and subject adherence was high. The Interpersonal Problem-Solving for Workplace Adaptation Programme appears to be a feasible training programme. © The Author(s) 2014.
van der Velden, P.G.; Kleber, R.J.; Fournier, M.; Grievink, Linda; Drogendijk, A.; Gersons, B.P.R.
2007-01-01
Background: It is unclear whether the associations between the level of dispositional optimism on the one hand, and depression symptoms and other health problems on the other hand among disaster victims differ from the associations among non-affected residents. Methods: To assess the associations
Kameda, Tatsuya; Tsukasaki, Takafumi; Hastie, Reid; Berg, Nathan
2011-01-01
We introduce a game theory model of individual decisions to cooperate by contributing personal resources to group decisions versus by free riding on the contributions of other members. In contrast to most public-goods games that assume group returns are linear in individual contributions, the present model assumes decreasing marginal group…
Multivariate linear regression of high-dimensional fMRI data with multiple target variables.
Valente, Giancarlo; Castellanos, Agustin Lage; Vanacore, Gianluca; Formisano, Elia
2014-05-01
Multivariate regression is increasingly used to study the relation between fMRI spatial activation patterns and experimental stimuli or behavioral ratings. With linear models, informative brain locations are identified by mapping the model coefficients. This is a central aspect in neuroimaging, as it provides the sought-after link between the activity of neuronal populations and subject's perception, cognition or behavior. Here, we show that mapping of informative brain locations using multivariate linear regression (MLR) may lead to incorrect conclusions and interpretations. MLR algorithms for high-dimensional data are designed to deal with targets (stimuli or behavioral ratings, in fMRI) separately, and the predictive map of a model integrates information deriving from both neural activity patterns and experimental design. Not accounting explicitly for the presence of other targets whose associated activity spatially overlaps with the one of interest may lead to predictive maps that are difficult to interpret. We propose a new model that can correctly identify the spatial patterns associated with a target while achieving good generalization. For each target, the training is based on an augmented dataset, which includes all remaining targets. The estimation on such datasets produces both maps and interaction coefficients, which are then used to generalize. The proposed formulation is independent of the regression algorithm employed. We validate this model on simulated fMRI data and on a publicly available dataset. Results indicate that our method achieves high spatial sensitivity and good generalization and that it helps disentangle specific neural effects from interaction with predictive maps associated with other targets. Copyright © 2013 Wiley Periodicals, Inc.
Zhao, Lue Ping; Bolouri, Hamid
2016-04-01
Maturing omics technologies enable researchers to generate high-dimension omics data (HDOD) routinely in translational clinical studies. In the field of oncology, The Cancer Genome Atlas (TCGA) provided funding support to researchers to generate different types of omics data on a common set of biospecimens with accompanying clinical data and has made the data available for the research community to mine. One important application, and the focus of this manuscript, is to build predictive models for prognostic outcomes based on HDOD. To complement prevailing regression-based approaches, we propose to use an object-oriented regression (OOR) methodology to identify exemplars specified by HDOD patterns and to assess their associations with prognostic outcome. By computing a patient's similarities to these exemplars, the OOR-based predictive model produces a risk estimate using the patient's HDOD. The primary advantages of OOR are twofold: reducing the penalty of high dimensionality and retaining interpretability for clinical practitioners. To illustrate its utility, we apply OOR to gene expression data from non-small cell lung cancer patients in TCGA and build a predictive model for prognostic survivorship among stage I patients, i.e., we stratify these patients by their prognostic survival risks beyond histological classifications. Identification of these high-risk patients helps oncologists to develop effective treatment protocols and post-treatment disease management plans. Using the TCGA data, the total sample is divided into training and validation data sets. After building a predictive model in the training set, we compute risk scores from the predictive model and validate associations of risk scores with prognostic outcome in the validation data (P-value=0.015). Copyright © 2016 Elsevier Inc. All rights reserved.
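The exemplar-based prediction step can be sketched as follows: a new patient's risk is a similarity-weighted average of the exemplars' risks. The Gaussian kernel on Euclidean distance and the bandwidth `h` are illustrative assumptions, not the paper's actual similarity measure:

```python
import math

# Hypothetical OOR-style prediction: weight each exemplar's risk by the
# patient's similarity to that exemplar, then normalize.
def risk_estimate(profile, exemplars, exemplar_risks, h=1.0):
    weights = []
    for e in exemplars:
        d2 = sum((p - q) ** 2 for p, q in zip(profile, e))
        weights.append(math.exp(-d2 / (2.0 * h * h)))   # similarity kernel
    total = sum(weights)
    return sum(w * r for w, r in zip(weights, exemplar_risks)) / total
```

A profile near a low-risk exemplar inherits (approximately) that exemplar's risk; a profile equidistant from two exemplars receives the midpoint.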
From Ambiguities to Insights: Query-based Comparisons of High-Dimensional Data
Kowalski, Jeanne; Talbot, Conover; Tsai, Hua L.; Prasad, Nijaguna; Umbricht, Christopher; Zeiger, Martha A.
2007-11-01
Genomic technologies will revolutionize drug discovery and development; that much is universally agreed upon. The high dimension of data from such technologies has challenged available data analytic methods; that much is apparent. To date, large-scale data repositories have not been utilized in ways that permit their wealth of information to be efficiently processed for knowledge, presumably due in large part to inadequate analytical tools to address numerous comparisons of high-dimensional data. In candidate gene discovery, expression comparisons are often made between two features (e.g., cancerous versus normal), such that the enumeration of outcomes is manageable. With multiple features, the setting becomes more complex, in terms of comparing expression levels of tens of thousands of transcripts across hundreds of features. In this case, the number of outcomes, while enumerable, becomes rapidly large and unmanageable, and scientific inquiries become more abstract, such as "which one of these (compounds, stimuli, etc.) is not like the others?" We develop analytical tools that promote more extensive, efficient, and rigorous utilization of the public data resources generated by the massive support of genomic studies. Our work innovates by enabling access to such metadata with logically formulated scientific inquiries that define, compare and integrate query-comparison pair relations for analysis. We demonstrate our computational tool's potential to address an outstanding biomedical informatics issue of identifying reliable molecular markers in thyroid cancer. Our proposed query-based comparison (QBC) facilitates access to and efficient utilization of metadata through logically formed inquiries expressed as query-based comparisons by organizing and comparing results from biotechnologies to address applications in biomedicine.
Platon, Ludovic; Pejoski, David; Gautreau, Guillaume; Targat, Brice; Le Grand, Roger; Beignon, Anne-Sophie; Tchitchek, Nicolas
2018-01-01
Cytometry is an experimental technique used to measure molecules expressed by cells at single-cell resolution. Recently, several technological improvements have made it possible to greatly increase the number of cell markers that can be simultaneously measured. Many computational methods have been proposed to identify clusters of cells having similar phenotypes. Nevertheless, only a limited number of computational methods permit comparison of the phenotypes of the cell clusters identified by different clustering approaches. These phenotypic comparisons are necessary to choose the appropriate clustering methods and settings. Because of this lack of tools, comparisons of cell cluster phenotypes are often performed manually, a highly biased and time-consuming process. We designed CytoCompare, an R package that performs comparisons between the phenotypes of cell clusters with the purpose of identifying similar and different ones, based on the distribution of marker expressions. For each phenotype comparison of two cell clusters, CytoCompare provides a distance measure as well as a p-value assessing the statistical significance of the difference. CytoCompare can import clustering results from various algorithms, including SPADE, viSNE/ACCENSE, and Citrus, currently the most widely used algorithms. Additionally, CytoCompare can generate parallel coordinates, parallel heatmaps, multidimensional scaling or circular graph representations to easily visualize cell cluster phenotypes and the comparison results. CytoCompare is a flexible analysis pipeline for comparing the phenotypes of cell clusters identified by automatic gating algorithms in high-dimensional cytometry data. This R package is ideal for benchmarking different clustering algorithms and associated parameters. CytoCompare is freely distributed under the GPL-3 license and is available on https://github.com/tchitchek-lab/CytoCompare. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.
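The distance-plus-p-value pattern the abstract describes can be sketched language-agnostically (here in Python; this is not CytoCompare's R API). As an assumed simplification, the distance between two clusters is the mean absolute difference of their per-marker means, and a permutation test assesses whether that distance could arise by chance:

```python
import random

# Distance between two cell clusters: mean absolute difference of the
# per-marker means (an illustrative choice of distance).
def cluster_distance(a, b):
    p = len(a[0])
    ma = [sum(c[j] for c in a) / len(a) for j in range(p)]
    mb = [sum(c[j] for c in b) / len(b) for j in range(p)]
    return sum(abs(x - y) for x, y in zip(ma, mb)) / p

# Permutation p-value: how often does a random relabelling of the pooled
# cells produce a distance at least as large as the observed one?
def permutation_pvalue(a, b, n_perm=500, seed=0):
    rng = random.Random(seed)
    observed = cluster_distance(a, b)
    pooled = a + b
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        if cluster_distance(pooled[:len(a)], pooled[len(a):]) >= observed:
            hits += 1
    return (hits + 1) / (n_perm + 1)
```

Two well-separated clusters yield a small p-value; two clusters drawn from the same cells yield a p-value near 1.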
Local two-sample testing: a new tool for analysing high-dimensional astronomical data
Freeman, P. E.; Kim, I.; Lee, A. B.
2017-11-01
Modern surveys have provided the astronomical community with a flood of high-dimensional data, but analyses of these data often occur after their projection to lower dimensional spaces. In this work, we introduce a local two-sample hypothesis test framework that an analyst may directly apply to data in their native space. In this framework, the analyst defines two classes based on a response variable of interest (e.g. higher mass galaxies versus lower mass galaxies) and determines at arbitrary points in predictor space whether the local proportions of objects that belong to the two classes significantly differ from the global proportion. Our framework has myriad potential uses throughout astronomy; here, we demonstrate its efficacy by applying it to a sample of 2487 I-band-selected galaxies observed by the HST-ACS in four of the CANDELS programme fields. For each galaxy, we have seven morphological summary statistics along with an estimated stellar mass and star formation rate (SFR). We perform two studies: one in which we determine regions of the seven-dimensional space of morphological statistics where high-mass galaxies are significantly more numerous than low-mass galaxies, and vice versa, and another study where we use SFR in place of mass. We find that we are able to identify such regions, and show how high-mass/low-SFR regions are associated with concentrated and undisturbed galaxies, while galaxies in low-mass/high-SFR regions appear more extended and/or disturbed than their high-mass/low-SFR counterparts.
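One simple way to realize a local two-sample test of this flavor (an assumed simplification of the paper's framework, with hypothetical names) is to compare, at a query point, the class-1 proportion among the k nearest neighbours against the global proportion p0 using an exact two-sided binomial test:

```python
import math

# Exact two-sided binomial p-value: total probability of outcomes no more
# likely under p0 than the observed count.
def binom_two_sided_p(hits, n, p0):
    probs = [math.comb(n, i) * p0 ** i * (1 - p0) ** (n - i)
             for i in range(n + 1)]
    obs = probs[hits]
    return min(1.0, sum(q for q in probs if q <= obs + 1e-12))

# At a query point, take the k nearest points and test whether the local
# class-1 proportion differs from the global proportion p0.
def local_test(query, points, labels, k, p0):
    order = sorted(range(len(points)),
                   key=lambda i: sum((a - b) ** 2
                                     for a, b in zip(points[i], query)))
    hits = sum(labels[i] for i in order[:k])
    return hits / k, binom_two_sided_p(hits, k, p0)
```

Querying inside a region dominated by one class gives a local proportion far from p0 and a small p-value, while an observed count at the binomial mode gives a p-value of 1.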
Kopp, Birgitta; Hasenbein, Melanie; Mandl, Heinz
2014-01-01
This article analyzes the collaborative problem solving activities and learning outcomes of five groups that worked on two different complex cases in a virtual professional training course. In this asynchronous virtual learning environment, all knowledge management content was delivered virtually and collaboration took place through forums. To…
Rustling, Ruth; And Others
This manual offers detailed guidelines for parent group trainers who conduct workshops on problem solving, math, and science for parents of young children. In addition, discussion starters, a list of hands-on activities, directions for drawing and using a poster, and learning activities for children are described. Counting books are briefly…
Eissa, Mourad Ali; Mostafa, Amaal Ahmed
2013-01-01
This study investigated the effect of using differentiated instruction by integrating multiple intelligences and learning styles on solving problems, achievement in, and attitudes towards math in six graders with learning disabilities in cooperative groups. A total of 60 students identified with LD were invited to participate. The sample was…
Directory of Open Access Journals (Sweden)
Pablo Seoane
2012-06-01
Full Text Available Over recent years, Genetic Algorithms have proven to be an appropriate tool for solving certain problems. However, when the search space contains several valid solutions, their classic approach is insufficient. To this end, the idea of dividing the individuals into species has been raised with success. However, this solution is not free of drawbacks, such as the emergence of redundant species, overlapping, or performance degradation caused by a significant increase in the number of individuals to be evaluated. This paper presents the implementation of a method based on the predator-prey technique, with the aim of providing a solution to the problem, as well as a number of examples to prove its effectiveness.
Prafulla Joglekar; Q. B. Chung; Madjid Tavana
2001-01-01
Over the last three decades, numerous algorithms have been proposed to solve the work-cell formation problem. Practicing manufacturing managers would like to know which algorithm would be most effective and efficient for their specific situation. While several studies have attempted to fulfill this need, most have not resulted in any definitive recommendations, and a better methodology for evaluating cell formation algorithms is urgently needed. Prima facie, the methodology u...
Salavera, Carlos; Tricás, José M; Lucha, Orosia
2011-12-11
Homeless people have high dropout rates when they participate in therapeutic processes. The causes of this failure are not always known. This study investigates whether dropping-out is mediated by personality disorders or whether psychosocial problems are more important. Eighty-nine homeless people in a socio-laboral integration process were assessed. An initial interview was used, and the MCMI II questionnaire was applied to investigate the presence of psychosocial disorders (DSM-IV-TR axis IV). This was designed as an ex post-facto prospective study. Personality disorders were very frequent among the homeless people examined. Moreover, the high index of psychosocial problems (axis IV) in this population supported the proposal that axis IV disorders are influential in failure to complete therapy. The outcomes of the study show that the homeless people examined presented with more psychopathological symptoms, in both axis II and axis IV, than the general population. This supports the need to take into account the comorbidity between these two types of disorder among homeless people, in treatment and in the development of specific intervention programs. In conclusion, the need for more psychosocial treatments addressing the individual problems of homeless people is supported.
National Research Council Canada - National Science Library
Andreozzi, Gregory
2001-01-01
This report documents the insights developed by the Low Spectrum Conflict Subgroup of the Operations Analysis Working Group at the 30 January - 1 February 2001 Military Operations Research Society (MORS...
Stocks, Jennifer Dugan; Taneja, Baldeo K; Baroldi, Paolo; Findling, Robert L
2012-04-01
To evaluate safety and tolerability of four doses of immediate-release molindone hydrochloride in children with attention-deficit/hyperactivity disorder (ADHD) and serious conduct problems. This open-label, parallel-group, dose-ranging, multicenter trial randomized children, aged 6-12 years, with ADHD and persistent, serious conduct problems to receive oral molindone thrice daily for 9-12 weeks in four treatment groups: Group 1-10 mg (5 mg if weight children with ADHD and serious conduct problems. Secondary outcome measures included change in Nisonger Child Behavior Rating Form-Typical Intelligence Quotient (NCBRF-TIQ) Conduct Problem subscale scores, change in Clinical Global Impressions-Severity (CGI-S) and -Improvement (CGI-I) subscale scores from baseline to end point, and Swanson, Nolan, and Pelham rating scale-revised (SNAP-IV) ADHD-related subscale scores. The study randomized 78 children; 55 completed the study. Treatment with molindone was generally well tolerated, with no clinically meaningful changes in laboratory or physical examination findings. The most common treatment-related adverse events (AEs) included somnolence (n=9), weight increase (n=8), akathisia (n=4), sedation (n=4), and abdominal pain (n=4). Mean weight increased by 0.54 kg, and mean body mass index by 0.24 kg/m(2). The incidence of AEs and treatment-related AEs increased with increasing dose. NCBRF-TIQ subscale scores improved in all four treatment groups, with 34%, 34%, 32%, and 55% decreases from baseline in groups 1, 2, 3, and 4, respectively. CGI-S and SNAP-IV scores improved over time in all treatment groups, and CGI-I scores improved to the greatest degree in group 4. Molindone at doses of 5-20 mg/day (children weighing blind, placebo-controlled trials are needed to further investigate molindone in this pediatric population.
Singaram, V S; Dolmans, D H J M; Lachman, N; van der Vleuten, C P M
2008-07-01
A key aspect of the success of a PBL curriculum is the effective implementation of its small group tutorials. Diversity among students participating in tutorials may affect the effectiveness of the tutorials and may require different implementation strategies. To determine how students from diverse backgrounds perceive the effectiveness of the processes and content of the PBL tutorials. This study also aims to explore the relationship between students' perceptions of their PBL tutorials and their gender, age, language, prior educational training, and secondary schooling. Data were survey results from 244 first-year student-respondents at the Nelson Mandela School of Medicine at the University of KwaZulu-Natal in South Africa. Exploratory factor analysis was conducted to verify scale constructs in the questionnaire. Relationships between independent and dependent variables were investigated in an analysis of variance. The average scores for the items measured varied between 3.3 and 3.8 (scale value 1 indicated negative regard and 5 indicated positive regard). Among process measures, approximately two-thirds of students felt that learning in a group was neither frustrating nor stressful and that they enjoyed learning how to work with students from different social and cultural backgrounds. Among content measures, 80% of the students felt that they learned to work successfully with students from different social and cultural groups and 77% felt that they benefited from the input of other group members. Mean ratings on these measures did not vary with students' gender, age, first language, prior educational training, and the types of schools they had previously attended. Medical students of the University of KwaZulu-Natal, regardless of their backgrounds, generally have positive perceptions of small group learning. These findings support previous studies in highlighting the role that small group tutorials can play in overcoming cultural barriers and promoting unity and
Directory of Open Access Journals (Sweden)
Raftery Adrian E
2009-02-01
Full Text Available Abstract Background Microarray technology is increasingly used to identify potential biomarkers for cancer prognostics and diagnostics. Previously, we have developed the iterative Bayesian Model Averaging (BMA) algorithm for use in classification. Here, we extend the iterative BMA algorithm for application to survival analysis on high-dimensional microarray data. The main goal in applying survival analysis to microarray data is to determine a highly predictive model of patients' time to event (such as death, relapse, or metastasis) using a small number of selected genes. Our multivariate procedure combines the effectiveness of multiple contending models by calculating the weighted average of their posterior probability distributions. Our results demonstrate that our iterative BMA algorithm for survival analysis achieves high prediction accuracy while consistently selecting a small and cost-effective number of predictor genes. Results We applied the iterative BMA algorithm to two cancer datasets: breast cancer and diffuse large B-cell lymphoma (DLBCL) data. On the breast cancer data, the algorithm selected a total of 15 predictor genes across 84 contending models from the training data. The maximum likelihood estimates of the selected genes and the posterior probabilities of the selected models from the training data were used to divide patients in the test (or validation) dataset into high- and low-risk categories. Using the genes and models determined from the training data, we assigned patients from the test data into highly distinct risk groups (as indicated by a p-value of 7.26e-05 from the log-rank test). Moreover, we achieved comparable results using only the 5 top selected genes with 100% posterior probabilities. On the DLBCL data, our iterative BMA procedure selected a total of 25 genes across 3 contending models from the training data. Once again, we assigned the patients in the validation set to significantly distinct risk groups (p
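The core averaging step of BMA is a one-liner: the ensemble prediction is each contending model's prediction weighted by that model's posterior probability. This is a minimal sketch of that step only; the survival models over selected genes that produce the per-model predictions are not reproduced here.

```python
# Bayesian Model Averaging of per-model predictions: a posterior-weighted
# average, normalized in case the weights do not sum to 1.
def bma_predict(model_predictions, posterior_weights):
    total = sum(posterior_weights)
    return sum(p * w
               for p, w in zip(model_predictions, posterior_weights)) / total
```

With two models predicting 1.0 and 0.0 and posterior weights 0.75 and 0.25, the averaged prediction is 0.75; unnormalized weights in the same ratio give the same result.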
Pachter, Lee M.; Auinger, Peggy; Palmer, Ray; Weitzman, Michael
2006-01-01
OBJECTIVE To determine whether the processes through which parenting practices, maternal depression, neighborhood, and chronic poverty affect child behavioral problems are similar or different in minority and nonminority children in the United States. METHODS Data from 884 white, 538 black, and 404 Latino families with children who were 6 to 9 years of age in the National Longitudinal Survey of Youth were analyzed. The outcome, child behavioral problems, was measured using the Behavior Problems Index externalizing and internalizing subscales. The effects of chronic poverty, neighborhood, maternal depression, and parenting on the outcome were analyzed using multigroup structural equation modeling. RESULTS Chronic poverty affected child behavioral problems indirectly through the other variables, and parenting practices had direct effects in each racial/ethnic group. The effects of maternal depression were partially mediated through parenting in the white and Latino samples but were direct and unmediated through parenting practices in the black sample. Neighborhood effects were present in the white and black samples but were not significant for the Latino sample. CONCLUSIONS Chronic poverty, neighborhood, maternal depression, and parenting practices have effects on child behavioral problems in white, black, and Latino children, but the processes and mechanisms through which they exert their effects differ among the groups. The differences may be related to social stratification mechanisms as well as sociocultural differences in family and childrearing practices. PMID:16585331
Karatzias, Thanos; Ferguson, Sandra; Chouliara, Zoë; Gullone, Angela; Cosgrove, Katie; Douglas, Anne
2014-10-01
There has been limited published research on the effectiveness of manualized psychoeducational approaches for the mental health and behavioral problems of child sexual abuse (CSA) survivors. The present study aims to add to the evidence base for the effectiveness and acceptability of such interventions. A total of 37 participants enrolled in a brief psychoeducation program (i.e., 10 sessions) aiming to help stabilize mental health and behavioral outcomes (e.g., self-harm) while on the waiting list for mental health services. Participants completed a set of self-rated measures at baseline, pre-intervention, post-intervention, and 3-month follow-up. Although there was no change over time with regard to general distress, traumatic symptomatology, depression, anxiety, self-esteem, and life satisfaction, completers were less likely to report self-harm and presented with decreased rates of smoking, alcohol and substance misuse, and involvement in illegal and antisocial behaviors at post-treatment and follow-up. Qualitative data also suggested that overall the program was well tolerated by participants, despite the high attrition rate (43%). Although further research is required to establish the efficacy of this intervention, preliminary results indicate that the new intervention may be useful for stabilizing behavioral problems at post-treatment and follow-up. Strategies to improve attrition rates in future research and clinical practice are discussed.
Passolunghi, Maria Chiara; Mammarella, Irene Cristina
2012-01-01
This study examines visual and spatial working memory skills in 35 third to fifth graders with both mathematics learning disabilities (MLD) and poor problem-solving skills and 35 of their peers with typical development (TD) on tasks involving both low and high attentional control. Results revealed that children with MLD, relative to TD children, failed spatial working memory tasks that had either low or high attentional demands but did not fail the visual tasks. In addition, children with MLD made more intrusion errors in the spatial working memory tasks requiring high attentional control than did their TD peers. Finally, as a post hoc analysis, the sample with MLD was divided in two: children with severe MLD and children with low mathematical achievement. Results showed that only children with severe MLD failed spatial working memory (WM) tasks when compared with children with low mathematical achievement and TD children. The findings are discussed on the basis of their theoretical and clinical implications, in particular considering that children with MLD can benefit from spatial WM processes to solve arithmetic word problems, which involve the ability to both maintain and manipulate relevant information.
Merriman, Donald; Codding, Robin S.; Tryon, Georgiana Shick; Minami, Takuya
2016-01-01
Research on the effectiveness of homework provides ample evidence that homework has a positive effect on learning, particularly for secondary students. Unfortunately, the rate of consistent homework completion for students, with and without disabilities, is low. This study used a between-groups design to examine the differential effectiveness of…
Hill, Walter A.; Allen, William R.
A field experiment was used to investigate the effects, if any, of changing group size and racial composition on the attitudes, behaviors, and effectiveness of black and white leaders. Subjects were 288 naval recruits, half black and half white, performing two tasks which were watched by a pair of racially mixed observers through a one-way mirror.…
Costa, Diogo; Matanov, Aleksandra; Canavan, Reamonn; Gabor, Edina; Greacen, Tim; Vondráčková, Petra; Kluge, Ulrike; Nicaise, Pablo; Moskalewicz, Jacek; Díaz-Olalla, José Manuel; Straßmayr, Christa; Kikkert, Martijn; Soares, Joaquim J F; Gaddini, Andrea; Barros, Henrique; Priebe, Stefan
2014-02-03
Different service characteristics are known to influence mental health care delivery. Much less is known about the impact of contextual factors, such as socioeconomic circumstances, on the provision of care to socially marginalized groups. The objectives of this work were to assess the organisational characteristics of services providing mental health care for marginalized groups in 14 European capital cities and to explore the associations between organisational quality, service features and country-level characteristics. 617 services were assessed in two highly deprived areas in 14 European capital cities. A Quality Index of Service Organisation (QISO) was developed and applied across all sites. Service characteristics and country-level socioeconomic indicators were tested and related with the Index using linear regressions and random intercept linear models. The mean (standard deviation) of the QISO score (minimum = 0; maximum = 15) varied from 8.63 (2.23) in Ireland to 12.40 (2.07) in Hungary. The number of different programmes provided was the only service characteristic significantly correlated with the QISO; the national gross domestic product (GDP) was inversely associated with the QISO. Nearly 15% of the variance of the QISO was attributed to country-level variables, with GDP explaining 12% of this variance. Socioeconomic contextual factors, in particular the national GDP, are likely to influence the organisational quality of services providing mental health care for marginalized groups. Such factors should be considered in international comparative studies. Their significance for different types of services should be explored in further research.
Hua, Yi-Lin; Zhou, Zong-Quan; Liu, Xiao; Yang, Tian-Shu; Li, Zong-Feng; Li, Pei-Yun; Chen, Geng; Xu, Xiao-Ye; Tang, Jian-Shun; Xu, Jin-Shi; Li, Chuan-Feng; Guo, Guang-Can
2018-01-01
A photon pair can be entangled in many degrees of freedom such as polarization, time bins, and orbital angular momentum (OAM). Among them, the OAM of photons can be entangled in an infinite-dimensional Hilbert space which enhances the channel capacity of sharing information in a network. Twisted photons generated by spontaneous parametric down-conversion offer an opportunity to create this high-dimensional entanglement, but a photon pair generated by this process is typically wideband, which makes it difficult to interface with the quantum memories in a network. Here we propose an annual-ring-type quasi-phase-matching (QPM) crystal for generation of the narrowband high-dimensional entanglement. The structure of the QPM crystal is designed by tracking the geometric divergences of the OAM modes that comprise the entangled state. The dimensionality and the quality of the entanglement can be greatly enhanced with the annual-ring-type QPM crystal.
Oakes, J; Pols, R; Battersby, M; Lawn, S; Pulvirenti, M; Smith, D
2012-09-01
This study aimed to develop an empirically based description of relapse in Electronic Gaming Machine (EGM) problem gambling (PG) by describing the processes and factors that 'pull' the problem gambler away from relapse, contrasted with the 'push' towards relapse. These conceptualisations describe two opposing, interacting emotional processes occurring within the problem gambler during any relapse episode. Each relapse episode comprises a complex set of psychological and social behaviours in which many factors interact sequentially and simultaneously within the problem gambler to produce a series of mental and behavioural events that end (1) in relapse, where 'push' overcomes 'pull', or (2) in continued abstinence, where 'pull' overcomes 'push'. Four focus groups comprising thirty participants who were EGM problem gamblers, gamblers' significant others, therapists and counsellors described their experiences and understanding of relapse. The groups were recorded; recordings were then transcribed and analysed using thematic textual analysis. Vigilance, motivation to commit to change, positive social support, cognitive strategies (such as remembering past gambling harms, or distraction techniques to avoid thinking about gambling, which enable gamblers to manage the urge to gamble) and urge extinction emerged as key factors that protected against relapse. Three complementary theories emerged from the analysis. Firstly, a process of reappraisal of personal gambling behaviour pulls the gambler away from relapse. This results in a commitment to change that develops over time and affects, but is independent of, each episode of relapse. Secondly, relapse may be halted by interacting factors that 'pull' the problem gambler away from the sequence of mental and behavioural events which follow the triggering of the urge and cognitions to gamble. Thirdly, urge extinction and apparent 'cure' is possible for EGM gambling. This study provides a qualitative, empirical model for
High dimensional bowling - n-dimensional ball rolling on (n-1)-dimensional surface
DEFF Research Database (Denmark)
Deryabin, M.V.; Hjorth, Poul G.
2003-01-01
We consider the non-holonomic system of an n-dimensional ball rolling on an (n - 1)-dimensional surface. We discuss the structure of the equations of motion, the existence of an invariant measure, and some generalizations of the problem.
Trout, Joseph; Bland, Jared
2013-03-01
In this pilot project, one hour of lecture time was replaced with one hour of in-class assignments, which groups of students collaborated on. These in-class assignments consisted of problems or projects selected for the calculus-based introductory physics students. The first problem was at a level of difficulty that the majority of the students could complete with a small to moderate amount of difficulty. Each successive problem was increasingly more difficult, with the last problem having a level of difficulty that was beyond the capabilities of the majority of the students and required some instructor intervention. The students were free to choose their own groups. Students were encouraged to interact and help each other understand. The success of the in-class exercises was measured using pre-tests and post-tests. The pre-test and post-test were completed by each student independently. Statistics were also compiled on each student's attendance record and the amount of time spent reading and studying, as reported by the student. Statistics were also compiled on the student responses when asked if they had sufficient time to complete the pre-test and post-test and if they would have completed the test with the correct answers if they had more time. The pre-tests and post-tests were not used in the computation of the grades of the students.
Oakes, J; Pols, R; Battersby, M; Lawn, S; Pulvirenti, M; Smith, D
2012-09-01
This study aimed to develop an empirically based description of relapse in Electronic Gaming Machine problem gambling. In this paper the authors describe part one of a two part, linked relapse process: the 'push' towards relapse. In this two-part process, factors interact sequentially and simultaneously within the problem gambler to produce a series of mental and behavioural events that ends with relapse when the 'push' overcomes 'pull' (part one); or as described in part two, continued abstinence when 'pull' overcomes 'push'. In the second paper, the authors describe how interacting factors 'pull' the problem gambler away from relapse. This study used four focus groups comprising thirty participants who were gamblers, gamblers' significant others, therapists and counsellors. The groups were recorded, recordings were then transcribed and analysed using thematic, textual analysis. With the large number of variables considered to be related to relapse in problem gamblers, five key factors emerged that 'push' the gambler towards relapse. These were urge, erroneous cognitions about the outcomes of gambling, negative affect, dysfunctional relationships and environmental gambling triggers. Two theories emerged: (1) each relapse episode comprised a sequence of mental and behavioural events, which evolves over time and was modified by factors that 'push' this sequence towards relapse and (2) a number of gamblers develop an altered state of consciousness during relapse described as the 'zone' which prolongs the relapse.
Hassmiller Lich, Kristen; Urban, Jennifer Brown; Frerichs, Leah; Dave, Gaurav
2017-02-01
Group concept mapping (GCM) has been successfully employed in program planning and evaluation for over 25 years. The broader set of systems thinking methodologies (of which GCM is one) have only recently found their way into the field. We present an overview of systems thinking emerging from a system dynamics (SD) perspective, and illustrate the potential synergy between GCM and SD. As with GCM, participatory processes are frequently employed when building SD models; however, it can be challenging to engage a large and diverse group of stakeholders in the iterative cycles of divergent thinking and consensus building required, while maintaining a broad perspective on the issue being studied. GCM provides a compelling resource for overcoming this challenge, by richly engaging a diverse set of stakeholders in broad exploration, structuring, and prioritization. SD provides an opportunity to extend GCM findings by embedding constructs in a testable hypothesis (SD model) describing how system structure and changes in constructs affect outcomes over time. SD can be used to simulate the hypothesized dynamics inherent in GCM concept maps. We illustrate the potential of the marriage of these methodologies in a case study of BECOMING, a federally-funded program aimed at strengthening the cross-sector system of care for youth with severe emotional disturbances. Copyright © 2016 Elsevier Ltd. All rights reserved.
Whyard, Claire
2010-01-01
‘Staying Calm’ is a small group programme designed to promote emotional skills, anger control and social problem solving skills in children. This study outlines an evaluation of the programme completed with 48 Year 5 and 6 children in two schools within a large shire county in the Midlands. The study begins by examining previous research and literature relevant to children’s emotional and social skills. A range of concepts and interventions that influence children’s emotional literacy, regu...
Anonas, Maria Roberta L.; Alampay, Liane Peña
2015-01-01
This study investigates the relation between parental verbal punishment and externalizing and internalizing behavior problems in Filipino children, and the moderating role of parental warmth in this relation, for same-sex (mothers-girls; fathers-boys) and cross-sex parent-child groups (mothers-boys; fathers-girls). Measures used were the Rohner Parental Acceptance-Rejection and Control Scale (PARQ/Control), the Achenbach Child Behavior Checklist (CBC), and a discipline measure (DI) constructe...
Igor V. Karyakin; Elvira G. Nikolenko
2017-01-01
From 19 to 24 September, 2016 VII International Conference of the Working Group on Raptors of Northern Eurasia “Birds of prey of Northern Eurasia: problems and adaptation under modern conditions” was held on the basis of the Sochi National Park. Materials for the conference were presented by 198 ornithologists from Russia, Ukraine, Belarus, Kazakhstan, Moldova, Turkmenistan, Austria, Great Britain, Hungary, Mongolia, Poland, Estonia and the USA, who published 148 articles in two collections “...
Bhadra, Anindya
2013-04-22
We describe a Bayesian technique to (a) perform a sparse joint selection of significant predictor variables and significant inverse covariance matrix elements of the response variables in a high-dimensional linear Gaussian sparse seemingly unrelated regression (SSUR) setting and (b) perform an association analysis between the high-dimensional sets of predictors and responses in such a setting. To search the high-dimensional model space, where both the number of predictors and the number of possibly correlated responses can be larger than the sample size, we demonstrate that a marginalization-based collapsed Gibbs sampler, in combination with spike and slab type of priors, offers a computationally feasible and efficient solution. As an example, we apply our method to an expression quantitative trait loci (eQTL) analysis on publicly available single nucleotide polymorphism (SNP) and gene expression data for humans where the primary interest lies in finding the significant associations between the sets of SNPs and possibly correlated genetic transcripts. Our method also allows for inference on the sparse interaction network of the transcripts (response variables) after accounting for the effect of the SNPs (predictor variables). We exploit properties of Gaussian graphical models to make statements concerning conditional independence of the responses. Our method compares favorably to existing Bayesian approaches developed for this purpose. © 2013, The International Biometric Society.
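The spike-and-slab selection idea at the heart of this approach can be illustrated, in a much-simplified single-response setting, with a toy Gibbs sampler. This is a sketch under my own assumptions (known noise variance, a plain linear model, arbitrary hyperparameters), not the paper's collapsed sampler for the full SSUR model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: only the first 3 of 10 predictors are truly active.
n, p = 100, 10
X = rng.standard_normal((n, p))
beta_true = np.zeros(p)
beta_true[:3] = 3.0
y = X @ beta_true + rng.standard_normal(n)

# Hyperparameters (assumed for illustration): noise variance,
# slab variance, and prior inclusion probability.
sigma2, tau2, prior_pi = 1.0, 1.0, 0.2
beta = np.zeros(p)
z = np.zeros(p, dtype=bool)
inc_count = np.zeros(p)
n_iter, burn = 500, 100

for it in range(n_iter):
    for j in range(p):
        # residual with variable j's contribution removed
        r = y - X @ beta + X[:, j] * beta[j]
        s = X[:, j] @ X[:, j]
        v = X[:, j] @ r
        # log Bayes factor for including j, with beta_j integrated out
        log_bf = (-0.5 * np.log(1.0 + tau2 * s / sigma2)
                  + 0.5 * tau2 * v**2 / (sigma2 * (sigma2 + tau2 * s)))
        logit = np.log(prior_pi / (1.0 - prior_pi)) + log_bf
        z[j] = rng.random() < 1.0 / (1.0 + np.exp(-logit))
        if z[j]:
            # slab: draw beta_j from its conditional normal posterior
            var = 1.0 / (s / sigma2 + 1.0 / tau2)
            beta[j] = rng.normal(var * v / sigma2, np.sqrt(var))
        else:
            # spike: beta_j is exactly zero
            beta[j] = 0.0
    if it >= burn:
        inc_count += z

# Posterior inclusion probabilities: near 1 for active predictors.
inc_prob = inc_count / (n_iter - burn)
```

The marginalization step (`log_bf`) is what makes samplers of this family efficient: the regression coefficient is integrated out analytically before the inclusion indicator is drawn.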
Lee, Jenny Hyunjung; McDonnell, Kevin T; Zelenyuk, Alla; Imre, Dan; Mueller, Klaus
2013-07-11
Although the Euclidean distance does well in measuring data distances within high-dimensional clusters, it does poorly when it comes to gauging inter-cluster distances. This significantly impacts the quality of global, low-dimensional space embedding procedures such as the popular multi-dimensional scaling (MDS) where one can often observe non-intuitive layouts. We were inspired by the perceptual processes evoked in the method of parallel coordinates which enables users to visually aggregate the data by the patterns the polylines exhibit across the dimension axes. We call the path of such a polyline its structure and suggest a metric that captures this structure directly in high-dimensional space. This allows us to better gauge the distances of spatially distant data constellations and so achieve data aggregations in MDS plots that are more cognizant of existing high-dimensional structure similarities. Our bi-scale framework distinguishes far-distances from near-distances. The coarser scale uses the structural similarity metric to separate data aggregates obtained by prior classification or clustering, while the finer scale employs the appropriate Euclidean distance.
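The core intuition, measuring similarity by the shape of a point's parallel-coordinates polyline rather than by its position, can be sketched as follows. This is a loose illustration under my own simplifying assumptions (structure taken as the segment slopes of the polyline), not the authors' exact metric:

```python
import numpy as np

def polyline_structure(point):
    # the "structure" of a point: the slopes of its polyline segments
    # across the (ordered, normalized) dimension axes
    return np.diff(point)

def structural_distance(a, b):
    # compare the shapes of the two polylines, ignoring vertical offset
    return float(np.linalg.norm(polyline_structure(a) - polyline_structure(b)))

a = np.array([0.1, 0.5, 0.2, 0.8])
b = a + 0.4                          # same zig-zag shape, shifted upward
c = np.array([0.8, 0.2, 0.7, 0.1])   # opposite zig-zag shape

euclid_ab = float(np.linalg.norm(a - b))   # large: Euclidean calls a and b far apart
struct_ab = structural_distance(a, b)      # ~0: identical polyline structure
struct_ac = structural_distance(a, c)      # large: genuinely different structure
```

Under this reading, `a` and `b` would land in the same aggregate at the coarse scale even though they are Euclidean-distant, which matches the bi-scale behaviour the abstract describes.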
Ma, Yun-Ming; Wang, Tie-Jun
2017-10-01
Higher-dimensional quantum system is of great interest owing to the outstanding features exhibited in the implementation of novel fundamental tests of nature and application in various quantum information tasks. High-dimensional quantum logic gate is a key element in scalable quantum computation and quantum communication. In this paper, we propose a scheme to implement a controlled-phase gate between a 2 N -dimensional photon and N three-level artificial atoms. This high-dimensional controlled-phase gate can serve as crucial components of the high-capacity, long-distance quantum communication. We use the high-dimensional Bell state analysis as an example to show the application of this device. Estimates on the system requirements indicate that our protocol is realizable with existing or near-further technologies. This scheme is ideally suited to solid-state integrated optical approaches to quantum information processing, and it can be applied to various system, such as superconducting qubits coupled to a resonator or nitrogen-vacancy centers coupled to a photonic-band-gap structures.
Directory of Open Access Journals (Sweden)
Lydia eMorris
2016-02-01
Background: Increasingly, research supports the utility of a transdiagnostic understanding of psychopathology. However, there is no consensus regarding the theoretical approach that best explains this. Transdiagnostic interventions can offer service delivery advantages; this is explored in the current review, focusing on group modalities and primary care settings. Objective: This review seeks to explore whether a Perceptual Control Theory (PCT) explanation of psychopathology across disorders is a valid one. Further, this review illustrates the process of developing a novel transdiagnostic intervention (Take Control Course; TCC) from a PCT theory of functioning. Method: Narrative review. Results and Conclusions: Considerable evidence supports key tenets of PCT. Further, PCT offers a novel perspective regarding the mechanisms by which a number of familiar techniques, such as exposure and awareness, are effective. However, additional research is required to directly test the relative contribution of some PCT mechanisms predicted to underlie psychopathology. Directions for future research are considered.
Kritz, Marlene; Gschwandtner, Manfred; Stefanov, Veronika; Hanbury, Allan
2013-01-01
Background There is a large body of research suggesting that medical professionals have unmet information needs during their daily routines. Objective To investigate which online resources and tools different groups of European physicians use to gather medical information and to identify barriers that prevent the successful retrieval of medical information from the Internet. Methods A detailed Web-based questionnaire was sent out to approximately 15,000 physicians across Europe and disseminated through partner websites. 500 European physicians of different levels of academic qualification and medical specialization were included in the analysis. Self-reported frequency of use of different types of online resources, perceived importance of search tools, and perceived search barriers were measured. Comparisons were made across different levels of qualification (qualified physicians vs physicians in training, medical specialists without professorships vs medical professors) and specialization (general practitioners vs specialists). Results Most participants were Internet-savvy, came from Austria (43%, 190/440) and Switzerland (31%, 137/440), were above 50 years old (56%, 239/430), stated high levels of medical work experience, had regular patient contact and were employed in nonacademic health care settings (41%, 177/432). All groups reported frequent use of general search engines and cited “restricted accessibility to good quality information” as a dominant barrier to finding medical information on the Internet. Physicians in training reported the most frequent use of Wikipedia (56%, 31/55). Specialists were more likely than general practitioners to use medical research databases (68%, 185/274 vs 27%, 24/88; χ2 2=44.905, Pinformation on the Internet (59%, 50/85 vs 43%, 111/260; χ2 1=7.231, P=.007) and to restrict their search by language (48%, 43/89 vs 35%, 97/278; χ2 1=5.148, P=.023). They frequently consult general health websites (36%, 31/87 vs 19%, 51
Kritz, Marlene; Gschwandtner, Manfred; Stefanov, Veronika; Hanbury, Allan; Samwald, Matthias
2013-06-26
There is a large body of research suggesting that medical professionals have unmet information needs during their daily routines. To investigate which online resources and tools different groups of European physicians use to gather medical information and to identify barriers that prevent the successful retrieval of medical information from the Internet. A detailed Web-based questionnaire was sent out to approximately 15,000 physicians across Europe and disseminated through partner websites. 500 European physicians of different levels of academic qualification and medical specialization were included in the analysis. Self-reported frequency of use of different types of online resources, perceived importance of search tools, and perceived search barriers were measured. Comparisons were made across different levels of qualification (qualified physicians vs physicians in training, medical specialists without professorships vs medical professors) and specialization (general practitioners vs specialists). Most participants were Internet-savvy, came from Austria (43%, 190/440) and Switzerland (31%, 137/440), were above 50 years old (56%, 239/430), stated high levels of medical work experience, had regular patient contact and were employed in nonacademic health care settings (41%, 177/432). All groups reported frequent use of general search engines and cited "restricted accessibility to good quality information" as a dominant barrier to finding medical information on the Internet. Physicians in training reported the most frequent use of Wikipedia (56%, 31/55). Specialists were more likely than general practitioners to use medical research databases (68%, 185/274 vs 27%, 24/88; χ²₂=44.905, PInternet (59%, 50/85 vs 43%, 111/260; χ²₁=7.231, P=.007) and to restrict their search by language (48%, 43/89 vs 35%, 97/278; χ²₁=5.148, P=.023). They frequently consult general health websites (36%, 31/87 vs 19%, 51/269; χ²₂=12.813, P=.002) and online physician network
Bhatti, Shahzad; Aslam Khan, Muhammad; Abbas, Sana; Attimonelli, Marcella; Gonzalez, Gerardo Rodriguez; Aydin, Hikmet Hakan; de Souza, Erica Martinha Silva
2017-04-09
Heteroplasmy in the human mtDNA control region offers new perspectives on mitochondrial genetics and is substantial for both evolutionary and forensic interpretation. The main goal of this study was to interrogate the recurrence, and resolve the ambiguity, of the blurry spectrum of heteroplasmy in the human mtDNA control region of 50 Baluchi and 116 Sindhi unrelated individuals. Classical Sanger sequencing was employed, followed by further investigation with minisequencing. Only 20% of Baluchi and 25.8% of Sindhi individuals were homoplasmic, whereas the remaining 80% of Baluchi and 74.1% of Sindhi exhibited at least one heteroplasmy within the specimen. In total, 166 individuals had length heteroplasmy (LH), found at positions 16189, 303-315, 568-573, and 514-524, whilst point mutation heteroplasmy (PMH) was detected at positions 73, 16093, 16189, and 16234, respectively. Overall, LH was observed at higher frequency in the Sindhi ethnic group (82%) than in the Baluchi (37%), whereas PMH accumulation was relatively more extensive in the Baluchi (24%) than in the Sindhi (11.2%). The obtained results indicate that growing knowledge of heteroplasmy has helped the forensic community recognize that heteroplasmy plays a pivotal role in legal interpretation on a regular basis, and that knowledge of its biological underpinnings has a vital niche in forensic science. Few studies have focused on heteroplasmy, yet it deserves scientific attention in order to determine its magnitude across different ethnic boundaries.
Talib, Imran; Belgacem, Fethi Bin Muhammad; Asif, Naseer Ahmad; Khalil, Hammad
2017-01-01
In this research article, we derive and analyze an efficient spectral method based on the operational matrices of three-dimensional orthogonal Jacobi polynomials to numerically solve a generalized class of multi-term, high-dimensional fractional-order partial differential equations with mixed partial derivatives. We transform the considered fractional-order problem into an easily solvable system of algebraic equations with the aid of the operational matrices; solving this algebraic system then yields the solution of the problem. Some test problems are considered to confirm the accuracy and validity of the proposed numerical method. The convergence of the method is ensured by comparing the results of our Matlab simulations with the exact solutions in the literature, yielding negligible errors. Moreover, comparative results discussed in the literature are extended and improved in this study.
Directory of Open Access Journals (Sweden)
Akbar Hassanzadeh
2017-01-01
Objective. The current study aimed to investigate the association between stressful life events and psychological problems in a large sample of Iranian adults. Method. In a cross-sectional, large-scale, community-based study, 4763 Iranian adults living in Isfahan, Iran, were investigated. Grouped-outcomes latent factor regression on latent predictors was used to model the association of psychological problems (depression, anxiety, and psychological distress), measured by the Hospital Anxiety and Depression Scale (HADS) and the General Health Questionnaire (GHQ-12), as the grouped outcomes, with stressful life events, measured by a self-administered stressful life events (SLEs) questionnaire, as the latent predictors. Results. The results showed that the personal stressors domain has a significant positive association with psychological distress (β=0.19), anxiety (β=0.25), depression (β=0.15), and their collective profile score (β=0.20), with greater associations in females (β=0.28) than in males (β=0.13) (all P<0.001). In addition, in the adjusted models, the regression coefficients for the association of the social stressors domain with the psychological problems profile score were 0.37, 0.35, and 0.46 in the total sample, males, and females, respectively (P<0.001). Conclusion. Results of our study indicated that different stressors, particularly those related to socioeconomic conditions, have an effective impact on psychological problems. It is important to consider the social and cultural background of a population when managing stressors, as an effective approach for preventing and reducing the destructive burden of psychological problems.
Liu, Changying; Wu, Xinyuan
2017-07-01
In this paper we explore arbitrarily high-order Lagrange collocation-type time-stepping schemes for effectively solving high-dimensional nonlinear Klein-Gordon equations with different boundary conditions. We begin with one-dimensional periodic boundary problems and first formulate an abstract ordinary differential equation (ODE) on a suitable infinite-dimensional function space based on operator spectrum theory. We then introduce an operator-variation-of-constants formula which is essential for the derivation of our arbitrarily high-order Lagrange collocation-type time-stepping schemes for the nonlinear abstract ODE. The nonlinear stability and convergence are rigorously analysed once the spatial differential operator is approximated by an appropriate positive semi-definite matrix under suitable smoothness assumptions. With regard to two-dimensional Dirichlet or Neumann boundary problems, our new time-stepping schemes coupled with the discrete Fast Sine/Cosine Transformation can be applied to simulate the two-dimensional nonlinear Klein-Gordon equations effectively. All essential features of the methodology are present in the one-dimensional and two-dimensional cases, and the schemes to be analysed lend themselves equally well to the higher-dimensional case. The numerical simulation is implemented and the numerical results clearly demonstrate the advantage and effectiveness of our new schemes in comparison with the existing numerical methods for solving nonlinear Klein-Gordon equations in the literature.
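For reference, the abstract does not write out the equation being solved; in its standard textbook form (the paper's exact coefficients and nonlinearity are not stated here, so this is only the generic version), the nonlinear Klein-Gordon equation reads:

```latex
% Nonlinear Klein-Gordon equation, generic textbook form:
% u = u(x, t), a > 0 a wave-speed constant, f a nonlinear function of u
u_{tt}(x, t) - a^2 \, \Delta u(x, t) + f\bigl(u(x, t)\bigr) = 0,
```

subject to periodic, Dirichlet, or Neumann conditions on the spatial boundary, as discussed in the abstract.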
Pololi, Linda H; Evans, Arthur T
2015-01-01
To address a dearth of mentoring and to avoid the pitfalls of dyadic mentoring, the authors implemented and evaluated a novel collaborative group peer mentoring program in a large academic department of medicine. The mentoring program aimed to facilitate faculty in their career planning, and targeted either early-career or midcareer faculty in 5 cohorts over 4 years, from 2010 to 2014. Each cohort of 9-12 faculty participated in a yearlong program with foundations in adult learning, relationship formation, mindfulness, and culture change. Participants convened for an entire day, once a month. Sessions incorporated facilitated stepwise and values-based career planning, skill development, and reflective practice. Early-career faculty participated in an integrated writing program and midcareer faculty in leadership development. Overall attendance of the 51 participants was 96%, and only 3 of 51 faculty who completed the program left the medical school during the 4 years. All faculty completed a written detailed structured academic development plan. Participants experienced an enhanced, inclusive, and appreciative culture; clarified their own career goals, values, strengths and priorities; enhanced their enthusiasm for collaboration; and developed skills. The program results highlight the need for faculty to personally experience the power of forming deep relationships with their peers for fostering successful career development and vitality. The outcomes of faculty humanity, vitality, professionalism, relationships, appreciation of diversity, and creativity are essential to the multiple missions of academic medicine. © 2015 The Alliance for Continuing Education in the Health Professions, the Society for Academic Continuing Medical Education, and the Council on Continuing Medical Education, Association for Hospital Medical Education.
DEFF Research Database (Denmark)
Müller, Emmanuel; Assent, Ira; Günnemann, Stephan
2011-01-01
comparative studies on the advantages and disadvantages of the different algorithms exist. Part of the underlying problem is the lack of available open source implementations that could be used by researchers to understand, compare, and extend subspace and projected clustering algorithms. In this work, we...
Estimating the effect of a variable in a high-dimensional regression model
DEFF Research Database (Denmark)
Jensen, Peter Sandholt; Wurtz, Allan
A problem encountered in some empirical research, e.g. growth empirics, is that the potential number of explanatory variables is large compared to the number of observations. This makes it infeasible to condition on all variables in order to determine whether a particular variable has an effect. We...
Duque, María Osley Garzón; Bernal, Diana Restrepo; Cardona, Doris Alejandra Segura; Vargas, Alejandra Valencia; Salas, Ivony Agudelo; Quintero, Lina Marcela Salazar
2014-01-01
To examine, from the point of view of a group of epidemiologists in training, their life experiences and work related to addressing mental health problems and mental health issues. An exploratory qualitative-descriptive study was conducted using ethnographic tools, non-participant observation, note-taking, and group interviews (FG). The participants mentioned that mental health and mental health issues are poorly managed and poorly differentiated, both by themselves and by the community in general. They also said they were not prepared to handle mental health problems, nor did they have the support of services for patient care, as mental health issues have not yet been clearly dimensioned by society. Epidemiology has its limitations: it focuses on knowledge of physical-biological aspects and the use of a quantitative approach, with poor integration of the qualitative approach, thus hindering the understanding of a phenomenon that exceeds the limits of a single research approach. This approach to issues of health and mental illness widens the view beyond a single focus of knowledge. It includes an understanding of the qualitative approach as an option to advance the knowledge and recognition of a public health problem overshadowed by the stigma and apathy of society. Copyright © 2014 Asociación Colombiana de Psiquiatría. Publicado por Elsevier España. All rights reserved.
Shahiri, Amirah Mohamed; Husain, Wahidah; Rashid, Nur'Aini Abd
2017-10-01
Huge amounts of data in educational datasets may cause problems in producing quality data. Recently, data mining approaches have increasingly been used by educational data mining researchers to analyze data patterns. However, many research studies have concentrated on selecting suitable learning algorithms rather than on performing feature selection. As a result, these datasets suffer from computational complexity and require longer computation time for classification. The main objective of this research is to provide an overview of the feature selection techniques that have been used to identify the most significant features. This research then proposes a framework to improve the quality of students' datasets. The proposed framework uses filter- and wrapper-based techniques to support the prediction process in future studies.
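As a small illustration of the filter side of such a framework (the function name, data, and correlation criterion here are assumptions for the sketch, not the paper's framework), a filter method scores each feature independently of any learning algorithm and keeps the top-ranked ones:

```python
import numpy as np

def filter_select(X, y, k):
    """Rank features by absolute Pearson correlation with the target
    and keep the indices of the top k (a simple filter-based criterion)."""
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    denom = np.sqrt((Xc ** 2).sum(axis=0) * (yc ** 2).sum())
    scores = np.abs(Xc.T @ yc) / np.where(denom == 0, 1.0, denom)
    return np.argsort(scores)[::-1][:k]

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))
y = 2 * X[:, 3] + rng.normal(scale=0.1, size=100)  # feature 3 drives y
top = filter_select(X, y, 2)  # feature 3 should rank first
```

A wrapper method would instead re-train the classifier for each candidate feature subset, which is more accurate but far more expensive.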
The xyz algorithm for fast interaction search in high-dimensional data
Thanei, Gian-Andrea; Meinshausen, Nicolai; Shah, Rajen D.
2016-01-01
When performing regression on a dataset with $p$ variables, it is often of interest to go beyond using main linear effects and include interactions as products between individual variables. For small-scale problems, these interactions can be computed explicitly but this leads to a computational complexity of at least $\mathcal{O}(p^2)$ if done naively. This cost can be prohibitive if $p$ is very large. We introduce a new randomised algorithm that is able to discover interactions with high pro...
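To make the $\mathcal{O}(p^2)$ bottleneck concrete, here is a sketch of the naive exhaustive screen the abstract contrasts against (this is the brute-force baseline, not the xyz algorithm itself; the scoring by absolute correlation is an illustrative assumption):

```python
import numpy as np

def naive_interaction_screen(X, y):
    """Score every pair (j, k) by the absolute correlation between the
    product feature X_j * X_k and the response y. The double loop makes
    the cost O(n p^2), which is what randomised methods like xyz avoid."""
    n, p = X.shape
    yc = (y - y.mean()) / y.std()
    best, best_pair = -1.0, None
    for j in range(p):
        for k in range(j + 1, p):
            z = X[:, j] * X[:, k]
            zs = z.std()
            if zs == 0:
                continue
            score = abs(np.dot((z - z.mean()) / zs, yc)) / n
            if score > best:
                best, best_pair = score, (j, k)
    return best_pair, best

rng = np.random.default_rng(1)
X = rng.choice([-1.0, 1.0], size=(200, 8))
y = X[:, 2] * X[:, 5] + 0.1 * rng.normal(size=200)  # planted interaction
pair, _ = naive_interaction_screen(X, y)
```

For $p$ in the tens of thousands, the inner loop runs billions of times, which is exactly the regime where a randomised search pays off.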
Directory of Open Access Journals (Sweden)
K. Bristow
2011-01-01
Full Text Available Background. In the UK, most people with mental health problems are managed in primary care. However, many individuals in need of help are not able to access care, either because it is not available, or because the individual's interaction with care-givers deters or diverts help-seeking. Aims. To understand the experience of seeking care for distress from the perspective of potential patients from “hard-to-reach” groups. Methods. A qualitative study using semi-structured interviews, analysed using a thematic framework. Results. Access to primary care is problematic in four main areas: how distress is conceptualised by individuals, the decision to seek help, barriers to help-seeking, and navigating and negotiating services. Conclusion. There are complex reasons why people from “hard-to-reach” groups may not conceptualise their distress as a biomedical problem. In addition, there are particular barriers to accessing primary care when distress is recognised by the person and help-seeking is attempted. We suggest how primary care could be more accessible to people from “hard-to-reach” groups including the need to offer a flexible, non-biomedical response to distress.
Landau, Will; Niemi, Jarad
2016-01-01
Markov chain Monte Carlo (MCMC) is the predominant tool used in Bayesian parameter estimation for hierarchical models. When the model expands due to an increasing number of hierarchical levels, number of groups at a particular level, or number of observations in each group, a fully Bayesian analysis via MCMC can easily become computationally demanding, even intractable. We illustrate how the steps in an MCMC for hierarchical models are predominantly one of two types: conditionally independent...
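The structure being exploited can be seen in a toy Gibbs step (the model, sizes, and fixed hyperparameters below are assumptions for illustration): in a hierarchical normal model, each group-level parameter's full conditional depends only on the hyperparameters and that group's data, so all group updates are conditionally independent and can be drawn in parallel.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy hierarchical model: y_gi ~ N(theta_g, 1), theta_g ~ N(mu, tau^2).
G, n = 50, 20
true_theta = rng.normal(0.0, 2.0, size=G)
y = true_theta[:, None] + rng.normal(size=(G, n))

mu, tau2 = 0.0, 4.0          # hyperparameters held fixed for this sketch
ybar = y.mean(axis=1)

# The full conditional of each theta_g given (mu, tau2) is Normal and
# involves no other theta_h: all G draws are conditionally independent,
# so this single vectorised line stands in for G parallel updates.
post_var = 1.0 / (n + 1.0 / tau2)
post_mean = post_var * (n * ybar + mu / tau2)
theta_draw = post_mean + np.sqrt(post_var) * rng.normal(size=G)
```

In a full sampler this step alternates with a (low-dimensional) update of the hyperparameters; it is the group-level step that dominates the cost as the model grows, and that parallel hardware can accelerate.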
Diffuse Interface Models on Graphs for Classification of High Dimensional Data
2011-01-01
McLean, Michelle; Van Wyk, Jacqueline M; Peters-Futre, Edith M; Higgins-Opitz, Susan B
2006-06-01
In problem-based learning (PBL) curricula, first-year students need to adapt to a new learning environment and an unfamiliar new pedagogy. The small-group tutorial potentially offers a learning environment where students can become self-directed learners, collaborating with other group members to achieve individual and group learning goals. At the end of the first six-week theme in a relatively new PBL curriculum, new medical students were canvassed about coping with PBL (self-directed learning; content; time management; resources) and the value of the small-group tutorial, the latter being reported here. Almost 84% of students (n = 178) responded. The benefits of participating in small groups were categorized into three domains (cognitive, affective and social), as identified from student responses. Results were analysed in terms of gender and prior educational experience (secondary school vs. prior tertiary experience). For almost 94% of students, the small-group tutorial provided a conducive learning environment that influenced their personal development (i.e. tolerance, patience) and socialization into the faculty. Significantly more males indicated that they had developed social skills, while more school-leavers (matriculants) than mature students felt more receptive to the views of others. More mature students claimed to have made friends. Irrespective of some conflicting opinions in the literature, the present results suggest that the PBL tutorial may be important in facilitating student socialization into a new and unfamiliar academic environment, particularly when the pedagogy differs markedly from their past educational experiences. Through interacting with fellow students from diverse origins who hold different views in the intimate setting of the small group, students felt that they had not only increased their knowledge but had also developed personally and socially. It is proposed that the small group may be useful for
Directory of Open Access Journals (Sweden)
Igor V. Karyakin
2017-01-01
Full Text Available From 19 to 24 September, 2016, the VII International Conference of the Working Group on Raptors of Northern Eurasia, “Birds of prey of Northern Eurasia: problems and adaptation under modern conditions”, was hosted by the Sochi National Park. Materials for the conference were presented by 198 ornithologists from Russia, Ukraine, Belarus, Kazakhstan, Moldova, Turkmenistan, Austria, Great Britain, Hungary, Mongolia, Poland, Estonia and the USA, who published 148 articles in the two collections “Birds of prey of Northern Eurasia” and “Palearctic Harriers”.
DEFF Research Database (Denmark)
Pham, Ninh Dang; Pagh, Rasmus
2012-01-01
projection-based technique that is able to estimate the angle-based outlier factor for all data points in time near-linear in the size of the data. Our approach is also suitable for execution in a parallel environment, achieving a parallel speedup. We introduce a theoretical analysis of the quality of approximation to guarantee the reliability of our estimation algorithm. Empirical experiments on synthetic and real-world data sets demonstrate that our approach is efficient and scalable to very large high-dimensional data sets.
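For intuition, here is an exact, small-scale version of the quantity being approximated (the distance-weighted formulation below is one common variant of the angle-based outlier factor; the paper's contribution is replacing this $\mathcal{O}(n^2)$-per-point computation with a near-linear random-projection estimate):

```python
import numpy as np

def abof(X, i):
    """Exact angle-based outlier factor of point i: the variance of the
    distance-weighted cosines between difference vectors from i to all
    other point pairs. Inliers see neighbours in all directions (high
    variance); outliers see them in a narrow cone (low variance).
    Cost is O(n^2) per point."""
    p = X[i]
    others = np.delete(X, i, axis=0)
    vals = []
    for a in range(len(others)):
        for b in range(a + 1, len(others)):
            u, v = others[a] - p, others[b] - p
            nu, nv = np.dot(u, u), np.dot(v, v)
            if nu == 0 or nv == 0:
                continue
            vals.append(np.dot(u, v) / (nu * nv))
    return np.var(vals)

rng = np.random.default_rng(3)
X = rng.normal(size=(60, 5))
X[0] = 10.0  # planted outlier far from the cluster
scores = np.array([abof(X, i) for i in range(len(X))])
# the planted outlier should receive the lowest score
```

Because the statistic is a variance of inner-product-based quantities, it can be estimated from a few random projections, which is what makes the near-linear algorithm possible.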
Benediktsson, J. A.; Swain, P. H.; Ersoy, O. K.
1993-01-01
Application of neural networks to classification of remote sensing data is discussed. Conventional two-layer backpropagation is found to give good results in classification of remote sensing data but is not efficient in training. A more efficient variant, based on conjugate-gradient optimization, is used for classification of multisource remote sensing and geographic data and very-high-dimensional data. The conjugate-gradient neural networks give excellent performance in classification of multisource data, but do not compare as well with statistical methods in classification of very-high-dimensional data.
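The idea of swapping plain backpropagation for conjugate-gradient optimization can be sketched by handing a small network's loss to an off-the-shelf CG minimiser (the network size, data, and use of `scipy.optimize.minimize` are assumptions for illustration, not the paper's setup):

```python
import numpy as np
from scipy.optimize import minimize

# Toy problem: fit a 2-4-1 tanh network to XOR by minimising squared
# error with nonlinear conjugate gradients instead of plain backprop.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([0., 1., 1., 0.])

def unpack(w):
    W1 = w[:8].reshape(2, 4); b1 = w[8:12]
    W2 = w[12:16]; b2 = w[16]
    return W1, b1, W2, b2

def loss(w):
    W1, b1, W2, b2 = unpack(w)
    h = np.tanh(X @ W1 + b1)   # hidden layer
    out = h @ W2 + b2          # linear output
    return np.mean((out - y) ** 2)

rng = np.random.default_rng(4)
w0 = rng.normal(scale=0.5, size=17)
res = minimize(loss, w0, method="CG")  # gradients approximated numerically
```

CG chooses search directions that are mutually conjugate with exact line searches along each, which typically needs far fewer passes over the data than fixed-step gradient descent.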
2016-02-01
choice for the weight function is the Zelnik-Manor and Perona function [31] for sparse matrices, $w(x, y) = e^{-M(x,y)^2 / \sqrt{\tau(x)\tau(y)}}$ (49), using a local scale $\tau(x$... Modified Cheeger and Ratio Cut Methods Using the Ginzburg-Landau Functional for Classification of High-Dimensional Data. Ekaterina Merkurjev, Andrea... The related Ginzburg-Landau functional is used in the derivation of the methods. The graph framework discussed in this paper is undirected. The resulting
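A sketch of building such a locally scaled weight matrix (choosing $\tau(x)$ as the squared distance to the $k$-th nearest neighbour is the usual Zelnik-Manor/Perona convention; the value $k=7$ and the toy data are assumptions):

```python
import numpy as np

def self_tuning_weights(X, k=7):
    """Graph weights w(x,y) = exp(-M(x,y)^2 / sqrt(tau(x) tau(y))),
    with M the Euclidean distance and tau(x) a per-point local scale
    (squared distance to the k-th nearest neighbour), so the kernel
    width adapts to local density."""
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)  # M(x, y)
    tau = np.sort(D, axis=1)[:, k] ** 2                        # local scales
    tau = np.maximum(tau, 1e-12)
    W = np.exp(-D ** 2 / np.sqrt(np.outer(tau, tau)))
    np.fill_diagonal(W, 0.0)                                   # no self-loops
    return W

rng = np.random.default_rng(5)
X = np.vstack([rng.normal(0, 0.1, (20, 2)),   # cluster A
               rng.normal(5, 0.1, (20, 2))])  # cluster B, far away
W = self_tuning_weights(X)
# within-cluster weights are large; cross-cluster weights vanish
```

The per-point scaling is what keeps the matrix effectively sparse: edges between points in differently dense regions decay at rates appropriate to each region.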
Behler, Jörg; Martoňák, Roman; Donadio, Davide; Parrinello, Michele
2008-05-01
We study in a systematic way the complex sequence of the high-pressure phases of silicon obtained upon compression by combining an accurate high-dimensional neural network representation of the density-functional theory potential-energy surface with the metadynamics scheme. Starting from the thermodynamically stable diamond structure at ambient conditions we are able to identify all structural phase transitions up to the highest-pressure fcc phase at about 100 GPa. The results are in excellent agreement with experiment. The method developed promises to be of great value in the study of inorganic solids, including those having metallic phases.
Shrinkage-based diagonal Hotelling’s tests for high-dimensional small sample size data
Dong, Kai
2015-09-16
DNA sequencing techniques bring novel tools and also statistical challenges to genetic research. In addition to detecting differentially expressed genes, testing the significance of gene sets or pathway analysis has been recognized as an equally important problem. Owing to the “large $p$, small $n$” paradigm, the traditional Hotelling's $T^2$ test suffers from the singularity problem and therefore is not valid in this setting. In this paper, we propose a shrinkage-based diagonal Hotelling's test for both one-sample and two-sample cases. We also suggest several different ways to derive the approximate null distribution under different scenarios of $p$ and $n$ for our proposed shrinkage-based test. Simulation studies show that the proposed method performs comparably to existing competitors when $n$ is moderate or large, but it is better when $n$ is small. In addition, we analyze four gene expression data sets, which demonstrate the advantage of our proposed shrinkage-based diagonal Hotelling's test.
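A minimal one-sample sketch of the diagonal idea (the fixed shrinkage weight and median target below are simplifying assumptions; the paper derives a data-driven shrinkage and matching null distributions):

```python
import numpy as np

def diag_hotelling_one_sample(X, mu0, lam=0.5):
    """One-sample diagonal Hotelling-type statistic
    T = n * sum_j (xbar_j - mu0_j)^2 / s2_j, where the per-gene sample
    variances are shrunk toward their pooled median:
    s2_j = (1 - lam) * v_j + lam * median(v). Using only the diagonal
    avoids inverting a singular p x p covariance matrix when p >> n."""
    n, p = X.shape
    xbar = X.mean(axis=0)
    v = X.var(axis=0, ddof=1)
    s2 = (1 - lam) * v + lam * np.median(v)
    return n * np.sum((xbar - mu0) ** 2 / s2)

rng = np.random.default_rng(6)
X0 = rng.normal(0.0, 1.0, size=(8, 200))   # small n, large p, null true
X1 = rng.normal(0.5, 1.0, size=(8, 200))   # mean shifted away from mu0 = 0
t_null = diag_hotelling_one_sample(X0, np.zeros(200))
t_alt = diag_hotelling_one_sample(X1, np.zeros(200))
# t_alt should clearly exceed t_null
```

Shrinking the tiny-sample variances stabilises the denominator, which is why such tests keep their size when $n$ is as small as a handful of arrays.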
Graph Based Models for Unsupervised High Dimensional Data Clustering and Network Analysis
2015-01-01