PERFORMANCE OF SELECTED AGGLOMERATIVE HIERARCHICAL CLUSTERING METHODS
Directory of Open Access Journals (Sweden)
Nusa Erman
2015-01-01
Full Text Available The broad variety of agglomerative hierarchical clustering methods raises the problem of how to choose the most appropriate method for given data. It is well known that some methods outperform others when the analysed data have a specific structure. In the presented study we observed the behaviour of the centroid method, the median method (Gower's median), and the average method (unweighted pair-group method with arithmetic mean, UPGMA; average linkage between groups). We compared them with the most commonly used methods of hierarchical clustering: minimum (single) linkage clustering, maximum (complete) linkage clustering, the Ward method, and the McQuitty method (group method average; weighted pair-group method using arithmetic averages, WPGMA). We applied the comparison of these methods to spherical, ellipsoid, umbrella-like, “core-and-sphere”, ring-like and intertwined three-dimensional data structures. To generate the data and execute the analysis, we used R statistical software. Results show that all seven methods succeed in finding compact, ball-shaped or ellipsoid structures when these are sufficiently separated. Conversely, all methods except the minimum perform poorly on non-homogeneous, irregular and elongated ones. Especially challenging is a circular double helix structure, which is correctly revealed only by the minimum method. We can also confirm formerly published results of other simulation studies, which usually favour the average method (besides the Ward method) when the data are assumed to be fairly compact and well separated.
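The linkage methods compared in this study differ only in how the distance between two clusters is defined. A minimal pure-Python sketch (illustrative only; the study itself used R, and this naive O(n³) loop is not the authors' code) of agglomerative clustering with single, complete and UPGMA-style average linkage:

```python
def dist(a, b):
    return abs(a - b)  # 1-D Euclidean distance, for illustration

def cluster_distance(c1, c2, linkage):
    # inter-cluster distance under the chosen linkage
    pair_dists = [dist(x, y) for x in c1 for y in c2]
    if linkage == "single":    # minimum method
        return min(pair_dists)
    if linkage == "complete":  # maximum method
        return max(pair_dists)
    if linkage == "average":   # UPGMA
        return sum(pair_dists) / len(pair_dists)
    raise ValueError(linkage)

def agglomerate(points, k, linkage="average"):
    # start from singletons and repeatedly merge the two closest clusters
    clusters = [[p] for p in points]
    while len(clusters) > k:
        i, j = min(
            ((i, j) for i in range(len(clusters))
                    for j in range(i + 1, len(clusters))),
            key=lambda ij: cluster_distance(clusters[ij[0]], clusters[ij[1]], linkage),
        )
        clusters[i] = clusters[i] + clusters[j]
        del clusters[j]
    return clusters
```

On two well-separated 1-D groups all three linkages agree, mirroring the study's finding that every method recovers compact, well-separated structures.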
Non-hierarchical clustering methods on factorial subspaces
Tortora, Cristina
2011-01-01
Cluster analysis (CA) aims at finding homogeneous groups of individuals, where homogeneous refers to individuals that present similar characteristics. Many CA techniques already exist; among the non-hierarchical ones the best known, thanks to its simplicity and computational properties, is the k-means method. However, the method is unstable when the number of variables is large and when variables are correlated. This problem has led to the development of two-step methods, which perform a linear tra...
Breaking the hierarchy - a new cluster selection mechanism for hierarchical clustering methods
Directory of Open Access Journals (Sweden)
Zweig Katharina A
2009-10-01
Full Text Available Abstract Background Hierarchical clustering methods like Ward's method have been used for decades to understand biological and chemical data sets. In order to obtain a partition of the data set, it is necessary to choose an optimal level of the hierarchy by a so-called level selection algorithm. In 2005, a new kind of hierarchical clustering method was introduced by Palla et al. that differs in two ways from Ward's method: it can be used on data for which no full similarity matrix is defined, and it can produce overlapping clusters, i.e., allow for multiple membership of items in clusters. These features are optimal for biological and chemical data sets, but until now no level selection algorithm has been published for this method. Results In this article we provide a general selection scheme, the level-independent clustering selection method, called LInCS. With it, clusters can be selected from any level in quadratic time with respect to the number of clusters. Since hierarchically clustered data is not necessarily associated with a similarity measure, the selection is based on a graph-theoretic notion of cohesive clusters. We present results of our method on two data sets, a set of drug-like molecules and a set of protein-protein interaction (PPI) data. In both cases the method provides a clustering with very good sensitivity and specificity values with respect to a given reference clustering. Moreover, we can show for the PPI data set that our graph-theoretic cohesiveness measure indeed chooses biologically homogeneous clusters and disregards inhomogeneous ones in most cases. We finally discuss how the method can be generalized to other hierarchical clustering methods to allow for a level-independent cluster selection. Conclusion Using our new cluster selection method together with the method by Palla et al. provides a new interesting clustering mechanism that allows one to compute overlapping clusters, which is especially valuable for biological and
A dynamic hierarchical clustering method for trajectory-based unusual video event detection.
Jiang, Fan; Wu, Ying; Katsaggelos, Aggelos K
2009-04-01
The proposed unusual video event detection method is based on unsupervised clustering of object trajectories, which are modeled by hidden Markov models (HMM). The novelty of the method includes a dynamic hierarchical process incorporated in the trajectory clustering algorithm to prevent model overfitting and a 2-depth greedy search strategy for efficient clustering.
Hierarchical clustering for graph visualization
Clémençon, Stéphan; Rossi, Fabrice; Tran, Viet Chi
2012-01-01
This paper describes a graph visualization methodology based on hierarchical maximal modularity clustering, with interactive and significant coarsening and refining possibilities. An application of this method to HIV epidemic analysis in Cuba is outlined.
Neutrosophic Hierarchical Clustering Algorithms
Directory of Open Access Journals (Sweden)
Rıdvan Şahin
2014-03-01
Full Text Available The interval neutrosophic set (INS) is a generalization of the interval-valued intuitionistic fuzzy set (IVIFS), in which the membership and non-membership values of elements consist of fuzzy ranges, while the single-valued neutrosophic set (SVNS) is regarded as an extension of the intuitionistic fuzzy set (IFS). In this paper, we extend the hierarchical clustering techniques proposed for IFSs and IVIFSs to SVNSs and INSs, respectively. Based on the traditional hierarchical clustering procedure, the single-valued neutrosophic aggregation operator, and the basic distance measures between SVNSs, we define a single-valued neutrosophic hierarchical clustering algorithm for clustering SVNSs. We then extend the algorithm to classify interval neutrosophic data. Finally, we present some numerical examples in order to show the effectiveness and applicability of the developed clustering algorithms.
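A commonly used basic distance between SVNSs, which a clustering procedure of this kind can build on, is the normalized Hamming distance over the (truth, indeterminacy, falsity) triples. A small sketch of that standard form (this may not match the paper's exact measure):

```python
def svns_hamming(a, b):
    # a, b: equal-length lists of (truth, indeterminacy, falsity)
    # triples, each component in [0, 1]
    assert len(a) == len(b)
    total = sum(abs(ta - tb) + abs(ia - ib) + abs(fa - fb)
                for (ta, ia, fa), (tb, ib, fb) in zip(a, b))
    # normalize by 3n so the distance stays in [0, 1]
    return total / (3 * len(a))
```

Hierarchical clustering of SVNSs then proceeds as usual, with this distance playing the role the Euclidean distance plays for numeric vectors.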
Non-Hierarchical Clustering as a method to analyse an open-ended ...
African Journals Online (AJOL)
Apple
tests, provide instructors with tools to probe students' conceptual knowledge of various fields of science and ... quantitative non-hierarchical clustering analysis method known as k-means (Everitt, Landau, Leese & Stahl, ...... undergraduate engineering students in creating ... mathematics-formal reasoning and the contextual.
Directory of Open Access Journals (Sweden)
I. Crawford
2015-07-01
Full Text Available In this paper we present improved methods for discriminating and quantifying Primary Biological Aerosol Particles (PBAP) by applying hierarchical agglomerative cluster analysis to multi-parameter ultraviolet-light-induced fluorescence (UV-LIF) spectrometer data. The methods employed in this study can be applied to data sets in excess of 1×10^6 points on a desktop computer, allowing for each fluorescent particle in a dataset to be explicitly clustered. This reduces the potential for misattribution found in the subsampling and comparative attribution methods used in previous approaches, improving our capacity to discriminate and quantify PBAP meta-classes. We evaluate the performance of several hierarchical agglomerative cluster analysis linkages and data normalisation methods using laboratory samples of known particle types and an ambient dataset. Fluorescent and non-fluorescent polystyrene latex spheres were sampled with a Wideband Integrated Bioaerosol Spectrometer (WIBS-4), where the optical size, asymmetry factor and fluorescence measurements were used as inputs to the analysis package. It was found that the Ward linkage with z-score or range normalisation performed best, correctly attributing 98 and 98.1 % of the data points respectively. The best performing methods were applied to the BEACHON-RoMBAS ambient dataset, where it was found that the z-score and range normalisation methods yield similar results, with each method producing clusters representative of fungal spores and bacterial aerosol, consistent with previous results. The z-score result was compared to clusters generated with previous approaches (WIBS AnalysiS Program, WASP), where we observe that the subsampling and comparative attribution method employed by WASP results in the overestimation of the fungal spore concentration by a factor of 1.5 and the underestimation of the bacterial aerosol concentration by a factor of 5. We suggest that this is likely due to errors arising from misattribution.
Directory of Open Access Journals (Sweden)
I. Crawford
2015-11-01
Full Text Available In this paper we present improved methods for discriminating and quantifying primary biological aerosol particles (PBAPs) by applying hierarchical agglomerative cluster analysis to multi-parameter ultraviolet-light-induced fluorescence (UV-LIF) spectrometer data. The methods employed in this study can be applied to data sets in excess of 1 × 10^6 points on a desktop computer, allowing for each fluorescent particle in a data set to be explicitly clustered. This reduces the potential for misattribution found in the subsampling and comparative attribution methods used in previous approaches, improving our capacity to discriminate and quantify PBAP meta-classes. We evaluate the performance of several hierarchical agglomerative cluster analysis linkages and data normalisation methods using laboratory samples of known particle types and an ambient data set. Fluorescent and non-fluorescent polystyrene latex spheres were sampled with a Wideband Integrated Bioaerosol Spectrometer (WIBS-4), where the optical size, asymmetry factor and fluorescence measurements were used as inputs to the analysis package. It was found that the Ward linkage with z-score or range normalisation performed best, correctly attributing 98 and 98.1 % of the data points respectively. The best-performing methods were applied to the BEACHON-RoMBAS (Bio–hydro–atmosphere interactions of Energy, Aerosols, Carbon, H2O, Organics and Nitrogen–Rocky Mountain Biogenic Aerosol Study) ambient data set, where it was found that the z-score and range normalisation methods yield similar results, with each method producing clusters representative of fungal spores and bacterial aerosol, consistent with previous results. The z-score result was compared to clusters generated with previous approaches (WIBS AnalysiS Program, WASP), where we observe that the subsampling and comparative attribution method employed by WASP results in the overestimation of the fungal spore concentration by a factor of 1.5 and the
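The two normalisation schemes found to work best with the Ward linkage are standard transformations of each input feature. A minimal sketch of both (illustrative, not the study's processing pipeline):

```python
def z_score(xs):
    # standardize a feature to zero mean and unit (population) standard deviation
    mean = sum(xs) / len(xs)
    sd = (sum((x - mean) ** 2 for x in xs) / len(xs)) ** 0.5
    return [(x - mean) / sd for x in xs]

def range_norm(xs):
    # rescale a feature linearly onto [0, 1]
    lo, hi = min(xs), max(xs)
    return [(x - lo) / (hi - lo) for x in xs]
```

Applying one of these per feature (optical size, asymmetry factor, fluorescence channels) keeps any single large-magnitude feature from dominating the Euclidean distances that the linkage works on.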
Directory of Open Access Journals (Sweden)
Refat Aljumily
2015-09-01
Full Text Available A few literary scholars have long claimed that Shakespeare did not write some of his best plays (history plays and tragedies) and have proposed, at one time or another, various suspect authorship candidates. Most modern-day Shakespeare scholars have rejected this claim, arguing that there is strong evidence that Shakespeare wrote the plays and poems, since his name appears on them as the author. This has led to a long-running scholarly debate. Stylometry is a fast-growing field often used to attribute authorship to anonymous or disputed texts. Stylometric attempts to resolve this literary puzzle have raised interesting questions over the past few years. The following paper contributes to “the Shakespeare authorship question” by using a mathematically based methodology to examine the hypothesis that Shakespeare wrote all the disputed plays traditionally attributed to him. More specifically, the methodology used here is based on Mean Proximity, as a linear hierarchical clustering method, and on Principal Components Analysis, as a non-hierarchical linear clustering method. It is also based, for the first time in the domain, on the Self-Organizing Map U-Matrix and the Voronoi Map, as non-linear clustering methods, to cover the possibility that the data contain significant non-linearities. The Vector Space Model (VSM) is used to convert texts into vectors in a high-dimensional space, with the aim of comparing the degrees of similarity within and between limited samples of text (the disputed plays) and the various works and plays assumed to have been written by Shakespeare and by possible alternative authors, notably Sir Francis Bacon, Christopher Marlowe, John Fletcher, and Thomas Kyd. Here, “similarity” is defined in terms of a correlation/distance coefficient measure based on the frequency-of-usage profiles of function words, word bi-grams, and character triple-grams. The claim that Shakespeare authored all the disputed
Masiero, Joseph R; Bauer, J M; Grav, T; Nugent, C R; Stevenson, R
2013-01-01
Using albedos from WISE/NEOWISE to separate distinct albedo groups within the Main Belt asteroids, we apply the Hierarchical Clustering Method to these subpopulations and identify dynamically associated clusters of asteroids. While this survey is limited to the ~35% of known Main Belt asteroids that were detected by NEOWISE, we present the families linked from these objects as higher-confidence associations than can be obtained from dynamical linking alone. We find that over one-third of the observed population of the Main Belt is represented in the high-confidence cores of dynamical families. The albedo distribution of family members differs significantly from the albedo distribution of background objects in the same region of the Main Belt; however, interpretation of this effect is complicated by the incomplete identification of lower-confidence family members. In total we link 38,298 asteroids into 76 distinct families. This work represents a critical step necessary to debias the albedo and size distributio...
Directory of Open Access Journals (Sweden)
Susan Worner
2013-09-01
Full Text Available For greater preparedness, pest risk assessors are required to prioritise long lists of pest species with the potential to establish and cause significant impact in an endangered area. Such prioritization is often qualitative, subjective, and sometimes biased, relying mostly on expert and stakeholder consultation. In recent years, cluster-based analyses have been used to investigate regional pest species assemblages, or pest profiles, to indicate the risk of new organism establishment. Such an approach is based on the premise that the co-occurrence of well-known global invasive pest species in a region is not random, and that the pest species profile or assemblage integrates complex functional relationships that are difficult to tease apart. In other words, the assemblage can help identify and prioritise species that pose a threat in a target region. A computational intelligence method called the Kohonen self-organizing map (SOM), a type of artificial neural network, was the first clustering method applied to analyse assemblages of invasive pests. The SOM is a well-known dimension reduction and visualization method, especially useful for high-dimensional data that more conventional clustering methods may not analyse suitably. Like all clustering algorithms, the SOM can give details of clusters that identify regions with similar pest assemblages, and thus possible donor and recipient regions. More importantly, however, the SOM connection weights that result from the analysis can be used to rank the strength of association of each species within each regional assemblage. Species with high weights that are not already established in the target region are identified as high risk. However, the SOM analysis is only the first step in a process to assess risk, to be used alongside or incorporated within other measures. Here we illustrate the application of SOM analyses in a range of contexts in invasive species risk assessment, and discuss other clustering methods such as k
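The SOM training loop is simple at its core: find the best-matching unit for each sample and pull that unit, and its map neighbours, toward the sample. A minimal 1-D map sketch (the learning-rate and neighbourhood schedules below are illustrative placeholders, not those of any cited study):

```python
import math
import random

def train_som(data, n_units, epochs=50, lr0=0.5, sigma0=1.0, seed=0):
    # data: list of feature vectors with components in [0, 1]
    rng = random.Random(seed)
    dim = len(data[0])
    weights = [[rng.random() for _ in range(dim)] for _ in range(n_units)]
    for t in range(epochs):
        frac = t / epochs
        lr = lr0 * (1 - frac)                 # decaying learning rate
        sigma = max(sigma0 * (1 - frac), 0.5) # shrinking neighbourhood width
        for x in data:
            # best matching unit: closest weight vector in squared distance
            bmu = min(range(n_units),
                      key=lambda u: sum((w - xi) ** 2
                                        for w, xi in zip(weights[u], x)))
            for u in range(n_units):
                # Gaussian neighbourhood on the 1-D map topology
                h = math.exp(-((u - bmu) ** 2) / (2 * sigma ** 2))
                weights[u] = [w + lr * h * (xi - w)
                              for w, xi in zip(weights[u], x)]
    return weights
```

After training, the magnitudes of the connection weights are exactly the quantities the abstract describes using to rank species' association strength within each regional assemblage.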
Directory of Open Access Journals (Sweden)
Reilly John J
2005-06-01
Full Text Available Abstract Background Advances in miniature sensor technology have led to the development of wearable systems that allow one to monitor motor activities in the field. A variety of classifiers have been proposed in the past, but little has been done toward developing systematic approaches to assess the feasibility of discriminating the motor tasks of interest and to guide the choice of the classifier architecture. Methods A technique is introduced to address this problem according to a hierarchical framework, and its use is demonstrated for the application of detecting motor activities in patients with chronic obstructive pulmonary disease (COPD) undergoing pulmonary rehabilitation. Accelerometers were used to collect data for 10 different classes of activity. Features were extracted to capture essential properties of the data set and reduce the dimensionality of the problem at hand. Cluster measures were utilized to find natural groupings in the data set and then construct a hierarchy of the relationships between clusters to guide the process of merging clusters that are too similar to distinguish reliably. This provides a means to assess whether the benefits of merging for classifier performance outweigh the loss of resolution incurred through merging. Results Analysis of the COPD data set demonstrated that motor tasks related to ambulation can be reliably discriminated from tasks performed in a seated position with the legs in motion or stationary using two features derived from one accelerometer. Classifying motor tasks within the category of activities related to ambulation requires more advanced techniques. While in certain cases all the tasks could be accurately classified, in others merging clusters associated with different motor tasks was necessary. When merging clusters, it was found that the proposed method could lead to more than 12% improvement in classifier accuracy while retaining resolution of 4 tasks. Conclusion Hierarchical
Sherrill, Delsey M; Moy, Marilyn L; Reilly, John J; Bonato, Paolo
2005-01-01
Background Advances in miniature sensor technology have led to the development of wearable systems that allow one to monitor motor activities in the field. A variety of classifiers have been proposed in the past, but little has been done toward developing systematic approaches to assess the feasibility of discriminating the motor tasks of interest and to guide the choice of the classifier architecture. Methods A technique is introduced to address this problem according to a hierarchical framework and its use is demonstrated for the application of detecting motor activities in patients with chronic obstructive pulmonary disease (COPD) undergoing pulmonary rehabilitation. Accelerometers were used to collect data for 10 different classes of activity. Features were extracted to capture essential properties of the data set and reduce the dimensionality of the problem at hand. Cluster measures were utilized to find natural groupings in the data set and then construct a hierarchy of the relationships between clusters to guide the process of merging clusters that are too similar to distinguish reliably. It provides a means to assess whether the benefits of merging for performance of a classifier outweigh the loss of resolution incurred through merging. Results Analysis of the COPD data set demonstrated that motor tasks related to ambulation can be reliably discriminated from tasks performed in a seated position with the legs in motion or stationary using two features derived from one accelerometer. Classifying motor tasks within the category of activities related to ambulation requires more advanced techniques. While in certain cases all the tasks could be accurately classified, in others merging clusters associated with different motor tasks was necessary. When merging clusters, it was found that the proposed method could lead to more than 12% improvement in classifier accuracy while retaining resolution of 4 tasks. Conclusion Hierarchical clustering methods are relevant
Odong, T.L.; Heerwaarden, van J.; Jansen, J.; Hintum, van T.J.L.; Eeuwijk, van F.A.
2011-01-01
Despite the availability of newer approaches, traditional hierarchical clustering remains very popular in genetic diversity studies in plants. However, little is known about its suitability for molecular marker data. We studied the performance of traditional hierarchical clustering techniques using
Galaxy formation through hierarchical clustering
White, Simon D. M.; Frenk, Carlos S.
1991-01-01
Analytic methods for studying the formation of galaxies by gas condensation within massive dark halos are presented. The present scheme applies to cosmogonies where structure grows through hierarchical clustering of a mixture of gas and dissipationless dark matter. The simplest models consistent with the current understanding of N-body work on dissipationless clustering, and that of numerical and analytic work on gas evolution and cooling are adopted. Standard models for the evolution of the stellar population are also employed, and new models for the way star formation heats and enriches the surrounding gas are constructed. Detailed results are presented for a cold dark matter universe with Omega = 1 and H(0) = 50 km/s/Mpc, but the present methods are applicable to other models. The present luminosity functions contain significantly more faint galaxies than are observed.
Hierarchical Formation of Galactic Clusters
Elmegreen, B G
2006-01-01
Young stellar groupings and clusters have hierarchical patterns ranging from flocculent spiral arms and star complexes on the largest scales to OB associations, OB subgroups, small loose groups, clusters and cluster subclumps on the smallest scales. There is no obvious transition in morphology at the cluster boundary, suggesting that clusters are only the inner parts of the hierarchy, where stars have had enough time to mix. The power-law cluster mass function follows from this hierarchical structure: n(M_cl) ∝ M_cl^(−b) for b ≈ 2. This value of b is independently required by the observation that the summed IMFs from many clusters in a galaxy approximately equal the IMF of each cluster.
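The b ≈ 2 exponent can be illustrated with a toy self-similar hierarchy in which each level holds the same total mass (an assumption made here for illustration, not a claim from the paper):

```python
def hierarchy_levels(m0, n_levels):
    # Level l contains 2**l clusters, each of mass m0 / 2**l, so every
    # level carries the same total mass m0. Equal mass per logarithmic
    # mass bin is equivalent to n(M) ∝ M**-2, i.e. b ≈ 2.
    return [(m0 / 2 ** l, 2 ** l) for l in range(n_levels)]
```

As mass halves from one level to the next, the number of clusters doubles, so the count per unit mass scales as M^(−2).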
Intuitionistic fuzzy hierarchical clustering algorithms
Institute of Scientific and Technical Information of China (English)
Xu Zeshui
2009-01-01
Intuitionistic fuzzy set (IFS) is a set of 2-tuple arguments, each of which is characterized by a membership degree and a non-membership degree. The generalized form of IFS is the interval-valued intuitionistic fuzzy set (IVIFS), whose components are intervals rather than exact numbers. IFSs and IVIFSs have been found to be very useful for describing vagueness and uncertainty. However, it seems that little attention has been focused on the clustering analysis of IFSs and IVIFSs. An intuitionistic fuzzy hierarchical algorithm is introduced for clustering IFSs, which is based on the traditional hierarchical clustering procedure, the intuitionistic fuzzy aggregation operator, and the basic distance measures between IFSs: the Hamming distance, the normalized Hamming distance, the weighted Hamming distance, the Euclidean distance, the normalized Euclidean distance, and the weighted Euclidean distance. Subsequently, the algorithm is extended for clustering IVIFSs. Finally, the algorithm and its extended form are applied to the classification of building materials and enterprises, respectively.
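Two of the basic distance measures listed above have compact standard forms over the (membership, non-membership) pairs. A sketch of the common textbook definitions (the paper's exact weighted variants may differ):

```python
def ifs_hamming(a, b):
    # normalized Hamming distance between IFSs, each given as a list of
    # (membership, non-membership) pairs with components in [0, 1]
    return sum(abs(ma - mb) + abs(na - nb)
               for (ma, na), (mb, nb) in zip(a, b)) / (2 * len(a))

def ifs_euclidean(a, b):
    # normalized Euclidean distance over the same representation
    return (sum((ma - mb) ** 2 + (na - nb) ** 2
                for (ma, na), (mb, nb) in zip(a, b)) / (2 * len(a))) ** 0.5
```

Either distance can drive the hierarchical merging step, with the intuitionistic fuzzy aggregation operator producing the merged cluster's representative IFS.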
Managing Clustered Data Using Hierarchical Linear Modeling
Warne, Russell T.; Li, Yan; McKyer, E. Lisako J.; Condie, Rachel; Diep, Cassandra S.; Murano, Peter S.
2012-01-01
Researchers in nutrition research often use cluster or multistage sampling to gather participants for their studies. These sampling methods often produce violations of the assumption of data independence that most traditional statistics share. Hierarchical linear modeling is a statistical method that can overcome violations of the independence…
Convex Clustering: An Attractive Alternative to Hierarchical Clustering
Chen, Gary K.; Chi, Eric C.; Ranola, John Michael O.; Lange, Kenneth
2015-01-01
The primary goal in cluster analysis is to discover natural groupings of objects. The field of cluster analysis is crowded with diverse methods that make special assumptions about data and address different scientific aims. Despite its shortcomings in accuracy, hierarchical clustering is the dominant clustering method in bioinformatics. Biologists find the trees constructed by hierarchical clustering visually appealing and in tune with their evolutionary perspective. Hierarchical clustering operates on multiple scales simultaneously. This is essential, for instance, in transcriptome data, where one may be interested in making qualitative inferences about how lower-order relationships like gene modules lead to higher-order relationships like pathways or biological processes. The recently developed method of convex clustering preserves the visual appeal of hierarchical clustering while ameliorating its propensity to make false inferences in the presence of outliers and noise. The solution paths generated by convex clustering reveal relationships between clusters that are hidden by static methods such as k-means clustering. The current paper derives and tests a novel proximal distance algorithm for minimizing the objective function of convex clustering. The algorithm separates parameters, accommodates missing data, and supports prior information on relationships. Our program CONVEXCLUSTER incorporating the algorithm is implemented on ATI and nVidia graphics processing units (GPUs) for maximal speed. Several biological examples illustrate the strengths of convex clustering and the ability of the proximal distance algorithm to handle high-dimensional problems. CONVEXCLUSTER can be freely downloaded from the UCLA Human Genetics web site at http://www.genetics.ucla.edu/software/ PMID:25965340
Hierarchical Clustering and Active Galaxies
Hatziminaoglou, E; Manrique, A
2000-01-01
The growth of Super Massive Black Holes and the parallel development of activity in galactic nuclei are implemented in an analytic code of hierarchical clustering. The evolution of the luminosity function of quasars and AGN will be computed with special attention paid to the connection between quasars and Seyfert galaxies. One of the major interests of the model is the parallel study of quasar formation and evolution and the History of Star Formation.
Robust Pseudo-Hierarchical Support Vector Clustering
DEFF Research Database (Denmark)
Hansen, Michael Sass; Sjöstrand, Karl; Olafsdóttir, Hildur
2007-01-01
Support vector clustering (SVC) has proven an efficient algorithm for clustering of noisy and high-dimensional data sets, with applications within many fields of research. An inherent problem, however, has been setting the parameters of the SVC algorithm. Using the recent emergence of a method for calculating the entire regularization path of the support vector domain description, we propose a fast method for robust pseudo-hierarchical support vector clustering (HSVC). The method is demonstrated to work well on generated data, as well as for detecting ischemic segments from multidimensional myocardial…
Odong, T L; van Heerwaarden, J; Jansen, J; van Hintum, T J L; van Eeuwijk, F A
2011-07-01
Despite the availability of newer approaches, traditional hierarchical clustering remains very popular in genetic diversity studies in plants. However, little is known about its suitability for molecular marker data. We studied the performance of traditional hierarchical clustering techniques using real and simulated molecular marker data. Our study also compared the performance of traditional hierarchical clustering with model-based clustering (STRUCTURE). We showed that the cophenetic correlation coefficient is directly related to subgroup differentiation and can thus be used as an indicator of the presence of genetically distinct subgroups in germplasm collections. Whereas UPGMA performed well in preserving distances between accessions, Ward excelled in recovering groups. Our results also showed a close similarity between clusters obtained by Ward and by STRUCTURE. Traditional cluster analysis can provide an easy and effective way of determining structure in germplasm collections using molecular marker data, and, the output can be used for sampling core collections or for association studies.
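The cophenetic correlation coefficient used above measures how faithfully a dendrogram preserves the original pairwise distances: it is the Pearson correlation between each pair's original distance and its cophenetic distance (the merge height at which the pair first joins the same cluster). A toy 1-D single-linkage sketch (illustrative only, not the study's code):

```python
def cophenetic_single_linkage(points):
    # returns {(i, j): merge height at which points i and j first co-cluster}
    n = len(points)
    dist = lambda i, j: abs(points[i] - points[j])
    clusters = {i: [i] for i in range(n)}
    coph = {}
    while len(clusters) > 1:
        keys = list(clusters)
        pairs = [(p, q) for ix, p in enumerate(keys) for q in keys[ix + 1:]]
        link = lambda pq: min(dist(i, j)
                              for i in clusters[pq[0]] for j in clusters[pq[1]])
        a, b = min(pairs, key=link)   # closest pair under single linkage
        h = link((a, b))
        for i in clusters[a]:
            for j in clusters[b]:
                coph[(min(i, j), max(i, j))] = h
        clusters[a] += clusters[b]
        del clusters[b]
    return coph

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def cophenetic_correlation(points):
    coph = cophenetic_single_linkage(points)
    pairs = sorted(coph)
    orig = [abs(points[i] - points[j]) for i, j in pairs]
    return pearson(orig, [coph[p] for p in pairs])
```

A value near 1 indicates the tree distorts the original distances little; the study's point is that this same quantity also tracks subgroup differentiation in germplasm collections.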
A New Metrics for Hierarchical Clustering
Institute of Scientific and Technical Information of China (English)
YANGGuangwen; SHIShuming; WANGDingxing
2003-01-01
Hierarchical clustering is a popular method of performing unsupervised learning. Some metric must be used to determine the similarity between pairs of clusters in hierarchical clustering. Traditional similarity metrics either can deal only with simple shapes (i.e. spherical shapes) or are very sensitive to outliers (the chaining effect). The main contribution of this paper is to propose potential-based similarity metrics (APES and AMAPES) between clusters in hierarchical clustering, inspired by the concepts of the electric potential and the gravitational potential in electromagnetics and astronomy. The main features of these metrics are: first, they have strong anti-jamming capability; second, they are capable of finding clusters of different shapes such as spherical, spiral, chain, circle, sigmoid, U-shaped or other complex irregular shapes; third, existing algorithms and research results for classical metrics can be adopted to deal with these new potential-based metrics with no or little modification. Experiments showed that the new metrics are superior to traditional ones. Different potential functions are compared, and the sensitivity to parameters is also analyzed in this paper.
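The abstract does not give the exact APES/AMAPES definitions, but the underlying idea can be sketched with a generic, hypothetical potential-style similarity in which every pair of points contributes like a 1/r potential, so close clusters interact strongly while distant points contribute little (this is an illustration of the concept, not the paper's formula):

```python
def potential_similarity(c1, c2, eps=1e-9):
    # Hypothetical potential-style similarity between clusters of points
    # (tuples of coordinates). Each cross-cluster pair contributes 1/r,
    # averaged over all pairs; eps avoids division by zero.
    total = 0.0
    for x in c1:
        for y in c2:
            r = sum((a - b) ** 2 for a, b in zip(x, y)) ** 0.5
            total += 1.0 / (r + eps)
    return total / (len(c1) * len(c2))
```

Because the contribution decays with distance, a single far outlier barely changes the similarity, which is the intuition behind the anti-jamming and anti-chaining claims.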
A Hierarchical Clustering Methodology for the Estimation of Toxicity
A Quantitative Structure Activity Relationship (QSAR) methodology based on hierarchical clustering was developed to predict toxicological endpoints. This methodology utilizes Ward's method to divide a training set into a series of structurally similar clusters. The structural sim...
Applied Bayesian Hierarchical Methods
Congdon, Peter D
2010-01-01
Bayesian methods facilitate the analysis of complex models and data structures. Emphasizing data applications, alternative modeling specifications, and computer implementation, this book provides a practical overview of methods for Bayesian analysis of hierarchical models.
Wang, Yanfei; Wu, Rong; Cho, Kathleen R; Shedden, Kerby A; Barder, Timothy J; Lubman, David M
2006-01-01
A two-dimensional liquid mapping method was used to map the protein expression of eight ovarian serous carcinoma cell lines and three immortalized ovarian surface epithelial cell lines. Maps were produced using pI as the separation parameter in the first dimension and hydrophobicity based upon reversed-phase HPLC separation in the second dimension. The method can be reproducibly used to produce protein expression maps over a pH range from 4.0 to 8.5. A dynamic programming method was used to correct for minor shifts in peaks during the HPLC gradient between sample runs. The resulting corrected maps can then be compared using hierarchical clustering to produce dendrograms indicating the relationship between different cell lines. It was found that several of the ovarian surface epithelial cell lines clustered together, whereas specific groups of serous carcinoma cell lines clustered with each other. Although there is limited information on the current biology of these cell lines, it was shown that the protein expression of certain cell lines is closely related to each other. Other cell lines, including one ovarian clear cell carcinoma cell line, two endometrioid carcinoma cell lines, and three breast epithelial cell lines, were also mapped for comparison to show that their protein profiles cluster differently than the serous samples and to study how they cluster relative to each other. In addition, comparisons can be made between proteins differentially expressed between cell lines that may serve as markers of ovarian serous carcinomas. The automation of the method allows reproducible comparison of many samples, and the use of differential analysis limits the number of proteins that might require further analysis by mass spectrometry techniques.
Hierarchical Control for Multiple DC Microgrids Clusters
DEFF Research Database (Denmark)
Shafiee, Qobad; Dragicevic, Tomislav; Vasquez, Juan Carlos;
2014-01-01
This paper presents a distributed hierarchical control framework to ensure reliable operation of dc Microgrid (MG) clusters. In this hierarchy, primary control is used to regulate the common bus voltage inside each MG locally. An adaptive droop method is proposed for this level which determines... Another distributed policy is then employed to regulate the power flow among the MGs according to their local SOCs. The proposed distributed controllers on each MG communicate with only the neighbor MGs through a communication infrastructure. Finally, the small signal model is expanded for dc MG clusters...
Magnetic susceptibilities of cluster-hierarchical models
McKay, Susan R.; Berker, A. Nihat
1984-02-01
The exact magnetic susceptibilities of hierarchical models are calculated near and away from criticality, in both the ordered and disordered phases. The mechanism and phenomenology are discussed for models with susceptibilities that are physically sensible, e.g., nondivergent away from criticality. Such models are found based upon the Niemeijer-van Leeuwen cluster renormalization. A recursion-matrix method is presented for the renormalization-group evaluation of response functions. Diagonalization of this matrix at fixed points provides simple criteria for well-behaved densities and response functions.
Tashobya, Christine K; Dubourg, Dominique; Ssengooba, Freddie; Speybroeck, Niko; Macq, Jean; Criel, Bart
2016-03-01
In 2003, the Uganda Ministry of Health introduced the district league table for district health system performance assessment. The league table presents district performance against a number of input, process and output indicators and a composite index to rank districts. This study explores the use of hierarchical cluster analysis for analysing and presenting district health systems performance data and compares this approach with the use of the league table in Uganda. Ministry of Health and district plans and reports, and published documents were used to provide information on the development and utilization of the Uganda district league table. Quantitative data were accessed from the Ministry of Health databases. Statistical analysis was carried out in SPSS version 20, with hierarchical cluster analysis based on Ward's method. The hierarchical cluster analysis produced seven clusters for each year from 2003 to 2010, ranging from clusters of good performers through moderate to poor performers. The characteristics and membership of clusters varied from year to year and were determined by the identity and magnitude of performance of the individual variables. Criticisms of the league table include: perceived unfairness, as it did not take into consideration district peculiarities; and being over-summarized and not adequately informative. Clustering organizes the many data points into clusters of similar entities according to an agreed set of indicators and can provide a starting point for identifying factors behind the observed performance of districts. Although league table ranking emphasizes summation and external control, clustering has the potential to encourage a formative, learning approach. More research is required to shed light on the factors behind the observed performance of the different clusters. Other countries, especially low-income countries that share many similarities with Uganda, can learn from these experiences. © The Author 2015
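The Ward-based procedure described in this record can be sketched with synthetic data. The seven-cluster cut comes from the abstract; the district count (112) and indicator values below are placeholder assumptions, not the study's data:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
# Hypothetical data: 112 districts x 7 standardized performance indicators
X = rng.normal(size=(112, 7))

# Ward's method, as used in the study
Z = linkage(X, method="ward")

# Cut the dendrogram into seven clusters, one per performance band
labels = fcluster(Z, t=7, criterion="maxclust")
print(sorted(set(labels)))  # -> [1, 2, 3, 4, 5, 6, 7]
```

Repeating the cut for each year's indicator matrix would reproduce the year-by-year cluster memberships the study compares.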
Hierarchical Clustering Given Confidence Intervals of Metric Distances
Huang, Weiyu
2016-01-01
This paper considers metric spaces where the distances between pairs of nodes are represented by distance intervals. The goal is to study methods for the determination of hierarchical clusters, i.e., a family of nested partitions indexed by a resolution parameter, induced from the given distance intervals of the metric spaces. Our construction of hierarchical clustering methods is based on defining admissible methods to be those that abide by the axioms of value - nodes in a metric space with two nodes are clustered together at the convex combination of the distance bounds between them - and transformation - when both distance bounds are reduced, the output may become more clustered but not less. Two admissible methods are constructed and are shown to provide universal upper and lower bounds in the space of admissible methods. Practical implications are explored by clustering moving points via snapshots and by clustering networks representing brain structural connectivity using the lower and upper bounds...
Hierarchical Approach in Clustering to Euclidean Traveling Salesman Problem
Fajar, Abdulah; Herman, Nanna Suryana; Abu, Nur Azman; Shahib, Sahrin
There has been growing interest in studying combinatorial optimization problems via clustering strategies, with special emphasis on the traveling salesman problem (TSP). The TSP naturally arises as a subproblem in many transportation, manufacturing and logistics applications, and it has attracted much attention from mathematicians and computer scientists. A clustering approach decomposes the TSP graph into subgraphs that form clusters, reducing the problem to smaller subproblems. The impact of a hierarchical approach is investigated in order to produce a better clustering strategy suited to the Euclidean TSP. The clustering strategy for the Euclidean TSP consists of two main steps: clustering and tour construction. The significance of this research is that the clustering-based solution has an error of less than 10% compared to the best known solutions (TSPLIB), and that a hierarchical clustering algorithm is improved to fit such a Euclidean TSP solution method.
Update Legal Documents Using Hierarchical Ranking Models and Word Clustering
Pham, Minh Quang Nhat; Nguyen, Minh Le; Shimazu, Akira
2010-01-01
Our research addresses the task of updating legal documents when new information emerges. In this paper, we employ a hierarchical ranking model for the task of updating legal documents. Word clustering features are incorporated into the ranking models to exploit semantic relations between words. Experimental results on legal data built from the United States Code show that the hierarchical ranking model with word clustering outperforms baseline methods using the Vector Space Model, and word cluster-based ...
Assembling hierarchical cluster solids with atomic precision.
Turkiewicz, Ari; Paley, Daniel W; Besara, Tiglet; Elbaz, Giselle; Pinkard, Andrew; Siegrist, Theo; Roy, Xavier
2014-11-12
Hierarchical solids created from the binary assembly of cobalt chalcogenide and iron oxide molecular clusters are reported. Six different molecular clusters based on the octahedral Co6E8 (E = Se or Te) and the expanded cubane Fe8O4 units are used as superatomic building blocks to construct these crystals. The formation of the solid is driven by the transfer of charge between complementary electron-donating and electron-accepting clusters in solution that crystallize as binary ionic compounds. The hierarchical structures are investigated by single-crystal X-ray diffraction, providing atomic and superatomic resolution. We report two different superstructures: a superatomic relative of the CsCl lattice type and an unusual packing arrangement based on the double-hexagonal close-packed lattice. Within these superstructures, we demonstrate various compositions and orientations of the clusters.
Hesitant fuzzy agglomerative hierarchical clustering algorithms
Zhang, Xiaolu; Xu, Zeshui
2015-02-01
Recently, hesitant fuzzy sets (HFSs) have been studied by many researchers as a powerful tool to describe and deal with uncertain data, but relatively few studies focus on the clustering analysis of HFSs. In this paper, we propose a novel hesitant fuzzy agglomerative hierarchical clustering algorithm for HFSs. The algorithm considers each of the given HFSs as a unique cluster in the first stage, and then compares each pair of the HFSs by utilising the weighted Hamming distance or the weighted Euclidean distance. The two clusters with the smallest distance are joined. The procedure is repeated until the desired number of clusters is achieved. Moreover, we extend the algorithm to cluster interval-valued hesitant fuzzy sets, and finally illustrate the effectiveness of our clustering algorithms by experimental results.
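A minimal sketch of the agglomerative loop just described, under simplifying assumptions: the toy HFSs assign equal-length sorted membership tuples to three attributes, and the attribute weights are invented (real HFSs may need length-extension rules before the weighted Hamming distance applies):

```python
import itertools

# Toy hesitant fuzzy sets: for each of three attributes, a sorted tuple
# of possible membership degrees (equal lengths assumed for simplicity)
hfss = {
    "A1": [(0.2, 0.4), (0.5, 0.6), (0.1, 0.3)],
    "A2": [(0.3, 0.5), (0.4, 0.7), (0.2, 0.3)],
    "A3": [(0.8, 0.9), (0.7, 0.8), (0.6, 0.9)],
    "A4": [(0.7, 0.9), (0.8, 0.8), (0.7, 0.8)],
}
weights = [0.4, 0.3, 0.3]  # assumed attribute weights

def hamming(a, b):
    """Weighted Hamming distance between two HFSs."""
    return sum(
        w * sum(abs(x - y) for x, y in zip(ha, hb)) / len(ha)
        for w, ha, hb in zip(weights, a, b)
    )

def cluster(data, k):
    """Start with singleton clusters; repeatedly join the closest pair
    (average linkage over member HFSs) until k clusters remain."""
    clusters = [[name] for name in data]
    while len(clusters) > k:
        i, j = min(
            itertools.combinations(range(len(clusters)), 2),
            key=lambda p: sum(
                hamming(data[u], data[v])
                for u in clusters[p[0]] for v in clusters[p[1]]
            ) / (len(clusters[p[0]]) * len(clusters[p[1]])),
        )
        clusters[i] += clusters.pop(j)
    return clusters

print(cluster(hfss, 2))  # A1/A2 and A3/A4 end up together
```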
Hierarchical modeling of cluster size in wildlife surveys
Royle, J. Andrew
2008-01-01
Clusters or groups of individuals are the fundamental unit of observation in many wildlife sampling problems, including aerial surveys of waterfowl, marine mammals, and ungulates. Explicit accounting of cluster size in models for estimating abundance is necessary because detection of individuals within clusters is not independent and detectability of clusters is likely to increase with cluster size. This induces a cluster size bias in which the average cluster size in the sample is larger than in the population at large. Thus, failure to account for the relationship between detectability and cluster size will tend to yield a positive bias in estimates of abundance or density. I describe a hierarchical modeling framework for accounting for cluster-size bias in animal sampling. The hierarchical model consists of models for the observation process conditional on the cluster size distribution and the cluster size distribution conditional on the total number of clusters. Optionally, a spatial model can be specified that describes variation in the total number of clusters per sample unit. Parameter estimation, model selection, and criticism may be carried out using conventional likelihood-based methods. An extension of the model is described for the situation where measurable covariates at the level of the sample unit are available. Several candidate models within the proposed class are evaluated for aerial survey data on mallard ducks (Anas platyrhynchos).
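The cluster-size bias itself is easy to reproduce in a small simulation. The size distribution and the detection model below are arbitrary illustrations, not those of the paper: detection probability grows with cluster size, so detected clusters are larger on average than the population's clusters.

```python
import random

random.seed(1)
# Population of 20000 clusters with sizes from an arbitrary distribution
sizes = [random.choice([1, 1, 1, 2, 2, 3, 4, 6]) for _ in range(20000)]

def detected(s):
    # Assumed detection model: probability increases with cluster size
    return random.random() < min(1.0, 0.2 * s)

sample = [s for s in sizes if detected(s)]
pop_mean = sum(sizes) / len(sizes)
obs_mean = sum(sample) / len(sample)
print(round(pop_mean, 2), round(obs_mean, 2))
# Size-biased sampling inflates the observed mean cluster size
assert obs_mean > pop_mean
```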
Hierarchical Overlapping Clustering of Network Data Using Cut Metrics
Gama, Fernando; Ribeiro, Alejandro
2016-01-01
A novel method to obtain hierarchical and overlapping clusters from network data - i.e., a set of nodes endowed with pairwise dissimilarities - is presented. The introduced method is hierarchical in the sense that it outputs a nested collection of groupings of the node set depending on the resolution or degree of similarity desired, and it is overlapping since it allows nodes to belong to more than one group. Our construction is rooted in the facts that a hierarchical (non-overlapping) clustering of a network can be equivalently represented by a finite ultrametric space and that a convex combination of ultrametrics results in a cut metric. By applying a hierarchical (non-overlapping) clustering method to multiple dithered versions of a given network and then convexly combining the resulting ultrametrics, we obtain a cut metric associated to the network of interest. We then show how to extract a hierarchical overlapping clustering structure from the aforementioned cut metric. Furthermore, the so-called overlappi...
MultiDendrograms: Variable-Group Agglomerative Hierarchical Clustering
Gomez, Sergio; Montiel, Justo; Torres, David
2012-01-01
MultiDendrograms is a Java-written application that computes agglomerative hierarchical clusterings of data. Starting from a distance (or weights) matrix, MultiDendrograms is able to calculate its dendrograms using the most common agglomerative hierarchical clustering methods. The application implements a variable-group algorithm that solves the non-uniqueness problem found in the standard pair-group algorithm. This problem arises when two or more minimum distances between different clusters are equal during the agglomerative process, because then different output clusterings are possible depending on the criterion used to break ties between distances. MultiDendrograms solves this problem by implementing a variable-group algorithm that groups more than two clusters at the same time when ties occur.
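The non-uniqueness problem shows up already with three items, sketched below using a naive pair-group complete linkage with an explicit tie-breaking choice (the distances are invented for illustration; the variable-group remedy would instead merge all tied items at once):

```python
from itertools import combinations

# Dissimilarities with a tie: d(A,B) = d(B,C) = 1
dist = {("A", "B"): 1, ("B", "C"): 1, ("A", "C"): 3}

def d(x, y):
    return dist.get((x, y)) or dist.get((y, x))

def complete_linkage(items, tie):
    """Naive pair-group complete linkage; `tie` picks which of the
    equally minimal pairs is merged ('first' or 'last')."""
    clusters = [frozenset([i]) for i in items]
    merges = []
    while len(clusters) > 1:
        pairs = list(combinations(clusters, 2))
        dists = [max(d(x, y) for x in a for y in b) for a, b in pairs]
        best = min(dists)
        if tie == "first":
            idx = dists.index(best)
        else:
            idx = len(dists) - 1 - dists[::-1].index(best)
        a, b = pairs[idx]
        clusters = [c for c in clusters if c not in (a, b)] + [a | b]
        merges.append((sorted(a | b), best))
    return merges

first = complete_linkage(["A", "B", "C"], "first")
last = complete_linkage(["A", "B", "C"], "last")
print(first[0], last[0])  # the first merge differs between tie rules
assert first[0] != last[0]
```

Cutting the two dendrograms at any height between 1 and 3 yields different partitions ({A,B},{C} versus {A},{B,C}), which is exactly the ambiguity the variable-group algorithm removes.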
Eriksson, Brian; Singh, Aarti; Nowak, Robert
2011-01-01
Hierarchical clustering based on pairwise similarities is a common tool used in a broad range of scientific applications. However, in many problems it may be expensive to obtain or compute similarities between the items to be clustered. This paper investigates the hierarchical clustering of N items based on a small subset of pairwise similarities, significantly fewer than the complete set of N(N-1)/2 similarities. First, we show that if the intracluster similarities exceed intercluster similarities, then it is possible to correctly determine the hierarchical clustering from as few as 3N log N similarities. We demonstrate that achieving this order-of-magnitude savings in the number of pairwise similarities necessitates sequentially selecting which similarities to obtain in an adaptive fashion, rather than picking them at random. We then propose an active clustering method that is robust to a limited fraction of anomalous similarities, and show how even in the presence of these noisy similarity values we can resolve the hierar...
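To get a feel for the savings, compare the two counts for N = 500 (the base of the logarithm is unspecified in the abstract; the natural log is assumed here purely for illustration):

```python
import math

N = 500
complete = N * (N - 1) // 2            # all pairwise similarities
adaptive = round(3 * N * math.log(N))  # ~3N log N, the order claimed above
print(complete, adaptive)  # 124750 versus roughly 9300
assert adaptive < complete / 10
```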
A Novel Divisive Hierarchical Clustering Algorithm for Geospatial Analysis
Directory of Open Access Journals (Sweden)
Shaoning Li
2017-01-01
In the fields of geographic information systems (GIS) and remote sensing (RS), the clustering algorithm has been widely used for image segmentation, pattern recognition, and cartographic generalization. Although clustering analysis plays a key role in geospatial modelling, traditional clustering methods are limited in terms of computational complexity, noise resistance and robustness. Furthermore, traditional methods are more focused on the adjacent spatial context, which makes it hard for the clustering methods to be applied to multi-density discrete objects. In this paper, a new method, cell-dividing hierarchical clustering (CDHC), is proposed based on convex hull retraction. The main steps are as follows. First, a convex hull structure is constructed to describe the global spatial context of the geospatial objects. Then, the retracting structure of each borderline is established in sequence by setting the initial parameter. The objects are split into two clusters (i.e., “sub-clusters”) if the retracting structure intersects with the borderlines. Finally, clusters are repeatedly split and the initial parameter is updated until the termination condition is satisfied. The experimental results show that CDHC separates multi-density objects from noise sufficiently and also reduces complexity compared to the traditional agglomerative hierarchical clustering algorithm.
Hierarchical clustering techniques for image database organization and summarization
Vellaikal, Asha; Kuo, C.-C. Jay
1998-10-01
This paper investigates clustering techniques as a method of organizing image databases to support popular visual management functions such as searching, browsing and navigation. Different types of hierarchical agglomerative clustering techniques are studied as a method of organizing the feature space as well as summarizing image groups by the selection of a few appropriate representatives. Retrieval performance using both single and multiple level hierarchies is evaluated, and the algorithms show an interesting relationship between the top k correct retrievals and the number of comparisons required. Some arguments are given to support the use of such cluster-based techniques for managing distributed image databases.
Noor Rashidah Rashid
2012-01-01
Cluster Analysis is a multivariate method in statistics. Agglomerative Hierarchical Cluster Analysis is one of the approaches in Cluster Analysis. There are two linkage methods in Agglomerative Hierarchical Cluster Analysis considered here: Single Linkage and Complete Linkage. The purpose of this study is to compare Single Linkage and Complete Linkage in Agglomerative Hierarchical Cluster Analysis. The comparison of performance between these linkage methods was shown using the Kruskal-Wallis tes...
Constructing storyboards based on hierarchical clustering analysis
Hasebe, Satoshi; Sami, Mustafa M.; Muramatsu, Shogo; Kikuchi, Hisakazu
2005-07-01
There are growing needs for quick preview of video contents for the purpose of improving accessibility of video archives as well as reducing network traffics. In this paper, a storyboard that contains a user-specified number of keyframes is produced from a given video sequence. It is based on hierarchical cluster analysis of feature vectors that are derived from wavelet coefficients of video frames. Consistent use of extracted feature vectors is the key to avoid a repetition of computationally-intensive parsing of the same video sequence. Experimental results suggest that a significant reduction in computational time is gained by this strategy.
Technique for fast and efficient hierarchical clustering
Stork, Christopher
2013-10-08
A fast and efficient technique for hierarchical clustering of samples in a dataset includes compressing the dataset to reduce a number of variables within each of the samples of the dataset. A nearest neighbor matrix is generated to identify nearest neighbor pairs between the samples based on differences between the variables of the samples. The samples are arranged into a hierarchy that groups the samples based on the nearest neighbor matrix. The hierarchy is rendered to a display to graphically illustrate similarities or differences between the samples.
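A rough sketch of the three steps in the claim, under stated assumptions: "compressing" is taken to be a truncated PCA via SVD, and the nearest-neighbor-driven agglomeration is delegated to SciPy's single-linkage routine; the data are random placeholders, not any particular dataset:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage

rng = np.random.default_rng(3)
X = rng.normal(size=(40, 200))  # 40 samples, 200 variables

# Step 1: compress the dataset (here via truncated PCA/SVD)
# to reduce the number of variables per sample
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
Z10 = Xc @ Vt[:10].T  # keep the 10 leading components

# Steps 2-3: agglomerate the compressed samples into a hierarchy;
# single linkage is driven by nearest-neighbor distances
tree = linkage(Z10, method="single")
print(tree.shape)  # one row per merge: (n_samples - 1, 4)
```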
Hierarchical Aligned Cluster Analysis for Temporal Clustering of Human Motion.
Zhou, Feng; De la Torre, Fernando; Hodgins, Jessica K
2013-03-01
Temporal segmentation of human motion into plausible motion primitives is central to understanding and building computational models of human motion. Several issues contribute to the challenge of discovering motion primitives: the exponential nature of all possible movement combinations, the variability in the temporal scale of human actions, and the complexity of representing articulated motion. We pose the problem of learning motion primitives as one of temporal clustering, and derive an unsupervised hierarchical bottom-up framework called hierarchical aligned cluster analysis (HACA). HACA finds a partition of a given multidimensional time series into m disjoint segments such that each segment belongs to one of k clusters. HACA combines kernel k-means with the generalized dynamic time alignment kernel to cluster time series data. Moreover, it provides a natural framework to find a low-dimensional embedding for time series. HACA is efficiently optimized with a coordinate descent strategy and dynamic programming. Experimental results on motion capture and video data demonstrate the effectiveness of HACA for segmenting complex motions and as a visualization tool. We also compare the performance of HACA to state-of-the-art algorithms for temporal clustering on data of a honey bee dance. The HACA code is available online.
The reflection of hierarchical cluster analysis of co-occurrence matrices in SPSS
Zhou, Q.; Leng, F.; Leydesdorff, L.
2015-01-01
Purpose: To discuss the problems arising from hierarchical cluster analysis of co-occurrence matrices in SPSS, and the corresponding solutions. Design/methodology/approach: We design different methods of using the SPSS hierarchical clustering module for co-occurrence matrices in order to compare the
Global Considerations in Hierarchical Clustering Reveal Meaningful Patterns in Data
Varshavsky, Roy; Horn, David; Linial, Michal
2008-01-01
Background A hierarchy, characterized by tree-like relationships, is a natural method of organizing data in various domains. When considering an unsupervised machine learning routine, such as clustering, a bottom-up hierarchical (BU, agglomerative) algorithm is used as a default and is often the only method applied. Methodology/Principal Findings We show that hierarchical clustering approaches that involve global considerations, such as top-down (TD, divisive) or glocal (global-local) algorithms, are better suited to reveal meaningful patterns in the data. This is demonstrated by testing the correspondence between the results of several algorithms (TD, glocal and BU) and the correct annotations provided by experts. The correspondence was tested in multiple domains including gene expression experiments, stock trade records and functional protein families. The performance of each of the algorithms is evaluated by statistical criteria that are assigned to clusters (nodes of the hierarchy tree) based on expert-labeled data. Whereas TD algorithms perform better on global patterns, BU algorithms perform well and are advantageous when finer granularity of the data is sought. In addition, a novel TD algorithm that is based on the genuine density of the data points is presented and is shown to outperform other divisive and agglomerative methods. Application of the algorithm to more than 500 protein sequences belonging to ion channels illustrates the potential of the method for inferring overlooked functional annotations. ClustTree, a graphical Matlab toolbox for applying various hierarchical clustering algorithms and testing their quality, is made available. Conclusions Although currently rarely used, global approaches, in particular TD or glocal algorithms, should be considered in the exploratory process of clustering. In general, applying unsupervised clustering methods can leverage the quality of manually-created mapping of protein families. As demonstrated, it can also provide
Directory of Open Access Journals (Sweden)
Xianrui Liang
2013-01-01
Background: A method for chemical fingerprint analysis of Hibiscus mutabilis L. leaves was developed based on ultra performance liquid chromatography with photodiode array detection (UPLC-PAD) combined with similarity analysis (SA) and hierarchical clustering analysis (HCA). Materials and Methods: 10 batches of Hibiscus mutabilis L. leaf samples were collected from different regions of China. UPLC-PAD was employed to collect chemical fingerprints of the Hibiscus mutabilis L. leaves. Results: The relative standard deviations (RSDs) of the relative retention times (RRT) and relative peak areas (RPA) of 10 characteristic peaks (one of them identified as rutin) in precision, repeatability and stability tests were less than 3%, and the method of fingerprint analysis was validated to be suitable for Hibiscus mutabilis L. leaves. Conclusions: The chromatographic fingerprints showed abundant qualitative diversity of chemical constituents in the 10 batches of Hibiscus mutabilis L. leaf samples from different locations by similarity analysis, on the basis of calculating the correlation coefficients between each pair of fingerprints. Moreover, the HCA method clustered the samples into four classes, and the HCA dendrogram showed the close or distant relations among the 10 samples, which was consistent with the SA result to some extent.
Hierarchical Cluster Assembly in Globally Collapsing Clouds
Vazquez-Semadeni, Enrique; Colin, Pedro
2016-01-01
We discuss the mechanism of cluster formation in a numerical simulation of a molecular cloud (MC) undergoing global hierarchical collapse (GHC). The global nature of the collapse implies that the SFR increases over time. The hierarchical nature of the collapse consists of small-scale collapses within larger-scale ones. The large-scale collapses culminate a few Myr later than the small-scale ones and consist of filamentary flows that accrete onto massive central clumps. The small-scale collapses form clumps that are embedded in the filaments and falling onto the large-scale collapse centers. The stars formed in the early, small-scale collapses share the infall motion of their parent clumps. Thus, the filaments feed both gaseous and stellar material to the massive central clump. This leads to the presence of a few older stars in a region where new protostars are forming, and also to a self-similar structure, in which each unit is composed of smaller-scale sub-units that approach each other and may merge. Becaus...
Hierarchical clustering using correlation metric and spatial continuity constraint
Stork, Christopher L.; Brewer, Luke N.
2012-10-02
Large data sets are analyzed by hierarchical clustering using correlation as a similarity measure. This provides results that are superior to those obtained using a Euclidean distance similarity measure. A spatial continuity constraint may be applied in hierarchical clustering analysis of images.
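A sketch of the correlation-as-similarity idea: use 1 − Pearson correlation as the dissimilarity fed to an agglomerative routine, so samples with similar shapes cluster together regardless of scale. The two signal families and the average-linkage choice below are illustrative assumptions, not the patent's specifics:

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

rng = np.random.default_rng(2)
# Two families of signals: noisy copies of a base signal correlate
# highly within a family and weakly across families
base1, base2 = rng.normal(size=(2, 50))
X = np.vstack(
    [base1 + 0.1 * rng.normal(size=50) for _ in range(3)]
    + [base2 + 0.1 * rng.normal(size=50) for _ in range(3)]
)

# 1 - Pearson correlation as the dissimilarity (condensed form),
# then average-linkage hierarchical clustering cut into 2 clusters
corr = np.corrcoef(X)
dvec = 1 - corr[np.triu_indices_from(corr, k=1)]
labels = fcluster(linkage(dvec, method="average"), t=2, criterion="maxclust")
print(labels)  # rows 0-2 share one label, rows 3-5 the other
```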
Research on a Class-Profile-Based Hierarchical Clustering Method
Institute of Scientific and Technical Information of China (English)
孟海东; 唐旋
2011-01-01
Traditional clustering algorithms are often incapable of jointly considering the connectivity and similarity characteristics among classes. This paper first presents basic definitions of class boundary points and class profiles, together with methods for finding them; then, based on joint consideration of inter-class connectivity and similarity characteristics, it defines standards and methods for inter-class similarity measurement; finally, it proposes a class-profile-based hierarchical clustering algorithm. The algorithm can effectively process arbitrarily shaped clusters and distinguish isolated points from noise data. The feasibility and effectiveness of the algorithm are validated through clustering analysis of image data sets and the Iris standard data set.
Taamneh, Madhar; Taamneh, Salah; Alkheder, Sharaf
2017-09-01
Artificial neural networks (ANNs) have been widely used in predicting the severity of road traffic crashes. All available information about previously occurred accidents is typically used for building a single prediction model (i.e., classifier). Too little attention has been paid to the differences between these accidents, leading, in most cases, to less accurate predictors. Hierarchical clustering is a well-known clustering method that seeks to group data by creating a hierarchy of clusters. Using hierarchical clustering and ANNs, a clustering-based classification approach for predicting the injury severity of road traffic accidents was proposed. About 6000 road accidents that occurred over a six-year period from 2008 to 2013 in Abu Dhabi were used throughout this study. In order to reduce the amount of variation in the data, hierarchical clustering was applied to the data set to organize it into six different forms, each with a different number of clusters (from 1 to 6). Two ANN models were subsequently built for each cluster of accidents in each generated form. The first model was built and validated using all accidents (training set), whereas only 66% of the accidents were used to build the second model, and the remaining 34% were used to test it (percentage split). Finally, the weighted average accuracy was computed for each type of model in each form of data. The results show that when testing the models using the training set, clustering prior to classification achieves 11%-16% more accuracy than without clustering, while the percentage split achieves 2%-5% more accuracy. The results also suggest that partitioning the accidents into six clusters achieves the best accuracy if both types of models are taken into account.
Multiscale stochastic hierarchical image segmentation by spectral clustering
Institute of Scientific and Technical Information of China (English)
LI XiaoBin; TIAN Zheng
2007-01-01
This paper proposes a sampling-based hierarchical approach for addressing the computational demands of spectral clustering methods when applied to the problem of image segmentation. The authors first define the distance between a pixel and a cluster, and then derive a new theorem to estimate the number of samples needed for clustering. Finally, by introducing a scale parameter into the similarity function, a novel spectral clustering based image segmentation method has been developed. An important characteristic of the approach is that in the course of image segmentation one needs not only to tune the scale parameter to merge small clusters or split large ones, but also to take samples from the data set at different scales. The multiscale and stochastic nature makes it feasible to apply the method to very large grouping problems. In addition, it also allows the segmentation to be computed in time that is linear in the size of the image. The experimental results on various synthetic and real-world images show the effectiveness of the approach.
Fast, Linear Time Hierarchical Clustering using the Baire Metric
Contreras, Pedro
2011-01-01
The Baire metric induces an ultrametric on a dataset and is of linear computational complexity, in contrast with the standard quadratic-time agglomerative hierarchical clustering algorithm. In this work we evaluate this new approach to hierarchical clustering empirically. We compare hierarchical clustering based on the Baire metric with (i) agglomerative hierarchical clustering, in terms of algorithm properties; (ii) generalized ultrametrics, in terms of definition; and (iii) fast clustering through k-means partitioning, in terms of quality of results. For the latter, we carry out an in-depth astronomical study. We apply the Baire distance to spectrometric and photometric redshifts from the Sloan Digital Sky Survey using, in this work, about half a million astronomical objects. We want to know how well the (more costly to determine) spectrometric redshifts can predict the (more easily obtained) photometric redshifts, i.e. we seek to regress the spectrometric on the photometric redshifts, and we use clusterwi...
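The linear-time idea can be sketched on a few made-up redshift values: under a Baire-style metric two numbers are close when their decimal expansions share a long prefix, so the nested clusters fall out of simple prefix bucketing, one linear pass per level:

```python
from collections import defaultdict

# Hypothetical redshift values (not SDSS data)
redshifts = [0.1423, 0.1427, 0.1491, 0.2210, 0.2218, 0.3305]
digits = ["%.4f" % z for z in redshifts]

# One bucketing pass per prefix length yields the nested hierarchy;
# longer shared prefixes correspond to smaller Baire distances
hierarchy = {}
for k in (3, 4, 5):  # prefixes "0.1", "0.14", "0.142", ...
    buckets = defaultdict(list)
    for s in digits:
        buckets[s[:k]].append(s)
    hierarchy[k] = dict(buckets)

print(hierarchy[4])  # three clusters: "0.14", "0.22", "0.33"
```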
Hierarchical Clustering and the Concept of Space Distortion.
Hubert, Lawrence; Schultz, James
An empirical assessment of the space distortion properties of two prototypic hierarchical clustering procedures is given in terms of an occupancy model developed from combinatorics. Using one simple example, the single-link and complete-link clustering strategies now in common use in the behavioral sciences are empirically shown to be space…
The Hierarchical Distribution of Young Stellar Clusters in Nearby Galaxies
Grasha, Kathryn; Calzetti, Daniela
2017-01-01
We investigate the spatial distributions of young stellar clusters in six nearby galaxies to trace the large scale hierarchical star-forming structures. The six galaxies are drawn from the Legacy ExtraGalactic UV Survey (LEGUS). We quantify the strength of the clustering among stellar clusters as a function of spatial scale and age to establish the survival timescale of the substructures. We separate the clusters into different classes, compact (bound) clusters and associations (unbound), and compare the clustering among them. We find that younger star clusters are more strongly clustered over small spatial scales and that the clustering disappears rapidly for ages as young as a few tens of Myr, consistent with clusters slowly losing the fractal dimension inherited at birth from their natal molecular clouds.
The Hierarchical Clustering of Tax Burden in the EU27
Directory of Open Access Journals (Sweden)
Simkova Nikola
2015-09-01
The issue of taxation has become more important due to its significant share of government revenue. There are several ways of expressing the tax burden of countries. This paper describes the traditional approach, the share of tax revenue in GDP, which is applied to total taxation and to capital taxation as the part of tax systems affecting investment decisions. The implicit tax rate on capital created by Eurostat also offers a possible explanation of the tax burden on capital, so its components are analysed in detail. This study uses hierarchical clustering. The data on which the clustering is based comprise the countries of the EU27 for the period 1995-2012. The aim of this paper is to reveal clusters of countries in the EU27 with a similar tax burden or similar tax changes. The findings suggest that mainly newly acceding countries (2004 and 2007) are in the group of countries with a low tax burden, which tried to encourage investors by favourable tax rates. On the other hand, the higher-burden group mostly comprises countries from the original EU15. Some clusters may be explained by similar historical development, geographic and demographic characteristics.
Performance Analysis of Hierarchical Clustering Algorithm
Directory of Open Access Journals (Sweden)
K.Ranjini
2011-07-01
Full Text Available Clustering is the classification of objects into different groups, or more precisely, the partitioning of a data set into subsets (clusters), so that the data in each subset (ideally) share some common trait - often proximity according to some defined distance measure. Data clustering is a common technique for statistical data analysis, used in many fields, including machine learning, data mining, pattern recognition, image analysis and bioinformatics. This paper explains the implementation of agglomerative and divisive clustering algorithms applied to various types of data. Records of the victims of the 2004 tsunami in Thailand were taken as the test data. Visual programming is used for the implementation, and the running times of the algorithms with different linkages (agglomerative) on different types of data are taken for analysis.
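The agglomerative procedure described above can be sketched in a few lines. The following is an illustrative single-linkage implementation on 2-D points (a minimal sketch, not the paper's visual-programming code): starting from singleton clusters, the two closest clusters are repeatedly merged until `k` remain.

```python
def single_linkage(points, k):
    """Agglomerative clustering: start with singleton clusters and
    repeatedly merge the pair whose closest members are nearest,
    until only k clusters remain."""
    clusters = [[p] for p in points]

    def dist(a, b):
        # single-linkage distance: minimum over all cross-cluster pairs
        return min(((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5
                   for ax, ay in a for bx, by in b)

    while len(clusters) > k:
        # find the pair of clusters with the smallest linkage distance
        i, j = min(((i, j) for i in range(len(clusters))
                    for j in range(i + 1, len(clusters))),
                   key=lambda ij: dist(clusters[ij[0]], clusters[ij[1]]))
        clusters[i] += clusters.pop(j)   # merge j into i
    return clusters

groups = single_linkage([(0, 0), (0, 1), (10, 10), (10, 11)], k=2)
```

A divisive algorithm works in the opposite direction, starting from one all-inclusive cluster and recursively splitting it.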
Properties of hierarchically forming star clusters
Maschberger, Th; Bonnell, I A; Kroupa, P
2010-01-01
We undertake a systematic analysis of the early (< 0.5 Myr) evolution of clustering and the stellar initial mass function in turbulent fragmentation simulations. These large-scale simulations for the first time offer the opportunity for a statistical analysis of IMF variations and correlations between stellar properties and cluster richness. The typical evolutionary scenario involves star formation in small-n clusters which then progressively merge; the first stars to form are seeds of massive stars and achieve a head start in mass acquisition. These massive seeds end up in the cores of clusters, and a large fraction of new stars of lower mass is formed in the outer parts of the clusters. The resulting clusters are therefore mass segregated at an age of 0.5 Myr, although the signature of mass segregation is weakened during mergers. We find that the resulting IMF has a smaller exponent (alpha=1.8-2.2) than the Salpeter value (alpha=2.35). The IMFs in subclusters are truncated at masses only somewhat larger th...
Hierarchical Compressed Sensing for Cluster Based Wireless Sensor Networks
Directory of Open Access Journals (Sweden)
Vishal Krishna Singh
2016-02-01
Full Text Available Data transmission consumes a significant amount of energy in large-scale wireless sensor networks (WSNs). In such an environment, reducing the in-network communication and distributing the load evenly over the network can reduce the overall energy consumption and maximize the network lifetime significantly. In this work, the aforementioned problems of network lifetime and uneven energy consumption in large-scale wireless sensor networks are addressed. This work proposes a hierarchical compressed sensing (HCS) scheme to reduce the in-network communication during the data gathering process. Correlated sensor readings are collected via a hierarchical clustering scheme. A compressed sensing (CS) based data processing scheme is devised to transmit the data from the source to the sink. The proposed HCS is able to identify the optimal position for the application of CS to achieve a reduced and similar number of transmissions on all the nodes in the network. An activity map is generated to validate the reduced and uniformly distributed communication load of the WSN. Based on the number of transmissions per data gathering round, the bit-hop metric model is used to analyse the overall energy consumption. Simulation results validate the efficiency of the proposed method over existing CS based approaches.
Hierarchical clusters of phytoplankton variables in dammed water bodies
Silva, Eliana Costa e.; Lopes, Isabel Cristina; Correia, Aldina; Gonçalves, A. Manuela
2017-06-01
In this paper a dataset containing biological variables of the water column of several Portuguese reservoirs is analyzed. Hierarchical cluster analysis is used to obtain clusters of phytoplankton variables of the phylum Cyanophyta, with the objective of validating the classification of Portuguese reservoirs previously presented in [1], which divided them into three clusters: (1) Interior Tagus and Aguieira; (2) Douro; and (3) Other rivers. Three new clusters of Cyanophyta variables are now found. Kruskal-Wallis and Mann-Whitney tests are used to compare the newly obtained Cyanophyta clusters with the previous reservoir clusters, in order to validate the classification of the water quality of the reservoirs. The amount of Cyanophyta algae present in the reservoirs from the three clusters is significantly different, which validates the previous classification.
Directory of Open Access Journals (Sweden)
Odilia Yim
2015-02-01
Full Text Available Cluster analysis refers to a class of data reduction methods used for sorting cases, observations, or variables of a given dataset into homogeneous groups that differ from each other. The present paper focuses on hierarchical agglomerative cluster analysis, a statistical technique in which groups are sequentially created by systematically merging similar clusters together, as dictated by the distance and linkage measures chosen by the researcher. Specific distance and linkage measures are reviewed, including a discussion of how these choices can influence the clustering process, by comparing three common linkage measures (single linkage, complete linkage, average linkage). The tutorial guides researchers in performing a hierarchical cluster analysis using the SPSS statistical software. Through an example, we demonstrate how cluster analysis can be used to detect meaningful subgroups in a sample of bilinguals by examining various language variables.
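The three linkage measures compared in the tutorial differ only in how pairwise point distances are turned into a cluster-to-cluster distance. A minimal sketch (1-D points for brevity; illustrative, not the SPSS procedure itself):

```python
def linkage_distances(a, b):
    """Distance between two clusters of 1-D points under the three
    common linkage measures (single, complete, average)."""
    d = [abs(x - y) for x in a for y in b]      # all pairwise distances
    return {
        "single": min(d),            # nearest pair of points
        "complete": max(d),          # farthest pair of points
        "average": sum(d) / len(d),  # mean over all pairs (UPGMA)
    }

dists = linkage_distances([1.0, 2.0], [5.0, 7.0])
```

Because single linkage tracks the nearest pair, it tends to produce elongated "chained" clusters, whereas complete linkage favours compact ones; average linkage sits between the two.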
A Framework for Analyzing Software Quality using Hierarchical Clustering
Directory of Open Access Journals (Sweden)
Arashdeep Kaur
2011-02-01
Full Text Available Fault-proneness data available early in the software life cycle, from previous releases or similar projects, can aid in improving software quality estimations. Various techniques have been proposed in the literature for the prediction of faulty and non-faulty modules in a project, including statistical methods, machine learning methods, neural network techniques and clustering techniques. In this study, a hierarchical clustering algorithm is trained and tested as a predictive model with life cycle data collected from the NASA projects CM1, PC1 and JM1. These predictive models contain requirement metrics and static code metrics. We combined the requirement metric model with the static code metric model to obtain a fusion metric model, and then investigated which of the three prediction models is the best on the basis of fault detection. The basic hypothesis of software quality estimation is that automatic quality prediction models enable verification experts to concentrate their attention and resources on problem areas of the system under development. The proposed approach has been implemented in MATLAB 7.4. The results show that, when all the prediction techniques are evaluated, the best prediction model is the fusion metric model. This model is also compared with other quality models available in the literature and is found to be efficient for predicting faulty modules.
Directory of Open Access Journals (Sweden)
Yang Chunhe
2016-01-01
Full Text Available The hierarchical clustering method has been used for exploration of gene expression and proteomic profiles; however, little research into its application in the examination of expression of multiple cytokine/chemokine responses to stimuli has been reported. Thus, little progress has been made on how phytohemagglutinin (PHA) affects cytokine expression profiling on a large scale in the human hematological system. To investigate the characteristic expression pattern under PHA stimulation, Luminex, a multiplex bead-based suspension array, was performed. The data set collected from human peripheral blood mononuclear cells (PBMCs) was analyzed using the hierarchical clustering method. It was revealed that two specific chemokines (CCL3 and CCL4) underwent significantly greater quantitative changes during induction of expression than other tested cytokines/chemokines after PHA stimulation. This result indicates that hierarchical clustering is a useful tool for detecting fine patterns during exploration of biological data, and that it can play an important role in comparative studies.
A Framework for Hierarchical Clustering Based Indexing in Search Engines
Directory of Open Access Journals (Sweden)
Parul Gupta
2011-01-01
Full Text Available Granting efficient and fast access to the index is a key issue for the performance of Web Search Engines. In order to enhance memory utilization and favor fast query resolution, WSEs use Inverted File (IF) indexes that consist of an array of posting lists, where each posting list is associated with a term and contains the term as well as the identifiers of the documents containing it. Since the document identifiers are stored in sorted order, they can be stored as the differences between successive documents so as to reduce the size of the index. This paper describes a clustering algorithm that aims at partitioning the set of documents into ordered clusters so that the documents within the same cluster are similar and are assigned closer document identifiers. Thus the average value of the differences between successive documents will be minimized and storage space will be saved. The paper further presents the extension of this clustering algorithm to hierarchical clustering, in which similar clusters are clubbed together to form a mega cluster and similar mega clusters are then combined to form a super cluster. The paper thus describes different levels of clustering, which optimize the search process by directing the search along a specific path from the higher levels of clustering to the lower levels, i.e. from super clusters to mega clusters, then to clusters and finally to the individual documents, so that the user gets the best possible matching results in the minimum possible time.
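The gap-encoding idea the paper builds on can be sketched as follows (an illustrative encoder/decoder, not the paper's clustering algorithm): sorted document identifiers are stored as differences from the previous identifier, so that the closer identifiers produced by the clustering yield smaller gaps, which compress better under variable-length codes.

```python
def gap_encode(doc_ids):
    """Store sorted document identifiers as gaps from the previous id."""
    gaps, prev = [], 0
    for d in sorted(doc_ids):
        gaps.append(d - prev)   # small gap when ids are close together
        prev = d
    return gaps

def gap_decode(gaps):
    """Recover the original sorted identifiers by a running sum."""
    ids, total = [], 0
    for g in gaps:
        total += g
        ids.append(total)
    return ids
```

For example, the posting list [3, 5, 9] is stored as the gaps [3, 2, 4]; assigning similar documents consecutive identifiers keeps these gaps small.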
Hierarchical cluster-tendency analysis of the group structure in the foreign exchange market
Wu, Xin-Ye; Zheng, Zhi-Gang
2013-08-01
A hierarchical cluster-tendency (HCT) method for analyzing the group structure of networks of the global foreign exchange (FX) market is proposed by combining the advantages of both the minimal spanning tree (MST) and the hierarchical tree (HT). The currencies of the top 50 economies by 2010 GDP, according to the World Bank's database, are chosen as the underlying system. By using the HCT method, all nodes in the FX market network can be "colored" and distinguished. We reveal that the FX networks can be divided into two groups, i.e., the Asia-Pacific group and the Pan-European group. The results given by the hierarchical cluster-tendency method agree well with the formerly observed geographical aggregation behavior in the FX market. Moreover, an oil-resource aggregation phenomenon is discovered by using our method. We find that gold could be a better numeraire for the weekly-frequency FX data.
Institute of Scientific and Technical Information of China (English)
田玲; 邓旌波; 廖紫纤; 石博; 何楚
2013-01-01
An unsupervised SAR image segmentation algorithm based on multi-space and multi-hierarchical region spectral clustering (MSMHSC) is proposed. The algorithm first computes distances in the feature and geometric spaces to quickly obtain an initial over-segmentation into small regions, and then performs spectral clustering on those over-segmented regions, achieving unsupervised SAR image segmentation. The method has low computational complexity, requires no training samples, and its hierarchical design allows it to exploit the various prior and likelihood information in the SAR image more fully. Experiments on the real MSTAR SAR data set verify the speed and effectiveness of the algorithm.
Hierarchical Cluster Analysis – Various Approaches to Data Preparation
Directory of Open Access Journals (Sweden)
Z. Pacáková
2013-09-01
Full Text Available The article deals with two approaches to data preparation that avoid multicollinearity. The aim of the article is to find similarities in the e-communication level of EU states using hierarchical cluster analysis. In the first approach, the original set of fourteen indicators was reduced on the basis of correlation analysis: for each highly correlated pair, the indicator with higher variability was retained for further analysis. In the second, the data were transformed using principal component analysis, whose principal components are only weakly correlated; five principal components explaining about 92% of the variance were selected for further analysis. Hierarchical cluster analysis was performed both on the reduced data set and on the principal component scores. In both cases three clusters were assumed, following the Pseudo t-Squared and Pseudo F statistics, but the final clusters were not identical. An important characteristic for comparing the two results is the proportion of variance accounted for by the clusters, which was about ten percentage points higher for the principal component scores (57.8% compared to 47%). It can therefore be stated that, when principal component scores with a high enough explained proportion (about 92% in our analysis) are used as input variables for cluster analysis, the loss of information is lower than with data reduction on the basis of correlation analysis.
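The first data-preparation approach (for each highly correlated pair of indicators, keep only the one with higher variability) can be sketched as follows. This is an illustrative implementation with an assumed correlation threshold of 0.9, which is not a value taken from the article:

```python
def pearson(x, y):
    """Pearson correlation coefficient of two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def variance(x):
    m = sum(x) / len(x)
    return sum((a - m) ** 2 for a in x) / len(x)

def reduce_by_correlation(variables, threshold=0.9):
    """For each highly correlated pair of indicators, keep only the
    one with higher variance; return the surviving variable names."""
    keep = dict(variables)   # name -> list of values
    names = list(keep)
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            if a in keep and b in keep and abs(pearson(keep[a], keep[b])) > threshold:
                drop = a if variance(keep[a]) < variance(keep[b]) else b
                del keep[drop]
    return sorted(keep)

survivors = reduce_by_correlation([
    ("u", [1.0, 2.0, 3.0, 4.0]),
    ("v", [2.0, 4.0, 6.0, 8.0]),   # perfectly correlated with u, higher variance
    ("w", [5.0, 1.0, 4.0, 2.0]),
])
```

Here `u` is dropped because it is perfectly correlated with `v` but has lower variance.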
Hierarchical trie packet classification algorithm based on expectation-maximization clustering
Bi, Xia-an; Zhao, Junxia
2017-01-01
With the development of computer network bandwidth, packet classification algorithms which are able to deal with large-scale rule sets are in urgent need. Among the existing algorithms, those based on hierarchical tries have become an important research branch because of their wide practical use. Although the hierarchical trie saves large storage space, it has several shortcomings, such as backtracking and empty nodes. This paper proposes a new packet classification algorithm, Hierarchical Trie Algorithm Based on Expectation-Maximization Clustering (HTEMC). Firstly, this paper uses a formalization method to deal with the packet classification problem by mapping the rules and data packets into a two-dimensional space. Secondly, this paper uses the expectation-maximization algorithm to cluster the rules based on their aggregate characteristics, thereby forming diversified clusters. Thirdly, this paper proposes a hierarchical trie based on the results of the expectation-maximization clustering. Finally, this paper conducts simulation experiments and real-environment experiments to compare the performance of our algorithm with other typical algorithms, and analyzes the results of the experiments. The hierarchical trie structure in our algorithm not only adopts trie path compression to eliminate backtracking, but also solves the problem of low efficiency of trie updates, which greatly improves the performance of the algorithm. PMID:28704476
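The clustering step of HTEMC relies on expectation-maximization. A minimal 1-D, two-component Gaussian sketch is given below (equal mixing weights are assumed for brevity; this simplification, and the variance floor, are illustrative choices, not part of the paper's algorithm):

```python
import math

def em_two_gaussians(xs, iters=50):
    """Fit two 1-D Gaussian clusters by expectation-maximization,
    returning the two estimated means."""
    m1, m2, v1, v2 = min(xs), max(xs), 1.0, 1.0   # crude initialisation
    for _ in range(iters):
        # E-step: responsibility of component 1 for each point
        r = []
        for x in xs:
            p1 = math.exp(-(x - m1) ** 2 / (2 * v1)) / math.sqrt(v1)
            p2 = math.exp(-(x - m2) ** 2 / (2 * v2)) / math.sqrt(v2)
            r.append(p1 / (p1 + p2))
        # M-step: re-estimate means and variances from responsibilities
        n1 = sum(r)
        n2 = len(xs) - n1
        m1 = sum(ri * x for ri, x in zip(r, xs)) / n1
        m2 = sum((1 - ri) * x for ri, x in zip(r, xs)) / n2
        v1 = max(1e-6, sum(ri * (x - m1) ** 2 for ri, x in zip(r, xs)) / n1)
        v2 = max(1e-6, sum((1 - ri) * (x - m2) ** 2 for ri, x in zip(r, xs)) / n2)
    return m1, m2

means = em_two_gaussians([0.0, 0.2, 0.4, 10.0, 10.2, 10.4])
```

On two well-separated groups of rules projected to one axis, the two means converge to the group centres, giving the diversified clusters from which the trie is then built.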
Concept Association and Hierarchical Hamming Clustering Model in Text Classification
Institute of Scientific and Technical Information of China (English)
Su Gui-yang; Li Jian-hua; Ma Ying-hua; Li Sheng-hong; Yin Zhong-hang
2004-01-01
We propose two models in this paper. The concept association model is put forward to capture the co-occurrence relationships among keywords in documents, and the hierarchical Hamming clustering model is used to reduce the dimensionality of the category feature vector space, addressing the extremely high dimensionality of the documents' feature space. Experimental results indicate that the concept association model captures the co-occurrence relations among keywords in documents, which effectively improves the recall of the classification system. The hierarchical Hamming clustering model reduces the dimensionality of the category feature vector efficiently: the size of the vector space is only about 10% of the original dimensionality.
Mapping informative clusters in a hierarchical [corrected] framework of FMRI multivariate analysis.
Directory of Open Access Journals (Sweden)
Rui Xu
Full Text Available Pattern recognition methods have become increasingly popular in fMRI data analysis, as they are powerful in discriminating between multi-voxel patterns of brain activity associated with different mental states. However, when they are used in functional brain mapping, the location of discriminative voxels varies significantly, raising difficulties in interpreting the locus of the effect. Here we proposed a hierarchical framework of multivariate analysis that maps informative clusters rather than voxels to achieve reliable functional brain mapping without compromising discriminative power. In particular, we first searched for local homogeneous clusters that consisted of voxels with similar response profiles. Then, a multi-voxel classifier was built for each cluster to extract discriminative information from the multi-voxel patterns. Finally, through multivariate ranking, outputs from the classifiers served as a multi-cluster pattern to identify informative clusters by examining interactions among clusters. Results from both simulated and real fMRI data demonstrated that this hierarchical approach yielded more robust functional brain mapping than traditional voxel-based multivariate methods. In addition, the mapped clusters were highly overlapped for two perceptually equivalent object categories, further confirming the validity of our approach. In short, the hierarchical framework of multivariate analysis is suitable for both pattern classification and brain mapping in fMRI studies.
Image Segmentation by Hierarchical Spatial and Color Spaces Clustering
Institute of Scientific and Technical Information of China (English)
YU Wei
2005-01-01
Image segmentation, as a basic building block for many high-level image analysis problems, has attracted much research attention over the years. Existing approaches, however, mainly focus on clustering analysis in a single information channel, i.e., either the color or the spatial space, which may lead to unsatisfactory segmentation performance. Considering the spatial and color spaces jointly, this paper proposes a new hierarchical image segmentation algorithm, which alternately clusters the image regions in the color and spatial spaces in a fine-to-coarse manner. Without losing perceptual consistence, the proposed algorithm achieves the segmentation result using only a small number of colors according to user specification.
Automated tetraploid genotype calling by hierarchical clustering
SNP arrays are transforming breeding and genetics research for autotetraploids. To fully utilize these arrays, however, the relationship between signal intensity and allele dosage must be inferred independently for each marker. We developed an improved computational method to automate this process, ...
A fast quad-tree based two dimensional hierarchical clustering.
Rajadurai, Priscilla; Sankaranarayanan, Swamynathan
2012-01-01
Recently, microarray technologies have become a robust technique in the area of genomics. An important step in the analysis of gene expression data is the identification of groups of genes disclosing analogous expression patterns. Cluster analysis partitions a given dataset into groups based on specified features. Euclidean distance is a widely used similarity measure for gene expression data that considers the amount of change in gene expression. However, the huge number of genes and the intricacy of biological networks have greatly increased the challenge of comprehending and interpreting the resulting groups of data, increasing processing time. The proposed technique focuses on a fast quad-tree based 2-dimensional hierarchical clustering algorithm. The construction of the closest-pair data structure at each level is an important time factor, which determines the processing time of clustering. The proposed model reduces the processing time and improves the analysis of gene expression data.
Extending stability through hierarchical clusters in Echo State Networks
Directory of Open Access Journals (Sweden)
Sarah Jarvis
2010-07-01
Full Text Available Echo State Networks (ESNs) are reservoir networks that satisfy well-established criteria for stability when constructed as feedforward networks. Recent evidence suggests that stability criteria are altered in the presence of reservoir substructures, such as clusters. Understanding how the reservoir architecture affects stability is thus important for the appropriate design of any ESN. To quantitatively determine the influence of the most relevant network parameters, we analysed the impact of reservoir substructures on stability in hierarchically clustered ESNs (HESNs), as they allow a smooth transition from highly structured to increasingly homogeneous reservoirs. Previous studies used the largest eigenvalue of the reservoir connectivity matrix (the spectral radius) as a predictor for stable network dynamics. Here, we evaluate the impact of clusters, hierarchy and intercluster connectivity on the predictive power of the spectral radius for stability. Both hierarchy and low relative cluster sizes extend the range of spectral radius values that lead to stable networks, while increasing intercluster connectivity decreases the maximal spectral radius.
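The spectral-radius criterion discussed above is straightforward to apply in practice: estimate the largest eigenvalue magnitude of the reservoir matrix and rescale the weights to a target value. A small sketch using power iteration follows (the target of 0.9 and the tiny dense matrix are assumed illustrations; the paper's HESN construction itself is not shown):

```python
def spectral_radius(W, iters=200):
    """Estimate the dominant eigenvalue magnitude of a square matrix
    by power iteration with max-norm normalisation."""
    n = len(W)
    v = [1.0] * n
    for _ in range(iters):
        u = [sum(W[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = max(abs(x) for x in u) or 1.0
        v = [x / norm for x in u]
    u = [sum(W[i][j] * v[j] for j in range(n)) for i in range(n)]
    return max(abs(x) for x in u)

def scale_reservoir(W, target=0.9):
    """Rescale reservoir weights so the spectral radius equals `target`,
    the usual rule of thumb for stable ESN dynamics."""
    rho = spectral_radius(W)
    return [[w * target / rho for w in row] for row in W]

W = [[2.0, 1.0], [1.0, 2.0]]       # toy reservoir, spectral radius 3
W_scaled = scale_reservoir(W, target=0.9)
```

The study's point is precisely that for clustered reservoirs this scalar criterion becomes a less reliable stability predictor than for homogeneous ones.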
Multi-mode clustering model for hierarchical wireless sensor networks
Hu, Xiangdong; Li, Yongfu; Xu, Huifen
2017-03-01
The topology management, i.e., cluster maintenance, of wireless sensor networks (WSNs) is still a challenge due to their numerous nodes, diverse application scenarios and limited resources, as well as complex dynamics. To address this issue, a multi-mode clustering model (M2CM) is proposed in this study to maintain the clusters of hierarchical WSNs. In particular, unlike the traditional time-trigger model based on whole-network, periodic operations, the M2CM is based on local, event-trigger operations. In addition, an adaptive local maintenance algorithm is designed for broken clusters in the WSNs according to spatial-temporal demand changes. Numerical experiments are performed using the NS2 network simulation platform. Results validate the effectiveness of the proposed model with respect to network maintenance costs, node energy consumption and transmitted data, as well as network lifetime.
Bekki, Kenji
2017-01-01
Most old globular clusters (GCs) in the Galaxy are observed to have internal chemical abundance spreads in light elements. We discuss a new GC formation scenario based on hierarchical star formation within fractal molecular clouds. In the new scenario, a cluster of bound and unbound star clusters (`star cluster complex', SCC) that have a power-law cluster mass function with a slope (β) of 2 is first formed from a massive gas clump developed in a dwarf galaxy. Such cluster complexes and β = 2 are observed and expected from hierarchical star formation. The most massive star cluster (`main cluster'), which is the progenitor of a GC, can accrete gas ejected from asymptotic giant branch (AGB) stars initially in the cluster and other low-mass clusters before the clusters are tidally stripped or destroyed to become field stars in the dwarf. The SCC is initially embedded in a giant gas hole created by numerous supernovae of the SCC so that cold gas outside the hole can be accreted onto the main cluster later. New stars formed from the accreted gas have chemical abundances that are different from those of the original SCC. Using hydrodynamical simulations of GC formation based on this scenario, we show that the main cluster with an initial mass as large as (2-5) × 10^5 M⊙ can accrete more than 10^5 M⊙ of gas from AGB stars of the SCC. We suggest that merging of hierarchical star cluster complexes can play key roles in stellar halo formation around GCs and self-enrichment processes in the early phase of GC formation.
Hierarchically Clustered Star Formation in the Magellanic Clouds
Gouliermis, Dimitrios A; Ossenkopf, Volker; Klessen, Ralf S; Dolphin, Andrew E
2012-01-01
We present a cluster analysis of the bright main-sequence and faint pre-main-sequence stellar populations of a field ~ 90 x 90 pc centered on the HII region NGC 346/N66 in the Small Magellanic Cloud, from imaging with HST/ACS. We extend our earlier analysis of the stellar cluster population in the region to characterize the structuring behavior of young stars in the region as a whole, using stellar density maps interpreted through techniques designed for the study of ISM structuring. In particular, we demonstrate with Cartwright & Whitworth's Q parameter, dendrograms, and the Delta-variance wavelet transform technique that the young stellar populations in the region NGC 346/N66 are hierarchically clustered, in agreement with other regions in the Magellanic Clouds observed with HST. The origin of this hierarchy is currently under investigation.
Hierarchical method of task assignment for multiple cooperating UAV teams
Institute of Scientific and Technical Information of China (English)
Xiaoxuan Hu; Huawei Ma; Qingsong Ye; He Luo
2015-01-01
The problem of task assignment for multiple cooperating unmanned aerial vehicle (UAV) teams is considered. Multiple UAVs forming several small teams are needed to perform attack tasks on a set of predetermined ground targets. A hierarchical task assignment method is presented to address the problem. It breaks the original problem down into three levels of sub-problems: target clustering, cluster allocation and target assignment. The first two sub-problems are centrally solved by using clustering algorithms and integer linear programming, respectively, and the third sub-problem is solved in a distributed and parallel manner, using a mixed integer linear programming model and an improved ant colony algorithm. The proposed hierarchical method can reduce the computational complexity of the task assignment problem considerably, especially when the number of tasks or the number of UAVs is large. Experimental results show that this method is feasible and more efficient than non-hierarchical methods.
An agglomerative hierarchical approach to visualization in Bayesian clustering problems.
Dawson, K J; Belkhir, K
2009-07-01
Clustering problems (including the clustering of individuals into outcrossing populations, hybrid generations, full-sib families and selfing lines) have recently received much attention in population genetics. In these clustering problems, the parameter of interest is a partition of the set of sampled individuals--the sample partition. In a fully Bayesian approach to clustering problems of this type, our knowledge about the sample partition is represented by a probability distribution on the space of possible sample partitions. As the number of possible partitions grows very rapidly with the sample size, we cannot visualize this probability distribution in its entirety, unless the sample is very small. As a solution to this visualization problem, we recommend using an agglomerative hierarchical clustering algorithm, which we call the exact linkage algorithm. This algorithm is a special case of the maximin clustering algorithm that we introduced previously. The exact linkage algorithm is now implemented in our software package PartitionView. The exact linkage algorithm takes the posterior co-assignment probabilities as input and yields as output a rooted binary tree, or more generally, a forest of such trees. Each node of this forest defines a set of individuals, and the node height is the posterior co-assignment probability of this set. This provides a useful visual representation of the uncertainty associated with the assignment of individuals to categories. It is also a useful starting point for a more detailed exploration of the posterior distribution in terms of the co-assignment probabilities.
3D Nearest Neighbour Search Using a Clustered Hierarchical Tree Structure
Suhaibah, A.; Uznir, U.; Anton, F.; Mioc, D.; Rahman, A. A.
2016-06-01
Locating and analysing the location of new stores or outlets is one of the common issues facing retailers and franchisers. This is due to assure that new opening stores are at their strategic location to attract the highest possible number of customers. Spatial information is used to manage, maintain and analyse these store locations. However, since the business of franchising and chain stores in urban areas runs within high rise multi-level buildings, a three-dimensional (3D) method is prominently required in order to locate and identify the surrounding information such as at which level of the franchise unit will be located or is the franchise unit located is at the best level for visibility purposes. One of the common used analyses used for retrieving the surrounding information is Nearest Neighbour (NN) analysis. It uses a point location and identifies the surrounding neighbours. However, with the immense number of urban datasets, the retrieval and analysis of nearest neighbour information and their efficiency will become more complex and crucial. In this paper, we present a technique to retrieve nearest neighbour information in 3D space using a clustered hierarchical tree structure. Based on our findings, the proposed approach substantially showed an improvement of response time analysis compared to existing approaches of spatial access methods in databases. The query performance was tested using a dataset consisting of 500,000 point locations building and franchising unit. The results are presented in this paper. Another advantage of this structure is that it also offers a minimal overlap and coverage among nodes which can reduce repetitive data entry.
Xu, Lizhen; Paterson, Andrew D; Xu, Wei
2017-04-01
Motivated by the multivariate nature of microbiome data with hierarchical taxonomic clusters, counts that are often skewed and zero inflated, and repeated measures, we propose a Bayesian latent variable methodology to jointly model multiple operational taxonomic units within a single taxonomic cluster. This novel method can incorporate both negative binomial and zero-inflated negative binomial responses, and can account for serial and familial correlations. We develop a Markov chain Monte Carlo algorithm that is built on a data augmentation scheme using Pólya-Gamma random variables. Hierarchical centering and parameter expansion techniques are also used to improve the convergence of the Markov chain. We evaluate the performance of our proposed method through extensive simulations. We also apply our method to a human microbiome study.
Unconventional methods for clustering
Kotyrba, Martin
2016-06-01
Cluster analysis or clustering is the task of grouping a set of objects in such a way that objects in the same group (called a cluster) are more similar (in some sense or another) to each other than to those in other groups (clusters). It is the main task of exploratory data mining and a common technique for statistical data analysis used in many fields, including machine learning, pattern recognition, image analysis, information retrieval, and bioinformatics. The topic of this paper is one of the modern methods of clustering, namely the SOM (Self-Organising Map). The paper describes the theory needed to understand the principle of clustering and describes the algorithm used with clustering in our experiments.
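A minimal SOM can be written in a few dozen lines. The sketch below is a toy version with a 1-D grid of units; the linear initialisation, fixed epoch count and shrinking neighbourhood are assumed simplifications for illustration, not the paper's exact algorithm:

```python
def train_som(data, n_units=4, epochs=100, lr=0.5):
    """Train a 1-D self-organising map: for each sample, move the
    best-matching unit and its grid neighbours toward the sample."""
    dim = len(data[0])
    # linear initialisation: units spread evenly along the diagonal
    w = [[i / (n_units - 1)] * dim for i in range(n_units)]
    for t in range(epochs):
        alpha = lr * (1 - t / epochs)                       # decaying learning rate
        radius = max(1, round(n_units / 2 * (1 - t / epochs)))  # shrinking neighbourhood
        for x in data:
            bmu = min(range(n_units),
                      key=lambda i: sum((w[i][k] - x[k]) ** 2 for k in range(dim)))
            for i in range(n_units):
                if abs(i - bmu) <= radius:    # neighbourhood on the 1-D grid
                    for k in range(dim):
                        w[i][k] += alpha * (x[k] - w[i][k])
    return w

def best_unit(w, x):
    """Index of the unit whose weight vector is closest to x."""
    return min(range(len(w)),
               key=lambda i: sum((w[i][k] - x[k]) ** 2 for k in range(len(x))))

weights = train_som([(0.0, 0.0), (0.1, 0.1), (1.0, 1.0), (0.9, 0.9)])
```

After training, samples from the same blob map to the same (or neighbouring) units, which is how the SOM performs clustering while preserving topology.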
Institute of Scientific and Technical Information of China (English)
罗恩韬; 王国军
2015-01-01
Emerging services such as cloud computing, health care, street view map services and recommendation systems are driving the variety and scale of data to grow at an unprecedented speed. The surge in data volume leads to many common problems, such as the representability, processability and reliability of data. How to effectively handle and analyse the relationships among data, improve the efficiency of data partitioning, and establish a model for cluster analysis of data has become a problem that academia and industry urgently need to solve. This paper proposes a hierarchical clustering method based on semantic features. First, the data are trained according to their semantic features; the training result is then used to perform hierarchical clustering on each subset, finally producing the density centre points of the whole data set, which improves the efficiency and accuracy of data clustering. The method has low sampling complexity, accurate data analysis and good decidability, and is easy to implement.
Kinematic gait patterns in healthy runners: A hierarchical cluster analysis.
Phinyomark, Angkoon; Osis, Sean; Hettinga, Blayne A; Ferber, Reed
2015-11-01
Previous studies have demonstrated distinct clusters of gait patterns in both healthy and pathological groups, suggesting that different movement strategies may be represented. However, these studies have used discrete time point variables and usually focused on only one specific joint and plane of motion. Therefore, the first purpose of this study was to determine if running gait patterns for healthy subjects could be classified into homogeneous subgroups using three-dimensional kinematic data from the ankle, knee, and hip joints. The second purpose was to identify differences in joint kinematics between these groups. The third purpose was to investigate the practical implications of clustering healthy subjects by comparing these kinematics with runners experiencing patellofemoral pain (PFP). A principal component analysis (PCA) was used to reduce the dimensionality of the entire gait waveform data, and a hierarchical cluster analysis (HCA) then determined group sets of similar gait patterns and homogeneous clusters. The results show that two distinct running gait patterns were found, with the main between-group differences occurring in frontal and sagittal plane knee angles, indicating distinct gait strategies. These results suggest care must be taken when selecting samples of subjects in order to investigate the pathomechanics of injured runners.
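The HCA step used above can be sketched in a few lines. This is a naive, hedged illustration of agglomerative clustering (the study applies it to PCA-reduced gait waveforms, not raw 2-D points, and its linkage choice is not stated here): start with singleton clusters and repeatedly merge the closest pair until the desired number remains.

```python
import math

def agglomerative(points, n_clusters, linkage="complete"):
    """Naive agglomerative clustering over a list of coordinate tuples.
    `linkage` is "complete" (max pairwise distance) or "single" (min)."""
    clusters = [[i] for i in range(len(points))]

    def cluster_dist(ci, cj):
        agg = max if linkage == "complete" else min
        return agg(math.dist(points[a], points[b]) for a in ci for b in cj)

    while len(clusters) > n_clusters:
        # find and merge the closest pair of clusters
        i, j = min(((i, j) for i in range(len(clusters))
                    for j in range(i + 1, len(clusters))),
                   key=lambda p: cluster_dist(clusters[p[0]], clusters[p[1]]))
        clusters[i] += clusters[j]
        del clusters[j]
    return clusters
```

The quadratic pairwise search makes this O(n^3) overall; production implementations (e.g. in scipy) use much faster nearest-neighbour chain algorithms, but the merge logic is the same.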
Hierarchical star cluster assembly in globally collapsing molecular clouds
Vázquez-Semadeni, Enrique; González-Samaniego, Alejandro; Colín, Pedro
2017-05-01
We discuss the mechanism of cluster formation in a numerical simulation of a molecular cloud (MC) undergoing global hierarchical collapse, focusing on how the gas motions in the parent cloud control the assembly of the cluster. The global collapse implies that the star formation rate (SFR) increases over time. The collapse is hierarchical because it consists of small-scale collapses within larger scale ones. The latter culminate a few Myr later than the first small-scale ones and consist of filamentary flows that accrete on to massive central clumps. The small-scale collapses consist of clumps that are embedded in the filaments and falling on to the large-scale collapse centres. The stars formed in the early, small-scale collapses share the infall motion of their parent clumps, so that the filaments feed both gas and stars to the massive central clump. This process leads to the presence of a few older stars in a region where new protostars are forming, and also to a self-similar structure, in which each unit is composed of smaller scale subunits that approach each other and may merge. Because the older stars formed in the filaments share the infall motion of the gas on to the central clump, they tend to have larger velocities and to be distributed over larger areas than the younger stars formed in the central clump. Finally, interpreting the initial mass function (IMF) simply as a probability distribution implies that massive stars only form once the local SFR is large enough to sample the IMF up to high masses. In combination with the increase of the SFR, this implies that massive stars tend to appear late in the evolution of the MC, and only in the central massive clumps. We discuss the correspondence of these features with observed properties of young stellar clusters, finding very good qualitative agreement.
Paraskevopoulou, Sivylla E; Wu, Di; Eftekhar, Amir; Constandinou, Timothy G
2014-09-30
This work presents a novel unsupervised algorithm for real-time adaptive clustering of neural spike data (spike sorting). The proposed Hierarchical Adaptive Means (HAM) clustering method combines centroid-based clustering with hierarchical cluster connectivity to classify incoming spikes using groups of clusters. It is described how the proposed method can adaptively track the incoming spike data without requiring any past history, iteration or training, and autonomously determines the number of spike classes. Its performance (classification accuracy) has been tested using multiple datasets (both simulated and recorded), achieving near-identical accuracy compared to k-means (using 10 iterations and provided with the number of spike classes). Its robustness across different feature extraction methods has also been demonstrated, with classification accuracies above 80% across multiple datasets. Finally, and crucially, its low complexity, quantified through both memory and computation requirements, makes this method highly attractive for future hardware implementation.
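HAM itself adds hierarchical cluster connectivity on top of centroid tracking; as a much-simplified, hedged sketch of only the history-free centroid part (the `radius` threshold is an illustrative parameter, not one from the paper), a single-pass adaptive clusterer can be written as:

```python
import math

def online_cluster(stream, radius):
    """Single-pass adaptive clustering: assign each incoming point to the
    nearest centroid if within `radius`, otherwise open a new cluster.
    Centroids are maintained as running means, so no past samples are kept,
    and the number of clusters is determined by the data."""
    centroids, counts, labels = [], [], []
    for x in stream:
        best, best_d = -1, float("inf")
        for i, c in enumerate(centroids):
            d = math.dist(c, x)
            if d < best_d:
                best, best_d = i, d
        if best_d <= radius:
            counts[best] += 1
            # incremental mean update of the winning centroid
            centroids[best] = [c + (v - c) / counts[best]
                               for c, v in zip(centroids[best], x)]
            labels.append(best)
        else:
            centroids.append(list(x))
            counts.append(1)
            labels.append(len(centroids) - 1)
    return labels, centroids
```

Because only centroids and counts are stored, memory is O(number of clusters) regardless of stream length, which is the property that makes this family of methods hardware-friendly.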
Multilevel hierarchical kernel spectral clustering for real-life large scale complex networks.
Directory of Open Access Journals (Sweden)
Raghvendra Mall
Full Text Available Kernel spectral clustering (KSC) corresponds to a weighted kernel principal component analysis problem in a constrained optimization framework. The primal formulation leads to an eigen-decomposition of a centered Laplacian matrix at the dual level. The dual formulation allows building a model on a representative subgraph of the large-scale network in the training phase, with the model parameters estimated in the validation stage. The KSC model has a powerful out-of-sample extension property which allows cluster affiliation for unseen nodes of the big data network. In this paper we exploit the structure of the projections in the eigenspace during the validation stage to automatically determine a set of increasing distance thresholds. We use these distance thresholds in the test phase to obtain multiple levels of hierarchy for the large-scale network. The hierarchical structure in the network is determined in a bottom-up fashion. We empirically show that real-world networks have a multilevel hierarchical organization which cannot be detected efficiently by several state-of-the-art large-scale hierarchical community detection techniques such as the Louvain, OSLOM and Infomap methods. We show that a major advantage of our proposed approach is the ability to locate good-quality clusters at both the finer and coarser levels of hierarchy, using internal cluster quality metrics, on 7 real-life networks.
HCsnip: An R Package for Semi-supervised Snipping of the Hierarchical Clustering Tree.
Obulkasim, Askar; van de Wiel, Mark A
2015-01-01
Hierarchical clustering (HC) is one of the most frequently used methods in computational biology in the analysis of high-dimensional genomics data. Given a data set, HC outputs a binary tree whose leaves are the data points and whose internal nodes represent clusters of various sizes. Normally, a fixed-height cut on the HC tree is chosen, and each contiguous branch of data points below that height is considered a separate cluster. However, the fixed-height branch cut may not be ideal in situations where one expects a complicated tree structure with nested clusters. Furthermore, because related background information is not used in selecting the cutoff, the induced clusters are often difficult to interpret. This paper describes a novel procedure that aims to automatically extract meaningful clusters from the HC tree in a semi-supervised way. The procedure is implemented in the R package HCsnip, available from Bioconductor. Rather than cutting the HC tree at a fixed height, HCsnip probes various ways of snipping, possibly at variable heights, to tease out hidden clusters ensconced deep down in the tree. The cluster extraction process utilizes, along with the data set from which the HC tree is derived, commonly available background information. Consequently, the extracted clusters are highly reproducible and robust against various sources of variation that haunt high-dimensional genomics data. Since the clustering process is guided by the background information, clusters are easy to interpret. Unlike existing packages, no constraint is placed on the data type on which clustering is desired. In particular, the package accepts patient follow-up data for guiding the cluster extraction process. To our knowledge, HCsnip is the first package that is able to decompose the HC tree into clusters with piecewise snipping under the guidance of patient time-to-event information. Our implementation of the semi-supervised HC tree snipping framework is generic, and can
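The fixed-height cut that HCsnip improves on can be made concrete. Below is a hedged, minimal sketch (not HCsnip itself): given a dendrogram as an ordered list of merges in the scipy convention, apply every merge at or below the cut height and read off one flat label per leaf. HCsnip's variable-height snipping generalizes this by replacing the single threshold with per-branch decisions.

```python
def cut_tree(merges, n_leaves, height):
    """Fixed-height cut of a dendrogram.  `merges` lists (a, b, h) triples
    in order of non-decreasing height h, scipy-style: merge i joins nodes
    a and b into new node `n_leaves + i`.  Returns a flat cluster label
    for each of the n_leaves leaves."""
    parent = list(range(n_leaves + len(merges)))

    def find(x):  # union-find with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    for i, (a, b, h) in enumerate(merges):
        if h <= height:                 # apply only merges below the cut
            node = n_leaves + i
            parent[find(a)] = node
            parent[find(b)] = node

    roots = {}
    return [roots.setdefault(find(i), len(roots)) for i in range(n_leaves)]
```

Sliding `height` from 0 to the root height sweeps the clustering from all-singletons to one cluster, which is exactly the one-parameter family the fixed-height approach is limited to.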
Exploiting Homogeneity of Density in Incremental Hierarchical Clustering
Directory of Open Access Journals (Sweden)
Dwi H. Widiyantoro
2006-11-01
Full Text Available Hierarchical clustering is an important tool in many applications. When it involves a large data set that proliferates over time, periodically reclustering the data set is not an efficient process. Therefore, the ability to incorporate a new data set incrementally into an existing hierarchy becomes increasingly important. This article describes Homogen, a system that employs a new algorithm for generating a hierarchy of concepts and clusters incrementally from a stream of observations. The system aims to construct a hierarchy that satisfies the homogeneity and the monotonicity properties. Working in a bottom-up fashion, a new observation is placed in the hierarchy and a sequence of hierarchy restructuring processes is performed only in regions that have been affected by the presence of the new observation. Additionally, it combines multiple restructuring techniques that address different restructuring objectives to obtain a synergistic effect. The system has been tested on a variety of domains, including structured and unstructured data sets. The experimental results reveal that the system is able to construct a concept hierarchy that is consistent regardless of the input data order and whose quality is comparable to that of hierarchies produced by non-incremental clustering algorithms.
Lyman Alpha Emitters in the Hierarchically Clustering Galaxy Formation
Kobayashi, Masakazu A R; Nagashima, Masahiro
2007-01-01
We present a new theoretical model for the luminosity functions (LFs) of Lyman alpha (Lya) emitting galaxies in the framework of hierarchical galaxy formation. We extend a semi-analytic model of galaxy formation that reproduces a number of observations for local galaxies, without changing the original model parameters but introducing a physically-motivated modelling to describe the escape fraction of Lya photons from host galaxies (f_esc). Though a previous study using a hierarchical clustering model simply assumed a constant and universal value of f_esc, we incorporate two new effects on f_esc: extinction by interstellar dust and galaxy-scale outflow induced as a star formation feedback. It is found that the new model nicely reproduces all the observed Lya LFs of the Lya emitters (LAEs) at different redshifts in z ~ 3--6. Our model predicts that galaxies with strong outflows and f_esc ~ 1 are dominant in the observed LFs, which is consistent with available observations while the simple universal f_esc model ...
The structure of dark matter halos in hierarchical clustering theories
Subramanian, K; Ostriker, J P; Subramanian, Kandaswamy; Cen, Renyue; Ostriker, Jeremiah P.
1999-01-01
During hierarchical clustering, smaller masses generally collapse earlier than larger masses and so are denser on average. The core of a small-mass halo could be dense enough to resist disruption and survive undigested when it is incorporated into a bigger object. We explore the possibility that a nested sequence of undigested cores in the center of the halo, which have survived the hierarchical, inhomogeneous collapse to form larger and larger objects, determines the halo structure in the inner regions. For a flat universe with $P(k) \propto k^n$, scaling arguments then suggest that the core density profile is $\rho \propto r^{-\alpha}$ with $\alpha = (9+3n)/(5+n)$. But whether such behaviour obtains depends on detailed dynamics. We first examine the dynamics using a fluid approach to the self-similar collapse solutions for the dark matter phase space density, including the effect of velocity dispersions. We highlight the importance of tangential velocity dispersions to obtain density profiles shallowe...
Hand Tracking based on Hierarchical Clustering of Range Data
Cespi, Roberto; Lindner, Marvin
2011-01-01
Fast and robust hand segmentation and tracking is an essential basis for gesture recognition and thus an important component for contact-less human-computer interaction (HCI). Hand gesture recognition based on 2D video data has been intensively investigated. However, in practical scenarios purely intensity based approaches suffer from uncontrollable environmental conditions like cluttered background colors. In this paper we present a real-time hand segmentation and tracking algorithm using Time-of-Flight (ToF) range cameras and intensity data. The intensity and range information is fused into one pixel value, representing its combined intensity-depth homogeneity. The scene is hierarchically clustered using a GPU based parallel merging algorithm, allowing a robust identification of both hands even for inhomogeneous backgrounds. After the detection, both hands are tracked on the CPU. Our tracking algorithm can cope with the situation that one hand is temporarily covered by the other hand.
Identifying Reference Objects by Hierarchical Clustering in Java Environment
Directory of Open Access Journals (Sweden)
RAHUL SAHA
2011-09-01
Full Text Available The Java programming environment has become very popular in recent years. The Java programming language is designed to be portable enough to execute on a wide range of computers, from cell phones to supercomputers. Computer programs written in Java are compiled into Java bytecode instructions suitable for execution by a Java Virtual Machine implementation. The Java Virtual Machine is commonly implemented in software by means of an interpreter for the Java Virtual Machine instruction set. As an object-oriented language, Java utilizes the concept of objects. Our idea is to identify candidate object references in a Java environment through hierarchical cluster analysis using the reference stack and the execution stack.
Bayesian hierarchical clustering for studying cancer gene expression data with unknown statistics.
Directory of Open Access Journals (Sweden)
Korsuk Sirinukunwattana
Full Text Available Cluster analysis is an important tool in studying gene expression data. The Bayesian hierarchical clustering (BHC) algorithm can automatically infer the number of clusters and uses Bayesian model selection to improve clustering quality. In this paper, we present an extension of the BHC algorithm. Our Gaussian BHC (GBHC) algorithm represents data as a mixture of Gaussian distributions. It uses a normal-gamma distribution as a conjugate prior on the mean and precision of each of the Gaussian components. We tested GBHC on 11 cancer and 3 synthetic datasets. The results on cancer datasets show that in sample clustering, GBHC on average produces a clustering partition that is more concordant with the ground truth than those obtained from other commonly used algorithms. Furthermore, GBHC frequently infers a number of clusters that is close to the ground truth. In gene clustering, GBHC also produces a clustering partition that is more biologically plausible than several other state-of-the-art methods. This suggests GBHC as an alternative tool for studying gene expression data. The implementation of GBHC is available at https://sites.google.com/site/gaussianbhc/
van der Ham, Joris L
2016-05-19
Forensic entomologists can use carrion communities' ecological succession data to estimate the postmortem interval (PMI). Permutation tests of hierarchical cluster analyses of these data provide a conceptual method to estimate part of the PMI, the post-colonization interval (post-CI). This multivariate approach produces a baseline of statistically distinct clusters that reflect changes in the carrion community composition during the decomposition process. Carrion community samples of unknown post-CIs are compared with these baseline clusters to estimate the post-CI. In this short communication, I use data from previously published studies to demonstrate the conceptual feasibility of this multivariate approach. Analyses of these data produce series of significantly distinct clusters, which represent carrion communities during 1- to 20-day periods of the decomposition process. For 33 carrion community samples, collected over an 11-day period, this approach correctly estimated the post-CI within an average range of 3.1 days.
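The permutation-test idea above can be illustrated with a deliberately simplified, hedged example: a univariate two-sample permutation test on the difference of means (the study's actual tests operate on multivariate carrion-community composition, not single values, and this function is not from the paper).

```python
import random

def perm_test_mean_diff(a, b, n_perm=2000, seed=1):
    """Two-sample permutation test on the absolute difference of means.
    The p-value is the fraction of label-shuffled datasets whose statistic
    is at least as extreme as the observed one (with the +1 correction so
    the p-value is never exactly zero)."""
    rng = random.Random(seed)
    mean = lambda xs: sum(xs) / len(xs)
    observed = abs(mean(a) - mean(b))
    pooled = list(a) + list(b)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)                      # break the group labels
        if abs(mean(pooled[:len(a)]) - mean(pooled[len(a):])) >= observed:
            hits += 1
    return (hits + 1) / (n_perm + 1)
```

A small p-value means the two groups of samples are unlikely to be one homogeneous community, which is the logic used to declare baseline clusters statistically distinct.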
Novel density-based and hierarchical density-based clustering algorithms for uncertain data.
Zhang, Xianchao; Liu, Han; Zhang, Xiaotong
2017-09-01
Uncertain data has posed a great challenge to traditional clustering algorithms. Recently, several algorithms have been proposed for clustering uncertain data, and among them density-based techniques seem promising for handling data uncertainty. However, some issues, such as the loss of uncertainty information, high time complexity and non-adaptive thresholds, have not been addressed well in the previous density-based algorithm FDBSCAN and the hierarchical density-based algorithm FOPTICS. In this paper, we first propose a novel density-based algorithm, PDBSCAN, which improves on FDBSCAN in the following respects: (1) it employs a more accurate method to compute the probability that the distance between two uncertain objects is less than or equal to a boundary value, instead of the sampling-based method in FDBSCAN; (2) it introduces new definitions of probability neighborhood, support degree, core object probability and direct reachability probability, thus reducing the complexity and solving the issue of the non-adaptive threshold (for core object judgement) in FDBSCAN. Then, we modify PDBSCAN into an improved version (PDBSCANi) by using a better cluster assignment strategy to ensure that every object is assigned to the most appropriate cluster, thus solving the issue of the non-adaptive threshold (for direct density reachability judgement) in FDBSCAN. Furthermore, as PDBSCAN and PDBSCANi have difficulties clustering uncertain data with non-uniform cluster density, we propose a novel hierarchical density-based algorithm, POPTICS, by extending the definitions of PDBSCAN, adding new definitions of fuzzy core distance and fuzzy reachability distance, and employing a new clustering framework. POPTICS can reveal the cluster structures of datasets with different local densities in different regions better than PDBSCAN and PDBSCANi, and it addresses the issues in FOPTICS. Experimental results demonstrate the superiority of our proposed algorithms over the existing ones.
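FDBSCAN and PDBSCAN both generalize classic DBSCAN to uncertain objects. For orientation, here is a hedged sketch of the certain-data baseline they extend, in plain Python: a core point has at least `min_pts` neighbours within `eps`, clusters grow from core points, and unreachable points are labelled as noise (-1).

```python
import math

def dbscan(points, eps, min_pts):
    """Textbook DBSCAN on ordinary (certain) points.  Returns a list of
    integer cluster labels, with -1 marking noise."""
    def neighbours(i):
        return [j for j in range(len(points))
                if math.dist(points[i], points[j]) <= eps]

    labels = [None] * len(points)
    cluster = 0
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        seeds = neighbours(i)
        if len(seeds) < min_pts:
            labels[i] = -1              # noise (may be relabelled as border)
            continue
        labels[i] = cluster             # i is a core point: start a cluster
        queue = [j for j in seeds if j != i]
        while queue:
            j = queue.pop()
            if labels[j] == -1:
                labels[j] = cluster     # border point: claim, do not expand
            if labels[j] is not None:
                continue
            labels[j] = cluster
            nb = neighbours(j)
            if len(nb) >= min_pts:      # j is itself a core point
                queue.extend(nb)
        cluster += 1
    return labels
```

The uncertain-data variants replace the crisp `dist <= eps` test with the probability that two uncertain objects lie within the boundary value, which is exactly the quantity PDBSCAN computes more accurately than FDBSCAN.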
Bouaziz, Matthieu; Paccard, Caroline; Guedj, Mickael; Ambroise, Christophe
2012-01-01
Inferring the structure of populations has many applications for genetic research. In addition to providing information for evolutionary studies, it can be used to account for the bias induced by population stratification in association studies. To this end, many algorithms have been proposed to cluster individuals into genetically homogeneous sub-populations. Parametric algorithms, such as Structure, are very popular, but their underlying complexity and their high computational cost led to the development of faster parametric alternatives such as Admixture. Alternatives to these methods are the non-parametric approaches. Among this category, AWclust has proven efficient but fails to properly identify population structure for complex datasets. We present in this article a new clustering algorithm called Spectral Hierarchical clustering for the Inference of Population Structure (SHIPS), based on a divisive hierarchical clustering strategy, allowing a progressive investigation of population structure. This method takes genetic data as input to cluster individuals into homogeneous sub-populations and, with the use of the gap statistic, estimates the optimal number of such sub-populations. SHIPS was applied to a set of simulated discrete and admixed datasets and to real SNP datasets from the HapMap and Pan-Asian SNP consortia. The programs Structure, Admixture, AWclust and PCAclust were also investigated in a comparison study. SHIPS and the parametric approach Structure were the most accurate when applied to simulated datasets, both in terms of individual assignments and estimation of the correct number of clusters. The analysis of the results on the real datasets highlighted that the clusterings of SHIPS were the most consistent with the population labels or those produced by the Admixture program. The performance of SHIPS when applied to SNP data, along with its relatively low computational cost and its ease of use, makes this method a promising tool for inferring population structure.
A novel approach to the problem of non-uniqueness of the solution in hierarchical clustering.
Cattinelli, Isabella; Valentini, Giorgio; Paulesu, Eraldo; Borghese, Nunzio Alberto
2013-07-01
The existence of multiple solutions in clustering, and in hierarchical clustering in particular, is often ignored in practical applications. However, this is a non-trivial problem, as different data orderings can result in different cluster sets that, in turn, may lead to different interpretations of the same data. The method presented here offers a solution to this issue. It is based on the definition of an equivalence relation over dendrograms that allows developing all and only the significantly different dendrograms for the same dataset, thus reducing the computational complexity from exponential (when all possible dendrograms are considered) to polynomial. Experimental results in the neuroimaging and bioinformatics domains show the effectiveness of the proposed method.
Clinical fracture risk evaluated by hierarchical agglomerative clustering
DEFF Research Database (Denmark)
Kruse, Christian; Eiken, P; Vestergaard, P
2017-01-01
profiles. INTRODUCTION: The purposes of this study were to establish and quantify patient clusters of high, average and low fracture risk using an unsupervised machine learning algorithm. METHODS: Regional and national Danish patient data on dual-energy X-ray absorptiometry (DXA) scans, medication...... containing less than 250 subjects. Clusters were identified as high, average or low fracture risk based on bone mineral density (BMD) characteristics. Cluster-based descriptive statistics and relative Z-scores for variable means were computed. RESULTS: Ten thousand seven hundred seventy-five women were...... as low fracture risk with high to very high BMD. A mean age of 60 years was the earliest that allowed for separation of high-risk clusters. DXA scan results could identify high-risk subjects with different antiresorptive treatment compliance levels based on similarities and differences in lumbar spine...
Craig, Paul; Roa-Seïler, Néna
2013-01-01
This paper describes a novel information visualization technique that combines multidimensional scaling and hierarchical clustering to support the exploratory analysis of multidimensional data. The technique displays the results of multidimensional scaling using a scatter plot in which the proximity of any two items' representations reflects their similarity according to a Euclidean distance metric. The results of hierarchical clustering are overlaid onto this view by drawing smoothed outlines around each nested cluster. The difference in similarity between successive cluster combinations is used to colour-code clusters and make stronger natural clusters more prominent in the display. When a cluster or group of items is selected, multidimensional scaling and hierarchical clustering are re-applied to a filtered subset of the data, and animation is used to smooth the transition between successive filtered views. As a case study we demonstrate the technique being used to analyse survey data relating to the appropriateness of different phrases to different emotionally charged situations.
Water quality assessment with hierarchical cluster analysis based on Mahalanobis distance.
Du, Xiangjun; Shao, Fengjing; Wu, Shunyao; Zhang, Hanlin; Xu, Si
2017-07-01
Water quality assessment is crucial for assessment of marine eutrophication, prediction of harmful algal blooms, and environment protection. Previous studies have developed many numeric modeling methods and data-driven approaches for water quality assessment. Cluster analysis, an approach widely used for grouping data, has also been employed. However, there are complex correlations between water quality variables, which play important roles in water quality assessment but have often been overlooked. In this paper, we analyze correlations between water quality variables and propose an alternative method for water quality assessment with hierarchical cluster analysis based on Mahalanobis distance. Further, we cluster water quality data collected from coastal waters of the Bohai Sea and North Yellow Sea of China, and apply the clustering results to evaluate water quality. To evaluate the validity, we also cluster the water quality data with cluster analysis based on Euclidean distance, which is widely adopted in previous studies. The results show that our method is more suitable for water quality assessment with many correlated water quality variables. To our knowledge, it is the first attempt to apply Mahalanobis distance for coastal water quality assessment.
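The key substitution in the record above is replacing the Euclidean metric with the Mahalanobis distance before clustering. As a hedged, minimal illustration (restricted to two variables so the covariance matrix can be inverted in closed form; the study works with more variables):

```python
import math

def mahalanobis_2d(x, mean, cov):
    """Mahalanobis distance of 2-D point x from `mean` under the 2x2
    covariance matrix `cov` (inverted in closed form).  Unlike the
    Euclidean metric, it discounts directions in which the variables
    vary or co-vary strongly, so correlated variables are not
    double-counted."""
    (a, b), (c, d) = cov
    det = a * d - b * c
    inv = [[d / det, -b / det], [-c / det, a / det]]
    u = [x[0] - mean[0], x[1] - mean[1]]
    v = [inv[0][0] * u[0] + inv[0][1] * u[1],
         inv[1][0] * u[0] + inv[1][1] * u[1]]
    return math.sqrt(u[0] * v[0] + u[1] * v[1])
```

With an identity covariance the Mahalanobis distance reduces to the Euclidean one; with unequal variances or nonzero covariance, points along high-variance directions count as closer. Feeding a matrix of such pairwise distances into hierarchical clustering gives the correlation-aware grouping the paper advocates.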
Directory of Open Access Journals (Sweden)
Natalia A Petushkova
Full Text Available There are two ways that statistical methods can learn from biomedical data. One way is to learn classifiers to identify diseases and to predict outcomes using a training dataset with an established diagnosis for each sample. When the training dataset is not available, the task can be to mine for the presence of meaningful groups (clusters) of samples and to explore the underlying data structure (unsupervised learning). We investigated the proteomic profiles of the cytosolic fraction of human liver samples using two-dimensional electrophoresis (2DE). Samples were resected upon surgical treatment of hepatic metastases in colorectal cancer. Unsupervised hierarchical clustering of 2DE gel images (n = 18) revealed a pair of clusters, containing 11 and 7 samples. Previously we used the same specimens to measure biochemical profiles based on cytochrome P450-dependent enzymatic activities and also found that samples were clearly divided into two well-separated groups by cluster analysis. It turned out that the groups by enzyme activity almost perfectly match the groups identified from the proteomic data. Of the 271 reproducible spots on our 2DE gels, we selected 15 to distinguish the human liver cytosolic clusters. Using MALDI-TOF peptide mass fingerprinting, we identified 12 proteins for the selected spots, including known cancer-associated species. Our results highlight the importance of hierarchical cluster analysis of proteomic data, and show concordance between the results of biochemical and proteomic approaches. Grouping of the human liver samples and/or patients into differing clusters may provide insights into possible molecular mechanisms of drug metabolism and creates a rationale for personalized treatment.
Using Dynamic Quantum Clustering to Analyze Hierarchically Heterogeneous Samples on the Nanoscale
Energy Technology Data Exchange (ETDEWEB)
Hume, Allison; /Princeton U. /SLAC
2012-09-07
Dynamic Quantum Clustering (DQC) is an unsupervised, highly visual data mining technique. DQC was tested as an analysis method for X-ray Absorption Near Edge Structure (XANES) data from the Transmission X-ray Microscopy (TXM) group. The TXM group images hierarchically heterogeneous materials with nanoscale resolution and a large field of view. XANES data consist of an energy spectrum for each pixel of an image. It was determined that DQC successfully identifies structure in data of this type without prior knowledge of the components in the sample. Clusters and sub-clusters clearly reflected features of the spectra that identified chemical component, chemical environment, and density in the image. DQC can also be used in conjunction with the established data analysis technique, which does require knowledge of the components present.
Hierarchical clusters in families with type 2 diabetes
García-Solano, Beatriz; Gallegos-Cabriales, Esther C; Gómez-Meza, Marco V; García-Madrid, Guillermina; Flores-Merlo, Marcela; García-Solano, Mauro
2015-01-01
Families represent more than a set of individuals; a family is more than the sum of its individual members. With this classification, nurses can identify family health-illness beliefs under the family-as-a-unit concept and plan the family's inclusion in type 2 diabetes treatment, which is not considered in public policy even though families share diet, exercise, and self-monitoring with a member who suffers from type 2 diabetes. The aim of this study was to determine whether the characteristics, functionality, routines, and family and individual health in type 2 diabetes describe the differences and similarities between families so that they can be considered as a unit. We performed an exploratory, descriptive hierarchical cluster analysis of 61 families using three instruments and a questionnaire, in addition to weight, height, body fat percentage, hemoglobin A1c, total cholesterol, triglycerides, low-density lipoprotein and high-density lipoprotein. The analysis produced three groups of families. Wilks' lambda demonstrated statistically significant differences provided by age (Λ = 0.778, F = 2.098, p = 0.010) and family health (Λ = 0.813, F = 2.650, p = 0.023). A post hoc Tukey test coincided with the three subsets. Families with type 2 diabetes have common elements that make them similar, while sharing differences that make them unique. PMID:27347419
Hadida, Jonathan; Desrosiers, Christian; Duong, Luc
2011-03-01
The segmentation of anatomical structures in Computed Tomography Angiography (CTA) is a pre-operative task useful in image-guided surgery. Even though very robust and precise methods have been developed to help achieve a reliable segmentation (level sets, active contours, etc.), segmentation remains very time-consuming, both in terms of manual interaction and in terms of computation time. The goal of this study is to present a fast method to find coarse anatomical structures in CTA with few parameters, based on hierarchical clustering. The algorithm is organized as follows: first, a fast non-parametric histogram clustering method is proposed to compute a piecewise constant mask. A second step then indexes all the space-connected regions in the piecewise constant mask. Finally, a hierarchical clustering is performed to build a graph representing the connections between the various regions in the piecewise constant mask. This step builds up structural knowledge about the image. Several interactive features for segmentation are presented, for instance the association or disassociation of anatomical structures. A comparison with the Mean-Shift algorithm is presented.
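The first step of the pipeline above (a non-parametric histogram clustering producing a piecewise constant mask) can be sketched in a hedged toy form. This is not the paper's algorithm: it simply keeps the most populated intensity bins and snaps every pixel to the nearest retained bin centre; `n_bins` and `n_levels` are illustrative parameters, and the image is flattened to a plain list of intensities.

```python
def piecewise_mask(intensities, n_bins=16, n_levels=3):
    """Crude non-parametric histogram clustering: keep the `n_levels` most
    populated intensity bins and snap every pixel to the nearest of their
    bin centres, yielding a piecewise-constant version of the input."""
    lo, hi = min(intensities), max(intensities)
    width = (hi - lo) / n_bins or 1.0          # guard against a flat image
    counts = [0] * n_bins
    for v in intensities:
        counts[min(int((v - lo) / width), n_bins - 1)] += 1
    # the dominant histogram modes become the constant levels of the mask
    top = sorted(range(n_bins), key=counts.__getitem__, reverse=True)[:n_levels]
    centres = [lo + (b + 0.5) * width for b in top]
    return [min(centres, key=lambda c: abs(c - v)) for v in intensities]
```

The resulting mask has only a handful of distinct values, so connected regions of equal value can then be indexed and fed into the hierarchical clustering stage that builds the region-adjacency graph.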
Banerjee, Sambaran
2014-01-01
The formation of very young massive clusters, or "starburst" clusters, is currently one of the most widely debated topics in astronomy. The classical notion dictates that a star cluster is formed in situ in a dense molecular gas clump, followed by a substantial residual gas expulsion. On the other hand, based on the observed morphologies of many young stellar associations, a hierarchical formation scenario has alternatively been suggested. A very young (age $\approx$ 1 Myr), massive ($>10^4M_\odot$) star cluster like the Galactic NGC 3603 young cluster (HD 97950) is an appropriate testbed for distinguishing between such "monolithic" and "hierarchical" formation scenarios. A recent study by Banerjee and Kroupa (2014) demonstrates that the monolithic scenario remarkably reproduces the HD 97950 cluster. In the present work, we explore the possibility of the formation of the above cluster via hierarchical assembly of subclusters. These subclusters are initially distributed over a wide range of spatial volumes and have vari...
THE EVOLUTION OF BRIGHTEST CLUSTER GALAXIES IN A HIERARCHICAL UNIVERSE
Energy Technology Data Exchange (ETDEWEB)
Tonini, Chiara; Bernyk, Maksym; Croton, Darren [Centre for Astrophysics and Supercomputing, Swinburne University of Technology, Melbourne, VIC 3122 (Australia); Maraston, Claudia; Thomas, Daniel [Institute of Cosmology and Gravitation, University of Portsmouth, Portsmouth PO1 3FX (United Kingdom)
2012-11-01
We investigate the evolution of brightest cluster galaxies (BCGs) from redshift z ≈ 1.6 to z = 0. We upgrade the hierarchical semi-analytic model of Croton et al. with a new spectro-photometric model that produces realistic galaxy spectra, making use of the Maraston stellar populations and a new recipe for dust extinction. We compare the model predictions of the K-band luminosity evolution and the J - K, V - I, and I - K color evolution with a series of data sets, including those of Collins et al., who argued that semi-analytic models based on the Millennium simulation cannot reproduce the red colors and high luminosity of BCGs at z > 1. We show instead that the model is well within the range of the observed luminosity and correctly reproduces the color evolution of BCGs in the whole redshift range up to z ≈ 1.6. We argue that the success of the semi-analytic model is in large part due to the implementation of a more sophisticated spectro-photometric model. An analysis of the model BCGs shows an increase in mass by a factor of 2-3 since z ≈ 1, and star formation activity down to low redshifts. While the consensus regarding BCGs is that they are passively evolving, we argue that this conclusion is affected by the degeneracy between star formation history and stellar population models used in spectral energy distribution fitting, and by the inefficacy of toy models of passive evolution to capture the complexity of real galaxies, especially those with rich merger histories like BCGs. Following this argument, we also show that the model BCGs exhibit a realistic mix of stellar populations, and that these stellar populations are mostly old. In addition, the age-redshift relation of the model BCGs follows that of the universe, meaning that given their merger history and star formation history, the ageing of BCGs is always dominated by the ageing of their stellar populations. In a ΛCDM universe, we define such evolution as…
Munandar, T. A.; Azhari; Mushdholifah, A.; Arsyad, L.
2017-03-01
Disparities in regional development are commonly identified using the Klassen Typology and the Location Quotient. Both methods typically use data on the gross regional domestic product (GRDP) sectors of a particular region. The Klassen approach can identify regional disparities by classifying the GRDP sector data into four classes, namely Quadrants I, II, III, and IV. Each quadrant indicates a certain level of regional disparity based on the GRDP sector value of the region in question. Meanwhile, the Location Quotient (LQ) is usually used to identify potential sectors in a particular region, determining which sectors are potential and which are not. LQ classifies each sector into three classes, namely the basic sector, the non-basic sector with a competitive advantage, and the non-basic sector which can only meet its own necessities. However, neither the Klassen Typology nor LQ can clearly visualize the relationships among the development achievements of each region and sector. This research aimed to develop a new approach to the identification of disparities in regional development in the form of hierarchical clustering. The method of Hierarchical Agglomerative Clustering (HAC) was employed as the basis of the hierarchical clustering model for identifying disparities in regional development. Modifications were made to HAC using the Klassen Typology and LQ: HAC modified with the Klassen Typology is called MHACK, while HAC modified with LQ is called MACLoQ. The two algorithms identify regional disparities (MHACK) and potential sectors (MACLoQ), respectively, in the form of hierarchical clusters. Applying MHACK to 31 regencies in Central Java Province, 3 regencies (Demak, Jepara, and Magelang City) fall into the category of developed and rapidly-growing regions, while the other 28 fall into the category of developed but depressed regions. Results of the MACLo
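MHACK and MACLoQ are modifications whose details are not reproduced here; the unmodified HAC core that both build on can be sketched as follows (a minimal pure-NumPy version with single or average linkage; the "GRDP sector" vectors are invented for illustration):

```python
import numpy as np

def hac(points, n_clusters, linkage="average"):
    """Plain agglomerative hierarchical clustering: start from singleton
    clusters and repeatedly merge the closest pair of clusters until
    n_clusters remain."""
    pts = np.asarray(points, dtype=float)
    clusters = [[i] for i in range(len(pts))]

    def dist(a, b):
        # all pairwise distances between members of cluster a and cluster b
        d = np.linalg.norm(pts[a][:, None, :] - pts[b][None, :, :], axis=-1)
        return d.min() if linkage == "single" else d.mean()

    while len(clusters) > n_clusters:
        i, j = min(((i, j) for i in range(len(clusters))
                    for j in range(i + 1, len(clusters))),
                   key=lambda ij: dist(clusters[ij[0]], clusters[ij[1]]))
        clusters[i] = clusters[i] + clusters[j]
        del clusters[j]
    return clusters

# toy feature vectors for six regions forming two obvious groups
pts = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10)]
groups = hac(pts, n_clusters=2)
```

The published modifications replace the plain Euclidean distance and merge criterion with Klassen- and LQ-derived classifications.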
Ghebremedhin, Meron; Yesupriya, Shubha; Luka, Janos; Crane, Nicole J.
2015-03-01
Recent studies have demonstrated the potential advantages of Raman spectroscopy in the biomedical field due to its rapidity and noninvasive nature. In this study, Raman spectroscopy is applied as a method for differentiating bacterial isolates by Gram status and genus/species. We created models for identifying 28 bacterial isolates using spectra collected with a 785 nm laser excitation Raman spectroscopic system. To investigate the groupings of these samples, partial least squares discriminant analysis (PLSDA) and hierarchical cluster analysis (HCA) were implemented. In addition, cluster analyses of the isolates were performed using various data types consisting of biochemical tests, gene sequence alignment, high resolution melt (HRM) analysis, and antimicrobial susceptibility tests of minimum inhibitory concentration (MIC) and degree of antimicrobial resistance (SIR). To evaluate the ability of these models to correctly classify bacterial isolates using solely Raman spectroscopic data, a set of 14 validation samples was tested using the PLSDA models and consequently the HCA models. External cluster evaluation criteria of purity and Rand index were calculated at different taxonomic levels to compare the performance of clustering using Raman spectra as well as the other datasets. Results showed that Raman spectra performed comparably to, and in some cases better than, the other data types, with Rand index and purity values up to 0.933 and 0.947, respectively. This study clearly demonstrates that the discrimination of bacterial species using Raman spectroscopic data and hierarchical cluster analysis is possible and has the potential to be a powerful point-of-care tool in clinical settings.
Intrusion Detection Method Based on Improved Growing Hierarchical Self-Organizing Map
Institute of Scientific and Technical Information of China (English)
张亚平; 布文秀; 苏畅; 王璐瑶; 许涵
2016-01-01
Considering that the growing hierarchical self-organizing map (GHSOM) ignores the influence of individual components in sample vector analysis, and that its accuracy in detecting unknown network attacks is relatively low, an improved GHSOM method combined with mutual information is proposed. After theoretical analysis, experiments are conducted to illustrate the effectiveness of the proposed method by accurately clustering the input data. Based on the different clusters, the complex relationships within the data can be revealed effectively.
Fernández-Arjona, María del Mar; Grondona, Jesús M.; Granados-Durán, Pablo; Fernández-Llebrez, Pedro; López-Ávalos, María D.
2017-01-01
It is known that microglia morphology and function are closely related, but only a few studies have objectively described distinct morphological subtypes. To address this issue, morphological parameters of microglial cells were analyzed in a rat model of aseptic neuroinflammation. After the injection of a single dose of the enzyme neuraminidase (NA) within the lateral ventricle (LV), an acute inflammatory process occurs. Sections from NA-injected animals and sham controls were immunolabeled with the microglial marker IBA1, which highlights ramifications and features of the cell shape. Using images obtained by section scanning, individual microglial cells were sampled from various regions (septofimbrial nucleus, hippocampus and hypothalamus) at different times post-injection (2, 4 and 12 h). Each cell yielded a set of 15 morphological parameters by means of image analysis software. Five initial parameters (including fractal measures) were statistically different in cells from NA-injected rats (most of them IL-1β positive, i.e., M1-state) compared to those from control animals (none of them IL-1β positive, i.e., surveillant state). However, additional multimodal parameters proved more suitable for hierarchical cluster analysis (HCA). This method suggested classifying the microglia population into four clusters. Furthermore, a linear discriminant analysis (LDA) suggested three specific parameters to objectively classify any microglial cell by a decision tree. In addition, a principal components analysis (PCA) revealed two extra valuable variables that allowed further classification of microglia into a total of eight sub-clusters or types. The spatio-temporal distribution of these different morphotypes in our rat inflammation model allowed us to relate specific morphotypes to microglial activation status and brain location. An objective method for microglia classification based on morphological parameters is proposed. Main points: Microglia undergo a quantifiable
The Hierarchical Distribution of the Young Stellar Clusters in Six Local Star-forming Galaxies
Grasha, K.; Calzetti, D.; Adamo, A.; Kim, H.; Elmegreen, B. G.; Gouliermis, D. A.; Dale, D. A.; Fumagalli, M.; Grebel, E. K.; Johnson, K. E.; Kahre, L.; Kennicutt, R. C.; Messa, M.; Pellerin, A.; Ryon, J. E.; Smith, L. J.; Shabani, F.; Thilker, D.; Ubeda, L.
2017-05-01
We present a study of the hierarchical clustering of the young stellar clusters in six local (3-15 Mpc) star-forming galaxies using Hubble Space Telescope broadband WFC3/UVIS UV and optical images from the Treasury Program LEGUS (Legacy ExtraGalactic UV Survey). We identified 3685 likely clusters and associations, each visually classified by morphology, and we use the angular two-point correlation function to study the clustering of these stellar systems. We find that the young clusters and associations are spatially clustered with respect to each other, forming large, unbound, hierarchical star-forming complexes that are in general very young. The strength of the clustering decreases with increasing age of the star clusters and stellar associations, which become more homogeneously distributed after ˜40-60 Myr and on scales larger than a few hundred parsecs. In all galaxies, the associations exhibit a global behavior that is distinct from, and more strongly correlated than, that of the compact clusters. Thus, populations of clusters are more evolved than associations in terms of their spatial distribution, having traveled significantly from their birth sites within a few tens of Myr, whereas associations show evidence of disruption occurring very quickly after their formation. The clustering of the stellar systems resembles that of the turbulent interstellar medium that drives the star formation process, correlating the components of unbound star-forming complexes in a hierarchical manner that disperses shortly after formation, suggestive of a single, continuous mode of star formation across all galaxies.
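The angular two-point correlation function used in such studies can be illustrated with a simplified estimator. The sketch below uses the natural estimator w ≈ DD/RR − 1 on toy 2-D positions; an actual survey analysis would typically use the Landy-Szalay estimator on sky coordinates, and all positions here are synthetic:

```python
import numpy as np

def two_point_w(data, rand, bins):
    """Natural estimator of the two-point correlation function,
    w(theta) ~ (RR-normalised) DD/RR - 1, in separation bins."""
    def pair_counts(a):
        diff = a[:, None, :] - a[None, :, :]
        d = np.sqrt((diff ** 2).sum(-1))
        d = d[np.triu_indices(len(a), k=1)]      # unique pairs only
        return np.histogram(d, bins=bins)[0].astype(float)

    dd, rr = pair_counts(data), pair_counts(rand)
    n_d, n_r = len(data), len(rand)
    norm = (n_r * (n_r - 1)) / (n_d * (n_d - 1))  # account for sample sizes
    with np.errstate(divide="ignore", invalid="ignore"):
        return norm * dd / rr - 1.0

rng = np.random.default_rng(1)
# five tight "star-forming complexes" of 20 points each vs a uniform field
centers = rng.uniform(0, 100, (5, 2))
data = (centers[:, None, :] + rng.normal(0, 0.5, (5, 20, 2))).reshape(-1, 2)
rand = rng.uniform(0, 100, (400, 2))
bins = np.array([0.0, 2.0, 50.0])
w = two_point_w(data, rand, bins)  # strong excess in the smallest bin
```

Clustered points give w well above zero on small separations, which is the signature the paper measures as a function of cluster age.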
Hierarchical Design Method for Micro Device
Directory of Open Access Journals (Sweden)
Zheng Liu
2013-05-01
Full Text Available The traditional mask-first design flow for micro devices is unintuitive and cumbersome for designers. A hierarchical design method, together with the key technologies involved in the feature-mapping procedure, is presented. With the feature-based design framework, the model of a micro device is organized by various features at different design stages, which can be converted into each other based on mapping rules. The feature technology is the foundation of the three-level design flow, which provides a more efficient way of designing. In the system level, functional features provide the top level of schematic and functional description. After the functional mapping procedure, parametric design features construct the 3D model of the micro device in the device level, based on a Hybrid Model representation. By means of constraint features, the corresponding revision rules are applied to the rough model to optimize the original structure. As a result, the model reconstruction algorithm benefits the model revision and constraint-feature mapping process. Moreover, the formulated description of manufacturing-feature derivation provides an automatic way of model conversion.
Timchenko, Leonid; Yarovyi, Andrii; Kokriatskaya, Nataliya; Nakonechna, Svitlana; Abramenko, Ludmila; Ławicki, Tomasz; Popiel, Piotr; Yesmakhanova, Laura
2016-09-01
The paper presents a method of parallel-hierarchical transformations for rapid recognition of dynamic images using GPU technology. The direct parallel-hierarchical transformations are based on a cluster hardware platform oriented to both CPUs and GPUs. Mathematical models for training the parallel-hierarchical (PH) network for the transformation are developed, as well as a training method of the PH network for recognition of dynamic images. This research is most topical for problems of organizing high-performance computation on very large arrays of information, designed to implement multi-stage sensing and processing as well as compaction and recognition of data in informational structures and computer devices. The method has such advantages as high performance through the use of recent advances in parallelization, the possibility of working with images of very high dimension, ease of scaling when the number of nodes in the cluster changes, and automatic scanning of the local network to detect compute nodes.
Content Based Image Retrieval using Hierarchical and K-Means Clustering Techniques
Directory of Open Access Journals (Sweden)
V.S.V.S. Murthy
2010-03-01
Full Text Available In this paper we present an image retrieval system that takes an image as the input query and retrieves images based on image content. Content-Based Image Retrieval is an approach for retrieving semantically relevant images from an image database based on automatically derived image features. The unique aspect of the system is its combined use of hierarchical and k-means clustering techniques. The proposed procedure consists of two stages: first, most of the images are filtered out by hierarchical clustering, and then the clustered images are passed to k-means, yielding better-matched image results.
Hierarchical Control for Multiple DC-Microgrids Clusters
DEFF Research Database (Denmark)
Shafiee, Qobad; Dragicevic, Tomislav; Vasquez, Juan Carlos
2014-01-01
DC microgrids (MGs) have gained research interest during recent years because of many potential advantages as compared to the AC system. To ensure reliable operation of a low-voltage DC MG as well as its intelligent operation with the other DC MGs, a hierarchical control is proposed in this paper.
Bae, Hyoung Won; Rho, Seungsoo; Lee, Hye Sun; Lee, Naeun; Hong, Samin; Seong, Gong Je; Sung, Kyung Rim; Kim, Chan Yun
2014-04-29
To classify medically treated open-angle glaucoma (OAG) by the pattern of progression using hierarchical cluster analysis, and to determine OAG progression characteristics by comparing clusters. Ninety-five eyes of 95 OAG patients who received medical treatment and had undergone visual field (VF) testing at least once per year for 5 or more years were included. OAG was classified into subgroups using hierarchical cluster analysis based on the following five variables: baseline mean deviation (MD), baseline visual field index (VFI), MD slope, VFI slope, and Glaucoma Progression Analysis (GPA) printout. Other parameters were then compared between clusters. Two clusters emerged from the hierarchical cluster analysis. Cluster 1 showed -4.06 ± 2.43 dB baseline MD, 92.58% ± 6.27% baseline VFI, -0.28 ± 0.38 dB per year MD slope, -0.52% ± 0.81% per year VFI slope, and all "no progression" cases in the GPA printout, whereas cluster 2 showed -8.68 ± 3.81 baseline MD, 77.54 ± 12.98 baseline VFI, -0.72 ± 0.55 MD slope, -2.22 ± 1.89 VFI slope, and seven "possible" and four "likely" progression cases in the GPA printout. There were no significant differences in age, sex, mean IOP, central corneal thickness, or axial length between clusters. However, cluster 2 included more high-tension glaucoma patients and used significantly more antiglaucoma eye drops than cluster 1. Hierarchical cluster analysis of progression patterns divided OAG into slow and fast progression groups, as evidenced by the parameters of glaucomatous progression in VF testing. In the fast progression group, the prevalence of high-tension glaucoma was greater and the number of antiglaucoma medications administered was higher than in the slow progression group. Copyright 2014 The Association for Research in Vision and Ophthalmology, Inc.
A Reexamination of Methods of Hierarchic Composition in the AHP
Institute of Scientific and Technical Information of China (English)
ZHANG Zhi-yong
2002-01-01
This paper demonstrates that we should use two different hierarchic composition methods for the two different types of levels in the AHP. The first method is using the weighted geometric mean to synthesize the judgments of alternative-type-level elements, which is the only hierarchic composition method for the alternative-type level in an AHP hierarchy, and the rank is preserved automatically. The second one is using the weighted arithmetic mean to synthesize the priorities of the criteria-type-level elements, which is the only hierarchic composition method for all the criteria-type levels, and rank reversals are allowed.
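The two composition rules can be stated compactly. In the sketch below, the criteria weights and local priorities are invented purely for illustration:

```python
import math

def geometric_composition(priorities, weights):
    """Weighted geometric mean: the composition rule for
    alternative-type levels (rank is preserved automatically)."""
    return math.prod(p ** w for p, w in zip(priorities, weights))

def arithmetic_composition(priorities, weights):
    """Weighted arithmetic mean: the composition rule for
    criteria-type levels (rank reversals are allowed)."""
    return sum(w * p for p, w in zip(priorities, weights))

# two criteria with weights 0.6 / 0.4; local priorities of one alternative
weights = [0.6, 0.4]
alt = [0.7, 0.2]
global_arith = arithmetic_composition(alt, weights)  # 0.6*0.7 + 0.4*0.2 = 0.50
global_geom = geometric_composition(alt, weights)    # 0.7**0.6 * 0.2**0.4
```

Because the geometric rule multiplies ratios rather than summing contributions, adding or removing an alternative cannot change the order of the remaining ones, which is the rank-preservation property the paper relies on.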
Periorbital melasma: Hierarchical cluster analysis of clinical features in Asian patients.
Jung, Y S; Bae, J M; Kim, B J; Kang, J-S; Cho, S B
2017-03-19
Studies have shown melasma lesions to be distributed across the face in centrofacial, malar, and mandibular patterns. Meanwhile, however, melasma lesions of the periorbital area have yet to be thoroughly described. We analyzed normal and ultraviolet light-exposed photographs of patients with melasma. The periorbital melasma lesions were measured according to anatomical reference points and a hierarchical cluster analysis was performed. The periorbital melasma lesions showed clinical features of fine and homogenous melasma pigmentation, involving both the upper and lower eyelids that extended to other anatomical sites with a darker and coarser appearance. The hierarchical cluster analysis indicated that patients with periorbital melasma can be categorized into two clusters according to the surface anatomy of the face. Significant differences between cluster 1 and cluster 2 were found in lateral distance and inferolateral distance, but not in medial distance and superior distance. Comparing the two clusters, patients in cluster 2 were found to be significantly older and more commonly accompanied by melasma lesions of the temple and medial cheek. Our hierarchical cluster analysis of periorbital melasma lesions demonstrated that Asian patients with periorbital melasma can be categorized into two clusters according to the surface anatomy of the face. © 2017 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
NEW METHOD TO ESTIMATE SCALING OF POWER-LAW DEGREE DISTRIBUTION AND HIERARCHICAL NETWORKS
Institute of Scientific and Technical Information of China (English)
YANG Bo; DUAN Wen-qi; CHEN Zhong
2006-01-01
A new method and a corresponding numerical procedure are introduced to estimate the scaling exponents of the power-law degree distribution and the hierarchical clustering function for complex networks. This method can overcome the bias and inaccuracy of the graphical linear fitting methods commonly used in current network research. Furthermore, it is verified to have higher goodness-of-fit than the graphical methods by comparing the KS (Kolmogorov-Smirnov) test statistics for 10 CNN (Connecting Nearest-Neighbor) networks.
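The paper's exact estimator is not reproduced here, but the standard maximum-likelihood alternative to graphical log-log fitting can be sketched as follows (continuous-case MLE on synthetic power-law data drawn by inverse-transform sampling):

```python
import math
import random

def mle_exponent(xs, xmin):
    """Continuous maximum-likelihood estimate of a power-law exponent,
    alpha = 1 + n / sum(ln(x / xmin)) over the tail x >= xmin. This is
    the usual unbiased alternative to fitting a line on a log-log plot."""
    tail = [x for x in xs if x >= xmin]
    return 1.0 + len(tail) / sum(math.log(x / xmin) for x in tail)

# draw from a pure power law with alpha = 2.5 via inverse-transform sampling
random.seed(0)
alpha_true, xmin = 2.5, 1.0
xs = [xmin * (1 - random.random()) ** (-1 / (alpha_true - 1))
      for _ in range(20000)]
alpha_hat = mle_exponent(xs, xmin)  # close to 2.5
```

Graphical fits are biased because binning and logarithmic noise weighting distort the tail; the MLE avoids both, which is the same criticism the paper makes.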
The Evolution of Galaxy Clustering in Hierarchical Models
1999-01-01
The main ingredients of recent semi-analytic models of galaxy formation are summarised. We present predictions for the galaxy clustering properties of a well-specified ΛCDM model whose parameters are constrained by observed local galaxy properties. We also present preliminary predictions for the evolution of clustering that can be probed with deep pencil-beam surveys.
3D Nearest Neighbour Search Using a Clustered Hierarchical Tree Structure
DEFF Research Database (Denmark)
Suhaibah, A.; Uznir, U.; Antón Castro, Francesc/François
2016-01-01
One of the commonly used analyses for retrieving surrounding information is Nearest Neighbour (NN) analysis: it takes a point location and identifies the surrounding neighbours, for instance to determine where a franchise unit should be located for the best visibility. However, with the immense number of urban datasets, the retrieval and analysis of nearest neighbour information, and its efficiency, will become more complex and crucial. In this paper, we present a technique to retrieve nearest neighbour information in 3D space using a clustered hierarchical tree structure. Based on our findings, the proposed approach showed a substantial improvement in response time compared to existing spatial access methods in databases. The query performance was tested using a dataset consisting of 500,000 point locations of building and franchising units.
Directory of Open Access Journals (Sweden)
M. Safish Mary
2012-04-01
Full Text Available Classification of large amounts of data is a time-consuming process but crucial for analysis and decision making. Radial Basis Function (RBF) networks are widely used for classification and regression analysis. In this paper, we have studied the performance of RBF neural networks in classifying car sales based on demand, using a kernel density estimation algorithm that produces classification accuracy comparable to that provided by support vector machines. We propose a new instance-based data selection method in which redundant instances are removed with the help of a threshold, thus improving the time complexity along with the classification accuracy. The instance-based selection of the data set reduces the number of clusters formed, and thereby the number of centers considered for building the RBF network. Further, the efficiency of the training is improved by applying a hierarchical clustering technique to reduce the number of clusters formed at every step. The paper explains the algorithms used for classification and for conditioning the data, and discusses the complexities involved in classifying sales data for analysis and decision making.
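The idea of building an RBF classifier around cluster centres can be sketched as follows. Everything here is invented for illustration: the 1-D "sales" data, the centres (taken as the means of two obvious groups), and the gamma value; the output weights are fitted by plain least squares rather than the paper's full training procedure:

```python
import numpy as np

def rbf_features(X, centers, gamma):
    """Gaussian RBF design matrix, phi[i, j] = exp(-gamma * ||x_i - c_j||^2).
    Each cluster centre contributes one basis function."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

# toy two-class data: low-demand vs high-demand instances
X = np.array([[0.0], [0.2], [0.4], [2.0], [2.2], [2.4]])
y = np.array([0, 0, 0, 1, 1, 1], dtype=float)

centers = np.array([[0.2], [2.2]])        # crude cluster means, one per class
phi = rbf_features(X, centers, gamma=1.0)
w, *_ = np.linalg.lstsq(phi, y, rcond=None)  # linear output layer
pred = (phi @ w > 0.5).astype(int)
```

Fewer clusters mean fewer centres, a smaller design matrix, and hence cheaper training, which is the efficiency argument the paper makes.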
Moens, Katrien; Siegert, Richard J; Taylor, Steve; Namisango, Eve; Harding, Richard
2015-01-01
Symptom research across conditions has historically focused on single symptoms, and the burden of multiple symptoms and their interactions has been relatively neglected, especially in people living with HIV. Symptom cluster studies are required to set priorities in treatment planning and to lessen the total symptom burden. This study aimed to identify and compare symptom clusters among people living with HIV attending five palliative care facilities in two sub-Saharan African countries. Data from cross-sectional self-report of seven-day symptom prevalence on the 32-item Memorial Symptom Assessment Scale-Short Form were used. A hierarchical cluster analysis was conducted using Ward's method, applying squared Euclidean distance as the similarity measure to determine the clusters. Contingency tables, χ2 tests and ANOVA were used to compare the clusters by patient-specific characteristics and distress scores. Among the sample (N=217) the mean age was 36.5 (SD 9.0), 73.2% were female, and 49.1% were on antiretroviral therapy (ART). The cluster analysis produced five symptom clusters identified as: 1) dermatological; 2) generalised anxiety and elimination; 3) social and image; 4) persistently present; and 5) a gastrointestinal-related symptom cluster. The patients in the first three symptom clusters reported the highest physical and psychological distress scores. Patient characteristics varied significantly across the five clusters by functional status (worst functional physical status in cluster one; clusters two and three, p=0.012) and global distress (F=26.8; worst for cluster one, best for cluster four). The greatest burden is associated with cluster one, which should be prioritised in clinical management. Further symptom cluster research in people living with HIV, with longitudinally collected symptom data to test cluster stability and identify common symptom trajectories, is recommended.
Directory of Open Access Journals (Sweden)
Kuate-Defo, Bathélémy
2001-01-01
Full Text Available This paper merges two parallel developments since the 1970s of new statistical tools for data analysis: statistical methods known as hazard models, used for analyzing event-duration data, and statistical methods for analyzing hierarchically clustered data, known as multilevel models. These developments have rarely been integrated in research practice, and the formalization and estimation of models for hierarchically clustered survival data remain largely uncharted. I attempt to fill some of this gap and demonstrate the merits of formulating and estimating multilevel hazard models with longitudinal data. [French abstract, translated:] This study integrates two state-of-the-art statistical approaches to quantitative data analysis developed since the 1970s: statistical methods for the analysis of event-history data (survival methods) and statistical methods for the analysis of hierarchical data (multilevel methods). These two approaches have rarely been combined in research practice, and consequently the formulation and estimation of models suited to longitudinal and hierarchically nested data remain an essentially unexplored field. I attempt to fill this gap, using real public health data to demonstrate the merits and contexts of formulating and estimating multilevel, multistate models for event-history and longitudinal data.
Sanfilippo, Antonio [Richland, WA; Calapristi, Augustin J [West Richland, WA; Crow, Vernon L [Richland, WA; Hetzler, Elizabeth G [Kennewick, WA; Turner, Alan E [Kennewick, WA
2009-12-22
Document clustering methods, document cluster label disambiguation methods, document clustering apparatuses, and articles of manufacture are described. In one aspect, a document clustering method includes providing a document set comprising a plurality of documents, providing a cluster comprising a subset of the documents of the document set, using a plurality of terms of the documents, providing a cluster label indicative of subject matter content of the documents of the cluster, wherein the cluster label comprises a plurality of word senses, and selecting one of the word senses of the cluster label.
Directory of Open Access Journals (Sweden)
Górecki J.
2017-01-01
Full Text Available Several successful approaches to the structure determination of hierarchical Archimedean copulas (HACs) proposed in the literature rely on agglomerative clustering and Kendall's correlation coefficient. However, no theoretical proof justifying such approaches has been presented. This work fills this gap and introduces a theorem showing that, given the matrix of the pairwise Kendall correlation coefficients corresponding to a HAC, its structure can be recovered by an agglomerative clustering technique.
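The recovery procedure this theorem justifies can be sketched as follows: repeatedly merge the two clusters whose leaves have the largest average pairwise Kendall correlation. The tau matrix below is invented to be consistent with the nested structure ((0, 1), 2):

```python
import numpy as np

def recover_structure(tau):
    """Agglomerative recovery of a HAC tree from a matrix of pairwise
    Kendall correlation coefficients: the pair of variables joined
    lowest in the tree has the largest tau, so greedy merging by
    maximal average tau rebuilds the nesting."""
    tau = np.asarray(tau, dtype=float)
    trees = list(range(len(tau)))             # nested tuples of leaf indices
    leaves = [[i] for i in range(len(tau))]   # flat leaf sets per subtree
    while len(trees) > 1:
        pairs = [(i, j) for i in range(len(trees))
                 for j in range(i + 1, len(trees))]
        i, j = max(pairs, key=lambda ij: np.mean(
            [tau[a, b] for a in leaves[ij[0]] for b in leaves[ij[1]]]))
        trees[i] = (trees[i], trees[j])
        leaves[i] = leaves[i] + leaves[j]
        del trees[j], leaves[j]
    return trees[0]

# tau matrix consistent with the nested copula structure ((0, 1), 2):
tau = [[1.0, 0.6, 0.3],
       [0.6, 1.0, 0.3],
       [0.3, 0.3, 1.0]]
structure = recover_structure(tau)
```

In practice the tau matrix is estimated from data, so the theorem's exact-recovery guarantee becomes approximate recovery.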
Prediction of in vitro and in vivo oestrogen receptor activity using hierarchical clustering
In this study, hierarchical clustering classification models were developed to predict in vitro and in vivo oestrogen receptor (ER) activity. Classification models were developed for binding, agonist, and antagonist in vitro ER activity and for mouse in vivo uterotrophic ER bindi...
Semi-supervised clustering methods
Bair, Eric
2013-01-01
Cluster analysis methods seek to partition a data set into homogeneous subgroups. It is useful in a wide variety of applications, including document processing and modern genetics. Conventional clustering methods are unsupervised, meaning that there is no outcome variable nor is anything known about the relationship between the observations in the data set. In many situations, however, information about the clusters is available in addition to the values of the features. For example, the cluster labels of some observations may be known, or certain observations may be known to belong to the same cluster. In other cases, one may wish to identify clusters that are associated with a particular outcome variable. This review describes several clustering algorithms (known as “semi-supervised clustering” methods) that can be applied in these situations. The majority of these methods are modifications of the popular k-means clustering method, and several of them will be described in detail. A brief description of some other semi-supervised clustering algorithms is also provided. PMID:24729830
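One of the k-means modifications the review covers, seeded k-means, can be sketched as follows. This is a minimal version in which centroids are initialised from a few labelled points; the data and seed indices are invented for illustration, and constraint-based variants (must-link/cannot-link) are not shown:

```python
import numpy as np

def seeded_kmeans(X, seeds, n_iter=20):
    """Seeded k-means: centroids start at the mean of the labelled
    points of each cluster, then ordinary k-means iterations follow."""
    X = np.asarray(X, dtype=float)
    centers = np.array([X[idx].mean(axis=0) for idx in seeds])
    for _ in range(n_iter):
        # assign every point to its nearest centroid, then recompute means
        assign = np.linalg.norm(X[:, None] - centers[None], axis=-1).argmin(1)
        centers = np.array([X[assign == k].mean(axis=0)
                            for k in range(len(centers))])
    return assign, centers

# two blobs; one labelled point per cluster acts as the "supervision"
X = [[0.0, 0.0], [0.1, 0.0], [0.0, 0.1],
     [5.0, 5.0], [5.1, 5.0], [5.0, 5.1]]
assign, centers = seeded_kmeans(X, seeds=[[0], [3]])
```

The seeds both fix the number of clusters and remove the sensitivity to random initialisation that plain k-means suffers from.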
On Comparison of Clustering Methods for Pharmacoepidemiological Data.
Feuillet, Fanny; Bellanger, Lise; Hardouin, Jean-Benoit; Victorri-Vigneau, Caroline; Sébille, Véronique
2015-01-01
The high consumption of psychotropic drugs is a public health problem. Rigorous statistical methods are needed to identify consumption characteristics in the post-marketing phase. Agglomerative hierarchical clustering (AHC) and latent class analysis (LCA) can both provide clusters of subjects with similar characteristics. The objective of this study was to compare these two methods in pharmacoepidemiology on several criteria: number of clusters, concordance, interpretation, and stability over time. On a dataset on bromazepam consumption, the two methods show good concordance. AHC is a very stable method and provides homogeneous classes. LCA is an inferential approach and seems to identify extreme deviant behavior more accurately.
Complexity of major UK companies between 2006 and 2010: Hierarchical structure method approach
Ulusoy, Tolga; Keskin, Mustafa; Shirvani, Ayoub; Deviren, Bayram; Kantar, Ersin; Çaǧrı Dönmez, Cem
2012-11-01
This study reports on the topology of the top 40 UK companies, analysed for predictive verification of markets over the period 2006-2010 by applying the concepts of the minimal spanning tree (MST) and the hierarchical tree (HT). The construction of the MST and the HT is preceded by a brief description of the methodology and a definition of the correlation function between a pair of companies, based on the London Stock Exchange (LSE) index, in order to quantify synchronization between the companies. A derivation of the hierarchical organization and the construction of minimal-spanning and hierarchical trees for the 2006-2008 and 2008-2010 periods have been used, and the results validate the predictive verification of the applied semantics. The trees are known to be useful tools for perceiving and detecting the global structure, taxonomy and hierarchy in financial data. From these trees, two different clusters of companies were detected in 2006; three clusters appear in 2008 and two between 2008 and 2010, according to their proximity. The clusters match each other as regards common production activities or strong interrelationships, and the key companies generally correspond to major economic activities, as expected. This work gives a comparative approach between the MST and HT methods from statistical physics and information theory, together with an analysis of financial markets that may give new, valuable and useful information on financial market dynamics.
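The MST construction over correlations can be sketched with Prim's algorithm and the commonly used Mantegna distance d_ij = sqrt(2(1 − ρ_ij)); the small correlation matrix below is invented for illustration:

```python
import numpy as np

def correlation_mst(rho):
    """Minimal spanning tree over the distance d_ij = sqrt(2*(1 - rho_ij)),
    built with Prim's algorithm: grow the tree one cheapest edge at a time."""
    d = np.sqrt(2.0 * (1.0 - np.asarray(rho, dtype=float)))
    n = len(d)
    in_tree, edges = {0}, []
    while len(in_tree) < n:
        i, j = min(((i, j) for i in in_tree
                    for j in range(n) if j not in in_tree),
                   key=lambda ij: d[ij])
        edges.append((i, j))
        in_tree.add(j)
    return edges

# toy correlations: companies 0-1 and 2-3 form two tight clusters
rho = np.array([[1.0, 0.9, 0.2, 0.1],
                [0.9, 1.0, 0.1, 0.2],
                [0.2, 0.1, 1.0, 0.8],
                [0.1, 0.2, 0.8, 1.0]])
edges = correlation_mst(rho)  # n - 1 = 3 edges; 0-1 and 2-3 stay linked
```

Highly correlated companies end up adjacent in the tree, which is why the MST's branches reveal the cluster structure discussed in the study.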
DEFF Research Database (Denmark)
Ussery, David; Bohlin, Jon; Skjerve, Eystein
2009-01-01
Recently there has been an explosion in the availability of bacterial genomic sequences, now making possible an analysis of genomic signatures across more than 800 different bacterial chromosomes, from a wide variety of environments. Using genomic signatures, we pair-wise compared 867...... different genomic DNA sequences, taken from chromosomes and plasmids more than 100,000 base-pairs in length. Hierarchical clustering was performed on the outcome of the comparisons before a multinomial regression model was fitted. The regression model included the cluster groups as the response variable...... AT content. Small improvements to the regression model, although significant, were also obtained by factors such as sequence size, habitat, growth temperature, selective pressure measured as oligonucleotide usage variance, and oxygen requirement. The statistics obtained using hierarchical clustering...
Signatures of Hierarchical Clustering in Dark Matter Detection Experiments
Stiff, D; Frieman, Joshua A
2001-01-01
In the cold dark matter model of structure formation, galaxies are assembled hierarchically from mergers and the accretion of subclumps. This process is expected to leave residual substructure in the Galactic dark halo, including partially disrupted clumps and their associated tidal debris. We develop a model for such halo substructure and study its implications for dark matter (WIMP and axion) detection experiments. We combine the Press-Schechter model for the distribution of halo subclump masses with N-body simulations of the evolution and disruption of individual clumps as they orbit through the evolving Galaxy to derive the probability that the Earth is passing through a subclump or stream of a given density. Our results suggest that it is likely that the local complement of dark matter particles includes a 1-5% contribution from a single clump. The implications for dark matter detection experiments are significant, since the disrupted clump is composed of a `cold' flow of high-velocity particles. We desc...
Clustering dynamic textures with the hierarchical em algorithm for modeling video.
Mumtaz, Adeel; Coviello, Emanuele; Lanckriet, Gert R G; Chan, Antoni B
2013-07-01
Dynamic texture (DT) is a probabilistic generative model, defined over space and time, that represents a video as the output of a linear dynamical system (LDS). The DT model has been applied to a wide variety of computer vision problems, such as motion segmentation, motion classification, and video registration. In this paper, we derive a new algorithm for clustering DT models that is based on the hierarchical EM algorithm. The proposed clustering algorithm is capable of both clustering DTs and learning novel DT cluster centers that are representative of the cluster members in a manner that is consistent with the underlying generative probabilistic model of the DT. We also derive an efficient recursive algorithm for sensitivity analysis of the discrete-time Kalman smoothing filter, which is used as the basis for computing expectations in the E-step of the HEM algorithm. Finally, we demonstrate the efficacy of the clustering algorithm on several applications in motion analysis, including hierarchical motion clustering, semantic motion annotation, and learning bag-of-systems (BoS) codebooks for dynamic texture recognition.
Improving the Decision Value of Hierarchical Text Clustering Using Term Overlap Detection
Directory of Open Access Journals (Sweden)
Nilupulee Nathawitharana
2015-09-01
Full Text Available Humans are used to expressing themselves in written language, and language provides a medium with which we can describe our experiences in detail, incorporating individuality. Even though documents provide a rich source of information, it becomes very difficult to identify, extract, summarize and search them when vast amounts of documents are collected, especially over time. Document clustering is a technique that has been widely used to group documents based on similarity of content represented by the words used. Once key groups are identified, further drill-down into sub-groupings is facilitated by the use of hierarchical clustering. Clustering and hierarchical clustering are very useful when applied to numerical and categorical data, and cluster accuracy and purity measures exist to evaluate the outcomes of a clustering exercise. Although the same measures have been applied to text clustering, text clusters are based on words or terms which can be repeated across documents associated with different topics. Therefore text data cannot be considered a direct ‘coding’ of a particular experience or situation, in contrast to numerical and categorical data, and term overlap is a very common characteristic in text clustering. In this paper we propose a new technique and methodology for capturing term overlap in text documents, highlighting the different situations such overlap could signify, and we discuss why such understanding is important for obtaining value from text clustering. Experiments were conducted on a widely used text document collection, where the proposed methodology allowed us to explore the term diversity of a given document collection and to obtain clusters with minimum term overlap.
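The notion of term overlap between text clusters can be illustrated with a small sketch. This is not the paper's methodology; Jaccard similarity between cluster term sets is an illustrative stand-in, and the function and cluster names are hypothetical:

```python
def term_overlap(cluster_terms):
    """Pairwise Jaccard overlap between the term sets of each cluster.
    `cluster_terms` maps a cluster label to a set of terms; values near 0
    mean well-separated vocabularies, values near 1 mean heavy overlap."""
    labels = sorted(cluster_terms)
    overlaps = {}
    for a_idx, a in enumerate(labels):
        for b in labels[a_idx + 1:]:
            inter = cluster_terms[a] & cluster_terms[b]
            union = cluster_terms[a] | cluster_terms[b]
            overlaps[(a, b)] = len(inter) / len(union)
    return overlaps

clusters = {
    "sports": {"match", "team", "score", "season"},
    "finance": {"market", "score", "bank", "season"},
    "cooking": {"recipe", "oven", "season"},
}
print(term_overlap(clusters))
```

Here "season" and "score" recur across topically distinct clusters, which is exactly the kind of overlap that purity measures borrowed from numerical clustering do not expose.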
A novel load balancing method for hierarchical federation simulation system
Bin, Xiao; Xiao, Tian-yuan
2013-07-01
In contrast with a single-HLA-federation framework, a hierarchical federation framework can improve the performance of a large-scale simulation system to a certain degree by distributing load over several RTIs. However, in a hierarchical federation framework the RTI is still the center of message exchange for the federation, and it remains the performance bottleneck: the data explosion in a large-scale HLA federation may overload the RTI, causing performance degradation or even fatal errors. To address this problem, this paper proposes a load balancing method for hierarchical federation simulation systems based on queuing theory, comprising three main modules: queue length prediction, a load control policy, and a controller. The method improves the utilization of federate node resources and improves the performance of the HLA simulation system by balancing load across the RTIG and federates. Finally, experimental results are presented to demonstrate the effectiveness of the method.
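The abstract does not specify the queue-length predictor; a minimal sketch under a textbook M/M/1 assumption, with a hypothetical migration threshold, might look like this (names and the threshold value are illustrative, not from the paper):

```python
def expected_queue_length(arrival_rate, service_rate):
    """M/M/1 steady-state mean number in the system: L = rho / (1 - rho),
    with utilisation rho = lambda / mu. Valid only for rho < 1."""
    rho = arrival_rate / service_rate
    if rho >= 1.0:
        raise ValueError("queue is unstable (rho >= 1)")
    return rho / (1.0 - rho)

def should_migrate(arrival_rate, service_rate, threshold=5.0):
    """Hypothetical control policy: offload federates from an RTI when
    its predicted queue length exceeds `threshold`."""
    return expected_queue_length(arrival_rate, service_rate) > threshold

print(expected_queue_length(80.0, 100.0))  # rho = 0.8 -> L = 4.0
print(should_migrate(90.0, 100.0))         # rho = 0.9 -> L = 9.0 -> True
```

The point of such a predictor is that queue length blows up non-linearly as utilisation approaches 1, so acting on a predicted length rather than raw message rate gives the controller earlier warning.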
Chang, Seongmin; Baek, Sungmin; Kim, Ki-Ook; Cho, Maenghyo
2015-06-01
A system identification method has been proposed to validate finite element models of complex structures using measured modal data. The finite element method is used for the system identification as well as for the structural analysis. In perturbation methods, the perturbed system is expressed as a combination of the baseline structure and the related perturbations. The changes in dynamic responses are applied to determine the structural modifications so that equilibrium may be satisfied in the perturbed system. In practical applications, the dynamic measurements are carried out on a limited number of accessible nodes and associated degrees of freedom. The equilibrium equation is, in principle, expressed in terms of the measured (master, primary) and unmeasured (slave, secondary) degrees of freedom. Only the specified degrees of freedom are included in the equation formulation for identification, and the unspecified degrees of freedom are eliminated through the iterative improved reduction scheme. A large number of system parameters are included as unknown variables in the system identification of large-scale structures. An identification problem with a large number of system parameters requires a large amount of computation time and resources. In the present study, a hierarchical clustering algorithm is applied to reduce the number of system parameters effectively. Numerical examples demonstrate that the proposed method greatly improves the accuracy and efficiency of the inverse identification problem.
Methods for analyzing cost effectiveness data from cluster randomized trials
Directory of Open Access Journals (Sweden)
Clark Allan
2007-09-01
Full Text Available Abstract Background Measurement of individuals' costs and outcomes in randomized trials allows uncertainty about cost effectiveness to be quantified. Uncertainty is expressed as probabilities that an intervention is cost effective, and confidence intervals of incremental cost effectiveness ratios. Randomizing clusters instead of individuals tends to increase uncertainty but such data are often analysed incorrectly in published studies. Methods We used data from a cluster randomized trial to demonstrate five appropriate analytic methods: (1) joint modeling of costs and effects with two-stage non-parametric bootstrap sampling of clusters then individuals, (2) joint modeling of costs and effects with Bayesian hierarchical models, and (3) linear regression of net benefits at different willingness-to-pay levels using (a) least squares regression with Huber-White robust adjustment of errors, (b) a least squares hierarchical model, and (c) a Bayesian hierarchical model. Results All five methods produced similar results, with greater uncertainty than if cluster randomization was not accounted for. Conclusion Cost effectiveness analyses alongside cluster randomized trials need to account for study design. Several theoretically coherent methods can be implemented with common statistical software.
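The net-benefit construction underlying method (3) can be sketched in a toy form. This deliberately ignores clustering (the very thing the paper corrects for) and uses the fact that OLS on a single binary treatment indicator reduces to a difference in group means; all names and numbers are illustrative:

```python
def net_benefit_effect(costs, effects, treated, wtp):
    """Incremental net benefit at willingness-to-pay `wtp`:
    form nb_i = wtp * effect_i - cost_i and regress on a treatment
    indicator. With one binary covariate the OLS slope equals the
    difference in group means of nb."""
    nb = [wtp * e - c for c, e in zip(costs, effects)]
    n_treat = sum(treated)
    t_mean = sum(x for x, t in zip(nb, treated) if t) / n_treat
    c_mean = sum(x for x, t in zip(nb, treated) if not t) / (len(treated) - n_treat)
    return t_mean - c_mean

costs   = [1000, 1200, 900, 1100, 400, 500, 450, 550]   # per-patient costs
effects = [0.70, 0.80, 0.60, 0.75, 0.50, 0.55, 0.45, 0.60]  # e.g. QALYs
treated = [1, 1, 1, 1, 0, 0, 0, 0]

# Positive values favour the intervention at this willingness-to-pay level.
print(net_benefit_effect(costs, effects, treated, wtp=20000))
```

Repeating the regression over a grid of willingness-to-pay values traces out a cost-effectiveness acceptability curve; the cluster-robust and hierarchical variants in the paper change the standard errors, not this point estimate.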
Wang, Jin; Sun, Xiangping; Nahavandi, Saeid; Kouzani, Abbas; Wu, Yuchuan; She, Mary
2014-11-01
Biomedical time series clustering that automatically groups a collection of time series according to their internal similarity is of importance for medical record management and inspection such as bio-signals archiving and retrieval. In this paper, a novel framework that automatically groups a set of unlabelled multichannel biomedical time series according to their internal structural similarity is proposed. Specifically, we treat a multichannel biomedical time series as a document and extract local segments from the time series as words. We extend a topic model, i.e., the Hierarchical probabilistic Latent Semantic Analysis (H-pLSA), which was originally developed for visual motion analysis to cluster a set of unlabelled multichannel time series. The H-pLSA models each channel of the multichannel time series using a local pLSA in the first layer. The topics learned in the local pLSA are then fed to a global pLSA in the second layer to discover the categories of multichannel time series. Experiments on a dataset extracted from multichannel Electrocardiography (ECG) signals demonstrate that the proposed method performs better than previous state-of-the-art approaches and is relatively robust to the variations of parameters including length of local segments and dictionary size. Although the experimental evaluation used the multichannel ECG signals in a biometric scenario, the proposed algorithm is a universal framework for multichannel biomedical time series clustering according to their structural similarity, which has many applications in biomedical time series management. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
Hierarchical modelling for the environmental sciences statistical methods and applications
Clark, James S
2006-01-01
New statistical tools are changing the way in which scientists analyze and interpret data and models. Hierarchical Bayes and Markov Chain Monte Carlo methods for analysis provide a consistent framework for inference and prediction where information is heterogeneous and uncertain, processes are complicated, and responses depend on scale. Nowhere are these methods more promising than in the environmental sciences.
Directory of Open Access Journals (Sweden)
Guillaume Marrelec
Full Text Available The use of mutual information as a similarity measure in agglomerative hierarchical clustering (AHC) raises an important issue: some correction needs to be applied for the dimensionality of variables. In this work, we formulate the decision of merging dependent multivariate normal variables in an AHC procedure as a Bayesian model comparison. We found that the Bayesian formulation naturally shrinks the empirical covariance matrix towards a matrix set a priori (e.g., the identity), provides an automated stopping rule, and corrects for dimensionality using a term that scales up the measure as a function of the dimensionality of the variables. Also, the resulting log Bayes factor is asymptotically proportional to the plug-in estimate of mutual information, with an additive correction for dimensionality in agreement with the Bayesian information criterion. We investigated the behavior of these Bayesian alternatives (in exact and asymptotic forms) to mutual information on simulated and real data. An encouraging result was first derived on simulations: the hierarchical clustering based on the log Bayes factor outperformed off-the-shelf clustering techniques as well as raw and normalized mutual information in terms of classification accuracy. On a toy example, we found that the Bayesian approaches led to results that were similar to those of mutual information clustering techniques, with the advantage of an automated thresholding. On real functional magnetic resonance imaging (fMRI) datasets measuring brain activity, it identified clusters consistent with the established outcome of standard procedures. On this application, normalized mutual information had a highly atypical behavior, in the sense that it systematically favored very large clusters. These initial experiments suggest that the proposed Bayesian alternatives to mutual information are a useful new tool for hierarchical clustering.
Marrelec, Guillaume; Messé, Arnaud; Bellec, Pierre
2015-01-01
The use of mutual information as a similarity measure in agglomerative hierarchical clustering (AHC) raises an important issue: some correction needs to be applied for the dimensionality of variables. In this work, we formulate the decision of merging dependent multivariate normal variables in an AHC procedure as a Bayesian model comparison. We found that the Bayesian formulation naturally shrinks the empirical covariance matrix towards a matrix set a priori (e.g., the identity), provides an automated stopping rule, and corrects for dimensionality using a term that scales up the measure as a function of the dimensionality of the variables. Also, the resulting log Bayes factor is asymptotically proportional to the plug-in estimate of mutual information, with an additive correction for dimensionality in agreement with the Bayesian information criterion. We investigated the behavior of these Bayesian alternatives (in exact and asymptotic forms) to mutual information on simulated and real data. An encouraging result was first derived on simulations: the hierarchical clustering based on the log Bayes factor outperformed off-the-shelf clustering techniques as well as raw and normalized mutual information in terms of classification accuracy. On a toy example, we found that the Bayesian approaches led to results that were similar to those of mutual information clustering techniques, with the advantage of an automated thresholding. On real functional magnetic resonance imaging (fMRI) datasets measuring brain activity, it identified clusters consistent with the established outcome of standard procedures. On this application, normalized mutual information had a highly atypical behavior, in the sense that it systematically favored very large clusters. These initial experiments suggest that the proposed Bayesian alternatives to mutual information are a useful new tool for hierarchical clustering.
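The asymptotic correction can be sketched numerically in the simplest case of two univariate normal variables, where the plug-in mutual information is I = -0.5·log(1 - r²). This is a minimal illustration, not the authors' multivariate implementation; the parameter count in the penalty is an assumption of the sketch:

```python
import math

def plugin_mi(x, y):
    """Plug-in mutual information for two jointly normal scalars:
    I = -0.5 * log(1 - r^2), with r the sample correlation."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    r = sxy / math.sqrt(sxx * syy)
    return -0.5 * math.log(1.0 - r * r)

def merge_score(x, y, extra_params=1):
    """BIC-style asymptotic log Bayes factor: n * I_hat minus a
    dimensionality penalty of 0.5 * (extra covariance parameters the
    merge introduces) * log n. Positive score favours merging."""
    n = len(x)
    return n * plugin_mi(x, y) - 0.5 * extra_params * math.log(n)

x = [float(i) for i in range(50)]
y_dep = [2.0 * v + ((-1) ** i) for i, v in enumerate(x)]   # strongly dependent
y_ind = [((-1) ** i) * (1.0 + i % 3) for i in range(50)]   # roughly unrelated

print(merge_score(x, y_dep) > 0)  # dependent pair: merge
print(merge_score(x, y_ind) > 0)  # weakly correlated pair: penalty dominates
```

The penalty grows with log n and with the number of parameters a merge adds, which is what supplies the automated stopping rule: merging halts once no pair's estimated dependence outweighs its dimensionality cost.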
The evolution of Brightest Cluster Galaxies in a hierarchical universe
Tonini, Chiara; Croton, Darren; Maraston, Claudia; Thomas, Daniel
2012-01-01
We investigate the evolution of Brightest Cluster Galaxies (BCGs) from redshift z~1.6 to z~0. We use the semi-analytic model of Croton et al. (2006) with a new spectro-photometric model based on the Maraston (2005) stellar populations and a new recipe for dust extinction. We compare the model predictions of the K-band luminosity evolution and the J-K, V-I and I-K colour evolution with a series of datasets, including Collins et al. (Nature, 2009), who argued that semi-analytic models based on the Millennium simulation cannot reproduce the red colours and high luminosity of BCGs at z>1. We show instead that the model is well within the range of the observed luminosities and correctly reproduces the colour evolution of BCGs over the whole redshift range up to z~1.6. We argue that the success of the semi-analytic model is in large part due to the implementation of a more sophisticated spectro-photometric model. An analysis of the model BCGs shows an increase in mass by a factor ~2 since z~1, and star formation activity do...
Unglert, K.; Radić, V.; Jellinek, A. M.
2016-06-01
Variations in the spectral content of volcano seismicity related to changes in volcanic activity are commonly identified manually in spectrograms. However, long time series of monitoring data at volcano observatories require tools to facilitate automated and rapid processing. Techniques such as self-organizing maps (SOM) and principal component analysis (PCA) can help to quickly and automatically identify important patterns related to impending eruptions. For the first time, we evaluate the performance of SOM and PCA on synthetic volcano seismic spectra constructed from observations during two well-studied eruptions at Kīlauea Volcano, Hawai'i, that include features observed in many volcanic settings. In particular, our objective is to test which of the techniques can best retrieve a set of three spectral patterns that we used to compose a synthetic spectrogram. We find that, without a priori knowledge of the given set of patterns, neither SOM nor PCA can directly recover the spectra. We thus test hierarchical clustering, a commonly used method, to investigate whether clustering in the space of the principal components and on the SOM, respectively, can retrieve the known patterns. Our clustering method applied to the SOM fails to detect the correct number and shape of the known input spectra. In contrast, clustering of the data reconstructed by the first three PCA modes reproduces these patterns and their occurrence in time more consistently. This result suggests that PCA in combination with hierarchical clustering is a powerful practical tool for automated identification of characteristic patterns in volcano seismic spectra. Our results indicate that, in contrast to PCA, common clustering algorithms may not be ideal to group patterns on the SOM and that it is crucial to evaluate the performance of these tools on a control dataset prior to their application to real data.
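The hierarchical clustering step itself can be illustrated on synthetic "spectra" (toy data, not the study's Kīlauea dataset, and a naive O(n³) implementation rather than a library routine):

```python
def euclid(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def average_linkage(points, n_clusters):
    """Naive agglomerative clustering with average linkage:
    repeatedly merge the two clusters whose mean pairwise distance
    is smallest, until `n_clusters` clusters remain."""
    clusters = [[i] for i in range(len(points))]
    while len(clusters) > n_clusters:
        best = None
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                d = sum(euclid(points[i], points[j])
                        for i in clusters[a] for j in clusters[b])
                d /= len(clusters[a]) * len(clusters[b])
                if best is None or d < best[0]:
                    best = (d, a, b)
        _, a, b = best
        clusters[a] += clusters.pop(b)
    return clusters

# Synthetic "spectra": two low-frequency-dominated, two high-frequency-dominated.
spectra = [
    [9.0, 8.0, 1.0, 0.5],
    [8.5, 8.2, 1.2, 0.4],
    [1.0, 0.8, 9.1, 8.7],
    [0.9, 1.1, 8.8, 9.0],
]
print(sorted(sorted(c) for c in average_linkage(spectra, 2)))
```

Applying the same step to PCA-reconstructed spectra, as the study does, amounts to running this on points projected onto the first few principal modes, which suppresses the noise that would otherwise blur the cluster boundaries.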
Directory of Open Access Journals (Sweden)
Katrien Moens
Full Text Available Symptom research across conditions has historically focused on single symptoms, and the burden of multiple symptoms and their interactions has been relatively neglected, especially in people living with HIV. Symptom cluster studies are required to set priorities in treatment planning, and to lessen the total symptom burden. This study aimed to identify and compare symptom clusters among people living with HIV attending five palliative care facilities in two sub-Saharan African countries. Data from cross-sectional self-report of seven-day symptom prevalence on the 32-item Memorial Symptom Assessment Scale-Short Form were used. A hierarchical cluster analysis was conducted using Ward's method, applying squared Euclidean distance as the similarity measure to determine the clusters. Contingency tables, X2 tests and ANOVA were used to compare the clusters by patient-specific characteristics and distress scores. Among the sample (N=217) the mean age was 36.5 (SD 9.0), 73.2% were female, and 49.1% were on antiretroviral therapy (ART). The cluster analysis produced five symptom clusters identified as: (1) dermatological; (2) generalised anxiety and elimination; (3) social and image; (4) persistently present; and (5) a gastrointestinal-related symptom cluster. The patients in the first three symptom clusters reported the highest physical and psychological distress scores. Patient characteristics varied significantly across the five clusters by functional status (worst functional physical status in cluster one, p<0.001); being on ART (highest proportions for clusters two and three, p=0.012); global distress (F=26.8, p<0.001), physical distress (F=36.3, p<0.001) and psychological distress subscale (F=21.8, p<0.001) (all subscales worst for cluster one, best for cluster four). The greatest burden is associated with cluster one, and should be prioritised in clinical management. Further symptom cluster research in people living with HIV with longitudinally collected symptom data to
New resampling method for evaluating stability of clusters
Directory of Open Access Journals (Sweden)
Neuhaeuser Markus
2008-01-01
Full Text Available Abstract Background Hierarchical clustering is a widely applied tool in the analysis of microarray gene expression data. The assessment of cluster stability is a major challenge in clustering procedures. Statistical methods are required to distinguish between real and random clusters. Several methods for assessing cluster stability have been published, including resampling methods such as the bootstrap. We propose a new resampling method based on continuous weights to assess the stability of clusters in hierarchical clustering. While in bootstrapping approximately one third of the original items is lost, continuous weights avoid zero elements and instead allow non-integer diagonal elements, which leads to retention of the full dimensionality of space, i.e., each variable of the original data set is represented in the resampling sample. Results Comparison of continuous weights and bootstrapping using real datasets and simulation studies reveals the advantage of continuous weights, especially when the dataset has only few observations, few differentially expressed genes, and the fold change of differentially expressed genes is low. Conclusion We recommend the use of continuous weights in small as well as in large datasets, because according to our results they produce at least the same results as conventional bootstrapping and in some cases surpass it.
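The contrast between integer bootstrap counts and continuous weights can be sketched as follows. The exponential draws normalised to a fixed sum are a Bayesian-bootstrap-style scheme chosen for illustration, not necessarily the authors' exact weighting:

```python
import random

random.seed(7)

def bootstrap_weights(n):
    """Classic bootstrap: integer resampling counts. On average about
    a fraction 1/e (~37%) of the items receive weight zero and are
    effectively dropped from the resample."""
    counts = [0] * n
    for _ in range(n):
        counts[random.randrange(n)] += 1
    return counts

def continuous_weights(n):
    """Continuous alternative: i.i.d. exponential draws normalised to
    sum to n, so every item keeps a strictly positive weight and the
    full dimensionality of the data is retained."""
    draws = [random.expovariate(1.0) for _ in range(n)]
    s = sum(draws)
    return [n * d / s for d in draws]

n = 1000
b = bootstrap_weights(n)
c = continuous_weights(n)
print(sum(1 for w in b if w == 0) / n)  # roughly 1/e ~ 0.37
print(min(c) > 0.0)
```

Both schemes preserve the total weight, but only the continuous one guarantees that no gene or sample vanishes from a given resample, which is exactly the property the abstract credits for the advantage on small datasets.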
DEFF Research Database (Denmark)
Thomadsen, Tommy
2005-01-01
Communication networks are immensely important today, since both companies and individuals use numerous services that rely on them. This thesis considers the design of hierarchical (communication) networks. Hierarchical networks consist of layers of networks and are well-suited for coping...... the clusters. The design of hierarchical networks involves clustering of nodes, hub selection, and network design, i.e. selection of links and routing of flows. Hierarchical networks have been in use for decades, but integrated design of these networks has only been considered for very special types of networks.... The thesis investigates models for hierarchical network design and methods used to design such networks. In addition, ring network design is considered, since ring networks commonly appear in the design of hierarchical networks. The thesis introduces hierarchical networks, including a classification scheme...
An Exactly Soluble Hierarchical Clustering Model Inverse Cascades, Self-Similarity, and Scaling
Gabrielov, A; Turcotte, D L
1999-01-01
We show how clustering as a general hierarchical dynamical process proceeds via a sequence of inverse cascades to produce self-similar scaling, as an intermediate asymptotic, which then truncates at the largest spatial scales. We show how this model can provide a general explanation for the behavior of several models that have been described as "self-organized critical," including forest-fire, sandpile, and slider-block models.
Recursive Hierarchical Image Segmentation by Region Growing and Constrained Spectral Clustering
Tilton, James C.
2002-01-01
This paper describes an algorithm for hierarchical image segmentation (referred to as HSEG) and its recursive formulation (referred to as RHSEG). The HSEG algorithm is a hybrid of region growing and constrained spectral clustering that produces a hierarchical set of image segmentations based on detected convergence points. In the main, HSEG employs the hierarchical stepwise optimization (HSWO) approach to region growing, which seeks to produce segmentations that are more optimized than those produced by more classic approaches to region growing. In addition, HSEG optionally interjects, between HSWO region growing iterations, merges between spatially non-adjacent regions (i.e., spectrally based merging or clustering), constrained by a threshold derived from the previous HSWO region growing iteration. While the addition of constrained spectral clustering improves the segmentation results, especially for larger images, it also significantly increases HSEG's computational requirements. To counteract this, a computationally efficient recursive, divide-and-conquer implementation of HSEG (RHSEG) has been devised and is described herein. Included in this description is special code that is required to avoid processing artifacts caused by RHSEG's recursive subdivision of the image data. Implementations for single-processor and for multiple-processor computer systems are described. Results with Landsat TM data are included comparing HSEG with classic region growing. Finally, an application to image information mining and knowledge discovery is discussed.
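The hierarchical stepwise optimization idea, merging the most similar adjacent regions first, can be sketched in one dimension. Real HSEG operates on 2-D imagery and adds the spectral-clustering merges described above; this toy keeps only the adjacency-constrained merging:

```python
def region_grow_1d(pixels, n_regions):
    """Toy stepwise region growing on a 1-D signal: start with one
    region per pixel and repeatedly merge the pair of *adjacent*
    regions whose mean values differ least, until `n_regions` remain."""
    regions = [[p] for p in pixels]
    while len(regions) > n_regions:
        means = [sum(r) / len(r) for r in regions]
        diffs = [abs(means[i + 1] - means[i]) for i in range(len(regions) - 1)]
        k = diffs.index(min(diffs))           # best adjacent pair
        regions[k] += regions.pop(k + 1)      # merge it
    return regions

signal = [10, 11, 10, 50, 52, 51, 90, 91]
print(region_grow_1d(signal, 3))
```

Dropping the adjacency constraint for some merges, as HSEG's spectral clustering step does, is what lets spatially disconnected but spectrally similar regions end up in the same class.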
An energy efficient cooperative hierarchical MIMO clustering scheme for wireless sensor networks.
Nasim, Mehwish; Qaisar, Saad; Lee, Sungyoung
2012-01-01
In this work, we present an energy efficient hierarchical cooperative clustering scheme for wireless sensor networks. Communication cost is a crucial factor in depleting the energy of sensor nodes. In the proposed scheme, nodes cooperate to form clusters at each level of network hierarchy ensuring maximal coverage and minimal energy expenditure with relatively uniform distribution of load within the network. Performance is enhanced by cooperative multiple-input multiple-output (MIMO) communication ensuring energy efficiency for WSN deployments over large geographical areas. We test our scheme using TOSSIM and compare the proposed scheme with cooperative multiple-input multiple-output (CMIMO) clustering scheme and traditional multihop Single-Input-Single-Output (SISO) routing approach. Performance is evaluated on the basis of number of clusters, number of hops, energy consumption and network lifetime. Experimental results show significant energy conservation and increase in network lifetime as compared to existing schemes.
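The energy argument for clustering can be illustrated with the first-order radio model commonly used in WSN papers; the parameter values below are illustrative defaults, not taken from this study:

```python
def path_energy(bits, dist, hops, e_elec=50e-9, eps=100e-12):
    """First-order radio model over `hops` equal-length hops.
    Per transmission: E_tx = E_elec*k + eps*k*d^2; per reception:
    E_rx = E_elec*k. Every hop transmits; each intermediate relay
    also receives. Energies in joules, `dist` in metres."""
    hop_d = dist / hops
    tx = e_elec * bits + eps * bits * hop_d ** 2
    rx = e_elec * bits
    return hops * tx + (hops - 1) * rx

direct = path_energy(4000, 200.0, 1)    # one long SISO transmission
relayed = path_energy(4000, 200.0, 4)   # four short hops
print(direct, relayed)
```

Because the amplifier term grows with d², splitting a long link into short hops (or, as in the paper, into cooperative cluster-level MIMO links) can cut total energy sharply even after paying the relays' reception cost.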
Nimon, Kim
2012-01-01
Using state achievement data that are openly accessible, this paper demonstrates the application of hierarchical linear modeling within the context of career technical education research. Three prominent approaches to analyzing clustered data (i.e., modeling aggregated data, modeling disaggregated data, modeling hierarchical data) are discussed…
Ning, P; Guo, Y F; Sun, T Y; Zhang, H S; Chai, D; Li, X M
2016-09-01
To study the distinct clinical phenotypes of chronic airway diseases by hierarchical cluster analysis and two-step cluster analysis. A population sample of adult patients in Donghuamen community, Dongcheng district and Qinghe community, Haidian district, Beijing from April 2012 to January 2015, who had wheeze within the last 12 months, underwent detailed investigation, including a clinical questionnaire, pulmonary function tests, total serum IgE levels, blood eosinophil level and a peak flow diary. Nine variables were chosen as evaluating parameters, including pre-salbutamol forced expiratory volume in one second (FEV1)/forced vital capacity (FVC) ratio, pre-salbutamol FEV1, percentage of post-salbutamol change in FEV1, residual capacity, diffusing capacity of the lung for carbon monoxide/alveolar volume adjusted for haemoglobin level, peak expiratory flow (PEF) variability, serum IgE level, cumulative tobacco cigarette consumption (pack-years) and respiratory symptoms (cough and expectoration). Subjects' distinct clinical phenotypes were identified by hierarchical cluster analysis and two-step cluster analysis. (1) Four clusters were identified by hierarchical cluster analysis. Cluster 1 was chronic bronchitis in smokers with normal pulmonary function. Cluster 2 was chronic bronchitis or mild chronic obstructive pulmonary disease (COPD) patients with mild airflow limitation. Cluster 3 included COPD patients with heavy smoking, poor quality of life and severe airflow limitation. Cluster 4 recognized atopic patients with mild airflow limitation, elevated serum IgE and clinical features of asthma. Significant differences were revealed regarding pre-salbutamol FEV1/FVC%, pre-salbutamol FEV1% pred, post-salbutamol change in FEV1%, maximal mid-expiratory flow (MMEF)% pred, diffusing capacity of the lung for carbon monoxide per liter of alveolar volume (DLCO/VA)% pred, residual volume (RV)% pred, total serum IgE level, smoking history (pack-years), St. George's respiratory questionnaire
Directory of Open Access Journals (Sweden)
Kellermann Walter
2007-01-01
Full Text Available We address the problem of underdetermined blind source separation (BSS). While most previous approaches are designed for instantaneous mixtures, we propose a time-frequency-domain algorithm for convolutive mixtures. We adopt a two-step method based on a general maximum a posteriori (MAP) approach. In the first step, we estimate the mixing matrix based on hierarchical clustering, assuming that the source signals are sufficiently sparse. The algorithm works directly on the complex-valued data in the time-frequency domain and shows better convergence than algorithms based on self-organizing maps. The assumption of Laplacian priors for the source signals in the second step leads to an algorithm for estimating the source signals. It involves the ℓ1-norm minimization of complex numbers because of the use of the time-frequency-domain approach. We compare a combinatorial approach initially designed for real numbers with a second-order cone programming (SOCP) approach designed for complex numbers. We found that although the former approach is not theoretically justified for complex numbers, its results are comparable to, or even better than, the SOCP solution. The advantage is a lower computational cost for problems with low input/output dimensions.
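The first step, estimating mixing directions by clustering sparse time-frequency points, can be sketched for a real-valued instantaneous two-sensor toy. The paper works with complex-valued convolutive mixtures and hierarchical clustering; plain 1-D k-means on observation angles is a deliberate simplification, and all names and mixing directions here are hypothetical:

```python
import math
import random

random.seed(1)

def estimate_directions(x1, x2, n_sources, iters=30):
    """Sparse-source sketch: when at most one source is active at a
    given point, each observation pair (x1, x2) lies along one column
    of the mixing matrix, so 1-D k-means on the angles atan2(x2, x1)
    recovers the mixing directions."""
    angles = [math.atan2(b, a) for a, b in zip(x1, x2)]
    lo, hi = min(angles), max(angles)
    centers = [lo + (hi - lo) * i / (n_sources - 1) for i in range(n_sources)]
    for _ in range(iters):
        groups = [[] for _ in range(n_sources)]
        for t in angles:
            k = min(range(n_sources), key=lambda i: abs(t - centers[i]))
            groups[k].append(t)
        centers = [sum(g) / len(g) if g else centers[i]
                   for i, g in enumerate(groups)]
    return sorted(centers)

# Toy data: two perfectly sparse sources mixed along directions
# of 0.2 rad and 1.1 rad (hypothetical mixing-matrix columns).
obs1, obs2 = [], []
for _ in range(400):
    s = random.expovariate(1.0)                 # source magnitude
    theta = 0.2 if random.random() < 0.5 else 1.1
    obs1.append(s * math.cos(theta))
    obs2.append(s * math.sin(theta))

print([round(c, 2) for c in estimate_directions(obs1, obs2, 2)])
```

With overlapping or convolutive sources the scatter around each direction widens, which is where the paper's hierarchical clustering on complex time-frequency data earns its keep over this naive version.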
Kantar, Ersin; Keskin, Mustafa; Deviren, Bayram
2012-04-01
We have analyzed the topology of 50 important Turkish companies for the period 2006-2010 using the concept of hierarchical methods (the minimal spanning tree (MST) and hierarchical tree (HT)). We investigated the statistical reliability of links between companies in the MST by using the bootstrap technique. We also used the average linkage cluster analysis (ALCA) technique to observe the cluster structures much better. The MST and HT are known as useful tools to perceive and detect global structure, taxonomy, and hierarchy in financial data. We obtained four clusters of companies according to their proximity. We also observed that the Banks and Holdings cluster always forms in the centre of the MSTs for the periods 2006-2007, 2008, and 2009-2010. The clusters match nicely with their common production activities or their strong interrelationship. The effects of the Automobile sector increased after the global financial crisis due to the temporary incentives provided by the Turkish government. We find that Turkish companies were not very affected by the global financial crisis.
Evolutionary-Hierarchical Bases of the Formation of Cluster Model of Innovation Economic Development
Directory of Open Access Journals (Sweden)
Yuliya Vladimirovna Dubrovskaya
2016-10-01
Full Text Available The functioning of a modern economic system is based on the interaction of objects at different hierarchical levels. Thus, the study of innovation processes taking into account the mutual influence of the activities of these economic actors becomes important. The paper dwells on the evolutionary basis for the formation of models of innovation development, drawing on micro- and macroeconomic analysis. Most of the concepts recognize that, despite the large number of diverse models, the coordination of relations between economic agents is of crucial importance for successful innovation development. Based on the results of the evolutionary-hierarchical analysis, the authors reveal the key phases of the development of forms of cooperation between business, science and government in the domestic economy. This became the starting point for conceptualizing the characteristics of interaction in cluster models of innovation development of the economy. Considerable expectations for improvement of the national innovation system are connected with the development of cluster and network structures. The main objective of government authorities is the formation of mechanisms and institutions that will foster cooperation between members of the clusters. The article explains that the clusters cannot become factors in the growth of the national economy without being an effective tool for interaction between the actors of the regional innovation systems.
Hierarchical Clustering Multi-Task Learning for Joint Human Action Grouping and Recognition.
Liu, An-An; Su, Yu-Ting; Nie, Wei-Zhi; Kankanhalli, Mohan
2017-01-01
This paper proposes a hierarchical clustering multi-task learning (HC-MTL) method for joint human action grouping and recognition. Specifically, we formulate the objective function as a group-wise least square loss regularized by low rank and sparsity with respect to two latent variables, model parameters and grouping information, for joint optimization. To handle this non-convex optimization, we decompose it into two sub-tasks, multi-task learning and task relatedness discovery. First, we convert the non-convex objective function into a convex formulation by fixing the latent grouping information. This new objective function focuses on multi-task learning by strengthening the shared-action relationship and action-specific feature learning. Second, we leverage the learned model parameters for the task relatedness measure and clustering. In this way, HC-MTL can attain both optimal action models and group discovery by iterative alternation. The proposed method is validated on three kinds of challenging datasets, including six realistic action datasets (Hollywood2, YouTube, UCF Sports, UCF50, HMDB51 & UCF101), two constrained datasets (KTH & TJU), and two multi-view datasets (MV-TJU & IXMAS). The extensive experimental results show that: 1) HC-MTL produces performance competitive with the state of the art for action recognition and grouping; 2) HC-MTL can overcome the difficulty of heuristic action grouping based solely on human knowledge; 3) HC-MTL can avoid the possible inconsistency between subjective action grouping depending on human knowledge and objective action grouping based on the feature subspace distributions of multiple actions. Comparison with the popular clustered multi-task learning further reveals that the latent relatedness discovered by HC-MTL aids the induction of group-wise multi-task learning and boosts the performance. To the best of our knowledge, ours is the first work that breaks the assumption that all actions are either
3D Pharmacophore, hierarchical methods, and 5-HT4 receptor binding data.
Varin, Thibault; Saettel, Nicolas; Villain, Jonathan; Lesnard, Aurelien; Dauphin, François; Bureau, Ronan; Rault, Sylvain
2008-10-01
5-Hydroxytryptamine subtype-4 (5-HT(4)) receptors have stimulated considerable interest amongst scientists and clinicians owing to their importance in neurophysiology and potential as therapeutic targets. A comparative analysis of hierarchical methods applied to data from one thousand 5-HT(4) receptor-ligand binding interactions was carried out. The chemical structures were described as chemical and pharmacophore fingerprints. Indices related to the quality of the hierarchies in distinguishing between active and inactive compounds revealed two interesting hierarchies, with the Unity fingerprints (1 active cluster) and the pharmacophore fingerprints (4 active clusters). The results of this study also showed the importance of the correct choice of metrics, as well as the effectiveness of a new alternative to the Ward clustering algorithm, named Energy (Minimum E-Distance method). In parallel, the relationship between these classifications and a previously defined 3D 5-HT(4) antagonist pharmacophore was established.
Directory of Open Access Journals (Sweden)
L. Infante
2002-01-01
Full Text Available In this contribution I present recent results on the clustering properties of galaxies, groups, clusters and superclusters at low redshift (z ≲ 1). I also present the expected and the measured degree of evolution of galaxy clustering. We have used the photometric galaxy catalogue extracted from the first images of the Sloan Digital Sky Survey to study the clustering properties of small galaxy structures: pairs, triplets, quartets, quintets, etc. An analysis of the two-point correlation function over an area of 250 square degrees of sky shows that these objects appear to be much more strongly clustered than individual galaxies.
Hybrid Steepest-Descent Methods for Triple Hierarchical Variational Inequalities
Directory of Open Access Journals (Sweden)
L. C. Ceng
2015-01-01
Full Text Available We introduce and analyze a relaxed iterative algorithm by combining Korpelevich’s extragradient method, hybrid steepest-descent method, and Mann’s iteration method. We prove that, under appropriate assumptions, the proposed algorithm converges strongly to a common element of the fixed point set of infinitely many nonexpansive mappings, the solution set of finitely many generalized mixed equilibrium problems (GMEPs, the solution set of finitely many variational inclusions, and the solution set of general system of variational inequalities (GSVI, which is just a unique solution of a triple hierarchical variational inequality (THVI in a real Hilbert space. In addition, we also consider the application of the proposed algorithm for solving a hierarchical variational inequality problem with constraints of finitely many GMEPs, finitely many variational inclusions, and the GSVI. The results obtained in this paper improve and extend the corresponding results announced by many others.
Hierarchical Clustering of Large Databases and Classification of Antibiotics at High Noise Levels
Directory of Open Access Journals (Sweden)
Alexander V. Yarkov
2008-12-01
Full Text Available A new algorithm for divisive hierarchical clustering of chemical compounds based on 2D structural fragments is suggested. The algorithm is deterministic: given a random ordering of the input, it will always give the same clustering, and it can process a database of up to 2 million records on a standard PC. The algorithm was used for the classification of 1,183 antibiotics mixed with 999,994 random chemical structures. The similarity threshold at which the best separation of active and non-active compounds took place was estimated as 0.6. At this threshold, 85.7% of the antibiotics were successfully classified, with 0.4% of compounds inaccurately classified. A .sdf file was created with the probe molecules for the clustering of external databases.
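A minimal sketch of the kind of fragment-based similarity screening that such probe molecules enable. The fragment sets, the Tanimoto metric and the reuse of the 0.6 threshold here are illustrative assumptions, not the published algorithm:

```python
def tanimoto(a, b):
    """Tanimoto (Jaccard) similarity between two fragment bit sets."""
    inter = len(a & b)
    return inter / (len(a) + len(b) - inter)

def is_active(query, probes, threshold=0.6):
    """Flag a structure as active if it reaches the similarity
    threshold with any probe molecule."""
    return any(tanimoto(query, p) >= threshold for p in probes)

probes = [{1, 2, 3, 4, 5}, {10, 11, 12}]   # fragment IDs of hypothetical probes
hit = is_active({1, 2, 3, 4, 6}, probes)   # shares 4 of 6 total fragments
miss = is_active({7, 8, 9}, probes)        # shares nothing
```

In practice the bit sets would come from 2D structural fragment fingerprints rather than hand-written integers.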
A supplier selection using a hybrid grey based hierarchical clustering and artificial bee colony
Directory of Open Access Journals (Sweden)
Farshad Faezy Razi
2014-06-01
Full Text Available The selection of one supplier, or of a combination of the most suitable potential suppliers, and the outsourcing problem are among the most important strategies in logistics and supply chain management. In this paper, the selection of an optimal combination of suppliers in inventory and supply chain management is studied and analyzed via a multiple attribute decision making approach, data mining and evolutionary optimization algorithms. For supplier selection in the supply chain, hierarchical clustering according to the studied indexes first clusters the suppliers. Then, according to its cluster, each supplier is evaluated through Grey Relational Analysis. The Pareto-optimal ranks and costs of supplier combinations are then obtained using the Artificial Bee Colony meta-heuristic algorithm. A case study is conducted for a better description of the new algorithm.
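The Grey Relational Analysis step can be sketched as follows. The reference row (an ideal supplier profile) and the pre-normalised scores are hypothetical, and zeta = 0.5 is the customary distinguishing coefficient:

```python
def grey_relational_grades(reference, alternatives, zeta=0.5):
    """Grey relational grade of each alternative against the
    reference series (data assumed pre-normalised to [0, 1])."""
    deltas = [[abs(r - x) for r, x in zip(reference, alt)] for alt in alternatives]
    flat = [d for row in deltas for d in row]
    dmin, dmax = min(flat), max(flat)
    grades = []
    for row in deltas:
        # grey relational coefficient per criterion, then their mean
        coeffs = [(dmin + zeta * dmax) / (d + zeta * dmax) for d in row]
        grades.append(sum(coeffs) / len(coeffs))
    return grades

ideal = [1.0, 1.0, 1.0]            # hypothetical ideal supplier profile
suppliers = [[0.9, 0.8, 1.0],      # supplier A: close to the ideal
             [0.4, 0.5, 0.6]]      # supplier B: further away
grades = grey_relational_grades(ideal, suppliers)
```

Suppliers are then ranked within their cluster by descending grade before the combinatorial cost optimization.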
Bae, Hyoung Won; Ji, Yongwoo; Lee, Hye Sun; Lee, Naeun; Hong, Samin; Seong, Gong Je; Sung, Kyung Rim; Kim, Chan Yun
2015-01-01
Normal-tension glaucoma (NTG) is a heterogeneous disease, and there is still controversy about subclassifications of this disorder. On the basis of spectral-domain optical coherence tomography (SD-OCT), we subdivided NTG with hierarchical cluster analysis using optic nerve head (ONH) parameters and retinal nerve fiber layer (RNFL) thicknesses. A total of 200 eyes of 200 NTG patients between March 2011 and June 2012 underwent SD-OCT scans to measure ONH parameters and RNFL thicknesses. We classified NTG into homogeneous subgroups based on these variables using hierarchical cluster analysis, and compared the clusters to evaluate diverse NTG characteristics. Three clusters were found after hierarchical cluster analysis. Cluster 1 (62 eyes) had the thickest RNFL and widest rim area, and showed early glaucoma features. Cluster 2 (60 eyes) was characterized by the largest cup/disc ratio and cup volume, and showed advanced glaucomatous damage. Cluster 3 (78 eyes) had small disc areas in SD-OCT and comprised patients with significantly younger age, longer axial length, and greater myopia than the other 2 groups. A hierarchical cluster analysis of SD-OCT scans divided NTG patients into 3 groups based upon ONH parameters and RNFL thicknesses. It is anticipated that the small disc area group, comprised of younger and more myopic patients, may show unique features unlike the other 2 groups.
Directory of Open Access Journals (Sweden)
Guo Junqiao
2008-09-01
Full Text Available Abstract Background The effects of climate variations on bacillary dysentery incidence have gained increasing concern. However, the multi-collinearity among meteorological factors affects the accuracy of their correlation with bacillary dysentery incidence. Methods As a remedy, a modified method combining ridge regression and hierarchical cluster analysis was proposed for investigating the effects of climate variations on bacillary dysentery incidence in northeast China. Results All weather indicators (temperatures, precipitation, evaporation and relative humidity) showed positive correlations with the monthly incidence of bacillary dysentery, while air pressure had a negative correlation with the incidence. Ridge regression and hierarchical cluster analysis showed that during 1987-1996, relative humidity, temperatures and air pressure affected the transmission of bacillary dysentery. During this period, the meteorological factors were divided into three categories: relative humidity and precipitation belonged to one class, temperature indexes and evaporation belonged to another class, and air pressure formed the third class. Conclusion Meteorological factors have affected the transmission of bacillary dysentery in northeast China. Bacillary dysentery prevention and control would benefit from giving more consideration to local climate variations.
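To see why ridge regression helps with multi-collinearity among weather indicators, the sketch below fits a two-predictor ridge model in closed form. The data are invented and the model deliberately tiny; the point is only that the ridge penalty shares the weight between two nearly identical predictors instead of loading it all onto one:

```python
def ridge_2d(x1, x2, y, lam):
    """Ridge estimate for y ~ b1*x1 + b2*x2 (no intercept):
    solves (X'X + lam*I) b = X'y for two predictors."""
    a11 = sum(v * v for v in x1) + lam
    a22 = sum(v * v for v in x2) + lam
    a12 = sum(u * v for u, v in zip(x1, x2))
    g1 = sum(u * v for u, v in zip(x1, y))
    g2 = sum(u * v for u, v in zip(x2, y))
    det = a11 * a22 - a12 * a12
    return ((g1 * a22 - g2 * a12) / det, (g2 * a11 - g1 * a12) / det)

x1 = [1.0, 2.0, 3.0, 4.0]          # e.g. a temperature index
x2 = [1.01, 1.99, 3.02, 3.98]      # nearly collinear second indicator
y = [1.0, 2.0, 3.0, 4.0]           # toy incidence driven by the first factor
b_ols = ridge_2d(x1, x2, y, 0.0)   # ordinary least squares
b_ridge = ridge_2d(x1, x2, y, 1.0) # ridge: weight shared between the twins
```

With lam = 0 the fit puts all the weight on x1; with lam = 1 the two collinear predictors receive nearly equal coefficients, which is the stabilising behaviour the study relies on.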
Measuring efficiency of a hierarchical organization with fuzzy DEA method
LUBAN Florica
2009-01-01
The paper analyses how data envelopment analysis (DEA) and fuzzy set theory can be used to measure and evaluate the efficiency of a hierarchical system with n decision making units and a coordinating unit. A model is presented for determining the activity levels of the decision making units so as to achieve both the fuzzy objectives of reaching the global target levels of the coordinating unit on inputs and outputs and the individual target levels of the decision making units, and then some methods to...
Comparison of the incremental and hierarchical methods for crystalline neon.
Nolan, S J; Bygrave, P J; Allan, N L; Manby, F R
2010-02-24
We present a critical comparison of the incremental and hierarchical methods for the evaluation of the static cohesive energy of crystalline neon. Both of these schemes make it possible to apply the methods of molecular electronic structure theory to crystalline solids, offering a systematically improvable alternative to density functional theory. Results from both methods are compared with previous theoretical and experimental studies of solid neon and potential sources of error are discussed. We explore the similarities of the two methods and demonstrate how they may be used in tandem to study crystalline solids.
3D NEAREST NEIGHBOUR SEARCH USING A CLUSTERED HIERARCHICAL TREE STRUCTURE
Directory of Open Access Journals (Sweden)
A. Suhaibah
2016-06-01
Full Text Available Locating and analysing the locations of new stores or outlets is one of the common issues facing retailers and franchisers, since new stores must be at strategic locations that attract the highest possible number of customers. Spatial information is used to manage, maintain and analyse these store locations. However, since the business of franchising and chain stores in urban areas runs within high-rise, multi-level buildings, a three-dimensional (3D) method is required in order to locate a unit and identify the surrounding information, such as at which level the franchise unit will be located, or whether that level is the best for visibility purposes. One commonly used analysis for retrieving surrounding information is Nearest Neighbour (NN) analysis: it takes a point location and identifies the surrounding neighbours. However, with the immense number of urban datasets, the retrieval and analysis of nearest neighbour information becomes more complex and its efficiency more crucial. In this paper, we present a technique to retrieve nearest neighbour information in 3D space using a clustered hierarchical tree structure. Based on our findings, the proposed approach showed a substantial improvement in response time compared to existing spatial access methods in databases. The query performance was tested using a dataset consisting of 500,000 point locations of buildings and franchise units; the results are presented in this paper. Another advantage of this structure is that it offers minimal overlap and coverage among nodes, which can reduce repetitive data entry.
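A common way to answer 3D nearest-neighbour queries with a hierarchical tree is a kd-tree. The sketch below is a generic kd-tree NN search, not the clustered structure proposed in the paper, and the (x, y, floor) coordinates are invented:

```python
import math

def build_kdtree(points, depth=0):
    """Recursively split 3D points on alternating axes."""
    if not points:
        return None
    axis = depth % 3
    points = sorted(points, key=lambda p: p[axis])
    mid = len(points) // 2
    return (points[mid],
            build_kdtree(points[:mid], depth + 1),
            build_kdtree(points[mid + 1:], depth + 1))

def nearest(tree, target, depth=0, best=None):
    """Depth-first NN search that prunes subtrees lying farther
    from the target than the best candidate found so far."""
    if tree is None:
        return best
    point, left, right = tree
    if best is None or math.dist(point, target) < math.dist(best, target):
        best = point
    axis = depth % 3
    near, far = (left, right) if target[axis] < point[axis] else (right, left)
    best = nearest(near, target, depth + 1, best)
    # only descend the far side if the splitting plane is closer than best
    if abs(target[axis] - point[axis]) < math.dist(best, target):
        best = nearest(far, target, depth + 1, best)
    return best

# hypothetical (x, y, floor) coordinates of franchise units
units = [(0, 0, 0), (1, 1, 0), (2, 2, 5), (5, 5, 5), (0.5, 0.2, 0)]
tree = build_kdtree(units)
closest = nearest(tree, (0.6, 0.3, 0.0))
```

The pruning step is what turns the brute-force scan into a hierarchical search; clustered variants additionally group nearby points to reduce node overlap.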
Directory of Open Access Journals (Sweden)
Nicoló Musmeci
Full Text Available We quantify the amount of information filtered by different hierarchical clustering methods on correlations between stock returns, comparing the clustering structure with the underlying industrial activity classification. We apply, for the first time to financial data, a novel hierarchical clustering approach, the Directed Bubble Hierarchical Tree, and we compare it with other methods including the Linkage and k-medoids. By taking the industrial sector classification of stocks as a benchmark partition, we evaluate how the different methods retrieve this classification. The results show that the Directed Bubble Hierarchical Tree can outperform other methods, being able to retrieve more information with fewer clusters. Moreover, we show that the economic information is hidden at different levels of the hierarchical structures depending on the clustering method. The dynamical analysis on a rolling window also reveals that the different methods show different degrees of sensitivity to events affecting financial markets, like crises. These results can be of interest for all the applications of clustering methods to portfolio optimization and risk hedging.
Musmeci, Nicoló; Aste, Tomaso; Di Matteo, T
2015-01-01
Kantar, E.; Deviren, B.; Keskin, M.
2011-11-01
We present a study, within the scope of econophysics, of the hierarchical structure of 98 of the largest international companies, including 18 of the largest Turkish companies, from the Banks, Automobile, Software-Hardware, Telecommunication Services, Energy and Oil-Gas sectors, viewed as a network of interacting companies. We analyze the daily time series data of the Boerse-Frankfurt and Istanbul Stock Exchange. We examine the topological properties among the companies over the period 2006-2010 by using hierarchical structure methods (the minimal spanning tree (MST) and the hierarchical tree (HT)). The period is divided into three subperiods, namely 2006-2007, 2008 which was the year of global economic crisis, and 2009-2010, in order to test various time-windows and observe temporal evolution. We carry out bootstrap analyses to associate a value of statistical reliability to the links of the MSTs and HTs. We also use average linkage clustering analysis (ALCA) in order to better observe the cluster structure. From these studies, we find that the interactions between the Banks/Energy sectors and the other sectors were reduced after the global economic crisis; hence the effects of the Banks and Energy sectors on the correlations of all companies were decreased. Telecommunication Services were also greatly affected by the crisis. We also observed that the Automobile and Banks sectors, including Turkish companies as well as some companies from the USA, Japan and Germany, were strongly correlated with each other in all periods.
Diversity of Xiphinema americanum-group Species and Hierarchical Cluster Analysis of Morphometrics.
Lamberti, F; Ciancio, A
1993-09-01
Of the 39 species composing the Xiphinema americanum group, 14 were described originally from North America and two others have been reported from this region. Many species are very similar morphologically and can be distinguished only by a difficult comparison of various combinations of some morphometric characters. Study of morphometrics of 49 populations, including the type populations of the 39 species attributed to this group, by principal component analysis and hierarchical cluster analysis placed the populations into five subgroups, proposed here as the X. brevicolle subgroup (seven species), the X. americanum subgroup (17 species), the X. taylori subgroup (two species), the X. pachtaicum subgroup (eight species), and the X. lambertii subgroup (five species).
Capozziello, S; De Siena, S; Guerra, F; Illuminati, F
2000-01-01
We derive, in order of magnitude, the observed astrophysical and cosmological scales in the Universe, from neutron stars to superclusters of galaxies, up to, asymptotically, the observed radius of the Universe. This result is obtained by introducing a recursive scheme of alternating hierarchical mechanisms of three-dimensional and two-dimensional close packings of gravitationally interacting objects. The iterative scheme yields a rapidly converging geometric sequence, which can be described as a hierarchical clustering of aggregates, having the observed radius of the Universe as its fixed point.
CLUSTAG & WCLUSTAG: Hierarchical Clustering Algorithms for Efficient Tag-SNP Selection
Ao, Sio-Iong
More than 6 million single nucleotide polymorphisms (SNPs) in the human genome have been genotyped by the HapMap project. Although only a proportion of these SNPs are functional, all can be considered as candidate markers for indirect association studies to detect disease-related genetic variants. The complete screening of a gene or a chromosomal region is nevertheless an expensive undertaking for association studies. A key strategy for improving the efficiency of association studies is to select a subset of informative SNPs, called tag SNPs, for analysis. In the chapter, hierarchical clustering algorithms have been proposed for efficient tag SNP selection.
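A greatly simplified, greedy sketch of threshold-based tag-SNP selection, not the CLUSTAG clustering algorithm itself; the r² matrix and the 0.8 threshold are illustrative assumptions:

```python
def select_tag_snps(r2, threshold=0.8):
    """Greedy clustering: each unassigned SNP seeds a cluster that
    absorbs every remaining SNP whose r^2 with the seed meets the
    threshold; the seed serves as that cluster's tag SNP."""
    remaining = list(range(len(r2)))
    tags = []
    while remaining:
        seed = remaining.pop(0)
        tags.append(seed)
        remaining = [s for s in remaining if r2[seed][s] < threshold]
    return tags

# pairwise linkage disequilibrium (r^2) for four hypothetical SNPs:
# SNP0/SNP1 and SNP2/SNP3 are each in strong LD with one another
r2 = [[1.00, 0.90, 0.10, 0.10],
      [0.90, 1.00, 0.10, 0.10],
      [0.10, 0.10, 1.00, 0.85],
      [0.10, 0.10, 0.85, 1.00]]
tags = select_tag_snps(r2)
```

Genotyping only the tags then suffices to capture, indirectly, the variation carried by the SNPs they represent.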
A Route Confidence Evaluation Method for Reliable Hierarchical Text Categorization
Hatami, Nima; Armano, Giuliano
2012-01-01
Hierarchical Text Categorization (HTC) is becoming increasingly important with the rapidly growing amount of text data available on the World Wide Web. Among the different strategies proposed to cope with HTC, the Local Classifier per Node (LCN) approach attains good performance by mirroring the underlying class hierarchy while enforcing a top-down strategy in the testing step. However, the problem of embedding hierarchical information (the parent-child relationship) to improve the performance of HTC systems still remains open. A confidence evaluation method for a selected route in the hierarchy is proposed to evaluate the reliability of the final candidate labels in an HTC system. To account for the information embedded in the hierarchy, weight factors reflect the importance of each level. An acceptance/rejection strategy in the top-down decision making process is proposed, which improves the overall categorization accuracy by rejecting a small percentage of samples, i.e., thos...
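One plausible way to combine per-level classifier scores into a route confidence, with weights favouring upper levels, is a weighted geometric mean. The scores, weights and threshold below are illustrative assumptions, not the paper's exact formula:

```python
def route_confidence(scores, weights):
    """Weighted geometric mean of per-level classifier scores along
    one root-to-leaf route of the hierarchy."""
    total = sum(weights)
    conf = 1.0
    for s, w in zip(scores, weights):
        conf *= s ** (w / total)
    return conf

def decide(scores, weights, threshold=0.5):
    """Accept the candidate label only if the route is confident enough."""
    c = route_confidence(scores, weights)
    return ("accept" if c >= threshold else "reject", c)

weights = [3, 2, 1]                        # upper levels weighted more heavily
confident = decide([0.9, 0.8, 0.4], weights)
doubtful = decide([0.9, 0.1, 0.1], weights)
```

Rejecting routes like the second one is how a small sacrifice in coverage buys higher accuracy on the accepted samples.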
Hierarchical Agglomerative Clustering Schemes for Energy-Efficiency in Wireless Sensor Networks
Directory of Open Access Journals (Sweden)
Taleb Tariq
2017-06-01
Full Text Available Extending the lifetime of wireless sensor networks (WSNs) while delivering the expected level of service remains a hot research topic. Clustering has been identified in the literature as one of the primary means of saving communication energy. In this paper, we argue that hierarchical agglomerative clustering (HAC) provides a suitable foundation for designing highly energy-efficient communication protocols for WSNs. To this end, we study a new mechanism for selecting cluster heads (CHs) based both on the physical location of the sensors and on their residual energy. Furthermore, we study different patterns of communication between the CHs and the base station depending on the possible transmission ranges and the ability of the sensors to act as traffic relays. Simulation results show that our proposed clustering and communication schemes outperform well-known existing approaches by comfortable margins. In particular, network lifetime is increased by more than 60% compared to LEACH and HEED, and by more than 30% compared to K-means clustering.
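The core idea, clustering sensors by position with HAC and then electing the highest-energy node of each cluster as CH, can be sketched as follows. Single linkage and the sensor data are illustrative choices, not the paper's exact protocol:

```python
import math

def hac_clusters(sensors, k):
    """Single-linkage agglomerative clustering on (x, y) position:
    repeatedly merge the two closest clusters until k remain."""
    clusters = [[s] for s in sensors]
    while len(clusters) > k:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                d = min(math.dist(a[:2], b[:2])
                        for a in clusters[i] for b in clusters[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        _, i, j = best
        clusters[i] += clusters.pop(j)
    return clusters

def cluster_heads(clusters):
    """Elect as CH the sensor with the highest residual energy."""
    return [max(c, key=lambda s: s[2]) for c in clusters]

# hypothetical sensors as (x, y, residual_energy)
sensors = [(0, 0, 0.9), (0, 1, 0.5), (1, 0, 0.4), (10, 10, 0.3), (10, 11, 0.8)]
heads = cluster_heads(hac_clusters(sensors, 2))
```

Re-running the election as energy depletes is what spreads the relaying burden across the cluster over the network's lifetime.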
Directory of Open Access Journals (Sweden)
J Anuradha
2014-05-01
Full Text Available Attention Deficit Hyperactive Disorder (ADHD) is a disruptive neurobehavioral disorder characterized by abnormal behavioral patterns in attention, perusing activity, acting impulsively, and combined types. It is predominant among school-going children, and it is tricky to differentiate between an active and an ADHD child. Misdiagnosis and undiagnosed cases are very common. Behavior patterns are identified by mentors in the academic environment who lack the skills to screen those kids. Hence an unsupervised learning algorithm can cluster the behavioral patterns of children at school for the diagnosis of ADHD. In this paper, we propose a hierarchical clustering algorithm to partition the dataset based on attribute dependency (HCAD). HCAD forms clusters of data based on the highly dependent attributes and their equivalence relation. It is capable of handling large volumes of data with reasonably faster clustering than most of the existing algorithms. It can work on both labeled and unlabelled data sets. Experimental results reveal that this algorithm has higher accuracy in comparison to other algorithms: HCAD achieves 97% cluster purity in diagnosing ADHD. An empirical analysis of the application of HCAD to different data sets from the UCI repository is provided.
Clustering Methods Application for Customer Segmentation to Manage Advertisement Campaign
Directory of Open Access Journals (Sweden)
Maciej Kutera
2010-10-01
Full Text Available Clustering methods have evolved into algorithms for the analysis of large data collections so advanced that they are nowadays counted among data mining methods. They form an ever larger, quickly evolving group of methods with more and more diverse applications. In the article, our research concerning the usefulness of clustering methods for customer segmentation to manage an advertisement campaign is presented. We introduce results obtained by using four selected methods, chosen because their peculiarities suggested their applicability to our purposes. One of the analysed methods, k-means clustering with randomly selected initial cluster seeds, gave very good results in customer segmentation for managing an advertisement campaign, and these results are presented in detail in the article. In contrast, one of the methods (hierarchical average linkage) was found useless for customer segmentation. Further investigations concerning the benefits of clustering methods in customer segmentation to manage advertisement campaigns are worth continuing, particularly as solutions in this field can give measurable profits for marketing activity.
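A minimal k-means with randomly selected initial cluster seeds, the variant the study found most useful, might look like this; the two "customer segments" are invented toy data:

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Plain k-means with randomly selected initial cluster seeds."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    groups = []
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            # assign each customer to the nearest centroid
            i = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centroids[c])))
            groups[i].append(p)
        # move each centroid to the mean of its group
        centroids = [tuple(sum(col) / len(g) for col in zip(*g)) if g else centroids[i]
                     for i, g in enumerate(groups)]
    return centroids, groups

# two invented customer segments in a 2-D feature space
customers = [(1, 1), (1, 2), (2, 1), (8, 8), (8, 9), (9, 8)]
centroids, segments = kmeans(customers, 2)
```

On well-separated segments like these, the algorithm recovers the two groups regardless of which points the random seeding picks.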
A hierarchical method for molecular docking using cloud computing.
Kang, Ling; Guo, Quan; Wang, Xicheng
2012-11-01
Discovering small molecules that interact with protein targets will be a key part of future drug discovery efforts. Molecular docking of drug-like molecules is likely to be valuable in this field; however, the great number of such molecules makes the potential size of this task enormous. In this paper, a method to screen small molecular databases using cloud computing is proposed. This method is called the hierarchical method for molecular docking and can be completed in a relatively short period of time. In this method, the optimization of molecular docking is divided into two subproblems based on the different effects on the protein-ligand interaction energy. An adaptive genetic algorithm is developed to solve the optimization problem and a new docking program (FlexGAsDock) based on the hierarchical docking method has been developed. The implementation of docking on a cloud computing platform is then discussed. The docking results show that this method can be conveniently used for the efficient molecular design of drugs.
Kantar, Ersin; Keskin, Mustafa
2013-11-01
This study uses hierarchical structure methods (minimal spanning tree (MST) and hierarchical tree (HT)) to examine the relationship between energy consumption and economic growth in a sample of 30 Asian countries covering the period 1971-2008. These countries are categorized into four panels based on the World Bank income classification, namely high, upper middle, lower middle, and low income. In particular, we use the data of electricity consumption and real gross domestic product (GDP) per capita to detect the topological properties of the countries. We show a relationship between electricity consumption and economic growth by using the MST and HT. We also use the bootstrap technique to assign a value of statistical reliability to the links of the MST. Finally, we use a clustering linkage procedure in order to observe the cluster structure. The results of the structural topologies of these trees are as follows: (i) we identified different clusters of countries according to their geographical location and economic growth, (ii) we found a strong relationship between energy consumption and economic growth for all income groups considered in this study and (iii) the results are in good agreement with the causal relationship between electricity consumption and economic growth.
Pagnuco, Inti A.; Pastore, Juan I.; Abras, Guillermo; Brun, Marcel; Ballarin, Virginia L.
2016-04-01
It is usually assumed that co-expressed genes suggest co-regulation in the underlying regulatory network. Determining sets of co-expressed genes is an important task, where significant groups of genes are defined based on some criteria. This task is usually performed by clustering algorithms, where the whole family of genes, or a subset of them, is clustered into meaningful groups based on their expression values in a set of experiments. In this work we used a methodology based on the Silhouette index as a measure of cluster quality for individual gene groups, and a combination of several variants of hierarchical clustering to generate the candidate groups, to obtain sets of co-expressed genes for two real data examples. We analyzed the quality of the best-ranked groups obtained by the algorithm using an online bioinformatics tool that provides network information for the selected genes. Moreover, to verify the performance of the algorithm, considering the fact that it does not examine all possible subsets, we compared its results against a full search, to determine the number of good co-regulated sets not detected.
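The Silhouette index used as the cluster-quality measure can be sketched as follows, with plain Euclidean distance on invented expression vectors; the real methodology applies it to many candidate groups produced by hierarchical clustering variants:

```python
def silhouette(groups):
    """Mean silhouette over all points; each group is a list of
    equal-length expression vectors."""
    def d(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5
    scores = []
    for gi, g in enumerate(groups):
        for p in g:
            # a: mean distance to own group, b: to the nearest other group
            a = sum(d(p, q) for q in g if q is not p) / (len(g) - 1) if len(g) > 1 else 0.0
            b = min(sum(d(p, q) for q in h) / len(h)
                    for hi, h in enumerate(groups) if hi != gi)
            scores.append((b - a) / max(a, b))
    return sum(scores) / len(scores)

tight = [[(0, 0), (0, 1)], [(5, 5), (5, 6)]]    # well-separated grouping
mixed = [[(0, 0), (5, 5)], [(0, 1), (5, 6)]]    # same points, poor grouping
```

Values near +1 indicate compact, well-separated groups; negative values flag points assigned to the wrong group, which is why the index can rank candidate gene groups.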
On the Formation of Cool, Non-Flowing Cores in Galaxy Clusters via Hierarchical Mergers
Burns, J O; Norman, M L; Bryan, G L
2003-01-01
We present a new model for the creation of cool cores in rich galaxy clusters within a LambdaCDM cosmological framework using the results from high spatial dynamic range, adaptive mesh hydro/N-body simulations. It is proposed that cores of cool gas first form in subclusters and these subclusters merge to create rich clusters with cool, central X-Ray excesses. The rich cool clusters do not possess ``cooling flows'' due to the presence of bulk velocities in the intracluster medium in excess of 1000 km/sec produced by on-going accretion of gas from supercluster filaments. This new model has several attractive features including the presence of substantial core substructure within the cool cores, and it predicts the appearance of cool bullets, cool fronts, and cool filaments all of which have been recently observed with X-Ray satellites. This hierarchical formation model is also consistent with the observation that cool cores in Abell clusters occur preferentially in dense supercluster environments. On the other ...
Niching method using clustering crowding
Institute of Scientific and Technical Information of China (English)
GUO Guan-qi; GUI Wei-hua; WU Min; YU Shou-yi
2005-01-01
This study analyzes drift phenomena of deterministic crowding and probabilistic crowding by using an equivalence class model and expectation proportion equations. It is proved that the replacement errors of deterministic crowding cause the population to converge to a single individual, thus resulting in premature stagnation or the loss of optional optima, and that probabilistic crowding can maintain equilibrium among multiple subpopulations when the population size is adequately large. An improved niching method using clustering crowding is proposed. By analyzing the topology of the fitness landscape using a hill-valley function and extending the search space for similarity analysis, clustering crowding determines the locality of the search space more accurately, thus greatly decreasing the replacement errors of crowding. The integration of deterministic and probabilistic replacement increases the capacity for both parallel local hill climbing and maintaining multiple subpopulations. The experimental results on optimizing various multimodal functions show that the performance of clustering crowding, such as the number of effective peaks maintained, average peak ratio and global optimum ratio, is uniformly superior to that of evolutionary algorithms using fitness sharing, simple deterministic crowding and probabilistic crowding.
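A single replacement step of deterministic crowding, the scheme whose drift the study analyzes, can be sketched as follows; the bimodal fitness function and the pairing rule shown are standard illustrations, not the paper's exact setup:

```python
def deterministic_crowding(p1, p2, c1, c2, fitness, dist):
    """One replacement step: each child competes only with the
    parent it is most similar to, preserving separate niches."""
    if dist(p1, c1) + dist(p2, c2) <= dist(p1, c2) + dist(p2, c1):
        pairs = [(p1, c1), (p2, c2)]
    else:
        pairs = [(p1, c2), (p2, c1)]
    return [c if fitness(c) > fitness(p) else p for p, c in pairs]

def fitness(x):
    # bimodal landscape with peaks at x = -1 and x = +1
    return -min(abs(x - 1), abs(x + 1))

survivors = deterministic_crowding(-0.9, 0.9, -1.0, 0.8,
                                   fitness, lambda a, b: abs(a - b))
```

Because each child only replaces its most similar parent, the subpopulation near each peak survives; replacement errors arise when the similarity pairing is wrong, which is what clustering crowding aims to reduce.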
Hierarchical N-body methods on shared address space multiprocessors.
Holt, C.; Singh, J. P.
The authors examine the parallelization issues in and architectural implications of the two dominant adaptive hierarchical N-body methods: the Barnes-Hut method and the Fast Multipole Method. They show that excellent parallel performance can be obtained on cache-coherent shared address space multiprocessors, by demonstrating performance on three cache-coherent machines: the Stanford DASH, the Kendall Square Research KSR-1, and the Silicon Graphics Challenge. Even on machines that have their main memory physically distributed among processing nodes and highly nonuniform memory access costs, the speedups are obtained without any attention to where memory is allocated on the machine. The authors show that the reason for good performance is the high degree of temporal locality afforded by the applications, and the fact that working sets are small (and scale slowly) so that caching shared data automatically in hardware exploits this locality very effectively. Even if data distribution in main memory is assumed to be free, it does not help very much. Finally, they address a potential bottleneck in scaling the parallelism to large machines, namely the fraction of time spent in building the tree used by hierarchical N-body methods.
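The Barnes-Hut scheme the authors parallelize can be shown serially in a few dozen lines: build a quadtree, then evaluate the far field through the usual opening-angle test. This is a minimal 2-D sketch with a monopole approximation and an assumed theta of 0.5, not the parallel implementation studied in the paper:

```python
import math
import random

class Cell:
    """Square quadtree cell storing total mass and mass-weighted centroid sums."""
    def __init__(self, cx, cy, half):
        self.cx, self.cy, self.half = cx, cy, half
        self.mass = 0.0
        self.mx = self.my = 0.0      # sums of m*x and m*y for the centre of mass
        self.children = None         # four subcells once subdivided
        self.body = None             # a single body while the cell is a leaf

    def child_for(self, x, y):
        return self.children[2 * (x >= self.cx) + (y >= self.cy)]

    def insert(self, x, y, m):
        self.mass += m
        self.mx += m * x
        self.my += m * y
        if self.children is None and self.body is None:
            self.body = (x, y, m)    # empty leaf: store the body here
            return
        if self.children is None:    # occupied leaf: subdivide and push down
            h = self.half / 2.0
            self.children = [Cell(self.cx + dx * h, self.cy + dy * h, h)
                             for dx in (-1, 1) for dy in (-1, 1)]
            bx, by, bm = self.body
            self.body = None
            self.child_for(bx, by).insert(bx, by, bm)
        self.child_for(x, y).insert(x, y, m)

def potential(cell, x, y, theta=0.5):
    """Gravitational potential (G = 1) at (x, y) with opening criterion s/r < theta."""
    if cell.mass == 0.0:
        return 0.0
    comx, comy = cell.mx / cell.mass, cell.my / cell.mass
    r = math.hypot(x - comx, y - comy)
    if cell.children is None or (r > 0 and 2.0 * cell.half / r < theta):
        return cell.mass / r if r > 1e-12 else 0.0   # skip self-interaction
    return sum(potential(c, x, y, theta) for c in cell.children)

random.seed(0)
bodies = [(random.random(), random.random(), 1.0) for _ in range(500)]
root = Cell(0.5, 0.5, 0.5)
for bx, by, bm in bodies:
    root.insert(bx, by, bm)

x0, y0, _ = bodies[0]
exact = sum(m / math.hypot(x0 - bx, y0 - by)
            for bx, by, m in bodies if (bx, by) != (x0, y0))
approx = potential(root, x0, y0)
print(abs(approx - exact) / exact)   # small relative error at a fraction of the work
```

The temporal locality the authors emphasize comes from exactly this traversal: nearby evaluation points touch almost the same set of tree cells, so cached cells are reused heavily.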
Scoring methods used in cluster analysis
Sirota, Sergej
2014-01-01
The aim of the thesis is to compare how correctly different methods of cluster analysis classify the objects of a dataset into known groups. The theoretical section first describes the steps needed to prepare a data file for cluster analysis. The next theoretical section is dedicated to cluster analysis itself: it describes ways of measuring the similarity of objects and clusters, and presents the methods of cluster analysis used in the practical part of this thesis. In the practical part a...
Clustering of galaxies in a hierarchical universe - II. Evolution to high redshift
Kauffmann, Guinevere; Colberg, Jörg M.; Diaferio, Antonaldo; White, Simon D. M.
1999-08-01
In hierarchical cosmologies the evolution of galaxy clustering depends both on cosmological quantities such as Omega, Lambda and P(k), which determine how collapsed structures - dark matter haloes - form and evolve, and on the physical processes - cooling, star formation, radiative and hydrodynamic feedback - which drive the formation of galaxies within these merging haloes. In this paper we combine dissipationless cosmological N-body simulations and semi-analytic models of galaxy formation in order to study how these two aspects interact. We focus on the differences in clustering predicted for galaxies of differing luminosity, colour, morphology and star formation rate, and on what these differences can teach us about the galaxy formation process. We show that a `dip' in the amplitude of galaxy correlations between z=0 and z=1 can be an important diagnostic. Such a dip occurs in low-density CDM models, because structure forms early, and dark matter haloes of mass ~10^12M_solar, containing galaxies with luminosities ~L_*, are unbiased tracers of the dark matter over this redshift range; their clustering amplitude then evolves similarly to that of the dark matter. At higher redshifts, bright galaxies become strongly biased and the clustering amplitude increases again. In high density models, structure forms late, and bias evolves much more rapidly. As a result, the clustering amplitude of L_* galaxies remains constant from z=0 to z=1. The strength of these effects is sensitive to sample selection. The dip becomes weaker for galaxies with lower star formation rates, redder colours, higher luminosities and earlier morphological types. We explain why this is the case, and how it is related to the variation with redshift of the abundance and environment of the observed galaxies. We also show that the relative peculiar velocities of galaxies are biased low in our models, but that this effect is never very strong. Studies of clustering evolution as a function of galaxy
Constructive Epistemic Modeling: A Hierarchical Bayesian Model Averaging Method
Tsai, F. T. C.; Elshall, A. S.
2014-12-01
Constructive epistemic modeling is the idea that our understanding of a natural system through a scientific model is a mental construct that continually develops through learning about and from the model. Using the hierarchical Bayesian model averaging (HBMA) method [1], this study shows that segregating different uncertain model components through a BMA tree of posterior model probabilities, model prediction, within-model variance, between-model variance and total model variance serves as a learning tool [2]. First, the BMA tree of posterior model probabilities permits the comparative evaluation of the candidate propositions of each uncertain model component. Second, systemic model dissection is imperative for understanding the individual contribution of each uncertain model component to the model prediction and variance. Third, the hierarchical representation of the between-model variance facilitates the prioritization of the contribution of each uncertain model component to the overall model uncertainty. We illustrate these concepts using the groundwater modeling of a siliciclastic aquifer-fault system. The sources of uncertainty considered are from geological architecture, formation dip, boundary conditions and model parameters. The study shows that the HBMA analysis helps in advancing knowledge about the model rather than forcing the model to fit a particular understanding or merely averaging several candidate models. [1] Tsai, F. T.-C., and A. S. Elshall (2013), Hierarchical Bayesian model averaging for hydrostratigraphic modeling: Uncertainty segregation and comparative evaluation. Water Resources Research, 49, 5520-5536, doi:10.1002/wrcr.20428. [2] Elshall, A.S., and F. T.-C. Tsai (2014). Constructive epistemic modeling of groundwater flow with geological architecture and boundary condition uncertainty under Bayesian paradigm, Journal of Hydrology, 517, 105-119, doi: 10.1016/j.jhydrol.2014.05.027.
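The BMA-tree bookkeeping described above reduces to the law of total variance applied level by level. A toy sketch with invented numbers, two geology scenarios each carrying two boundary-condition variants, shows how the between-model variance at each level isolates one uncertainty source; all weights, predictions and variances are hypothetical:

```python
import numpy as np

# hypothetical BMA tree: geology scenario -> boundary-condition variant (leaf model)
# each leaf: (posterior weight, prediction mean, within-model variance)
tree = {
    "geologyA": [(0.3, 10.0, 1.0), (0.2, 12.0, 1.5)],
    "geologyB": [(0.4, 15.0, 2.0), (0.1, 18.0, 1.0)],
}

def average(models):
    """BMA mean, expected within-variance and between-model variance."""
    w = np.array([m[0] for m in models], float)
    w = w / w.sum()
    mu = np.array([m[1] for m in models])
    var = np.array([m[2] for m in models])
    mean = float(w @ mu)
    within = float(w @ var)                  # E[Var | model choice]
    between = float(w @ (mu - mean) ** 2)    # Var[E | model choice]
    return mean, within, between

# level 1: average over boundary conditions within each geology scenario;
# each scenario's total variance is carried upward as its "within" term
level1 = []
for leaves in tree.values():
    m1, w1, b1 = average(leaves)
    level1.append((sum(m[0] for m in leaves), m1, w1 + b1))

# level 2: average over geology scenarios
mean, within, between = average(level1)
print(mean, within, between)   # 'between' is the variance contributed by the geology choice
```

Summing `within + between` at the top reproduces the total variance of a flat, single-level BMA over the four leaf models, so the hierarchy changes the attribution of uncertainty, not its total.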
Yi, Wen-Bin; Shen, Li; Qi, Yin-Feng; Tang, Hong
2011-09-01
This paper introduces probabilistic latent semantic analysis (PLSA) to image clustering, and an effective image clustering algorithm for hyperspectral images using the semantic information from PLSA is proposed. Firstly, the ISODATA algorithm is used to obtain an initial clustering of the hyperspectral image, and the clusters of this initial result are treated as the visual words of PLSA. Secondly, an object-oriented image segmentation algorithm is used to partition the hyperspectral image, and segments with relatively pure pixels are regarded as the documents of PLSA. Thirdly, several methods for estimating the best number of cluster centers are combined to determine the number of latent semantic topics. The conditional distributions of visual words in topics and the mixtures of topics in different documents are then estimated using PLSA. Finally, the conditional probabilities of the latent semantic topics are distinguished using a statistical pattern recognition method, the topic type of each visual word in each document is assigned, and the clustering result of the hyperspectral image is obtained. Experimental results show that the clusters of the proposed algorithm are better than those of K-MEANS and ISODATA in terms of object-oriented properties, and the clustering result is closer to the real spatial distribution of the land surface.
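The EM updates behind PLSA are compact enough to sketch directly. Below, a tiny invented word-by-document count matrix stands in for the visual-word statistics of image segments; the counts, the choice of two topics and the iteration budget are all assumptions of the demo, on which EM reliably recovers the block structure:

```python
import numpy as np

rng = np.random.default_rng(0)
# toy "visual word" x "document (segment)" count matrix: 6 words, 4 documents
N = np.array([[10, 8, 0, 1],
              [ 9, 7, 1, 0],
              [ 8, 9, 0, 1],
              [ 0, 1, 9, 8],
              [ 1, 0, 8, 9],
              [ 0, 1, 9, 7]], dtype=float)
W, D, Z = N.shape[0], N.shape[1], 2

p_w_z = rng.random((W, Z)); p_w_z /= p_w_z.sum(0)   # P(w|z)
p_z_d = rng.random((Z, D)); p_z_d /= p_z_d.sum(0)   # P(z|d)

for _ in range(100):
    # E-step: P(z|w,d) proportional to P(w|z) P(z|d)
    post = p_w_z[:, :, None] * p_z_d[None, :, :]     # W x Z x D
    post /= post.sum(1, keepdims=True) + 1e-12
    # M-step: re-estimate both distributions from expected counts
    nz = N[:, None, :] * post                        # W x Z x D expected counts
    p_w_z = nz.sum(2); p_w_z /= p_w_z.sum(0)
    p_z_d = nz.sum(0); p_z_d /= p_z_d.sum(0)

labels = p_z_d.argmax(0)   # dominant latent topic per document
print(labels)
```

In the algorithm above, the final step of reading off each document's dominant topic corresponds to assigning each image segment its cluster.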
Hierarchical Matrices Method and Its Application in Electromagnetic Integral Equations
Directory of Open Access Journals (Sweden)
Han Guo
2012-01-01
The hierarchical (H-) matrices method is a general mathematical framework providing a highly compact representation and efficient numerical arithmetic. When applied in integral-equation- (IE-) based computational electromagnetics, H-matrices can be regarded as a fast algorithm; therefore, both the CPU time and the memory requirement are reduced significantly. The kernel-independent feature also makes the method suitable for any kind of integral equation. To solve an H-matrices system, Krylov iteration methods can be employed with appropriate preconditioners, and direct solvers based on the hierarchical structure of H-matrices are also available with high efficiency and accuracy, which is a unique advantage compared to other fast algorithms. In this paper, a novel sparse approximate inverse (SAI) preconditioner in multilevel fashion is proposed to accelerate the convergence rate of Krylov iterations for solving H-matrices systems in electromagnetic applications, and a group of parallel fast direct solvers is developed for dealing with multiple right-hand-side cases. Finally, numerical experiments are given to demonstrate the advantages of the proposed multilevel preconditioner compared to conventional “single level” preconditioners and the practicability of the fast direct solvers for arbitrarily complex structures.
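The compression that makes H-matrices work can be seen on a single admissible block: for well-separated point clusters, the corresponding kernel block is numerically low-rank. A truncated SVD, used here purely as a rank revealer where practical codes would use ACA or similar, demonstrates the effect on an invented 1-D example:

```python
import numpy as np

# two well-separated 1-D point clusters: the corresponding kernel block is
# "admissible" in H-matrix terminology and therefore numerically low-rank
x = np.linspace(0.0, 1.0, 200)
y = np.linspace(3.0, 4.0, 200)
A = 1.0 / np.abs(x[:, None] - y[None, :])

U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = int((s / s[0] > 1e-8).sum())          # numerical rank at 1e-8 relative tolerance
Ak = (U[:, :k] * s[:k]) @ Vt[:k]          # rank-k factorised representation

err = np.linalg.norm(A - Ak) / np.linalg.norm(A)
ratio = k * (A.shape[0] + A.shape[1]) / A.size   # storage of factors vs full block
print(k, err, ratio)
```

Storing the two rank-k factors instead of the dense block gives the memory and matvec savings the abstract refers to; an H-matrix applies this recursively to every admissible block of the full operator.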
The SMART CLUSTER METHOD - adaptive earthquake cluster analysis and declustering
Schaefer, Andreas; Daniell, James; Wenzel, Friedemann
2016-04-01
Earthquake declustering is an essential part of almost any statistical analysis of spatial and temporal properties of seismic activity, with usual applications comprising probabilistic seismic hazard assessments (PSHAs) and earthquake prediction methods. The nature of earthquake clusters and the subsequent declustering of earthquake catalogues play a crucial role in determining the magnitude-dependent earthquake return period and its spatial variation. Various methods have been developed by other researchers to address this issue, with complexity ranging from rather simple statistical window methods to complex epidemic models. This study introduces the smart cluster method (SCM), a new methodology to identify earthquake clusters, which uses an adaptive point process for spatio-temporal identification. Hereby, an adaptive search algorithm for data point clusters is adopted: it uses the earthquake density in the spatio-temporal neighbourhood of each event to adjust the search properties. The identified clusters are subsequently analysed to determine directional anisotropy, focusing on a strong correlation along the rupture plane, and the search space is adjusted with respect to these directional properties. In the case of rapid subsequent ruptures like the 1992 Landers sequence or the 2010/2011 Darfield-Christchurch events, an adaptive classification procedure is applied to disassemble subsequent ruptures which may have been grouped into an individual cluster, using near-field searches, support vector machines and temporal splitting. The steering parameters of the search behaviour are linked to local earthquake properties like the magnitude of completeness, earthquake density and Gutenberg-Richter parameters. The method is capable of identifying and classifying earthquake clusters in space and time. It is tested and validated using earthquake data from California and New Zealand. As a result of the cluster identification process, each event in
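A much simpler relative of the SCM, the classical magnitude-dependent window method, conveys what "declustering" does: every smaller event inside a larger event's space-time window is flagged as dependent. The window formulas and the five-event catalogue below are invented for illustration; they are neither the calibrated Gardner-Knopoff tables nor the SCM itself:

```python
import math

# toy catalogue: (time_days, x_km, y_km, magnitude)
catalogue = [
    (0.0,     0.0,   0.0, 6.0),   # mainshock
    (0.5,     5.0,   2.0, 4.0),   # aftershock candidates
    (2.0,    -3.0,   4.0, 3.5),
    (400.0, 120.0,  80.0, 5.0),   # independent event far away and much later
    (401.0, 121.0,  81.0, 3.0),   # its aftershock
]

def windows(mag):
    """Illustrative space/time windows growing with magnitude (assumed coefficients)."""
    return 10 ** (0.12 * mag + 0.3), 10 ** (0.5 * mag - 1.0)   # km, days

def decluster(events):
    events = sorted(events, key=lambda e: -e[3])   # treat largest magnitudes first
    removed = [False] * len(events)
    mainshocks = []
    for i, (t, x, y, m) in enumerate(events):
        if removed[i]:
            continue
        mainshocks.append(events[i])
        r_km, t_days = windows(m)
        for j, (tj, xj, yj, mj) in enumerate(events):
            if j != i and not removed[j] and mj < m \
               and abs(tj - t) <= t_days and math.hypot(xj - x, yj - y) <= r_km:
                removed[j] = True                  # dependent event: drop from catalogue
    return mainshocks

mains = decluster(catalogue)
print([e[3] for e in mains])   # only the two independent mainshocks survive
```

The SCM replaces these fixed windows with search regions adapted to local density, anisotropy and Gutenberg-Richter parameters, but the input/output contract of the declustering step is the same.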
Parallel iterative solvers and preconditioners using approximate hierarchical methods
Energy Technology Data Exchange (ETDEWEB)
Grama, A.; Kumar, V.; Sameh, A. [Univ. of Minnesota, Minneapolis, MN (United States)
1996-12-31
In this paper, we report results on the performance, convergence, and accuracy of a parallel GMRES solver for boundary element methods. The solver uses a hierarchical approximate matrix-vector product based on a hybrid Barnes-Hut / Fast Multipole Method. We study the impact of various accuracy parameters on the convergence and show that with minimal loss in accuracy, our solver yields significant speedups. We demonstrate the excellent parallel efficiency and scalability of our solver. The combined speedups from approximation and parallelism represent an improvement of several orders of magnitude in solution time. We also develop fast and parallelizable preconditioners for this problem. We report on the performance of an inner-outer scheme and a preconditioner based on a truncated Green's function. Experimental results on a 256-processor Cray T3D are presented.
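The solver pattern, a Krylov iteration driven by an approximate hierarchical matrix-vector product, can be sketched with SciPy's `LinearOperator`. Here a truncated SVD of the kernel plays the role of the Barnes-Hut/FMM product; the kernel, sizes, scaling and tolerances are invented for a well-conditioned model problem, and a real BEM code would apply the tree-based matvec instead:

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

rng = np.random.default_rng(0)
n = 400
pts = rng.random((n, 2))
d = np.linalg.norm(pts[:, None] - pts[None, :], axis=2)
K = 1.0 / (d + 0.1)                   # smooth, dense BEM-like kernel
A = np.eye(n) + K / (10 * n)          # scaled into a well-conditioned model system

# truncated SVD of K stands in for the hierarchical (Barnes-Hut/FMM) matvec:
# a cheaper product with a controlled approximation error
U, s, Vt = np.linalg.svd(K)
r = int((s / s[0] > 1e-8).sum())
U, s, Vt = U[:, :r], s[:r], Vt[:r]

def approx_matvec(v):
    # approximate y = A v without ever forming A exactly
    return v + (U * s) @ (Vt @ v) / (10 * n)

b = rng.random(n)
x, info = gmres(LinearOperator((n, n), matvec=approx_matvec), b, atol=1e-10)
res = np.linalg.norm(A @ x - b) / np.linalg.norm(b)
print(info, res)   # info == 0 signals convergence
```

GMRES never needs the matrix entries, only products with vectors, which is exactly why an approximate hierarchical matvec can be dropped in transparently.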
Convex Decomposition Based Cluster Labeling Method for Support Vector Clustering
Institute of Scientific and Technical Information of China (English)
Yuan Ping; Ying-Jie Tian; Ya-Jian Zhou; Yi-Xian Yang
2012-01-01
Support vector clustering (SVC) is an important boundary-based clustering algorithm in multiple applications for its capability of handling arbitrary cluster shapes. However, SVC's popularity is degraded by its highly intensive time complexity and poor labelling performance. To overcome such problems, we present a novel efficient and robust convex decomposition based cluster labeling (CDCL) method based on the topological property of the dataset. The CDCL method decomposes the implicit cluster into convex hulls, each comprising a subset of support vectors (SVs). Using a robust algorithm applied to the nearest neighbouring convex hulls, the adjacency matrix of convex hulls is built up for finding the connected components; the remaining data points are then assigned the label of the nearest convex hull. The approach's validity is guaranteed by geometric proofs. Time complexity analysis and comparative experiments suggest that CDCL improves both the efficiency and the clustering quality significantly.
The relationship between carbon dioxide emission and economic growth: Hierarchical structure methods
Deviren, Seyma Akkaya; Deviren, Bayram
2016-06-01
Carbon dioxide (CO2) emission has an essential role in the current debate on sustainable development and environmental protection. CO2 emission is also directly linked with the use of energy, which plays a focal role both for production and consumption in the world economy. The relationship between CO2 emission and economic growth therefore has significant implications for environmental and economic policies. In this study, within the scope of sociophysics, the topology, taxonomy and relationships among 33 countries, which have among the highest CO2 emission and economic growth values, are investigated by using hierarchical structure methods, such as the minimal spanning tree (MST) and the hierarchical tree (HT), over the period 1970-2010. The average linkage cluster analysis (ALCA) is also used to examine the cluster structure more clearly in HTs. According to their proximity, economic ties and economic growth, different clusters of countries are identified from the structural topologies of these trees. We have found that the high income & OECD countries are closely connected to each other and are isolated from the upper middle and lower middle income countries in the MSTs, which are obtained both for CO2 emission and for economic growth. Moreover, the high income & OECD clusters are homogeneous with respect to the economic activities and economic ties of the countries. We also note that the Group of Seven (G7) countries (CAN, ENG, FRA, GER, ITA, JPN, USA) are connected to each other and are located at the center of the MST in the results for CO2 emission. The same analysis may also be successfully applied to other environmental sources and different countries.
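The MST construction used in such studies is easy to reproduce: convert the correlation matrix c into the Mantegna distance d_ij = sqrt(2(1 - c_ij)) and take the minimum spanning tree of the resulting distance graph. The six synthetic "country" series below are invented to mimic two tightly linked economic blocks:

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree

rng = np.random.default_rng(0)
# hypothetical growth-rate series for six countries forming two tight blocks
names = ["A", "B", "C", "D", "E", "F"]
t = 40
block1, block2 = rng.normal(size=t), rng.normal(size=t)
series = np.array([block1 + 0.2 * rng.normal(size=t) for _ in range(3)] +
                  [block2 + 0.2 * rng.normal(size=t) for _ in range(3)])

corr = np.corrcoef(series)
dist = np.sqrt(np.clip(2.0 * (1.0 - corr), 0.0, None))   # Mantegna metric distance
mst = minimum_spanning_tree(dist).toarray()

edges = sorted((names[i], names[j]) for i, j in zip(*np.nonzero(mst)))
print(edges)   # 5 edges; the two blocks are joined by a single inter-block link
```

Reading clusters off the MST amounts to cutting its few long bridge edges, which is why the tree recovers the block structure that a hierarchical tree would display as two deep branches.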
DEFF Research Database (Denmark)
Mi, Jianli; Lock, Nina; Sun, Ting;
2010-01-01
A simple biomolecule-assisted hydrothermal approach has been developed for the fabrication of Bi2Te3 thermoelectric nanomaterials. The product has a nanostring-cluster hierarchical structure which is composed of ordered and aligned platelet-like crystals. The platelets are 100 nm in diameter...
Henry, David; Dymnicki, Allison B; Mohatt, Nathaniel; Allen, James; Kelly, James G
2015-10-01
Qualitative methods potentially add depth to prevention research but can produce large amounts of complex data even with small samples. Studies conducted with culturally distinct samples often produce voluminous qualitative data but may lack sufficient sample sizes for sophisticated quantitative analysis. Currently lacking in mixed-methods research are methods allowing for more fully integrating qualitative and quantitative analysis techniques. Cluster analysis can be applied to coded qualitative data to clarify the findings of prevention studies by aiding efforts to reveal such things as the motives of participants for their actions and the reasons behind counterintuitive findings. By clustering groups of participants with similar profiles of codes in a quantitative analysis, cluster analysis can serve as a key component in mixed-methods research. This article reports two studies. In the first study, we conduct simulations to test the accuracy of cluster assignment using three different clustering methods with binary data as produced when coding qualitative interviews. Results indicated that hierarchical clustering, K-means clustering, and latent class analysis produced similar levels of accuracy with binary data and that the accuracy of these methods did not decrease with samples as small as 50. Whereas the first study explores the feasibility of using common clustering methods with binary data, the second study provides a "real-world" example using data from a qualitative study of community leadership connected with a drug abuse prevention project. We discuss the implications of this approach for conducting prevention research, especially with small samples and culturally distinct communities.
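The first study's setup, clustering binary code profiles and scoring recovery accuracy, can be reproduced in miniature with scipy. Two invented latent profiles over 12 codes replace the paper's simulation conditions; latent class analysis is omitted here because it has no stock scipy implementation:

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import pdist

rng = np.random.default_rng(0)
# two invented latent profiles over 12 binary codes, 25 participants each
p1 = np.array([0.95] * 6 + [0.05] * 6)
p2 = np.array([0.05] * 6 + [0.95] * 6)
X = np.vstack([rng.random((25, 12)) < p1,
               rng.random((25, 12)) < p2]).astype(int)
truth = np.array([0] * 25 + [1] * 25)

# hierarchical clustering of the binary profiles with the Jaccard distance
Z = linkage(pdist(X.astype(bool), metric="jaccard"), method="average")
labels = fcluster(Z, t=2, criterion="maxclust") - 1

# cluster-assignment accuracy up to label permutation
acc = max(np.mean(labels == truth), np.mean(labels == 1 - truth))
print(acc)
```

The same accuracy computation works for K-means or any other assignment, which is how the simulation study compares methods on equal footing at small sample sizes.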
Henry, David; Dymnicki, Allison B.; Mohatt, Nathaniel; Allen, James; Kelly, James G.
2016-01-01
Qualitative methods potentially add depth to prevention research, but can produce large amounts of complex data even with small samples. Studies conducted with culturally distinct samples often produce voluminous qualitative data, but may lack sufficient sample sizes for sophisticated quantitative analysis. Currently lacking in mixed methods research are methods allowing for more fully integrating qualitative and quantitative analysis techniques. Cluster analysis can be applied to coded qualitative data to clarify the findings of prevention studies by aiding efforts to reveal such things as the motives of participants for their actions and the reasons behind counterintuitive findings. By clustering groups of participants with similar profiles of codes in a quantitative analysis, cluster analysis can serve as a key component in mixed methods research. This article reports two studies. In the first study, we conduct simulations to test the accuracy of cluster assignment using three different clustering methods with binary data as produced when coding qualitative interviews. Results indicated that hierarchical clustering, K-Means clustering, and latent class analysis produced similar levels of accuracy with binary data, and that the accuracy of these methods did not decrease with samples as small as 50. Whereas the first study explores the feasibility of using common clustering methods with binary data, the second study provides a “real-world” example using data from a qualitative study of community leadership connected with a drug abuse prevention project. We discuss the implications of this approach for conducting prevention research, especially with small samples and culturally distinct communities. PMID:25946969
Manual hierarchical clustering of regional geochemical data using a Bayesian finite mixture model
Ellefsen, Karl J.; Smith, David
2016-01-01
Interpretation of regional scale, multivariate geochemical data is aided by a statistical technique called “clustering.” We investigate a particular clustering procedure by applying it to geochemical data collected in the State of Colorado, United States of America. The clustering procedure partitions the field samples for the entire survey area into two clusters. The field samples in each cluster are partitioned again to create two subclusters, and so on. This manual procedure generates a hierarchy of clusters, and the different levels of the hierarchy show geochemical and geological processes occurring at different spatial scales. Although there are many different clustering methods, we use Bayesian finite mixture modeling with two probability distributions, which yields two clusters. The model parameters are estimated with Hamiltonian Monte Carlo sampling of the posterior probability density function, which usually has multiple modes. Each mode has its own set of model parameters; each set is checked to ensure that it is consistent both with the data and with independent geologic knowledge. The set of model parameters that is most consistent with the independent geologic knowledge is selected for detailed interpretation and partitioning of the field samples.
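The recursive two-cluster partitioning can be imitated with any two-component mixture fit. The sketch below substitutes scikit-learn's EM-based `GaussianMixture` for the paper's Hamiltonian Monte Carlo estimation, on invented 2-D "geochemical" data, so it shows the hierarchy-building procedure rather than the authors' inference machinery:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# synthetic 2-D data: two provinces, one of which splits again at a finer scale
g1 = rng.normal([0.0, 0.0], 0.3, (100, 2))
g2 = rng.normal([4.0, 0.0], 0.3, (100, 2))
g3 = rng.normal([4.0, 3.0], 0.3, (100, 2))
X = np.vstack([g1, g2, g3])

def bisect(X, depth, path=""):
    """Recursively split the samples with a two-component mixture model."""
    if depth == 0 or len(X) < 20:
        return {path: len(X)}          # leaf of the cluster hierarchy
    gm = GaussianMixture(n_components=2, random_state=0).fit(X)
    lab = gm.predict(X)
    out = {}
    for c in (0, 1):
        out.update(bisect(X[lab == c], depth - 1, path + str(c)))
    return out

leaves = bisect(X, depth=2)
print(leaves)   # leaf sizes of the two-level cluster hierarchy
```

Each level of the recursion corresponds to one level of the paper's hierarchy, with coarse splits reflecting large-scale processes and deeper splits reflecting finer-scale ones.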
An Overview on Clustering Methods
Madhulatha, T Soni
2012-01-01
Clustering is a common technique for statistical data analysis, used in many fields, including machine learning, data mining, pattern recognition, image analysis and bioinformatics. Clustering is the process of grouping similar objects into different groups or, more precisely, the partitioning of a data set into subsets, so that the data in each subset are similar according to some defined distance measure. This paper covers clustering algorithms, their benefits and their applications, and concludes by discussing some limitations.
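For the hierarchical algorithms such an overview surveys, a few lines of scipy show the workflow they all share: condensed distances, a linkage matrix, then a cut. The two well-separated synthetic blobs are an assumption of the demo, chosen so that every linkage rule agrees:

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import pdist

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.5, (30, 3)),     # group 1
               rng.normal(3.0, 0.5, (30, 3))])    # group 2
d = pdist(X)                                      # condensed pairwise distances

results = {}
for method in ("single", "complete", "average", "ward"):
    labels = fcluster(linkage(d, method=method), t=2, criterion="maxclust")
    results[method] = labels
    print(method, np.bincount(labels)[1:])        # both groups recovered intact
```

On overlapping or elongated structures the methods diverge, which is exactly the behaviour the simulation study in the lead article of this collection maps out.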
A method of transition conflict resolving in hierarchical control
Łabiak, Grzegorz
2016-09-01
The paper concerns the problem of automatically solving transition conflicts in hierarchical concurrent state machines (also known as UML state machines). Preparing a formal specification of a behaviour free from conflicts can be very complex for the designer. In this paper, a method for solving conflicts through the modification of transition predicates is proposed. Partially specified predicates in the nondeterministic diagram are transformed into a symbolic Boolean space whose points code all possible valuations of the transition predicates. Next, all valuations satisfying the partial specification are logically multiplied by a function which represents all possible orthogonal predicate valuations. The result of this operation contains all collections of predicates which, under the given partial specification, make the original diagram conflict-free and deterministic.
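At truth-table scale the idea reduces to set operations over input valuations: find the valuations where two guards overlap, then strengthen one guard so the pair becomes orthogonal. This toy, with two Boolean inputs and invented guards, is a far smaller cousin of the symbolic Boolean-space construction in the paper:

```python
from itertools import product

def minterms(pred):
    """The set of input valuations (a, b) on which a guard predicate fires."""
    return {v for v in product((0, 1), repeat=2) if pred(*v)}

# two transitions leaving the same state with overlapping (conflicting) guards
t1 = minterms(lambda a, b: bool(a))        # fires whenever a
t2 = minterms(lambda a, b: bool(a or b))   # fires whenever a or b

conflict = t1 & t2                          # valuations enabling both transitions
print("conflicting valuations:", sorted(conflict))

# resolution: strengthen t2's predicate with the negation of t1's,
# i.e. t2 := (a or b) and not a, which equals (b and not a)
t2_fixed = t2 - t1
print("t2 after resolution:", sorted(t2_fixed))
```

The symbolic method in the paper does the same subtraction over all transitions at once, encoded as Boolean functions rather than explicit minterm sets, so it scales past enumerable truth tables.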
Clustering of Galaxies in a Hierarchical Universe 2 evolution to High Redshift
Kauffmann, G; Diaferio, A; White, S D M; Kauffmann, Guinevere; Colberg, Joerg M.; Diaferio, Antonaldo; White, Simon D.M.
1998-01-01
In hierarchical cosmologies the evolution of galaxy clustering depends both on cosmological quantities such as Omega and Lambda, which determine how dark matter halos form and evolve, and on the physical processes - cooling, star formation and feedback - which drive the formation of galaxies within these merging halos. In this paper, we combine dissipationless cosmological N-body simulations and semi-analytic models of galaxy formation in order to study how these two aspects interact. We focus on the differences in clustering predicted for galaxies of differing luminosity, colour, morphology and star formation rate and on what these differences can teach us about the galaxy formation process. We show that a "dip" in the amplitude of galaxy correlations between z=0 and z=1 can be an important diagnostic. Such a dip occurs in low-density CDM models because structure forms early and dark matter halos of 10**12 solar masses, containing galaxies with luminosities around L*, are unbiased tracers of the dark matter ...
Directory of Open Access Journals (Sweden)
A. Meenakshi
2016-08-01
Resource allocation is the task of assigning available resources to different uses. In the context of an entire economy, resources can be assigned by different means, such as markets or central planning. Cloud computing has become a new-age technology with huge potential in enterprises and markets; clouds can make it possible to access applications and associated data from anywhere. The fundamental motive of resource allocation is to allot the available resources in the most effective manner. In the initial phase, a representative resource usage distribution for a group of nodes with identical resource usage patterns is evaluated as a resource bundle, which can easily be employed to locate a group of nodes fulfilling a standard criterion. In this paper, an innovative clustering-based resource aggregation method, the Improved Hierarchal Agglomerative Clustering Algorithm (IHAC), is introduced to realize a compact representation of a set of identically behaving nodes for scalability. In the subsequent phase, concerned with dynamic resource allocation, a hybrid optimization technique is brought in. The technique is devised for scheduling functions to cloud resources while duly considering both financial and evaluation expenses. The efficiency of the novel resource allocation system is assessed by means of several parameters, such as reliability, reusability and certain other metrics. The optimal path choice is the consequence of the hybrid optimization approach, which allocates the available resources based on the optimal path.
Topology of the correlation networks among major currencies using hierarchical structure methods
Keskin, Mustafa; Deviren, Bayram; Kocakaplan, Yusuf
2011-02-01
We studied the topology of correlation networks among 34 major currencies using the concept of a minimal spanning tree and hierarchical tree for the full years of 2007-2008, when major economic turbulence occurred. We used the USD (US Dollar) and the TL (Turkish Lira) as numeraires, in which the USD was the major currency and the TL was the minor currency. We derived a hierarchical organization and constructed minimal spanning trees (MSTs) and hierarchical trees (HTs) for the full years of 2007, 2008 and for the 2007-2008 period. We applied a bootstrap technique, using replicas of the data, to associate a reliability value with the links of the MSTs and HTs. We also used average linkage cluster analysis for obtaining the hierarchical trees in the case of the TL as the numeraire. These trees are useful tools for understanding and detecting the global structure, taxonomy and hierarchy in financial data. We illustrated how the minimal spanning trees and their related hierarchical trees developed over a period of time. From these trees we identified different clusters of currencies according to their proximity and economic ties. The clustered structure of the currencies and the key currency in each cluster were obtained, and we found that the clusters matched nicely with the geographical regions of the corresponding countries in the world, such as Asia or Europe. As expected, the key currencies were generally those showing major economic activity.
An Adaptive Method for Mining Hierarchical Spatial Co-location Patterns
Directory of Open Access Journals (Sweden)
CAI Jiannan
2016-04-01
Mining spatial co-location patterns plays a key role in spatial data mining. Spatial co-location patterns refer to subsets of features whose objects are frequently located in close geographic proximity. Due to spatial heterogeneity, spatial co-location patterns are usually not the same across geographic space. However, existing methods are mainly designed to discover global spatial co-location patterns and are not suitable for detecting regional spatial co-location patterns. On that account, an adaptive method for mining hierarchical spatial co-location patterns is proposed in this paper. Firstly, global spatial co-location patterns are detected, and other non-prevalent co-location patterns are identified as candidate regional co-location patterns. Then, for each candidate pattern, an adaptive spatial clustering method is used to delineate localities of that pattern in the study area, and the participation ratio is utilized to measure the prevalence of the candidate co-location pattern. Finally, an overlap operation is developed to deduce localities of (k+1)-size co-location patterns from localities of k-size co-location patterns. Experiments on both simulated and real-life datasets show that the proposed method is effective for detecting hierarchical spatial co-location patterns.
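The participation ratio used as the prevalence measure has a direct definition: the fraction of a feature's objects that have a neighbour of the partner feature within a distance threshold. A toy object set, with coordinates and radius invented for the demo, illustrates it together with the usual participation index, the minimum over the pattern's features:

```python
import math

# toy spatial objects: (feature, x, y)
objects = [("A", 0.0, 0.0), ("A", 5.0, 5.0), ("A", 9.0, 9.0),
           ("B", 0.5, 0.5), ("B", 5.5, 5.0), ("B", 20.0, 20.0),
           ("C", 30.0, 30.0), ("C", 40.0, 40.0)]

def participation_ratio(f1, f2, radius=1.5):
    """Fraction of f1 objects with an f2 object within the given radius."""
    f1_objs = [o for o in objects if o[0] == f1]
    f2_objs = [o for o in objects if o[0] == f2]
    hits = sum(1 for _, x, y in f1_objs
               if any(math.hypot(x - u, y - v) <= radius for _, u, v in f2_objs))
    return hits / len(f1_objs)

# participation index of a pattern: minimum participation ratio over its features
pi_ab = min(participation_ratio("A", "B"), participation_ratio("B", "A"))
pi_ac = min(participation_ratio("A", "C"), participation_ratio("C", "A"))
print(pi_ab, pi_ac)   # {A, B} is prevalent here; {A, C} is not
```

Restricting the same computation to the objects inside one delineated locality, instead of the whole study area, turns this global measure into the regional prevalence check the method relies on.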
A Continuous Clustering Method for Vector Fields
Garcke, H.; Preußer, T.; Rumpf, M.; Telea, A.; Weikard, U.; Wijk, J. van
2000-01-01
A new method for the simplification of flow fields is presented. It is based on continuous clustering. A well-known physical clustering model, the Cahn-Hilliard model, which describes phase separation, is modified to reflect the properties of the data to be visualized. Clusters are defined implicitly
Directory of Open Access Journals (Sweden)
Andrew E Bruno
Many bioscience fields employ high-throughput methods to screen multiple biochemical conditions, and the analysis of these becomes tedious without a degree of automation. Crystallization, a rate-limiting step in biological X-ray crystallography, is one of these fields. Screening of multiple potential crystallization conditions (cocktails) is the most effective method of probing a protein's phase diagram and guiding crystallization, but the interpretation of results can be time-consuming. To aid this empirical approach, a cocktail distance coefficient was developed to quantitatively compare macromolecule crystallization conditions and outcomes. These coefficients were evaluated against an existing similarity metric developed for crystallization, the C6 metric, using both virtual crystallization screens and a comparison of two related 1,536-cocktail high-throughput crystallization screens. Hierarchical clustering was employed to visualize one of these screens, and the crystallization results from an exopolyphosphatase-related protein from Bacteroides fragilis (BfR192) were overlaid on this clustering. This demonstrated a strong correlation between certain chemically related clusters and crystal lead conditions. While this analysis was not used to guide the initial crystallization optimization, it led to the re-evaluation of unexplained peaks in the electron density map of the protein and to the insertion and correct placement of sodium, potassium and phosphate atoms in the structure. With these in place, the resulting structure of the putative active site demonstrated features consistent with the active sites of other phosphatases which are involved in binding the phosphoryl moieties of nucleotide triphosphates. The new distance coefficient, CDcoeff, appears to be robust in this application, and coupled with hierarchical clustering and the overlay of crystallization outcome, reveals information of biological relevance. While tested with a single example the
Directory of Open Access Journals (Sweden)
Mamta Malik
2011-09-01
Cluster detection is a tool employed by GIS scientists who specialize in the field of spatial analysis. This study employed a combination of GIS, RS and a novel 3DCCOM spatial data clustering algorithm to assess the rural demographic development strategies of Sonepat block, Haryana, India. The study is undertaken in a rural district in India to demonstrate the integration of village-level spatial and non-spatial data in a GIS environment using hierarchical clustering. Spatial clusters of living-standard parameters are identified, including family members, male and female population, sex ratio, and total male and female education ratio. The paper also envisages the future development and usefulness of this community GIS spatial data clustering tool for grass-roots level planning. Any data that shows geographic (spatial) variability can be subject to cluster analysis.
Single pass kernel k-means clustering method
Indian Academy of Sciences (India)
T Hitendra Sarma; P Viswanath; B Eswara Reddy
2013-06-01
In unsupervised classification, the kernel k-means clustering method has been shown to perform better than the conventional k-means clustering method in identifying non-isotropic clusters in a data set. The space and time requirements of this method are $O(n^2)$, where n is the data set size. Because of this quadratic time complexity, the kernel k-means method is not applicable to large data sets. The paper proposes a simple and faster version of the kernel k-means clustering method, called the single pass kernel k-means clustering method. The proposed method works as follows. First, a random sample $\mathcal{S}$ is selected from the data set $\mathcal{D}$. A partition $\Pi_{\mathcal{S}}$ is obtained by applying the conventional kernel k-means method on the random sample $\mathcal{S}$. The novelty of the paper is that, for each cluster in $\Pi_{\mathcal{S}}$, the exact cluster center in the input space is obtained using the gradient descent approach. Finally, each unsampled pattern is assigned to its closest exact cluster center to get a partition of the entire data set. The proposed method needs to scan the data set only once and it is much faster than the conventional kernel k-means method. The time complexity of this method is $O(s^2+t+nk)$, where s is the size of the random sample $\mathcal{S}$, k is the number of clusters required, and t is the time taken by the gradient descent method (to find exact cluster centers). The space complexity of the method is $O(s^2)$. The proposed method can be easily implemented and is suitable for large data sets, like those in data mining applications. Experimental results show that, with a small loss of quality, the proposed method can significantly reduce the running time compared with the conventional kernel k-means clustering method. The proposed method is also compared with other recent similar methods.
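The sample-then-assign scheme described above can be sketched as follows. This is a minimal illustration, not the authors' code: the RBF kernel, toy data, and parameter values are assumptions, and the paper's gradient-descent preimage step is replaced here by a simple sample-cluster mean.

```python
import math
import random

def rbf(x, y, gamma=1.0):
    # Gaussian (RBF) kernel between two points
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, y)))

def kernel_kmeans(points, k, gamma=1.0, iters=20):
    # Kernel k-means on a small set; farthest-first anchors give a stable init.
    n = len(points)
    K = [[rbf(p, q, gamma) for q in points] for p in points]
    d2 = lambda i, j: K[i][i] + K[j][j] - 2 * K[i][j]   # feature-space distance
    anchors = [0]
    while len(anchors) < k:
        anchors.append(max(range(n), key=lambda i: min(d2(i, a) for a in anchors)))
    labels = [min(range(k), key=lambda c: d2(i, anchors[c])) for i in range(n)]
    for _ in range(iters):
        clusters = [[i for i in range(n) if labels[i] == c] for c in range(k)]
        new = []
        for i in range(n):
            best, best_d = labels[i], float("inf")
            for c, idx in enumerate(clusters):
                if not idx:
                    continue
                # ||phi(x_i) - mean_c||^2 expanded via the kernel matrix
                d = (K[i][i]
                     - 2 * sum(K[i][j] for j in idx) / len(idx)
                     + sum(K[j][l] for j in idx for l in idx) / len(idx) ** 2)
                if d < best_d:
                    best, best_d = c, d
            new.append(best)
        if new == labels:
            break
        labels = new
    return labels

def single_pass_kernel_kmeans(data, k, sample_size, gamma=1.0, seed=1):
    rng = random.Random(seed)
    sample = rng.sample(data, sample_size)
    s_labels = kernel_kmeans(sample, k, gamma)
    # Approximate each input-space centre by the sample-cluster mean;
    # the paper refines this step with gradient descent instead.
    centers = [tuple(sum(v) / len(members) for v in zip(*members))
               for members in ([p for p, l in zip(sample, s_labels) if l == c]
                               for c in range(k)) if members]
    nearest = lambda p: min(range(len(centers)),
                            key=lambda c: sum((a - b) ** 2
                                              for a, b in zip(p, centers[c])))
    return [nearest(p) for p in data]   # single pass over the full data set

random.seed(1)
data = ([(random.gauss(0, 0.2), random.gauss(0, 0.2)) for _ in range(50)]
        + [(random.gauss(5, 0.2), random.gauss(5, 0.2)) for _ in range(50)])
labels = single_pass_kernel_kmeans(data, k=2, sample_size=20)
```

Only the kernel matrix of the sample is ever built, which is what yields the $O(s^2)$ space behaviour the abstract claims.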
Directory of Open Access Journals (Sweden)
I-Hsuan Lin
Full Text Available Oncogenic transformation of normal cells often involves epigenetic alterations, including histone modification and DNA methylation. We conducted whole-genome bisulfite sequencing to determine the DNA methylomes of normal breast, fibroadenoma, invasive ductal carcinomas and MCF7. The emergence, disappearance, expansion and contraction of kilobase-sized hypomethylated regions (HMRs) and the hypomethylation of the megabase-sized partially methylated domains (PMDs) are the major forms of methylation change observed in breast tumor samples. Hierarchical clustering of HMRs revealed tumor-specific hypermethylated clusters and differentially methylated enhancers specific to normal or breast cancer cell lines. Joint analysis of gene expression and DNA methylation data of normal breast and breast cancer cells identified differentially methylated and expressed genes associated with breast and/or ovarian cancers in cancer-specific HMR clusters. Furthermore, aberrant patterns of X-chromosome inactivation (XCI) were found in breast cancer cell lines as well as breast tumor samples in the TCGA BRCA (breast invasive carcinoma) dataset, characterized by a differentially hypermethylated XIST promoter, reduced expression of XIST, and over-expression of hypomethylated X-linked genes. High expression of these genes was significantly associated with lower survival rates in breast cancer patients. Comprehensive analysis of the normal and breast tumor methylomes suggests selective targeting of DNA methylation changes during breast cancer progression. The weak causal relationship between DNA methylation and gene expression observed in this study is evidence of a more complex role of DNA methylation in the regulation of gene expression in human epigenetics that deserves further investigation.
A hierarchical network modeling method for railway tunnels safety assessment
Zhou, Jin; Xu, Weixiang; Guo, Xin; Liu, Xumin
2017-02-01
Using network theory to model risk-related knowledge of accidents is regarded as potentially very helpful in risk management. A large amount of defect-detection data for railway tunnels is collected every autumn in China, and it is extremely important to discover the regularities hidden in this database. In this paper, based on network theory and using data mining techniques, a new method is proposed for mining risk-related regularities to support risk management in railway tunnel projects. A hierarchical network (HN) model which takes into account tunnel structures, tunnel defects, potential failures and accidents is established. An improved Apriori algorithm is designed to rapidly and effectively mine correlations between tunnel structures and tunnel defects. An algorithm is then presented to mine the risk-related regularities table (RRT) from the frequent patterns. Finally, a safety assessment method is proposed that considers both the actual defects and the possible defect risks obtained from the RRT. This method can not only generate quantitative risk results but also reveal the key defects and the critical risks of defects. This paper further develops accident-causation network modeling methods and can provide guidance for specific maintenance measures.
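The correlation-mining step rests on Apriori-style frequent-itemset counting. As a rough sketch (the paper's improved Apriori and its pruning heuristics are not specified here, and the inspection records below are hypothetical):

```python
from itertools import combinations

def apriori(transactions, min_support):
    # Plain Apriori; the paper's improved variant adds extra pruning.
    n = len(transactions)
    freq = {}
    k_sets = [frozenset([i]) for i in {i for t in transactions for i in t}]
    while k_sets:
        support = {s: sum(1 for t in transactions if s <= t) / n for s in k_sets}
        current = {s: v for s, v in support.items() if v >= min_support}
        freq.update(current)
        # join step: combine surviving k-itemsets into (k+1)-candidates
        k_sets = list({a | b for a, b in combinations(current, 2)
                       if len(a | b) == len(a) + 1})
    return freq

# hypothetical inspection records pairing structure features with defects
records = [frozenset(r) for r in (
    {"lining:concrete", "defect:crack"},
    {"lining:concrete", "defect:crack", "defect:leakage"},
    {"lining:masonry", "defect:leakage"},
    {"lining:concrete", "defect:crack"},
)]
frequent = apriori(records, min_support=0.5)
```

Frequent structure-defect itemsets of this kind are the raw material from which a regularities table can then be assembled.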
WSNs data acquisition by combining hierarchical routing method and compressive sensing.
Zou, Zhiqiang; Hu, Cunchen; Zhang, Fei; Zhao, Hao; Shen, Shu
2014-09-09
We address the problem of data acquisition in large distributed wireless sensor networks (WSNs). We propose a method for data acquisition in WSNs that combines a hierarchical routing method with compressive sensing. Only a few samples are needed to recover the original signal with high probability, since sparse representation technology is exploited to capture the similarities and differences of the original signal. To collect samples effectively in WSNs, a framework combining the hierarchical routing method and compressive sensing is proposed, using a randomized rotation of cluster-heads to evenly distribute the energy load among the sensors in the network. Furthermore, L1-minimization and Bayesian compressed sensing are used to recover the original signal from the smaller number of samples with a low signal reconstruction error. We also give an extensive validation regarding coherence, compression rate, and lifetime, based on theoretical analysis and experiments with real-world signals. The results show that our solution is effective in a large distributed network, especially for energy-constrained WSNs.
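The randomized rotation of cluster-heads can be illustrated with a LEACH-style threshold election; this is a generic sketch under assumed parameters, not the authors' exact protocol:

```python
import random

def rotate_cluster_heads(num_nodes, p, num_rounds, seed=0):
    # LEACH-style threshold election: nodes that have not yet served in the
    # current epoch become heads with a probability that rises to 1 by the
    # epoch's last round, spreading the energy load evenly across nodes.
    rng = random.Random(seed)
    epoch = round(1 / p)
    eligible = set(range(num_nodes))
    history = []
    for r in range(num_rounds):
        if not eligible:                      # new epoch: everyone eligible
            eligible = set(range(num_nodes))
        threshold = p / (1 - p * (r % epoch))
        heads = {n for n in eligible if rng.random() < threshold}
        eligible -= heads
        history.append(heads)
    return history

history = rotate_cluster_heads(num_nodes=20, p=0.1, num_rounds=20)
served = set().union(*history)               # every node serves per epoch
```

Because the threshold reaches 1 at the end of each epoch, every node is guaranteed a turn as cluster-head, which is the load-balancing property the framework relies on.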
Cluster Analysis of the Newcastle Electronic Corpus of Tyneside English: A Comparison of Methods
Moisl, Hermann; Jones, Val
2005-01-01
This article examines the feasibility of an empirical approach to sociolinguistic analysis of the Newcastle Electronic Corpus of Tyneside English using exploratory multivariate methods. It addresses a known problem with one class of such methods, hierarchical cluster analysis: that different clusteri…
Kernel method-based fuzzy clustering algorithm
Institute of Scientific and Technical Information of China (English)
Wu Zhongdong; Gao Xinbo; Xie Weixin; Yu Jianping
2005-01-01
The fuzzy C-means clustering algorithm (FCM) is extended to the fuzzy kernel C-means clustering algorithm (FKCM) to effectively perform cluster analysis on diversiform structures, such as non-hyperspherical data, data with noise, data with a mixture of heterogeneous cluster prototypes, asymmetric data, etc. Based on the Mercer kernel, the FKCM clustering algorithm is derived from the FCM algorithm combined with the kernel method. Experiments with synthetic and real data show that the FKCM algorithm is more universal than FCM and can effectively perform unsupervised analysis of datasets with varied structures. Kernel-based clustering is thus an important research direction in fuzzy cluster analysis.
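A kernelized fuzzy C-means iteration of the kind the abstract describes can be sketched as follows. With a Gaussian kernel the induced feature-space distance is d²(x, v) = 2(1 − K(x, v)); the toy data, kernel width, and farthest-first initialization below are assumptions, not details from the paper.

```python
import math
import random

def kfcm(points, c, m=2.0, gamma=0.5, iters=40):
    # Kernelized fuzzy c-means with a Gaussian kernel.
    K = lambda x, v: math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, v)))
    # farthest-first initial prototypes (one per well-separated group)
    centers = [points[0]]
    while len(centers) < c:
        centers.append(max(points, key=lambda p: min(
            sum((a - b) ** 2 for a, b in zip(p, v)) for v in centers)))
    n, dim = len(points), len(points[0])
    U = []
    for _ in range(iters):
        # membership update from the kernel-induced distances
        U = []
        for x in points:
            d = [max(2 * (1 - K(x, v)), 1e-12) for v in centers]
            U.append([1 / sum((d[j] / d[l]) ** (1 / (m - 1)) for l in range(c))
                      for j in range(c)])
        # prototype update: kernel-weighted fuzzy mean in the input space
        new_centers = []
        for j in range(c):
            w = [U[i][j] ** m * K(points[i], centers[j]) for i in range(n)]
            s = sum(w)
            new_centers.append(tuple(
                sum(w[i] * points[i][t] for i in range(n)) / s
                for t in range(dim)))
        centers = new_centers
    return U, centers

random.seed(2)
pts = ([(random.gauss(0, 0.3), random.gauss(0, 0.3)) for _ in range(30)]
       + [(random.gauss(4, 0.3), random.gauss(4, 0.3)) for _ in range(30)])
U, centers = kfcm(pts, c=2)
labels = [max(range(2), key=lambda j: u[j]) for u in U]
```

The kernel weighting down-weights outliers and lets the same update rule handle non-hyperspherical groups, which is the motivation for the kernel extension.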
Franke, R.
2016-11-01
In many networks discovered in biology, medicine, neuroscience and other disciplines, special properties such as a certain degree distribution and a hierarchical cluster structure (also called communities) can be observed as general organizing principles. Detecting the cluster structure of an unknown network promises to identify functional subdivisions, hierarchy and interactions on a mesoscale. Choosing an appropriate detection algorithm is not trivial because there are multiple network, cluster and algorithmic properties to be considered. Edges can be weighted and/or directed, and clusters can overlap or build a hierarchy in several ways. Algorithms differ not only in runtime and memory requirements but also in the network and cluster properties they allow, and each is based on a specific definition of what a cluster is. On the one hand, a comprehensive network creation model is needed to build a large variety of benchmark networks with different reasonable structures to compare algorithms. On the other hand, if a cluster structure is already known, it is desirable to separate the effects of this structure from other network properties. This can be done with null model networks that mimic an observed cluster structure to improve statistics on other network features. A third important application is the general study of properties in networks with different cluster structures, possibly evolving over time. Good benchmark and creation models are currently available; what is still missing is a precise sandbox model to build hierarchical, overlapping and directed clusters for undirected or directed, binary or weighted complex random networks on the basis of a sophisticated blueprint. This gap shall be closed by the model CHIMERA (Cluster Hierarchy Interconnection Model for Evaluation, Research and Analysis), which is introduced and described here for the first time.
Local Approximation and Hierarchical Methods for Stochastic Optimization
Cheng, Bolong
In this thesis, we present local and hierarchical approximation methods for two classes of stochastic optimization problems: optimal learning and Markov decision processes. For the optimal learning problem class, we introduce a locally linear model with radial basis functions for estimating the posterior mean of the unknown objective function. The method uses a compact representation of the function which avoids storing the entire history, as is typically required by nonparametric methods. We derive a knowledge gradient policy with the locally parametric model, which maximizes the expected value of information. We show the policy is asymptotically optimal in theory, and experimental work suggests that the method can reliably find the optimal solution on a range of test functions. For the Markov decision process problem class, we are motivated by an application in which we want to co-optimize a battery for multiple revenue streams, in particular energy arbitrage and frequency regulation. The nature of this problem requires the battery to make charging and discharging decisions at different time scales while accounting for stochastic information such as load demand, electricity prices, and regulation signals. Computing the exact optimal policy becomes intractable due to the large state space and the number of time steps. We propose two methods to circumvent the computational bottleneck. First, we propose a nested MDP model that structures the co-optimization problem into smaller sub-problems with reduced state spaces. This new model allows us to understand how the battery behaves down to the two-second dynamics (that of the frequency regulation market). Second, we introduce a low-rank value function approximation for backward dynamic programming. This new method only requires computing the exact value function for a small subset of the state space and approximates the entire value function via low-rank matrix completion. We test these methods on historical price data from the …
Inter-Cluster Routing Authentication for Ad Hoc Networks by a Hierarchical Key Scheme
Institute of Scientific and Technical Information of China (English)
Yueh-Min Huang; Hua-Yi Lin; Tzone-I Wang
2006-01-01
Dissimilar to traditional networks, the features of mobile wireless devices that can actively form a network without any infrastructure mean that mobile ad hoc networks frequently become partitioned due to node mobility or link failures. This makes it difficult for an ad hoc network to provide on-line access to a trusted authority server, so applying the traditional Public Key Infrastructure (PKI) security framework to mobile ad hoc networks causes insecurities. This study proposes a scalable and elastic key management scheme integrated into the Cluster Based Secure Routing Protocol (CBSRP) to enhance the security and non-repudiation of routing authentication, and introduces an ID-based internal routing authentication scheme to enhance routing performance within a cluster. Additionally, a method of performing routing authentication between internal and external clusters, as well as inter-cluster routing authentication, is developed. The proposed cluster-based key management scheme distributes trust to an aggregation of cluster heads using a threshold scheme, provides the Certificate Authority (CA) with a fault-tolerance mechanism to prevent a single point of compromise or failure, and saves the CA from maintaining large repositories of member certificates, making ad hoc networks robust against malicious behavior and suitable for numerous mobile devices.
Nanowire-based polypyrrole hierarchical structures synthesized by a two-step electrochemical method.
Ge, Dongtao; Huang, Sanqing; Qi, Rucai; Mu, Jing; Shen, Yuqing; Shi, Wei
2009-08-03
A simple two-step electrochemical method is proposed for the synthesis of nanowire-based polypyrrole hierarchical structures. In the first step, microstructured polypyrrole films are prepared by electropolymerization. Then, polypyrrole nanowires are electrodeposited on the surface of the as-synthesized microstructured polypyrrole films. As a result, hierarchical structures of polypyrrole nanowires on polypyrrole microstructures are obtained. The surface wettabilities of the resulting nanowire-based polypyrrole hierarchical structures are examined. It is expected that this two-step method can be developed into a versatile route to produce nanowire-based polypyrrole hierarchical structures with different morphologies and surface properties.
Wild; Blankley
2000-01-01
Four different two-dimensional fingerprint types (MACCS, Unity, BCI, and Daylight) and nine methods of selecting optimal cluster levels from the output of a hierarchical clustering algorithm were evaluated for their ability to select clusters that represent chemical series present in some typical examples of chemical compound data sets. The methods were evaluated using a Ward's clustering algorithm on subsets of the publicly available National Cancer Institute HIV data set, as well as with compounds from our corporate data set. We make a number of observations and recommendations about the choice of fingerprint type and cluster level selection methods for use in this type of clustering.
Directory of Open Access Journals (Sweden)
Markus Uhrig
Full Text Available Alzheimer's disease (AD) is characterized by neuronal degeneration and cell loss. Abeta(42), in contrast to Abeta(40), is thought to be the pathogenic form triggering the pathological cascade in AD. In order to unravel overall gene regulation we monitored the transcriptomic responses to increased or decreased Abeta(40) and Abeta(42) levels, generated and derived from its precursor C99 (C-terminal fragment of APP comprising 99 amino acids) in human neuroblastoma cells. We identified fourteen differentially expressed transcripts by hierarchical clustering and discussed their involvement in AD. These fourteen transcripts were grouped into two main clusters, each showing distinct differential expression patterns depending on Abeta(40) and Abeta(42) levels. Among these transcripts we discovered an unexpected inverse and strong differential expression of neurogenin 2 (NEUROG2) and KIAA0125 in all examined cell clones. C99-overexpression had a similar effect on NEUROG2 and KIAA0125 expression as a decreased Abeta(42)/Abeta(40) ratio. Importantly, however, an increased Abeta(42)/Abeta(40) ratio, which is typical of AD, had an inverse expression pattern of NEUROG2 and KIAA0125: an increased Abeta(42)/Abeta(40) ratio up-regulated NEUROG2 but down-regulated KIAA0125, whereas the opposite regulation pattern was observed for a decreased Abeta(42)/Abeta(40) ratio. We discuss the possibilities that the so-far-uncharacterized KIAA0125 might be a counter player of NEUROG2 and that KIAA0125 could be involved in neurogenesis, due to the involvement of NEUROG2 in developmental neural processes.
Directory of Open Access Journals (Sweden)
Diane G O Saunders
Full Text Available Rust fungi are obligate biotrophic pathogens that cause considerable damage on crop plants. Puccinia graminis f. sp. tritici, the causal agent of wheat stem rust, and Melampsora larici-populina, the poplar leaf rust pathogen, have strong deleterious impacts on wheat and poplar wood production, respectively. Filamentous pathogens such as rust fungi secrete molecules called disease effectors that act as modulators of host cell physiology and can suppress or trigger host immunity. Current knowledge on effectors from other filamentous plant pathogens can be exploited for the characterisation of effectors in the genome of recently sequenced rust fungi. We designed a comprehensive in silico analysis pipeline to identify the putative effector repertoire from the genome of two plant pathogenic rust fungi. The pipeline is based on the observation that known effector proteins from filamentous pathogens have at least one of the following properties: (i) contain a secretion signal, (ii) are encoded by in planta induced genes, (iii) have similarity to haustorial proteins, (iv) are small and cysteine rich, (v) contain a known effector motif or a nuclear localization signal, (vi) are encoded by genes with long intergenic regions, (vii) contain internal repeats, and (viii) do not contain PFAM domains, except those associated with pathogenicity. We used Markov clustering and hierarchical clustering to classify protein families of rust pathogens and rank them according to their likelihood of being effectors. Using this approach, we identified eight families of candidate effectors that we consider of high value for functional characterization. This study revealed a diverse set of candidate effectors, including families of haustorial expressed secreted proteins and small cysteine-rich proteins. This comprehensive classification of candidate effectors from these devastating rust pathogens is an initial step towards probing plant germplasm for novel resistance components.
Saunders, Diane G. O.; Win, Joe; Cano, Liliana M.; Szabo, Les J.; Kamoun, Sophien; Raffaele, Sylvain
2012-01-01
Rust fungi are obligate biotrophic pathogens that cause considerable damage on crop plants. Puccinia graminis f. sp. tritici, the causal agent of wheat stem rust, and Melampsora larici-populina, the poplar leaf rust pathogen, have strong deleterious impacts on wheat and poplar wood production, respectively. Filamentous pathogens such as rust fungi secrete molecules called disease effectors that act as modulators of host cell physiology and can suppress or trigger host immunity. Current knowledge on effectors from other filamentous plant pathogens can be exploited for the characterisation of effectors in the genome of recently sequenced rust fungi. We designed a comprehensive in silico analysis pipeline to identify the putative effector repertoire from the genome of two plant pathogenic rust fungi. The pipeline is based on the observation that known effector proteins from filamentous pathogens have at least one of the following properties: (i) contain a secretion signal, (ii) are encoded by in planta induced genes, (iii) have similarity to haustorial proteins, (iv) are small and cysteine rich, (v) contain a known effector motif or a nuclear localization signal, (vi) are encoded by genes with long intergenic regions, (vii) contain internal repeats, and (viii) do not contain PFAM domains, except those associated with pathogenicity. We used Markov clustering and hierarchical clustering to classify protein families of rust pathogens and rank them according to their likelihood of being effectors. Using this approach, we identified eight families of candidate effectors that we consider of high value for functional characterization. This study revealed a diverse set of candidate effectors, including families of haustorial expressed secreted proteins and small cysteine-rich proteins. This comprehensive classification of candidate effectors from these devastating rust pathogens is an initial step towards probing plant germplasm for novel resistance components. PMID:22238666
Evolution of platinum hierarchical microstructure: amine-assisted growth via solvothermal method
Ooi, Mahayatun Dayana Johan; Aziz, Azlan Abdul
2015-04-01
Here we study the formation of platinum hierarchical microstructures, varying the synthesis time, using amine-assisted growth via a solvothermal method. A small cluster of particles was produced at a shorter synthesis time (5 h), while fully grown flower-like microstructures were formed after 9 h of reaction. The synthesized Pt particles exhibit a strong absorption peak at 230 nm, corresponding to the Pt absorption peak. The catalytic property of the synthesized Pt is greatly influenced by its geometrical shape. The fully grown flower-like particles exhibit a large electrochemical surface area (4.88 cm-2 g-1) and catalytic stability over a longer period, and can serve as a potential catalyst for the electro-oxidation of formic acid.
Hierarchical black hole triples in young star clusters: impact of Kozai-Lidov resonance on mergers
Kimpson, Thomas O; Mapelli, Michela; Ziosi, Brunetto M
2016-01-01
Mergers of compact object binaries are one of the most powerful sources of gravitational waves (GWs) in the frequency range of second-generation ground-based gravitational wave detectors (Advanced LIGO and Virgo). Dynamical simulations of young dense star clusters (SCs) indicate that ~27 per cent of all double compact object binaries are members of hierarchical triple systems (HTs). In this paper, we consider 570 HTs composed of three compact objects (black holes or neutron stars) that formed dynamically in N-body simulations of young dense SCs. We simulate them for a Hubble time with a new code based on Mikkola's algorithmic regularization scheme, including the 2.5 post-Newtonian term. We find that ~88 per cent of the simulated systems develop Kozai-Lidov (KL) oscillations. KL resonance triggers the merger of the inner binary in three systems (corresponding to 0.5 per cent of the simulated HTs), by increasing the eccentricity of the inner binary. Accounting for KL oscillations leads to an increase of the...
Hierarchical black hole triples in young star clusters: impact of Kozai-Lidov resonance on mergers
Kimpson, Thomas O.; Spera, Mario; Mapelli, Michela; Ziosi, Brunetto M.
2016-12-01
Mergers of compact-object binaries are one of the most powerful sources of gravitational waves (GWs) in the frequency range of second-generation ground-based GW detectors (Advanced LIGO and Virgo). Dynamical simulations of young dense star clusters (SCs) indicate that ˜27 per cent of all double compact-object binaries are members of hierarchical triple systems (HTs). In this paper, we consider 570 HTs composed of three compact objects (black holes or neutron stars) that formed dynamically in N-body simulations of young dense SCs. We simulate them for a Hubble time with a new code based on Mikkola's algorithmic regularization scheme, including the 2.5 post-Newtonian term. We find that ˜88 per cent of the simulated systems develop Kozai-Lidov (KL) oscillations. KL resonance triggers the merger of the inner binary in three systems (corresponding to 0.5 per cent of the simulated HTs) by increasing the eccentricity of the inner binary. Accounting for KL oscillations leads to an increase of the total expected merger rate by ≈50 per cent. All binaries that merge because of KL oscillations were formed by dynamical exchanges (i.e. none is a primordial binary) and have chirp mass >20 M⊙. This result might be crucial for interpreting the formation channel of the first recently detected GW events.
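For orientation, the size of the eccentricity boost can be gauged from the standard quadrupole-order, test-particle Kozai-Lidov relation (a textbook result quoted here for context, not a quantity taken from the paper): an inner binary that starts nearly circular with mutual inclination $i_0$ reaches a maximum eccentricity

```latex
e_{\max} = \sqrt{1 - \tfrac{5}{3}\cos^{2} i_{0}},
\qquad 39.2^{\circ} \lesssim i_{0} \lesssim 140.8^{\circ}
\;\text{for oscillations to occur.}
```

Driving $e$ toward unity shrinks the pericentre of the inner binary, which is what lets gravitational-wave emission complete the merger.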
Directory of Open Access Journals (Sweden)
S.S. Arya
2012-10-01
Full Text Available Thepla is an Indian unleavened flatbread made from whole-wheat flour with added spices and vegetables. It is particularly consumed in the western zone of India. The preparation of thepla is tedious, time consuming and requires skill. In the present study, standardization of the thepla ingredients was carried out by standardizing each ingredient on the basis of its Overall Acceptability (OA) score. Sensory analysis was carried out using a nine-point hedonic rating scale with ten trained panellists. The standardized ingredients of thepla were: salt 3%, red chili powder 2.5%, fenugreek leaves 12%, cumin seed powder 0.6%, coriander seed powder 0.6%, ginger-garlic paste (1:1) 6%, asafoetida 0.6% and oil 3% w/w of whole wheat flour, on the basis of the highest sensory OA score. Further, thepla process parameters such as time, temperature, diameter of the thepla and weight of the dough were standardized on the basis of the sensory OA score. The obtained sensory score data were processed by Hierarchical Cluster Analysis (HCA).
A new Hierarchical Group Key Management based on Clustering Scheme for Mobile Ad Hoc Networks
Directory of Open Access Journals (Sweden)
Ayman EL-SAYED
2014-05-01
Full Text Available The migration from wired to wireless networks has been a global trend in the past few decades because they provide anytime-anywhere networking services. As wireless networks are rapidly deployed, a secure wireless environment will become mandatory. The mobility and scalability brought by wireless networks have also made many applications possible. Among all contemporary wireless networks, the Mobile Ad hoc Network (MANET) is one of the most important and unique applications. A MANET is a collection of autonomous nodes or terminals which communicate with each other by forming a multihop radio network and maintaining connectivity in a decentralized manner. Due to the unreliable wireless medium, data transfer is a major problem in MANETs, which lack security and reliability of data. The most suitable solution to provide the expected level of security to these services is a key management protocol. Key management is a vital part of security, and the issue is even bigger in wireless networks compared with wired networks. The distribution of keys in an authenticated manner is a difficult task in MANETs: when a member leaves or joins the group, a new key needs to be generated to maintain forward and backward secrecy. In this paper, we propose a new group key management scheme, namely a Hierarchical, Simple, Efficient and Scalable Group Key (HSESGK) management scheme based on clustering for MANETs, and classify different other schemes. Group members deduce the group key in a distributed manner.
Energy Technology Data Exchange (ETDEWEB)
Li Tingting; Dai Ling; Li Lun; Hu Xuejiao; Dong Linjie; Li Jianjian; Salim, Sule Khalfan; Fu Jieying [Key Laboratory of Pesticides and Chemical Biology, Ministry of Education, College of Chemistry, Central China Normal University, Wuhan, Hubei 430079 (China); Zhong Hongying, E-mail: hyzhong@mail.ccnu.edu.cn [Key Laboratory of Pesticides and Chemical Biology, Ministry of Education, College of Chemistry, Central China Normal University, Wuhan, Hubei 430079 (China)
2011-01-17
Rapid identification of unknown microorganisms of clinical and agricultural importance is not only critical for the accurate diagnosis of infections but also essential for appropriate and prompt treatment. We describe here a rapid method for microorganism typing based on quantitative analysis of fatty acids by the iFAT approach (Isotope-coded Fatty Acid Transmethylation). In this work, lyophilized cell lysates were directly mixed with 0.5 M NaOH solution in d3-methanol and n-hexane. After 1 min of ultrasonication, the top n-hexane layer was combined with a mixture of standard d0-methanol-derived fatty acid methyl esters of known concentration. Measurement of the intensity ratios of d3/d0-labeled fragment-ion and molecular-ion pairs for the corresponding target fatty acids provides a quantitative basis for hierarchical clustering. In the resultant dendrogram, the Euclidean distance between unknown and known species quantitatively reveals their differences or shared similarities in fatty acid related pathways. It is of particular interest to apply this method to typing fungal species, because fungi, unlike bacteria and animals, have distinctive lipid biosynthetic pathways that have been targeted by many drugs and fungicides. The proposed method has no dependence on the availability of genome or proteome databases and is therefore applicable to a broad range of unknown microorganisms and mutant species.
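The dendrogram step can be mimicked with a small average-linkage (UPGMA-style) agglomeration over per-species fatty-acid ratio profiles; the species names and d3/d0 ratio values below are hypothetical:

```python
def euclid(a, b):
    # Euclidean distance between two ratio profiles
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def average_linkage(profiles):
    # UPGMA-style agglomeration: repeatedly merge the two clusters with the
    # smallest average pairwise distance; the merge order mirrors the
    # dendrogram from the leaves upward.
    clusters = {name: [name] for name in profiles}
    merges = []
    while len(clusters) > 1:
        a, b = min(((x, y) for x in clusters for y in clusters if x < y),
                   key=lambda p: sum(euclid(profiles[i], profiles[j])
                                     for i in clusters[p[0]]
                                     for j in clusters[p[1]])
                                 / (len(clusters[p[0]]) * len(clusters[p[1]])))
        merges.append((a, b))
        clusters[a + "+" + b] = clusters.pop(a) + clusters.pop(b)
    return merges

# hypothetical d3/d0 intensity ratios for three target fatty acids
profiles = {
    "unknown":   (1.02, 0.48, 2.10),
    "species_A": (1.00, 0.50, 2.00),
    "species_B": (3.10, 0.10, 0.40),
}
merges = average_linkage(profiles)
```

The unknown merges first with the species whose fatty-acid profile it most resembles, which is exactly how the dendrogram distance is read in the abstract.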
Bayesian structural equation modeling method for hierarchical model validation
Energy Technology Data Exchange (ETDEWEB)
Jiang Xiaomo [Department of Civil and Environmental Engineering, Vanderbilt University, Box 1831-B, Nashville, TN 37235 (United States)], E-mail: xiaomo.jiang@vanderbilt.edu; Mahadevan, Sankaran [Department of Civil and Environmental Engineering, Vanderbilt University, Box 1831-B, Nashville, TN 37235 (United States)], E-mail: sankaran.mahadevan@vanderbilt.edu
2009-04-15
A building block approach to model validation may proceed through various levels, such as material to component to subsystem to system, comparing model predictions with experimental observations at each level. Usually, experimental data becomes scarce as one proceeds from lower to higher levels. This paper presents a structural equation modeling approach to make use of the lower-level data for higher-level model validation under uncertainty, integrating several components: lower-level data, higher-level data, computational model, and latent variables. The method proposed in this paper uses latent variables to model two sets of relationships, namely, the computational model to system-level data, and lower-level data to system-level data. A Bayesian network with Markov chain Monte Carlo simulation is applied to represent the two relationships and to estimate the influencing factors between them. Bayesian hypothesis testing is employed to quantify the confidence in the predictive model at the system level, and the role of lower-level data in the model validation assessment at the system level. The proposed methodology is implemented for hierarchical assessment of three validation problems, using discrete observations and time-series data.
Xiao, Qijun
This thesis discusses the full scope of a project exploring the physics of hierarchical clusters of interacting nanomagnets. These clusters may be relevant for novel applications such as multilevel data storage devices. The work can be grouped into three main activities: micromagnetic simulation; fabrication and characterization of proof-of-concept prototype devices; and efforts to scale down the structures by creating the hierarchical structures with the aid of diblock copolymer self-assembly. Theoretical micromagnetic studies and simulations based on the Landau-Lifshitz-Gilbert (LLG) equation were conducted on nanoscale single-domain magnetic entities. For the simulated nanomagnet clusters with perpendicular uniaxial anisotropy, the simulation showed the switching field distributions, the stability of the magnetostatic states with distinctive total cluster perpendicular moments, and the stepwise magnetic switching curves. For simulated nanomagnet clusters with in-plane shape anisotropy, the simulation showed stepwise switching behaviors governed by thermal agitation and cluster configuration. Proof-of-concept cluster devices with three interacting Co nanomagnets were fabricated by e-beam lithography (EBL) and pulse-reverse electrochemical deposition (PRECD). EBL patterning on a suspended 100 nm SiN membrane improved the lateral lithography resolution to 30 nm. The Co nanomagnets deposited using the PRECD method showed perpendicular anisotropy. Switching experiments with externally applied fields were able to drive the Co nanomagnets through the four magnetostatic states with distinctive total perpendicular cluster magnetization, and proved the feasibility of multilevel data storage devices based on the cluster concept. Shrinking the structure size was explored with the aid of diblock copolymers. Thick poly(styrene)-b-poly(methyl methacrylate) (PS-b-PMMA) diblock copolymer templates aligned with an external electric field were used to fabricate long Ni …
Directory of Open Access Journals (Sweden)
Jun Ren
2014-01-01
Full Text Available Much evidence has demonstrated that protein complexes are overlapping and hierarchically organized in PPI networks. Meanwhile, the large size of PPI networks requires complex-detection methods to have low time complexity. Up to now, few methods can quickly identify overlapping and hierarchical protein complexes in a PPI network. In this paper, a novel method, called MCSE, is proposed based on λ-modules and “seed-expanding.” First, it chooses essential PPIs or edges with high edge clustering values as seeds. Then, it identifies protein complexes by expanding each seed into a λ-module. MCSE is suitable for large PPI networks because of its low time complexity. MCSE identifies overlapping protein complexes naturally because a protein can be visited from different seeds. MCSE uses the parameter λ_th to control the range of seed expansion and can detect a hierarchical organization of protein complexes by tuning the value of λ_th. Experimental results on S. cerevisiae show that this hierarchical organization is similar to that of the known complexes in the MIPS database. The experimental results also show that MCSE outperforms previous competing algorithms, such as CPM, CMC, Core-Attachment, Dpclus, HC-PIN, MCL, and NFC, in terms of functional enrichment and matching with known protein complexes.
The Local Maximum Clustering Method and Its Application in Microarray Gene Expression Data Analysis
Directory of Open Access Journals (Sweden)
Chen Yidong
2004-01-01
Full Text Available An unsupervised data clustering method, called the local maximum clustering (LMC) method, is proposed for identifying clusters in experimental data sets based on research interest. A magnitude property is defined according to the research purpose, and data sets are clustered around each local maximum of the magnitude property. By properly defining a magnitude property, this method can overcome many difficulties in microarray data clustering, such as reduced projection in similarities, noise, and arbitrary gene distribution. To critically evaluate the performance of this clustering method in comparison with other methods, we designed three model data sets with known cluster distributions and applied the LMC method as well as the hierarchical clustering method, the k-means clustering method, and the self-organized map method to these model data sets. The results show that the LMC method produces the most accurate clustering results. As an example of application, we applied the method to cluster the leukemia samples reported in the microarray study of Golub et al. (1999).
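The cluster-assignment idea can be sketched as follows: each point follows its nearest neighbour of strictly higher magnitude until a local maximum is reached, and points sharing a maximum form one cluster. This is only an illustrative reading of the LMC idea, not the authors' implementation; the density-style magnitude property, the `radius` parameter, and the toy points are all assumptions.

```python
# Sketch of local-maximum clustering: each point is linked to its nearest
# neighbour of strictly higher magnitude; chains of such links end at local
# maxima, and points sharing the same maximum form one cluster.
from math import dist

def lmc(points, radius=1.5):
    # magnitude property: number of neighbours within `radius` (crude density);
    # a real application would define this according to the research purpose
    mag = [sum(1 for q in points if dist(p, q) <= radius) for p in points]
    parent = list(range(len(points)))
    for i, p in enumerate(points):
        # nearest neighbour with strictly higher magnitude, if any
        cand = [(dist(p, points[j]), j)
                for j, m in enumerate(mag) if m > mag[i]]
        if cand:
            parent[i] = min(cand)[1]
    def root(i):
        while parent[i] != i:
            i = parent[i]
        return i
    return [root(i) for i in range(len(points))]

# two well-separated toy groups, each with a densest central point
pts = [(0, 0), (1, 0), (0, 1), (5, 5), (6, 5), (5, 6)]
labels = lmc(pts, radius=1.2)
```

With a magnitude property tailored to the research question (e.g., expression variance instead of local density), the same skeleton applies unchanged.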
Institute of Scientific and Technical Information of China (English)
王伟涛; 王宝善
2012-01-01
Similar earthquakes are groups of earthquakes with highly similar waveforms at one or more seismic stations; they tend to occur as clusters in a limited space. Repeating earthquakes, a special class of similar earthquakes, have nearly identical focal mechanisms and almost completely overlapping rupture areas. They provide an important means for studying the structure and property variations of fault systems. Here we present a method, based on a similarity distance matrix and a hierarchical clustering algorithm, to perform quick multilevel identification of similar earthquakes from a single station. We apply this method to the aftershock sequence at the northeast end of the Wenchuan earthquake and obtain the spatial and temporal distribution of similar earthquakes at different levels. The detectability of repeating earthquakes, as well as the possible mechanism of burst-type repeating earthquakes in this region, is discussed at the end.
Stern, Michael D; Maltseva, Larissa A; Juhaszova, Magdalena; Sollott, Steven J; Lakatta, Edward G; Maltsev, Victor A
2014-05-01
rate in response to β-adrenergic stimulation. The model indicates that the hierarchical clustering of surface RyRs in SANCs may be a crucial adaptive mechanism. Pathological desynchronization of the clocks may explain sinus node dysfunction in heart failure and RyR mutations.
Data Reduction Method for Categorical Data Clustering
Sánchez Garreta, José Salvador; Rendón, Eréndira; García, Rene A.; Abundez, Itzel; Gutiérrez, Citlalih; Gasca, Eduardo
2008-01-01
Categorical data clustering constitutes an important part of data mining; its relevance has recently drawn attention from several researchers. As a step in data mining, however, clustering encounters the problem of large amounts of data to be processed. This article offers a solution for categorical clustering algorithms when working with high volumes of data by means of a method that summarizes the database. This is done using a structure called the CM-tree. In order to test our metho...
Directory of Open Access Journals (Sweden)
M. Khoobiyan
2017-04-01
Full Text Available Manufacturing flexibility is a multidimensional concept, and manufacturing companies act differently in using these dimensions. The purpose of this study is to investigate a taxonomy and identify dominant groups of manufacturing flexibility. Dimensions of manufacturing flexibility are extracted by content analysis of the literature and expert judgements. Manufacturing flexibility was measured using a questionnaire developed to survey managers of manufacturing companies. The sample size was set at 379. To identify dominant groups of flexibility based on the dimensions determined, Hierarchical Cluster Analysis (HCA), Imperialist Competitive Algorithms (ICAs) and Support Vector Machines (SVMs) were applied and compared using cluster validity indices. The best algorithm for clustering was SVMs with three clusters, designated as leading delivery-based flexibility, frugal flexibility and sufficient plan-based flexibility.
Quantum Monte Carlo methods and lithium cluster properties. [Atomic clusters
Energy Technology Data Exchange (ETDEWEB)
Owen, R.K.
1990-12-01
Properties of small lithium clusters with sizes ranging from n = 1 to 5 atoms were investigated using quantum Monte Carlo (QMC) methods. Cluster geometries were found from complete active space self-consistent field (CASSCF) calculations. A detailed development of the QMC method leading to the variational QMC (V-QMC) and diffusion QMC (D-QMC) methods is shown. The many-body aspect of electron correlation is introduced into the QMC importance-sampling electron-electron correlation functions by using density-dependent parameters, which are shown to increase the amount of correlation energy obtained in V-QMC calculations. A detailed analysis of the D-QMC time-step bias is made, and the bias is found to be at least linear with respect to the time-step. The D-QMC calculations determined the lithium cluster ionization potentials to be 0.1982(14) (0.1981), 0.1895(9) (0.1874(4)), 0.1530(34) (0.1599(73)), 0.1664(37) (0.1724(110)), 0.1613(43) (0.1675(110)) Hartrees for lithium clusters n = 1 through 5, respectively, in good agreement with the experimental results shown in parentheses. Also, the binding energies per atom were computed to be 0.0177(8) (0.0203(12)), 0.0188(10) (0.0220(21)), 0.0247(8) (0.0310(12)), 0.0253(8) (0.0351(8)) Hartrees for lithium clusters n = 2 through 5, respectively. The lithium cluster one-electron density is shown to have charge concentrations corresponding to nonnuclear attractors. The overall shape of the electronic charge density also bears a remarkable similarity to the anisotropic harmonic oscillator model shape for the given number of valence electrons.
Directory of Open Access Journals (Sweden)
Muntaner Carles
2012-04-01
Full Text Available Abstract Background An important contribution of the social determinants of health perspective has been to inquire about non-medical determinants of population health. Among these, labour market regulations are of vital significance. In this study, we investigate the labour market regulations among low- and middle-income countries (LMICs) and propose a labour market taxonomy to further understand population health in a global context. Methods Using Gross National Product per capita, we classify 113 countries into either low-income (n = 71) or middle-income (n = 42) strata. Principal component analysis of three standardized indicators of labour market inequality and poverty is used to construct 2 factor scores. Factor score reliability is evaluated with Cronbach's alpha. Using these scores, we conduct a hierarchical cluster analysis to produce a labour market taxonomy, conduct zero-order correlations, and create box plots to test their associations with adult mortality, healthy life expectancy, infant mortality, maternal mortality, neonatal mortality, under-5 mortality, and years of life lost to communicable and non-communicable diseases. Labour market and health data are retrieved from the International Labour Organization's Key Indicators of Labour Markets and the World Health Organization's Statistical Information System. Results Six labour market clusters emerged: Residual (n = 16), Emerging (n = 16), Informal (n = 10), Post-Communist (n = 18), Less Successful Informal (n = 22), and Insecure (n = 31). Primary findings indicate: (i) labour market poverty and population health are correlated in both LMIC strata; (ii) the association between labour market inequality and health indicators is significant only in low-income countries; (iii) the Emerging (e.g., East Asian and Eastern European countries) and Insecure (e.g., sub-Saharan African nations) clusters are the most advantaged and disadvantaged, respectively, with the remaining clusters experiencing levels of population
Directory of Open Access Journals (Sweden)
Xiaowei Li
2017-01-01
Full Text Available A large number of studies have demonstrated that major depressive disorder (MDD) is characterized by alterations in brain functional connections, which are also identifiable during the brain’s “resting state.” However, the approach of constructing functional connectivity is often biased by the choice of threshold. Besides, more attention has been paid to the number and length of links in brain networks, while the clustering partitioning of nodes remains unclear. Therefore, minimum spanning tree (MST) analysis and hierarchical clustering were applied to depression for the first time in this study. Resting-state electroencephalogram (EEG) sources were assessed from 15 healthy and 23 major depressive subjects. Then the coherence, MST, and hierarchical clustering were obtained. In the theta band, coherence analysis showed that the EEG coherence of the MDD patients was significantly higher than that of the healthy controls, especially in the left temporal region. The MST results indicated a higher leaf fraction in the depressed group. Compared with the normal group, the major depressive patients lost clustering in frontal regions. Our findings suggest that there is a stronger brain interaction in the MDD group and a left-right functional imbalance in the frontal regions of MDD patients.
Martelet, G.; Truffert, C.; Tourlière, B.; Ledru, P.; Perrin, J.
2006-09-01
In highly weathered environments, it is crucial that geological maps provide information concerning both the regolith and the bedrock for societal needs such as land-use, mineral, or water resources management. Often, geologists face the challenge of upgrading existing maps, as relevant information concerning weathering processes and pedogenesis is currently missing. In rugged areas in particular, where access to the field is difficult, ground observations are sparse and therefore need to be complemented using methods based on remotely sensed data. For this purpose, we discuss the use of Agglomerative Hierarchical Clustering (AHC) on eU, K and eTh airborne gamma-ray spectrometry grids. The AHC process primarily segments the geophysical maps into zones having coherent U, K and Th contents. The analysis of these contents is discussed in terms of geochemical signature for the lithological attribution of classes, as is the use of a dendrogram, which indicates the hierarchical relations between classes. Unsupervised classification maps resulting from AHC can be considered spatial models of the distribution of the radioelement content in surface and sub-surface formations. The source of gamma rays emanating from the ground is primarily related to the geochemistry of the bedrock and secondarily to modifications of the radioelement distribution by weathering and other secondary mechanisms, such as mobilisation by wind or water. The interpretation of the obtained predictive classified maps, their U, K, Th contents, and the dendrogram, in light of available geological knowledge, makes it possible to separate signatures related to the regolith from those of the solid geology. Consequently, classification maps can be integrated within a GIS environment and used by the geologist as a support for mapping bedrock lithologies and their alteration. We illustrate the AHC classification method in the region of Cayenne using high-resolution airborne radiometric data
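As a rough illustration of the AHC segmentation step, the sketch below agglomerates synthetic pixels by their (K, eU, eTh) contents using centroid linkage. The pixel values and the linkage choice are assumptions for demonstration, not the survey data or the exact configuration used in the study.

```python
# Toy agglomerative (centroid-linkage) clustering of pixels by their
# (K, eU, eTh) contents: repeatedly merge the two clusters whose centroids
# are closest, until k clusters remain.
from math import dist

def ahc(samples, k):
    clusters = [[s] for s in samples]   # start: one cluster per pixel
    def centroid(c):
        return tuple(sum(x) / len(c) for x in zip(*c))
    while len(clusters) > k:
        # find the pair of clusters with the closest centroids
        pairs = [(dist(centroid(a), centroid(b)), i, j)
                 for i, a in enumerate(clusters)
                 for j, b in enumerate(clusters) if i < j]
        _, i, j = min(pairs)
        clusters[i] = clusters[i] + clusters[j]
        del clusters[j]
    return clusters

# two synthetic "lithologies": a K-rich group and a Th-rich group of pixels
pixels = [(2.1, 1.0, 8.0), (2.0, 1.1, 8.2), (0.4, 2.0, 14.0), (0.5, 2.1, 14.5)]
groups = ahc(pixels, 2)
```

On real grids one would normalise the three channels first and cut the resulting dendrogram at a level chosen from the geochemical interpretation rather than fixing k in advance.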
Document Clustering using Sequential Information Bottleneck Method
Gayathri, P J; Punithavalli, M
2010-01-01
This paper illustrates the Principal Direction Divisive Partitioning (PDDP) algorithm, describes its drawbacks, and introduces a combinatorial framework for the PDDP algorithm; it then describes a simplified version of the EM algorithm called the spherical Gaussian EM (sGEM) algorithm. The Information Bottleneck (IB) method is a technique for balancing accuracy, complexity, and time-space requirements. The PDDP algorithm recursively splits the data samples into two sub-clusters using the hyperplane normal to the principal direction derived from the covariance matrix, which is the central logic of the algorithm. However, the PDDP algorithm can yield poor results, especially when clusters are not well separated from one another. To improve the quality of the clustering results, cluster memberships are reallocated using the IB algorithm with different settings. The IB method improves accuracy, but its time consumption is higher. Furthermore, based on the theoretical backgr...
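The central PDDP step described above, splitting by the hyperplane normal to the principal direction, can be sketched as follows. This is a minimal illustration assuming a plain power iteration to estimate the principal direction; production implementations typically use a sparse SVD instead.

```python
# Sketch of one PDDP split: centre the data, estimate the principal
# direction by power iteration on the covariance structure, and partition
# samples by the sign of their projection onto that direction.
def pddp_split(data, iters=100):
    n, d = len(data), len(data[0])
    mean = [sum(col) / n for col in zip(*data)]
    centred = [[x - m for x, m in zip(row, mean)] for row in data]
    v = [1.0] * d
    for _ in range(iters):                      # power iteration: v <- Cv / |Cv|
        w = [0.0] * d
        for row in centred:
            proj = sum(r * vi for r, vi in zip(row, v))
            for i in range(d):
                w[i] += proj * row[i]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    # side of the hyperplane through the mean, normal to v
    return [0 if sum(r * vi for r, vi in zip(row, v)) < 0 else 1
            for row in centred]

data = [[0, 0], [1, 0.1], [9, 1], [10, 1.1]]
side = pddp_split(data)
```

Recursing on each side of the split yields the binary tree of clusters that PDDP builds; the IB reallocation discussed above would then refine the leaf memberships.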
Chang, Yuchao; Tang, Hongying; Cheng, Yongbo; Zhao, Qin; Li, Baoqing; Yuan, Xiaobing
2017-07-19
Routing protocols based on topology control are significantly important for improving network longevity in wireless sensor networks (WSNs). Traditionally, some WSN routing protocols distribute uneven network traffic load to sensor nodes, which is not optimal for improving network longevity. Unlike conventional WSN routing protocols, we propose a dynamic hierarchical protocol based on combinatorial optimization (DHCO) to balance the energy consumption of sensor nodes and to improve WSN longevity. For each sensor node, the DHCO algorithm obtains the optimal route by establishing a feasible routing set instead of selecting the cluster head or the next-hop node. The process of obtaining the optimal route can be formulated as a combinatorial optimization problem. Specifically, the DHCO algorithm is carried out by the following procedures: it employs a hierarchy-based connection mechanism to construct a hierarchical network structure in which each sensor node is assigned to a specific hierarchical subset; it utilizes combinatorial optimization theory to establish the feasible routing set for each sensor node; and it takes advantage of the maximum-minimum criterion to obtain the optimal route to the base station. Simulation results show the effectiveness and superiority of the DHCO algorithm in comparison with state-of-the-art WSN routing algorithms, including low-energy adaptive clustering hierarchy (LEACH), hybrid energy-efficient distributed clustering (HEED), genetic protocol-based self-organizing network clustering (GASONeC), and double cost function-based routing (DCFR) algorithms.
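The maximum-minimum criterion mentioned above can be illustrated with a small sketch: among a node's feasible routes, choose the one whose weakest hop has the greatest residual energy. The route lists and energy values below are hypothetical, and a real protocol would build the feasible set from the hierarchical structure rather than enumerate it by hand.

```python
# Maximum-minimum route selection: a route is only as strong as its most
# energy-depleted node, so pick the feasible route that maximises that minimum.
def best_route(feasible_routes, energy):
    # feasible_routes: list of node-id lists, each a route toward the base station
    return max(feasible_routes,
               key=lambda route: min(energy[n] for n in route))

energy = {"a": 5.0, "b": 1.0, "c": 4.0, "d": 6.0}   # residual energy per node
routes = [["a", "b"], ["a", "c", "d"]]
chosen = best_route(routes, energy)
# the longer route through c and d avoids the nearly drained node b
```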
Bellisola, Giuseppe; Cinque, Gianfelice; Vezzalini, Marzia; Moratti, Elisabetta; Silvestri, Giovannino; Redaelli, Sara; Gambacorti Passerini, Carlo; Wehbe, Katia; Sorio, Claudio
2013-07-21
We tested the ability of Fourier Transform (FT) InfraRed (IR) microspectroscopy (microFTIR), in combination with unsupervised Hierarchical Cluster Analysis (HCA), to identify drug resistance/sensitivity in leukemic cells exposed to tyrosine kinase inhibitors (TKIs). Experiments were carried out in a well-established mouse model of human Chronic Myelogenous Leukemia (CML). Mouse-derived pro-B Ba/F3 cells transfected with and stably expressing the human drug-sensitive wild-type p210(BCR-ABL) or the V299L or T315I p210(BCR-ABL) drug-resistant mutants were exposed to imatinib-mesylate (IMA) or dasatinib (DAS). MicroFTIR was carried out at the Diamond IR beamline MIRIAM, where the mid-IR absorbance spectra of individual Ba/F3 cells were acquired using high-brilliance IR synchrotron radiation (SR) via an aperture of 15 × 15 μm² in size. A conventional IR source (globar) was used to compare average spectra over 15 cells or more. IR signatures of drug actions were identified by supervised analyses in the spectra of TKI-sensitive cells. Unsupervised HCA applied to selected wavenumber intervals allowed us to classify the IR patterns of viable (drug-resistant) and apoptotic (drug-sensitive) cells with an accuracy of >95%. The results of the microFTIR + HCA analysis were cross-validated with those obtained via immunochemical methods, i.e., immunoblotting and flow cytometry (FC), and were directly and significantly correlated. We conclude that this combined microFTIR + HCA method potentially represents a rapid, convenient and robust screening approach to study the impact of drugs in leukemic cells as well as in peripheral blasts from patients in clinical trials with new anti-leukemic drugs.
Adaptive Integral Method for Higher-Order Hierarchical Method of Moments
DEFF Research Database (Denmark)
Kim, Oleksiy S.; Meincke, Peter
2006-01-01
The Adaptive Integral Method (AIM) is applied to solve the volume integral equation in conjunction with the higher-order Method of Moments (MoM). The classical AIM is modified for larger discretization cells to take advantage of higher-order MoM. The technique combines the low computational complexity and memory requirements of AIM with the reduced number of unknowns and higher-order convergence of higher-order hierarchical Legendre basis functions. Numerical examples given show the advantages of the proposed technique over AIM based on low-order basis functions in terms of memory...
Directory of Open Access Journals (Sweden)
WenJun Zhang
2016-06-01
Full Text Available Some networks, including biological networks, consist of hierarchical sub-networks / modules. Based on my previous study, in the present study a method for both identifying hierarchical sub-networks / modules and weighting network links is proposed. It is based on cluster analysis in which between-node similarity in sets of adjacency nodes is used. Two matrices, linkWeightMat and linkClusterIDs, are obtained by the algorithm. Two links with both the same weight in linkWeightMat and the same cluster ID in linkClusterIDs belong to the same sub-network / module. Two links with the same weight in linkWeightMat but different cluster IDs in linkClusterIDs belong to two sub-networks / modules at the same hierarchical level. However, a link with a unique cluster ID in linkClusterIDs does not belong to any sub-network / module. A sub-network / module with a greater weight is a more densely connected sub-network / module. Matlab codes of the algorithm are presented.
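A minimal sketch of the between-node-similarity idea (not the author's Matlab code): weight each link by the Jaccard similarity of its endpoints' adjacency sets, so that links inside densely connected sub-networks score higher than bridges between modules. The toy edge list is an assumption for illustration.

```python
# Weight each link by the Jaccard similarity of its endpoints' neighbour sets:
# links inside a tight module share many neighbours; bridges share few or none.
def link_weights(edges):
    nbrs = {}
    for a, b in edges:
        nbrs.setdefault(a, set()).add(b)
        nbrs.setdefault(b, set()).add(a)
    weights = {}
    for a, b in edges:
        shared = nbrs[a] & nbrs[b]
        union = nbrs[a] | nbrs[b]
        weights[(a, b)] = len(shared) / len(union)
    return weights

# a triangle joined to a second triangle by a single bridge edge (3, 4)
edges = [(1, 2), (2, 3), (1, 3), (3, 4), (4, 5), (5, 6), (4, 6)]
w = link_weights(edges)
# the bridge (3, 4) gets the lowest weight
```

Feeding these weights into a standard hierarchical clustering of links would then group same-weight, same-cluster links into modules, in the spirit of the linkWeightMat / linkClusterIDs matrices described above.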
Hocking, Alex; Davey, Neil; Sun, Yi
2015-01-01
We present a novel unsupervised learning approach to automatically segment and label images in astronomical surveys. Automation of this procedure will be essential as next-generation surveys enter the petabyte scale: data volumes will exceed the capability of even large crowd-sourced analyses. We demonstrate how a growing neural gas (GNG) can be used to encode the feature space of imaging data. When coupled with a technique called hierarchical clustering, imaging data can be automatically segmented and labelled by organising nodes in the GNG. The key distinction of unsupervised learning is that these labels need not be known prior to training; rather, they are determined by the algorithm itself. Importantly, after training a network can be presented with images it has never 'seen' before and provide consistent categorisation of features. As a proof-of-concept we demonstrate application on data from the Hubble Space Telescope Frontier Fields: images of clusters of galaxies containing a mixture of galaxy type...
A Method of Database Cross Migration by Modeling the Database Object Hierarchically
Institute of Scientific and Technical Information of China (English)
安永进; 全学哲
2014-01-01
In this paper we study general database migration methods and present a migration method based on hierarchical modeling of database objects. This method supports automatic migration of the database without manual work by the user and, especially, without any data loss.
Analytical relations concerning the collapse time in hierarchically clustered cosmological models
Gambera, M
1997-01-01
By means of numerical methods, we solve the equations of motion for the collapse of a shell of baryonic matter, made of galaxies and substructure, falling into the central regions of a cluster of galaxies, taking into account the effect of dynamical friction. The parameters on which the dynamical friction mainly depends are: the peaks' height, the number of peaks inside a protocluster multiplied by the correlation function evaluated at the origin, the filtering radius, and the nucleus radius of the protocluster of galaxies. We show how the collapse time (Tau) of the shell depends on these parameters. We give a formula that links the dynamical friction coefficient (Eta) to the parameters mentioned above and an analytic relation between the collapse time and (Eta). Finally, we obtain an analytical relation between (Tau) and the mean overdensity (mean Delta) within the shell. All the analytical relations that we find are in excellent agreement with the numerical integration.
Morgan, Katy E; Forbes, Andrew B; Keogh, Ruth H; Jairath, Vipul; Kahan, Brennan C
2017-01-30
In cluster randomised cross-over (CRXO) trials, clusters receive multiple treatments in a randomised sequence over time. In such trials, there is usually correlation between patients in the same cluster. In addition, within a cluster, patients in the same period may be more similar to each other than to patients in other periods. We demonstrate that it is necessary to account for these correlations in the analysis to obtain correct Type I error rates. We then use simulation to compare different methods of analysing a binary outcome from a two-period CRXO design. Our simulations demonstrated that hierarchical models without random effects for period-within-cluster, which do not account for any extra within-period correlation, performed poorly, with greatly inflated Type I errors in many scenarios. In scenarios where extra within-period correlation was present, a hierarchical model with random effects for cluster and period-within-cluster had correct Type I errors only when there were large numbers of clusters; with small numbers of clusters, the error rate was inflated. We also found that generalised estimating equations did not give correct error rates in any scenarios considered. An unweighted cluster-level summary regression performed best overall, maintaining an error rate close to 5% for all scenarios, although it lost power when extra within-period correlation was present, especially for small numbers of clusters. Results from our simulation study show that it is important to model both levels of clustering in CRXO trials, and that any extra within-period correlation should be accounted for. Copyright © 2016 John Wiley & Sons, Ltd.
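The unweighted cluster-level summary approach that performed best can be sketched as follows, assuming a risk-difference effect measure and one observation list per cluster-period. This is a simplified reading of the method: a real analysis would also compute a t-based confidence interval across the per-cluster differences, and the outcome data here are invented.

```python
# Unweighted cluster-level summary for a two-period CRXO design: reduce each
# cluster-period to one event proportion, then average the within-cluster
# treatment-minus-control differences across clusters (each cluster counts once).
def cluster_summary_effect(trial):
    # trial: list of clusters, each {"treat": [0/1 outcomes], "control": [0/1 outcomes]}
    diffs = [sum(c["treat"]) / len(c["treat"])
             - sum(c["control"]) / len(c["control"])
             for c in trial]
    return sum(diffs) / len(diffs)

trial = [
    {"treat": [1, 1, 0, 1], "control": [0, 1, 0, 0]},
    {"treat": [1, 0, 1, 1], "control": [1, 0, 0, 0]},
]
effect = cluster_summary_effect(trial)   # mean of per-cluster risk differences
```

Because each cluster contributes a single summary difference, both the within-cluster and the within-period correlation are absorbed into the cluster-level quantity, which is why this estimator keeps its Type I error rate in the scenarios described above.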
Škrbić, Biljana; Héberger, Károly; Durišić-Mladenović, Nataša
2013-10-01
Sum of ranking differences (SRD) was applied for comparing multianalyte results obtained by several analytical methods used in one or in different laboratories, i.e., for ranking the overall performances of the methods (or laboratories) in simultaneous determination of the same set of analytes. The data sets for testing the applicability of SRD contained the results reported during one of the proficiency tests (PTs) organized by the EU Reference Laboratory for Polycyclic Aromatic Hydrocarbons (EU-RL-PAH). In this way, SRD was also tested as a discriminant method alternative to the existing average performance scores used to compare multianalyte PT results. SRD should be used along with the z scores--the most commonly used PT performance statistic. SRD was further developed to handle identical rankings (ties) among laboratories. Two benchmark concentration series were selected as reference: (a) the assigned PAH concentrations (determined precisely beforehand by the EU-RL-PAH) and (b) the averages of all individual PAH concentrations determined by each laboratory. Ranking relative to the assigned values and also to the average (or median) values pointed to the laboratories with the most extreme results, and revealed groups of laboratories with similar overall performances. SRD reveals differences between methods or laboratories even if classical test(s) cannot. The ranking was validated using comparison of ranks by random numbers (a randomization test) and using sevenfold cross-validation, which highlighted the similarities among the (methods used in the) laboratories. Principal component analysis and hierarchical cluster analysis justified the findings based on SRD ranking/grouping. If the PAH concentrations are row-scaled (i.e., z scores are analyzed as input for ranking), SRD can still be used for checking the normality of errors. Moreover, cross-validation of SRD on z scores groups the laboratories similarly. The SRD technique is general in nature, i.e., it can
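The core SRD computation can be sketched as follows. This minimal version ignores the tie-handling extension mentioned above, and the concentration values are hypothetical, not PT data.

```python
# Minimal sum-of-ranking-differences (SRD) sketch: rank each method's analyte
# concentrations, then sum the absolute rank differences from the reference
# ranking. Lower SRD = closer agreement with the reference ordering.
def ranks(values):
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0] * len(values)
    for rank, i in enumerate(order):
        r[i] = rank
    return r

def srd(method, reference):
    return sum(abs(a - b) for a, b in zip(ranks(method), ranks(reference)))

assigned = [1.2, 3.4, 0.8, 2.2]   # hypothetical reference (assigned) PAH levels
lab_a    = [1.1, 3.5, 0.9, 2.0]   # same analyte ordering as the reference
lab_b    = [3.0, 1.0, 2.5, 0.7]   # scrambled ordering
# srd(lab_a, assigned) is 0; srd(lab_b, assigned) is larger
```

The randomization test described above would compare each laboratory's SRD value against the distribution of SRD scores obtained from random rankings, flagging laboratories whose agreement with the reference could have arisen by chance.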
Energy Efficient Backoff Hierarchical Clustering Algorithms for Multi-Hop Wireless Sensor Networks
Institute of Scientific and Technical Information of China (English)
Jun Wang; Yong-Tao Cao; Jun-Yuan Xie; Shi-Fu Chen
2011-01-01
Compared with flat routing protocols, clustering is a fundamental performance-improvement technique in wireless sensor networks, which can increase network scalability and lifetime. In this paper, we integrate the multi-hop technique with a backoff-based clustering algorithm to organize sensors. By using an adaptive backoff strategy, the algorithm not only realizes load balance among sensor nodes, but also achieves a fairly uniform cluster head distribution across the network. Simulation results also demonstrate that our algorithm is more energy-efficient than classical ones. Our algorithm is also easily extended to generate a hierarchy of cluster heads to obtain better network management and energy efficiency.
Hierarchical hybrid testability modeling and evaluation method based on information fusion
Institute of Scientific and Technical Information of China (English)
Xishan Zhang; Kaoli Huang; Pengcheng Yan; Guangyao Lian
2015-01-01
In order to meet the demand of testability analysis and evaluation for complex equipment under a small sample test in the equipment life cycle, the hierarchical hybrid testability modeling and evaluation method (HHTME), which combines the testability structure model (TSM) with the testability Bayesian networks model (TBNM), is presented. Firstly, the testability network topology of complex equipment is built by using the hierarchical hybrid testability modeling method. Secondly, the prior conditional probability distribution between network nodes is determined through expert experience. Then the Bayesian method is used to update the conditional probability distribution according to history test information, virtual simulation information and similar product information. Finally, the learned hierarchical hybrid testability model (HHTM) is used to estimate the testability of the equipment. Compared with the results of other modeling methods, the relative deviation of the HHTM is only 0.52%, and the evaluation result is the most accurate.
Astionenko, I. O.; Litvinenko, O. I.; Osipova, N. V.; Tuluchenko, G. Ya.; Khomchenko, A. N.
2016-10-01
Recently, interpolation bases of the hierarchical type have been used for solving problems of approximating functions of multiple arguments (such as in the finite-element method). In this work, a cognitive graphical method of constructing hierarchical-form bases on serendipity finite elements is suggested, which makes it possible to obtain alternative bases on a biquadratic finite element from the serendipity family without the inclusion of internal knots. The cognitive-graphic method also made it possible to improve the known interpolation procedure of Taylor and to obtain modified elements with an irregular arrangement of knots. The proposed procedures are universal and extend across the finite-element field.
The Development of Cluster and Histogram Methods
Swendsen, Robert H.
2003-11-01
This talk will review the history of both cluster and histogram methods for Monte Carlo simulations. Cluster methods are based on the famous exact mapping by Fortuin and Kasteleyn from general Potts models onto a percolation representation. I will discuss the Swendsen-Wang algorithm, as well as its improvement and extension to more general spin models by Wolff. The Replica Monte Carlo method further extended cluster simulations to deal with frustrated systems. The history of histograms is quite extensive, and can only be summarized briefly in this talk. It goes back at least to work by Salsburg et al. in 1959. Since then, it has been forgotten and rediscovered several times. The modern use of the method has exploited its ability to efficiently determine the location and height of peaks in various quantities, which is of prime importance in the analysis of critical phenomena. The extensions of this approach to the multiple histogram method and multicanonical ensembles have allowed information to be obtained over a broad range of parameters. Histogram simulations and analyses have become standard techniques in Monte Carlo simulations.
Directory of Open Access Journals (Sweden)
Ashim Kumar Ghosh
2011-12-01
Full Text Available Wireless sensor nodes are used in many embedded computing applications. A multihop cluster hierarchy has been presented for large wireless sensor networks (WSNs) that can provide scalable routing, data aggregation, and querying. The energy consumption rate for sensors in a WSN varies greatly based on the protocols the sensors use for communications. In this paper we present a cluster-based routing algorithm. One of our main goals is to design an energy-efficient routing protocol, and here we try to solve the usual problems of WSNs. We know that the efficiency of a WSN depends upon the distance between nodes and the base station and the amount of data to be transferred, and that the performance of clustering is greatly influenced by the selection of cluster heads, which are in charge of creating clusters and controlling member nodes. Our algorithm makes the best use of nodes through a low number of cluster heads, known as super nodes. We divide the full region into four equal zones, and the centre area of the region is used to select the super node. Each zone is considered separately, and a zone may or may not be divided further, depending upon the density of nodes in that zone and the capability of the super node. The algorithm forms multilayer communication; the number of layers depends on the current network load and statistics. Our algorithm is easily extended to generate a hierarchy of cluster heads to obtain better network management and energy efficiency.
Deepthi, Dasika Ratna
2009-01-01
In this paper, we prove a crucial theorem called the Mirroring Theorem, which affirms that, given a collection of samples with enough information in it that it can be classified into classes and subclasses, then (i) there exists a mapping which classifies and subclassifies these samples and (ii) there exists a hierarchical classifier, constructed by using Mirroring Neural Networks (MNNs) in combination with a clustering algorithm, that can approximate this mapping. Thus, the proof of the Mirroring Theorem provides a theoretical basis for the existence, and the practical feasibility of constructing, hierarchical classifiers, given the maps. Our proposed Mirroring Theorem can also be considered an extension of Kolmogorov's theorem in providing a realistic solution for unsupervised classification. The techniques we develop are general in nature and have led to the construction of learning machines which are (i) tree-like in structure, (ii) modular, (iii) with each module running on a common algorithm (tandem a...
Energy flow in plate assembles by hierarchical version of finite element method
DEFF Research Database (Denmark)
Wachulec, Marcin; Kirkegaard, Poul Henning
method has been proposed. In this paper a modified hierarchical version of the finite element method is used for modelling of energy flow in plate assemblies. The formulation includes a description of in-plane forces so that plates lying in different planes can be modelled. Two examples are considered: an L-corner of two rectangular plates and an I-shaped plate girder made of five plates. Energy distribution among the plates due to a harmonic load is studied, and a comparison of performance between the hierarchical and standard finite element formulations is presented.
An adaptive spatial clustering method for automatic brain MR image segmentation
Institute of Scientific and Technical Information of China (English)
Jingdan Zhang; Daoqing Dai
2009-01-01
In this paper, an adaptive spatial clustering method is presented for automatic brain MR image segmentation, based on a competitive learning algorithm, the self-organizing map (SOM). We use a pattern recognition approach in terms of feature generation and classifier design. Firstly, a multi-dimensional feature vector is constructed using local spatial information. Then, an adaptive spatial growing hierarchical SOM (ASGHSOM) is proposed as the classifier; it is an extension of the SOM, fusing multi-scale segmentation with the competitive learning clustering algorithm to overcome the problem of overlapping grey-scale intensities in boundary regions. Furthermore, an adaptive spatial distance is integrated with ASGHSOM, in which local spatial information is considered in the clustering process to reduce the noise effect and the classification ambiguity. Our proposed method is validated by extensive experiments using both simulated and real MR data with varying noise levels, and is compared with state-of-the-art algorithms.
Clustering of hydrological data: a review of methods for runoff predictions in ungauged basins
Dogulu, Nilay; Kentel, Elcin
2017-04-01
There is a great body of research addressing the challenge of hydrological prediction in ungauged basins, driven by the Prediction in Ungauged Basins (PUB) initiative of the International Association of Hydrological Sciences (IAHS). Transfer of hydrological information (e.g. model parameters, flow signatures) from gauged to ungauged catchments, often referred to as "regionalization", is the main objective and benefits from the identification of hydrologically homogeneous regions. Within this context, indirect representation of hydrologic similarity for ungauged catchments, which is not a straightforward task due to the absence of streamflow measurements and insufficient knowledge of hydrologic behavior, has been explored in the literature. To this aim, clustering methods have been widely adopted. While most studies employ hard clustering techniques such as hierarchical (divisive or agglomerative) clustering, there have been more recent attempts taking advantage of fuzzy set theory (fuzzy clustering) and nonlinear methods (e.g. self-organizing maps). The relevant research findings from this fundamental task of hydrologic science have revealed the value of different clustering methods for improved understanding of catchment hydrology. Despite these advancements, challenges remain, and with them opportunities for research on clustering for regionalization purposes. The present work provides an overview of clustering techniques and their applications in hydrology, with a focus on regionalization for the PUB problem. Identifying their advantages and disadvantages, we discuss the potential of innovative clustering methods and reflect on future challenges in view of the research objectives of the PUB initiative.
Mapping Cigarettes Similarities using Cluster Analysis Methods
Directory of Open Access Journals (Sweden)
Lorentz Jäntschi
2007-09-01
Full Text Available The aim of the research was to investigate the relationships and/or occurrences in and between chemical composition information (tar, nicotine, carbon monoxide), market information (brand, manufacturer, price), and public health information (class, health warning), as well as the clustering of a sample of cigarette data. Thirty cigarette brands were analyzed. Six categorical (cigarette brand, manufacturer, health warnings, class) and four continuous (tar, nicotine, and carbon monoxide concentrations, and package price) variables were collected for the investigation of chemical composition, market information and public health information. Multiple linear regression and two clustering techniques were applied. The study revealed interesting findings. The carbon monoxide concentration proved to be linked with the tar and nicotine concentrations. The applied clustering methods identified groups of cigarette brands that showed similar characteristics. The tar and carbon monoxide concentrations were the main criteria used in clustering. An analysis of a larger sample could reveal more relevant and useful information regarding the similarities between cigarette brands.
Using hierarchical Bayesian methods to examine the tools of decision-making
Directory of Open Access Journals (Sweden)
Michael D. Lee
2011-12-01
Full Text Available Hierarchical Bayesian methods offer a principled and comprehensive way to relate psychological models to data. Here we use them to model the patterns of information search, stopping and deciding in a simulated binary comparison judgment task. The simulation involves 20 subjects making 100 forced choice comparisons about the relative magnitudes of two objects (which of two German cities has more inhabitants. Two worked-examples show how hierarchical models can be developed to account for and explain the diversity of both search and stopping rules seen across the simulated individuals. We discuss how the results provide insight into current debates in the literature on heuristic decision making and argue that they demonstrate the power and flexibility of hierarchical Bayesian methods in modeling human decision-making.
Cluster based hierarchical resource searching model in P2P network
Institute of Scientific and Technical Information of China (English)
Yang Ruijuan; Liu Jian; Tian Jingwen
2007-01-01
For the problem of the large network load generated by the Gnutella resource-searching model in Peer-to-Peer (P2P) networks, an improved model to decrease the network expense is proposed, which establishes a cluster in the P2P network, self-organizes logical layers, and applies a hybrid mechanism of directional searching and flooding. The performance analysis and simulation results show that the proposed hierarchical searching model effectively reduces the generated message load and that its searching-response time is nearly as good as that of the Gnutella model.
Comparing the performance of biomedical clustering methods
DEFF Research Database (Denmark)
Wiwie, Christian; Baumbach, Jan; Röttger, Richard
2015-01-01
Identifying groups of similar objects is a popular first step in biomedical data analysis, but it is error-prone and impossible to perform manually. Many computational methods have been developed to tackle this problem. Here we assessed 13 well-known methods using 24 data sets ranging from gene......-ranging comparison we were able to develop a short guideline for biomedical clustering tasks. ClustEval allows biomedical researchers to pick the appropriate tool for their data type and allows method developers to compare their tool to the state of the art....
Munshi, Dipak; Coles, Peter; Melott, Adrian L.
1999-01-01
We develop a diagrammatic technique to represent the multi-point cumulative probability density function (CPDF) of mass fluctuations in terms of the statistical properties of individual collapsed objects, and relate this to other statistical descriptors such as cumulants, cumulant correlators and factorial moments. We use this approach to establish key scaling relations describing various measurable statistical quantities if clustering follows a simple general scaling ansatz, as expected in hierarchical models. We test these detailed predictions against high-resolution numerical simulations. We show that, when appropriate variables are used, the count probability distribution function (CPDF) and void probability distribution function (VPF) show clear scaling properties in the non-linear regime. Generalising the results to the two-point count probability distribution function (2CPDF) and the bivariate void probability function (2VPF), we find good agreement with numerical simulations. We explore the behaviour of t...
Institute of Scientific and Technical Information of China (English)
王鑫
2012-01-01
In view of problems in insurance premium income prediction, an empirical simulation study based on least-squares fitting, hierarchical clustering analysis and a Markov chain model was carried out on the monthly premium income of Dalian's primary life insurance during 2008-2011. Quantitative analysis of the same data was used for qualitative prediction. The results show that this method is fairly accurate in qualitative prediction.
Recent advances in coupled-cluster methods
Bartlett, Rodney J
1997-01-01
Today, coupled-cluster (CC) theory has emerged as the most accurate, widely applicable approach for the correlation problem in molecules. Furthermore, the correct scaling of the energy and wavefunction with size (i.e. extensivity) recommends it for studies of polymers and crystals as well as molecules. CC methods have also paid dividends for nuclei, and for certain strongly correlated systems of interest in field theory. In order for CC methods to have achieved this distinction, it has been necessary to formulate new, theoretical approaches for the treatment of a variety of essential quantities
The polarizable embedding coupled cluster method
DEFF Research Database (Denmark)
Sneskov, Kristian; Schwabe, Tobias; Kongsted, Jacob
2011-01-01
We formulate a new combined quantum mechanics/molecular mechanics (QM/MM) method based on a self-consistent polarizable embedding (PE) scheme. For the description of the QM region, we apply the popular coupled cluster (CC) method, detailing the inclusion of electrostatic and polarization effects...... all coupled to a polarizable MM environment. In the process, we identify CC density-like intermediates that allow for a very efficient implementation, retaining a low computational cost of the QM/MM terms even when the number of MM sites increases. The strengths of the new implementation are illustrated...
A 3D AgCl hierarchical superstructure synthesized by a wet chemical oxidation method.
Lou, Zaizhu; Huang, Baibiao; Ma, Xiangchao; Zhang, Xiaoyang; Qin, Xiaoyan; Wang, Zeyan; Dai, Ying; Liu, Yuanyuan
2012-12-07
A novel 3D AgCl hierarchical superstructure, with fast growth along the 〈111〉 directions of cubic seeds, is synthesized by using a wet chemical oxidation method. The morphological structures and the growth process are investigated by scanning electron microscopy and X-ray diffraction. The crystal structures are analyzed by their crystallographic orientations. The surface energy of AgCl facets {100}, {110}, and {111} with absorbance of Cl(-) ions is studied by density functional theory calculations. Based on the experimental and computational results, a plausible mechanism is proposed to illustrate the formation of the 3D AgCl hierarchical superstructures. With more active sites, the photocatalytic activity of the 3D AgCl hierarchical superstructures is better than those of concave and cubic ones in oxygen evolution under irradiation by visible light.
Hierarchical clustering-based semantic retrieval of RDF graphs
Institute of Scientific and Technical Information of China (English)
刘宁; 左凤华; 张俊
2012-01-01
Current research on RDF graph retrieval suffers from problems such as inefficient memory usage and low search efficiency. This paper proposes a hierarchical clustering semantic retrieval model for RDF graphs, together with a retrieval method based on this model, to solve these problems. Extracting entities from the RDF graph and clustering them hierarchically under the guidance of an ontology library turns the complex graph structure into a tree structure suited to efficient retrieval. After locating the target object, one of the nodes of the model, in the RDF graph, the method conducts semantically expanded queries. Retrieval efficiency increases because the retrieval scope narrows as the retrieval model is constructed, and the semantic expansion of queries also yields better recall.
Cluster analysis for applications
Anderberg, Michael R
1973-01-01
Cluster Analysis for Applications deals with methods and various applications of cluster analysis. Topics covered range from variables and scales to measures of association among variables and among data units. Conceptual problems in cluster analysis are discussed, along with hierarchical and non-hierarchical clustering methods. The necessary elements of data analysis, statistics, cluster analysis, and computer implementation are integrated vertically to cover the complete path from raw data to a finished analysis. Comprised of 10 chapters, this book begins with an introduction to the subject o
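The hierarchical methods surveyed in such texts are easy to try with standard tools; below is a minimal sketch of agglomerative clustering with average (UPGMA) linkage, using SciPy on synthetic data (the two-blob data set and the cluster count are illustrative assumptions, not an example from the book):

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
# Two well-separated 2-D blobs of 20 points each
data = np.vstack([rng.normal(0, 0.3, (20, 2)),
                  rng.normal(5, 0.3, (20, 2))])

# 'average' is UPGMA linkage; 'single', 'complete' and 'ward' are alternatives
Z = linkage(data, method="average")          # (n-1) merge steps
labels = fcluster(Z, t=2, criterion="maxclust")  # cut the tree into 2 clusters

# Both blobs should be recovered as pure clusters
assert len(set(labels[:20])) == 1 and len(set(labels[20:])) == 1
```

Changing the `method` argument reproduces the other linkage strategies (single, complete, Ward) discussed alongside UPGMA.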
K-Profiles: A Nonlinear Clustering Method for Pattern Detection in High Dimensional Data
Directory of Open Access Journals (Sweden)
Kai Wang
2015-01-01
Full Text Available With modern technologies such as microarray, deep sequencing, and liquid chromatography-mass spectrometry (LC-MS), it is possible to measure the expression levels of thousands of genes/proteins simultaneously to unravel important biological processes. A first step towards elucidating hidden patterns and understanding the massive data is the application of clustering techniques. Nonlinear relations, which have gone mostly unutilized in contrast to linear correlations, are prevalent in high-throughput data. In many cases, nonlinear relations can model the biological relationship more precisely and reflect critical patterns in the biological systems. Using the general dependency measure Distance Based on Conditional Ordered List (DCOL) that we introduced before, we designed the nonlinear K-profiles clustering method, which can be seen as the nonlinear counterpart of the K-means clustering algorithm. The method has a built-in statistical testing procedure that ensures genes not belonging to any cluster do not impact the estimation of cluster profiles. Results from extensive simulation studies showed that K-profiles clustering not only outperformed the traditional linear K-means algorithm, but also presented significantly better performance than our previous General Dependency Hierarchical Clustering (GDHC) algorithm. We further analyzed a gene expression dataset, on which K-profiles clustering generated biologically meaningful results.
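As a point of reference for the K-means baseline that K-profiles generalizes, here is a minimal Lloyd's-algorithm sketch in NumPy on synthetic data; this is not the K-profiles method itself, and the function name, data and parameters are illustrative:

```python
import numpy as np

def kmeans(X, k, iters=100, seed=0):
    """Plain Lloyd's algorithm: alternate nearest-center assignment
    and centroid update until the centers stop moving."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]  # init from data points
    for _ in range(iters):
        # Distance of every point to every center, then nearest-center labels
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Recompute centroids; keep the old center if a cluster goes empty
        new = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                        else centers[j] for j in range(k)])
        if np.allclose(new, centers):
            break
        centers = new
    return labels, centers

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-4, 0.5, (30, 2)), rng.normal(4, 0.5, (30, 2))])
labels, centers = kmeans(X, 2)
# Two well-separated blobs should be recovered exactly
assert len(set(labels[:30])) == 1 and len(set(labels[30:])) == 1
```

K-profiles replaces the Euclidean objective here with the DCOL dependency measure; this sketch only shows the linear baseline being compared against.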
Directory of Open Access Journals (Sweden)
Dasika Ratna Deepthi
2009-10-01
Full Text Available In this paper, we prove a crucial theorem called the "Mirroring Theorem", which affirms that, given a collection of samples with enough information in it that it can be classified into classes and sub-classes, then (i) there exists a mapping which classifies and subclassifies these samples and (ii) there exists a hierarchical classifier, constructed by using Mirroring Neural Networks (MNNs) in combination with a clustering algorithm, that can approximate this mapping. Thus, the proof of the Mirroring Theorem provides a theoretical basis for the existence, and the practical feasibility of constructing, hierarchical classifiers, given the maps. Our proposed Mirroring Theorem can also be considered an extension of Kolmogorov's theorem in providing a realistic solution for unsupervised classification. The techniques we develop are general in nature and have led to the construction of learning machines which are (i) tree-like in structure, (ii) modular, (iii) with each module running on a common algorithm (the tandem algorithm) and (iv) self-supervised. We have actually built the architecture, developed the tandem algorithm of such a hierarchical classifier and demonstrated it on an example problem.
Topologically clustering: a method for discarding mismatches
Wang, Yongtao; Zhang, Dazhi; Gao, Chenqiang; Tian, Jinwen
2007-11-01
Wide baseline stereo correspondence has become a challenging and attractive problem in computer vision and its related applications. Obtaining initial matches with a high correct ratio is a very important step in general wide baseline stereo correspondence algorithms. Ferrari et al. suggested a voting scheme called topological filter in [3] to discard mismatches from initial matches, but they did not give a theoretical analysis of their method, and the parameter of their scheme was uncertain. In this paper, we improve Ferrari et al.'s method based on our theoretical analysis and present a novel scheme called topologically clustering to discard mismatches. The proposed method has been tested on many well-known wide baseline image pairs, and the experimental results show that it can efficiently extract high-correct-ratio matches from low-correct-ratio initial matches for wide baseline image pairs.
Fuzzy Clustering - Principles, Methods and Examples
DEFF Research Database (Denmark)
Kroszynski, Uri; Zhou, Jianjun
1998-01-01
One of the most remarkable advances in the field of identification and control of systems (in particular mechanical systems) whose behaviour cannot be described by means of the usual mathematical models has been achieved by the application of methods of fuzzy theory. In the framework of a study about identification of "black-box" properties by analysis of system input/output data sets, we have prepared an introductory note on the principles and the most popular data classification methods used in fuzzy modeling. This introductory note also includes some examples that illustrate the use of the methods. The examples were solved by hand and served as a test bench for exploration of the MATLAB capabilities included in the Fuzzy Control Toolbox. The fuzzy clustering methods described include Fuzzy c-means (FCM), Fuzzy c-lines (FCL) and Fuzzy c-elliptotypes (FCE).
TWO-LEVEL HIERARCHICAL COORDINATION QUEUING METHOD FOR TELECOMMUNICATION NETWORK NODES
Directory of Open Access Journals (Sweden)
M. V. Semenyaka
2014-07-01
Full Text Available The paper presents a hierarchical coordination queuing method. Within the proposed method, the queuing problem is reduced to solving an optimization problem presented as a two-level hierarchical structure. The required distribution of flows and bandwidth allocation is calculated at the first level independently for each macro-queue; at the second level, the solutions obtained at the lower level for each queue are coordinated in order to prevent probable network link overload. The method of goal coordination has been adopted for managing the multilevel structure, which makes it possible to define the order in which queue-cooperation restrictions are considered and calculation tasks are distributed between the levels of the hierarchy. Coordination of the decisions is performed by the method of Lagrange multipliers. The convergence of the method has been studied by analytical modeling.
Scheduling method based on virtual flattened architecture for Hierarchical system-on-chip
Institute of Scientific and Technical Information of China (English)
ZHANG Dong; ZHANG Jin-yi; YANG Xiao-dong; YANG Yi
2009-01-01
As IP-core reuse technology has become widely adopted, many intellectual property (IP) cores are embedded in different layers of a system-on-chip (SOC). Although development cycles and overhead are reduced by this approach, it poses a challenge to SOC testing. This paper proposes a scheduling method based on a virtual flattened architecture for hierarchical SOCs, which breaks the hierarchical architecture down into a virtual flattened one. This method has advantages over the traditional one, which tests the parent cores and child cores separately. Finally, the method is verified on the ITC'02 benchmark, with good results that reduce the test time and overhead effectively.
Wu, Shengming; Xia, Tian; Wang, Jingping; Lu, Feifei; Xu, Chunbo; Zhang, Xianfa; Huo, Lihua; Zhao, Hui
2017-06-01
Herein, ultrathin mesoporous Co3O4 nanosheet-constructed hierarchical clusters (UMCN-HCs) have been successfully synthesized via a facile hydrothermal method followed by a subsequent thermolysis treatment at 600 °C in air. The products consist of cluster-like Co3O4 microarchitectures, which are assembled from numerous ultrathin mesoporous Co3O4 nanosheets. When tested as anode materials for lithium-ion batteries, UMCN-HCs deliver a high reversible capacity of 1067 mAh g-1 at a current density of 100 mA g-1 after 100 cycles. Even at 2 A g-1, a stable capacity as high as 507 mAh g-1 can be achieved after 500 cycles. The high reversible capacity, excellent cycling stability, and good rate capability of UMCN-HCs may be attributed to their mesoporous sheet-like nanostructure. The sheet-layered structure of UMCN-HCs may buffer the volume change during the lithiation-delithiation process, and the mesoporous character makes lithium-ion transfer easier at the interface between the active electrode and the electrolyte.
Validity of the t-plot method to assess microporosity in hierarchical micro/mesoporous materials.
Galarneau, Anne; Villemot, François; Rodriguez, Jeremy; Fajula, François; Coasne, Benoit
2014-11-11
The t-plot method is a well-known technique which allows determining the micro- and/or mesoporous volumes and the specific surface area of a sample by comparison with a reference adsorption isotherm of a nonporous material having the same surface chemistry. In this paper, the validity of the t-plot method is discussed in the case of hierarchical porous materials exhibiting both micro- and mesoporosities. Different hierarchical zeolites with MCM-41 type ordered mesoporosity are prepared using pseudomorphic transformation. For comparison, we also consider simple mechanical mixtures of microporous and mesoporous materials. We first show an intrinsic failure of the t-plot method; this method does not describe the fact that, for a given surface chemistry and pressure, the thickness of the film adsorbed in micropores or small mesopores (plot method to estimate the micro- and mesoporous volumes of hierarchical samples is then discussed, and an abacus is given to correct the underestimated microporous volume by the t-plot method.
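At its core, the t-plot analysis is a linear regression of adsorbed volume against the statistical film thickness t of the nonporous reference, with the intercept read as the micropore volume. A hedged numerical sketch on idealized synthetic data (the thickness range, volumes and slope are illustrative assumptions, not values from the paper):

```python
import numpy as np

# Assumed multilayer region of the t-plot: film thickness t in nm
t = np.linspace(0.4, 1.0, 13)

v_micro = 0.15          # "true" micropore volume, cm^3/g (illustrative)
s_ext = 0.25            # external-surface term, i.e. the slope (illustrative)

# Idealized t-plot: once micropores are filled, adsorbed volume grows
# linearly with t, offset by the micropore volume
v_ads = v_micro + s_ext * t

# Fit the linear region; the intercept estimates the micropore volume
slope, intercept = np.polyfit(t, v_ads, 1)
assert abs(intercept - v_micro) < 1e-6
assert abs(slope - s_ext) < 1e-6
```

The paper's point is precisely that for hierarchical micro/mesoporous samples this idealized linearity breaks down, so the intercept underestimates the true microporous volume.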
Hierarchical Affinity Propagation
Givoni, Inmar; Frey, Brendan J
2012-01-01
Affinity propagation is an exemplar-based clustering algorithm that finds a set of data-points that best exemplify the data, and associates each datapoint with one exemplar. We extend affinity propagation in a principled way to solve the hierarchical clustering problem, which arises in a variety of domains including biology, sensor networks and decision making in operational research. We derive an inference algorithm that operates by propagating information up and down the hierarchy, and is efficient despite the high-order potentials required for the graphical model formulation. We demonstrate that our method outperforms greedy techniques that cluster one layer at a time. We show that on an artificial dataset designed to mimic the HIV-strain mutation dynamics, our method outperforms related methods. For real HIV sequences, where the ground truth is not available, we show our method achieves better results, in terms of the underlying objective function, and show the results correspond meaningfully to geographi...
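For readers who want to experiment, flat (non-hierarchical) affinity propagation is available in scikit-learn; the hierarchical extension described in this abstract is not part of that library. A minimal sketch on synthetic data (the data set and parameters are illustrative assumptions):

```python
import numpy as np
from sklearn.cluster import AffinityPropagation

rng = np.random.default_rng(3)
# Two tight, well-separated blobs
X = np.vstack([rng.normal(0, 0.3, (20, 2)),
               rng.normal(5, 0.3, (20, 2))])

# Exemplar-based clustering: each cluster is represented by an actual point
ap = AffinityPropagation(damping=0.9, random_state=0).fit(X)
labels = ap.labels_

assert len(ap.cluster_centers_indices_) >= 2
# No cluster should mix points from the two blobs
assert not (set(labels[:20]) & set(labels[20:]))
```

Note that affinity propagation chooses the number of clusters itself, driven by the `preference` parameter (by default the median similarity).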
MANNER OF STOCKS SORTING USING CLUSTER ANALYSIS METHODS
Directory of Open Access Journals (Sweden)
Jana Halčinová
2014-06-01
Full Text Available The aim of the present article is to show the possibility of using the methods of cluster analysis in the classification of stocks of finished products. Cluster analysis creates groups (clusters) of finished products according to similarity in demand, i.e. customer requirements for each product. The manner of sorting stocks of finished products into clusters is described with a practical example. The resulting clusters are incorporated into the draft layout of the distribution warehouse.
Stenning, D. C.; Wagner-Kaiser, R.; Robinson, E.; van Dyk, D. A.; von Hippel, T.; Sarajedini, A.; Stein, N.
2016-07-01
We develop a Bayesian model for globular clusters composed of multiple stellar populations, extending earlier statistical models for open clusters composed of simple (single) stellar populations. Specifically, we model globular clusters with two populations that differ in helium abundance. Our model assumes a hierarchical structuring of the parameters in which physical properties—age, metallicity, helium abundance, distance, absorption, and initial mass—are common to (i) the cluster as a whole or to (ii) individual populations within a cluster, or are unique to (iii) individual stars. An adaptive Markov chain Monte Carlo (MCMC) algorithm is devised for model fitting that greatly improves convergence relative to its precursor non-adaptive MCMC algorithm. Our model and computational tools are incorporated into an open-source software suite known as BASE-9. We use numerical studies to demonstrate that our method can recover parameters of two-population clusters, and also show how model misspecification can potentially be identified. As a proof of concept, we analyze the two stellar populations of globular cluster NGC 5272 using our model and methods. (BASE-9 is available from GitHub: https://github.com/argiopetech/base/releases).
Fuzzy Clustering Using C-Means Method
Directory of Open Access Journals (Sweden)
Georgi Krastev
2015-05-01
Full Text Available Fuzzy clustering according to the fuzzy c-means algorithm is described in this paper: the problem of fuzzy clustering is discussed and the general formal concept of the fuzzy clustering analysis problem is presented. The formulation of the problem is specified and an algorithm for solving it is described.
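The alternating membership/centroid updates of fuzzy c-means can be sketched in a few lines; this is a generic implementation on synthetic data (the fuzzifier m = 2, the data and all names are illustrative assumptions, not code from the paper):

```python
import numpy as np

def fcm(X, c, m=2.0, iters=100, seed=0):
    """Fuzzy c-means: soft memberships U (rows sum to 1) and centroids."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)        # random initial memberships
    for _ in range(iters):
        W = U ** m                           # fuzzified weights
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        # Distances of every point to every center
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        d = np.maximum(d, 1e-12)             # avoid division by zero
        # Standard FCM membership update: u_ik ∝ d_ik^(-2/(m-1))
        inv = d ** (-2.0 / (m - 1.0))
        U = inv / inv.sum(axis=1, keepdims=True)
    return U, centers

rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0, 0.4, (25, 2)), rng.normal(6, 0.4, (25, 2))])
U, centers = fcm(X, 2)
hard = U.argmax(axis=1)                      # defuzzify for a hard labeling
assert len(set(hard[:25])) == 1 and len(set(hard[25:])) == 1
```

Unlike hard c-means, each point retains a graded membership in every cluster; the argmax at the end is only one possible defuzzification.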
Hierarchical spatial clustering of buildings
Institute of Scientific and Technical Information of China (English)
邓敏; 孙前虎; 文小岳; 徐枫
2011-01-01
Spatial clustering provides an effective approach for the generalization of residential areas in automated cartographic generalization. Based on graph theory and the Gestalt principles, a hierarchical clustering approach for buildings is proposed in this paper. The approach can discover the graphical structure formed by buildings, obtained with consideration of their shape, size and neighbouring relations. The neighbouring relations, a qualitative constraint among buildings, are determined by Delaunay triangulation. A weighted neighbouring structural graph is obtained by setting the visual distance as the weight of the linking edge between adjacent buildings. Two levels of quantitative constraints are developed from the Gestalt factors of proximity, orientation and geometry of buildings: one is the rotating-calipers minimum distance; the other is a geometric similarity measure. Experiments illustrate that the results of the proposed hierarchical spatial clustering are consistent with human perception.
Energy flow in plate assembles by hierarchical version of finite element method
DEFF Research Database (Denmark)
Wachulec, Marcin; Kirkegaard, Poul Henning
The dynamic analysis of structures at medium and high frequencies is usually focused on frequency and spatial averages of the energy of components, and not on the displacement/velocity fields. This is especially true for structure-borne noise modelling. For the analysis of complicated structures a modified hierarchical version of the finite element method has been proposed; in this paper it is used for modelling of energy flow in plate assemblies. The formulation includes a description of in-plane forces so that plates lying in different planes can be modelled. Two examples are considered: an L-corner of two rectangular plates and an I-shaped plate girder made of five plates. Energy distribution among the plates due to a harmonic load is studied, and a comparison of performance between the hierarchical and standard finite element formulations is presented.
A new anisotropic mesh adaptation method based upon hierarchical a posteriori error estimates
Huang, Weizhang; Kamenski, Lennard; Lang, Jens
2010-03-01
A new anisotropic mesh adaptation strategy for finite element solution of elliptic differential equations is presented. It generates anisotropic adaptive meshes as quasi-uniform ones in some metric space, with the metric tensor being computed based on hierarchical a posteriori error estimates. A global hierarchical error estimate is employed in this study to obtain reliable directional information of the solution. Instead of solving the global error problem exactly, which is costly in general, we solve it iteratively using the symmetric Gauß-Seidel method. Numerical results show that a few GS iterations are sufficient for obtaining a reasonably good approximation to the error for use in anisotropic mesh adaptation. The new method is compared with several strategies using local error estimators or recovered Hessians. Numerical results are presented for a selection of test examples and a mathematical model for heat conduction in a thermal battery with large orthotropic jumps in the material coefficients.
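The symmetric Gauß-Seidel iteration used above as an inexpensive approximate solver alternates a forward and a backward sweep over the unknowns. A minimal sketch on an illustrative model problem (the 1-D Laplacian system is an assumption for demonstration, not the paper's error problem):

```python
import numpy as np

def sym_gauss_seidel(A, b, x, sweeps=1):
    """Symmetric Gauss-Seidel: forward sweep followed by backward sweep."""
    n = len(b)
    for _ in range(sweeps):
        for i in range(n):                   # forward sweep
            x[i] = (b[i] - A[i, :i] @ x[:i] - A[i, i+1:] @ x[i+1:]) / A[i, i]
        for i in reversed(range(n)):         # backward sweep
            x[i] = (b[i] - A[i, :i] @ x[:i] - A[i, i+1:] @ x[i+1:]) / A[i, i]
    return x

# Symmetric positive definite model problem: the 1-D discrete Laplacian
n = 20
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)

x = sym_gauss_seidel(A, b, np.zeros(n), sweeps=2000)
assert np.linalg.norm(A @ x - b) < 1e-8
```

In the mesh-adaptation setting only a few such sweeps are performed, since a rough approximation of the error already yields usable directional information.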
Noma, Hisashi; Matsui, Shigeyuki
2013-05-20
The main purpose of microarray studies is the screening of differentially expressed genes as candidates for further investigation. Because of limited resources at this stage, prioritizing genes is a relevant statistical task in microarray studies. For effective gene selection, parametric empirical Bayes methods for the ranking and selection of genes with the largest effect sizes have been proposed (Noma et al., 2010; Biostatistics 11: 281-289). The hierarchical mixture model incorporates differential and non-differential components and allows information borrowing across differential genes, with separation from nuisance, non-differential genes. In this article, we develop empirical Bayes ranking methods via a semiparametric hierarchical mixture model. A nonparametric prior distribution, rather than a parametric one, for effect sizes is specified and estimated using the "smoothing by roughening" approach of Laird and Louis (1991; Computational Statistics and Data Analysis 12: 27-37). We present applications to childhood and infant leukemia clinical studies with microarrays for exploring genes related to prognosis or disease progression.
Formation of an O-Star Cluster by Hierarchical Accretion in G20.08-0.14 N
Galván-Madrid, Roberto; Zhang, Qizhou; Kurtz, Stan; Rodríguez, Luis F; Ho, Paul T P
2009-01-01
Spectral line and continuum observations of the ionized and molecular gas in G20.08-0.14 N explore the dynamics of accretion over a range of spatial scales in this massive star forming region. Very Large Array observations of NH_3 at 4'' angular resolution show a large scale (0.5 pc) molecular accretion flow around and into a star cluster with three small, bright HII regions. Higher resolution (0.4'') observations with the Submillimeter Array in hot core molecules (CH_3CN, OCS, and SO_2) and the VLA in NH_3, show that the two brightest and smallest HII regions are themselves surrounded by smaller scale (0.05 pc) accretion flows. The axes of rotation of the large and small scale flows are aligned, and the time scale for the contraction of the cloud is short enough, 0.1 Myr, for the large scale accretion flow to deliver significant mass to the smaller scales within the star formation time scale. The flow structure appears to be continuous and hierarchical from larger to smaller scales. Millimeter radio recombin...
Wei, Lan; Wang, Xiaobo; Mu, Shanxue; Sun, Lixin; Yu, Zhiguo
2015-06-01
A sensitive, rapid and simple ultra high performance liquid chromatography with electrospray ionization tandem mass spectrometry method was developed to determine seven constituents (umbelliferone, apigenin, triumbelletin, daphnoretin, arctigenin, genkwanin and emodin) in Wikstroemia indica (L.) C. A. Mey. The chromatographic analysis was performed on an ACQUITY UPLC® BEH C18 column (2.1 × 50 mm, 1.7 μm) by gradient elution with a mobile phase of 0.05% formic acid aqueous solution (A) and acetonitrile (B). Multiple reaction monitoring mode with positive and negative electrospray ionization interfaces was used to detect the components. This method was validated in terms of specificity, linearity, accuracy, precision and stability. Excellent linear behavior was observed over the tested concentration ranges, with correlation coefficient values higher than 0.999. The intra-day and inter-day precisions were within 2.0%. The recoveries of the seven analytes were 99.4-101.1%, with relative standard deviations less than 1.2%. Eighteen Wikstroemia indica samples from different origins were classified by hierarchical clustering analysis according to the contents of the seven components. The results demonstrated that the developed method can be used for the simultaneous quantification of the seven components in Wikstroemia indica and could be a helpful tool for the detection and confirmation of the quality of traditional Chinese medicines.
Integrated management of thesis using clustering method
Astuti, Indah Fitri; Cahyadi, Dedy
2017-02-01
A thesis is one of the major requirements for students pursuing their bachelor degree. Finishing the thesis involves a long process including consultation, writing the manuscript, conducting the chosen method, seminar scheduling, searching for references, and appraisal by the board of mentors and examiners. Unfortunately, most students find it hard to match all the lecturers' free time to sit together in a seminar room to examine the thesis. Therefore, the seminar scheduling process should be the top priority to be solved. A manual mechanism for this task no longer fulfills the need. People on campus, including students, staff, and lecturers, demand a system in which all the stakeholders can interact with each other and manage the thesis process without conflicting timetables. A branch of computer science named Management Information Systems (MIS) could be a breakthrough in dealing with thesis management. This research applies a clustering method to distinguish certain categories using mathematical formulas. A system is then developed along with the method to create a well-managed tool providing main facilities such as seminar scheduling, consultation and review, thesis approval, assessment, and a reliable database of theses. The database plays an important role for present and future purposes.
Discrete range clustering using Monte Carlo methods
Chatterji, G. B.; Sridhar, B.
1993-01-01
For automatic obstacle avoidance guidance during rotorcraft low altitude flight, a reliable model of the nearby environment is needed. Such a model may be constructed by applying surface fitting techniques to the dense range map obtained by active sensing using radars. However, for covertness, passive sensing techniques using electro-optic sensors are desirable. As opposed to the dense range map obtained via active sensing, passive sensing algorithms produce reliable range at sparse locations, and therefore, surface fitting techniques to fill the gaps in the range measurement are not directly applicable. Both for automatic guidance and as a display for aiding the pilot, these discrete ranges need to be grouped into sets which correspond to objects in the nearby environment. The focus of this paper is on using Monte Carlo methods for clustering range points into meaningful groups. One of the aims of the paper is to explore whether simulated annealing methods offer significant advantage over the basic Monte Carlo method for this class of problems. We compare three different approaches and present application results of these algorithms to a laboratory image sequence and a helicopter flight sequence.
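As a rough illustration of Monte Carlo clustering with simulated annealing, the following sketch anneals cluster assignments of 1-D "range" points by minimizing within-cluster squared error; the cost function, cooling schedule, and data are all assumptions for illustration, not the paper's formulation:

```python
import math, random

# Hedged sketch: Monte Carlo clustering of 1-D range points via simulated
# annealing of cluster assignments (cost, schedule, and data are invented).
def anneal_cluster(points, k, steps=5000, t0=1.0, seed=0):
    rng = random.Random(seed)
    assign = [rng.randrange(k) for _ in points]

    def cost(a):
        # Total within-cluster squared deviation from each cluster centroid.
        total = 0.0
        for c in range(k):
            members = [p for p, ci in zip(points, a) if ci == c]
            if members:
                mu = sum(members) / len(members)
                total += sum((p - mu) ** 2 for p in members)
        return total

    e = cost(assign)
    for step in range(steps):
        t = t0 * (1.0 - step / steps) + 1e-9     # linear cooling schedule
        i = rng.randrange(len(points))
        old = assign[i]
        assign[i] = rng.randrange(k)             # propose a random relabeling
        e_new = cost(assign)
        # Metropolis rule: accept improvements; accept worse moves with
        # probability exp(-dE / t), which shrinks as the system cools.
        if e_new > e and rng.random() >= math.exp((e - e_new) / t):
            assign[i] = old                      # reject the move
        else:
            e = e_new                            # accept the move
    return assign

# Two well-separated groups of range points should land in distinct clusters.
points = [1.0, 1.1, 0.9, 10.0, 10.2, 9.8]
labels = anneal_cluster(points, k=2)
```

Setting the initial temperature near zero degenerates this into the basic greedy Monte Carlo method, which is essentially the comparison the paper explores.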
Hierarchical Direct Time Integration Method and Adaptive Procedure for Dynamic Analysis
Institute of Scientific and Technical Information of China (English)
(no author listed)
2000-01-01
A new hierarchical direct time integration method for structural dynamic analysis is developed using Taylor series expansions in each time step. Very accurate results can be obtained by increasing the order of the Taylor series. Furthermore, the local error can be estimated by simply comparing the solutions obtained by the proposed method with higher-order solutions. This local estimate is then used to develop an adaptive order-control technique. Numerical examples are given to illustrate the performance of the present method and its adaptive procedure.
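The order-comparison idea can be sketched on a scalar test equation: for x' = -x every derivative is available in closed form, so a truncated Taylor step of any order is one line, and the gap between a low-order and a higher-order solution serves as a cheap error estimate. This is an assumed illustrative setup, not the paper's structural dynamics scheme:

```python
import math

# Hedged sketch: Taylor-series time stepping for x' = -x, where the n-th
# derivative is (-1)^n x, so an order-p step multiplies x by a truncated
# exponential series. Illustrative only; not the paper's element scheme.
def taylor_step(x, h, p):
    return x * sum((-h) ** n / math.factorial(n) for n in range(p + 1))

def integrate(x0, h, steps, p):
    x = x0
    for _ in range(steps):
        x = taylor_step(x, h, p)
    return x

x_lo = integrate(1.0, 0.1, 10, p=2)   # low-order solution at t = 1
x_hi = integrate(1.0, 0.1, 10, p=6)   # higher-order comparison solution
err_est = abs(x_hi - x_lo)            # cheap local/global error indicator
exact = math.exp(-1.0)
```

Here err_est tracks the true error of the low-order solution closely, which is exactly the property an adaptive order-control technique exploits.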
Comparative Study of K-means and Robust Clustering
Directory of Open Access Journals (Sweden)
Shashi Sharma
2013-09-01
Full Text Available Data mining is the process of discovering patterns in large data sets using methods at the intersection of artificial intelligence, machine learning, statistics, and database systems. Clustering, a major area within data mining, groups objects of the same type together. Clustering methods fall into different categories: partitional clustering and hierarchical clustering. In this paper we study two types of clustering. The first is k-means, a partitional method that generates a specified number of disjoint, flat (non-hierarchical) clusters. The second is robust clustering, a hierarchical method that uses the Jaccard coefficient instead of distance measures to find the similarity between data items or documents when forming clusters. We compare k-means clustering with robust clustering to determine which is better for categorical data.
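A minimal sketch of the Jaccard coefficient on categorical feature sets, the similarity measure that the robust clustering uses in place of geometric distance (the toy documents are invented for illustration):

```python
# Hedged sketch: Jaccard coefficient between categorical feature sets
# (toy documents invented for illustration).
def jaccard(a, b):
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 1.0

doc1 = {"red", "round", "small"}
doc2 = {"red", "round", "large"}
doc3 = {"blue", "square"}
```

Here jaccard(doc1, doc2) is 0.5 (two shared attributes out of four distinct ones) and jaccard(doc1, doc3) is 0.0, so similarity reflects shared categories rather than any geometric distance, which is why it suits categorical data.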
Directory of Open Access Journals (Sweden)
Jin-Zhong Wu
2013-01-01
Full Text Available A simple and reliable method of high-performance liquid chromatography with photodiode array detection (HPLC-DAD) was developed to evaluate the quality of Receptaculum Nelumbinis (dried receptacle of Nelumbo nucifera) through establishing a chromatographic fingerprint and simultaneous determination of five flavonol glycosides, including hyperoside, isoquercitrin, quercetin-3-O-β-d-glucuronide, isorhamnetin-3-O-β-d-galactoside and syringetin-3-O-β-d-glucoside. In quantitative analysis, the five components showed good regression (R > 0.9998) within linear ranges, and their recoveries were in the range of 98.31%–100.32%. In the chromatographic fingerprint, twelve peaks were selected as the characteristic peaks to assess the similarities of different samples collected from different origins in China according to the State Food and Drug Administration (SFDA) requirements. Furthermore, hierarchical cluster analysis (HCA) was also applied to evaluate the variation of chemical components among different sources of Receptaculum Nelumbinis in China. This study indicated that the combination of quantitative and chromatographic fingerprint analysis can be readily utilized as a quality control method for Receptaculum Nelumbinis and its related traditional Chinese medicinal preparations.
Yao, Xin; Zhou, Guisheng; Tang, Yuping; Guo, Sheng; Qian, Dawei; Duan, Jin-Ao
2015-02-01
Ginkgo biloba leaf extract has been widely used in dietary supplements and more recently in some foods and beverages. In addition to the well-known flavonol glycosides and terpene lactones, G. biloba leaves are also rich in nucleobases and nucleosides. To determine the content of nucleobases and nucleosides in G. biloba leaves at trace levels, a reliable method has been established using hydrophilic interaction ultra performance liquid chromatography coupled with triple-quadrupole tandem mass spectrometry (HILIC-UPLC-TQ-MS/MS) working in multiple reaction monitoring mode. Eleven nucleobases and nucleosides were simultaneously determined within seven minutes. The proposed method was fully validated in terms of linearity, sensitivity, repeatability and recovery. Furthermore, hierarchical clustering analysis (HCA) was performed to evaluate and classify the samples according to the contents of the eleven chemical constituents. The established approach could be helpful for evaluating the potential value of G. biloba leaves as dietary supplements and for their quality control, and might also be utilized for the investigation of other medicinal herbs containing nucleobases and nucleosides.
Directory of Open Access Journals (Sweden)
Omholt Stig W
2011-06-01
Full Text Available Abstract Background Deterministic dynamic models of complex biological systems contain a large number of parameters and state variables, related through nonlinear differential equations with various types of feedback. A metamodel of such a dynamic model is a statistical approximation model that maps variation in parameters and initial conditions (inputs) to variation in features of the trajectories of the state variables (outputs) throughout the entire biologically relevant input space. A sufficiently accurate mapping can be exploited both instrumentally and epistemically. Multivariate regression methodology is a commonly used approach for emulating dynamic models. However, when the input-output relations are highly nonlinear or non-monotone, a standard linear regression approach is prone to give suboptimal results. We therefore hypothesised that a more accurate mapping can be obtained by locally linear or locally polynomial regression. We present here a new method for local regression modelling, Hierarchical Cluster-based PLS regression (HC-PLSR), where fuzzy C-means clustering is used to separate the data set into parts according to the structure of the response surface. We compare the metamodelling performance of HC-PLSR with polynomial partial least squares regression (PLSR) and ordinary least squares (OLS) regression on various systems: six different gene regulatory network models with various types of feedback, a deterministic mathematical model of the mammalian circadian clock and a model of the mouse ventricular myocyte function. Results Our results indicate that multivariate regression is well suited for emulating dynamic models in systems biology. The hierarchical approach turned out to be superior to both polynomial PLSR and OLS regression in all three test cases. The advantage, in terms of explained variance and prediction accuracy, was largest in systems with highly nonlinear functional relationships and in systems with positive feedback
Institute of Scientific and Technical Information of China (English)
苏腾飞; 孟俊敏; 张晰
2013-01-01
Image segmentation is a crucial stage in SAR oil spill detection. However, common image segmentation algorithms can hardly achieve satisfactory results because of speckle noise in SAR images, which seriously affects the accuracy of oil spill detection. For this reason, an image segmentation algorithm based on Hierarchical Agglomerative Clustering (HAC) is developed for oil spill SAR images. The method takes advantage of multi-resolution segmentation to effectively maintain the shape of oil spill patches and to reduce the formation of small fragmented patches. Using Envisat ASAR images of the Gulf of Mexico obtained in 2010, a SAR oil spill image segmentation experiment was conducted, comparing the algorithm with Canny edge detection, OTSU thresholding, FCM segmentation, and level-set segmentation. The results show that HAC effectively reduces the production of small fragmented patches, which helps improve the accuracy of SAR oil spill detection.
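For reference, agglomerative clustering in its simplest form is repeated merging of the closest pair of clusters. This toy single-linkage version on scalar values only illustrates the general HAC idea, not the paper's multi-scale SAR segmentation:

```python
# Hedged sketch: naive single-linkage agglomerative clustering on scalar
# values. Toy stand-in for the general HAC idea only; the paper's
# multi-scale SAR segmentation is far richer than this.
def agglomerate(values, n_clusters):
    clusters = [[v] for v in values]
    while len(clusters) > n_clusters:
        best = None
        # Find the pair of clusters with the smallest single-linkage distance.
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                d = min(abs(a - b) for a in clusters[i] for b in clusters[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        _, i, j = best
        clusters[i] = clusters[i] + clusters[j]  # merge the closest pair
        del clusters[j]
    return clusters

# Toy "pixel intensities": a dark group and a bright group.
vals = [0.1, 0.15, 0.12, 0.8, 0.85]
out = agglomerate(vals, 2)
```

Because merging is bottom-up, stopping the loop at different cluster counts yields a hierarchy of segmentations at different scales, which is the property multi-resolution approaches exploit.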
Michel, Pierre; Baumstarck, Karine; Boyer, Laurent; Fernandez, Oscar; Flachenecker, Peter; Pelletier, Jean; Loundou, Anderson; Ghattas, Badih; Auquier, Pascal
2017-01-01
To enhance the use of quality of life (QoL) measures in clinical practice, it is pertinent to help clinicians interpret QoL scores. The aim of this study was to define clusters of QoL levels from a specific questionnaire (MusiQoL) for multiple sclerosis (MS) patients using a new method of interpretable clustering based on unsupervised binary trees and to test the validity regarding clinical and functional outcomes. In this international, multicenter, cross-sectional study, patients with MS were classified using a hierarchical top-down method of Clustering using Unsupervised Binary Trees. The clustering tree was built using the 9 dimension scores of the MusiQoL in 2 stages, growing and tree reduction (pruning and joining). A 3-group structure was considered, as follows: "high," "moderate," and "low" QoL levels. Clinical and QoL data were compared between the 3 clusters. A total of 1361 patients were analyzed: 87 were classified with "low," 1173 with "moderate," and 101 with "high" QoL levels. The clustering showed satisfactory properties, including repeatability (using bootstrap) and discriminancy (using factor analysis). The 3 clusters consistently differentiated patients based on sociodemographic and clinical characteristics, and the QoL scores were assessed using a generic questionnaire, ensuring the clinical validity of the clustering. The study suggests that Clustering using Unsupervised Binary Trees is an original, innovative, and relevant classification method to define clusters of QoL levels in MS patients.
Directory of Open Access Journals (Sweden)
Issam SAHMOUDI
2013-12-01
Full Text Available Document clustering is a branch of a larger area of scientific study known as data mining: an unsupervised classification technique used to find structure in a collection of unlabeled data. With full-text representation, the useful information in documents can be accompanied by a large amount of noise words, which negatively affects the result of the clustering process. It is therefore necessary to eliminate the noise words and keep just the useful information in order to enhance the quality of the clustering results. This problem occurs, to varying degrees, in any language, such as English, European languages, Hindi, Chinese, and Arabic. To overcome this problem, in this paper we propose a new and efficient keyphrase extraction method based on the suffix tree data structure (KpST); the extracted keyphrases are then used in the clustering process instead of the full-text representation. The proposed keyphrase extraction method is language independent and may therefore be applied to any language. In this investigation, we are interested in the Arabic language, one of the most complex languages. To evaluate our method, we conduct an experimental study on Arabic documents using the most popular clustering approach of hierarchical algorithms: the agglomerative hierarchical algorithm with seven linkage techniques and a variety of distance functions and similarity measures to perform the Arabic document clustering task. The obtained results show that our keyphrase extraction method increases the quality of the clustering results. We also propose to study the effect of stemming on the test dataset by clustering it with the same document clustering techniques and similarity/distance measures.
Land cover classification of remotely sensed image with hierarchical iterative method
Institute of Scientific and Technical Information of China (English)
LI Peijun; HUANG Yingduan
2005-01-01
Based on the analysis of the single-stage classification results obtained by the multitemporal SPOT 5 and Landsat 7 ETM + multispectral images separately and the derived variogram texture, the best data combinations for each land cover class are selected, and the hierarchical iterative classification is then applied for land cover mapping. The proposed classification method combines the multitemporal images of different resolutions with the image texture, which can greatly improve the classification accuracy. The method and strategies proposed in the study can be easily transferred to other similar applications.
A Latent Variable Clustering Method for Wireless Sensor Networks
DEFF Research Database (Denmark)
Vasilev, Vladislav; Iliev, Georgi; Poulkov, Vladimir
2016-01-01
In this paper we derive a clustering method based on the Hidden Conditional Random Field (HCRF) model in order to maximize the performance of a wireless sensor network. Our novel approach to clustering in this paper is in the application of an index invariant graph that we defined in a previous work...... obtained by running simulations of a time-dynamic sensor network. The proposed method outperforms existing clustering methods, such as the Girvan–Newman algorithm, the Karger algorithm and the Spectral Clustering method, in terms of packet acceptance probability and delay....
Nikolic, Katarina; Filipic, Slavica; Smoliński, Adam; Kaliszan, Roman; Agbaba, Danica
2013-01-01
PURPOSE. Rate of brain penetration (logPS), brain/plasma equilibration rate (logPS-brain), and extent of blood-brain barrier permeation (logBB) of 29 α-adrenergic and imidazoline-receptor ligands were examined in a Quantitative Structure-Property Relationship (QSPR) study. METHODS. Experimentally determined chromatographic retention data (logKw and slope (S) at pH 4.4, pH 7.4, and pH 9.1) and capillary electrophoresis migration parameters (μeff at pH 4.4, pH 7.4, and pH 9.1), together with calculated molecular descriptors, were used as independent variables in the QSPR study by means of partial least squares (PLS) methodology. RESULTS. The predictive potential of the formed QSPR models, QSPR(logPS), QSPR(logPS-brain), and QSPR(logBB), was confirmed by cross- and external validation. Hydrophilicity (Hy) and H-indices (H7m) were selected as significant parameters negatively correlated with both logPS and logPS-brain, while topological polar surface area (TPSA(NO)) was chosen as a molecular descriptor negatively correlated with both logPS and logBB. Principal component analysis (PCA) and hierarchical clustering analysis (HCA) were applied to cluster the examined drugs based on their chromatographic, electrophoretic and molecular properties. Significant positive correlations were obtained between the slope (S) at pH 7.4 and logBB in the A/B cluster and between the logKw at pH 9.1 and logPS in the C/D cluster. CONCLUSIONS. Results of the QSPR, clustering and correlation studies could be used as a novel tool for the evaluation of blood-brain barrier permeation of related α-adrenergic/imidazoline receptor ligands.
Institute of Scientific and Technical Information of China (English)
祝永志; 张丹丹; 曹宝香; 禹继国
2012-01-01
For multi-core SMP cluster systems, this paper discusses hybrid parallel programming techniques based on MPI and OpenMP. We propose a new hybrid parallel programming method that is aware of the architecture hierarchy of SMP cluster systems. We design a hierarchical parallel algorithm for the N-body problem and compare its performance with traditional hybrid parallel algorithms on the Dawning 5000A cluster. The results indicate that our hierarchical hybrid parallel algorithm has better scalability and speedup than the others.
A Survey of Hierarchical Classification Methods
Institute of Scientific and Technical Information of China (English)
陆彦婷; 陆建峰; 杨静宇
2013-01-01
Hierarchical classification (HC), which decomposes the problem and organizes the classifiers according to the category hierarchy, is an efficient solution for multi-class classification problems. Depending on whether an explicit hierarchical relationship among categories is required, HC methods can be divided into two types. In this paper, HC methods that do not require an explicit hierarchical relationship among categories are reviewed systematically. Firstly, the basic framework of this type of method is outlined. Then, the research progress of several key techniques is elaborated and analyzed. Next, related research at home and abroad is described in detail from both algorithm and application perspectives. Finally, the existing methods are summarized and several future research directions are pointed out.
A Novel Data Hierarchical Fusion Method for Gas Turbine Engine Performance Fault Diagnosis
Directory of Open Access Journals (Sweden)
Feng Lu
2016-10-01
Full Text Available Gas path fault diagnosis involves the effective utilization of condition-based sensor signals along the engine gas path to accurately identify engine performance failures. The rapid development of information processing technology has led to the use of multiple-source information fusion for fault diagnostics. Much effort has been devoted to data-based fusion methods, such as neural network fusion, while little research has focused on fusion architecture or fusion across different kinds of methods. In this paper, a data hierarchical fusion using improved weighted Dempster–Shafer evidence theory (WDS) is proposed, and the integration of data-based and model-based methods is presented for engine gas-path fault diagnosis. To simplify the learning machine topology, a recursive reduced kernel based extreme learning machine (RR-KELM) is developed to produce the fault probability, which is considered as the data-based evidence. Meanwhile, the model-based evidence is achieved using a particle filter-fuzzy logic algorithm (PF-FL) by engine health estimation and component fault location at the feature level. The outputs of the two evidences are integrated using WDS evidence theory at the decision level to reach a final recognition of the gas-path fault pattern. The characteristics and advantages of the two evidences are analyzed and used as guidelines for the data hierarchical fusion framework. Our goal is for the proposed methodology to provide much better performance of gas-path fault diagnosis than relying solely on a data-based or model-based method. The hierarchical fusion framework is evaluated in terms of fault diagnosis accuracy and robustness through a case study involving a fault-mode dataset of a turbofan engine generated by a general gas turbine simulation. These applications confirm the effectiveness and usefulness of the proposed approach.
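The decision-level fusion step rests on Dempster's rule of combination. The sketch below implements the generic unweighted rule for two evidence sources over hypothetical fault modes; the weighting scheme of the paper's WDS variant and all names here are assumptions for illustration:

```python
# Hedged sketch: Dempster's rule of combination for two bodies of evidence
# over invented fault hypotheses. The paper's weighted (WDS) variant adds
# evidence weighting that is not reproduced here.
def combine(m1, m2):
    joint, conflict = {}, 0.0
    for f1, v1 in m1.items():
        for f2, v2 in m2.items():
            inter = f1 & f2
            if inter:
                joint[inter] = joint.get(inter, 0.0) + v1 * v2
            else:
                conflict += v1 * v2              # mass on disjoint hypotheses
    norm = 1.0 - conflict                        # renormalize surviving mass
    return {f: v / norm for f, v in joint.items()}

A, B = frozenset({"faultA"}), frozenset({"faultB"})  # hypothetical fault modes
m_data = {A: 0.7, B: 0.3}    # e.g. evidence from a learned classifier
m_model = {A: 0.6, B: 0.4}   # e.g. evidence from a model-based estimator
fused = combine(m_data, m_model)
```

Agreement between the two sources sharpens the belief in faultA beyond either source alone (0.42/0.54 ≈ 0.78 versus 0.7 and 0.6), which is the intended effect of fusing data-based and model-based evidence.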
Fuzzy Clustering Methods and their Application to Fuzzy Modeling
DEFF Research Database (Denmark)
Kroszynski, Uri; Zhou, Jianjun
1999-01-01
Fuzzy modeling techniques based upon the analysis of measured input/output data sets result in a set of rules that allow prediction of system outputs from given inputs. Fuzzy clustering methods for system modeling and identification result in relatively small rule-bases, allowing fast, yet accurate...... prediction of outputs. This article presents an overview of some of the most popular clustering methods, namely Fuzzy C-Means (FCM) and its generalizations to Fuzzy C-Lines and Elliptotypes. The algorithms for computing cluster centers and principal directions from a training data set are described...... A method to obtain an optimized number of clusters is outlined. Based upon the clusters' characteristics, a behavioural model is formulated in terms of a rule-base and an inference engine. The article reviews several variants of the model formulation. Some limitations of the methods are listed...
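A compact sketch of the standard Fuzzy C-Means iteration on 1-D data may help make the alternating membership/center updates concrete; this is the textbook algorithm with fuzzifier m = 2 and a deterministic spread initialization (all assumptions for illustration), and the C-Lines and Elliptotype generalizations described in the article are not reproduced:

```python
# Hedged sketch of the standard Fuzzy C-Means algorithm on 1-D data with
# fuzzifier m = 2 (generic algorithm; the article's C-Lines and Elliptotype
# generalizations are not reproduced here).
def fcm(xs, c, iters=50, m=2.0):
    srt = sorted(xs)
    # Deterministic init: spread the initial centers across the data range.
    centers = [srt[i * (len(srt) - 1) // (c - 1)] for i in range(c)]
    u = []
    for _ in range(iters):
        # Membership update: u_ik proportional to d_ik^(-2/(m-1)).
        u = []
        for x in xs:
            w = [max(abs(x - v), 1e-12) ** (-2.0 / (m - 1.0)) for v in centers]
            s = sum(w)
            u.append([wi / s for wi in w])
        # Center update: mean weighted by memberships raised to the power m.
        centers = [
            sum(u[i][k] ** m * xs[i] for i in range(len(xs)))
            / sum(u[i][k] ** m for i in range(len(xs)))
            for k in range(c)
        ]
    return centers, u

xs = [1.0, 1.2, 0.8, 8.0, 8.2, 7.8]
centers, u = fcm(xs, c=2)
```

Unlike hard k-means, every point keeps a graded membership in every cluster, which is what makes the resulting rule-bases smooth interpolators between local models.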
A New Feature Selection Method for Text Clustering
Institute of Scientific and Technical Information of China (English)
XU Junling; XU Baowen; ZHANG Weifeng; CUI Zifeng; ZHANG Wei
2007-01-01
Feature selection methods have been successfully applied to text categorization but seldom to text clustering, due to the unavailability of class label information. In this paper, a new feature selection method for text clustering based on expectation maximization and cluster validity is proposed. It uses a supervised feature selection method on the intermediate clustering result generated during iterative clustering to perform feature selection for text clustering; meanwhile, the Davies–Bouldin index is used to evaluate the intermediate feature subsets indirectly. Feature subsets are then selected according to the curve of the Davies–Bouldin index. Experiments are carried out on several popular datasets and the results show the advantages of the proposed method.
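The Davies–Bouldin index itself is easy to state: for each cluster, take the worst ratio of summed spreads to center separation, then average over clusters (lower is better). A 1-D sketch of the generic formula, with invented toy data:

```python
# Hedged sketch: Davies-Bouldin index on 1-D clusters (lower is better).
# Generic textbook formula with mean absolute spread; toy data invented.
def davies_bouldin(clusters):
    mus = [sum(c) / len(c) for c in clusters]
    spreads = [sum(abs(x - mu) for x in c) / len(c)
               for c, mu in zip(clusters, mus)]
    k = len(clusters)
    total = 0.0
    for i in range(k):
        # Worst-case similarity of cluster i to any other cluster.
        total += max((spreads[i] + spreads[j]) / abs(mus[i] - mus[j])
                     for j in range(k) if j != i)
    return total / k

tight = [[1.0, 1.1, 0.9], [9.0, 9.1, 8.9]]   # compact, well separated
loose = [[1.0, 4.0], [6.0, 9.0]]             # wide spreads, close centers
```

Because the index needs no class labels, it can score the intermediate clusterings produced during iteration, which is how the paper uses it to steer feature-subset selection.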
Functional annotation of hierarchical modularity.
Directory of Open Access Journals (Sweden)
Kanchana Padmanabhan
Full Text Available In biological networks of molecular interactions in a cell, network motifs that are biologically relevant are also functionally coherent, or form functional modules. These functionally coherent modules combine in a hierarchical manner into larger, less cohesive subsystems, thus revealing one of the essential design principles of system-level cellular organization and function: hierarchical modularity. Arguably, hierarchical modularity has not been explicitly taken into consideration by most, if not all, functional annotation systems. As a result, the existing methods often fail to assign a statistically significant functional coherence score to biologically relevant molecular machines. We developed a methodology for hierarchical functional annotation. Given the hierarchical taxonomy of functional concepts (e.g., Gene Ontology) and the association of individual genes or proteins with these concepts (e.g., GO terms), our method will assign a Hierarchical Modularity Score (HMS) to each node in the hierarchy of functional modules; the HMS score and its p-value measure the functional coherence of each module in the hierarchy. While existing methods annotate each module with a set of "enriched" functional terms in a bag of genes, our complementary method provides hierarchical functional annotation of the modules and their hierarchically organized components. A hierarchical organization of functional modules often comes as a by-product of cluster analysis of gene expression data or protein interaction data. Otherwise, our method will automatically build such a hierarchy by directly incorporating the functional taxonomy information into the hierarchy search process and by allowing multi-functional genes to be part of more than one component in the hierarchy. In addition, its underlying HMS scoring metric ensures that functional specificity of the terms across different levels of the hierarchical taxonomy is properly treated. We have evaluated our
Selections of data preprocessing methods and similarity metrics for gene cluster analysis
Institute of Scientific and Technical Information of China (English)
YANG Chunmei; WAN Baikun; GAO Xiaofeng
2006-01-01
Clustering is one of the major exploratory techniques for gene expression data analysis. Only with suitable similarity metrics, and when datasets are properly preprocessed, can results of high quality be obtained in cluster analysis. In this study, gene expression datasets with external evaluation criteria were preprocessed by normalization by line, normalization by column or base-2 logarithm transformation, and were subsequently clustered by hierarchical clustering, k-means clustering and self-organizing maps (SOMs) with the Pearson correlation coefficient or Euclidean distance as the similarity metric. Finally, the quality of the clusters was evaluated by the adjusted Rand index. The results illustrate that k-means clustering and SOMs have distinct advantages over hierarchical clustering in gene clustering, and SOMs are a bit better than k-means when randomly initialized. It is also shown that hierarchical clustering prefers the Pearson correlation coefficient as similarity metric and datasets normalized by line, while k-means clustering and SOMs produce better clusters with Euclidean distance and logarithm-transformed datasets. These results provide a valuable reference for the implementation of gene expression cluster analysis.
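The preprocessing steps and the two similarity metrics compared in the study can be sketched generically (base-2 log transform, per-row standardization, Pearson correlation versus Euclidean distance); the toy expression profiles below are invented to show why the two metrics can disagree:

```python
import math

# Hedged sketch of the preprocessing and metrics named above: base-2 log
# transform, per-row standardization, Pearson correlation, Euclidean
# distance. Generic formulas; the toy expression profiles are invented.
def log2_transform(row):
    return [math.log2(v) for v in row]

def normalize(row):
    # Standardize one gene's profile to zero mean and unit variance.
    mu = sum(row) / len(row)
    sd = math.sqrt(sum((v - mu) ** 2 for v in row) / len(row))
    return [(v - mu) / sd for v in row]

def pearson(a, b):
    za, zb = normalize(a), normalize(b)
    return sum(x * y for x, y in zip(za, zb)) / len(a)

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

g1 = [1.0, 2.0, 3.0, 4.0]
g2 = [2.0, 4.0, 6.0, 8.0]   # same shape as g1 at twice the scale
```

g2 is g1 scaled by two: the Pearson similarity is a perfect 1.0 while the Euclidean distance is large, which is one reason the two metrics pair best with different preprocessing choices, as the study reports.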
Institute of Scientific and Technical Information of China (English)
(no author listed)
2002-01-01
This paper focuses on the methods and process of spatial aggregation based on the semantic and geometric characteristics of spatial objects and the relations among the objects, with the help of a spatial data structure (Formal Data Structure), local constrained Delaunay triangulations and a semantic hierarchy. The adjacency relation among connected and unconnected objects has been studied using the constrained triangle as the elementary processing unit in aggregation operations. A hierarchical semantic analytical matrix is given for analyzing the similarity between object types and between objects. Several different cases of aggregation are presented in this paper.
Lv, Wei; Wei, Bo; Xu, Lingling; Zhao, Yan; Gao, Hong; Liu, Jia
2012-10-01
In this work, hierarchical ZnO flowers were synthesized via a sucrose-assisted urea hydrothermal method. Thermogravimetric/differential thermal analysis (TGA-DTA) and Fourier transform infrared spectra (FTIR) showed that sucrose acted as a complexing agent in the synthesis process and assisted combustion during annealing. Photocatalytic activity was evaluated using the degradation of the organic dye methyl orange. The ZnO flowers prepared with added sucrose showed improved activity, mainly attributed to better crystallinity, as confirmed by X-ray photoelectron spectroscopy (XPS) analysis. The effect of the amount of sucrose on photocatalytic activity was also studied.
Directory of Open Access Journals (Sweden)
Lee Yun-Shien
2008-03-01
Full Text Available Abstract Background The hierarchical clustering tree (HCT) with a dendrogram [1] and the singular value decomposition (SVD) with a dimension-reduced representative map [2] are popular methods for two-way sorting of the gene-by-array matrix map employed in gene expression profiling. While HCT dendrograms tend to optimize local coherent clustering patterns, SVD leading eigenvectors usually identify better global grouping and transitional structures. Results This study proposes a flipping mechanism for a conventional agglomerative HCT using a rank-two ellipse (R2E) seriation, an improved SVD algorithm for sorting purposes by Chen [3], as an external reference. While HCTs always produce permutations with good local behaviour, the rank-two ellipse seriation gives the best global grouping patterns and smooth transitional trends. The resulting algorithm automatically integrates the desirable properties of each method so that users have access to a clustering and visualization environment for gene expression profiles that preserves coherent local clusters and identifies global grouping trends. Conclusion We demonstrate, through four examples, that the proposed method not only possesses better numerical and statistical properties, it also provides more meaningful biomedical insights than other sorting algorithms. We suggest that sorted proximity matrices for genes and arrays, in addition to the gene-by-array expression matrix, can greatly aid in the search for comprehensive understanding of gene expression structures. Software for the proposed methods can be obtained at http://gap.stat.sinica.edu.tw/Software/GAP
Fuzzy Clustering Method for Web User Based on Pages Classification
Institute of Scientific and Technical Information of China (English)
ZHAN Li-qiang; LIU Da-xin
2004-01-01
A new method for fuzzy clustering of Web users based on an analysis of user interest characteristics is proposed in this article. The method first defines fuzzy page categories according to the links on the index page of the site, then computes the cross-page fuzzy degree by aggregating Web log data. After that, using the fuzzy comprehensive evaluation method, it constructs user interest vectors from page viewing times and hit frequencies, and derives the fuzzy similarity matrix for the Web users from these interest vectors. Finally, it obtains the clustering result through the fuzzy clustering method. The experimental results show the effectiveness of the method.
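One standard concrete realization of such a pipeline (the classical fuzzy-equivalence approach, not necessarily the authors' exact formulas) builds a fuzzy similarity matrix from interest vectors, closes it under max-min composition, and partitions users with a lambda-cut:

```python
def similarity(u, v):
    """Min/max similarity between two interest vectors (a common fuzzy choice)."""
    num = sum(min(a, b) for a, b in zip(u, v))
    den = sum(max(a, b) for a, b in zip(u, v))
    return num / den if den else 1.0

def transitive_closure(R):
    """Close a fuzzy similarity matrix under max-min composition so that
    the lambda-cut yields a proper partition."""
    n = len(R)
    while True:
        S = [[max(min(R[i][k], R[k][j]) for k in range(n)) for j in range(n)]
             for i in range(n)]
        S = [[max(S[i][j], R[i][j]) for j in range(n)] for i in range(n)]
        if S == R:
            return R
        R = S

def lambda_cut(R, lam):
    """Group users whose closed similarity is at least the threshold lam."""
    n = len(R)
    clusters, assigned = [], set()
    for i in range(n):
        if i in assigned:
            continue
        group = [j for j in range(n) if R[i][j] >= lam]
        assigned.update(group)
        clusters.append(sorted(group))
    return clusters
```

The transitive closure is what guarantees that "similar to" becomes an equivalence-like relation, so every lambda-cut produces non-overlapping user clusters.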
CCM: A Text Classification Method by Clustering
DEFF Research Database (Denmark)
Nizamani, Sarwat; Memon, Nasrullah; Wiil, Uffe Kock
2011-01-01
In this paper, a new Cluster based Classification Model (CCM) for suspicious email detection and other text classification tasks is presented. Comparative experiments of the proposed model against traditional classification models and the boosting algorithm are also discussed. Experimental results show that the CCM outperforms traditional classification models as well as the boosting algorithm for the task of suspicious email detection on a terrorism domain email dataset and topic categorization on the Reuters-21578 and 20 Newsgroups datasets. The overall finding is that applying a cluster based...
A graph clustering method for community detection in complex networks
Zhou, HongFang; Li, Jin; Li, JunHuai; Zhang, FaCun; Cui, YingAn
2017-03-01
Information mining from complex networks by identifying communities is an important problem in a number of research fields, including the social sciences, biology, physics and medicine. First, two concepts are introduced, Attracting Degree and Recommending Degree. Second, a graph clustering method, referred to as AR-Cluster, is presented for detecting community structures in complex networks. Third, a novel collaborative similarity measure is adopted to calculate node similarities. In the AR-Cluster method, vertices are grouped together based on calculated similarity under a K-Medoids framework. Extensive experimental results on two real datasets show the effectiveness of AR-Cluster.
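The K-Medoids framework mentioned above can be sketched as a naive swap-based (PAM-style) loop over an arbitrary distance function; in AR-Cluster that distance would come from the collaborative similarity measure, which is not reproduced here.

```python
import itertools

def k_medoids(dist, n, k):
    """Naive swap-based K-Medoids over items 0..n-1 with pairwise distance
    dist(i, j). Starts from the first k items and greedily swaps a medoid
    for a non-medoid while the total assignment cost keeps dropping."""
    medoids = list(range(k))

    def cost(ms):
        return sum(min(dist(i, m) for m in ms) for i in range(n))

    best = cost(medoids)
    improved = True
    while improved:
        improved = False
        for slot, cand in itertools.product(range(k), range(n)):
            if cand in medoids:
                continue
            trial = medoids[:]
            trial[slot] = cand
            c = cost(trial)
            if c < best:
                medoids, best, improved = trial, c, True
    labels = [min(range(k), key=lambda j: dist(i, medoids[j])) for i in range(n)]
    return medoids, labels
```

Because medoids are actual data items rather than averaged centroids, the framework only needs pairwise distances, which is why it suits graph-derived similarities between network vertices.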
Li, Xin; Yu, Jiaguo; Jaroniec, Mietek
2016-05-01
As a green and sustainable technology, semiconductor-based heterogeneous photocatalysis has received much attention in the last few decades because it has potential to solve both energy and environmental problems. To achieve efficient photocatalysts, various hierarchical semiconductors have been designed and fabricated at the micro/nanometer scale in recent years. This review presents a critical appraisal of fabrication methods, growth mechanisms and applications of advanced hierarchical photocatalysts. Especially, the different synthesis strategies such as two-step templating, in situ template-sacrificial dissolution, self-templating method, in situ template-free assembly, chemically induced self-transformation and post-synthesis treatment are highlighted. Finally, some important applications including photocatalytic degradation of pollutants, photocatalytic H2 production and photocatalytic CO2 reduction are reviewed. A thorough assessment of the progress made in photocatalysis may open new opportunities in designing highly effective hierarchical photocatalysts for advanced applications ranging from thermal catalysis, separation and purification processes to solar cells.
Directory of Open Access Journals (Sweden)
Sergio Briguglio
2003-01-01
Full Text Available A performance-prediction model is presented, which describes different hierarchical workload decomposition strategies for particle in cell (PIC codes on Clusters of Symmetric MultiProcessors. The devised workload decomposition is hierarchically structured: a higher-level decomposition among the computational nodes, and a lower-level one among the processors of each computational node. Several decomposition strategies are evaluated by means of the prediction model, with respect to the memory occupancy, the parallelization efficiency and the required programming effort. Such strategies have been implemented by integrating the high-level languages High Performance Fortran (at the inter-node stage and OpenMP (at the intra-node one. The details of these implementations are presented, and the experimental values of parallelization efficiency are compared with the predicted results.
Open-Source Sequence Clustering Methods Improve the State Of the Art.
Kopylova, Evguenia; Navas-Molina, Jose A; Mercier, Céline; Xu, Zhenjiang Zech; Mahé, Frédéric; He, Yan; Zhou, Hong-Wei; Rognes, Torbjørn; Caporaso, J Gregory; Knight, Rob
2016-01-01
Sequence clustering is a common early step in amplicon-based microbial community analysis, when raw sequencing reads are clustered into operational taxonomic units (OTUs) to reduce the run time of subsequent analysis steps. Here, we evaluated the performance of recently released state-of-the-art open-source clustering software products, namely, OTUCLUST, Swarm, SUMACLUST, and SortMeRNA, against current principal options (UCLUST and USEARCH) in QIIME, hierarchical clustering methods in mothur, and USEARCH's most recent clustering algorithm, UPARSE. All the latest open-source tools showed promising results, reporting up to 60% fewer spurious OTUs than UCLUST, indicating that the underlying clustering algorithm can vastly reduce the number of these derived OTUs. Furthermore, we observed that stringent quality filtering, such as is done in UPARSE, can cause a significant underestimation of species abundance and diversity, leading to incorrect biological results. Swarm, SUMACLUST, and SortMeRNA have been included in the QIIME 1.9.0 release. IMPORTANCE Massive collections of next-generation sequencing data call for fast, accurate, and easily accessible bioinformatics algorithms to perform sequence clustering. A comprehensive benchmark is presented, including open-source tools and the popular USEARCH suite. Simulated, mock, and environmental communities were used to analyze sensitivity, selectivity, species diversity (alpha and beta), and taxonomic composition. The results demonstrate that recent clustering algorithms can significantly improve accuracy and preserve estimated diversity without the application of aggressive filtering. Moreover, these tools are all open source, apply multiple levels of multithreading, and scale to the demands of modern next-generation sequencing data, which is essential for the analysis of massive multidisciplinary studies such as the Earth Microbiome Project (EMP) (J. A. Gilbert, J. K. Jansson, and R. Knight, BMC Biol 12:69, 2014, http
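Most of the benchmarked tools are variations on greedy centroid-based clustering: each read joins the first existing centroid it matches above an identity threshold, otherwise it founds a new OTU. A toy sketch follows, using naive per-position identity in place of real alignment; names and the simplification are illustrative, not taken from any of the tools above.

```python
def identity(a, b):
    """Naive per-position identity for equal-length reads (real tools align)."""
    return sum(x == y for x, y in zip(a, b)) / max(len(a), len(b))

def greedy_otu(reads, threshold=0.97):
    """Greedy centroid-based OTU picking: a read joins the first centroid it
    matches above the identity threshold, otherwise it founds a new OTU."""
    centroids, otus = [], []
    for r in reads:
        for k, c in enumerate(centroids):
            if identity(r, c) >= threshold:
                otus[k].append(r)
                break
        else:
            centroids.append(r)
            otus.append([r])
    return centroids, otus
```

Greedy assignment makes the result depend on read order, which is one reason different tools report different OTU counts on the same data.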
A top-down hierarchical spatio-temporal process description method and its data organization
Xie, Jiong; Xue, Cunjin
2009-10-01
Modeling and representing spatio-temporal processes is the key foundation for analyzing geographic phenomena and acquiring high-level spatio-temporal knowledge. Bottom-up spatio-temporal representation methods based on an object-modeling view lack an explicit definition of geographic phenomena and a finer-grained representation of spatio-temporal causal relationships. Building on significant advances in data modeling of spatio-temporal objects and events, and aiming to represent discrete regional dynamic phenomena composed of groups of spatio-temporal objects, a regional spatio-temporal process description method using a top-down hierarchical approach (STP-TDH) is proposed, and a data organization structure based on a relational database is designed and implemented, which lays the data-structure foundation for advanced data utilization and decision-making. A land use application case indicated that process modeling with the top-down approach accords well with human spatio-temporal cognition; its hierarchical representation framework can depict the dynamic evolution of regional phenomena at a finer-grained level and can reduce the complexity of process description.
Directory of Open Access Journals (Sweden)
Wuyong Qian
2016-09-01
Full Text Available Although medical waste usually accounts for a small fraction of urban municipal waste, its proper disposal has been a challenging issue as it often contains infectious, radioactive, or hazardous waste. This article proposes a two-level hierarchical multicriteria decision model to address medical waste disposal method selection (MWDMS, where disposal methods are assessed against different criteria as intuitionistic fuzzy preference relations and criteria weights are furnished as real values. This paper first introduces new operations for a special class of intuitionistic fuzzy values, whose membership and non-membership information is cross ratio based ]0, 1[-values. New score and accuracy functions are defined in order to develop a comparison approach for ]0, 1[-valued intuitionistic fuzzy numbers. A weighted geometric operator is then put forward to aggregate a collection of ]0, 1[-valued intuitionistic fuzzy values. Similar to Saaty’s 1–9 scale, this paper proposes a cross-ratio-based bipolar 0.1–0.9 scale to characterize pairwise comparison results. Subsequently, a two-level hierarchical structure is formulated to handle multicriteria decision problems with intuitionistic preference relations. Finally, the proposed decision framework is applied to MWDMS to illustrate its feasibility and effectiveness.
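For reference, the textbook intuitionistic fuzzy weighted geometric (IFWG) operator and the classical score function look as follows. The paper's cross-ratio-based ]0, 1[-valued operations and its 0.1-0.9 scale differ in detail, so this is only the standard form, shown to make the aggregation step concrete.

```python
def ifwg(values, weights):
    """Standard intuitionistic fuzzy weighted geometric (IFWG) operator.
    values: (mu, nu) membership/non-membership pairs; weights sum to 1."""
    mu, nu_c = 1.0, 1.0
    for (m, n), w in zip(values, weights):
        mu *= m ** w          # geometric mean of memberships
        nu_c *= (1 - n) ** w  # geometric mean of complements of non-memberships
    return mu, 1 - nu_c

def score(mu, nu):
    """Classical score function for ranking aggregated alternatives."""
    return mu - nu
```

In a two-level hierarchy, the operator would first aggregate each disposal method's assessments per criterion and then aggregate across criteria with the criteria weights.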
Wan, Jiangwen; Yu, Yang; Wu, Yinfeng; Feng, Renjian; Yu, Ning
2012-01-01
In light of the problems of low recognition efficiency, high false rates and poor localization accuracy in traditional pipeline security detection technology, this paper proposes a type of hierarchical leak detection and localization method for use in natural gas pipeline monitoring sensor networks. In the signal preprocessing phase, original monitoring signals are dealt with by wavelet transform technology to extract the single mode signals as well as characteristic parameters. In the initial recognition phase, a multi-classifier model based on SVM is constructed and characteristic parameters are sent as input vectors to the multi-classifier for initial recognition. In the final decision phase, an improved evidence combination rule is designed to integrate initial recognition results for final decisions. Furthermore, a weighted average localization algorithm based on time difference of arrival is introduced for determining the leak point's position. Experimental results illustrate that this hierarchical pipeline leak detection and localization method could effectively improve the accuracy of the leak point localization and reduce the undetected rate as well as false alarm rate.
Progeny Clustering: A Method to Identify Biological Phenotypes
Hu, Chenyue W.; Kornblau, Steven M.; Slater, John H.; Qutub, Amina A.
2015-01-01
Estimating the optimal number of clusters is a major challenge in applying cluster analysis to any type of dataset, especially to biomedical datasets, which are high-dimensional and complex. Here, we introduce an improved stability-based method, Progeny Clustering, which is exceptionally efficient in computing, to find the ideal number of clusters. The algorithm employs a novel Progeny Sampling method to reconstruct cluster identity, a co-occurrence probability matrix to assess the clustering stability, and a set of reference datasets to overcome inherent biases in the algorithm and data space. Our method was shown to be successful and robust when applied to two synthetic datasets (datasets of two dimensions and of ten dimensions containing eight dimensions of pure noise), two standard biological datasets (the Iris dataset and the Rat CNS dataset) and two biological datasets (a cell phenotype dataset and an acute myeloid leukemia (AML) reverse phase protein array (RPPA) dataset). Progeny Clustering outperformed some popular clustering evaluation methods in the ten-dimensional synthetic dataset as well as in the cell phenotype dataset, and it was the only method that successfully discovered clinically meaningful patient groupings in the AML RPPA dataset. PMID:26267476
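The stability idea behind Progeny Clustering can be illustrated with a toy version: cluster the data repeatedly under perturbed initializations, accumulate a pairwise co-occurrence probability matrix, and prefer the k whose co-occurrence probabilities are closest to 0 or 1. The real algorithm uses Progeny Sampling and reference datasets, which this sketch omits; everything below is illustrative.

```python
import random
from itertools import combinations

def kmeans1d(data, k, seed):
    """Plain 1-D k-means with randomly sampled initial centers."""
    rng = random.Random(seed)
    centers = rng.sample(data, k)
    for _ in range(50):
        groups = [[] for _ in range(k)]
        for x in data:
            groups[min(range(k), key=lambda j: abs(x - centers[j]))].append(x)
        new = [sum(g) / len(g) if g else centers[j] for j, g in enumerate(groups)]
        if new == centers:
            break
        centers = new
    return [min(range(k), key=lambda j: abs(x - centers[j])) for x in data]

def stability(data, k, runs=10):
    """Co-occurrence-based stability: near 1.0 means every pair of points is
    either (almost) always or never clustered together across runs."""
    n = len(data)
    co = [[0.0] * n for _ in range(n)]
    for s in range(runs):
        labels = kmeans1d(data, k, seed=s)
        for i, j in combinations(range(n), 2):
            if labels[i] == labels[j]:
                co[i][j] += 1 / runs
    pairs = [co[i][j] for i, j in combinations(range(n), 2)]
    return sum(max(p, 1 - p) for p in pairs) / len(pairs)
```

On two well-separated groups, k = 2 reproduces the same partition under every initialization, so its stability score is maximal; an over-specified k splits one group differently from run to run and scores lower.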
Kwak, Wonshik; Hwang, Woonbong
2016-02-01
To facilitate the fabrication of superoleophobic surfaces having hierarchical microcubic/nanowire structures (HMNS), even for low-surface-tension liquids including octane (surface tension = 21.1 mN m-1), and to understand the influence of surface structures on oleophobicity, we developed a convenient method to achieve superoleophobic surfaces on aluminum substrates using chemical acid etching, anodization and fluorination treatment. The liquid repellency of the structured surface was validated through contact-angle and sliding-angle measurements. The etching condition required to ensure high surface roughness was established, and an optimal anodizing condition was determined as a critical parameter in building the superoleophobicity. The microcubic structures formed by acid etching are essential for achieving the hierarchical structure, and the nanowire structures formed by anodization further enhance the superoleophobicity for low-surface-tension liquids. With the morphology optimized by microcubic/nanowire structures and fluorination treatment, a contact angle over 150° and a sliding angle less than 10° were achieved even for octane.
DEFF Research Database (Denmark)
Thomadsen, Tommy
2005-01-01
The thesis investigates models for hierarchical network design and methods used to design such networks. In addition, ring network design is considered, since ring networks commonly appear in the design of hierarchical networks. The thesis introduces hierarchical networks, including a classification scheme of different types of hierarchical networks. This is supplemented by a review of ring network design problems and a presentation of a model allowing for modeling most hierarchical networks. We use methods based on linear programming to design the hierarchical networks; thus, a brief introduction to the various linear programming based methods is included. The thesis is thus suitable as a foundation for study of design of hierarchical networks. The major contribution of the thesis consists of seven papers which are included in the appendix. The papers address hierarchical network design and/or ring network...
Using an Improved Clustering Method to Detect Anomaly Activities
Institute of Scientific and Technical Information of China (English)
LI Han; ZHANG Nan; BAO Lihui
2006-01-01
In this paper, an improved k-means based clustering method (IKCM) is proposed. By refining the initial cluster centers and adjusting the number of clusters through splitting and merging procedures, it avoids convergence to locally optimal solutions and reduces dependency on the preset number of clusters. The IKCM has been implemented and tested. We perform experiments on the KDD-99 data set. Comparison experiments with H-means+ have also been conducted. The results obtained in this study are very encouraging.
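Two of the ingredients, refined initial centers and cluster merging, can be sketched in isolation. The farthest-first heuristic below is a common refinement strategy and the merge rule is a simple gap test; neither is necessarily the exact procedure used in IKCM.

```python
def farthest_first_centers(points, k):
    """Refined initial centers: greedy farthest-first traversal, so centers
    start well spread out instead of randomly placed."""
    centers = [points[0]]
    while len(centers) < k:
        centers.append(max(points, key=lambda p: min(abs(p - c) for c in centers)))
    return centers

def merge_close(centers, min_gap):
    """Merge step: collapse 1-D centers closer than min_gap, reducing the
    dependency on an overestimated cluster count."""
    merged = []
    for c in sorted(centers):
        if merged and c - merged[-1] < min_gap:
            merged[-1] = (merged[-1] + c) / 2
        else:
            merged.append(c)
    return merged
```

Spreading the initial centers makes k-means far less likely to start (and end) with two centers inside the same true cluster, which is the classic cause of locally optimal solutions.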
A hierarchical layout design method based on rubber band potential-energy descending
Directory of Open Access Journals (Sweden)
Ou Cheng Yi
2016-01-01
Full Text Available The strip packing problem is an important sub-problem of the cutting stock problem. Its application domains include sheet metal, ship making, wood, furniture, garment, shoes and glass. In this paper, a hierarchical layout design method based on rubber band potential-energy descending is proposed. The basic concept of the rubber band enclosing model is described in detail. We divide the layout process into three stages: an initial layout stage, a rubber band enclosing stage and a local adjustment stage. In each stage, the most efficient strategies are employed to further improve the layout solution. Computational results show that the proposed method performed better in utilization than the GLSHA algorithm for three out of nine instances.
Hierarchical Neural Networks Method for Fault Diagnosis of Large-Scale Analog Circuits
Institute of Scientific and Technical Information of China (English)
TAN Yanghong; HE Yigang; FANG Gefeng
2007-01-01
A novel hierarchical neural networks (HNNs) method for fault diagnosis of large-scale circuits is proposed. Conventional neural network (NN) approaches require a large amount of computation to simulate the various faulty-component possibilities; for large-scale circuits, the number of possible faults, and hence the simulations, grows rapidly and becomes tedious and sometimes even impractical. Here, NNs are distributed to the torn sub-blocks according to the proposed tearing principles for large-scale circuits, and the NNs are trained in batches on different patterns, following the presented rules for the various patterns, when the DC, AC and transient responses of the circuit are available. The method is characterized by decreasing the overlapped feasible domains of responses of circuits with tolerance, and it leads to better performance and higher correct classification rates. The methodology is illustrated by means of diagnosis examples.
Ackerman, Margareta; Branzei, Simina; Loker, David
2011-01-01
In this paper we investigate clustering in the weighted setting, in which every data point is assigned a real valued weight. We conduct a theoretical analysis on the influence of weighted data on standard clustering algorithms in each of the partitional and hierarchical settings, characterising the precise conditions under which such algorithms react to weights, and classifying clustering methods into three broad categories: weight-responsive, weight-considering, and weight-robust. Our analysis raises several interesting questions and can be directly mapped to the classical unweighted setting.
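The weight-responsive versus weight-robust distinction is easy to see in code: a weighted centroid moves when weights change, while single-linkage distance ignores weights entirely. This is an illustrative contrast, not the paper's formal characterisation.

```python
def weighted_centroid(points, weights):
    """Weight-responsive statistic: the centroid shifts as weights change."""
    return sum(p * w for p, w in zip(points, weights)) / sum(weights)

def single_linkage_dist(A, B, weights=None):
    """Weight-robust statistic: the minimum inter-cluster distance is the
    same no matter what weights the points carry."""
    return min(abs(a - b) for a in A for b in B)
```

Any clustering algorithm built on the first kind of statistic reacts to weights; one built only on the second is unaffected by them.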
Schaefer, Andreas M.; Daniell, James E.; Wenzel, Friedemann
2017-07-01
Earthquake clustering is an essential part of almost any statistical analysis of spatial and temporal properties of seismic activity. The nature of earthquake clusters and subsequent declustering of earthquake catalogues plays a crucial role in determining the magnitude-dependent earthquake return period and its respective spatial variation for probabilistic seismic hazard assessment. This study introduces the Smart Cluster Method (SCM), a new methodology to identify earthquake clusters, which uses an adaptive point process for spatio-temporal cluster identification. It utilises the magnitude-dependent spatio-temporal earthquake density to adjust the search properties, subsequently analyses the identified clusters to determine directional variation and adjusts its search space with respect to directional properties. In the case of rapid subsequent ruptures like the 1992 Landers sequence or the 2010-2011 Darfield-Christchurch sequence, a reclassification procedure is applied to disassemble subsequent ruptures using near-field searches, nearest neighbour classification and temporal splitting. The method is capable of identifying and classifying earthquake clusters in space and time. It has been tested and validated using earthquake data from California and New Zealand. A total of more than 1500 clusters have been found in both regions since 1980 with Mmin = 2.0. Utilising the knowledge of cluster classification, the method has been adjusted to provide an earthquake declustering algorithm, which has been compared to existing methods. Its performance is comparable to established methodologies. The analysis of earthquake clustering statistics leads to various new and updated correlation functions, e.g. for ratios between mainshock and strongest aftershock and for general aftershock activity metrics.
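A minimal window-based declustering sketch, in the spirit of classical Gardner-Knopoff windows and far simpler than SCM's adaptive point process, shows the basic mainshock/aftershock split; the fixed windows and 1-D positions are illustrative only.

```python
def decluster(events, time_window, dist_window):
    """Toy window-based declustering. events: (time, position, magnitude)
    tuples; an event falling inside the windows of an earlier, larger
    retained shock is counted as an aftershock and dropped."""
    mainshocks = []
    for t, x, m in sorted(events):
        is_aftershock = any(
            m <= m0 and t - t0 <= time_window and abs(x - x0) <= dist_window
            for t0, x0, m0 in mainshocks
        )
        if not is_aftershock:
            mainshocks.append((t, x, m))
    return mainshocks
```

Real methods such as SCM make the windows magnitude-dependent and directional rather than fixed, which is precisely what the adaptive search space above refers to.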
Application of the cluster variation method to interstitial solid solutions
Pekelharing, M.I.
2008-01-01
A thermodynamic model for interstitial alloys, based on the Cluster Variation Method (CVM), has been developed, capable of incorporating short range ordering (SRO), long range ordering (LRO), and the mutual interaction between the host and the interstitial sublattices. The obtained cluster-based
A Latent Variable Clustering Method for Wireless Sensor Networks
DEFF Research Database (Denmark)
Vasilev, Vladislav; Mihovska, Albena Dimitrova; Poulkov, Vladimir
2016-01-01
In this paper we derive a clustering method based on the Hidden Conditional Random Field (HCRF) model in order to maximize the performance of a wireless sensor network. Our novel approach to clustering in this paper is in the application of an index invariant graph that we defined in a previous work and...
Object-Oriented Image Clustering Method Using UAS Photogrammetric Imagery
Lin, Y.; Larson, A.; Schultz-Fellenz, E. S.; Sussman, A. J.; Swanson, E.; Coppersmith, R.
2016-12-01
Unmanned Aerial Systems (UAS) have been widely used as an imaging modality to obtain remotely sensed multi-band surface imagery, and are growing in popularity due to their efficiency, ease of use, and affordability. Los Alamos National Laboratory (LANL) has employed UAS for geologic site characterization and change detection studies at a variety of field sites. The deployed UAS were equipped with a standard visible-band camera to collect imagery datasets. Based on the imagery collected, we use deep sparse algorithmic processing to detect and discriminate subtle topographic features created or impacted by subsurface activities. In this work, we develop an object-oriented remote sensing imagery clustering method for land cover classification. To improve clustering and segmentation accuracy, instead of using conventional pixel-based clustering methods, we integrate spatial information from neighboring regions to create super-pixels, avoiding salt-and-pepper noise and subsequent over-segmentation. To further improve the robustness of our clustering method, we also incorporate a custom digital elevation model (DEM) dataset generated using a structure-from-motion (SfM) algorithm together with the red, green, and blue (RGB) band data for clustering. In particular, we first employ agglomerative clustering to create an initial segmentation map, in which every object is treated as a single (new) pixel. Based on the new pixels obtained, we generate new features to implement another level of clustering. We apply our clustering method to the RGB+DEM datasets collected at the field site. Through binary clustering and multi-object clustering tests, we verify that our method can accurately separate vegetation from non-vegetation regions and is also able to differentiate object features on the surface.
Sequential Combination Methods for Data Clustering Analysis
Institute of Scientific and Technical Information of China (English)
钱 涛; Ching Y.Suen; 唐远炎
2002-01-01
This paper proposes the use of more than one clustering method to improve clustering performance. Clustering is an optimization procedure based on a specific clustering criterion. Clustering combination can be regarded as a technique that constructs and processes multiple clustering criteria. Since the global and local clustering criteria are complementary rather than competitive, combining these two types of clustering criteria may enhance the clustering performance. In our past work, a multi-objective programming based simultaneous clustering combination algorithm has been proposed, which incorporates multiple criteria into an objective function by a weighting method, and solves this problem with constrained nonlinear optimization programming. But this algorithm has high computational complexity. Here a sequential combination approach is investigated, which first uses the global criterion based clustering to produce an initial result, then uses the local criterion based information to improve the initial result with a probabilistic relaxation algorithm or linear additive model. Compared with the simultaneous combination method, sequential combination has low computational complexity. Results on some simulated data and standard test data are reported. It appears that clustering performance improvement can be achieved at low cost through sequential combination.
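The local refinement stage can be sketched as a relaxation-style relabeling: starting from labels produced by any global-criterion clustering, each point repeatedly takes the majority label of its nearest neighbours. This is a simplified stand-in for the probabilistic relaxation algorithm in the paper; names and parameters are illustrative.

```python
def knn_relax(points, labels, k=3, rounds=5):
    """Relabel each 1-D point by the majority label of its k nearest
    neighbours, iterating until the labeling is stable."""
    n = len(points)
    for _ in range(rounds):
        new = []
        for i in range(n):
            order = sorted(range(n), key=lambda j: (abs(points[i] - points[j]), j))
            votes = {}
            for j in order[1:k + 1]:  # skip the point itself at distance 0
                votes[labels[j]] = votes.get(labels[j], 0) + 1
            new.append(max(votes, key=votes.get))
        if new == labels:
            break
        labels = new
    return labels
```

The global stage fixes the coarse partition cheaply; the local stage then corrects isolated misassignments, which is the division of labour that keeps sequential combination inexpensive.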
Graves, T.A.; Kendall, K.C.; Royle, J. Andrew; Stetz, J.B.; Macleod, A.C.
2011-01-01
Few studies link habitat to grizzly bear Ursus arctos abundance, and those that do have not accounted for variation in detection or spatial autocorrelation. We collected and genotyped bear hair in and around Glacier National Park in northwestern Montana during the summer of 2000. We developed a hierarchical Markov chain Monte Carlo model that extends the existing occupancy and count models by accounting for (1) spatially explicit variables that we hypothesized might influence abundance; (2) separate sub-models of detection probability for two distinct sampling methods (hair traps and rub trees) targeting different segments of the population; (3) covariates to explain variation in each sub-model of detection; (4) a conditional autoregressive term to account for spatial autocorrelation; and (5) weights to identify the most important variables. Road density and per cent mesic habitat best explained variation in female grizzly bear abundance; spatial autocorrelation was not supported. More female bears were predicted in places with lower road density and more mesic habitat. Detection rates of females increased with rub tree sampling effort. Road density best explained variation in male grizzly bear abundance, and spatial autocorrelation was supported. More male bears were predicted in areas of low road density. Detection rates of males increased with rub tree and hair trap sampling effort and decreased over the sampling period. We provide a new method to (1) incorporate multiple detection methods into hierarchical models of abundance and (2) determine whether spatial autocorrelation should be included in final models. Our results suggest that the influence of landscape variables is consistent between habitat selection and abundance in this system. © 2011 The Authors. Animal Conservation © 2011 The Zoological Society of London.
Directory of Open Access Journals (Sweden)
Juan C. Rodríguez
2010-07-01
Full Text Available It is widely known that some fields related to realistic graphics applications require highly detailed three-dimensional models. The technologies for this are well developed; however, in some cases laser scanners produce complex models composed of millions of points, which are computationally intractable. In these cases it is desirable to obtain a reduced set of these samples from which to reconstruct the surface of the function. An appropriate reduction approach that balances the loss of accuracy in the reconstructed function against the computational cost is a non-trivial problem. In this article, a hierarchical clustering method based on center selection, using the geometric distribution and curvature estimation of the samples in 3D space, is described.
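A crude stand-in for this kind of sample reduction keeps one representative centroid per spatial cell; the paper's method selects centers using geometric distribution and curvature estimation, which this illustrative sketch deliberately ignores.

```python
from collections import defaultdict

def reduce_points(points, cell):
    """Keep one centroid per grid cell: a crude form of point-sample
    reduction. points: coordinate tuples; cell: grid spacing."""
    cells = defaultdict(list)
    for p in points:
        cells[tuple(int(c // cell) for c in p)].append(p)
    return [tuple(sum(cs) / len(group) for cs in zip(*group))
            for group in cells.values()]
```

A curvature-aware scheme would instead shrink the cell size (or keep more representatives) where the surface bends sharply, preserving detail exactly where uniform reduction loses it.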
Urban Fire Risk Clustering Method Based on Fire Statistics
Institute of Scientific and Technical Information of China (English)
WU Lizhi; REN Aizhu
2008-01-01
Fire statistics and fire analysis have become important ways for us to understand the laws of fire, prevent the occurrence of fires, and improve the ability to control fire. Based on existing fire statistics, a weighted fire-risk calculation method characterized by the number of fire occurrences, direct economic losses, and fire casualties was put forward. On the basis of this method, and having improved the K-means clustering algorithm, this paper established a fire-risk K-means clustering model, which can better resolve the problem of automatically classifying fire risk. Fire-risk clusters should be classified by the absolute distance to the target instead of the relative distance used in the traditional clustering algorithm. Finally, to apply the established model, this paper carried out fire-risk clustering on fire statistics from January 2000 to December 2004 for Shenyang, China. This research provides technical support for urban fire management.
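The weighted risk score can be sketched directly from the description: min-max normalise the three indicators (fire counts, direct economic losses, casualties) and combine them with weights. The weight values below are illustrative, not taken from the paper.

```python
def risk_scores(stats, weights):
    """Weighted fire-risk scores: min-max normalise each indicator column
    (fire counts, direct losses, casualties), then combine with weights."""
    cols = list(zip(*stats))
    lo = [min(c) for c in cols]
    hi = [max(c) for c in cols]
    norm = [[(v - l) / (h - l) if h > l else 0.0
             for v, l, h in zip(row, lo, hi)] for row in stats]
    return [sum(w * v for w, v in zip(weights, row)) for row in norm]
```

The resulting one-dimensional scores are what the improved K-means model would then group into risk classes for the city's districts.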
An Effective Method of Producing Small Neutral Carbon Clusters
Institute of Scientific and Technical Information of China (English)
XIA Zhu-Hong; CHEN Cheng-Chu; HSU Yen-Chu
2007-01-01
An effective method of producing small neutral carbon clusters Cn (n = 1-6) is described. The small carbon clusters (positively or negatively charged, or neutral) are formed in a plasma produced by a high-power 532 nm pulsed laser ablating the surface of a metal Mn rod, which reacts with small hydrocarbons supplied by a pulsed valve; the neutral carbon clusters are then extracted and photo-ionized by another laser (266 nm or 355 nm) in the ionization region of a linear time-of-flight mass spectrometer. The distributions of the initial neutral carbon clusters are analysed from the ionic species appearing in the mass spectra. It is observed that the yield of small carbon clusters with the present method is about 10 times that of the traditional, widely used technique of laser vaporization of graphite.
Directory of Open Access Journals (Sweden)
Sema A. Kalaian
2003-06-01
Full Text Available The objectives of the present mixed-effects meta-analytic application are to provide practical guidelines to: (a) calculate treatment effect sizes from multiple sites; (b) calculate the overall mean of the site effect sizes and their variances; (c) model the heterogeneity in these site treatment effects as a function of site and program characteristics plus unexplained random error using Hierarchical Linear Modeling (HLM); (d) improve the ability of multisite evaluators and policy makers to reach sound conclusions about the effectiveness of educational and social interventions based on multisite evaluations; and (e) illustrate the proposed methodology by applying these methods to real multi-site research data.
Directory of Open Access Journals (Sweden)
Shiza Anand
2015-08-01
Full Text Available The number of hypertext documents on the world wide web is increasing continuously day by day, so clustering methods are required to group documents into cluster repositories according to the similarity between documents. Various clustering methods exist, such as hierarchical, K-means, fuzzy-logic-based, and centroid-based clustering. These keyword-based clustering methods take a large amount of time for creating containers and putting documents into their respective containers. These traditional methods use the file-handling techniques of different programming languages for creating repositories and transferring web documents into these containers. In contrast, the openstack4j SDK is a newer technique for creating containers and moving web documents into them according to similarity in much less time than the traditional methods. Another benefit of this technique is that the SDK understands and reads all types of files, such as jpg, html, pdf, and doc. This paper compares the time required for clustering documents using openstack4j against traditional methods and suggests that search engines adopt this technique for clustering so that they can answer user queries in less time.
A method of spherical harmonic analysis in the geosciences via hierarchical Bayesian inference
Muir, J. B.; Tkalčić, H.
2015-11-01
The problem of decomposing irregular data on the sphere into a set of spherical harmonics is common in many fields of geosciences where it is necessary to build a quantitative understanding of a globally varying field. For example, in global seismology, a compressional or shear wave speed that emerges from tomographic images is used to interpret current state and composition of the mantle, and in geomagnetism, secular variation of magnetic field intensity measured at the surface is studied to better understand the changes in the Earth's core. Optimization methods are widely used for spherical harmonic analysis of irregular data, but they typically do not treat the dependence of the uncertainty estimates on the imposed regularization. This can cause significant difficulties in interpretation, especially when the best-fit model requires more variables as a result of underestimating data noise. Here, with the above limitations in mind, the problem of spherical harmonic expansion of irregular data is treated within the hierarchical Bayesian framework. The hierarchical approach significantly simplifies the problem by removing the need for regularization terms and user-supplied noise estimates. The use of the corrected Akaike Information Criterion for picking the optimal maximum degree of spherical harmonic expansion and the resulting spherical harmonic analyses are first illustrated on a noisy synthetic data set. Subsequently, the method is applied to two global data sets sensitive to the Earth's inner core and lowermost mantle, consisting of PKPab-df and PcP-P differential traveltime residuals relative to a spherically symmetric Earth model. The posterior probability distributions for each spherical harmonic coefficient are calculated via Markov Chain Monte Carlo sampling; the uncertainty obtained for the coefficients thus reflects the noise present in the real data and the imperfections in the spherical harmonic expansion.
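In standard notation (our gloss; the abstract itself displays no equations), the model being fitted is a truncated spherical harmonic expansion of the irregular data,

```latex
d(\theta, \phi) \;\approx\; \sum_{l=0}^{L} \sum_{m=-l}^{l} a_{lm} \, Y_{lm}(\theta, \phi),
```

where the maximum degree L is chosen by the corrected Akaike Information Criterion and the posterior of each coefficient a_{lm} (together with a hierarchical noise parameter) is sampled by Markov Chain Monte Carlo.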
Clustering Approach to Stock Market Prediction
Directory of Open Access Journals (Sweden)
M.Suresh Babu
2012-01-01
Full Text Available Clustering is an adaptive procedure in which objects are grouped together based on the principle of maximizing intra-class similarity and minimizing inter-class similarity. Various clustering algorithms have been developed which perform well on datasets for cluster formation. This paper analyzes three major clustering algorithms, K-means, hierarchical clustering, and reverse K-means, and compares their performance with respect to the ability to build correct class-wise clusters. An effective clustering method, HRK (Hierarchical agglomerative and Recursive K-means clustering), is proposed to predict short-term stock price movements after the release of financial reports. The proposed method consists of three phases. First, we convert each financial report into a feature vector and use the hierarchical agglomerative clustering method to divide the converted feature vectors into clusters. Second, for each cluster, we recursively apply the K-means clustering method to partition each cluster into sub-clusters so that most feature vectors in each sub-cluster belong to the same class. Then, for each sub-cluster, we choose its centroid as the representative feature vector. Finally, we employ the representative feature vectors to predict the stock price movements. The experimental results show the proposed method outperforms SVM in terms of accuracy and average profits.
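The three-phase procedure can be sketched in miniature; the single-linkage rule, the fixed two-way K-means split, and the toy points are assumptions for illustration, not the authors' exact settings:

```python
import math, random

def dist(a, b):
    return math.dist(a, b)  # Euclidean distance (Python 3.8+)

def agglomerative(points, n_clusters):
    """Phase 1 stand-in: naive single-linkage agglomerative clustering."""
    clusters = [[i] for i in range(len(points))]
    while len(clusters) > n_clusters:
        best, best_d = (0, 1), float("inf")
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                d = min(dist(points[a], points[b])
                        for a in clusters[i] for b in clusters[j])
                if d < best_d:
                    best_d, best = d, (i, j)
        i, j = best
        clusters[i] += clusters[j]
        del clusters[j]
    return clusters

def kmeans_split(points, idxs, k=2, iters=20):
    """Phase 2 stand-in: plain K-means restricted to the subset `idxs`."""
    centers = [points[i] for i in random.sample(idxs, k)]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for i in idxs:
            nearest = min(range(k), key=lambda c: dist(points[i], centers[c]))
            groups[nearest].append(i)
        centers = [tuple(sum(points[i][d] for i in g) / len(g)
                         for d in range(len(points[0]))) if g else centers[c]
                   for c, g in enumerate(groups)]
    return [g for g in groups if g]

def hrk_representatives(points, n_clusters=2):
    """Phase 3: the centroid of every sub-cluster becomes a representative."""
    reps = []
    for cluster in agglomerative(points, n_clusters):
        subs = kmeans_split(points, cluster) if len(cluster) > 1 else [cluster]
        for sub in subs:
            reps.append(tuple(sum(points[i][d] for i in sub) / len(sub)
                              for d in range(len(points[0]))))
    return reps
```

In the real method the representatives would then feed a classifier of stock price movements; here they are simply returned.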
Variable cluster analysis method for building neural network model
Institute of Scientific and Technical Information of China (English)
王海东; 刘元东
2004-01-01
To address the problems that input variables should be reduced as much as possible while still fully explaining the output variables when building a neural network model of a complicated system, a variable selection method based on cluster analysis was investigated. A similarity coefficient describing the mutual relation of variables was defined. The methods of highest contribution rate, part replacing whole, and variable replacement were put forward and derived from information theory. Software for the neural network based on cluster analysis, which provides several methods for defining the variable similarity coefficient, clustering system variables, and evaluating variable clusters, was developed and applied to build a neural network forecast model of cement clinker quality. The results show that the network scale, training time and prediction accuracy are all satisfactory. The practical application demonstrates that this method of selecting variables for neural networks is feasible and effective.
A dynamic fuzzy clustering method based on genetic algorithm
Institute of Scientific and Technical Information of China (English)
ZHENG Yan; ZHOU Chunguang; LIANG Yanchun; GUO Dongwei
2003-01-01
A dynamic fuzzy clustering method based on a genetic algorithm is presented. By calculating the fuzzy dissimilarity between samples, the essential associations among samples are modeled faithfully. The fuzzy dissimilarity between two samples is mapped onto their Euclidean distance; that is, the high-dimensional samples are mapped onto the two-dimensional plane. The mapping is optimized globally by the genetic algorithm, which adjusts the coordinates of each sample, and thus the Euclidean distances, so that they gradually approximate the fuzzy dissimilarities between samples. A key advantage of the proposed method is that the clustering is independent of the spatial distribution of the input samples, which improves flexibility and visualization. The method converges faster and clusters more accurately than some typical clustering algorithms. Simulated experiments show the feasibility and availability of the proposed method.
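The coordinate-adjustment idea can be sketched as a toy genetic search that places samples in the plane so pairwise distances match a given dissimilarity matrix; the population size, generation count, and mutation-only operator are illustrative choices, not the paper's settings:

```python
import math, random

def ga_embed(dissim, pop=30, gens=60, seed=2):
    """Toy GA that places n samples in the plane so that pairwise Euclidean
    distances approximate a given (fuzzy) dissimilarity matrix."""
    n = len(dissim)
    rng = random.Random(seed)

    def stress(coords):  # squared mismatch between layout and dissimilarities
        return sum((math.dist(coords[i], coords[j]) - dissim[i][j]) ** 2
                   for i in range(n) for j in range(i + 1, n))

    popn = [[(rng.uniform(-1, 1), rng.uniform(-1, 1)) for _ in range(n)]
            for _ in range(pop)]
    for _ in range(gens):
        popn.sort(key=stress)
        popn = popn[:pop // 2]          # elitist selection
        while len(popn) < pop:
            parent = rng.choice(popn[:5])
            child = [(x + rng.gauss(0, 0.05), y + rng.gauss(0, 0.05))
                     for x, y in parent]  # jitter the sample coordinates
            popn.append(child)
    best = min(popn, key=stress)
    return best, stress(best)
```

A fuller implementation would add crossover between layouts and tune the mutation scale; the elitist jitter step above is the minimal form of the "adjust coordinates to reduce mismatch" loop.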
Directory of Open Access Journals (Sweden)
Feng-Hsiang Chung
Full Text Available Gene-set-based analysis (GSA), which uses the relative importance of functional gene-sets, or molecular signatures, as units for analysis of genome-wide gene expression data, has exhibited major advantages, with respect to greater accuracy, robustness, and biological relevance, over individual gene analysis (IGA), which uses log-ratios of individual genes for analysis. Yet IGA remains the dominant mode of analysis of gene expression data. The Connectivity Map (CMap), an extensive database on genomic profiles of effects of drugs and small molecules and widely used for studies related to repurposed drug discovery, has mostly been employed in IGA mode. Here, we constructed a GSA-based version of CMap, Gene-Set Connectivity Map (GSCMap), in which all the genomic profiles in CMap are converted, using gene-sets from the Molecular Signatures Database, to functional profiles. We showed that GSCMap essentially eliminated cell-type dependence, a weakness of CMap in IGA mode, and yielded significantly better performance on sample clustering and drug-target association. As a first application of GSCMap we constructed the platform Gene-Set Local Hierarchical Clustering (GSLHC) for discovering insights on coordinated actions of biological functions and facilitating classification of heterogeneous subtypes of drug-driven responses. GSLHC was shown to tightly cluster drugs of known similar properties. We used GSLHC to identify the therapeutic properties and putative targets of 18 compounds of previously unknown characteristics listed in CMap, eight of which suggest anti-cancer activities. The GSLHC website http://cloudr.ncu.edu.tw/gslhc/ contains 1,857 local hierarchical clusters accessible by querying 555 of the 1,309 drugs and small molecules listed in CMap. We expect GSCMap and GSLHC to be widely useful in providing new insights into the biological effects of bioactive compounds, in drug repurposing, and in function-based classification of complex diseases.
DNA splice site sequences clustering method for conservativeness analysis
Institute of Scientific and Technical Information of China (English)
Quanwei Zhang; Qinke Peng; Tao Xu
2009-01-01
DNA sequences that are near splice sites have remarkable conservativeness, and many researchers have contributed to the prediction of splice sites. In order to mine the underlying biological knowledge, we analyze the conservativeness of sequences adjacent to DNA splice sites by clustering. Firstly, we propose a DNA splice site sequence clustering method based on DBSCAN, using four kinds of dissimilarity calculation methods. Then, we analyze the conservative features of the clustering results and the experimental data set.
Color Image Segmentation Method Based on Improved Spectral Clustering Algorithm
Dong Qin
2014-01-01
Addressing the high sparsity of image data and the problem of determining the number of clusters, we put forward a color image segmentation algorithm that combines semi-supervised machine learning with spectral graph theory. Through the study of related theories and methods of spectral clustering algorithms, we introduce the concept of information entropy to design a method which automatically optimizes the scale parameter value. So it avoids the unstab...
Institute of Scientific and Technical Information of China (English)
Shahzad Ahmad KHAN; FU Zhengyi; Muhammad ASIF; WANG Weimin; WANG Hao
2015-01-01
Here, a new idea was proposed for the template-free synthesis of hierarchical m-ZrO2 nanorods, and their possible formation mechanism based on a series of chemical reactions in a simple hydrothermal method was discussed. Traditional preparation methods of hierarchical ZrO2 nanorods involve expensive equipment, complicated processes, and high production costs. The as-synthesized products were composed of many nanorods 180-200 nm in diameter and 5-7 μm in length. The final product after annealing consisted of hierarchical monoclinic ZrO2 (m-ZrO2) nanorods; namely, each big nanorod was made up of many small nanorods 40-50 nm in diameter and 500-600 nm in length. The experimental results are useful for understanding the chemical properties of ZrB2 and ZrO2 and for the design of derivative m-ZrO2 nanomaterials.
Clustering method based on data division and partition
Institute of Scientific and Technical Information of China (English)
卢志茂; 刘晨; 张春祥; 王蕾
2014-01-01
Many classical clustering algorithms do a good job when their prerequisites are met but do not scale well to very large data sets (VLDS). In this work, a novel division and partition clustering method (DP) was proposed to solve the problem. DP cuts the source data set into data blocks and extracts an eigenvector for each data block to form the local feature set. The local feature set is used in a second round of characteristic polymerization over the source data to find the global eigenvector. Ultimately, according to the global eigenvector, the data set is assigned by the criterion of minimum distance. The experimental results show that the method is more robust than conventional clusterings. Insensitivity to data dimensionality, distribution, and the number of natural clusters gives it a wide range of applications in clustering VLDS.
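The block-wise pipeline described above can be sketched as follows; block centroids stand in for the paper's local eigenvectors, and the greedy merge that forms global centers is an illustrative simplification:

```python
import math

def dp_cluster(data, block_size, n_clusters):
    """Sketch of divide-and-partition clustering for large data sets."""
    # 1. cut the source data into blocks and extract one local feature per
    #    block (the block centroid stands in for the paper's eigenvector)
    blocks = [data[i:i + block_size] for i in range(0, len(data), block_size)]
    local = [tuple(sum(p[d] for p in b) / len(b) for d in range(len(data[0])))
             for b in blocks]
    # 2. aggregate the local features into global centers by greedily
    #    merging the closest pair until n_clusters remain
    centers = list(local)
    while len(centers) > n_clusters:
        i, j = min(((i, j) for i in range(len(centers))
                    for j in range(i + 1, len(centers))),
                   key=lambda ij: math.dist(centers[ij[0]], centers[ij[1]]))
        merged = tuple((a + b) / 2 for a, b in zip(centers[i], centers[j]))
        centers = [c for k, c in enumerate(centers) if k not in (i, j)]
        centers.append(merged)
    # 3. assign every point to its nearest global center (minimum distance)
    return [min(range(n_clusters), key=lambda c: math.dist(p, centers[c]))
            for p in data]
```

Only the small set of block features is clustered globally, which is the source of the scalability the abstract claims.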
Directory of Open Access Journals (Sweden)
Marín Ignacio
2007-11-01
Full Text Available Abstract Background Classification procedures are widely used in phylogenetic inference, the analysis of expression profiles, the study of biological networks, etc. Many algorithms have been proposed to establish the similarity between two different classifications of the same elements. However, methods to determine significant coincidences between hierarchical and non-hierarchical partitions are still poorly developed, in spite of the fact that the search for such coincidences is implicit in many analyses of massive data. Results We describe a novel strategy to compare a hierarchical and a dichotomic non-hierarchical classification of elements, in order to find clusters in a hierarchical tree in which elements of a given "flat" partition are overrepresented. The key improvement of our strategy with respect to previous methods is the use of permutation analyses of ranked clusters to determine whether regions of the dendrograms present a significant enrichment. We show that this method is more sensitive than previously developed strategies and how it can be applied to several real cases, including microarray and interactome data. Particularly, we use it to compare a hierarchical representation of the yeast mitochondrial interactome and a catalogue of known mitochondrial protein complexes, demonstrating a high level of congruence between those two classifications. We also discuss extensions of this method to other cases which are conceptually related. Conclusion Our method is highly sensitive and outperforms previously described strategies. A PERL script that implements it is available at http://www.uv.es/~genomica/treetracker.
An Examination of Three Spatial Event Cluster Detection Methods
Directory of Open Access Journals (Sweden)
Hensley H. Mariathas
2015-03-01
Full Text Available In spatial disease surveillance, geographic areas with large numbers of disease cases are to be identified, so that targeted investigations can be pursued. Geographic areas with high disease rates are called disease clusters, and statistical cluster detection tests are used to identify geographic areas with higher disease rates than expected by chance alone. In some situations, disease-related events rather than individuals are of interest for geographical surveillance, and methods to detect clusters of disease-related events are called event cluster detection methods. In this paper, we examine three distributional assumptions for the events in cluster detection: compound Poisson, approximate normal and multiple hypergeometric (exact). The methods differ in the choice of distributional assumption for the potentially multiple correlated events per individual. The methods are illustrated on emergency department (ED) presentations by children and youth (age < 18 years) because of substance use in the province of Alberta, Canada, from 1 April 2007 to 31 March 2008. Simulation studies are conducted to investigate the Type I error and the power of the clustering methods.
A Clustering Method Based on the Maximum Entropy Principle
Directory of Open Access Journals (Sweden)
Edwin Aldana-Bobadilla
2015-01-01
Full Text Available Clustering is an unsupervised process to determine which unlabeled objects in a set share interesting properties. The objects are grouped into k subsets (clusters) whose elements optimize a proximity measure. Methods based on information theory have proven to be feasible alternatives. They are based on the assumption that a cluster is one subset with the minimal possible degree of "disorder". They attempt to minimize the entropy of each cluster. We propose a clustering method based on the maximum entropy principle. Such a method explores the space of all possible probability distributions of the data to find one that maximizes the entropy subject to extra conditions based on prior information about the clusters. The prior information is based on the assumption that the elements of a cluster are "similar" to each other in accordance with some statistical measure. As a consequence of such a principle, those distributions of high entropy that satisfy the conditions are favored over others. Searching the space to find the optimal distribution of objects in the clusters represents a hard combinatorial problem, which disallows the use of traditional optimization techniques. Genetic algorithms are a good alternative to solve this problem. We benchmark our method relative to the best theoretical performance, which is given by the Bayes classifier when data are normally distributed, and a multilayer perceptron network, which offers the best practical performance when data are not normal. In general, a supervised classification method will outperform a non-supervised one, since, in the first case, the elements of the classes are known a priori. In what follows, we show that our method's effectiveness is comparable to a supervised one. This clearly exhibits the superiority of our method.
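The idea of maximizing entropy subject to a similarity constraint, searched by a genetic algorithm, can be sketched in miniature; the spread limit, operators, and all numeric settings below are illustrative assumptions, not the authors' design:

```python
import math, random

def assignment_entropy(labels, k):
    """Shannon entropy of the cluster-size distribution."""
    n = len(labels)
    probs = [labels.count(c) / n for c in range(k)]
    return -sum(p * math.log(p) for p in probs if p > 0)

def within_spread(points, labels, k):
    """Mean distance of points to their own cluster centroid."""
    total = 0.0
    for c in range(k):
        members = [p for p, l in zip(points, labels) if l == c]
        if members:
            cen = tuple(sum(m[d] for m in members) / len(members)
                        for d in range(len(points[0])))
            total += sum(math.dist(m, cen) for m in members)
    return total / len(points)

def maxent_cluster(points, k=2, limit=0.5, pop=40, gens=80, seed=3):
    """Toy GA: search label vectors that maximize assignment entropy
    subject to a within-cluster spread limit (the prior-information
    constraint on similarity)."""
    rng = random.Random(seed)
    n = len(points)

    def fitness(labels):
        if within_spread(points, labels, k) > limit:
            return -1.0  # infeasible: clusters are not internally similar
        return assignment_entropy(labels, k)

    popn = [[rng.randrange(k) for _ in range(n)] for _ in range(pop)]
    for _ in range(gens):
        popn.sort(key=fitness, reverse=True)
        popn = popn[:pop // 2]
        while len(popn) < pop:
            child = list(rng.choice(popn[:10]))
            child[rng.randrange(n)] = rng.randrange(k)  # point mutation
            popn.append(child)
    return max(popn, key=fitness)
```

The constraint plays the role of the paper's prior information: among feasible assignments, the most balanced (highest-entropy) one wins.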
A New Method for Medical Image Clustering Using Genetic Algorithm
Directory of Open Access Journals (Sweden)
Akbar Shahrzad Khashandarag
2013-01-01
Full Text Available Segmentation is applied to medical images when image brightness is weak, making it difficult to recognize tissue borders. Thus, exact segmentation of medical images is an essential process in recognizing and curing an illness, and the purpose of clustering in medical images is the recognition of damaged areas of tissue. Different clustering techniques have been introduced in fields such as engineering, medicine, and data mining. However, there is no standard clustering technique that presents ideal results for all imaging applications. In this paper, a new method combining a genetic algorithm and the k-means algorithm is presented for clustering medical images. In this combined technique, a variable string length genetic algorithm (VGA) is used to determine the optimal cluster centers. The proposed algorithm has been compared with the k-means clustering algorithm. The advantage of the proposed method is its accuracy in selecting the optimal cluster centers compared with the above-mentioned technique.
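The GA-driven search for cluster centers can be sketched as follows; note this uses fixed-length chromosomes seeded from data points as an illustrative simplification of the paper's variable string length encoding, and all rates are assumptions:

```python
import math, random

def within_ss(data, centers):
    """Total distance of each point to its nearest center (GA fitness)."""
    return sum(min(math.dist(p, c) for c in centers) for p in data)

def ga_centers(data, k=2, pop=20, gens=40, seed=1):
    """Toy GA evolving k cluster centers for a set of feature vectors."""
    rng = random.Random(seed)
    popn = [[rng.choice(data) for _ in range(k)] for _ in range(pop)]
    for _ in range(gens):
        popn.sort(key=lambda ch: within_ss(data, ch))  # fitter = lower
        survivors = popn[:pop // 2]
        children = []
        while len(survivors) + len(children) < pop:
            a, b = rng.sample(survivors, 2)
            child = [rng.choice(pair) for pair in zip(a, b)]  # uniform crossover
            if rng.random() < 0.3:  # mutation: jitter one center slightly
                i = rng.randrange(k)
                child[i] = tuple(x + rng.gauss(0, 0.1) for x in child[i])
            children.append(child)
        popn = survivors + children
    return min(popn, key=lambda ch: within_ss(data, ch))
```

For image clustering, `data` would hold per-pixel feature vectors (e.g. intensities); the returned centers then define the segmentation classes.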
Cluster Monte Carlo methods for the FePt Hamiltonian
Energy Technology Data Exchange (ETDEWEB)
Lyberatos, A., E-mail: lyb@materials.uoc.gr [Materials Science and Technology Department, P.O. Box 2208, 71003 Heraklion (Greece); Parker, G.J. [HGST, A Western Digital Company, 3403 Yerba Buena Road, San Jose, CA 95135 (United States)
2016-02-15
Cluster Monte Carlo methods for the classical spin Hamiltonian of FePt with long range exchange interactions are presented. We use a combination of the Swendsen–Wang (or Wolff) and Metropolis algorithms that satisfies the detailed balance condition and ergodicity. The algorithms are tested by calculating the temperature dependence of the magnetization, susceptibility and heat capacity of L1₀-FePt nanoparticles in a range including the critical region. The cluster models yield numerical results in good agreement within statistical error with the standard single-spin flipping Monte Carlo method. The variation of the spin autocorrelation time with grain size is used to deduce the dynamic exponent of the algorithms. Our cluster models do not provide a more accurate estimate of the magnetic properties at equilibrium. - Highlights: • A new cluster Monte Carlo algorithm was applied to FePt nanoparticles. • Magnetic anisotropy imposes a restriction on cluster moves. • Inclusion of Metropolis steps is required to satisfy ergodicity. • In the critical region a percolating cluster occurs for any grain size. • Critical slowing down is not solved by the new cluster algorithms.
Institute of Scientific and Technical Information of China (English)
李新叶; 李新芳
2011-01-01
Family quality defect history greatly affects the assessed health condition of a power transformer in integrated condition assessment, and at present it is usually decided subjectively from expert experience. A new quantitative computing method is proposed: hierarchical clustering is used to analyze the potential evolution regularity of transformers of the same family, and the degree of influence of family quality defect history on the health condition of the power transformer is then computed from the analysis results. To make the clustering more accurate, the line slope distance of the condition evolution curve is proposed as a line-shape similarity criterion, and both the data distance criterion and the line slope distance criterion are used to cluster the data. The experimental results show that our method is better than the traditional hierarchical clustering method, and that using clustering analysis to calculate the influence degree of family quality defect history on power transformer health condition gives more reasonable results.
A new method to prepare colloids of size-controlled clusters from a matrix assembly cluster source
Cai, Rongsheng; Jian, Nan; Murphy, Shane; Bauer, Karl; Palmer, Richard E.
2017-05-01
A new method for the production of colloidal suspensions of physically deposited clusters is demonstrated. A cluster source has been used to deposit size-controlled clusters onto water-soluble polymer films, which are then dissolved to produce colloidal suspensions of clusters encapsulated with polymer molecules. This process has been demonstrated using different cluster materials (Au and Ag) and polymers (polyvinylpyrrolidone, polyvinyl alcohol, and polyethylene glycol). Scanning transmission electron microscopy of the clusters before and after colloidal dispersion confirms that the polymers act as stabilizing agents. We propose that this method is suitable for the production of biocompatible colloids of ultraprecise clusters.
A liquid drop model for embedded atom method cluster energies
Finley, C. W.; Abel, P. B.; Ferrante, J.
1996-01-01
Minimum energy configurations for homonuclear clusters containing from two to twenty-two atoms of six metals (Ag, Au, Cu, Ni, Pd, and Pt) have been calculated using the Embedded Atom Method (EAM). The average energy per atom as a function of cluster size has been fit to a liquid drop model, giving estimates of the surface and curvature energies. The liquid drop model gives a good representation of the relationship between average energy and cluster size. As a test, the resulting surface energies are compared to EAM surface energy calculations for various low-index crystal faces, with reasonable agreement.
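A fit of this kind can be sketched with a small least-squares routine; the functional form E(n) = a + b·n^(-1/3) + c·n^(-2/3) (bulk, surface, and curvature terms) is the standard liquid-drop expansion, and the synthetic energies in the test are illustrative, not EAM data:

```python
def solve3(A, y):
    """Solve a 3x3 linear system by Gauss-Jordan elimination with pivoting."""
    M = [row[:] + [yi] for row, yi in zip(A, y)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(3):
            if r != col:
                f = M[r][col] / M[col][col]
                M[r] = [a - f * b for a, b in zip(M[r], M[col])]
    return [M[i][3] / M[i][i] for i in range(3)]

def liquid_drop_fit(sizes, energies):
    """Least-squares fit of E(n) = a + b*n**(-1/3) + c*n**(-2/3), i.e.
    bulk, surface, and curvature terms of a liquid drop model."""
    X = [(1.0, n ** (-1 / 3), n ** (-2 / 3)) for n in sizes]
    # normal equations: (X^T X) beta = X^T y
    XtX = [[sum(r[i] * r[j] for r in X) for j in range(3)] for i in range(3)]
    Xty = [sum(r[i] * e for r, e in zip(X, energies)) for i in range(3)]
    return solve3(XtX, Xty)
```

The fitted b and c coefficients correspond to the surface and curvature energies the abstract extracts from the cluster calculations.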
CHANGE DETECTION BY FUSING ADVANTAGES OF THRESHOLD AND CLUSTERING METHODS
Directory of Open Access Journals (Sweden)
M. Tan
2017-09-01
Full Text Available In change detection (CD) of medium-resolution remote sensing images, threshold and clustering methods are two of the most popular approaches. It is found that the threshold method based on the expectation maximization (EM) algorithm usually generates a CD map with many false alarms while detecting almost all changes, whereas the fuzzy local information c-means algorithm (FLICM) obtains a homogeneous CD map but with some missed detections. Therefore, we design a framework to improve CD results by fusing the advantages of the threshold and clustering methods. Experimental results indicate the effectiveness of the proposed method.
[Cluster analysis in biomedical researches].
Akopov, A S; Moskovtsev, A A; Dolenko, S A; Savina, G D
2013-01-01
Cluster analysis is one of the most popular methods for the analysis of multi-parameter data. Cluster analysis reveals the internal structure of the data, grouping separate observations by the degree of their similarity. The review provides definitions of the basic concepts of cluster analysis and discusses the most popular clustering algorithms: k-means, hierarchical algorithms, and Kohonen network algorithms. Examples are given of the use of these algorithms in biomedical research.
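Of the algorithms surveyed, k-means is the simplest to state; a minimal stdlib sketch of Lloyd's algorithm (toy data, no relation to the review's examples):

```python
import math, random

def kmeans(points, k, iters=50, seed=0):
    """Minimal k-means (Lloyd's algorithm): alternate nearest-center
    assignment and centroid update until the centers stop moving."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        # assignment step: each point joins its nearest center
        groups = [[] for _ in range(k)]
        for p in points:
            groups[min(range(k), key=lambda c: math.dist(p, centers[c]))].append(p)
        # update step: move each center to its group's mean
        new = [tuple(sum(q[d] for q in g) / len(g)
                     for d in range(len(points[0]))) if g else centers[c]
               for c, g in enumerate(groups)]
        if new == centers:
            break
        centers = new
    return centers, groups
```

Hierarchical algorithms instead merge the closest pair of clusters repeatedly, and Kohonen networks learn centers by moving the winning unit toward each sample; both share the distance computations shown here.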
Directory of Open Access Journals (Sweden)
Ibrahim Karahan
2016-04-01
Full Text Available Let C be a nonempty closed convex subset of a real Hilbert space H. Let {T_n}: C → H be a sequence of nearly nonexpansive mappings such that F := ∩_{i=1}^{∞} F(T_i) ≠ ∅. Let V: C → H be a γ-Lipschitzian mapping and F: C → H be an L-Lipschitzian and η-strongly monotone operator. This paper deals with a modified iterative projection method for approximating a solution of the hierarchical fixed point problem. It is shown that under certain approximate assumptions on the operators and parameters, the modified iterative sequence {x_n} converges strongly to x* ∈ F, which is also the unique solution of the following variational inequality: ⟨(μF − γV)x*, x − x*⟩ ≥ 0, ∀x ∈ F. As a special case, this projection method can be used to find the minimum norm solution of the above variational inequality; namely, the unique solution x* of the quadratic minimization problem x* = argmin_{x∈F} ‖x‖². The results here improve and extend some recent corresponding results of other authors.
Directory of Open Access Journals (Sweden)
Linjun Fan
2014-01-01
Full Text Available This paper is concerned with the dynamic evolution analysis and quantitative measurement of the primary factors that cause service inconsistency in service-oriented distributed simulation applications (SODSA). Traditional methods are mostly qualitative and empirical, and they do not consider the dynamic disturbances among factors in a service's evolution behaviors such as producing, publishing, calling, and maintenance. Moreover, SODSA are rapidly evolving in terms of large-scale, reusable, compositional, pervasive, and flexible features, which presents difficulties for traditional analysis methods. To resolve these problems, a novel dynamic evolution model, extended hierarchical service-finite state automata (EHS-FSA), is constructed based on finite state automata (FSA), which formally depicts the overall changing processes of service consistency states. Service consistency evolution algorithms (SCEAs) based on EHS-FSA are then developed to quantitatively assess these impact factors. Experimental results show that bad reusability (17.93% on average) is the most influential factor, non-composition of atomic services (13.12%) is the second, and service version confusion (1.2%) is the smallest. Compared with previous qualitative analysis, the SCEAs show good effectiveness and feasibility. This research can guide engineers of service consistency technologies toward obtaining a higher level of consistency in SODSA.
Performance Analysis of Unsupervised Clustering Methods for Brain Tumor Segmentation
Directory of Open Access Journals (Sweden)
Tushar H Jaware
2013-10-01
Full Text Available Medical image processing is a challenging and emerging field of neuroscience. The ultimate goal of medical image analysis in brain MRI is to extract important clinical features that would improve methods of diagnosis and treatment of disease. This paper focuses on methods to detect and extract brain tumors from brain MR images. MATLAB is used to design a software tool for locating brain tumors based on unsupervised clustering methods. The K-means clustering algorithm is implemented and tested on a database of 30 images. A performance evaluation of the unsupervised clustering methods is presented.
Smith, Russell J; Price, James; Hudson, Michael J; Phillipps, Steven
2011-01-01
We explore the variation in stellar population ages for Coma cluster galaxies as a function of projected cluster-centric distance, using a sample of 362 red-sequence galaxies with high signal-to-noise spectroscopy. The sample spans a wide range in luminosity (0.02-4 L*) and extends from the cluster core to near the virial radius. We find a clear distinction in the observed trends of the giant and dwarf galaxies. The ages of red-sequence giants are primarily determined by galaxy mass, with only weak modulation by environment, in the sense that galaxies at larger cluster-centric distance are slightly younger. For red-sequence dwarfs (with mass <10^10 Msun), the roles of mass and environment as predictors of age are reversed: there is little dependence on mass, but strong trends with projected cluster-centric radius are observed. The average age of dwarfs at the 2.5 Mpc limit of our sample is approximately half that of dwarfs near the cluster centre. The gradient in dwarf galaxy ages is a global cluster-centr...
Visualization methods for statistical analysis of microarray clusters
Directory of Open Access Journals (Sweden)
Li Kai
2005-05-01
Full Text Available Abstract Background The most common method of identifying groups of functionally related genes in microarray data is to apply a clustering algorithm. However, it is impossible to determine which clustering algorithm is most appropriate to apply, and it is difficult to verify the results of any algorithm due to the lack of a gold-standard. Appropriate data visualization tools can aid this analysis process, but existing visualization methods do not specifically address this issue. Results We present several visualization techniques that incorporate meaningful statistics that are noise-robust for the purpose of analyzing the results of clustering algorithms on microarray data. These include a rank-based visualization method that is more robust to noise, a difference display method to aid assessments of cluster quality and detection of outliers, and a projection of high dimensional data into a three dimensional space in order to examine relationships between clusters. Our methods are interactive and are dynamically linked together for comprehensive analysis. Further, our approach applies to both protein and gene expression microarrays, and our architecture is scalable for use on both desktop/laptop screens and large-scale display devices. This methodology is implemented in GeneVAnD (Genomic Visual ANalysis of Datasets) and is available at http://function.princeton.edu/GeneVAnD. Conclusion Incorporating relevant statistical information into data visualizations is key for analysis of large biological datasets, particularly because of high levels of noise and the lack of a gold-standard for comparisons. We developed several new visualization techniques and demonstrated their effectiveness for evaluating cluster quality and relationships between clusters.
A novel clustering and supervising users' profiles method
Institute of Scientific and Technical Information of China (English)
Zhu Mingfu; Zhang Hongbin; Song Fangyun
2005-01-01
To better understand different users' accessing intentions, a novel clustering and supervising method based on accessing path is presented. This method divides users' interest space to express the distribution of users' interests, and directly to instruct the constructing process of web pages indexing for advanced performance.
2012-01-22
ICES REPORT 12-05, January 2012. M.J. Borden, E. Rank, T.J.R. Hughes: An Isogeometric Design-through-analysis Methodology based on Adaptive Hierarchical Refinement of NURBS, Immersed Boundary Methods, and T-spline CAD Surfaces.
New clustering methods for population comparison on paternal lineages.
Juhász, Z; Fehér, T; Bárány, G; Zalán, A; Németh, E; Pádár, Z; Pamjav, H
2015-04-01
The goal of this study is to show two new clustering and visualising techniques developed to find the most typical clusters of 18-dimensional Y chromosomal haplogroup frequency distributions of 90 Western Eurasian populations. The first technique called "self-organizing cloud (SOC)" is a vector-based self-learning method derived from the Self Organising Map and non-metric Multidimensional Scaling algorithms. The second technique is a new probabilistic method called the "maximal relation probability" (MRP) algorithm, based on a probability function having its local maximal values just in the condensation centres of the input data. This function is calculated immediately from the distance matrix of the data and can be interpreted as the probability that a given element of the database has a real genetic relation with at least one of the remaining elements. We tested these two new methods by comparing their results to both each other and the k-medoids algorithm. By means of these new algorithms, we determined 10 clusters of populations based on the similarity of haplogroup composition. The results obtained represented a genetically, geographically and historically well-interpretable picture of 10 genetic clusters of populations mirroring the early spread of populations from the Fertile Crescent to the Caucasus, Central Asia, Arabia and Southeast Europe. The results show that a parallel clustering of populations using SOC and MRP methods can be an efficient tool for studying the demographic history of populations sharing common genetic footprints.
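The SOC and MRP algorithms are not specified here in enough detail to reproduce, but the k-medoids baseline they are compared against operates directly on a distance matrix, as the two new methods do. A minimal Voronoi-iteration sketch on assumed toy data (4-dimensional stand-ins for the 18-dimensional haplogroup frequency vectors):

```python
import numpy as np

def k_medoids(D, k, iters=20, seed=0):
    """Voronoi-iteration k-medoids on a precomputed distance matrix D."""
    rng = np.random.default_rng(seed)
    medoids = rng.choice(len(D), k, replace=False)
    labels = D[:, medoids].argmin(axis=1)
    for _ in range(iters):
        # each cluster's new medoid minimises total distance to its members
        medoids = np.array([np.flatnonzero(labels == j)[
            D[np.ix_(labels == j, labels == j)].sum(axis=1).argmin()]
            for j in range(k)])
        new_labels = D[:, medoids].argmin(axis=1)
        if (new_labels == labels).all():
            break
        labels = new_labels
    return medoids, labels

# two "populations" with clearly different frequency profiles (toy data)
X = np.array([[0.70, 0.20, 0.10, 0.00],
              [0.65, 0.25, 0.10, 0.00],
              [0.72, 0.18, 0.08, 0.02],
              [0.10, 0.10, 0.20, 0.60],
              [0.12, 0.08, 0.22, 0.58],
              [0.08, 0.12, 0.18, 0.62]])
D = np.linalg.norm(X[:, None] - X[None], axis=2)
medoids, labels = k_medoids(D, 2)
```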
Ye, Meidan; Liu, Hsiang-Yu; Lin, Changjian; Lin, Zhiqun
2013-01-28
Dye-sensitized solar cells (DSSCs) based on hierarchical rutile TiO(2) flower clusters prepared by a facile, one-pot hydrothermal process exhibit a high efficiency. Complex yet appealing rutile TiO(2) flower films are, for the first time, directly hydrothermally grown on a transparent conducting fluorine-doped tin oxide (FTO) substrate. The thickness and density of as-grown flower clusters can be readily tuned by tailoring growth parameters, such as growth time, the addition of cations of different valence and size, initial concentrations of precursor and cation, growth temperature, and acidity. Notably, the small lattice mismatch between the FTO substrate and rutile TiO(2) renders the epitaxial growth of a compact rutile TiO(2) layer on the FTO glass. Intriguingly, these TiO(2) flower clusters can then be exploited as photoanodes to produce DSSCs, yielding a power conversion efficiency of 2.94% despite their rutile nature, which is further increased to 4.07% upon the TiCl(4) treatment.
Vinayaka : A Semi-Supervised Projected Clustering Method Using Differential Evolution
Satish Gajawada; Durga Toshniwal
2012-01-01
Differential Evolution (DE) is an algorithm for evolutionary optimization. Clustering problems have been solved by using DE based clustering methods, but these methods may fail to find clusters hidden in subspaces of high dimensional datasets. Subspace and projected clustering methods have been proposed in literature to find subspace clusters that are present in subspaces of dataset. In this paper we propose VINAYAKA, a semi-supervised projected clustering method based on DE. In this method DE opt...
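The abstract is cut off before describing how DE is applied; one common way DE-based clustering works, shown here as a hedged sketch rather than VINAYAKA itself, is to encode all cluster centres in one candidate vector and let DE/rand/1/bin minimise the within-cluster sum of squared errors:

```python
import numpy as np

rng = np.random.default_rng(0)

def sse(centers, X, k):
    """Within-cluster sum of squared errors for a flat centre vector."""
    C = centers.reshape(k, X.shape[1])
    d = ((X[:, None, :] - C[None, :, :]) ** 2).sum(axis=-1)
    return d.min(axis=1).sum()

def de_cluster(X, k=2, pop=20, gens=100, F=0.5, CR=0.9):
    """DE/rand/1/bin over concatenated cluster-centre coordinates."""
    dim = k * X.shape[1]
    P = rng.uniform(X.min(), X.max(), (pop, dim))
    fit = np.array([sse(p, X, k) for p in P])
    for _ in range(gens):
        for i in range(pop):
            idx = rng.choice([j for j in range(pop) if j != i], 3, replace=False)
            a, b, c = P[idx]
            # binomial crossover between the mutant a + F(b - c) and P[i]
            trial = np.where(rng.random(dim) < CR, a + F * (b - c), P[i])
            f = sse(trial, X, k)
            if f < fit[i]:          # greedy selection
                P[i], fit[i] = trial, f
    best = P[fit.argmin()]
    return best.reshape(k, X.shape[1]), float(fit.min())

X = np.vstack([rng.normal(0, 0.2, (20, 2)), rng.normal(5, 0.2, (20, 2))])
centers, cost = de_cluster(X)
```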
A hierarchical updating method for finite element model of airbag buffer system under landing impact
Institute of Scientific and Technical Information of China (English)
He Huan; Chen Zhe; He Cheng; Ni Lei; Chen Guoping
2015-01-01
In this paper, we propose an impact finite element (FE) model for an airbag landing buffer system. First, an impact FE model has been formulated for a typical airbag landing buffer system. We use the independence of the structure FE model from the full impact FE model to develop a hierarchical updating scheme for the recovery module FE model and the airbag system FE model. Second, we define impact responses at key points to compare the computational and experimental results to resolve the inconsistency between the experimental data sampling frequency and experimental triggering. To determine the typical characteristics of the impact dynamics response of the airbag landing buffer system, we present the impact response confidence factors (IRCFs) to evaluate how consistent the computational and experiment results are. An error function is defined between the experimental and computational results at key points of the impact response (KPIR) to serve as a modified objective function. A radial basis function (RBF) is introduced to construct updating variables for a surrogate model for updating the objective function, thereby converting the FE model updating problem to a soluble optimization problem. Finally, the developed method has been validated using an experimental and computational study on the impact dynamics of a classic airbag landing buffer system.
Directory of Open Access Journals (Sweden)
Xiang Zhou
2014-02-01
Full Text Available To address the problem of how to carry out task scheduling and data replication effectively in the grid and to reduce task execution time, this paper proposes a task scheduling algorithm and an optimal dynamic data replication algorithm, and builds a scheme that effectively combines the two. First, the scheme adopts the ISS algorithm, which considers the number of tasks in the waiting queue, the location of the data a task demands, and the computing capacity of each site; using hierarchical scheduling of the network structure, it computes a comprehensive task cost with properly weighted factors and searches out the best compute node area. Then the ODHRA algorithm is adopted to analyse data transmission time, memory access latency, waiting copy requests in the queue, and the distance between nodes, choosing the best replica location among many copies, combined with replica placement and replica management, to reduce file access time. The simulation results show that the proposed scheme has better performance than other algorithms in terms of average task execution time.
Hierarchical Bayesian methods for estimation of parameters in a longitudinal HIV dynamic system.
Huang, Yangxin; Liu, Dacheng; Wu, Hulin
2006-06-01
HIV dynamics studies have significantly contributed to the understanding of HIV infection and antiviral treatment strategies. But most studies are limited to short-term viral dynamics due to the difficulty of establishing a relationship of antiviral response with multiple treatment factors such as drug exposure and drug susceptibility during long-term treatment. In this article, a mechanism-based dynamic model is proposed for characterizing long-term viral dynamics with antiretroviral therapy, described by a set of nonlinear differential equations without closed-form solutions. In this model we directly incorporate drug concentration, adherence, and drug susceptibility into a function of treatment efficacy, defined as an inhibition rate of virus replication. We investigate a Bayesian approach under the framework of hierarchical Bayesian (mixed-effects) models for estimating unknown dynamic parameters. In particular, interest focuses on estimating individual dynamic parameters. The proposed methods not only help to alleviate the difficulty in parameter identifiability, but also flexibly deal with sparse and unbalanced longitudinal data from individual subjects. For illustration purposes, we present one simulation example to implement the proposed approach and apply the methodology to a data set from an AIDS clinical trial. The basic concept of the longitudinal HIV dynamic systems and the proposed methodologies are generally applicable to any other biomedical dynamic systems.
A hierarchical updating method for finite element model of airbag buffer system under landing impact
Directory of Open Access Journals (Sweden)
He Huan
2015-12-01
Full Text Available In this paper, we propose an impact finite element (FE model for an airbag landing buffer system. First, an impact FE model has been formulated for a typical airbag landing buffer system. We use the independence of the structure FE model from the full impact FE model to develop a hierarchical updating scheme for the recovery module FE model and the airbag system FE model. Second, we define impact responses at key points to compare the computational and experimental results to resolve the inconsistency between the experimental data sampling frequency and experimental triggering. To determine the typical characteristics of the impact dynamics response of the airbag landing buffer system, we present the impact response confidence factors (IRCFs to evaluate how consistent the computational and experiment results are. An error function is defined between the experimental and computational results at key points of the impact response (KPIR to serve as a modified objective function. A radial basis function (RBF is introduced to construct updating variables for a surrogate model for updating the objective function, thereby converting the FE model updating problem to a soluble optimization problem. Finally, the developed method has been validated using an experimental and computational study on the impact dynamics of a classic airbag landing buffer system.
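The RBF surrogate step can be sketched in one dimension: sample the expensive objective (here a hypothetical quadratic stand-in for the KPIR error function, not the paper's model), fit a Gaussian RBF interpolant, and minimise the cheap surrogate on a dense grid:

```python
import numpy as np

def rbf_fit(X, y, eps=2.0):
    """Solve for Gaussian RBF weights that interpolate (X, y)."""
    Phi = np.exp(-eps * (X[:, None] - X[None, :]) ** 2)
    return np.linalg.solve(Phi, y)

def rbf_eval(Xq, X, w, eps=2.0):
    """Evaluate the fitted surrogate at query points Xq."""
    return np.exp(-eps * (Xq[:, None] - X[None, :]) ** 2) @ w

f = lambda x: (x - 1.2) ** 2        # stand-in for the expensive FE error
X = np.linspace(-2.0, 4.0, 10)      # a few "expensive" objective samples
w = rbf_fit(X, f(X))
grid = np.linspace(-2.0, 4.0, 601)  # minimise the cheap surrogate instead
x_best = grid[rbf_eval(grid, X, w).argmin()]
```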
Institute of Scientific and Technical Information of China (English)
LIU Ya-jing; MAO Shan-jun; LI Mei; YAO Ji-ming
2007-01-01
Coal mine safety is a complex system, which is controlled by a number of interrelated factors and is difficult to estimate. This paper proposes an index system of safety assessment based on correlated factors involved in coal mining and a comprehensive evaluation model that combines the advantages of the AHP and a grey clustering method to guarantee the accuracy and objectivity of weight coefficients. First, we confirmed the weight of every index using the AHP, then did a general safety assessment by means of a grey clustering method. This model analyses the status of mining safety both qualitatively and quantitatively. It keeps management and technical groups informed of the situation of the coal production line in real time, which aids in making correct decisions based on practical safety issues. A case study in the application of the model is presented. The results show that the method is applicable and realistic with regard to the core of a coal mine's safety management. Consequently, the safe production of a mine and the awareness of advanced safe production management is accelerated.
He, Fang; Chen, Xi
2016-11-01
The accelerating accumulation and risk concentration of Chinese local financing platforms debts have attracted wide attention throughout the world. Due to the network of financial exposures among institutions, the failure of several platforms or regions of systemic importance will probably trigger systemic risk and destabilize the financial system. However, the complex network of credit relationships in Chinese local financing platforms at the state level remains unknown. To fill this gap, we presented the first complex networks and hierarchical cluster analysis of the credit market of Chinese local financing platforms using the "bottom up" method from firm-level data. Based on balance-sheet channel, we analyzed the topology and taxonomy by applying the analysis paradigm of subdominant ultra-metric space to an empirical data in 2013. It is remarked that we chose to extract the network of co-financed financing platforms in order to evaluate the effect of risk contagion from platforms to bank system. We used the new credit similarity measure by combining the factor of connectivity and size, to extract minimal spanning trees (MSTs) and hierarchical trees (HTs). We found that: (1) the degree distributions of credit correlation backbone structure of Chinese local financing platforms are fat tailed, and the structure is unstable with respect to targeted failures; (2) the backbone is highly hierarchical, and largely explained by the geographic region; (3) the credit correlation backbone structure based on connectivity and size is significantly heterogeneous; (4) key platforms and regions of systemic importance, and contagion path of systemic risk are obtained, which contribute to preventing systemic risk and regional risk of Chinese local financing platforms and preserving financial stability under the framework of macro prudential supervision. Our approach of credit similarity measure provides a means of recognizing "systemically important" institutions and regions.
Report of a Workshop on Parallelization of Coupled Cluster Methods
Energy Technology Data Exchange (ETDEWEB)
Rodney J. Bartlett Erik Deumens
2008-05-08
The benchmark ab initio quantum mechanical methods for molecular structure and spectra are now recognized to be coupled-cluster theory. To benefit from the transition to tera- and petascale computers, such coupled-cluster methods must be created to run in a scalable fashion. This Workshop, held as a part of the 48th annual Sanibel meeting at St. Simons Island, GA, addressed that issue. Representatives of all the principal scientific groups who are addressing this topic were in attendance, to exchange information about the problem and to identify what needs to be done in the future. This report summarizes the conclusions of the workshop.
Agent-based method for distributed clustering of textual information
Potok, Thomas E [Oak Ridge, TN; Reed, Joel W [Knoxville, TN; Elmore, Mark T [Oak Ridge, TN; Treadwell, Jim N [Louisville, TN
2010-09-28
A computer method and system for storing, retrieving and displaying information has a multiplexing agent (20) that calculates a new document vector (25) for a new document (21) to be added to the system and transmits the new document vector (25) to master cluster agents (22) and cluster agents (23) for evaluation. These agents (22, 23) perform the evaluation and return values upstream to the multiplexing agent (20) based on the similarity of the document to documents stored under their control. The multiplexing agent (20) then sends the document (21) and the document vector (25) to the master cluster agent (22), which then forwards it to a cluster agent (23) or creates a new cluster agent (23) to manage the document (21). The system also searches for stored documents according to a search query having at least one term and identifying the documents found in the search, and displays the documents in a clustering display (80) of similarity so as to indicate similarity of the documents to each other.
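The routing decision the multiplexing and cluster agents make can be pictured as nearest-centroid matching on document vectors; the cosine threshold and helper below are illustrative assumptions, not the patented system's actual rules:

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two document vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def route_document(doc_vec, cluster_centroids, threshold=0.8):
    """Return the index of the most similar cluster, or -1 to signal
    that a new cluster agent should be created for this document."""
    if not cluster_centroids:
        return -1
    sims = [cosine(doc_vec, c) for c in cluster_centroids]
    best = int(np.argmax(sims))
    return best if sims[best] >= threshold else -1

clusters = [np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])]
print(route_document(np.array([0.9, 0.1, 0.0]), clusters))   # joins cluster 0
print(route_document(np.array([0.0, 0.0, 1.0]), clusters))   # no match: new agent
```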
Cluster-in-molecule local correlation method for large systems
Institute of Scientific and Technical Information of China (English)
LI Wei; LI ShuHua
2014-01-01
A linear scaling local correlation method, the cluster-in-molecule (CIM) method, was developed in the last decade for large systems. The basic idea of the CIM method is that the electron correlation energy of a large system, within Møller-Plesset perturbation theory (MP) or coupled cluster (CC) theory, can be approximately obtained from solving the corresponding MP or CC equations of various clusters. Each such cluster consists of a subset of localized molecular orbitals (LMOs) of the target system, and can be treated independently at various theory levels. In the present article, the main idea of the CIM method is reviewed, followed by brief descriptions of some recent developments, including its multilevel extension and different ways of constructing clusters. Then, some applications for large systems are illustrated. The CIM method is shown to be an efficient and reliable method for electron correlation calculations of large systems, including biomolecules and supramolecular complexes.
Cluster Monte Carlo methods for the FePt Hamiltonian
Lyberatos, A.; Parker, G. J.
2016-02-01
Cluster Monte Carlo methods for the classical spin Hamiltonian of FePt with long range exchange interactions are presented. We use a combination of the Swendsen-Wang (or Wolff) and Metropolis algorithms that satisfies the detailed balance condition and ergodicity. The algorithms are tested by calculating the temperature dependence of the magnetization, susceptibility and heat capacity of L10-FePt nanoparticles in a range including the critical region. The cluster models yield numerical results in good agreement within statistical error with the standard single-spin flipping Monte Carlo method. The variation of the spin autocorrelation time with grain size is used to deduce the dynamic exponent of the algorithms. Our cluster models do not provide a more accurate estimate of the magnetic properties at equilibrium.
Marston, Louise; Peacock, Janet L; Yu, Keming; Brocklehurst, Peter; Calvert, Sandra A; Greenough, Anne; Marlow, Neil
2009-07-01
Studies of prematurely born infants contain a relatively large percentage of multiple births, so the resulting data have a hierarchical structure with small clusters of size 1, 2 or 3. Ignoring the clustering may lead to incorrect inferences. The aim of this study was to compare statistical methods which can be used to analyse such data: generalised estimating equations, multilevel models, multiple linear regression and logistic regression. Four datasets which differed in total size and in percentage of multiple births (n = 254, multiple 18%; n = 176, multiple 9%; n = 10 098, multiple 3%; n = 1585, multiple 8%) were analysed. With the continuous outcome, two-level models produced similar results in the larger dataset, while generalised least squares multilevel modelling (ML GLS 'xtreg' in Stata) and maximum likelihood multilevel modelling (ML MLE 'xtmixed' in Stata) produced divergent estimates using the smaller dataset. For the dichotomous outcome, most methods, except generalised least squares multilevel modelling (ML GH 'xtlogit' in Stata) gave similar odds ratios and 95% confidence intervals within datasets. For the continuous outcome, our results suggest using multilevel modelling. We conclude that generalised least squares multilevel modelling (ML GLS 'xtreg' in Stata) and maximum likelihood multilevel modelling (ML MLE 'xtmixed' in Stata) should be used with caution when the dataset is small. Where the outcome is dichotomous and there is a relatively large percentage of non-independent data, it is recommended that these are accounted for in analyses using logistic regression with adjusted standard errors or multilevel modelling. If, however, the dataset has a small percentage of clusters greater than size 1 (e.g. a population dataset of children where there are few multiples) there appears to be less need to adjust for clustering.
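Setting aside the study's Stata-specific comparisons, its core point, that within-cluster correlation (e.g. twins and triplets) must be reflected in the standard errors, can be sketched with a cluster-robust (sandwich) variance for OLS; the data and effect sizes below are simulated assumptions:

```python
import numpy as np

def ols_cluster_se(X, y, groups):
    """OLS point estimates with cluster-robust (sandwich) standard errors."""
    XtX_inv = np.linalg.inv(X.T @ X)
    beta = XtX_inv @ X.T @ y
    e = y - X @ beta
    k = X.shape[1]
    meat = np.zeros((k, k))
    for g in np.unique(groups):
        s = X[groups == g].T @ e[groups == g]   # per-cluster score sum
        meat += np.outer(s, s)
    V = XtX_inv @ meat @ XtX_inv                # sandwich variance
    return beta, np.sqrt(np.diag(V))

rng = np.random.default_rng(1)
groups = np.repeat(np.arange(100), 2)           # 100 "families" of twins
u = np.repeat(rng.normal(0.0, 1.0, 100), 2)     # shared within-cluster effect
x = rng.normal(size=200)
y = 1.0 + 0.5 * x + u + rng.normal(0.0, 0.1, 200)
X = np.column_stack([np.ones(200), x])
beta, se = ols_cluster_se(X, y, groups)
```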
A novel method for a multi-level hierarchical composite with brick-and-mortar structure.
Brandt, Kristina; Wolff, Michael F H; Salikov, Vitalij; Heinrich, Stefan; Schneider, Gerold A
2013-01-01
The fascination for hierarchically structured hard tissues such as enamel or nacre arises from their unique structure-properties relationship. During the last decades this has motivated numerous syntheses of composites mimicking the brick-and-mortar structure of nacre. However, there is still a lack of synthetic engineering materials displaying a true hierarchical structure. Here, we present a novel multi-step processing route for anisotropic 2-level hierarchical composites by combining different coating techniques on different length scales. It comprises polymer-encapsulated ceramic particles as building blocks for the first level, followed by spouted bed spray granulation for a second level, and finally directional hot pressing to anisotropically consolidate the composite. The microstructure achieved reveals a brick-and-mortar hierarchical structure with distinct, however not yet optimized, mechanical properties on each level. It opens up a completely new processing route for the synthesis of multi-level hierarchically structured composites, giving prospects to multi-functional structure-properties relationships.
Select and Cluster: A Method for Finding Functional Networks of Clustered Voxels in fMRI
DonGiovanni, Danilo
2016-01-01
Extracting functional connectivity patterns among cortical regions in fMRI datasets is a challenge stimulating the development of effective data-driven or model based techniques. Here, we present a novel data-driven method for the extraction of significantly connected functional ROIs directly from the preprocessed fMRI data without relying on a priori knowledge of the expected activations. This method finds spatially compact groups of voxels which show a homogeneous pattern of significant connectivity with other regions in the brain. The method, called Select and Cluster (S&C), consists of two steps: first, a dimensionality reduction step based on a blind multiresolution pairwise correlation by which the subset of all cortical voxels with significant mutual correlation is selected and the second step in which the selected voxels are grouped into spatially compact and functionally homogeneous ROIs by means of a Support Vector Clustering (SVC) algorithm. The S&C method is described in detail. Its performance assessed on simulated and experimental fMRI data is compared to other methods commonly used in functional connectivity analyses, such as Independent Component Analysis (ICA) or clustering. S&C method simplifies the extraction of functional networks in fMRI by identifying automatically spatially compact groups of voxels (ROIs) involved in whole brain scale activation networks. PMID:27656202
Quantum Monte Carlo methods and lithium cluster properties
Energy Technology Data Exchange (ETDEWEB)
Owen, R.K.
1990-12-01
Properties of small lithium clusters with sizes ranging from n = 1 to 5 atoms were investigated using quantum Monte Carlo (QMC) methods. Cluster geometries were found from complete active space self consistent field (CASSCF) calculations. A detailed development of the QMC method leading to the variational QMC (V-QMC) and diffusion QMC (D-QMC) methods is shown. The many-body aspect of electron correlation is introduced into the QMC importance sampling electron-electron correlation functions by using density dependent parameters, which are shown to increase the amount of correlation energy obtained in V-QMC calculations. A detailed analysis of D-QMC time-step bias is made and the bias is found to be at least linear with respect to the time-step. The D-QMC calculations determined the lithium cluster ionization potentials to be 0.1982(14) [0.1981], 0.1895(9) [0.1874(4)], 0.1530(34) [0.1599(73)], 0.1664(37) [0.1724(110)], 0.1613(43) [0.1675(110)] Hartrees for lithium clusters n = 1 through 5, respectively, in good agreement with the experimental results shown in brackets. Also, the binding energies per atom were computed to be 0.0177(8) [0.0203(12)], 0.0188(10) [0.0220(21)], 0.0247(8) [0.0310(12)], 0.0253(8) [0.0351(8)] Hartrees for lithium clusters n = 2 through 5, respectively. The lithium cluster one-electron density is shown to have charge concentrations corresponding to nonnuclear attractors. The overall shape of the electronic charge density also bears a remarkable similarity with the anisotropic harmonic oscillator model shape for the given number of valence electrons.
Analysis of protein profiles using fuzzy clustering methods
DEFF Research Database (Denmark)
Karemore, Gopal Raghunath; Ukendt, Sujatha; Rai, Lavanya
clustering methods for their classification followed by various validation measures. The clustering algorithms used for the study were K- means, K- medoid, Fuzzy C-means, Gustafson-Kessel, and Gath-Geva. The results presented in this study conclude that the protein profiles of tissue...... samples recorded by using the HPLC- LIF system and the data analyzed by clustering algorithms quite successfully classifies them as belonging from normal and malignant conditions....
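Of the algorithms listed, Fuzzy C-means is the simplest to sketch: it alternates weighted centre updates with soft membership updates until the partition stabilises. A minimal numpy version on assumed two-cluster toy data (not the HPLC-LIF protein profiles):

```python
import numpy as np

def fuzzy_cmeans(X, c=2, m=2.0, iters=50, seed=0):
    """Minimal fuzzy C-means: returns memberships U (n x c) and centres."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)          # rows are soft assignments
    for _ in range(iters):
        W = U ** m                             # fuzzified weights
        centres = (W.T @ X) / W.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2) + 1e-12
        inv = d ** (-2.0 / (m - 1.0))          # standard membership update
        U = inv / inv.sum(axis=1, keepdims=True)
    return U, centres

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.3, (15, 2)), rng.normal(4, 0.3, (15, 2))])
U, centres = fuzzy_cmeans(X)
```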
Energy Technology Data Exchange (ETDEWEB)
Mamani-Calcina, Elmer Antonio; Landgraf, Fernando Jose Gomes; Azevedo, Cesar Roberto de Farias, E-mail: c.azevedo@usp.br [Universidade de Sao Paulo (USP), Sao Paulo, SP (Brazil). Escola Politecnica. Departmento de Engenharia Metalurgica e de Materiais
2017-01-15
Microstructural characterization techniques, including EDX (Energy Dispersive X-ray Analysis) microanalyses, were used to investigate the slag inclusions in the microstructure of ferrous artifacts of the Royal Iron Factory of Sao Joao de Ipanema (first steel plant of Brazil, XIX century), the D. Pedro II Bridge (located in Bahia, assembled in XIX century and produced in Scotland) and the archaeological sites of Sao Miguel de Missoes (Rio Grande do Sul, Brazil, production site of iron artifacts, the XVIII century) and Afonso Sardinha (Sao Paulo, Brazil production site of iron artifacts, XVI century). The microanalyses results of the main micro constituents of the microstructure of the slag inclusions were investigated by hierarchical cluster analysis and the dendrogram with the microanalyses results of the wüstite phase (using as critical variables the contents of MnO, MgO, Al2O3, V2O5 and TiO2) allowed the identification of four clusters, which successfully represented the samples of the four investigated sites (Ipanema, Sardinha, Missoes and Bahia). Finally, the comparatively low volumetric fraction of slag inclusions in the samples of Ipanema (∼1%) suggested the existence of technological expertise at the iron making processing in the Royal Iron Factory of Sao Joao de Ipanema. (author)
A PROBABILISTIC EMBEDDING CLUSTERING METHOD FOR URBAN STRUCTURE DETECTION
Directory of Open Access Journals (Sweden)
X. Lin
2017-09-01
Full Text Available Urban structure detection is a basic task in urban geography. Clustering is a core technology to detect the patterns of urban spatial structure, urban functional regions, and so on. In the big data era, diverse urban sensing datasets recording information like human behaviour and human social activity suffer from complexity in high dimension and high noise. Unfortunately, the state-of-the-art clustering methods do not handle high dimension and high noise concurrently. In this paper, a probabilistic embedding clustering method is proposed. Firstly, we come up with a Probabilistic Embedding Model (PEM) to find latent features from high dimensional urban sensing data by "learning" via a probabilistic model. By latent features, we can catch essential features hidden in high dimensional data, known as patterns; with the probabilistic model, we can also reduce uncertainty caused by high noise. Secondly, through tuning the parameters, our model can discover two kinds of urban structure, homophily and structural equivalence, which correspond to communities with intensive interaction and to nodes playing the same roles in the urban structure, respectively. We evaluated the performance of our model by conducting experiments on real-world data; experiments with data from Shanghai (China) proved that our method can discover both kinds of urban structure.
Histological image segmentation using fast mean shift clustering method
Wu, Geming; Zhao, Xinyan; Luo, Shuqian; Shi, Hongli
2015-01-01
Background Colour image segmentation is fundamental and critical for quantitative histological image analysis. The complexity of the microstructure and the way histological images are acquired result in variable staining and illumination. Moreover, the ultra-high resolution of histological images makes it hard for segmentation methods to achieve high-quality results at low computational cost. Methods Mean Shift clustering approach is employed for histol...
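The truncated Methods section names mean shift; its flat-kernel form is short enough to sketch. Each point is iteratively moved to the mean of the original points inside its bandwidth window, so points belonging to the same mode converge to the same location (toy 2-D stand-in data assumed, not histological pixels):

```python
import numpy as np

def mean_shift(X, bandwidth=1.0, iters=30):
    """Flat-kernel mean shift: repeatedly shift each point to the mean
    of the original points lying within `bandwidth` of it."""
    modes = X.copy()
    for _ in range(iters):
        for i in range(len(modes)):
            nbrs = X[np.linalg.norm(X - modes[i], axis=1) <= bandwidth]
            modes[i] = nbrs.mean(axis=0)
    return modes

X = np.array([[0.0, 0.0], [0.2, 0.0], [0.0, 0.2],
              [5.0, 5.0], [5.2, 5.0], [5.0, 5.2]])
modes = mean_shift(X)   # two groups collapse onto two modes
```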
Clustering Method in Data Mining (数据挖掘中的聚类方法)
Institute of Scientific and Technical Information of China (English)
王实; 高文
2000-01-01
In this paper we introduce clustering methods in Data Mining. Clustering has been studied very deeply; in the field of Data Mining, clustering faces a new situation. We summarize the major clustering methods and introduce four kinds of clustering method that have been used broadly in Data Mining. Finally we draw the conclusion that the distance-based partitional clustering method in data mining is a typical two-phase iteration process: 1) assign each object to a cluster; 2) update the cluster centers.
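The two-phase iteration named in the conclusion, assign then update, is exactly the k-means loop; a minimal sketch (the naive first-k-points initialisation is an assumption for the demo, not part of the paper):

```python
import numpy as np

def kmeans(X, k, iters=100):
    """Two-phase iteration: 1) assign each point to its nearest centre;
    2) update each centre to the mean of its assigned points."""
    centres = X[:k].copy()          # naive init: first k points (demo only)
    for _ in range(iters):
        labels = np.linalg.norm(X[:, None] - centres[None], axis=2).argmin(1)
        new = np.array([X[labels == j].mean(axis=0) for j in range(k)])
        if np.allclose(new, centres):
            break                   # assignments stopped changing
        centres = new
    return labels, centres

# points from two blobs, interleaved so X[:2] spans both
X = np.array([[0.0, 0.0], [5.0, 5.0], [0.2, 0.1],
              [5.1, 4.9], [0.1, 0.3], [4.9, 5.2]])
labels, centres = kmeans(X, 2)
```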
Co3O4–ZnO hierarchical nanostructures by electrospinning and hydrothermal methods
DEFF Research Database (Denmark)
Kanjwal, Muzafar Ahmed; Sheikh, Faheem A.; Barakat, Nasser A.M.
2011-01-01
A new hierarchical nanostructure that consists of cobalt oxide (Co3O4) and zinc oxide (ZnO) was produced by the electrospinning process followed by a hydrothermal technique. First, electrospinning of a colloidal solution that consisted of zinc nanoparticles, cobalt acetate tetrahydrate and poly...
An accessible method for implementing hierarchical models with spatio-temporal abundance data
Ross, Beth E.; Hooten, Melvin B.; Koons, David N.
2012-01-01
A common goal in ecology and wildlife management is to determine the causes of variation in population dynamics over long periods of time and across large spatial scales. Many assumptions must nevertheless be overcome to make appropriate inference about spatio-temporal variation in population dynamics, such as autocorrelation among data points, excess zeros, and observation error in count data. To address these issues, many scientists and statisticians have recommended the use of Bayesian hierarchical models. Unfortunately, hierarchical statistical models remain somewhat difficult to use because of the necessary quantitative background needed to implement them, or because of the computational demands of using Markov Chain Monte Carlo algorithms to estimate parameters. Fortunately, new tools have recently been developed that make it more feasible for wildlife biologists to fit sophisticated hierarchical Bayesian models (i.e., Integrated Nested Laplace Approximation, ‘INLA’). We present a case study using two important game species in North America, the lesser and greater scaup, to demonstrate how INLA can be used to estimate the parameters in a hierarchical model that decouples observation error from process variation, and accounts for unknown sources of excess zeros as well as spatial and temporal dependence in the data. Ultimately, our goal was to make unbiased inference about spatial variation in population trends over time.
A Hierarchical Bayesian M/EEG Imaging Method Correcting for Incomplete Spatio-Temporal Priors
DEFF Research Database (Denmark)
Stahlhut, Carsten; Attias, Hagai T.; Sekihara, Kensuke;
2013-01-01
In this paper we present a hierarchical Bayesian model, to tackle the highly ill-posed problem that follows with MEG and EEG source imaging. Our model promotes spatiotemporal patterns through the use of both spatial and temporal basis functions. While in contrast to most previous spatio-temporal ...
Semi-supervised consensus clustering for gene expression data analysis
Wang, Yunli; Pan, Youlian
2014-01-01
Background: Simple clustering methods such as hierarchical clustering and k-means are widely used for gene expression data analysis, but they are unable to deal with the noise and high dimensionality associated with microarray gene expression data. Consensus clustering appears to improve the robustness and quality of clustering results. Incorporating prior knowledge into the clustering process (semi-supervised clustering) has been shown to improve the consistency between the data partitioning and do...
Critérios de formação de carteiras de ativos por meio de Hierarchical Clusters
Directory of Open Access Journals (Sweden)
Pierre Lucena
2010-04-01
Full Text Available The main objective of this article is to present and test a multivariate statistical tool in financial models. This methodology, known as cluster analysis, separates observations into groups according to their characteristics, in contrast with the traditional methodology, which simply orders observations by quantiles. The tool was applied to 213 stocks traded on the São Paulo Stock Exchange (Bovespa), separating the groups by size and book-to-market. The new portfolios were then applied to the Fama and French (1996) model, comparing the results of portfolio formation by quantiles and by cluster analysis. Better results were obtained with the second methodology. The authors conclude that cluster analysis may be more appropriate because it tends to form more homogeneous groups, making it useful for portfolio formation and for financial theory.
Distinguishing Functional DNA Words; A Method for Measuring Clustering Levels
Moghaddasi, Hanieh; Khalifeh, Khosrow; Darooneh, Amir Hossein
2017-01-01
Functional DNA sub-sequences and genome elements are spatially clustered through the genome just as keywords in literary texts. Therefore, some of the methods for ranking words in texts can also be used to compare different DNA sub-sequences. In analogy with the literary texts, here we claim that the distribution of distances between the successive sub-sequences (words) is q-exponential which is the distribution function in non-extensive statistical mechanics. Thus the q-parameter can be used as a measure of words clustering levels. Here, we analyzed the distribution of distances between consecutive occurrences of 16 possible dinucleotides in human chromosomes to obtain their corresponding q-parameters. We found that CG as a biologically important two-letter word concerning its methylation, has the highest clustering level. This finding shows the predicting ability of the method in biology. We also proposed that chromosome 18 with the largest value of q-parameter for promoters of genes is more sensitive to dietary and lifestyle. We extended our study to compare the genome of some selected organisms and concluded that the clustering level of CGs increases in higher evolutionary organisms compared to lower ones. PMID:28128320
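The distance statistic the abstract relies on can be illustrated with a small sketch: collect the gaps between successive (possibly overlapping) occurrences of a two-letter word such as CG. The sequence below is purely illustrative, and fitting the q-exponential to the resulting gap histogram is a separate step not shown here:

```python
import re

def occurrence_gaps(seq, word):
    # Start positions of (possibly overlapping) occurrences of `word`,
    # found with a zero-width lookahead so adjacent matches are not skipped.
    starts = [m.start() for m in re.finditer("(?=" + re.escape(word) + ")", seq)]
    # Distances between successive occurrences.
    return [b - a for a, b in zip(starts, starts[1:])]

print(occurrence_gaps("ACGTCGACGG", "CG"))  # → [3, 3]
```

The distribution of these gaps, accumulated over a whole chromosome, is what the q-parameter is fitted to.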
An improved unsupervised clustering-based intrusion detection method
Hai, Yong J.; Wu, Yu; Wang, Guo Y.
2005-03-01
Practical Intrusion Detection Systems (IDSs) based on data mining face two key problems: discovering intrusion knowledge from real-time network data, and automatically updating that knowledge when new intrusions appear. Most data mining algorithms work on labeled data. To set up a basic data set for mining, huge volumes of network data need to be collected and labeled manually. In fact, labeling intrusions is difficult and impractical, which has been a major restriction for current IDSs and has limited their ability to identify all kinds of intrusion types. An improved unsupervised clustering-based intrusion model working on unlabeled training data is introduced. In this model, the center of a cluster is defined and used as a substitute for that cluster, and all cluster centers are then used to detect intrusions. In tests on the KDDCUP'99 data sets, experimental results demonstrate that our method achieves a good detection rate. Furthermore, an incremental-learning method is adopted to detect unknown-type intrusions, and it decreases the false positive rate.
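The detection rule sketched in the abstract, comparing each record against the stored cluster centers, can be written in a few lines. This is a hedged illustration: the threshold, feature values, and flagging rule below are hypothetical placeholders, not values from the paper:

```python
import numpy as np

def detect_intrusions(records, centers, threshold):
    # Distance from every record to every cluster center.
    d = np.linalg.norm(records[:, None, :] - centers[None, :, :], axis=2)
    # A record is flagged when even its nearest center lies beyond the threshold.
    return d.min(axis=1) > threshold

centers = np.array([[0.0, 0.0], [10.0, 10.0]])   # stand-ins for learned centers
records = np.array([[0.2, 0.1], [5.0, 5.0]])     # one normal record, one outlier
flags = detect_intrusions(records, centers, threshold=2.0)
```

In a real deployment the centers would come from clustering unlabeled traffic and the threshold from a held-out quantile of nearest-center distances.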
Unbiased methods for removing systematics from galaxy clustering measurements
Elsner, Franz; Peiris, Hiranya V
2015-01-01
Measuring the angular clustering of galaxies as a function of redshift is a powerful method for extracting information from the three-dimensional galaxy distribution. The precision of such measurements will dramatically increase with ongoing and future wide-field galaxy surveys. However, these are also increasingly sensitive to observational and astrophysical contaminants. Here, we study the statistical properties of three methods proposed for controlling such systematics - template subtraction, basic mode projection, and extended mode projection - all of which make use of externally supplied template maps, designed to characterise and capture the spatial variations of potential systematic effects. Based on a detailed mathematical analysis, and in agreement with simulations, we find that the template subtraction method in its original formulation returns biased estimates of the galaxy angular clustering. We derive closed-form expressions that should be used to correct results for this shortcoming. Turning to th...
Wang, Ke; Wu, Yuefang; Zhang, Huawei
2011-01-01
We present Submillimeter Array (SMA) λ = 0.88 mm observations of the infrared dark cloud (IRDC) G28.34+0.06. Located in the quiescent southern part of the G28.34 cloud, the region of interest is a massive (>10^3 M⊙) molecular clump P1 with a luminosity of ~10^3 L⊙, where our previous SMA observations at 1.3 mm have revealed a string of five dust cores of 22-64 M⊙ along the 1 pc IR-dark filament. The cores are well aligned at a position angle of 48 degrees and regularly spaced at an average projected separation of 0.16 pc. The new high-resolution, high-sensitivity 0.88 mm image further resolves the five cores into ten compact condensations of 1.4-10.6 M⊙, with sizes of a few thousand AU. The spatial structure at clump (~1 pc) and core (~0.1 pc) scales indicates hierarchical fragmentation. While the clump fragmentation is consistent with a cylindrical collapse, the observed fragment masses are much larger than the expected thermal Jeans masses. All the cores are driving CO(...
A Hierarchical Clustering-based Group Key Management Scheme
Institute of Scientific and Technical Information of China (English)
李珍格; 游林
2014-01-01
In this paper, a hierarchical clustering-based group key management scheme is proposed to secure group communication in wireless sensor networks. The proposed scheme adopts a hierarchical architecture and divides the nodes in the group into a master-node layer and a terminal layer. The group key of the terminal layer is updated by constructing a special group key polynomial in the BS, and a binary one-way function is used by the master-node layer for group key negotiation. Analysis demonstrates that the scheme satisfies the forward and backward security requirements of group key management in WSNs, and reduces the storage, computation and communication overheads as well.
Directory of Open Access Journals (Sweden)
TIMCHENKO, L.
2012-11-01
Full Text Available Propositions necessary for the development of parallel-hierarchical (PH) network training methods are discussed in this article. Unlike already known structures of the artificial neural network, where non-normalized (absolute) similarity criteria are used for comparison, the suggested structure uses a normalized criterion. Based on the analysis of training rules, it is concluded that two supervised training methods are best suited for PH network training: error-correction-based training and memory-based training. Mathematical models of training and a combined method of PH network training for recognition of static and dynamic patterns are developed.
Song, Linze; Shi, Qiang
2015-11-21
Based on recent findings in the hierarchical equations of motion (HEOM) for correlated initial states [Y. Tanimura, J. Chem. Phys. 141, 044114 (2014)], we propose a new stochastic method to obtain the initial conditions for real-time HEOM propagation, which can then be used to calculate equilibrium correlation functions and symmetrized correlation functions. The new method is derived through stochastic unraveling of the imaginary-time influence functional, yielding a set of stochastic imaginary-time HEOM. The validity of the new method is demonstrated using numerical examples, including the spin-boson model and the Holstein model with undamped harmonic oscillator modes.
Micromechanics of hierarchical materials
DEFF Research Database (Denmark)
Mishnaevsky, Leon, Jr.
2012-01-01
A short overview of micromechanical models of hierarchical materials (hybrid composites, biomaterials, fractal materials, etc.) is given. Several examples of the modeling of strength and damage in hierarchical materials are summarized, among them a 3D FE model of hybrid composites with nanoengineered matrix, a fiber bundle model of UD composites with hierarchically clustered fibers, and a 3D multilevel model of wood considered as a gradient, cellular material with layered composite cell walls. The main areas of research in micromechanics of hierarchical materials are identified, among them the investigation of the effects of load redistribution between reinforcing elements at different scale levels, of the possibilities to control different material properties and to ensure synergy of strengthening effects at different scale levels, and of the nanoreinforcement effects. The main future directions...
Advanced cluster methods for correlated-electron systems
Energy Technology Data Exchange (ETDEWEB)
Fischer, Andre
2015-04-27
In this thesis, quantum cluster methods are used to calculate electronic properties of correlated-electron systems. A special focus lies on the determination of the ground state properties of a 3/4 filled triangular lattice within the one-band Hubbard model. At this filling, the electronic density of states exhibits a so-called van Hove singularity and the Fermi surface becomes perfectly nested, causing an instability towards a variety of spin-density-wave (SDW) and superconducting states. While chiral d+id-wave superconductivity has been proposed as the ground state in the weak coupling limit, the situation towards strong interactions is unclear. Additionally, quantum cluster methods are used here to investigate the interplay of Coulomb interactions and symmetry-breaking mechanisms within the nematic phase of iron-pnictide superconductors. The transition from a tetragonal to an orthorhombic phase is accompanied by a significant change in electronic properties, while long-range magnetic order is not yet established. The driving force of this transition may not only be phonons but also magnetic or orbital fluctuations. The signatures of these scenarios are studied with quantum cluster methods to identify the most important effects. Here, cluster perturbation theory (CPT) and its variational extension, the variational cluster approach (VCA), are used to treat the respective systems on a level beyond mean-field theory. Short-range correlations are incorporated numerically exactly by exact diagonalization (ED). In the VCA, long-range interactions are included by variational optimization of a fictitious symmetry-breaking field based on a self-energy functional approach. Due to limitations of ED, cluster sizes are limited to a small number of degrees of freedom. For the 3/4 filled triangular lattice, the VCA is performed for different cluster symmetries. A strong symmetry dependence and finite-size effects make a comparison of the results from different clusters difficult.
EXPLICIT BOUNDS OF EIGENVALUES FOR STIFFNESS MATRICES BY QUADRATIC HIERARCHICAL BASIS METHOD
Institute of Scientific and Technical Information of China (English)
Sang Dong KIM; Byeong Chun SHIN
2003-01-01
The bounds for the eigenvalues of the stiffness matrices in the finite element discretization corresponding to Lu := -u" with zero boundary conditions by quadratic hierarchical basis are shown explicitly. The condition number of the resulting system behaves like O(1/h), where h is the mesh size. We also analyze a main diagonal preconditioner of the stiffness matrix which reduces the condition number of the preconditioned system to O(1).
Quark-gluon plasma phase transition using cluster expansion method
Syam Kumar, A. M.; Prasanth, J. P.; Bannur, Vishnu M.
2015-08-01
This study investigates phase transitions in QCD using Mayer's cluster expansion method. The inter-quark potential is a modified Cornell potential. The equation of state (EoS) is evaluated for a homogeneous system. The behaviour is studied by varying the temperature as well as the number of charm quarks. The results clearly show signs of a phase transition from hadrons to quark-gluon plasma (QGP).
Translationally-invariant coupled-cluster method for finite systems
Guardiola, R; Navarro, J; Portesi, M
1998-01-01
The translationally invariant formulation of the coupled-cluster method is presented here at the complete SUB(2) level for a system of nucleons treated as bosons. The correlation amplitudes are solutions of a non-linear coupled system of equations. These equations have been solved for light and medium systems, considering the central but still semi-realistic nucleon-nucleon S3 interaction.
Institute of Scientific and Technical Information of China (English)
Wang Jia; Zhao Tianbo; Li Zunfeng; Zong Baoning; Du Zexue; Zeng Jianli
2016-01-01
Zeolite FAU composites with a macro/meso-microporous hierarchical structure were hydrothermally synthesized using a macro-mesoporous γ-Al2O3 monolith as the substrate by means of the liquid crystallization directing agent (LCDA)-induced method. No template was needed throughout the synthesis process. The structure and porosity of the zeolite composites were analyzed by means of X-ray powder diffraction (XRD), scanning electron microscopy (SEM) and N2 adsorption-desorption isotherms. The results showed that supported zeolite composites with varied zeolitic crystalline phases and different morphologies can be obtained by adjusting the crystallization parameters, such as the crystallization temperature and the composition and alkalinity of the precursor solution. The presence of LCDA was found to be a determining factor for synthesizing the zeolite composites. The mechanisms of formation of the hierarchically porous FAU zeolite composites in the LCDA-induced synthesis process are discussed. The resulting monolithic zeolite with a trimodal-porous hierarchical structure shows potential applicability where facile diffusion is required.
Wagstaff, Kiri L.
2012-03-01
particular application involves considerations of the kind of data being analyzed, algorithm runtime efficiency, and how much prior knowledge is available about the problem domain, which can dictate the nature of clusters sought. Fundamentally, the clustering method and its representations of clusters carries with it a definition of what a cluster is, and it is important that this be aligned with the analysis goals for the problem at hand. In this chapter, I emphasize this point by identifying for each algorithm the cluster representation as a model, m_j , even for algorithms that are not typically thought of as creating a “model.” This chapter surveys a basic collection of clustering methods useful to any practitioner who is interested in applying clustering to a new data set. The algorithms include k-means (Section 25.2), EM (Section 25.3), agglomerative (Section 25.4), and spectral (Section 25.5) clustering, with side mentions of variants such as kernel k-means and divisive clustering. The chapter also discusses each algorithm’s strengths and limitations and provides pointers to additional in-depth reading for each subject. Section 25.6 discusses methods for incorporating domain knowledge into the clustering process. This chapter concludes with a brief survey of interesting applications of clustering methods to astronomy data (Section 25.7). The chapter begins with k-means because it is both generally accessible and so widely used that understanding it can be considered a necessary prerequisite for further work in the field. EM can be viewed as a more sophisticated version of k-means that uses a generative model for each cluster and probabilistic item assignments. Agglomerative clustering is the most basic form of hierarchical clustering and provides a basis for further exploration of algorithms in that vein. Spectral clustering permits a departure from feature-vector-based clustering and can operate on data sets instead represented as affinity, or similarity
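Since the chapter treats k-means as the entry point for clustering, a bare-bones version of the algorithm helps make the survey concrete. This is a minimal NumPy sketch, not the chapter's own code: it omits empty-cluster handling, convergence testing, and restarts:

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    # Initialize centers at k distinct data points.
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # Assignment step: each point joins its nearest center.
        labels = np.linalg.norm(X[:, None] - centers[None], axis=2).argmin(axis=1)
        # Update step: each center moves to the centroid of its cluster.
        centers = np.array([X[labels == j].mean(axis=0) for j in range(k)])
    return labels, centers
```

The EM algorithm discussed next generalizes exactly this loop, replacing hard assignments with per-cluster membership probabilities.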
Clustering of resting state networks.
Directory of Open Access Journals (Sweden)
Megan H Lee
Full Text Available BACKGROUND: The goal of the study was to demonstrate a hierarchical structure of resting state activity in the healthy brain using a data-driven clustering algorithm. METHODOLOGY/PRINCIPAL FINDINGS: The fuzzy c-means clustering algorithm was applied to resting state fMRI data in cortical and subcortical gray matter from two groups acquired separately, one of 17 healthy individuals and the second of 21 healthy individuals. Different numbers of clusters and different starting conditions were used. A cluster dispersion measure determined the optimal numbers of clusters. An inner product metric provided a measure of similarity between different clusters. The two-cluster result recovered the task-negative and task-positive systems. The cluster dispersion measure was minimized with seven and eleven clusters. Each of the clusters in the seven- and eleven-cluster results was associated with either the task-negative or task-positive system. Applying the algorithm to find seven clusters recovered previously described resting state networks, including the default mode network, frontoparietal control network, ventral and dorsal attention networks, and somatomotor, visual, and language networks. The language and ventral attention networks had significant subcortical involvement. This parcellation was consistently found in a large majority of algorithm runs under different conditions and was robust to different methods of initialization. CONCLUSIONS/SIGNIFICANCE: The clustering of resting state activity using different optimal numbers of clusters identified resting state networks comparable to previously obtained results. This work reinforces the observation that resting state networks are hierarchically organized.
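The fuzzy c-means algorithm used in this study alternates a centroid update with a soft membership update. A minimal NumPy sketch of the standard algorithm follows (fuzzifier m, random initial memberships); the fMRI-specific preprocessing and the cluster dispersion measure are not shown:

```python
import numpy as np

def fuzzy_c_means(X, c, m=2.0, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)          # membership rows sum to 1
    for _ in range(iters):
        W = U ** m
        # Centers are membership-weighted means of the data.
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        d = np.maximum(d, 1e-12)               # guard against division by zero
        # Standard FCM membership update: u_ij ∝ d_ij^(-2/(m-1)).
        U = 1.0 / d ** (2.0 / (m - 1.0))
        U /= U.sum(axis=1, keepdims=True)
    return centers, U
```

Taking the argmax of each membership row yields a hard parcellation comparable to k-means, while the soft memberships support dispersion-based choices of the cluster count.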
Chen, Yuejiao; Qu, Baihua; Hu, Lingling; Xu, Zhi; Li, Qiuhong; Wang, Taihong
2013-09-01
A facile hydrothermal method is developed for large-scale production of three-dimensional (3D) hierarchical porous nickel cobaltate nanowire cluster arrays derived from nanosheet arrays with robust adhesion on Ni foam. Based on the morphology evolution upon reaction time, a possible formation process is proposed. The role of NH4F in formation of the structure has also been investigated based on different NH4F amounts. This unique structure significantly enhances the electroactive surface areas of the NiCo2O4 arrays, leading to better interfacial/chemical distributions at the nanoscale, fast ion and electron transfer and good strain accommodation. Thus, when it is used for supercapacitor testing, a specific capacitance of 1069 F g-1 at a very high current density of 100 A g-1 was obtained. Even after more than 10 000 cycles at various large current densities, a capacitance of 2000 F g-1 at 10 A g-1 with 93.8% retention can be achieved. It also exhibits a high-power density (26.1 kW kg-1) at a discharge current density of 80 A g-1. When used as an anode material for lithium-ion batteries (LIBs), it presents a high reversible capacity of 976 mA h g-1 at a rate of 200 mA g-1 with good cycling stability and rate capability. This array material is rarely used as an anode material. Our results show that this unique 3D hierarchical porous nickel cobaltite is promising for electrochemical energy applications.
Adapted G-mode Clustering Method applied to Asteroid Taxonomy
Hasselmann, Pedro H.; Carvano, Jorge M.; Lazzaro, D.
2013-11-01
The original G-mode was a clustering method developed by A. I. Gavrishin in the late 60s for the geochemical classification of rocks, but it has also been applied to asteroid photometry, cosmic rays, lunar samples and planetary science spectroscopy data. In this work, we used an adapted version to classify the asteroid photometry from the SDSS Moving Objects Catalog. The method works by identifying normal distributions in a multidimensional space of variables. The identification starts by locating a set of points with the smallest mutual distance in the sample, which is a problem when the data are not planar. Here we present a modified version of the G-mode algorithm, previously written in FORTRAN 77, rewritten in Python 2.7 using the NumPy, SciPy and Matplotlib packages. NumPy was used for array and matrix manipulation and Matplotlib for plot control. SciPy played an important role in speeding up G-mode: scipy.spatial.distance.mahalanobis was chosen as the distance estimator and numpy.histogramdd was applied to find the initial seeds from which clusters evolve. SciPy was also used to quickly produce dendrograms showing the distances among clusters. Finally, results for asteroid taxonomy and tests for different sample sizes and implementations are presented.
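For readers unfamiliar with the distance estimator named above, a NumPy equivalent of scipy.spatial.distance.mahalanobis is only a few lines; like the SciPy function, it expects the *inverse* covariance matrix VI as its third argument:

```python
import numpy as np

def mahalanobis(u, v, VI):
    # Same contract as scipy.spatial.distance.mahalanobis:
    # sqrt((u - v)^T VI (u - v)), with VI the inverse covariance matrix.
    delta = np.asarray(u, dtype=float) - np.asarray(v, dtype=float)
    return float(np.sqrt(delta @ VI @ delta))

# With VI = identity, the Mahalanobis distance reduces to Euclidean:
print(mahalanobis([0, 0], [3, 4], np.eye(2)))  # → 5.0
```

Using the inverse covariance of the sample makes the distance scale-free, which is why it suits the multivariate normal clusters G-mode looks for.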
Super pixel density based clustering automatic image classification method
Xu, Mingxing; Zhang, Chuan; Zhang, Tianxu
2015-12-01
Image classification is an important means of image segmentation and data mining, and how to achieve rapid automated image classification has been a focus of research. In this paper we propose an automatic image classification method, with outlier identification, based on the super-pixel density of cluster centers. The pixel location coordinates and gray values are used to compute density and distance, from which automatic image classification and outlier extraction are achieved. Because a large number of pixels dramatically increases the computational complexity, the image is preprocessed into a small number of super-pixel sub-blocks before the density and distance calculations. A normalized density-distance discrimination rule is then designed to select cluster centers automatically, whereby the image is classified and outliers are identified. Extensive experiments show that our method requires no human intervention, runs faster than the density clustering algorithm, and effectively performs automated classification and outlier extraction.
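The density-and-distance computation described above follows the density-peak idea: each point gets a local density ρ and a distance δ to its nearest higher-density neighbor, and points with large values of both are taken as cluster centers. A hedged NumPy sketch of those two quantities (cutoff kernel; the super-pixel preprocessing and the normalized discrimination rule are omitted):

```python
import numpy as np

def density_and_delta(X, dc):
    # Pairwise distance matrix.
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    # Local density: neighbors within the cutoff dc, excluding the point itself.
    rho = (D < dc).sum(axis=1) - 1
    delta = np.empty(len(X))
    for i in range(len(X)):
        higher = rho > rho[i]
        # Distance to the nearest point of strictly higher density;
        # the densest point(s) get the maximum distance instead.
        delta[i] = D[i, higher].min() if higher.any() else D[i].max()
    return rho, delta
```

Points with high δ but low ρ, such as the isolated point in the test below, are natural outlier candidates.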
Time-dependent coupled-cluster method for atomic nuclei
Pigg, D A; Nam, H; Papenbrock, T
2012-01-01
We study time-dependent coupled-cluster theory in the framework of nuclear physics. Based on Kvaal's bi-variational formulation of this method [S. Kvaal, arXiv:1201.5548], we explicitly demonstrate that observables that commute with the Hamiltonian are conserved under time evolution. We explore the role of the energy and of the similarity-transformed Hamiltonian under real and imaginary time evolution and relate the latter to similarity renormalization group transformations. Proof-of-principle computations of He-4 and O-16 in small model spaces, and computations of the Lipkin model illustrate the capabilities of the method.
Segmentation of MRI Volume Data Based on Clustering Method
Directory of Open Access Journals (Sweden)
Ji Dongsheng
2016-01-01
Full Text Available Here we analyze the difficulties of segmenting left ventricle MR images without tag lines, and propose an algorithm for automatic segmentation of the left ventricle (LV) internal and external profiles. We propose an Incomplete K-means and Category Optimization (IKCO) method. First, the Hough transform is used to automatically locate the initial contour of the LV, and the algorithm uses a simple approach to complete data subsampling and initial center determination. Next, according to the clustering rules, the proposed algorithm finishes the MR image segmentation. Finally, the algorithm uses a category optimization method to improve the segmentation results. Experiments show that the algorithm provides good segmentation results.
A Comparison of Methods for Player Clustering via Behavioral Telemetry
DEFF Research Database (Denmark)
Drachen, Anders; Thurau, Christian; Sifa, Rafet
2013-01-01
can be exceptionally complex, with features recorded for a varying population of users over a temporal segment that can reach years in duration. Categorization of behaviors, whether through descriptive methods (e.g. segmentation) or unsupervised/supervised learning techniques, is valuable for finding patterns in the behavioral data, and developing profiles that are actionable to game developers. There are numerous methods for unsupervised clustering of user behavior, e.g. k-means/c-means, Nonnegative Matrix Factorization, or Principal Component Analysis. Although all yield behavior categorizations...
Anwar, Sirajudheen; Crouch, Rebecca A; Awadh Ali, Nasser A; Al-Fatimi, Mohamed A; Setzer, William N; Wessjohann, Ludger
2017-01-09
The hydrodistilled essential oil obtained from the dried leaves of Myrtus communis, collected in Yemen, was analysed by GC-MS. Forty-one compounds were identified, representing 96.3% of the total oil. The major constituents of essential oil were oxygenated monoterpenoids (87.1%), linalool (29.1%), 1,8-cineole (18.4%), α-terpineol (10.8%), geraniol (7.3%) and linalyl acetate (7.4%). The essential oil was assessed for its antimicrobial activity using a disc diffusion assay and resulted in moderate to potent antibacterial and antifungal activities targeting mainly Bacillus subtilis, Staphylococcus aureus and Candida albicans. The oil moderately reduced the diphenylpicrylhydrazyl radical (IC50 = 4.2 μL/mL or 4.1 mg/mL). In vitro cytotoxicity evaluation against HT29 (human colonic adenocarcinoma cells) showed that the essential oil exhibited a moderate antitumor effect with IC50 of 110 ± 4 μg/mL. Hierarchical cluster analysis of M. communis has been carried out based on the chemical compositions of 99 samples reported in the literature, including Yemeni sample.
Bayo, Javier; López-Castellanos, Joaquín
2016-07-01
Process performance and operation of wastewater treatment plants (WWTP) are monitored to ensure compliance with the legislative requirements imposed by the European Union. Because a large number of variables is measured daily, a coherent and structured approach to such a system is required to understand its inherent behavior and performance efficiency. In this sense, both principal factor analysis (PFA) and hierarchical cluster analysis (HCA) are multivariate techniques that have been widely applied to extract and structure information for different purposes. In this paper, both statistical tools are applied to an urban WWTP situated in the southeast of Spain, a zone with special characteristics related to the geochemical background composition of water and an important use of fertilizers. Four main factors were extracted, associated with nutrients, the ionic component, the organic load to the WWTP, and the efficiency of the whole process. HCA made it possible to distinguish between influent and effluent parameters, although a deeper examination resulted in a dendrogram with groupings similar to those previously reported for PFA.
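A crude version of the factor-extraction step is the principal components of the correlation matrix of standardized variables; full PFA would additionally iterate communality estimates, which is omitted in this sketch:

```python
import numpy as np

def principal_factors(X, k):
    Z = (X - X.mean(axis=0)) / X.std(axis=0)   # standardize each variable
    R = np.corrcoef(Z, rowvar=False)           # correlation matrix
    vals, vecs = np.linalg.eigh(R)             # eigh: symmetric matrices
    order = np.argsort(vals)[::-1]             # eigenvalues in descending order
    return vals[order][:k], vecs[:, order[:k]]
```

Variables that load heavily on the same factor (here, the same eigenvector) would correspond to interpretable groups such as the nutrient or ionic components described above.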
Shankar, Francesco; Rettura, Alessandro; Bouillot, Vincent; Moreno, Jorge; Licitra, Rossella; Bernardi, Mariangela; Huertas-Company, Marc; Mei, Simona; Ascaso, Begoña; Sheth, Ravi; Delaye, Lauriane; Raichoor, Anand
2015-01-01
The mass and structural evolution of massive galaxies is one of the hottest topics in galaxy formation. This is because it may reveal invaluable insights into the still debated evolutionary processes governing the growth and assembly of spheroids. However, direct comparison between models and observations is usually prevented by the so-called "progenitor bias", i.e., new galaxies entering the observational selection at later epochs, thus eluding a precise study of how pre-existing galaxies actually evolve in size. To limit this effect, we here gather data on high-redshift brightest group and cluster galaxies, evolve their (mean) host halo masses down to z=0 along their main progenitors, and assign as their "descendants" local SDSS central galaxies matched in host halo mass. At face value, the comparison between high redshift and local data suggests a noticeable increase in stellar mass of a factor of >2 since z~1, and of >2.5 in mean effective radius. We then compare the inferred stellar mass and size growth ...
Directory of Open Access Journals (Sweden)
Ashish Kumar Patnaik
2016-03-01
Level of Service (LOS) for heterogeneous traffic flow on urban streets is not well defined in the Indian context. Hence, in this study an attempt is made to classify urban road networks into a number of street classes and average travel speeds on street segments into LOS categories. Divisive Analysis (DIANA) clustering is used for this classification of a large amount of speed data collected using a GPS receiver. The DIANA algorithm and the silhouette validation parameter are used to classify Free Flow Speeds (FFS) into an optimal number of classes, and the same algorithm is applied to the speed data to determine the ranges of the different LOS categories. Speed ranges for LOS categories A–F, expressed as percentages of FFS, are found to be 90, 70, 50, 40, 25 and 20–25 respectively in the present study, whereas HCM (2000) gives these values as 85 and above, 67–85, 50–67, 40–50, 30–40, and 30 and less percent respectively.
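The silhouette-validated selection of the number of speed classes can be illustrated as follows. scikit-learn has no DIANA implementation, so agglomerative clustering stands in for the divisive algorithm here, and the speed values are synthetic, not the study's GPS data.

```python
# Sketch of silhouette-validated clustering of free-flow speeds (synthetic).
# AgglomerativeClustering is a stand-in for DIANA, which sklearn lacks.
import numpy as np
from sklearn.cluster import AgglomerativeClustering
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(1)
# three well-separated synthetic speed regimes (km/h)
speeds = np.concatenate([rng.normal(m, 2.0, 50) for m in (25, 45, 70)]).reshape(-1, 1)

best_k, best_s = None, -1.0
for k in range(2, 7):
    labels = AgglomerativeClustering(n_clusters=k).fit_predict(speeds)
    s = silhouette_score(speeds, labels)      # silhouette validation parameter
    if s > best_s:
        best_k, best_s = k, s
print(best_k)   # silhouette-optimal number of street/speed classes
```

The same loop applied to segment travel speeds would yield the LOS category boundaries as the cluster edges.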
Eldridge, Sandra M; Ashby, Deborah; Kerry, Sally
2006-10-01
Cluster randomized trials are increasingly popular. In many of these trials, cluster sizes are unequal. This can affect trial power, but standard sample size formulae for these trials ignore this. Previous studies addressing this issue have mostly focused on continuous outcomes or methods that are sometimes difficult to use in practice. We show how a simple formula can be used to judge the possible effect of unequal cluster sizes for various types of analyses and both continuous and binary outcomes. We explore the practical estimation of the coefficient of variation of cluster size required in this formula and demonstrate the formula's performance for a hypothetical but typical trial randomizing UK general practices. The simple formula provides a good estimate of sample size requirements for trials analysed using cluster-level analyses weighting by cluster size and a conservative estimate for other types of analyses. For trials randomizing UK general practices the coefficient of variation of cluster size depends on variation in practice list size, variation in incidence or prevalence of the medical condition under examination, and practice and patient recruitment strategies, and for many trials is expected to be approximately 0.65. Individual-level analyses can be noticeably more efficient than some cluster-level analyses in this context. When the coefficient of variation is <0.23, the effect of adjustment for variable cluster size on sample size is negligible. Most trials randomizing UK general practices and many other cluster randomized trials should account for variable cluster size in their sample size calculations.
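The adjustment for variable cluster size can be sketched numerically. The design-effect form `1 + ((cv^2 + 1) * m_bar - 1) * icc` follows the approach the abstract describes; the practice list sizes and ICC below are illustrative values, not taken from the paper.

```python
# Sketch of a sample-size inflation factor for unequal cluster sizes,
# driven by the coefficient of variation (cv) of cluster size.
import statistics

def design_effect(cluster_sizes, icc):
    """Inflation factor 1 + ((cv**2 + 1) * m_bar - 1) * icc."""
    m_bar = statistics.mean(cluster_sizes)
    cv = statistics.pstdev(cluster_sizes) / m_bar   # coefficient of variation
    return 1 + ((cv ** 2 + 1) * m_bar - 1) * icc

sizes = [40, 60, 80, 120, 200]        # hypothetical cluster (practice) sizes
deff = design_effect(sizes, icc=0.05)
print(round(400 * deff))              # inflated total for a 400-person trial
```

Note how the formula reduces to the usual equal-cluster design effect `1 + (m_bar - 1) * icc` when `cv = 0`, matching the paper's observation that the adjustment is negligible for small coefficients of variation.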
Efficient Cluster Head Selection Methods for Wireless Sensor Networks
Directory of Open Access Journals (Sweden)
Jong-Shin Chen
2010-08-01
The past few years have witnessed increased potential use of wireless sensor networks (WSNs) in applications such as disaster management, combat field reconnaissance, border protection and security surveillance. Sensors in these applications are expected to be remotely deployed in large numbers and to operate autonomously in unattended environments. Since a WSN is composed of nodes with a non-replenishable energy resource, elongating the network lifetime is the main concern. To support scalability, nodes are often grouped into disjoint clusters. Each cluster has a leader, often referred to as the cluster head (CH). A CH is responsible not only for general requests but also for assisting the general nodes in routing the sensed data to the target nodes. The power consumption of a CH is higher than that of a general (non-CH) node; therefore, the CH selection affects the lifetime of a WSN. However, the application scenario of a WSN determines how lifetime is defined, and this in turn affects how the objective of elongating lifetime can be achieved. In this study, we classify lifetime into different types and give a corresponding CH selection method for each to achieve the lifetime-extension objective. Simulation results demonstrate that our study can extend the lifetime for different requirements of the sensor network.
The Integral- and Intermediate-Screened Coupled-Cluster Method
Sørensen, L K
2016-01-01
We present the formulation and implementation of the integral- and intermediate-screened coupled-cluster method (IISCC). The IISCC method gives a simple and rigorous integral and intermediate screening (IIS) of the coupled-cluster method and significantly reduces the scaling for all orders of the CC hierarchy, exactly as seen for the integral-screened configuration-interaction method (ISCI). The rigorous IIS in the IISCC gives a robust and adjustable error control which should allow for the possibility of converging the energy without any loss of accuracy while retaining low or linear scaling at the same time. The derivation of the IISCC is performed in a similar fashion as in the ISCI, where we show that the tensor contractions for the nested commutators are separable up to an overall sign and that this separability can lead to a rigorous IIS. In the nested commutators the integrals are screened in the first tensor contraction and the intermediates are screened in all successive tensor contractions. The...
Optimal sensor placement using FRFs-based clustering method
Li, Shiqi; Zhang, Heng; Liu, Shiping; Zhang, Zhe
2016-12-01
The purpose of this work is to develop an optimal sensor placement method by selecting the most relevant degrees of freedom as actual measurement positions. Based on the observation matrix of a structure's frequency response, two optimality criteria are used to avoid information redundancy among the candidate degrees of freedom. By using principal component analysis, the frequency response matrix can be decomposed into principal directions and their corresponding singular values. A relatively small number of principal directions will maintain a system's dominant response information. According to the dynamic similarity of each degree of freedom, the k-means clustering algorithm is designed to classify the degrees of freedom, and the effective independence method deletes the sensors that are redundant within each cluster. Finally, two numerical examples and a modal test are included to demonstrate the efficiency of the derived method. It is shown that the proposed method provides a way to extract sub-optimal sets and that the selected sensors are well distributed over the whole structure.
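The PCA-then-k-means stage of the pipeline can be sketched as below. The FRF matrix is random synthetic data, the dimensions are invented, and the per-cluster selection step is a simplified stand-in (largest response norm) for the paper's effective independence deletion.

```python
# Minimal sketch: PCA on a synthetic FRF observation matrix, k-means grouping
# of candidate DOFs, then one representative DOF per cluster.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)
frf = rng.normal(size=(60, 40))      # 60 candidate DOFs x 40 frequency lines

pca = PCA(n_components=5).fit(frf)   # keep dominant principal directions
scores = pca.transform(frf)          # each DOF in dominant-response space

k = 8                                # target number of sensors
labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(scores)
norms = np.linalg.norm(frf, axis=1)  # response magnitude per DOF
picked = [int(np.argmax(np.where(labels == c, norms, -1))) for c in range(k)]
print(sorted(picked))                # one representative DOF per cluster
```

Grouping dynamically similar DOFs first guarantees the chosen sensors are spread across distinct response patterns, which is the redundancy-avoidance idea in the abstract.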
Directory of Open Access Journals (Sweden)
Tao Li
2014-04-01
Point-like targets are useful in providing surface deformation from time series of synthetic aperture radar (SAR) images using the multi-temporal interferometric synthetic aperture radar (MTInSAR) methodology. However, the spatial density of point-like targets is low, especially in non-urban areas. In this paper, a hierarchical MTInSAR method is proposed to increase the spatial density of deformation measurements by tracking both point-like targets and distributed targets with temporally steady radar backscattering. To efficiently reduce error propagation, the deformation rates on point-like targets with lower amplitude dispersion index values are first estimated using a least-squares estimator and a region growing method. Afterwards, the distributed targets are identified using the amplitude dispersion index and a Pearson correlation coefficient through a multi-level processing strategy, during which the deformation rates on distributed targets are also estimated. The proposed MTInSAR method has been tested for subsidence detection over a suburban area of Tianjin, China using 40 high-resolution TerraSAR-X images acquired between 2009 and 2010, and validated against ground-based leveling measurements. The experimental results indicate that the spatial density of deformation measurements can be increased by about 250% and that subsidence accuracy can reach the millimeter level using the hierarchical MTInSAR method.
Comparing Methods for segmentation of Microcalcification Clusters in Digitized Mammograms
Moradmand, Hajar; Targhi, Hossein Khazaei
2012-01-01
The appearance of microcalcifications in mammograms is one of the early signs of breast cancer, so early detection of microcalcification clusters (MCCs) in mammograms can be helpful for cancer diagnosis and better treatment of breast cancer. In this paper a computer method is proposed to support radiologists in detecting MCCs in digital mammography. First, in order to facilitate and improve the detection step, mammogram images are enhanced with a wavelet transformation and morphological operations. Then, two methods for segmenting suspicious MCCs are investigated: adaptive thresholding and watershed segmentation. Finally, the MCC areas detected by the different algorithms are compared to find out which segmentation method is more appropriate for extracting MCCs from mammograms.
Directory of Open Access Journals (Sweden)
Bo Jarneving
2007-01-01
In this study a novel method of science mapping is presented which combines bibliographic coupling, as a measure of document-document similarity, with an agglomerative hierarchical clustering method. The focus of this study is the mapping of so-called ‘core documents’, a concept first presented in 1995 by Glänzel and Czerwon. The term ‘core document’ denotes a document that has a central position in the research front in terms of many and strong bibliographic coupling links. The identification and mapping of core documents usually requires a large multidisciplinary research setting, and in this study the 2003 volume of the Science Citation Index was applied. From this database, a subset of core documents reporting on the outbreak of SARS in 2002 was chosen to demonstrate the application of this mapping method. It was demonstrated that the method, in this case, successfully identified interpretable research themes and that iterative clustering on two subsequent levels of cluster agglomeration may provide useful and current information.
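The combination of bibliographic coupling with agglomerative clustering can be illustrated on a toy corpus. The documents and references are invented, and Jaccard normalization of the coupling strength is one reasonable choice among several; the paper does not prescribe this exact normalization.

```python
# Toy illustration: documents as reference sets, coupling strength from shared
# references, complete-linkage agglomerative clustering on the distances.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

docs = {                              # invented documents and reference lists
    "d1": {"r1", "r2", "r3"},
    "d2": {"r1", "r2", "r4"},
    "d3": {"r7", "r8", "r9"},
    "d4": {"r7", "r8", "r5"},
}
names = sorted(docs)
n = len(names)
dist = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        a, b = docs[names[i]], docs[names[j]]
        coupling = len(a & b) / len(a | b)   # Jaccard-normalized coupling
        dist[i, j] = dist[j, i] = 1 - coupling

Z = linkage(squareform(dist), method="complete")
labels = fcluster(Z, t=2, criterion="maxclust")
print(dict(zip(names, labels)))       # documents grouped by shared references
```

Documents sharing references cluster together; in a real setting, documents with many strong coupling links would surface as the ‘core documents’ the abstract describes.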
Saito, Yuta; Shimomura, Masatsugu; Yabu, Hiroshi
2014-09-01
High-performance catalysts and photovoltaics are required for building an environmentally sustainable society. Because catalytic and photovoltaic reactions occur at the interfaces between reactants and surfaces, the chemical, physical, and structural properties of interfaces have been the focus of much research. To improve the performance of these materials further, inorganic porous materials with hierarchic porous architectures have been fabricated. The breath figure technique allows preparing porous films by using water droplets as templates. In this study, a valuable preparation method for hierarchic porous inorganic materials is shown. Hierarchic porous materials are prepared from surface-coated inorganic nanoparticles with amphiphilic copolymers having catechol moieties followed by sintering. Micron-scale pores are prepared by using water droplets as templates, and nanoscale pores are formed between the nanoparticles. The fabrication method allows the preparation of hierarchic porous films from inorganic nanoparticles of various shapes and materials.
DEFF Research Database (Denmark)
Ackerman, Margareta; Ben-David, Shai; Branzei, Simina
2012-01-01
We investigate a natural generalization of the classical clustering problem, considering clustering tasks in which different instances may have different weights. We conduct the first extensive theoretical analysis of the influence of weighted data on standard clustering algorithms in both the partitional and hierarchical settings, characterizing the conditions under which algorithms react to weights. Extending a recent framework for clustering algorithm selection, we propose intuitive properties that would allow users to choose between clustering algorithms in the weighted setting and classify...
Institute of Scientific and Technical Information of China (English)
康茜; 李德玉; 王素格; 冀庆斌
2015-01-01
Community identification is a basic task of social network analysis, and community structure detection is a key problem of community identification. Each node in the community structure is regarded as a signal source. A hierarchical clustering community algorithm (SMHC) is proposed to handle missing signals in the process of signal transmission. The algorithm measures the probability of a node receiving signals by degree centrality, in order to quantify the missing values in signal reception. After signal transmission, the topology of the network is transformed into geometric relationships among vectors; on this basis, a hierarchical clustering algorithm is used to find the community structure. To validate the proposed method, this paper compares it with the SHC, CNM, GN and Similar algorithms on three real networks: the Zachary Club, American Football and Netscience datasets. The experimental results indicate that the SMHC algorithm can effectively improve the precision of community detection.
Discovering hierarchical structure in normal relational data
DEFF Research Database (Denmark)
Schmidt, Mikkel Nørgaard; Herlau, Tue; Mørup, Morten
2014-01-01
Hierarchical clustering is a widely used tool for structuring and visualizing complex data using similarity. Traditionally, hierarchical clustering is based on local heuristics that do not explicitly provide assessment of the statistical saliency of the extracted hierarchy. We propose a non-param...
Bayesian methods for estimating the reliability in complex hierarchical networks (interim report).
Energy Technology Data Exchange (ETDEWEB)
Marzouk, Youssef M.; Zurn, Rena M.; Boggs, Paul T.; Diegert, Kathleen V. (Sandia National Laboratories, Albuquerque, NM); Red-Horse, John Robert (Sandia National Laboratories, Albuquerque, NM); Pebay, Philippe Pierre
2007-05-01
Current work on the Integrated Stockpile Evaluation (ISE) project is evidence of Sandia's commitment to maintaining the integrity of the nuclear weapons stockpile. In this report, we undertake a key element in that process: development of an analytical framework for determining the reliability of the stockpile in a realistic environment of time-variance, inherent uncertainty, and sparse available information. This framework is probabilistic in nature and is founded on a novel combination of classical and computational Bayesian analysis, Bayesian networks, and polynomial chaos expansions. We note that, while the focus of the effort is stockpile-related, it is applicable to any reasonably-structured hierarchical system, including systems with feedback.
Institute of Scientific and Technical Information of China (English)
郭红; 黄佳鑫; 郭昆
2015-01-01
The mining and discovery of overlapping and hierarchical communities is a hot topic in social network research. First, an algorithm, discovery of link communities based on an extended link cluster sequence (DLC ECS), is proposed to detect overlapping communities in social networks efficiently. Based on the extended link cluster sequence corresponding to community structures of various densities, the optimal link community is detected after searching for the globally optimal density. The link communities are transformed into node communities, so that overlapping communities can be found. Then, hierarchical link community extraction based on the extended link cluster sequence (HLCE ECS) is designed: hierarchical link communities are found in the extended link cluster sequence, and the link communities are transformed into node communities to reveal communities that are both overlapping and hierarchical. Experimental results on artificial and real-world datasets demonstrate that the DLC ECS algorithm significantly improves community quality and the HLCE ECS algorithm effectively discovers meaningful hierarchical communities.
Institute of Scientific and Technical Information of China (English)
诸德超; 邓忠民; 王荇卫
2001-01-01
In the present paper, a series of hierarchical warping functions is developed to analyze the static and dynamic problems of thin-walled composite laminated helicopter rotors composed of several layers with a single closed cell. This method is a development and extension of the traditional constrained warping theory of thin-walled metallic beams, which has proved very successful since the 1940s. The warping distribution along the perimeter of each layer is expanded into a series of successively corrective warping functions, with the traditional warping function caused by free torsion or free bending as the first term, and is assumed to be piecewise linear along the thickness direction of the layers. The governing equations are derived based upon the variational principle of minimum potential energy for static analysis and the Rayleigh quotient for free vibration analysis. Then the hierarchical finite element method is introduced to form a numerical algorithm. Both static and natural vibration problems of sample box beams are analyzed with the present method to show the main mechanical behavior of thin-walled composite laminated helicopter rotors.
Hierarchical Outlier Detection for Point Cloud Data Using a Density Analysis Method
Institute of Scientific and Technical Information of China (English)
朱俊锋; 胡翔云; 张祖勋; 熊小东
2015-01-01
Laser scanning and image matching are both effective ways to obtain dense point cloud data; however, outliers are inevitable in both. A novel hierarchical outlier detection method is proposed for the automatic outlier detection of point clouds from image matching and airborne laser scanning. The method has two main steps. First, hierarchical density estimation is used to remove isolated outliers and small cluster outliers. Then a progressive TIN method is used to recover non-outliers removed in the previous step. The experimental results indicate the effectiveness of this method for both types of point cloud data, including low-quality point clouds from image matching. Quantitative analysis shows that the outlier detection rate is higher than 97%.
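The first (density) stage can be sketched with plain NumPy. This is a rough stand-in on a synthetic cloud, not the paper's multi-scale implementation: points whose mean k-nearest-neighbor distance is far above average are flagged as outliers, and the 3-sigma threshold is an illustrative choice.

```python
# Rough numpy-only sketch of density-based outlier flagging in a point cloud:
# high mean k-NN distance = low local density = outlier candidate.
import numpy as np

rng = np.random.default_rng(3)
cloud = rng.normal(0, 1, size=(300, 3))       # dense inlier region
noise = rng.uniform(15, 20, size=(5, 3))      # isolated noise points, far away
pts = np.vstack([cloud, noise])

d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
k = 8
knn_mean = np.sort(d, axis=1)[:, 1:k + 1].mean(axis=1)   # skip self-distance

thresh = knn_mean.mean() + 3 * knn_mean.std()            # illustrative cutoff
flagged = np.where(knn_mean > thresh)[0]
print(flagged)     # indices of the injected noise points
```

The paper's second stage would then re-examine flagged points against a progressive TIN to rescue any true surface points removed here.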
A Comparison of Methods for Player Clustering via Behavioral Telemetry
DEFF Research Database (Denmark)
Drachen, Anders; Thurau, Christian; Sifa, Rafet;
2013-01-01
The analysis of user behavior in digital games has been aided by the introduction of user telemetry in game development, which provides unprecedented access to quantitative data on user behavior from the installed game clients of the entire population of players. Player behavior telemetry datasets can be exceptionally complex, with features recorded for a varying population of users over a temporal segment that can reach years in duration. Categorization of behaviors, whether through descriptive methods (e.g. segmentation) or unsupervised/supervised learning techniques, is valuable for finding patterns in the behavioral data, and developing profiles that are actionable to game developers. There are numerous methods for unsupervised clustering of user behavior, e.g. k-means/c-means, Nonnegative Matrix Factorization, or Principal Component Analysis. Although all yield behavior categorizations...
Relativistic extended coupled cluster method for magnetic hyperfine structure constant
Sasmal, Sudip; Nayak, Malaya K; Vaval, Nayana; Pal, Sourav
2015-01-01
This article deals with the general implementation of 4-component spinor relativistic extended coupled cluster (ECC) method to calculate first order property of atoms and molecules in their open-shell ground state configuration. The implemented relativistic ECC is employed to calculate hyperfine structure (HFS) constant of alkali metals (Li, Na, K, Rb and Cs), singly charged alkaline earth metal atoms (Be+, Mg+, Ca+ and Sr+) and molecules (BeH, MgF and CaH). We have compared our ECC results with the calculations based on restricted active space configuration interaction (RAS-CI) method. Our results are in better agreement with the available experimental values than those of the RAS-CI values.
González-González, Andrea; Sánchez-Reyes, Luna L; Delgado Sapien, Gabriela; Eguiarte, Luis E; Souza, Valeria
2013-01-01
Escherichia coli occur as either free-living microorganisms, or within the colons of mammals and birds as pathogenic or commensal bacteria. Although the Mexican population of intestinal E. coli maintains high levels of genetic diversity, the exact mechanisms by which this occurs remain unknown. We therefore investigated the role of homologous recombination and point mutation in the genetic diversification and population structure of Mexican strains of E. coli. This was explored using a multilocus sequence typing (MLST) approach in a non-outbreak related, host-wide sample of 128 isolates. Overall, genetic diversification in this sample appears to be driven primarily by homologous recombination, and to a lesser extent, by point mutation. Since genetic diversity is hierarchically organized according to the MLST genealogy, we observed that there is not a homogeneous recombination rate, but that different rates emerge at different clustering levels such as phylogenetic group, lineage and clonal complex (CC). Moreover, we detected a clear signature of substructure within the A+B1 phylogenetic group, where the majority of isolates were differentiated into four discrete lineages. The substructure pattern is revealed by the presence of several CCs associated with a particular lifestyle and host as well as with different genetic diversification mechanisms. We propose these findings as an alternative explanation for the maintenance of the clear phylogenetic signal of this species despite the prevalence of homologous recombination. Finally, we corroborate the use of both phylogenetic and population genetic approaches as an effective means to establish epidemiological surveillance tailored to the ecological specificities of each geographic region.
Directory of Open Access Journals (Sweden)
Fu Yuhua
2016-08-01
By using Neutrosophy and Quad-stage Method, the expansions of comparative literature include: comparative social sciences clusters, comparative natural sciences clusters, comparative interdisciplinary sciences clusters, and so on. Among them, comparative social sciences clusters include: comparative literature, comparative history, comparative philosophy, and so on; comparative natural sciences clusters include: comparative mathematics, comparative physics, comparative chemistry, comparative medicine, comparative biology, and so on.
Guo, Jingyu; Tian, Dehua; McKinney, Brett A.; Hartman, John L.
2010-06-01
of physiological homeostasis. To develop the method, 297 gene deletion strains were selected based on gene-drug interactions with hydroxyurea, an inhibitor of ribonucleotide reductase enzyme activity, which is critical for DNA synthesis. To partition the gene functions, these 297 deletion strains were challenged with growth inhibitory drugs known to target different genes and cellular pathways. Q-HTCP-derived growth curves were used to quantify all gene interactions, and the data were used to test the performance of REMc. Fundamental advantages of REMc include objective assessment of total number of clusters and assignment to each cluster a log-likelihood value, which can be considered an indicator of statistical quality of clusters. To assess the biological quality of clusters, we developed a method called gene ontology information divergence z-score (GOid_z). GOid_z summarizes total enrichment of GO attributes within individual clusters. Using these and other criteria, we compared the performance of REMc to hierarchical and K-means clustering. The main conclusion is that REMc provides distinct efficiencies for mining Q-HTCP data. It facilitates identification of phenomic modules, which contribute to buffering mechanisms that underlie cellular homeostasis and the regulation of phenotypic expression.
A Method for Clustering Web Attacks Using Edit Distance
Petrovic, Slobodan; Alvarez, Gonzalo
2003-01-01
Cluster analysis often serves as the initial step in the process of data classification. In this paper, the problem of clustering input data of different lengths is considered. The edit distance, i.e. the minimum number of elementary edit operations needed to transform one vector into another, is used. A heuristic for clustering unequal-length vectors, analogous to the well-known k-means algorithm, is described and analyzed. This heuristic determines cluster centroids expanding shorter vectors to the l...
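The core ingredients, an edit distance and a k-means-like assignment over variable-length strings, can be sketched as below. The request-like sequences are invented, and a simple fixed-medoid assignment replaces the paper's expansion-based centroid heuristic, which is not fully specified in this abstract.

```python
# Sketch: Levenshtein edit distance plus a k-medoids-style assignment for
# clustering unequal-length sequences (e.g. web request strings).
def edit_distance(a, b):
    # classic dynamic-programming Levenshtein distance, O(len(a)*len(b))
    m, n = len(a), len(b)
    prev = list(range(n + 1))
    for i in range(1, m + 1):
        cur = [i] + [0] * n
        for j in range(1, n + 1):
            cur[j] = min(prev[j] + 1,            # deletion
                         cur[j - 1] + 1,         # insertion
                         prev[j - 1] + (a[i - 1] != b[j - 1]))  # substitution
        prev = cur
    return prev[n]

seqs = ["GET /a", "GET /ab", "POST /x", "POST /xy"]   # invented inputs
medoids = ["GET /a", "POST /x"]                       # fixed seeds for brevity
clusters = {m: [s for s in seqs
                if min(medoids, key=lambda c: edit_distance(s, c)) == m]
            for m in medoids}
print(clusters)
```

A full k-means analogue would iterate: reassign sequences, then recompute a representative per cluster, which is where the paper's centroid-expansion heuristic comes in.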
Directory of Open Access Journals (Sweden)
Jamie Joseph
2015-06-01
The power of SNP association studies to detect valid relationships with clinical phenotypes in schizophrenia is largely limited by the number of SNPs selected and non-specificity of phenotypes. To address this, we first assessed performance on two visual perceptual organization tasks designed to avoid many generalized deficit confounds, Kanizsa shape perception and contour integration, in a schizophrenia patient sample. Then, to reduce the total number of candidate SNPs analyzed in association with perceptual organization phenotypes, we employed a two-stage strategy: first a priori SNPs from three candidate genes were selected (GAD1, NRG1 and DTNBP1; then a Hierarchical Classes Analysis (HICLAS) was performed to reduce the total number of SNPs, based on statistically related SNP clusters. HICLAS reduced the total number of candidate SNPs for subsequent phenotype association analyses from 6 to 3. MANCOVAs indicated that rs10503929 and rs1978340 were associated with the Kanizsa shape perception filling in metric but not the global shape detection metric. rs10503929 was also associated with altered contour integration performance. SNPs not selected by the HICLAS model were unrelated to perceptual phenotype indices. While the contribution of candidate SNPs to perceptual impairments requires further clarification, this study reports the first application of HICLAS as a hypothesis-independent mathematical method for SNP data reduction. HICLAS may be useful for future larger scale genotype-phenotype association studies.
Methods of regional innovative clusters forming and development programs elaboration
Marchuk, Olha
2013-01-01
The aim of the article is to select programmes for the formation and development of innovative cluster structures. The preconditions for the formation of innovative clusters in the regions of Ukraine are analysed, and two types of programmes are suggested for the implementation of cluster policy at the regional level.
A Grouping Method of Distribution Substations Using Cluster Analysis
Ohtaka, Toshiya; Iwamoto, Shinichi
Recently, grouping distribution substations together has been considered for evaluating the reinforcement planning of distribution systems. However, the grouping is carried out using the knowledge and experience of an expert in charge of the distribution system, and subjective human judgment currently leads to ambiguous groupings. Therefore, a method that imitates the expert's grouping has been desired, in order to carry out a systematic grouping with numerical corroboration. In this paper, we propose a grouping method for distribution substations using cluster analysis based on the interconnected power between the substations. Moreover, we consider geographical constraints such as rivers, roads, business office boundaries and branch boundaries, and also examine a method for adjusting the interconnected power. Simulations are carried out to verify the validity of the proposed method using an example system. The simulation results show that imitating the expert's grouping becomes possible by considering the geographical constraints and adjusting the interconnected power, and that the calculation time and number of iterations can be greatly reduced by introducing local and tabu search methods.
Lestari, D.; Raharjo, D.; Bustamam, A.; Abdillah, B.; Widhianto, W.
2017-07-01
Dengue virus consists of 10 different constituent proteins and is classified into 4 major serotypes (DEN-1 to DEN-4). This study was designed to cluster 30 protein sequences of dengue virus taken from the Virus Pathogen Database and Analysis Resource (VIPR) using the Regularized Markov Clustering (R-MCL) algorithm and to analyze the result. Implemented in Python 3.4, the R-MCL algorithm produces 8 clusters, with more than one centroid in several clusters; the number of centroids indicates the density of interaction. Protein interactions that are connected in a tissue form a protein complex that serves as a specific biological process unit. The analysis shows that R-MCL clusters the dengue virus family according to the similar roles of the constituent proteins, regardless of serotype.
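The plain MCL loop underlying R-MCL can be sketched in a few lines of NumPy. This shows only the basic expansion/inflation iteration on a toy graph with two obvious "families"; the regularized variant adds a flow-regularization step that is not reproduced here.

```python
# Bare-bones Markov clustering: alternate expansion (matrix product) and
# inflation (elementwise power + column normalization) on a toy adjacency.
import numpy as np

A = np.array([[1, 1, 1, 0, 0, 0],     # nodes 0-2 form one dense group
              [1, 1, 1, 0, 0, 0],
              [1, 1, 1, 1, 0, 0],     # single bridge edge 2-3
              [0, 0, 1, 1, 1, 1],     # nodes 3-5 form the other group
              [0, 0, 0, 1, 1, 1],
              [0, 0, 0, 1, 1, 1]], float)

M = A / A.sum(axis=0)                 # column-stochastic transition matrix
for _ in range(30):
    M = M @ M                         # expansion
    M = M ** 2                        # inflation (r = 2)
    M = M / M.sum(axis=0)             # re-normalize columns

attractors = M.argmax(axis=0)         # dominant attractor row per node
print(attractors)                     # equal values = same cluster
```

On a real protein-similarity graph the adjacency weights would come from sequence similarity scores, and the surviving attractor rows play the role of the multiple centroids mentioned in the abstract.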
A Data Cleansing Method for Clustering Large-scale Transaction Databases
Loh, Woong-Kee; Kang, Jun-Gyu
2010-01-01
In this paper, we emphasize the need for data cleansing when clustering large-scale transaction databases and propose a new data cleansing method that improves clustering quality and performance. We evaluate our data cleansing method through a series of experiments. As a result, the clustering quality and performance were significantly improved by up to 165% and 330%, respectively.
Image Clustering Method Based on Density Maps Derived from Self-Organizing Mapping: SOM
Directory of Open Access Journals (Sweden)
Kohei Arai
2012-07-01
A new method for image clustering with density maps derived from Self-Organizing Maps (SOM) is proposed, together with a clarification of the learning processes during the construction of clusters. It is found that the proposed SOM-based image clustering method shows much better clustering results for both simulated and real satellite imagery data, and that the separability among clusters of the proposed method is 16% greater than that of existing k-means clustering. In accordance with the experimental results with a Landsat-5 TM image, more than 20,000 iterations are needed for convergence of the SOM learning process.
Mapping the Generator Coordinate Method to the Coupled Cluster Approach
Stuber, Jason L
2015-01-01
The generator coordinate method (GCM) casts the wavefunction as an integral over a weighted set of non-orthogonal single determinantal states. In principle this representation can be used like the configuration interaction (CI) or shell model to systematically improve the approximate wavefunction towards an exact solution. In practice applications have generally been limited to systems with less than three degrees of freedom. This bottleneck is directly linked to the exponential computational expense associated with the numerical projection of broken symmetry Hartree-Fock (HF) or Hartree-Fock-Bogoliubov (HFB) wavefunctions and to the use of a variational rather than a bi-variational expression for the energy. We circumvent these issues by choosing a hole-particle representation for the generator and applying algebraic symmetry projection, via the use of tensor operators and the invariant mean (operator average). The resulting GCM formulation can be mapped directly to the coupled cluster (CC) approach, leading...
Displacement of Building Cluster Using Field Analysis Method
Institute of Scientific and Technical Information of China (English)
Al Tinghua
2003-01-01
This paper presents a field-based method to deal with the displacement of a building cluster, which is driven by street widening. The compression of the street boundary produces a force that pushes the buildings inward, and the propagation of this force is a decay process. To describe this phenomenon, field theory is introduced with an isoline representation model. On the basis of the skeleton of the Delaunay triangulation, the displacement field is built, in which the propagation force is related to the degree of adjacency to the street boundary. The study offers the computation of displacement direction and offset distance for the building displacement. The vector operation is performed on the basis of the gradient and other field concepts.
A Novel Cluster Head Selection Algorithm Based on Fuzzy Clustering and Particle Swarm Optimization.
Ni, Qingjian; Pan, Qianqian; Du, Huimin; Cao, Cen; Zhai, Yuqing
2017-01-01
An important objective of a wireless sensor network is to prolong the network life cycle, and topology control is of great significance for extending it. Building on previous work on cluster head selection in hierarchical topology control, we propose a solution based on fuzzy clustering preprocessing and particle swarm optimization. More specifically, a fuzzy clustering algorithm is first used to perform initial clustering of the sensor nodes according to their geographical locations, where a sensor node belongs to a cluster with a determined probability, and the number of initial clusters is analyzed and discussed. Furthermore, the fitness function is designed considering both the energy consumption and the distance factors of the wireless sensor network. Finally, the cluster head nodes in the hierarchical topology are determined with the improved particle swarm optimization. Experimental results show that, compared with traditional methods, the proposed method reduces the mortality rate of nodes and extends the network life cycle.
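The interplay of a fitness function and particle swarm optimization described above can be sketched in miniature. The fitness below is a hypothetical stand-in (mean node-to-head distance plus head-to-base-station distance, loosely echoing the paper's energy-and-distance terms), and the PSO coefficients and node layout are likewise assumptions, not the authors' design.

```python
import math, random

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def fitness(pos, nodes, bs):
    """Hypothetical fitness: mean node-to-head distance plus
    head-to-base-station distance (lower is better)."""
    return sum(dist(pos, n) for n in nodes) / len(nodes) + dist(pos, bs)

def pso(nodes, bs, n_particles=20, iters=100, seed=1):
    """Minimal PSO: particles are candidate cluster-head positions."""
    rng = random.Random(seed)
    parts = [[rng.uniform(0, 5), rng.uniform(0, 5)] for _ in range(n_particles)]
    vels = [[0.0, 0.0] for _ in range(n_particles)]
    pbest = [p[:] for p in parts]
    gbest = min(parts, key=lambda p: fitness(p, nodes, bs))[:]
    for _ in range(iters):
        for i, p in enumerate(parts):
            for d in range(2):
                r1, r2 = rng.random(), rng.random()
                vels[i][d] = (0.7 * vels[i][d]                    # inertia
                              + 1.5 * r1 * (pbest[i][d] - p[d])   # cognitive
                              + 1.5 * r2 * (gbest[d] - p[d]))     # social
                p[d] += vels[i][d]
            if fitness(p, nodes, bs) < fitness(pbest[i], nodes, bs):
                pbest[i] = p[:]
            if fitness(p, nodes, bs) < fitness(gbest, nodes, bs):
                gbest = p[:]
    return gbest

nodes = [(2, 2), (2, 3), (3, 2), (3, 3)]   # assumed sensor positions
bs = (0, 0)                                 # assumed base station
head = pso(nodes, bs)
```

The swarm should place the head at least as well as the naive cluster centroid, since the fitness trades intra-cluster distance against distance to the base station.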
A New Proposed Clustering Method for Energy Efficient Routing in Wireless Sensor Networks
Directory of Open Access Journals (Sweden)
Sara Nasirian
2017-04-01
Full Text Available Wireless Sensor Networks have found plenty of applications nowadays. Due to the tiny, restricted batteries in these little sensors, deployment of a power-efficient routing protocol is a must. Among the already-proposed routing protocols, the hierarchical ones are more efficient in energy conservation than flat routing protocols. In order to decrease energy consumption and increase the network lifetime, we propose a new hierarchical routing protocol that divides the network area into sectors and two levels and chooses Cluster Heads from the lower level, which is nearer to the Base Station. To minimize the reverse flow of data from the BS, we use a tree structure in each sector. In addition, the frontier between the two levels can be moved during the network lifetime as nodes die. Simulation results show that our proposed TSBC protocol outperforms LEACH, Multi-Hop LEACH and many other conventional routing protocols in energy conservation and network lifetime. One of the most important properties of our scheme, distinguishing it from other schemes, is the cancellation, or at least the reduction, of the reverse flow from the BS. The special formula used in our protocol for CH selection, which prevents battery depletion at any particular spot, can also be adapted to other hierarchical clustering protocols to achieve higher energy efficiency. It is also noteworthy that, thanks to these measures, we largely prevent fragmentation from occurring in the routing process.
Sahu, Nityananda; Gadre, Shridhar R; Rakshit, Avijit; Bandyopadhyay, Pradipta; Miliordos, Evangelos; Xantheas, Sotiris S
2014-10-28
We report new global minimum candidate structures for the (H2O)25 cluster that are lower in energy than the ones reported previously and correspond to hydrogen bonded networks with 42 hydrogen bonds and an interior, fully coordinated water molecule. These were obtained as a result of a hierarchical approach based on initial Monte Carlo Temperature Basin Paving sampling of the cluster's Potential Energy Surface with the Effective Fragment Potential, subsequent geometry optimization using the Molecular Tailoring Approach with the fragments treated at the second order Møller-Plesset (MP2) perturbation (MTA-MP2) and final refinement of the entire cluster at the MP2 level of theory. The MTA-MP2 optimized cluster geometries, constructed from the fragments, were found to be within <0.5 kcal/mol from the minimum geometries obtained from the MP2 optimization of the entire (H2O)25 cluster. In addition, the grafting of the MTA-MP2 energies yields electronic energies that are within <0.3 kcal/mol from the MP2 energies of the entire cluster while preserving their energy rank order. Finally, the MTA-MP2 approach was found to reproduce the MP2 harmonic vibrational frequencies, constructed from the fragments, quite accurately when compared to the MP2 ones of the entire cluster in both the HOH bending and the OH stretching regions of the spectra.
PROPOSED A HETEROGENEOUS CLUSTERING ALGORITHM TO IMPROVE QOS IN WSN
Directory of Open Access Journals (Sweden)
Mehran Mokhtari
2016-07-01
Full Text Available This article presents LEH3LA, an extended LEACH, hierarchical, 3-level, clustered, heterogeneous and dynamic algorithm. In the proposed protocol, with planned auction-based selection of cluster heads and alternative cluster head nodes, the problems of processing delay, member-selection processing, cost and energy consumption, message exchange inside the clusters, and cluster head selection in large sensor networks are addressed. The algorithm uses a hierarchical heterogeneous (3-level) network, collective intelligence, and intra-cluster interaction for communications. It also addresses sending data in multi-BS mobile networks, expanding inter-cluster networks, cluster overlap, the genesis of orphan nodes, dynamically changing cluster boundaries, the use of backbone networks, and cloud sensors. Using a sleep/wake scheduling algorithm or a TDMA schedule, the alternative cluster head node provides redundancy and fault tolerance. Local processing in cluster head and alternative cluster head nodes, together with multi-hop intra-cluster and inter-cluster communications, increases processing speed and data throughput, decreases network overhead, and improves load balancing among cluster heads. Using data encapsulation at the cluster head nodes, energy consumption during data transmission decreases. By improving quality of service (QoS) over CBRP, LEACH and 802.15.4 and decreasing energy consumption in sensors, cluster heads and alternative cluster head nodes, the protocol increases the lifetime of sensor networks.
Jiang, Zhenlin; Fang, Shuying; Wang, Chaosheng; Wang, Huaping; Ji, Chengchang
2016-12-01
For a surface to be superhydrophobic, a combination of surface roughness and low surface energy is required. In this study, polyorganosiloxane superhydrophobic surfaces were fabricated using a sol-gel and heat treatment process followed by coating with a nanosilica (SiO2) sol and the organosiloxane 1,1,1,3,5,5,5-heptamethyl-3-[2-(trimethoxysilyl)ethyl]-trisiloxane (β-HPEOs). The nano-structure was superimposed using self-assembled, surface-modified silica nanoparticles, forming two-dimensional hierarchical structures. The water contact angle (WCA) of the polyorganosiloxane superhydrophobic surface was 143.7 ± 0.6°, which was further increased to 156.7 ± 1.1°, with a water contact angle hysteresis of 2.5 ± 0.6°, by superimposing nanoparticles using a heat treatment process. An analytical characterization of the surface revealed that the nano-silica and polyorganosiloxane formed a micro/nano structure on the films, and the wetting behaviour of the films changed from hydrophilic to superhydrophobic. At heat treatment temperatures below 400 °C, the WCA increased from 144.5 ± 0.7° to 156.7 ± 1.1°. The prepared superhydrophobic films were stable even after heat treatment at 430 °C for 30 min, and their superhydrophobicity was durable for more than 120 days. The effects of the heat treatment process on the surface chemical structure, wettability and morphology of the polyorganosiloxane superhydrophobic films were investigated in detail. The results indicated that stability of the chemical structure was required to yield a thermally stable superhydrophobic surface.
Coupled-cluster methods for core-hole dynamics
Picon, Antonio; Cheng, Lan; Hammond, Jeff R.; Stanton, John F.; Southworth, Stephen H.
2014-05-01
Coupled cluster (CC) is a powerful numerical method used in quantum chemistry to take into account electron correlation with high accuracy and size consistency. In the CC framework, excited, ionized, and electron-attached states can be described by the equation-of-motion (EOM) CC technique. However, bringing CC methods to bear on molecular dynamics induced by X-rays is challenging. X-rays have the special feature of interacting with core-shell electrons that are close to the nucleus. Core-shell electrons can be ionized or excited to a valence shell, leaving a core hole that decays very fast (e.g., 2.4 fs for the K-shell of Ne) by emitting photons (fluorescence process) or electrons (Auger process). Both processes are a clear manifestation of many-body effects, involving electrons in the continuum in the case of Auger processes. We review our progress in developing EOM-CC methods for core-hole dynamics. Results of the calculations are compared with measurements of core-hole decays in atomic Xe and molecular XeF2. This work is funded by the Office of Basic Energy Sciences, Office of Science, U.S. Department of Energy, under Contract No. DE-AC02-06CH11357.
Motion estimation using point cluster method and Kalman filter.
Senesh, M; Wolf, A
2009-05-01
The most frequently used method in three-dimensional human gait analysis involves placing markers on the skin of the analyzed segment. This introduces a significant artifact, which strongly influences the estimates of bone position and orientation and of joint kinematics. In this study, we tested and evaluated the effect of adding a Kalman filter procedure to the previously reported point cluster technique (PCT) in the estimation of rigid body motion. We demonstrated the procedures by motion analysis of a compound planar pendulum from indirect opto-electronic measurements of markers attached to an elastic appendage that is restrained to slide along the rigid body's long axis. The elastic frequency is close to the pendulum frequency, as in the biomechanical problem, where the soft tissue frequency content is similar to the actual movement of the bones. Comparison of the real pendulum angle to that obtained by several estimation procedures (PCT; Kalman filter followed by PCT; and low-pass filter followed by PCT) enables evaluation of the accuracy of the procedures. When comparing the maximal amplitude, no effect was noted by adding the Kalman filter; however, a closer look at the signal revealed that the estimated angle based only on the PCT method was very noisy with fluctuation, while the estimated angle based on the Kalman filter followed by the PCT was a smooth signal. It was also noted that the instantaneous frequencies obtained from the estimated angle based on the PCT method are more dispersed than those obtained from the estimated angle based on the Kalman filter followed by the PCT method. Addition of a Kalman filter to the PCT method in the estimation procedure of rigid body motion results in a smoother signal that better represents the real motion, with less signal distortion than when using a digital low-pass filter. Furthermore, it can be concluded that adding a Kalman filter to the PCT procedure substantially reduces the dispersion of the maximal and minimal
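The effect reported above, a Kalman filter yielding a smoother estimate than the raw measurements, can be reproduced in miniature. The sketch below is a generic scalar constant-velocity Kalman filter applied to a noisy sinusoidal "angle", not the authors' PCT pipeline; the process and measurement noise settings (and the simplified diagonal process-noise model) are assumptions.

```python
import math, random

def kalman_filter(zs, dt=0.01, q=0.01, r=0.04):
    """Constant-velocity Kalman filter; state = [angle, angular rate].
    q (process noise) and r (measurement noise variance) are assumed;
    process noise is added to the diagonal of P as a simplification."""
    x = [zs[0], 0.0]
    P = [[1.0, 0.0], [0.0, 1.0]]
    out = []
    for z in zs:
        # predict with F = [[1, dt], [0, 1]]: x' = F x, P' = F P F^T + Q
        x = [x[0] + dt * x[1], x[1]]
        P = [[P[0][0] + dt * (P[0][1] + P[1][0]) + dt * dt * P[1][1] + q,
              P[0][1] + dt * P[1][1]],
             [P[1][0] + dt * P[1][1], P[1][1] + q]]
        # update with the angle measurement z (H = [1, 0])
        S = P[0][0] + r
        K = [P[0][0] / S, P[1][0] / S]
        y = z - x[0]
        x = [x[0] + K[0] * y, x[1] + K[1] * y]
        P = [[(1 - K[0]) * P[0][0], (1 - K[0]) * P[0][1]],
             [P[1][0] - K[1] * P[0][0], P[1][1] - K[1] * P[0][1]]]
        out.append(x[0])
    return out

def rmse(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)) / len(a))

# noisy measurements of a slow sinusoidal "pendulum angle"
random.seed(3)
true = [math.sin(2 * math.pi * 0.5 * i * 0.01) for i in range(500)]
noisy = [v + random.gauss(0, 0.2) for v in true]
est = kalman_filter(noisy)
```

Comparing root-mean-square errors against the true angle shows the filtered trace tracking the motion with less noise than the raw measurements, mirroring the smoothing effect described in the abstract.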
Karnbach, R.; Castex, M. C.; Keto, J. W.; Joppien, M.; Wörmer, J.; Zimmerer, G.; Möller, T.
1993-02-01
Excitation and decay processes in Kr_N clusters (N = 2-10^4) were investigated via time- and energy-resolved fluorescence methods with synchrotron radiation excitation. In small clusters (N < 50), in addition to the well-known emission bands of condensed Kr, another broad continuous emission is observed. It is assigned to a radiative decay of Kr excimers desorbing from the cluster surface. There are indications that the cluster size at which the desorption rate becomes slow is related to a change in sign of the electron affinity of the cluster. Changes in the spectral distribution of the fluorescence light with cluster size are interpreted as variations of the vibrational energy flow.
Directory of Open Access Journals (Sweden)
Cooper James B
2010-03-01
Full Text Available Abstract Background Clustering the information content of large high-dimensional gene expression datasets has widespread application in "omics" biology. Unfortunately, the underlying structure of these natural datasets is often fuzzy, and the computational identification of data clusters generally requires knowledge about cluster number and geometry. Results We integrated strategies from machine learning, cartography, and graph theory into a new informatics method for automatically clustering self-organizing map ensembles of high-dimensional data. Our new method, called AutoSOME, readily identifies discrete and fuzzy data clusters without prior knowledge of cluster number or structure in diverse datasets including whole genome microarray data. Visualization of AutoSOME output using network diagrams and differential heat maps reveals unexpected variation among well-characterized cancer cell lines. Co-expression analysis of data from human embryonic and induced pluripotent stem cells using AutoSOME identifies >3400 up-regulated genes associated with pluripotency, and indicates that a recently identified protein-protein interaction network characterizing pluripotency was underestimated by a factor of four. Conclusions By effectively extracting important information from high-dimensional microarray data without prior knowledge or the need for data filtration, AutoSOME can yield systems-level insights from whole genome microarray expression studies. Due to its generality, this new method should also have practical utility for a variety of data-intensive applications, including the results of deep sequencing experiments. AutoSOME is available for download at http://jimcooperlab.mcdb.ucsb.edu/autosome.
Institute of Scientific and Technical Information of China (English)
朱文忠
2014-01-01
In order to improve power data scheduling efficiency, shorten power data scheduling delay, and improve the degree of matching and fusion, an improved distributed, communication-collision-free power data aggregation scheduling approximation algorithm is proposed. A Sink-rooted data aggregation tree is used for hierarchical data scheduling of the power resource data of the wireless sensor network nodes, and the control information between power nodes is continuously fused according to the distributed data sets. The data aggregation tree rooted at the Sink is built on the basis of the maximum independent set, and each node is assigned a time slot so that it can transmit data without communication conflicts. Simulation results show that the improved algorithm significantly reduces aggregation delay and effectively ensures the real-time performance of power dispatching control; the hierarchical fusion degree of the power data reaches more than 90%, while that of the previous algorithm is only 10%-50%.
Efficiency of a Multi-Reference Coupled Cluster method
Giner, Emmanuel; Scemama, Anthony; Malrieu, Jean Paul
2015-01-01
The multi-reference Coupled Cluster method first proposed by Meller et al. (J. Chem. Phys., 1996) has been implemented and tested. Guess values of the amplitudes of the single and double excitations (the ${\hat T}$ operator) on top of the references are extracted from the knowledge of the coefficients of the Multi-Reference Singles and Doubles Configuration Interaction (MRSDCI) matrix. The multiple-parentage problem is solved by scaling these amplitudes on the interaction between the references and the Singles and Doubles. One then proceeds to a dressing of the MRSDCI matrix under the effect of the Triples and Quadruples, the coefficients of which are estimated from the action of ${\hat T}^2$. This dressing follows the logic of the intermediate effective Hamiltonian formalism. The dressed MRSDCI matrix is diagonalized and the process is iterated to convergence. The method is tested on a series of benchmark systems from Complete Active Spaces (CAS) involving 2 or 4 active electrons up to bond breakings. The...
Multilevel Analysis Methods for Partially Nested Cluster Randomized Trials
Sanders, Elizabeth A.
2011-01-01
This paper explores multilevel modeling approaches for 2-group randomized experiments in which a treatment condition involving clusters of individuals is compared to a control condition involving only ungrouped individuals, otherwise known as partially nested cluster randomized designs (PNCRTs). Strategies for comparing groups from a PNCRT in the…
Hudjimartsu, S. A.; Djatna, T.; Ambarwari, A.; Apriliantono
2017-01-01
Forest fires in Indonesia occur frequently in the dry season, and almost all of them are caused by human activity. The impact of forest fires is the loss of biodiversity, pollution hazards, and harm to the economy of surrounding communities. Preventing fires requires suitable methods, one of which is spatial-temporal clustering. Spatial-temporal clustering groups the data so that the results of these groupings can be used as initial information for fire prevention. To analyze the fires, hotspot data are used as an early indicator of fire spots. Hotspot data, consisting of spatial and temporal dimensions, can be processed using spatial-temporal clustering with the Kulldorff Scan Statistic (KSS). This research shows the effectiveness of the KSS method for clustering spatial hotspots in a case study within Riau Province, producing two types of clusters: a most likely cluster and secondary clusters. These clusters can be used as early fire warning information.
Swarm: robust and fast clustering method for amplicon-based studies
Directory of Open Access Journals (Sweden)
Frédéric Mahé
2014-09-01
Full Text Available Popular de novo amplicon clustering methods suffer from two fundamental flaws: arbitrary global clustering thresholds, and input-order dependency induced by centroid selection. Swarm was developed to address these issues by first clustering nearly identical amplicons iteratively using a local threshold, and then by using clusters’ internal structure and amplicon abundances to refine its results. This fast, scalable, and input-order independent approach reduces the influence of clustering parameters and produces robust operational taxonomic units.
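The local-threshold idea can be sketched as follows: grow each cluster by iteratively absorbing sequences within a small edit distance (d = 1) of any current member, instead of comparing everything to a fixed centroid under a global threshold. This is a simplified illustration of the first phase only; the toy sequences are assumptions, and Swarm's abundance-based refinement step is omitted.

```python
def levenshtein(a, b):
    """Classic dynamic-programming edit distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,          # deletion
                           cur[j - 1] + 1,       # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def swarm_like(amplicons, d=1):
    """Grow clusters by absorbing unassigned amplicons within edit
    distance d of any cluster member (local threshold, no centroid)."""
    pool = list(amplicons)
    clusters = []
    while pool:
        seed = pool.pop(0)
        cluster, frontier = [seed], [seed]
        while frontier:
            nxt = []
            for s in frontier:
                hits = [a for a in pool if levenshtein(s, a) <= d]
                for h in hits:
                    pool.remove(h)
                nxt.extend(hits)
            cluster.extend(nxt)
            frontier = nxt
        clusters.append(cluster)
    return clusters

# toy amplicons: ACAA joins the first cluster only by chaining through ACGA
seqs = ["ACGT", "ACGA", "ACAA", "TTTT", "TTTA"]
clusters = swarm_like(seqs, d=1)
```

Note that "ACAA" is two edits from the seed "ACGT" yet still lands in its cluster via the intermediate "ACGA"; this chaining through local neighbours is exactly what a global centroid threshold cannot express.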
Accounting hierarchical heterogeneity of rock during its working off by explosive methods
Hachay, Olga; Khachay, Oleg
2017-04-01
Information about the structure and state of the environment can be obtained from geophysical data by interpreting them within the frame of a model that approximates the real environment; this model must therefore be selected from the class of physically and geologically reasonable ones. For a description of the geological environment in the form of a rock massif with its natural and technogenic heterogeneity, a more adequate description is a discrete model of the environment in the form of a piecewise non-homogeneous block medium with embedded heterogeneities of lower rank than the block size. This nesting can be traced back several times; i.e., on changing the scale of the study, we see that the heterogeneities of lower rank now appear as blocks for the irregularities of the next rank. Simple averaging of the measured geophysical parameters can lead to a distorted view of the structure of the environment and its evolution. The Institute of Geophysics, UB RAS has developed a hardware, methodological, and interpretative system for studying the structure and state of a complex geological environment that has potential instability and the ability to rebuild its hierarchical structure under significant external influence. The basis of this complex is a 3-D technique of planshet electromagnetic induction studies in a frequency-geometrical variant, resting on the one hand on an interpretation software system for 3-D alternating electromagnetic fields, and on the other hand on the device developed by A.I. Chelovechkov for carrying out the inductive research. On the basis of this technology, active monitoring of the structure and state of rock massifs inside mines of different material composition can be provided, and it can be carried out to detect short-term precursors of strong dynamic phenomena according to the electromagnetic induction monitoring.
Algorithms have been developed for modeling electromagnetic fields in hierarchic heterogeneous
Research on hierarchical colour correction method
Institute of Scientific and Technical Information of China (English)
赵萍; 王文举; 陈伟
2015-01-01
This paper proposes a hierarchical color correction algorithm based on retinex theory and color vision theories. The trichromatic theory from color vision is introduced for global classification, and a generalized Gaussian mixture model is used to compute the global coefficients; the hierarchical color correction model is simplified to reduce the amount of computation; retinex theory is applied to process the three channels separately and to extract highlight regions; the opponent color theory is used to calculate chromatic aberration, coefficient weights are set according to chromatic distance and spatial distance, and pixels are corrected according to these coefficients; finally, the hierarchical color correction model is used to integrate the image. The proposed algorithm combines color vision theories with retinex theory and further improves the existing hierarchical color correction. Experiments verify that the algorithm is reasonable and practical in simulating the color constancy of the human visual system, and show that it achieves good correction results on non-uniform, multi-illuminant color-cast images.
The Cluster Variation Method: A Primer for Neuroscientists
Directory of Open Access Journals (Sweden)
Alianna J. Maren
2016-09-01
Full Text Available Effective Brain–Computer Interfaces (BCIs) require that the time-varying activation patterns of 2-D neural ensembles be modelled. The cluster variation method (CVM) offers a means for the characterization of 2-D local pattern distributions. This paper provides neuroscientists and BCI researchers with a CVM tutorial that will help them to understand how the CVM statistical thermodynamics formulation can model 2-D pattern distributions expressing structural and functional dynamics in the brain. The premise is that local-in-time free energy minimization works alongside neural connectivity adaptation, supporting the development and stabilization of consistent stimulus-specific responsive activation patterns. The equilibrium distribution of local patterns, or configuration variables, is defined in terms of a single interaction enthalpy parameter (h) for the case of an equiprobable distribution of bistate (neural/neural ensemble) units. Thus, either one enthalpy parameter (or two, for the case of non-equiprobable distribution) yields equilibrium configuration variable values. Modeling 2-D neural activation distribution patterns with the representational layer of a computational engine, we can thus correlate variational free energy minimization with specific configuration variable distributions. The CVM triplet configuration variables also map well to the notion of a M = 3 functional motif. This paper addresses the special case of an equiprobable unit distribution, for which an analytic solution can be found.
Detect overlapping and hierarchical community structure in networks
Shen, Huawei; Cai, Kai; Hu, Mao-Bin
2008-01-01
Clustering and community structure are crucial for many network systems and the related dynamic processes. It has been shown that communities are usually overlapping and hierarchical. However, previous methods investigated these two properties of community structure separately. This paper proposes an algorithm (EAGLE) to detect both the overlapping and hierarchical properties of complex community structure together. The algorithm deals with the set of maximal cliques and adopts an agglomerative framework. The quality function of modularity is extended to evaluate the goodness of a cover. Examples of application to real-world networks give excellent results.
The Effects of Method of Hierarchical Organization and Sequence on Children's Learning.
Parker, DeAnsin Goodson
This study focused on Robert Gagne's method for curricular development, which consists of structuring a knowledge domain into a learning hierarchy. Two methods of generating learning hierarchies and two different sequencings of these hierarchies were compared and their effects were measured. Four programmed texts were developed from two different…
Directory of Open Access Journals (Sweden)
Xin Gao
2016-01-01
Full Text Available In order to maintain and enhance the operational reliability of a robotic manipulator deployed in space, an operational reliability system control method is presented in this paper. First, a method to divide factors affecting the operational reliability is proposed, which divides the operational reliability factors into task-related factors and cost-related factors. Then the models describing the relationships between the two kinds of factors and control variables are established. Based on this, a multivariable and multiconstraint optimization model is constructed. Second, a hierarchical system control model which incorporates the operational reliability factors is constructed. The control process of the space manipulator is divided into three layers: task planning, path planning, and motion control. Operational reliability related performance parameters are measured and used as the system’s feedback. Taking the factors affecting the operational reliability into consideration, the system can autonomously decide which control layer of the system should be optimized and how to optimize it using a control level adjustment decision module. The operational reliability factors affect these three control levels in the form of control variable constraints. Simulation results demonstrate that the proposed method can achieve a greater probability of meeting the task accuracy requirements, while extending the expected lifetime of the space manipulator.
Hierarchical video summarization
Ratakonda, Krishna; Sezan, M. Ibrahim; Crinon, Regis J.
1998-12-01
We address the problem of key-frame summarization of video in the absence of any a priori information about its content. This is a common problem that is encountered in home videos. We propose a hierarchical key-frame summarization algorithm where a coarse-to-fine key-frame summary is generated. A hierarchical key-frame summary facilitates multi-level browsing where the user can quickly discover the content of the video by accessing its coarsest but most compact summary and then view a desired segment of the video with increasingly more detail. At the finest level, the summary is generated on the basis of color features of video frames, using an extension of a recently proposed key-frame extraction algorithm. The finest-level key-frames are recursively clustered using a novel pairwise K-means clustering approach with a temporal consecutiveness constraint. We also address summarization of MPEG-2 compressed video without fully decoding the bitstream. We also propose efficient mechanisms that facilitate decoding the video when the hierarchical summary is utilized in browsing and playback of video segments starting at selected key-frames.
Directory of Open Access Journals (Sweden)
Kohei Arai
2013-07-01
Full Text Available Cluster analysis aims at identifying groups of similar objects and therefore helps to discover the distribution of patterns and interesting correlations in data sets. In this paper, we propose to provide a consistent partitioning of a dataset which allows identifying cluster patterns of any shape in numerical clustering, convex or non-convex. The method is based on a layered structure representation obtained from the distance and angle of the numerical data to the centroid, and on an iterative clustering construction utilizing a nearest-neighbour distance between clusters to merge. Encouraging results show the effectiveness of the proposed technique.
Yuliang, Qiao; Ying, Wang; Jinchun, Liu
This is an introduction to a method of classifying high-, medium- and low-yield croplands by remote sensing and GIS, which is the result of a key project of The Scientific and Industry Technology Committee of National Defence. In the study, special information related to high-, medium- and low-yield cropland was compounded with TM data. The development of this method of compound hierarchy classification greatly improved the accuracy of remote sensing classification.
Ing, Alex; Schwarzbauer, Christian
2014-01-01
Functional connectivity has become an increasingly important area of research in recent years. At a typical spatial resolution, approximately 300 million connections link each voxel in the brain with every other. This pattern of connectivity is known as the functional connectome. Connectivity is often compared between experimental groups and conditions. Standard methods used to control the type 1 error rate are likely to be insensitive when comparisons are carried out across the whole connectome, due to the huge number of statistical tests involved. To address this problem, two new cluster based methods – the cluster size statistic (CSS) and cluster mass statistic (CMS) – are introduced to control the family wise error rate across all connectivity values. These methods operate within a statistical framework similar to the cluster based methods used in conventional task based fMRI. Both methods are data driven, permutation based and require minimal statistical assumptions. Here, the performance of each procedure is evaluated in a receiver operator characteristic (ROC) analysis, utilising a simulated dataset. The relative sensitivity of each method is also tested on real data: BOLD (blood oxygen level dependent) fMRI scans were carried out on twelve subjects under normal conditions and during the hypercapnic state (induced through the inhalation of 6% CO2 in 21% O2 and 73% N2). Both CSS and CMS detected significant changes in connectivity between normal and hypercapnic states. A family wise error correction carried out at the individual connection level exhibited no significant changes in connectivity. PMID:24906136
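The cluster-mass logic can be illustrated on a deliberately simplified 1-D analogue (the real CMS operates on the connectome graph, not a line): threshold a map of group mean differences, sum each supra-threshold run into a cluster mass, and compare the observed maximal mass against a permutation null built by shuffling group labels. Group sizes, effect size, and the threshold below are all assumptions.

```python
import random

def stat_map(ga, gb):
    """Per-position mean difference between two groups of subject maps."""
    return [sum(s[i] for s in ga) / len(ga) - sum(s[i] for s in gb) / len(gb)
            for i in range(len(ga[0]))]

def cluster_masses(stats, thresh):
    """Masses of contiguous supra-threshold runs (1-D stand-in for clusters)."""
    masses, cur = [], 0.0
    for s in stats:
        if s > thresh:
            cur += s
        elif cur:
            masses.append(cur)
            cur = 0.0
    if cur:
        masses.append(cur)
    return masses

def max_mass_null(ga, gb, thresh, n_perm=200, seed=7):
    """Permutation null of the maximal cluster mass via label shuffling."""
    rng = random.Random(seed)
    pooled = ga + gb
    n = len(ga)
    null = []
    for _ in range(n_perm):
        rng.shuffle(pooled)
        null.append(max(cluster_masses(stat_map(pooled[:n], pooled[n:]),
                                       thresh), default=0.0))
    return null

rng = random.Random(0)
# 12 subjects per group; group A carries an effect at positions 10-14
A = [[rng.gauss(1.0 if 10 <= i < 15 else 0.0, 0.5) for i in range(40)]
     for _ in range(12)]
B = [[rng.gauss(0.0, 0.5) for i in range(40)] for _ in range(12)]
obs = max(cluster_masses(stat_map(A, B), 0.3), default=0.0)
p = sum(m >= obs for m in max_mass_null(A, B, 0.3)) / 200
```

Because only the single maximal mass per permutation enters the null, the resulting p-value is family-wise corrected across all positions, which is the key property the CSS/CMS procedures provide for the connectome.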
Parallel hierarchical radiosity rendering
Energy Technology Data Exchange (ETDEWEB)
Carter, M.
1993-07-01
In this dissertation, the step-by-step development of a scalable parallel hierarchical radiosity renderer is documented. First, a new look is taken at the traditional radiosity equation, and a new form is presented in which the matrix of linear-system coefficients is transformed into a symmetric matrix, thereby simplifying the problem and enabling a new solution technique to be applied. Next, the state-of-the-art hierarchical radiosity methods are examined for their suitability for parallel implementation and for their scalability. Significant enhancements are also discovered which both strengthen their theoretical foundations and improve the images they generate. The resulting hierarchical radiosity algorithm is then examined for sources of parallelism and for an architectural mapping; several architectural mappings are discussed, and a few key algorithmic changes are suggested in the process of parallelising the algorithm. Next, the performance, efficiency, and scalability of the algorithm are analyzed. The dissertation closes with a discussion of several ideas which have the potential to further enhance the hierarchical radiosity method, or to provide entirely new avenues for the application of hierarchical methods.
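The symmetrization mentioned above can be illustrated on the classical radiosity system (I - diag(ρ)F)B = E. Assuming the reciprocity relation A_i F_ij = A_j F_ji, scaling row i by A_i/ρ_i yields a symmetric coefficient matrix with the same solution; the dissertation's actual transformation may differ in detail, and all values below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5                                 # number of patches
A = rng.uniform(1.0, 2.0, n)          # patch areas
K = rng.uniform(0.0, 0.1, (n, n))
K = (K + K.T) / 2.0                   # K[i,j] = A[i] * F[i,j], made symmetric
np.fill_diagonal(K, 0.0)
F = K / A[:, None]                    # form factors obeying A_i F_ij = A_j F_ji
rho = rng.uniform(0.2, 0.8, n)        # reflectivities
E = rng.uniform(0.0, 1.0, n)          # emissions

# standard (nonsymmetric) radiosity system: (I - diag(rho) F) B = E
M = np.eye(n) - rho[:, None] * F
B1 = np.linalg.solve(M, E)

# scale row i by A_i / rho_i: S = diag(A/rho) - diag(A) F is symmetric,
# and S B = diag(A/rho) E has the same solution B
S = np.diag(A / rho) - A[:, None] * F
B2 = np.linalg.solve(S, (A / rho) * E)
```

A symmetric (here also positive definite) system opens the door to solvers such as conjugate gradients, which the nonsymmetric form does not admit directly.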
Zhang, G.; Lu, D.; Webster, C.
2014-12-01
The rational management of oil and gas reservoirs requires an understanding of their response to existing and planned schemes of exploitation and operation. Such understanding requires analyzing and quantifying the influence of subsurface uncertainties on predictions of oil and gas production. Because the subsurface properties are typically heterogeneous, the models have a large number of parameters, and the dimension-independent Monte Carlo (MC) method is usually used for uncertainty quantification (UQ). Recently, multilevel Monte Carlo (MLMC) methods were proposed as a variance reduction technique to improve the computational efficiency of MC methods in UQ. In this effort, we propose a new acceleration approach for the MLMC method that further reduces the total computational cost by exploiting model hierarchies. Specifically, for each model simulation on a newly added level of MLMC, we take advantage of an approximation of the model outputs constructed from simulations on previous levels to provide better initial states for the new simulations, which improves efficiency by, e.g., reducing the number of iterations in linear system solves or the number of time steps needed. This is achieved using mesh-free interpolation methods, such as Shepard interpolation and radial basis approximation. Our approach is applied to a highly heterogeneous reservoir model from the tenth SPE project. The results indicate that the accelerated MLMC can achieve the same accuracy as standard MLMC at a significantly reduced cost.
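The MLMC identity E[P_fine] = E[P_coarse] + E[P_fine - P_coarse] can be sketched with a two-level toy estimator, where a cheap quadrature of an integral stands in for a reservoir simulator; the paper's warm-start acceleration via Shepard or radial-basis interpolation is not shown, and all names and resolutions here are illustrative.

```python
import numpy as np

def model(x, n):
    # stand-in "solver": left Riemann approximation of the integral of
    # sin(x*t) over t in [0, 1); n plays the role of the mesh resolution
    t = np.linspace(0.0, 1.0, n, endpoint=False)
    return np.sin(x[:, None] * t).mean(axis=1)

def mlmc_two_level(n_coarse=20000, n_corr=500, seed=0):
    rng = np.random.default_rng(seed)
    xc = rng.uniform(0.0, 1.0, n_coarse)          # many cheap coarse samples
    coarse = model(xc, 8).mean()
    xf = rng.uniform(0.0, 1.0, n_corr)            # few fine-minus-coarse corrections,
    corr = (model(xf, 64) - model(xf, 8)).mean()  # evaluated on the SAME inputs
    return coarse + corr
```

The variance reduction comes from the correction term being small: most samples are spent on the cheap coarse level, and only a few expensive fine-level solves are needed.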
Sun, Xu; Yang, Lina; Gao, Lianru; Zhang, Bing; Li, Shanshan; Li, Jun
2015-01-01
Center-oriented hyperspectral image clustering methods have been widely applied to hyperspectral remote sensing image processing; however, their drawbacks are obvious, including over-simple computational models and underutilized spatial information. In recent years, some studies have tried to improve this situation. We introduce the artificial bee colony (ABC) and Markov random field (MRF) algorithms and propose an ABC-MRF-cluster model to address the problems mentioned above. In this model, a typical ABC algorithm framework is adopted in which cluster centers and the results of the iteration conditional model algorithm are treated as feasible solutions and objective functions, respectively, and MRF is modified to handle the clustering problem. Finally, four datasets and two indices are used to show that the ABC-cluster and ABC-MRF-cluster methods achieve better accuracy than conventional methods. Specifically, the ABC-cluster method is superior when a higher power of spectral discrimination is required, whereas the ABC-MRF-cluster method provides better results in terms of the adjusted Rand index. In experiments on simulated images with different signal-to-noise ratios, ABC-cluster and ABC-MRF-cluster showed good stability.
Biomedical ontology improves biomedical literature clustering performance: a comparison study.
Yoo, Illhoi; Hu, Xiaohua; Song, Il-Yeol
2007-01-01
Document clustering has been used for better document retrieval and text mining. In this paper, we investigate whether a biomedical ontology improves biomedical literature clustering performance in terms of effectiveness and scalability. For this investigation, we perform a comprehensive comparison study of various document clustering approaches, such as hierarchical clustering methods, Bisecting K-means, K-means and Suffix Tree Clustering (STC). According to our experimental results, a biomedical ontology significantly enhances clustering quality on biomedical documents. In addition, our results show that decent document clustering approaches, such as Bisecting K-means, K-means and STC, gain some benefit from the ontology, while hierarchical algorithms, which show the poorest clustering quality, do not reap the benefit of the biomedical ontology.
Vuurpijl, L.; Schomaker, L.
2000-01-01
This paper describes a two-stage classification method for (1) classification of isolated characters and (2) verification of the classification result. Character prototypes are generated using hierarchical clustering. For those prototypes known to sometimes produce wrong classification results, a
A clustering method of Chinese medicine prescriptions based on modified firefly algorithm.
Yuan, Feng; Liu, Hong; Chen, Shou-Qiang; Xu, Liang
2016-12-01
This paper studies a clustering method for Chinese medicine (CM) medical cases. The traditional K-means clustering algorithm has shortcomings, such as dependence of the results on the selection of initial values and trapping in local optima, when processing prescriptions from CM medical cases. Therefore, a new clustering method based on the collaboration of the firefly algorithm and the simulated annealing algorithm is proposed. This algorithm dynamically determines the iterations of the firefly algorithm and the sampling of the simulated annealing algorithm according to fitness changes, and increases the diversity of the swarm by expanding the scope of the sudden jump, thereby effectively avoiding premature convergence. Results from confirmatory experiments on CM medical cases suggest that, compared with traditional K-means clustering, this method greatly improves individual diversity and the obtained clustering results; the computed results have reference value for cluster analysis of CM prescriptions.
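For intuition, here is a plain firefly minimizer with an annealing-style decay of the random step. This is a generic sketch, not the authors' modified algorithm: their dynamic iteration control, simulated-annealing sampling, and jump-scope expansion are not implemented, and all parameter values are illustrative.

```python
import numpy as np

def firefly_minimize(f, dim=2, n=15, iters=60, beta0=1.0, gamma=0.01,
                     alpha=0.3, seed=0):
    """Basic firefly algorithm: dimmer fireflies move toward brighter ones."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-4.0, 4.0, (n, dim))
    cost = np.array([f(p) for p in x])
    for t in range(iters):
        a = alpha * 0.97 ** t                    # cooled random-step amplitude
        for i in range(n):
            for j in range(n):
                if cost[j] < cost[i]:            # j is brighter: i moves toward j
                    r2 = float(np.sum((x[i] - x[j]) ** 2))
                    x[i] = (x[i] + beta0 * np.exp(-gamma * r2) * (x[j] - x[i])
                            + a * rng.uniform(-0.5, 0.5, dim))
                    cost[i] = f(x[i])
    best = int(np.argmin(cost))
    return x[best], cost[best]
```

For the clustering task, f would be the within-cluster sum of squares of a candidate set of cluster centers flattened into one vector.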
Tilton, James C. (Inventor)
2010-01-01
A method, computer readable storage, and apparatus for implementing recursive segmentation of data with spatial characteristics into regions, including splitting-remerging of pixels with contiguous region designations and a user-controlled parameter providing a preference for merging adjacent regions to eliminate window artifacts.
Efendiev, Yalchin R.
2015-06-05
In this paper, we develop a multiscale finite element method for solving flows in fractured media. Our approach is based on the generalized multiscale finite element method (GMsFEM), where we represent the fracture effects on a coarse grid via multiscale basis functions. These multiscale basis functions are constructed in the offline stage via local spectral problems following GMsFEM. To represent the fractures on the fine grid, we consider two approaches: (1) the discrete fracture model (DFM) and (2) the embedded fracture model (EFM), as well as their combination. In DFM, the fractures are resolved via the fine grid, while in EFM the fracture and fine-grid-block interaction is represented as a source term. In the proposed multiscale method, additional multiscale basis functions are used to represent the long fractures, while short fractures are collectively represented by a single basis function. The procedure is done automatically via local spectral problems. In this regard, our approach shares common concepts with several approaches proposed in the literature, as we discuss. We would like to emphasize that our goal is not to compare DFM with EFM, but rather to develop a GMsFEM framework which uses these (DFM or EFM) fine-grid discretization techniques. Numerical results are presented, where we demonstrate how one can adaptively add basis functions in the regions of interest based on error indicators. We also discuss the use of randomized snapshots (Calo et al. Randomized oversampling for generalized multiscale finite element methods, 2014), which reduces the offline computational cost.
Lin, Chin An
2015-12-01
Fused-silica packaging glass fabricated with a hierarchical structure, integrating small (ultrathin nanorods) and large (honeycomb nanowalls) features, demonstrated exceptional light-harvesting performance for solar applications, attributed to the subwavelength feature of the nanorods and the efficient scattering of the honeycomb nanowalls. Si solar cells covered with the hierarchically structured packaging glass exhibit a conversion efficiency enhanced by 5.2% at normal incidence, with the enhancement rising to 46% at an incident angle of 60°. The hierarchically structured packaging glass also shows excellent self-cleaning characteristics: 98.8% of the efficiency is maintained after 6 weeks of outdoor exposure, indicating that the nanostructured surface effectively repels polluting dust and particles. The presented self-cleaning, omnidirectional light-harvesting design using hierarchically structured packaging glass is a potentially universal scheme for practical solar applications.
Institute of Scientific and Technical Information of China (English)
马绅惟; 刘广钟
2014-01-01
Routing protocols play a very important role in underwater acoustic sensor networks. Based on the traditional TEEN protocol, a new routing protocol named HCM-TEEN (Hierarchical Cluster-communication Model on TEEN) is put forward. The improved algorithm sets a new threshold function for the cluster-head candidacy and elimination process, and then introduces a hierarchical cluster-communication model in the data transmission phase to optimize route selection in terms of distance and energy. Matlab simulation experiments show that HCM-TEEN outperforms the traditional algorithm in both network lifetime and average residual node energy.
Application of the Clustering Method in Molecular Dynamics Simulation of the Diffusion Coefficient
Institute of Scientific and Technical Information of China (English)
无
2008-01-01
Using molecular dynamics (MD) simulation, the diffusion of oxygen, methane, ammonia and carbon dioxide in water was simulated in the canonical (NVT) ensemble, and the diffusion coefficient was analyzed by the clustering method. Compared with the conventional method (using the Einstein model) and the differentiation-interval variation method, the results obtained by the clustering method used in this study are closer to the experimental values. This method therefore proved to be more reasonable than the other two.
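The conventional Einstein-relation route that the clustering method is compared against can be sketched as follows: estimate the diffusion coefficient from the slope of the mean-squared displacement of a trajectory, here a synthetic 3-D random walk with known D rather than an actual MD trajectory.

```python
import numpy as np

def msd(traj):
    # mean-squared displacement at a range of lag times
    n = len(traj)
    lags = np.arange(1, min(200, n // 4))
    return lags, np.array([np.mean(np.sum((traj[l:] - traj[:-l]) ** 2, axis=1))
                           for l in lags])

def diffusion_coefficient(traj, dt):
    lags, m = msd(traj)
    slope = np.polyfit(lags * dt, m, 1)[0]   # Einstein relation: MSD(t) ≈ 6 D t in 3-D
    return slope / 6.0

# synthetic Brownian trajectory: step variance 2 D dt per coordinate
rng = np.random.default_rng(0)
D_true, dt = 2.0, 0.01
steps = rng.normal(0.0, np.sqrt(2 * D_true * dt), (20000, 3))
traj = np.cumsum(steps, axis=0)
```

The paper's clustering method post-processes such estimates differently; the sketch only shows the baseline being compared against.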
Hierarchical Symbolic Analysis of Large Analog Circuits with Totally Coded Method
Institute of Scientific and Technical Information of China (English)
XU Jing-bo
2006-01-01
Symbolic analysis has many applications in the design of analog circuits. Existing approaches rely on two forms of symbolic-expression representation: expanded sum-of-product form and arbitrarily nested form. Expanded form suffers from the problem that the number of product terms grows exponentially with the size of a circuit. Nested form is neither canonical nor amenable to symbolic manipulation. In this paper, we present a new approach to exact and canonical symbolic analysis by exploiting the sparsity and sharing of product terms. This algorithm, called the totally coded method (TCM), consists of representing the symbolic determinant of a circuit matrix by code series and performing symbolic analysis by code manipulation. We describe an efficient code-ordering heuristic and prove that it is optimal for ladder-structured circuits. For practical analog circuits, TCM not only retains all the advantages of the determinant decision diagram (DDD) algorithm but is also simpler and more efficient than the DDD method.
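The core idea of sharing product terms can be illustrated by a memoized Laplace expansion, where identical minors, keyed by (row, remaining columns), are built only once. This is a generic sketch of determinant-term sharing in the spirit of DDDs, not the TCM code-series representation itself.

```python
import numpy as np

def det_expr(symbols):
    """Shared-subexpression representation of a symbolic determinant.

    symbols[i][j] names entry (i, j); None marks a structural zero (sparsity).
    Identical minors are constructed once via memoization.
    """
    n = len(symbols)
    cache = {}

    def minor(row, cols):  # determinant over rows row..n-1 and columns `cols`
        key = (row, cols)
        if key not in cache:
            if not cols:
                cache[key] = ('one',)
            else:
                terms = []
                for k, j in enumerate(sorted(cols)):
                    s = symbols[row][j]
                    if s is None:              # sparsity: skip zero entries
                        continue
                    sub = minor(row + 1, cols - {j})
                    terms.append((1 if k % 2 == 0 else -1, s, sub))
                cache[key] = ('sum', tuple(terms))
        return cache[key]

    return minor(0, frozenset(range(n))), cache

def evaluate(expr, values):
    # numeric evaluation of the shared expression, for checking correctness
    if expr[0] == 'one':
        return 1.0
    return sum(sign * values[s] * evaluate(sub, values)
               for sign, s, sub in expr[1])
```

For a dense 3x3 matrix the cache holds only 8 distinct minors, whereas a naive expansion would enumerate 6 separate product terms with repeated subproducts; the savings grow rapidly with circuit size.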
2009-08-14
This report concerns particle simulation of fluid flows at the mesoscale using the Smoothed Dissipative Particle Dynamics (SDPD) method developed by Espanol and Revenga; the coefficients in the SDPD model equations are given in Espanol, P. and Revenga, M., "Smoothed Dissipative Particle Dynamics," Physical Review E, 2003, Vol. 67.
DEFF Research Database (Denmark)
Stolpe, Mathias; Stidsen, Thomas K.
2005-01-01
of minimizing the weight of a structure subject to displacement and local design-dependent stress constraints. The method iteratively solves a sequence of problems of increasing size of the same type as the original problem. The problems are defined on a design mesh which is initially coarse...... from global optimization, which have only recently become available, for solving the problems in the sequence. Numerical examples of topology design problems of continuum structures with local stress and displacement constraints are presented....
DEFF Research Database (Denmark)
Stolpe, Mathias; Stidsen, Thomas K.
2007-01-01
of minimizing the weight of a structure subject to displacement and local design-dependent stress constraints. The method iteratively treats a sequence of problems of increasing size of the same type as the original problem. The problems are defined on a design mesh which is initially coarse...... from global optimization, which have only recently become available, for solving the problems in the sequence. Numerical examples of topology design problems of continuum structures with local stress and displacement constraints are presented....
2016-12-08
Recoverable figure and reference fragments: indentation response (loading and unloading from 30 Å) compared with the Hertzian prediction; thermal measurements by the flash method (Report No. UCRL-52565, Lawrence Livermore Laboratory, October 1978); and mass fractions of liquid RDX, intermediate, and gas phases defined by binning.
Hierarchical Cont-Bouchaud model
Paluch, Robert; Holyst, Janusz A
2015-01-01
We extend the well-known Cont-Bouchaud model to include a hierarchical topology of agents' interactions. The influence of hierarchy on system dynamics is investigated using two models. The first is based on a multi-level, nested Erdos-Renyi random graph, with individual decisions by agents following Potts dynamics; this approach does not lead to a broad return distribution outside a parameter regime close to the original Cont-Bouchaud model. In the second model we introduce a limited hierarchical Erdos-Renyi graph, where merging of clusters at level h+1 involves only clusters that have merged at the previous level h, and we use the original Cont-Bouchaud agent dynamics on the resulting clusters. The second model leads to a heavy-tailed distribution of cluster sizes and relative price changes over a wide range of connection densities, not only close to the percolation threshold.
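A minimal version of the original (non-hierarchical) Cont-Bouchaud dynamics that the paper extends can be sketched as follows; all parameter values are illustrative.

```python
import numpy as np

def cont_bouchaud_returns(n_agents=120, c=0.9, steps=200, p_trade=0.1, seed=0):
    """Toy returns from the basic Cont-Bouchaud herding model.

    Agents form an Erdos-Renyi graph with edge probability c/n; each
    connected cluster acts as a single trader that buys (+1), sells (-1)
    or abstains (0), and the return is the net demand per agent.
    """
    rng = np.random.default_rng(seed)
    returns = np.empty(steps)
    for t in range(steps):
        parent = list(range(n_agents))

        def find(i):  # union-find to extract connected clusters
            while parent[i] != i:
                parent[i] = parent[parent[i]]
                i = parent[i]
            return i

        upper = np.triu(rng.random((n_agents, n_agents)) < c / n_agents, k=1)
        for i, j in np.argwhere(upper):
            parent[find(i)] = find(j)
        sizes = {}
        for i in range(n_agents):
            sizes[find(i)] = sizes.get(find(i), 0) + 1
        # each cluster independently buys, sells or abstains
        demand = sum(s * rng.choice([-1, 0, 1], p=[p_trade, 1 - 2 * p_trade, p_trade])
                     for s in sizes.values())
        returns[t] = demand / n_agents
    return returns
```

Near the percolation threshold (c close to 1), the power-law cluster-size distribution induces heavy-tailed returns; the paper's hierarchical variants change how clusters are allowed to merge.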
Classification of excessive domestic water consumption using Fuzzy Clustering Method
Zairi Zaidi, A.; Rasmani, Khairul A.
2016-08-01
Demand for clean, treated water is increasing all over the world, so it is crucial to conserve water and to avoid unnecessary, excessive consumption or wastage of this natural resource. Classification of excessive domestic water consumption is a difficult task due to the complexity of determining the amount of water used per activity, especially as usage is known to vary between individuals. In this study, classification of excessive domestic water consumption is carried out using the well-known Fuzzy C-Means (FCM) clustering algorithm. Consumer data containing information on daily, weekly and monthly domestic water usage was employed for the classification. Using the same dataset, the result produced by the FCM clustering algorithm is compared with the result obtained from a statistical control chart. The findings of this study demonstrate the potential of the FCM clustering algorithm for classifying domestic consumer water consumption data.
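A compact FCM sketch showing the standard alternating updates of fuzzy memberships and cluster centers (the generic algorithm, not the consumer-data study itself; the two-blob test data are synthetic):

```python
import numpy as np

def fuzzy_c_means(X, c=2, m=2.0, iters=100, seed=0):
    """Standard Fuzzy C-Means with fuzzifier m > 1."""
    rng = np.random.default_rng(seed)
    u = rng.random((len(X), c))
    u /= u.sum(axis=1, keepdims=True)            # memberships: rows sum to 1
    for _ in range(iters):
        w = u ** m
        centers = (w.T @ X) / w.sum(axis=0)[:, None]   # weighted centroids
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        # u[i,k] = 1 / sum_j (d[i,k] / d[i,j]) ** (2 / (m - 1))
        u = 1.0 / np.sum((d[:, :, None] / d[:, None, :]) ** (2 / (m - 1)), axis=2)
    return centers, u
```

Unlike K-means, each consumer keeps a graded membership in every cluster, so "excessive" usage can be read off as a degree rather than a hard label.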
Robust multi-scale clustering of large DNA microarray datasets with the consensus algorithm
DEFF Research Database (Denmark)
Grotkjær, Thomas; Winther, Ole; Regenberg, Birgitte
2006-01-01
Motivation: Hierarchical and relocation clustering (e.g. K-means and self-organizing maps) have been successful tools in the display and analysis of whole genome DNA microarray expression data. However, the results of hierarchical clustering are sensitive to outliers, and most relocation methods...... analysis by collecting re-occurring clustering patterns in a co-occurrence matrix. The results show that consensus clustering obtained from clustering multiple times with Variational Bayes Mixtures of Gaussians or K-means significantly reduces the classification error rate for a simulated dataset....... The method is flexible and it is possible to find consensus clusters from different clustering algorithms. Thus, the algorithm can be used as a framework to test in a quantitative manner the homogeneity of different clustering algorithms. We compare the method with a number of state-of-the-art clustering...
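The co-occurrence (consensus) matrix described above can be sketched as follows: cluster the data many times with randomly initialized K-means and record the fraction of runs in which each pair of points lands in the same cluster. This is a simplified stand-in for the paper's Variational Bayes mixtures; the K-means routine and parameters are illustrative.

```python
import numpy as np

def kmeans_labels(X, k, rng, iters=20):
    # plain Lloyd's algorithm with random initial centers drawn from the data
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(X[:, None] - centers[None], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels

def consensus_matrix(X, k, runs=20, seed=0):
    """Fraction of runs in which each pair of points co-clusters."""
    rng = np.random.default_rng(seed)
    co = np.zeros((len(X), len(X)))
    for _ in range(runs):
        labels = kmeans_labels(X, k, rng)
        co += labels[:, None] == labels[None, :]
    return co / runs
```

Entries near 1 identify robust clusters that survive random restarts, which is what makes the consensus approach less sensitive to outliers and bad initializations than any single run.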
Šubelj, Lovro; Waltman, Ludo
2015-01-01
Clustering methods are applied regularly in the bibliometric literature to identify research areas or scientific fields. These methods are for instance used to group publications into clusters based on their relations in a citation network. In the network science literature, many clustering methods, often referred to as graph partitioning or community detection techniques, have been developed. Focusing on the problem of clustering the publications in a citation network, we present a systematic comparison of the performance of a large number of these clustering methods. Using a number of different citation networks, some of them relatively small and others very large, we extensively study the statistical properties of the results provided by different methods. In addition, we also carry out an expert-based assessment of the results produced by different methods. The expert-based assessment focuses on publications in the field of scientometrics. Our findings seem to indicate that there is a trade-off between di...
Method for discovering relationships in data by dynamic quantum clustering
Energy Technology Data Exchange (ETDEWEB)
Weinstein, Marvin; Horn, David
2014-10-28
Data clustering is provided according to a dynamical framework based on quantum mechanical time evolution of states corresponding to data points. To expedite computations, we can approximate the time-dependent Hamiltonian formalism by a truncated calculation within a set of Gaussian wave-functions (coherent states) centered around the original points. This allows for analytic evaluation of the time evolution of all such states, opening up the possibility of exploration of relationships among data-points through observation of varying dynamical-distances among points and convergence of points into clusters. This formalism may be further supplemented by preprocessing, such as dimensional reduction through singular value decomposition and/or feature filtering.
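For intuition, the static quantum-clustering potential of Horn and Gottlieb, the precursor that the dynamic method evolves in time, can be evaluated on a grid; its minima mark cluster centres. This is a simplified 1-D illustration, not the patented dynamic evolution of coherent states, and sigma and the data are illustrative.

```python
import numpy as np

def quantum_potential(grid, data, sigma=0.5):
    """Potential V with minima at cluster centres (up to an additive constant).

    With the Parzen wave-function psi(x) = sum_i exp(-(x - x_i)^2 / (2 sigma^2)),
    the quantum potential (sigma^2 / 2) * psi'' / psi reduces, for Gaussians,
    to a membership-weighted mean-squared distance to the data points.
    """
    d2 = (grid[:, None] - data[None, :]) ** 2
    w = np.exp(-d2 / (2 * sigma ** 2))
    w /= w.sum(axis=1, keepdims=True)            # normalised Gaussian weights
    return (w * d2).sum(axis=1) / (2 * sigma ** 2) - 0.5
```

In the dynamic version, data points themselves are evolved under such a potential and converge toward its minima, revealing cluster structure without fixing the number of clusters in advance.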
Method for discovering relationships in data by dynamic quantum clustering
Energy Technology Data Exchange (ETDEWEB)
Weinstein, Marvin; Horn, David
2017-05-09
Data clustering is provided according to a dynamical framework based on quantum mechanical time evolution of states corresponding to data points. To expedite computations, we can approximate the time-dependent Hamiltonian formalism by a truncated calculation within a set of Gaussian wave-functions (coherent states) centered around the original points. This allows for analytic evaluation of the time evolution of all such states, opening up the possibility of exploration of relationships among data-points through observation of varying dynamical-distances among points and convergence of points into clusters. This formalism may be further supplemented by preprocessing, such as dimensional reduction through singular value decomposition and/or feature filtering.
Systematic hierarchical coarse-graining with the inverse Monte Carlo method
Energy Technology Data Exchange (ETDEWEB)
Lyubartsev, Alexander P., E-mail: alexander.lyubartsev@mmk.su.se [Division of Physical Chemistry, Arrhenius Laboratory, Stockholm University, S 106 91 Stockholm (Sweden); Naômé, Aymeric, E-mail: aymeric.naome@unamur.be [Division of Physical Chemistry, Arrhenius Laboratory, Stockholm University, S 106 91 Stockholm (Sweden); UCPTS Division, University of Namur, 61 Rue de Bruxelles, B 5000 Namur (Belgium); Vercauteren, Daniel P., E-mail: daniel.vercauteren@unamur.be [UCPTS Division, University of Namur, 61 Rue de Bruxelles, B 5000 Namur (Belgium); Laaksonen, Aatto, E-mail: aatto@mmk.su.se [Division of Physical Chemistry, Arrhenius Laboratory, Stockholm University, S 106 91 Stockholm (Sweden); Science for Life Laboratory, 17121 Solna (Sweden)
2015-12-28
We outline our coarse-graining strategy for linking the micro- and mesoscales of soft matter and biological systems. The method is based on effective pairwise interaction potentials obtained from detailed ab initio or classical atomistic Molecular Dynamics (MD) simulations, which can be used in simulations at a less detailed level after scaling up the system size. The effective potentials are obtained by applying the inverse Monte Carlo (IMC) method [A. P. Lyubartsev and A. Laaksonen, Phys. Rev. E 52(4), 3730–3737 (1995)] to a chosen subset of degrees of freedom described in terms of radial distribution functions. An in-house software package, MagiC, has been developed to obtain the effective potentials for arbitrary molecular systems. In this work we compute effective potentials to model DNA-protein interactions (the bacterial LiaR regulator bound to a 26-base-pair DNA fragment) at physiological salt concentration at a coarse-grained (CG) level. Normally the IMC CG pair potentials are used directly as look-up tables, but here we have fitted them to five Gaussians and a repulsive wall. Results show stable association between the DNA and the model protein, as well as similar position fluctuation profiles.
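The final fitting step, representing a tabulated pair potential by Gaussians plus a repulsive wall, can be sketched by fixing the Gaussian centres and width so that the amplitudes (and the wall coefficient) follow from linear least squares. The paper does not specify its exact fitting procedure, and the LJ-style test potential, basis centres, width, and wall exponent below are all illustrative assumptions.

```python
import numpy as np

def fit_potential(r, v, centers, width, wall_power=12):
    """Linear least-squares fit of tabulated v(r) to Gaussians + r^-p wall."""
    basis = [np.exp(-(r - c) ** 2 / (2 * width ** 2)) for c in centers]
    basis.append(r ** -float(wall_power))        # repulsive wall term
    A = np.stack(basis, axis=1)                  # design matrix, one column per term
    coeffs, *_ = np.linalg.lstsq(A, v, rcond=None)
    return coeffs, A @ coeffs
```

The payoff of such an analytic form over a look-up table is that forces (derivatives) are smooth and cheap to evaluate inside the CG simulation.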
Energy Technology Data Exchange (ETDEWEB)
Dai, Heng [Pacific Northwest National Laboratory, Richland Washington USA; Chen, Xingyuan [Pacific Northwest National Laboratory, Richland Washington USA; Ye, Ming [Department of Scientific Computing, Florida State University, Tallahassee Florida USA; Song, Xuehang [Pacific Northwest National Laboratory, Richland Washington USA; Zachara, John M. [Pacific Northwest National Laboratory, Richland Washington USA
2017-05-01
Sensitivity analysis is an important tool for quantifying uncertainty in the outputs of mathematical models, especially for complex systems with a high dimension of spatially correlated parameters. Variance-based global sensitivity analysis has gained popularity because it can quantify the relative contribution of uncertainty from different sources. However, its computational cost increases dramatically with the complexity of the considered model and the dimension of the model parameters. In this study we developed a hierarchical sensitivity analysis method that (1) constructs an uncertainty hierarchy by analyzing the input uncertainty sources and (2) accounts for the spatial correlation among parameters at each level of the hierarchy using geostatistical tools. The contribution of the uncertainty source at each level of the hierarchy is measured by sensitivity indices calculated using the variance decomposition method. Using this methodology, we identified the most important uncertainty sources for a dynamic groundwater flow and solute transport model at the Department of Energy (DOE) Hanford site. The results indicate that the boundary conditions and the permeability field contribute the most uncertainty to the simulated head field and tracer plume, respectively. The relative contribution from each source varied spatially and temporally, driven by the dynamic interaction between groundwater and river water at the site. By using a geostatistical approach to reduce the number of realizations needed for the sensitivity analysis, the computational cost of implementing the developed method was reduced to a practically manageable level. The developed sensitivity analysis method is generally applicable to a wide range of hydrologic and environmental problems that deal with high-dimensional, spatially distributed parameters.
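The variance-decomposition building block can be sketched with a brute-force first-order index estimator, Var(E[Y | X_i]) / Var(Y), where the conditional expectation is estimated by binning each input. The hierarchical, geostatistical machinery of the paper is not reproduced; the additive test model below is illustrative.

```python
import numpy as np

def first_order_indices(X, y, bins=20):
    """First-order variance-based sensitivity indices by conditional binning."""
    var_y = y.var()
    indices = []
    for i in range(X.shape[1]):
        # equal-probability bins along input i
        edges = np.quantile(X[:, i], np.linspace(0, 1, bins + 1))
        which = np.clip(np.searchsorted(edges, X[:, i]) - 1, 0, bins - 1)
        cond_mean = np.array([y[which == b].mean() for b in range(bins)])
        weights = np.array([(which == b).mean() for b in range(bins)])
        # Var(E[Y | X_i]) / Var(Y)
        indices.append(np.sum(weights * (cond_mean - y.mean()) ** 2) / var_y)
    return np.array(indices)
```

For Y = 3*X1 + X2 with independent uniform inputs, the exact indices are 0.9 and 0.1; grouping spatially correlated parameters into hierarchy levels, as the paper does, is what keeps the number of such indices manageable.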