Application of the cluster variation method to interstitial solid solutions
Pekelharing, M.I.
2008-01-01
A thermodynamic model for interstitial alloys, based on the Cluster Variation Method (CVM), has been developed, capable of incorporating short range ordering (SRO), long range ordering (LRO), and the mutual interaction between the host and the interstitial sublattices. The obtained cluster-based
The Cluster Variation Method: A Primer for Neuroscientists
Directory of Open Access Journals (Sweden)
Alianna J. Maren
2016-09-01
Effective Brain–Computer Interfaces (BCIs) require that the time-varying activation patterns of 2-D neural ensembles be modelled. The cluster variation method (CVM) offers a means for the characterization of 2-D local pattern distributions. This paper provides neuroscientists and BCI researchers with a CVM tutorial that will help them to understand how the CVM statistical thermodynamics formulation can model 2-D pattern distributions expressing structural and functional dynamics in the brain. The premise is that local-in-time free energy minimization works alongside neural connectivity adaptation, supporting the development and stabilization of consistent stimulus-specific responsive activation patterns. The equilibrium distribution of local patterns, or configuration variables, is defined in terms of a single interaction enthalpy parameter (h) for the case of an equiprobable distribution of bistate (neural/neural-ensemble) units. Thus, either one enthalpy parameter (or two, for the case of a non-equiprobable distribution) yields equilibrium configuration variable values. Modeling 2-D neural activation distribution patterns with the representational layer of a computational engine, we can thus correlate variational free energy minimization with specific configuration variable distributions. The CVM triplet configuration variables also map well to the notion of an M = 3 functional motif. This paper addresses the special case of an equiprobable unit distribution, for which an analytic solution can be found.
Eldridge, Sandra M; Ashby, Deborah; Kerry, Sally
2006-10-01
Cluster randomized trials are increasingly popular. In many of these trials, cluster sizes are unequal. This can affect trial power, but standard sample size formulae for these trials ignore this. Previous studies addressing this issue have mostly focused on continuous outcomes or methods that are sometimes difficult to use in practice. We show how a simple formula can be used to judge the possible effect of unequal cluster sizes for various types of analyses and both continuous and binary outcomes. We explore the practical estimation of the coefficient of variation of cluster size required in this formula and demonstrate the formula's performance for a hypothetical but typical trial randomizing UK general practices. The simple formula provides a good estimate of sample size requirements for trials analysed using cluster-level analyses weighting by cluster size and a conservative estimate for other types of analyses. For trials randomizing UK general practices the coefficient of variation of cluster size depends on variation in practice list size, variation in incidence or prevalence of the medical condition under examination, and practice and patient recruitment strategies, and for many trials is expected to be approximately 0.65. Individual-level analyses can be noticeably more efficient than some cluster-level analyses in this context. When the coefficient of variation is <0.23, the effect of adjustment for variable cluster size on sample size is negligible. Most trials randomizing UK general practices and many other cluster randomized trials should account for variable cluster size in their sample size calculations.
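The sample-size adjustment this abstract describes can be sketched numerically. The sketch below assumes the standard design-effect approximation with a coefficient-of-variation term, 1 + ((cv^2 + 1)·m − 1)·ICC; the function and variable names are illustrative, not taken from the paper:

```python
def design_effect(mean_cluster_size, icc, cv=0.0):
    """Approximate sample-size inflation for a cluster randomized trial.

    With equal clusters (cv = 0) this reduces to the familiar design
    effect 1 + (m - 1) * icc; the cv term adjusts for variable cluster
    size. icc is the intracluster correlation coefficient.
    """
    m = mean_cluster_size
    return 1.0 + ((cv ** 2 + 1.0) * m - 1.0) * icc

# Equal clusters of 20 patients, ICC = 0.05:
de_equal = design_effect(20, 0.05)
# Same mean size, but cv = 0.65 (the value the abstract reports as
# typical for trials randomizing UK general practices):
de_unequal = design_effect(20, 0.05, 0.65)
```

With cv = 0.65 the inflation factor rises noticeably relative to the equal-cluster case, which is why the abstract recommends accounting for variable cluster size; for cv below about 0.23 the difference is negligible.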
Steenbergen, K G; Gaston, N
2014-02-14
Inspired by methods of remote sensing image analysis, we analyze structural variation in cluster molecular dynamics (MD) simulations through a unique application of the principal component analysis (PCA) and Pearson Correlation Coefficient (PCC). The PCA analysis characterizes the geometric shape of the cluster structure at each time step, yielding a detailed and quantitative measure of structural stability and variation at finite temperature. Our PCC analysis captures bond structure variation in MD, which can be used to both supplement the PCA analysis as well as compare bond patterns between different cluster sizes. Relying only on atomic position data, without requirement for a priori structural input, PCA and PCC can be used to analyze both classical and ab initio MD simulations for any cluster composition or electronic configuration. Taken together, these statistical tools represent powerful new techniques for quantitative structural characterization and isomer identification in cluster MD.
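The per-frame PCA of atomic positions can be illustrated with a minimal sketch: diagonalizing the gyration tensor of one MD frame yields rotation-invariant eigenvalues whose ratios describe the cluster's geometric shape and can be tracked over time. This is a generic reconstruction of the idea, not the authors' code:

```python
import numpy as np

def shape_descriptor(coords):
    """Eigenvalues of the gyration tensor of one cluster frame.

    coords: (N, 3) array of atomic positions at a single time step.
    Returns the eigenvalues in descending order; their ratios give a
    rotation-invariant measure of cluster shape (e.g. a linear chain
    has one dominant eigenvalue, a sphere has three equal ones).
    """
    x = coords - coords.mean(axis=0)     # remove the centre of mass
    gyration = x.T @ x / len(x)          # 3x3 gyration tensor
    vals = np.linalg.eigvalsh(gyration)  # ascending eigenvalues
    return vals[::-1]

# A perfectly linear chain has a single non-zero eigenvalue:
chain = np.array([[float(i), 0.0, 0.0] for i in range(5)])
vals = shape_descriptor(chain)
```

Computing this descriptor for every time step of a trajectory gives a quantitative time line of structural variation at finite temperature, in the spirit of the abstract.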
Cluster variational method for nuclear matter with the three-body force
Energy Technology Data Exchange (ETDEWEB)
Takano, M.; Togashi, H.; Yamamuro, S.; Nakazato, K.; Suzuki, H. [Research Institute for Science and Engineering, Waseda University, 3-4-1 Okubo, Shinjuku-ku, Tokyo 169-8555 Japan and Department of Physics and Applied Physics, Waseda University, 3-4-1 Okubo, Shinjuku-ku, Tokyo 169-8555 (Japan); Department of Physics and Applied Physics, Waseda University, 3-4-1 Okubo, Shinjuku-ku, Tokyo 169-8555 (Japan); Department of Physics, Faculty of Science and Technology, Tokyo University of Science, Yamazaki 2641, Noda, Chiba 278-8510 (Japan)
2012-11-12
We report the current status of our project to construct a new nuclear equation of state (EOS), which may be used for supernova numerical simulations, based on the cluster variational method starting from the realistic nuclear Hamiltonian. We also take into account a higher-order correction to the energy of the nuclear three-body force (TBF). The nuclear EOSs with and without the higher-order TBF correction at zero temperature are very close to each other, when parameters are readjusted so as to reproduce the empirical saturation data.
Georgescu, Ionuţ; Mandelshtam, Vladimir A
2011-10-21
The variational Gaussian wavepacket (VGW) approximation provides an alternative to path integral Monte Carlo for the computation of thermodynamic properties of many-body systems at thermal equilibrium. It provides a direct access to the thermal density matrix and is particularly efficient for Monte Carlo approaches, as for an N-body system it operates in a non-inflated 3N-dimensional configuration space. Here, we greatly accelerate the VGW method by retaining only the relevant short-range correlations in the (otherwise full) 3N × 3N Gaussian width matrix without sacrificing the accuracy of the fully coupled VGW method. This results in the reduction of the original O(N(3)) scaling to O(N(2)). The fast-VGW method is then applied to quantum Lennard-Jones clusters with sizes up to N = 6500 atoms. Following Doye and Calvo [JCP 116, 8307 (2002)] we study the competition between the icosahedral and decahedral structural motifs in Ne(N) clusters as a function of N.
A new EEG measure using the 1D cluster variation method
Maren, Alianna J.; Szu, Harold H.
2015-05-01
A new information measure, drawing on the 1-D Cluster Variation Method (CVM), describes local pattern distributions (nearest-neighbor and next-nearest neighbor) in a binary 1-D vector in terms of a single interaction enthalpy parameter h for the specific case where the fractions of elements in each of two states are the same (x1=x2=0.5). An example application of this method would be for EEG interpretation in Brain-Computer Interfaces (BCIs), especially in the frontier of invariant biometrics based on distinctive and invariant individual responses to stimuli containing an image of a person with whom there is a strong affiliative response (e.g., to a person's grandmother). This measure is obtained by mapping EEG observed configuration variables (z1, z2, z3 for next-nearest neighbor triplets) to h using the analytic function giving h in terms of these variables at equilibrium. This mapping results in a small phase space region of resulting h values, which characterizes local pattern distributions in the source data. The 1-D vector with equal fractions of units in each of the two states can be obtained using the method for transforming natural images into a binarized equi-probability ensemble (Saremi & Sejnowski, 2014; Stephens et al., 2013). An intrinsically 2-D data configuration can be mapped to 1-D using the 1-D Peano-Hilbert space-filling curve, which has demonstrated a 20 dB lower baseline using the method compared with other approaches (cf. SPIE ICA etc. by Hsu & Szu, 2014). This CVM-based method has multiple potential applications; one near-term application is optimizing classification of the EEG signals from a COTS 1-D BCI baseball hat. This can result in a convenient 3-D lab-tethered EEG, configured in a 1-D CVM equiprobable binary vector, and potentially useful for Smartphone wireless display. Longer-range applications include interpreting neural assembly activations via high-density implanted soft, cellular-scale electrodes.
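The configuration variables mentioned above can be illustrated by counting local patterns in a binary vector. The grouping of overlapping triplets into z1, z2, z3 below is one plausible symmetry-class convention (all-alike, mixed, alternating); it is a sketch of the idea, not necessarily the exact definition used in the paper:

```python
def triplet_fractions(v):
    """Fractions of local triplet patterns in a binary 1-D vector.

    Groups overlapping triplets (v[i], v[i+1], v[i+2]) by pattern:
    z1 = all three units alike, z3 = alternating (A-B-A or B-A-B),
    z2 = the remaining mixed patterns. The fractions sum to 1.
    """
    n = len(v) - 2
    counts = {"z1": 0, "z2": 0, "z3": 0}
    for i in range(n):
        a, b, c = v[i], v[i + 1], v[i + 2]
        if a == b == c:
            counts["z1"] += 1
        elif a == c != b:
            counts["z3"] += 1
        else:
            counts["z2"] += 1
    return {k: cnt / n for k, cnt in counts.items()}

# A strictly alternating vector: every triplet is A-B-A or B-A-B.
z = triplet_fractions([0, 1] * 8)
```

Mapping such measured fractions back to the enthalpy parameter h, via the analytic equilibrium relation the abstract refers to, is what turns these counts into the proposed EEG measure.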
Cycle-Based Cluster Variational Method for Direct and Inverse Inference
Furtlehner, Cyril; Decelle, Aurélien
2016-08-01
Large scale inference problems of practical interest can often be addressed with the help of Markov random fields. This requires, in principle, solving two related problems: the first is to find offline the parameters of the MRF from empirical data (the inverse problem); the second (the direct problem) is to set up the inference algorithm to make it as precise, robust and efficient as possible. In this work we address both the direct and inverse problems with mean-field methods of statistical physics, going beyond the Bethe approximation and the associated belief propagation algorithm. We elaborate on the idea that loop corrections to belief propagation can be dealt with in a systematic way on pairwise Markov random fields, by using the elements of a cycle basis to define regions in a generalized belief propagation setting. For the direct problem, the region graph is specified in such a way as to avoid feed-back loops as much as possible by selecting a minimal cycle basis. Following this line we are led to propose a two-level algorithm, where a belief propagation algorithm is run alternately at the level of each cycle and at the inter-region level. Next we observe that the inverse problem can be addressed region by region independently, with one small inverse problem per region to be solved. It turns out that each elementary inverse problem on the loop geometry can be solved efficiently. In particular, in the random Ising context we propose two complementary methods based respectively on fixed point equations and on a one-parameter log-likelihood function minimization. Numerical experiments confirm the effectiveness of this approach both for direct and inverse MRF inference. Heterogeneous problems of size up to 10^5 are addressed in a reasonable computational time, notably with better convergence properties than ordinary belief propagation.
Unconventional methods for clustering
Kotyrba, Martin
2016-06-01
Cluster analysis or clustering is the task of grouping a set of objects in such a way that objects in the same group (called a cluster) are more similar (in some sense or another) to each other than to those in other groups (clusters). It is the main task of exploratory data mining and a common technique for statistical data analysis used in many fields, including machine learning, pattern recognition, image analysis, information retrieval, and bioinformatics. The topic of this paper is one of the modern methods of clustering, namely the SOM (Self-Organising Map). The paper describes the theory needed to understand the principle of clustering, together with descriptions of the algorithms used for clustering in our experiments.
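The SOM idea the abstract names can be shown in a toy form: nodes live on a grid (here a line), each input pulls its best-matching node and, more weakly, that node's grid neighbours, with the learning rate and neighbourhood width decaying over time. This is a minimal sketch with illustrative hyperparameters, not a tuned implementation:

```python
import math
import random

def train_som(data, n_nodes=4, epochs=200, lr0=0.5, sigma0=2.0, seed=0):
    """Minimal 1-D self-organising map for scalar data."""
    rng = random.Random(seed)
    w = [rng.uniform(min(data), max(data)) for _ in range(n_nodes)]
    for t in range(epochs):
        frac = t / epochs
        lr = lr0 * (1.0 - frac)                 # decaying learning rate
        sigma = sigma0 * (1.0 - frac) + 0.5     # decaying neighbourhood
        x = rng.choice(data)
        # best-matching unit: node whose weight is closest to the input
        bmu = min(range(n_nodes), key=lambda j: abs(w[j] - x))
        for j in range(n_nodes):
            h = math.exp(-((j - bmu) ** 2) / (2 * sigma ** 2))
            w[j] += lr * h * (x - w[j])
    return w

# Two well-separated value groups; the trained nodes spread to cover them.
weights = train_som([0.0, 0.1, 0.2, 5.0, 5.1, 5.2])
```

A production SOM would use a 2-D node grid and vector inputs, but the update rule is the same.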
Variation in verb cluster interruption
Hendriks, Lotte
2014-01-01
Except for finite verbs in main clauses, verbs in Standard Dutch cluster together in a clause-final position. In certain Dutch dialects, non-verbal material can occur within this verb cluster (Verhasselt 1961; Koelmans 1965, among many others). These dialects vary with respect to which types of
Cluster-based exposure variation analysis.
Samani, Afshin; Mathiassen, Svend Erik; Madeleine, Pascal
2013-04-04
Static posture, repetitive movements and lack of physical variation are known risk factors for work-related musculoskeletal disorders, and thus need to be properly assessed in occupational studies. The aims of this study were (i) to investigate the effectiveness of a conventional exposure variation analysis (EVA) in discriminating exposure time lines and (ii) to compare it with a new cluster-based method for analysis of exposure variation. For this purpose, we simulated a repeated cyclic exposure varying within each cycle between "low" and "high" exposure levels in a "near" or "far" range, and with "low" or "high" velocities (exposure change rates). The duration of each cycle was also manipulated by selecting a "small" or "large" standard deviation of the cycle time. These parameters reflected three dimensions of exposure variation, i.e. range, frequency and temporal similarity. Each simulation trace included two realizations of 100 concatenated cycles with either low (ρ = 0.1), medium (ρ = 0.5) or high (ρ = 0.9) correlation between the realizations. These traces were analyzed by conventional EVA, and a novel cluster-based EVA (C-EVA). Principal component analysis (PCA) was applied on the marginal distributions of 1) the EVA of each of the realizations (univariate approach), 2) a combination of the EVA of both realizations (multivariate approach) and 3) C-EVA. The least number of principal components describing more than 90% of variability in each case was selected and the projection of marginal distributions along the selected principal component was calculated. A linear classifier was then applied to these projections to discriminate between the simulated exposure patterns, and the accuracy of classified realizations was determined. C-EVA classified exposures more correctly than the univariate and multivariate EVA approaches; classification accuracy was 49%, 47%, and 52% for univariate EVA, multivariate EVA, and C-EVA, respectively.
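The conventional EVA the abstract builds on classifies the fraction of total time spent in uninterrupted periods jointly by exposure level and period duration. The sketch below implements that idea for a 1-D signal; the bin edges are illustrative, since binning conventions vary between studies:

```python
def eva_matrix(signal, level_edges, duration_edges):
    """Exposure variation analysis matrix: fraction of total time in
    uninterrupted runs, classified by exposure level and run duration."""

    def level_class(x):
        for i, edge in enumerate(level_edges):
            if x < edge:
                return i
        return len(level_edges)

    def duration_class(d):
        for i, edge in enumerate(duration_edges):
            if d < edge:
                return i
        return len(duration_edges)

    # split the time line into runs of constant level class
    runs = []
    run_class, run_len = level_class(signal[0]), 1
    for x in signal[1:]:
        c = level_class(x)
        if c == run_class:
            run_len += 1
        else:
            runs.append((run_class, run_len))
            run_class, run_len = c, 1
    runs.append((run_class, run_len))

    matrix = [[0.0] * (len(duration_edges) + 1)
              for _ in range(len(level_edges) + 1)]
    for c, d in runs:
        matrix[c][duration_class(d)] += d / len(signal)
    return matrix

# 10 samples low, 2 high, 4 low: three runs of durations 10, 2, 4.
sig = [0.1] * 10 + [0.9] * 2 + [0.1] * 4
m = eva_matrix(sig, level_edges=[0.5], duration_edges=[3, 8])
```

The C-EVA method of the abstract replaces the fixed bins with data-driven clusters of the marginal distributions; this fixed-bin version is the baseline it is compared against.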
The Schwinger Variational Method
Huo, Winifred M.
1995-01-01
Variational methods have proven invaluable in theoretical physics and chemistry, both for bound state problems and for the study of collision phenomena. For collisional problems they can be grouped into two types: those based on the Schroedinger equation and those based on the Lippmann-Schwinger equation. The application of the Schwinger variational (SV) method to e-molecule collisions and photoionization has been reviewed previously. The present chapter discusses the implementation of the SV method as applied to e-molecule collisions.
The SMART CLUSTER METHOD - adaptive earthquake cluster analysis and declustering
Schaefer, Andreas; Daniell, James; Wenzel, Friedemann
2016-04-01
Earthquake declustering is an essential part of almost any statistical analysis of spatial and temporal properties of seismic activity, with usual applications comprising probabilistic seismic hazard assessments (PSHAs) and earthquake prediction methods. The nature of earthquake clusters and subsequent declustering of earthquake catalogues plays a crucial role in determining the magnitude-dependent earthquake return period and its respective spatial variation. Various methods have been developed by other researchers to address this issue, with ranges of complexity from rather simple statistical window methods to complex epidemic models. This study introduces the smart cluster method (SCM), a new methodology to identify earthquake clusters, which uses an adaptive point process for spatio-temporal identification. Hereby, an adaptive search algorithm for data point clusters is adopted. It uses the earthquake density in the spatio-temporal neighbourhood of each event to adjust the search properties. The identified clusters are subsequently analysed to determine directional anisotropy, focussing on a strong correlation along the rupture plane, and the method adjusts its search space with respect to directional properties. In the case of rapid subsequent ruptures like the 1992 Landers sequence or the 2010/2011 Darfield-Christchurch events, an adaptive classification procedure is applied to disassemble subsequent ruptures which may have been grouped into an individual cluster, using near-field searches, support vector machines and temporal splitting. The steering parameters of the search behaviour are linked to local earthquake properties like magnitude of completeness, earthquake density and Gutenberg-Richter parameters. The method is capable of identifying and classifying earthquake clusters in space and time. It is tested and validated using earthquake data from California and New Zealand. As a result of the cluster identification process, each event in
Splines and variational methods
Prenter, P M
2008-01-01
One of the clearest available introductions to variational methods, this text requires only a minimal background in calculus and linear algebra. Its self-contained treatment explains the application of theoretic notions to the kinds of physical problems that engineers regularly encounter. The text's first half concerns approximation theoretic notions, exploring the theory and computation of one- and two-dimensional polynomial and other spline functions. Later chapters examine variational methods in the solution of operator equations, focusing on boundary value problems in one and two dimensions.
Progress in variational methods
Institute of Scientific and Technical Information of China (English)
2008-01-01
The International Conference on Variational Methods (ICVAM) was held from May 20th to 26th, 2007, at the Chern Institute of Mathematics, Nankai University, Tianjin, China. Twenty-eight invited speakers from ten countries and areas worldwide gave their lectures at the conference.
Cluster variation studies of the anisotropic exchange interaction model
King, T. C.; Chen, H. H.
The cluster variation method is applied to study critical properties of the Potts-like ferromagnetic anisotropic exchange interaction model. Phase transition temperatures, order parameter discontinuities and latent heats of the model on the triangular and the fcc lattices are determined by the triangle approximation; and those on the square and the sc lattices are determined by the square approximation.
Sanfilippo, Antonio [Richland, WA; Calapristi, Augustin J [West Richland, WA; Crow, Vernon L [Richland, WA; Hetzler, Elizabeth G [Kennewick, WA; Turner, Alan E [Kennewick, WA
2009-12-22
Document clustering methods, document cluster label disambiguation methods, document clustering apparatuses, and articles of manufacture are described. In one aspect, a document clustering method includes providing a document set comprising a plurality of documents, providing a cluster comprising a subset of the documents of the document set, using a plurality of terms of the documents, providing a cluster label indicative of subject matter content of the documents of the cluster, wherein the cluster label comprises a plurality of word senses, and selecting one of the word senses of the cluster label.
Semi-supervised clustering methods
Bair, Eric
2013-01-01
Cluster analysis methods seek to partition a data set into homogeneous subgroups. It is useful in a wide variety of applications, including document processing and modern genetics. Conventional clustering methods are unsupervised, meaning that there is no outcome variable nor is anything known about the relationship between the observations in the data set. In many situations, however, information about the clusters is available in addition to the values of the features. For example, the cluster labels of some observations may be known, or certain observations may be known to belong to the same cluster. In other cases, one may wish to identify clusters that are associated with a particular outcome variable. This review describes several clustering algorithms (known as “semi-supervised clustering” methods) that can be applied in these situations. The majority of these methods are modifications of the popular k-means clustering method, and several of them will be described in detail. A brief description of some other semi-supervised clustering algorithms is also provided. PMID:24729830
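One of the simplest semi-supervised schemes the review covers is seeding k-means from labelled points: initial centroids are the means of the known-label observations, and the k-means updates themselves are unchanged. The sketch below illustrates this; it is a generic seeded k-means, not any specific algorithm from the review:

```python
def seeded_kmeans(points, seeds, n_iter=10):
    """k-means initialised from labelled seed points.

    points: list of coordinate tuples; seeds: dict mapping a cluster
    label to the points known to belong to that cluster. Initial
    centroids are the seed means.
    """
    def mean(pts):
        return tuple(sum(p[d] for p in pts) / len(pts)
                     for d in range(len(pts[0])))

    labels = sorted(seeds)
    centroids = {lab: mean(pts) for lab, pts in seeds.items()}
    assign = {}
    for _ in range(n_iter):
        # assignment step: nearest centroid by squared distance
        assign = {lab: [] for lab in labels}
        for p in points:
            best = min(labels, key=lambda lab: sum(
                (a - b) ** 2 for a, b in zip(p, centroids[lab])))
            assign[best].append(p)
        # update step (keep the old centroid if a cluster empties)
        for lab in labels:
            if assign[lab]:
                centroids[lab] = mean(assign[lab])
    return centroids, assign

pts = [(0.0, 0.0), (0.1, 0.2), (0.2, 0.1), (5.0, 5.0), (5.1, 4.9)]
cents, groups = seeded_kmeans(pts, {"a": [(0.0, 0.1)], "b": [(5.0, 5.1)]})
```

Constraint-based variants instead enforce must-link or cannot-link pairs during the assignment step, but the overall structure is the same.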
Quantum Monte Carlo methods and lithium cluster properties. [Atomic clusters
Energy Technology Data Exchange (ETDEWEB)
Owen, R.K.
1990-12-01
Properties of small lithium clusters with sizes ranging from n = 1 to 5 atoms were investigated using quantum Monte Carlo (QMC) methods. Cluster geometries were found from complete active space self consistent field (CASSCF) calculations. A detailed development of the QMC method leading to the variational QMC (V-QMC) and diffusion QMC (D-QMC) methods is shown. The many-body aspect of electron correlation is introduced into the QMC importance sampling electron-electron correlation functions by using density dependent parameters, which are shown to increase the amount of correlation energy obtained in V-QMC calculations. A detailed analysis of D-QMC time-step bias is made and is found to be at least linear with respect to the time-step. The D-QMC calculations determined the lithium cluster ionization potentials to be 0.1982(14) (0.1981), 0.1895(9) (0.1874(4)), 0.1530(34) (0.1599(73)), 0.1664(37) (0.1724(110)), 0.1613(43) (0.1675(110)) Hartrees for lithium clusters n = 1 through 5, respectively, in good agreement with the experimental results shown in brackets. Also, the binding energies per atom were computed to be 0.0177(8) (0.0203(12)), 0.0188(10) (0.0220(21)), 0.0247(8) (0.0310(12)), 0.0253(8) (0.0351(8)) Hartrees for lithium clusters n = 2 through 5, respectively. The lithium cluster one-electron density is shown to have charge concentrations corresponding to nonnuclear attractors. The overall shape of the electronic charge density also bears a remarkable similarity with the anisotropic harmonic oscillator model shape for the given number of valence electrons.
An improved variational method
Institute of Scientific and Technical Information of China (English)
ZENG Zhuo-Quan; SHEN Peng-Nian; DING Yi-Bing
2009-01-01
In order to improve the unitarity of the S-matrix, an improved variational formalism is derived by proposing new generating functionals and adopting proper asymptotic boundary conditions for trial relative wave functions. The formulas with the weighted line-column balance for single-channel and multi-channel scatterings, where the non-central interaction is implicitly considered, are presented. A numerical check is performed with a soluble model in a four coupled channel scattering problem. The result shows that high accuracy and the unitarity of the S-matrix are reached.
Niching method using clustering crowding
Institute of Scientific and Technical Information of China (English)
GUO Guan-qi; GUI Wei-hua; WU Min; YU Shou-yi
2005-01-01
This study analyzes drift phenomena of deterministic crowding and probabilistic crowding by using an equivalence class model and expectation proportion equations. It is proved that the replacement errors of deterministic crowding cause the population to converge to a single individual, thus resulting in premature stagnation or loss of optional optima, whereas probabilistic crowding can maintain equilibrium among multiple subpopulations provided the population size is adequately large. An improved niching method using clustering crowding is proposed. By analyzing the topology of the fitness landscape using the hill valley function and extending the search space for similarity analysis, clustering crowding determines the locality of the search space more accurately, thus greatly decreasing the replacement errors of crowding. The integration of deterministic and probabilistic replacement increases the capacity for both parallel local hill climbing and maintaining multiple subpopulations. The experimental results of optimizing various multimodal functions show that the performance of clustering crowding, in terms of the number of effective peaks maintained, average peak ratio and global optimum ratio, is uniformly superior to that of evolutionary algorithms using fitness sharing, simple deterministic crowding and probabilistic crowding.
Scoring methods used in cluster analysis
Sirota, Sergej
2014-01-01
The aim of the thesis is to compare how correctly different methods of cluster analysis classify objects in a dataset into groups that are known in advance. The theoretical section first describes the steps needed to prepare a data file for cluster analysis. The next theoretical section is dedicated to cluster analysis itself: it describes ways of measuring the similarity of objects and clusters, and describes the methods of cluster analysis used in the practical part of this thesis. In the practical part a...
Convex Decomposition Based Cluster Labeling Method for Support Vector Clustering
Institute of Scientific and Technical Information of China (English)
Yuan Ping; Ying-Jie Tian; Ya-Jian Zhou; Yi-Xian Yang
2012-01-01
Support vector clustering (SVC) is an important boundary-based clustering algorithm in multiple applications for its capability of handling arbitrary cluster shapes. However, SVC's popularity is degraded by its highly intensive time complexity and poor labeling performance. To overcome such problems, we present a novel efficient and robust convex decomposition based cluster labeling (CDCL) method based on the topological property of the dataset. CDCL decomposes the implicit cluster into convex hulls, each comprised of a subset of support vectors (SVs). According to a robust algorithm applied to the nearest neighboring convex hulls, the adjacency matrix of convex hulls is built up for finding the connected components; the remaining data points are then assigned the label of the nearest convex hull. The approach's validity is guaranteed by geometric proofs. Time complexity analysis and comparative experiments suggest that CDCL improves both the efficiency and the clustering quality significantly.
Variational methods in molecular modeling
2017-01-01
This book presents tutorial overviews for many applications of variational methods to molecular modeling. Topics discussed include the Gibbs-Bogoliubov-Feynman variational principle, square-gradient models, classical density functional theories, self-consistent-field theories, phase-field methods, Ginzburg-Landau and Helfrich-type phenomenological models, dynamical density functional theory, and variational Monte Carlo methods. Illustrative examples are given to facilitate understanding of the basic concepts and quantitative prediction of the properties and rich behavior of diverse many-body systems, including inhomogeneous fluids, electrolytes and ionic liquids in micropores, colloidal dispersions, liquid crystals, polymer blends, lipid membranes, microemulsions, magnetic materials and high-temperature superconductors. All chapters are written by leading experts in the field and illustrated with tutorial examples for their practical applications to specific subjects. Emphasis is placed on physical understanding...
An Overview on Clustering Methods
Madhulatha, T Soni
2012-01-01
Clustering is a common technique for statistical data analysis, which is used in many fields, including machine learning, data mining, pattern recognition, image analysis and bioinformatics. Clustering is the process of grouping similar objects into different groups or, more precisely, the partitioning of a data set into subsets, so that the data in each subset are similar according to some defined distance measure. This paper covers clustering algorithms, their benefits and their applications, and concludes by discussing some limitations.
A Continuous Clustering Method for Vector Fields
Garcke, H.; Preußer, T.; Rumpf, M.; Telea, A.; Weikard, U.; Wijk, J. van
2000-01-01
A new method for the simplification of flow fields is presented. It is based on continuous clustering. A well-known physical clustering model, the Cahn-Hilliard model, which describes phase separation, is modified to reflect the properties of the data to be visualized. Clusters are defined implicitly
Single pass kernel -means clustering method
Indian Academy of Sciences (India)
T Hitendra Sarma; P Viswanath; B Eswara Reddy
2013-06-01
In unsupervised classification, the kernel k-means clustering method has been shown to perform better than the conventional k-means clustering method in identifying non-isotropic clusters in a data set. The space and time requirements of this method are $O(n^2)$, where $n$ is the data set size. Because of this quadratic time complexity, the kernel k-means method is not applicable to large data sets. The paper proposes a simple and faster version of the kernel k-means clustering method, called the single pass kernel k-means clustering method. The proposed method works as follows. First, a random sample $\mathcal{S}$ is selected from the data set $\mathcal{D}$. A partition $\Pi_{\mathcal{S}}$ is obtained by applying the conventional kernel k-means method on the random sample $\mathcal{S}$. The novelty of the paper is that, for each cluster in $\Pi_{\mathcal{S}}$, the exact cluster center in the input space is obtained using the gradient descent approach. Finally, each unsampled pattern is assigned to its closest exact cluster center to get a partition of the entire data set. The proposed method needs to scan the data set only once and it is much faster than the conventional kernel k-means method. The time complexity of this method is $O(s^2+t+nk)$, where $s$ is the size of the random sample $\mathcal{S}$, $k$ is the number of clusters required, and $t$ is the time taken by the gradient descent method (to find exact cluster centers). The space complexity of the method is $O(s^2)$. The proposed method can be easily implemented and is suitable for large data sets, like those in data mining applications. Experimental results show that, with a small loss of quality, the proposed method can significantly reduce the time taken compared with the conventional kernel k-means clustering method. The proposed method is also compared with other recent similar methods.
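The single-pass scheme can be sketched end to end: cluster a random sample with kernel k-means, recover one centre per cluster in the input space, then assign every point to its nearest centre in a single scan. The paper recovers exact centres by gradient descent; as a stand-in, the sketch below uses the plain cluster mean, and it uses 1-D data with an RBF kernel for brevity:

```python
import math
import random

def rbf(a, b, gamma=1.0):
    return math.exp(-gamma * (a - b) ** 2)

def kernel_kmeans(sample, k, n_iter=10, seed=0):
    """Batch kernel k-means on a small 1-D sample with an RBF kernel.
    Distance to a cluster mean in feature space is K(i,i)
    - 2/|c| * sum_j K(i,j) + 1/|c|^2 * sum_{j,l} K(j,l)."""
    rng = random.Random(seed)
    assign = [rng.randrange(k) for _ in sample]
    K = [[rbf(a, b) for b in sample] for a in sample]
    for _ in range(n_iter):
        new = []
        for i in range(len(sample)):
            best, best_d = 0, float("inf")
            for c in range(k):
                idx = [j for j in range(len(sample)) if assign[j] == c]
                if not idx:
                    continue
                d = (K[i][i]
                     - 2.0 * sum(K[i][j] for j in idx) / len(idx)
                     + sum(K[j][l] for j in idx for l in idx) / len(idx) ** 2)
                if d < best_d:
                    best, best_d = c, d
            new.append(best)
        assign = new
    return assign

def single_pass_kernel_kmeans(data, k, sample_size, seed=0):
    """Cluster a sample, approximate input-space centres, scan once."""
    rng = random.Random(seed)
    sample = rng.sample(data, sample_size)
    assign = kernel_kmeans(sample, k, seed=seed)
    centres = []
    for c in range(k):
        members = [sample[i] for i in range(len(sample)) if assign[i] == c]
        if members:
            centres.append(sum(members) / len(members))
    return [min(range(len(centres)), key=lambda c: abs(x - centres[c]))
            for x in data]

data = [0.0, 0.1, 0.2, 0.3, 9.0, 9.1, 9.2, 9.3]
labels = single_pass_kernel_kmeans(data, k=2, sample_size=4)
```

Only the sample incurs the quadratic kernel cost; the final assignment pass is linear in the data size, which is the point of the paper's $O(s^2+t+nk)$ bound.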
Homotopy Method for Variational Inequalities
Institute of Scientific and Technical Information of China (English)
2001-01-01
Solving a finite-dimensional variational inequality is to find a vector x* ∈ X ⊆ R^n such that F(x*)^T (x − x*) ≥ 0 for all x ∈ X, where X is a nonempty, closed and convex subset of R^n and F is a mapping from R^n to itself; the problem is denoted by VI(X, F). The variational inequality problem (VIP) has had many successful practical applications in the last three decades. It has been used to formulate and investigate equilibrium models arising in economics, transportation, regional science and operations research. So far, a large number of existence conditions have been developed in the literature. Harker and Pang [1] gave excellent surveys of theories, methods and applications of VIPs.
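The simplest iterative scheme for VI(X, F) is the basic projection method, x_{k+1} = P_X(x_k − γ F(x_k)), which converges for strongly monotone, Lipschitz F with a small enough step γ. A minimal sketch on a one-dimensional box constraint (the example problem is illustrative):

```python
def solve_vi_projection(F, project, x0, step=0.1, n_iter=500):
    """Basic projection method for VI(X, F):
    x_{k+1} = P_X(x_k - step * F(x_k)).
    `project` is the Euclidean projection onto the feasible set X.
    A sketch, not a production solver."""
    x = x0
    for _ in range(n_iter):
        x = project(x - step * F(x))
    return x

# VI on X = [0, 1] with F(x) = x - 2: the solution is the boundary
# point x* = 1, where F(x*) * (x - x*) >= 0 holds for all x in X.
clamp = lambda v: min(1.0, max(0.0, v))
x_star = solve_vi_projection(lambda x: x - 2.0, clamp, x0=0.0)
```

Note that x* = 1 is not a zero of F; the VI condition only requires F(x*) to point outward across the feasible set, which is exactly what distinguishes a VI from an equation F(x) = 0.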
Kernel method-based fuzzy clustering algorithm
Institute of Scientific and Technical Information of China (English)
Wu Zhongdong; Gao Xinbo; Xie Weixin; Yu Jianping
2005-01-01
The fuzzy C-means clustering algorithm (FCM) is extended to the fuzzy kernel C-means clustering algorithm (FKCM) to effectively perform cluster analysis on diversiform structures, such as non-hyperspherical data, data with noise, data with a mixture of heterogeneous cluster prototypes, asymmetric data, etc. Based on the Mercer kernel, the FKCM clustering algorithm is derived from the FCM algorithm combined with the kernel method. The results of experiments with synthetic and real data show that the FKCM clustering algorithm is universal and can effectively perform unsupervised analysis of datasets with variform structures, in contrast to the FCM algorithm. Kernel-based clustering algorithms can thus be expected to be an important research direction in fuzzy clustering analysis.
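The underlying FCM iteration alternates a membership update (each point belongs to every cluster with a weight decaying in distance) with a weighted centre update; FKCM keeps this structure but replaces the squared Euclidean distance with the kernel-induced distance K(x,x) − 2K(x,v) + K(v,v). The sketch below is plain FCM on 1-D data with a two-centre initialisation, purely illustrative:

```python
def fcm(points, m=2.0, n_iter=50):
    """Plain fuzzy C-means with two clusters on 1-D data.

    m > 1 is the fuzzifier. Membership update:
    u_ij proportional to d_ij^(-2/(m-1)); centre update: mean of the
    points weighted by u_ij^m.
    """
    centers = [min(points), max(points)]  # simple two-centre init
    u = None
    for _ in range(n_iter):
        u = []
        for x in points:
            d = [max(abs(x - v), 1e-12) ** (-2.0 / (m - 1.0))
                 for v in centers]
            s = sum(d)
            u.append([di / s for di in d])
        centers = [
            sum(u[i][j] ** m * points[i] for i in range(len(points)))
            / sum(u[i][j] ** m for i in range(len(points)))
            for j in range(len(centers))
        ]
    return centers, u

centers, u = fcm([0.0, 0.1, 0.2, 4.0, 4.1, 4.2])
```

Swapping in the kernel-induced distance (with the centres kept in feature space or approximated in input space) yields the FKCM variant the abstract describes.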
Schaefer, Andreas M.; Daniell, James E.; Wenzel, Friedemann
2017-07-01
Earthquake clustering is an essential part of almost any statistical analysis of the spatial and temporal properties of seismic activity. The nature of earthquake clusters and the subsequent declustering of earthquake catalogues play a crucial role in determining the magnitude-dependent earthquake return period and its spatial variation for probabilistic seismic hazard assessment. This study introduces the Smart Cluster Method (SCM), a new methodology to identify earthquake clusters, which uses an adaptive point process for spatio-temporal cluster identification. It utilises the magnitude-dependent spatio-temporal earthquake density to adjust the search properties, subsequently analyses the identified clusters to determine directional variation, and adjusts its search space with respect to directional properties. In the case of rapid subsequent ruptures, like the 1992 Landers sequence or the 2010-2011 Darfield-Christchurch sequence, a reclassification procedure is applied to disassemble subsequent ruptures using near-field searches, nearest-neighbour classification and temporal splitting. The method is capable of identifying and classifying earthquake clusters in space and time. It has been tested and validated using earthquake data from California and New Zealand. A total of more than 1500 clusters have been found in both regions since 1980 with Mmin = 2.0. Utilising the knowledge of cluster classification, the method has been adjusted to provide an earthquake declustering algorithm, which has been compared to existing methods; its performance is comparable to established methodologies. The analysis of earthquake clustering statistics leads to various new and updated correlation functions, e.g. for the ratio between the mainshock and the strongest aftershock, and for general aftershock activity metrics.
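For contrast with the adaptive SCM, classical declustering uses fixed magnitude-dependent space-time windows. A minimal sketch in the spirit of the commonly quoted Gardner-Knopoff windows (the window constants are the standard published ones; the catalog format and the overall simplification are illustrative):

```python
import math

def decluster(catalog):
    """Window-based declustering sketch (Gardner-Knopoff-style windows,
    not the Smart Cluster Method). catalog: list of tuples
    (t_days, x_km, y_km, mag). Returns indices of retained mainshocks:
    any smaller event inside the space-time window of a larger event
    is removed as a dependent event."""
    # Commonly quoted Gardner-Knopoff (1974) windows:
    def radius_km(m):
        return 10 ** (0.1238 * m + 0.983)
    def window_days(m):
        if m >= 6.5:
            return 10 ** (0.032 * m + 2.7389)
        return 10 ** (0.5409 * m - 0.547)

    order = sorted(range(len(catalog)), key=lambda i: -catalog[i][3])
    removed = set()
    for i in order:                      # process largest events first
        if i in removed:
            continue
        t, x, y, m = catalog[i]
        for j in range(len(catalog)):
            if j == i or j in removed:
                continue
            tj, xj, yj, mj = catalog[j]
            if (mj <= m and 0 <= tj - t <= window_days(m)
                    and math.hypot(xj - x, yj - y) <= radius_km(m)):
                removed.add(j)           # dependent event (aftershock)
    return [i for i in range(len(catalog)) if i not in removed]
```

For an M6 mainshock, the window radius is about 53 km and the duration about 500 days, so a nearby M3 five days later is removed while a distant independent M5 is kept.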
Data Reduction Method for Categorical Data Clustering
Sánchez Garreta, José Salvador; Rendón, Eréndira; García, Rene A.; Abundez, Itzel; Gutiérrez, Citlalih; Gasca, Eduardo
2008-01-01
Categorical data clustering constitutes an important part of data mining; its relevance has recently drawn attention from several researchers. As a step in data mining, however, clustering encounters the problem of large amount of data to be processed. This article offers a solution for categorical clustering algorithms when working with high volumes of data by means of a method that summarizes the database. This is done using a structure called CM-tree. In order to test our metho...
The fungus Fusarium is an agricultural problem because it can cause disease on most crop plants and can contaminate crops with mycotoxins. There is considerable variation in the presence/absence and genomic location of gene clusters responsible for synthesis of mycotoxins and other secondary metabol...
Variational methods for field theories
Energy Technology Data Exchange (ETDEWEB)
Ben-Menahem, S.
1986-09-01
Four field theory models are studied: Periodic Quantum Electrodynamics (PQED) in (2 + 1) dimensions, free scalar field theory in (1 + 1) dimensions, the Quantum XY model in (1 + 1) dimensions, and the (1 + 1) dimensional Ising model in a transverse magnetic field. The last three parts deal exclusively with variational methods; the PQED part involves mainly the path-integral approach. The PQED calculation results in a better understanding of the connection between electric confinement through monopole screening, and confinement through tunneling between degenerate vacua. This includes a better quantitative agreement for the string tensions in the two approaches. Free field theory is used as a laboratory for a new variational blocking-truncation approximation, in which the high-frequency modes in a block are truncated to wave functions that depend on the slower background modes (Born-Oppenheimer approximation). This "adiabatic truncation" method gives very accurate results for the ground-state energy density and correlation functions. Various adiabatic schemes, with one variable kept per site and then two variables per site, are used. For the XY model, several trial wave functions for the ground state are explored, with an emphasis on the periodic Gaussian. A connection is established with the vortex Coulomb gas of the Euclidean path integral approach. The approximations used are taken from the realms of statistical mechanics (mean field approximation, transfer-matrix methods) and of quantum mechanics (iterative blocking schemes). In developing blocking schemes based on continuous variables, problems due to the periodicity of the model were solved. Our results exhibit an order-disorder phase transition. The transfer-matrix method is used to find a good (non-blocking) trial ground state for the Ising model in a transverse magnetic field in (1 + 1) dimensions.
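As a toy illustration of the variational idea for the Ising model in a transverse field, a product-state (mean-field) trial wavefunction parametrized by a single angle gives a closed-form energy per site that can be minimized numerically. This is a standard textbook sketch, not the blocking-truncation or transfer-matrix scheme of the thesis:

```python
import numpy as np

def mf_energy(theta, J, h):
    # Variational (product-state) energy per site for the 1-D
    # transverse-field Ising model H = -J sum s^z s^z - h sum s^x,
    # with <s^z> = cos(theta) and <s^x> = sin(theta) on every site.
    return -J * np.cos(theta) ** 2 - h * np.sin(theta)

def mf_ground_state(J, h, n=200001):
    # Minimize the variational energy by a fine scan over theta.
    thetas = np.linspace(0.0, np.pi / 2, n)
    e = mf_energy(thetas, J, h)
    i = int(e.argmin())
    return thetas[i], float(e[i])

# Closed-form minimum for comparison: sin(theta*) = h / (2J) in the
# ordered phase h < 2J (giving e* = -J - h^2 / (4J)), and theta* = pi/2
# (e* = -h) otherwise. The mean-field transition sits at h = 2J; the
# exact 1-D critical point is h = J, showing the trial state's bias.
```

For J = 1, h = 1 the minimum is e* = -1.25 at sin(theta*) = 0.5, and for h = 3 the spins fully polarize along x with e* = -3.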
PERFORMANCE OF SELECTED AGGLOMERATIVE HIERARCHICAL CLUSTERING METHODS
Directory of Open Access Journals (Sweden)
Nusa Erman
2015-01-01
Full Text Available A broad variety of different methods of agglomerative hierarchical clustering brings along the problem of how to choose the most appropriate method for the given data. It is well known that some methods outperform others if the analysed data have a specific structure. In the presented study we have observed the behaviour of the centroid method, the median method (Gower's median), and the average method (unweighted pair-group method with arithmetic mean, UPGMA; average linkage between groups). We have compared them with the most commonly used methods of hierarchical clustering: the minimum (single linkage) clustering, the maximum (complete linkage) clustering, the Ward method, and the McQuitty method (weighted pair-group method using arithmetic averages, WPGMA). We have applied the comparison of these methods to spherical, ellipsoid, umbrella-like, "core-and-sphere", ring-like and intertwined three-dimensional data structures. To generate the data and execute the analysis, we have used the R statistical software. Results show that all seven methods are successful in finding compact, ball-shaped or ellipsoid structures when they are sufficiently separated. Conversely, all methods except the minimum perform poorly on non-homogeneous, irregular and elongated ones. Especially challenging is a circular double helix structure, which is correctly revealed only by the minimum method. We can also confirm formerly published results of other simulation studies, which usually favour the average method (besides the Ward method) in cases when the data are assumed to be fairly compact and well separated.
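The qualitative finding that only single (minimum) linkage recovers ring-like structures can be reproduced with a naive agglomerative implementation. The data set and the O(n^3) implementation below are illustrative, not the R setup of the study:

```python
import numpy as np

def agglomerative(X, k, linkage="single"):
    """Naive agglomerative clustering sketch with single (minimum) or
    complete (maximum) linkage; repeatedly merges the closest pair of
    clusters until k clusters remain."""
    n = len(X)
    D = np.sqrt(((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
    clusters = [[i] for i in range(n)]
    while len(clusters) > k:
        best, pair = np.inf, None
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                sub = D[np.ix_(clusters[a], clusters[b])]
                d = sub.min() if linkage == "single" else sub.max()
                if d < best:
                    best, pair = d, (a, b)
        a, b = pair
        clusters[a] = clusters[a] + clusters[b]
        del clusters[b]
    labels = np.empty(n, dtype=int)
    for ci, members in enumerate(clusters):
        labels[members] = ci
    return labels
```

On a ring surrounding a central blob, single linkage chains along the ring and separates ring from blob exactly; complete linkage, which penalizes cluster diameter, tends to cut the ring instead.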
Cluster Monte Carlo methods for the FePt Hamiltonian
Energy Technology Data Exchange (ETDEWEB)
Lyberatos, A., E-mail: lyb@materials.uoc.gr [Materials Science and Technology Department, P.O. Box 2208, 71003 Heraklion (Greece); Parker, G.J. [HGST, A Western Digital Company, 3403 Yerba Buena Road, San Jose, CA 95135 (United States)
2016-02-15
Cluster Monte Carlo methods for the classical spin Hamiltonian of FePt with long range exchange interactions are presented. We use a combination of the Swendsen-Wang (or Wolff) and Metropolis algorithms that satisfies the detailed balance condition and ergodicity. The algorithms are tested by calculating the temperature dependence of the magnetization, susceptibility and heat capacity of L1{sub 0}-FePt nanoparticles in a range including the critical region. The cluster models yield numerical results in good agreement within statistical error with the standard single-spin flipping Monte Carlo method. The variation of the spin autocorrelation time with grain size is used to deduce the dynamic exponent of the algorithms. Our cluster models do not provide a more accurate estimate of the magnetic properties at equilibrium.
Highlights:
• A new cluster Monte Carlo algorithm was applied to FePt nanoparticles.
• Magnetic anisotropy imposes a restriction on cluster moves.
• Inclusion of Metropolis steps is required to satisfy ergodicity.
• In the critical region a percolating cluster occurs for any grain size.
• Critical slowing down is not solved by the new cluster algorithms.
Histological image segmentation using fast mean shift clustering method
Wu, Geming; Zhao, Xinyan; Luo, Shuqian; Shi, Hongli
2015-01-01
Background: Colour image segmentation is fundamental and critical for quantitative histological image analysis. The complexity of the microstructure and the way histological images are prepared result in variable staining and illumination, and the ultra-high resolution of histological images makes it hard for segmentation methods to achieve high-quality results at a low computational cost. Methods: The mean shift clustering approach is employed for histol...
Document Clustering using Sequential Information Bottleneck Method
Gayathri, P J; Punithavalli, M
2010-01-01
This paper illustrates the Principal Direction Divisive Partitioning (PDDP) algorithm, describes its drawbacks, and introduces a combinatorial framework for it. It then describes a simplified version of the EM algorithm called the spherical Gaussian EM (sGEM) algorithm, and the Information Bottleneck (IB) method, a technique for trading off accuracy, complexity and running time. The PDDP algorithm recursively splits the data samples into two sub-clusters using the hyperplane normal to the principal direction derived from the covariance matrix; this is the central logic of the algorithm. However, the PDDP algorithm can yield poor results, especially when clusters are not well separated from one another. To improve the quality of the clustering results, cluster memberships are reallocated using the IB algorithm with different settings. The IB method improves accuracy but consumes more time. Furthermore, based on the theoretical backgr...
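The core PDDP step, splitting the data by the sign of the projection onto the principal direction, can be sketched in a few lines (a single split only; the recursion and the sGEM/IB refinements are omitted):

```python
import numpy as np

def pddp_split(X):
    """One PDDP step: split the data by the sign of the projection of
    the mean-centered points onto the principal direction."""
    Xc = X - X.mean(axis=0)
    # Principal direction = leading right singular vector of the
    # centered data matrix (equivalently, the leading eigenvector of
    # the covariance matrix).
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    proj = Xc @ Vt[0]
    return proj >= 0   # boolean mask: side of the splitting hyperplane
```

For two blobs separated along one axis, the principal direction aligns with the separation and the sign of the projection recovers the two groups (up to the arbitrary sign of the singular vector).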
Cluster Monte Carlo methods for the FePt Hamiltonian
Lyberatos, A.; Parker, G. J.
2016-02-01
Cluster Monte Carlo methods for the classical spin Hamiltonian of FePt with long range exchange interactions are presented. We use a combination of the Swendsen-Wang (or Wolff) and Metropolis algorithms that satisfies the detailed balance condition and ergodicity. The algorithms are tested by calculating the temperature dependence of the magnetization, susceptibility and heat capacity of L10-FePt nanoparticles in a range including the critical region. The cluster models yield numerical results in good agreement within statistical error with the standard single-spin flipping Monte Carlo method. The variation of the spin autocorrelation time with grain size is used to deduce the dynamic exponent of the algorithms. Our cluster models do not provide a more accurate estimate of the magnetic properties at equilibrium.
Variational Methods for Biomolecular Modeling
Wei, Guo-Wei
2016-01-01
Structure, function and dynamics of many biomolecular systems can be characterized by the energetic variational principle and the corresponding systems of partial differential equations (PDEs). This principle allows us to focus on the identification of essential energetic components, the optimal parametrization of energies, and the efficient computational implementation of energy variation or minimization. Given the fact that complex biomolecular systems are structurally non-uniform and their interactions occur through contact interfaces, their free energies are associated with various interfaces as well, such as solute-solvent interface, molecular binding interface, lipid domain interface, and membrane surfaces. This fact motivates the inclusion of interface geometry, particular its curvatures, to the parametrization of free energies. Applications of such interface geometry based energetic variational principles are illustrated through three concrete topics: the multiscale modeling of biomolecular electrosta...
Directory of Open Access Journals (Sweden)
Hao Dapeng
2012-05-01
Full Text Available Abstract Background: A central idea in biology is the hierarchical organization of cellular processes. A commonly used method to identify the hierarchical modular organization of a network relies on detecting a global signature known as variation of the clustering coefficient (so-called modularity scaling). Although several studies have suggested other possible origins of this signature, it is still widely used nowadays to identify hierarchical modularity, especially in the analysis of biological networks. Therefore, a further and systematic investigation of this signature for different types of biological networks is necessary. Results: We analyzed a variety of biological networks and found that the commonly used signature of hierarchical modularity is actually the reflection of spoke-like topology, suggesting a different view of network architecture. We proved that the existence of super-hubs is the reason that the clustering coefficient of a node follows a particular scaling law with degree k in metabolic networks. To study the modularity of biological networks, we systematically investigated the relationship between repulsion of hubs and variation of the clustering coefficient. We provided direct evidence that repulsion between hubs is the underlying origin of the variation of the clustering coefficient, and found that for biological networks having no anti-correlation between hubs, such as the gene co-expression network, the clustering coefficient does not depend on degree. Conclusions: Here we have shown that the variation of the clustering coefficient is neither sufficient nor exclusive for a network to be hierarchical. Our results suggest the existence of spoke-like modules as opposed to the "deterministic model" of hierarchical modularity, and suggest the need to reconsider the organizational principle of biological hierarchy.
Hao, Dapeng; Ren, Cong; Li, Chuanxing
2012-05-01
A central idea in biology is the hierarchical organization of cellular processes. A commonly used method to identify the hierarchical modular organization of a network relies on detecting a global signature known as variation of the clustering coefficient (so-called modularity scaling). Although several studies have suggested other possible origins of this signature, it is still widely used nowadays to identify hierarchical modularity, especially in the analysis of biological networks. Therefore, a further and systematic investigation of this signature for different types of biological networks is necessary. We analyzed a variety of biological networks and found that the commonly used signature of hierarchical modularity is actually the reflection of spoke-like topology, suggesting a different view of network architecture. We proved that the existence of super-hubs is the reason that the clustering coefficient of a node follows a particular scaling law with degree k in metabolic networks. To study the modularity of biological networks, we systematically investigated the relationship between repulsion of hubs and variation of the clustering coefficient. We provided direct evidence that repulsion between hubs is the underlying origin of the variation of the clustering coefficient, and found that for biological networks having no anti-correlation between hubs, such as the gene co-expression network, the clustering coefficient does not depend on degree. Here we have shown that the variation of the clustering coefficient is neither sufficient nor exclusive for a network to be hierarchical. Our results suggest the existence of spoke-like modules as opposed to the "deterministic model" of hierarchical modularity, and suggest the need to reconsider the organizational principle of biological hierarchy.
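The claim that C(k) ~ 1/k can arise from spoke-like topology alone, with no hierarchy, can be checked on a toy graph: a hub attached to m mutually disconnected edges has degree k = 2m and local clustering coefficient exactly 1/(2m - 1). The graph construction below is purely illustrative:

```python
from itertools import combinations

def clustering_coefficient(adj, v):
    """Local clustering coefficient of vertex v for a graph given as a
    dict mapping each vertex to its set of neighbours."""
    nbrs = adj[v]
    k = len(nbrs)
    if k < 2:
        return 0.0
    # Count edges among the neighbours of v.
    links = sum(1 for a, b in combinations(nbrs, 2) if b in adj[a])
    return 2.0 * links / (k * (k - 1))

def spoke_hub(m):
    """Hub (vertex 0) attached to m mutually disconnected edges: the
    hub has degree 2m, its neighbours form m disjoint connected pairs,
    so exactly m of the 2m*(2m-1)/2 possible neighbour pairs are
    linked and C(hub) = 1/(2m - 1) ~ 1/k."""
    adj = {0: set()}
    nxt = 1
    for _ in range(m):
        a, b = nxt, nxt + 1
        nxt += 2
        adj[0] |= {a, b}
        adj[a] = {0, b}
        adj[b] = {0, a}
    return adj
```

Growing m therefore reproduces the C(k) ~ 1/k scaling signature from a star of small modules around a single hub.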
Karnbach, R.; Castex, M. C.; Keto, J. W.; Joppien, M.; Wörmer, J.; Zimmerer, G.; Möller, T.
1993-02-01
Excitation and decay processes in Kr_N clusters (N = 2–10^4) were investigated via time- and energy-resolved fluorescence methods with synchrotron radiation excitation. In small clusters (N < 50), in addition to the well-known emission bands of condensed Kr, another broad continuous emission is observed. It is assigned to the radiative decay of Kr excimers desorbing from the cluster surface. There are indications that the cluster size at which the desorption rate becomes slow is related to a change in sign of the electron affinity of the cluster. Changes in the spectral distribution of the fluorescence light with cluster size are interpreted as variations of the vibrational energy flow.
Variational bayesian method of estimating variance components.
Arakawa, Aisaku; Taniguchi, Masaaki; Hayashi, Takeshi; Mikawa, Satoshi
2016-07-01
We developed a Bayesian analysis approach using a variational inference method, the so-called variational Bayesian method, to determine the posterior distributions of variance components. This variational Bayesian method and an alternative Bayesian method using Gibbs sampling were compared in estimating genetic and residual variance components from both simulated data and publicly available real pig data. In the simulated data set, we observed a strong bias toward overestimation of the genetic variance for the variational Bayesian method in the case of low heritability and small population size, and less bias was detected with larger population sizes in both methods examined. No differences in the estimates of variance components between the variational Bayesian and the Gibbs sampling methods were found in the real pig data. However, the posterior distributions of the variance components obtained with the variational Bayesian method had shorter tails than those obtained with Gibbs sampling. Consequently, the posterior standard deviations of the genetic and residual variances from the variational Bayesian method were lower than those from the method using Gibbs sampling. The computing time required was much shorter with the variational Bayesian method than with the method using Gibbs sampling.
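A minimal analogue of variational Bayesian variance estimation is the textbook mean-field VB treatment of a univariate Gaussian with unknown mean and precision under a conjugate Normal-Gamma prior. The priors and data here are illustrative, not the paper's animal model; the coordinate-ascent updates follow the standard derivation:

```python
import numpy as np

def vb_normal(x, mu0=0.0, lam0=1e-3, a0=1e-3, b0=1e-3, iters=100):
    """Mean-field VB for x_i ~ N(mu, 1/tau) with priors
    mu ~ N(mu0, 1/(lam0*tau)) and tau ~ Gamma(a0, b0).
    Returns the parameters of q(mu) = N(mu_n, 1/lam_n) and
    q(tau) = Gamma(a_n, b_n)."""
    n = len(x)
    xbar, sx2 = x.mean(), (x ** 2).sum()
    e_tau = 1.0                                   # initial E[tau]
    for _ in range(iters):
        # Update q(mu) given the current E[tau].
        mu_n = (lam0 * mu0 + n * xbar) / (lam0 + n)
        lam_n = (lam0 + n) * e_tau
        e_mu, e_mu2 = mu_n, mu_n ** 2 + 1.0 / lam_n
        # Update q(tau) given the moments of q(mu).
        a_n = a0 + 0.5 * (n + 1)
        b_n = b0 + 0.5 * (sx2 - 2 * e_mu * n * xbar + n * e_mu2
                          + lam0 * (e_mu2 - 2 * e_mu * mu0 + mu0 ** 2))
        e_tau = a_n / b_n
    return mu_n, lam_n, a_n, b_n
```

With weak priors, the converged posterior mean of tau (a_n / b_n) matches the reciprocal of the sample variance, and mu_n matches the sample mean, mirroring how VB recovers variance components at a fraction of the cost of Gibbs sampling.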
The Development of Cluster and Histogram Methods
Swendsen, Robert H.
2003-11-01
This talk will review the history of both cluster and histogram methods for Monte Carlo simulations. Cluster methods are based on the famous exact mapping by Fortuin and Kasteleyn from general Potts models onto a percolation representation. I will discuss the Swendsen-Wang algorithm, as well as its improvement and extension to more general spin models by Wolff. The Replica Monte Carlo method further extended cluster simulations to deal with frustrated systems. The history of histograms is quite extensive, and can only be summarized briefly in this talk. It goes back at least to work by Salsburg et al. in 1959. Since then, it has been forgotten and rediscovered several times. The modern use of the method has exploited its ability to efficiently determine the location and height of peaks in various quantities, which is of prime importance in the analysis of critical phenomena. The extensions of this approach to the multiple histogram method and multicanonical ensembles have allowed information to be obtained over a broad range of parameters. Histogram simulations and analyses have become standard techniques in Monte Carlo simulations.
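The Swendsen-Wang update mentioned above can be sketched for the 2-D Ising model: bonds between aligned neighbours are activated with probability p = 1 - exp(-2*beta*J), connected clusters are identified (here with union-find), and each cluster is flipped with probability 1/2. Lattice size and temperature below are illustrative:

```python
import numpy as np

def sw_sweep(spins, beta, rng):
    """One Swendsen-Wang cluster update for the 2-D Ising model
    (J = 1, periodic boundaries)."""
    L = spins.shape[0]
    parent = list(range(L * L))

    def find(i):                       # union-find with path halving
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    p = 1.0 - np.exp(-2.0 * beta)
    for x in range(L):
        for y in range(L):
            i = x * L + y
            # Right and down neighbours cover every bond exactly once.
            for xn, yn in ((x, (y + 1) % L), ((x + 1) % L, y)):
                if spins[x, y] == spins[xn, yn] and rng.random() < p:
                    parent[find(i)] = find(xn * L + yn)
    # One flip decision per cluster root, applied to every member.
    flip = {}
    for x in range(L):
        for y in range(L):
            r = find(x * L + y)
            if r not in flip:
                flip[r] = rng.random() < 0.5
            if flip[r]:
                spins[x, y] *= -1
    return spins
```

Because whole clusters flip at once, the update decorrelates large aligned domains in a single step; at low temperature the entire lattice typically forms one cluster and the magnetization magnitude is preserved.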
Unbiased methods for removing systematics from galaxy clustering measurements
Elsner, Franz; Peiris, Hiranya V
2015-01-01
Measuring the angular clustering of galaxies as a function of redshift is a powerful method for extracting information from the three-dimensional galaxy distribution. The precision of such measurements will dramatically increase with ongoing and future wide-field galaxy surveys. However, these are also increasingly sensitive to observational and astrophysical contaminants. Here, we study the statistical properties of three methods proposed for controlling such systematics - template subtraction, basic mode projection, and extended mode projection - all of which make use of externally supplied template maps, designed to characterise and capture the spatial variations of potential systematic effects. Based on a detailed mathematical analysis, and in agreement with simulations, we find that the template subtraction method in its original formulation returns biased estimates of the galaxy angular clustering. We derive closed-form expressions that should be used to correct results for this shortcoming. Turning to th...
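Basic mode projection can be illustrated in a few lines: projecting the data orthogonally to the template columns removes any contaminant living in the template subspace, along with the matching part of the signal, which is why the projected modes must be treated as lost rather than cleaned. The template and data below are synthetic:

```python
import numpy as np

def project_out(d, T):
    """Project the data vector d onto the orthogonal complement of the
    column space of the template matrix T (basic mode projection)."""
    Q, _ = np.linalg.qr(T)          # orthonormal basis for the templates
    return d - Q @ (Q.T @ d)
```

After projection the residual is exactly orthogonal to every template, so any contamination proportional to a template is removed regardless of its (unknown) amplitude; for a single template t this reduces to d - t (t.d)/(t.t).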
Application of the Clustering Method in Molecular Dynamics Simulation of the Diffusion Coefficient
Institute of Scientific and Technical Information of China (English)
无
2008-01-01
Using molecular dynamics (MD) simulation, the diffusion of oxygen, methane, ammonia and carbon dioxide in water was simulated in the canonical (NVT) ensemble, and the diffusion coefficient was analyzed by the clustering method. By comparison with the conventional method (using the Einstein model) and the differentiation-interval variation method, we found that the results obtained by the clustering method used in this study are closer to the experimental values. This method proved to be more reasonable than the other two methods.
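The conventional Einstein-relation baseline the abstract compares against, estimating D from the slope of the mean squared displacement MSD(t) = 2*dim*D*t, can be sketched as follows. The trajectory format and fitting range are illustrative assumptions, and the clustering method itself is not reproduced:

```python
import numpy as np

def diffusion_coefficient(traj, dt, dim=3):
    """Estimate D via the Einstein relation MSD(t) = 2 * dim * D * t,
    by a least-squares fit of the mean squared displacement over short
    lag times. traj: array of shape (n_frames, n_particles, dim)."""
    n = traj.shape[0]
    lags = np.arange(1, min(n // 4, 101))      # short-lag fitting window
    msd = np.array([((traj[lag:] - traj[:-lag]) ** 2)
                    .sum(axis=-1).mean() for lag in lags])
    slope = np.polyfit(lags * dt, msd, 1)[0]
    return slope / (2 * dim)
```

On a synthetic Brownian trajectory with per-step displacement variance 2*D*dt per dimension, the fit recovers the input D to within a few percent.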
Mapping Cigarettes Similarities using Cluster Analysis Methods
Directory of Open Access Journals (Sweden)
Lorentz Jäntschi
2007-09-01
Full Text Available The aim of the research was to investigate the relationships and/or occurrences in and between chemical composition information (tar, nicotine, carbon monoxide), market information (brand, manufacturer, price), and public health information (class, health warning), as well as the clustering of a sample of cigarette data. Thirty cigarette brands were analyzed. Six categorical (cigarette brand, manufacturer, health warning, class) and four continuous (tar, nicotine and carbon monoxide concentrations, and package price) variables were collected for the investigation of chemical composition, market information and public health information. Multiple linear regression and two clustering techniques were applied. The study revealed interesting findings. The carbon monoxide concentration proved to be linked with the tar and nicotine concentrations. The applied clustering methods identified groups of cigarette brands that showed similar characteristics. The tar and carbon monoxide concentrations were the main criteria used in clustering. Analysis of a larger sample could reveal more relevant and useful information regarding the similarities between cigarette brands.
Comparing the performance of biomedical clustering methods
DEFF Research Database (Denmark)
Wiwie, Christian; Baumbach, Jan; Röttger, Richard
2015-01-01
Identifying groups of similar objects is a popular first step in biomedical data analysis, but it is error-prone and impossible to perform manually. Many computational methods have been developed to tackle this problem. Here we assessed 13 well-known methods using 24 data sets ranging from gene......-ranging comparison we were able to develop a short guideline for biomedical clustering tasks. ClustEval allows biomedical researchers to pick the appropriate tool for their data type and allows method developers to compare their tool to the state of the art....
Time-dependent coupled-cluster method for atomic nuclei
Pigg, D A; Nam, H; Papenbrock, T
2012-01-01
We study time-dependent coupled-cluster theory in the framework of nuclear physics. Based on Kvaal's bi-variational formulation of this method [S. Kvaal, arXiv:1201.5548], we explicitly demonstrate that observables that commute with the Hamiltonian are conserved under time evolution. We explore the role of the energy and of the similarity-transformed Hamiltonian under real and imaginary time evolution and relate the latter to similarity renormalization group transformations. Proof-of-principle computations of He-4 and O-16 in small model spaces, and computations of the Lipkin model illustrate the capabilities of the method.
Quantum Monte Carlo methods and lithium cluster properties
Energy Technology Data Exchange (ETDEWEB)
Owen, R.K.
1990-12-01
Properties of small lithium clusters with sizes ranging from n = 1 to 5 atoms were investigated using quantum Monte Carlo (QMC) methods. Cluster geometries were found from complete active space self-consistent field (CASSCF) calculations. A detailed development of the QMC method leading to the variational QMC (V-QMC) and diffusion QMC (D-QMC) methods is shown. The many-body aspect of electron correlation is introduced into the QMC importance-sampling electron-electron correlation functions by using density-dependent parameters, which are shown to increase the amount of correlation energy obtained in V-QMC calculations. A detailed analysis of the D-QMC time-step bias is made; the bias is found to be at least linear in the time-step. The D-QMC calculations determined the lithium cluster ionization potentials to be 0.1982(14) [0.1981], 0.1895(9) [0.1874(4)], 0.1530(34) [0.1599(73)], 0.1664(37) [0.1724(110)], 0.1613(43) [0.1675(110)] Hartrees for lithium clusters n = 1 through 5, respectively; in good agreement with the experimental results shown in brackets. Also, the binding energies per atom were computed to be 0.0177(8) [0.0203(12)], 0.0188(10) [0.0220(21)], 0.0247(8) [0.0310(12)], 0.0253(8) [0.0351(8)] Hartrees for lithium clusters n = 2 through 5, respectively. The lithium cluster one-electron density is shown to have charge concentrations corresponding to non-nuclear attractors. The overall shape of the electronic charge density also bears a remarkable similarity to the anisotropic harmonic oscillator model shape for the given number of valence electrons.
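The V-QMC step can be illustrated on the simplest possible system, the hydrogen atom with trial wavefunction psi = exp(-alpha*r) in atomic units, for which the local energy is E_L = -alpha^2/2 + (alpha - 1)/r and alpha = 1 gives the exact ground state energy -0.5 Ha. This is a toy analogue of the variational QMC method, not the lithium cluster calculation:

```python
import numpy as np

def vmc_energy(alpha, n_steps=20000, step=0.5, seed=0):
    """Variational Monte Carlo sketch: Metropolis sampling of
    |psi|^2 = exp(-2*alpha*r) for the hydrogen atom, averaging the
    local energy E_L = -alpha^2/2 + (alpha - 1)/r."""
    rng = np.random.default_rng(seed)
    r_vec = np.array([1.0, 0.0, 0.0])
    samples = []
    for i in range(n_steps):
        trial = r_vec + rng.uniform(-step, step, 3)   # symmetric move
        r_old = np.linalg.norm(r_vec)
        r_new = np.linalg.norm(trial)
        # Metropolis acceptance with ratio exp(-2*alpha*(r_new - r_old)).
        if rng.random() < np.exp(-2.0 * alpha * (r_new - r_old)):
            r_vec = trial
            r_old = r_new
        if i > n_steps // 10:                          # discard burn-in
            samples.append(-0.5 * alpha ** 2 + (alpha - 1.0) / r_old)
    return float(np.mean(samples))
```

At alpha = 1 the local energy is constant (-0.5 Ha, zero variance); for other alpha the analytic expectation is alpha^2/2 - alpha, e.g. -0.48 Ha at alpha = 0.8, illustrating the variational upper bound.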
Recent advances in coupled-cluster methods
Bartlett, Rodney J
1997-01-01
Today, coupled-cluster (CC) theory has emerged as the most accurate, widely applicable approach for the correlation problem in molecules. Furthermore, the correct scaling of the energy and wavefunction with size (i.e. extensivity) recommends it for studies of polymers and crystals as well as molecules. CC methods have also paid dividends for nuclei, and for certain strongly correlated systems of interest in field theory. In order for CC methods to have achieved this distinction, it has been necessary to formulate new theoretical approaches for the treatment of a variety of essential quantities
The polarizable embedding coupled cluster method
DEFF Research Database (Denmark)
Sneskov, Kristian; Schwabe, Tobias; Kongsted, Jacob
2011-01-01
We formulate a new combined quantum mechanics/molecular mechanics (QM/MM) method based on a self-consistent polarizable embedding (PE) scheme. For the description of the QM region, we apply the popular coupled cluster (CC) method detailing the inclusion of electrostatic and polarization effects...... all coupled to a polarizable MM environment. In the process, we identify CC density-like intermediates that allow for a very efficient implementation, retaining a low computational cost of the QM/MM terms even when the number of MM sites increases. The strengths of the new implementation are illustrated
Variations in the lithium abundances of turn off stars in the globular cluster 47 Tuc
Bonifacio, Piercarlo; Molaro, Paolo; Carretta, Eugenio; François, Patrick; Gratton, Raffaele G; James, Gael; Sbordone, Luca; Spite, François; Zoccali, Manuela
2007-01-01
aims: Our aim is to determine Li abundances in TO stars of the globular cluster 47 Tuc and test theories about Li variations among TO stars. method: We make use of high resolution (R ~ 43000), high signal-to-noise ratio (S/N = 50-70) spectra of 4 turn-off (TO) stars obtained with the UVES spectrograph at the 8.2 m VLT Kueyen telescope. results: The four stars observed span the range 1.6 ≲ A(Li) ≲ 2.14, providing a mean A(Li) = 1.84 with a standard deviation of 0.25 dex. When coupled with data for two other TO stars of the cluster available in the literature, the full range of Li abundances observed in this cluster is 1.6 ≲ A(Li) ≲ 2.3. The variation in A(Li) is at least 0.6 dex (0.7 dex considering also the data available in the literature) and the scatter is six times larger than what is expected from the observational error. We claim that these variations are real. A(Li) seems to be anti-correlated with A(Na), exactly as observed in NGC 6752. No systematic error in our analysis could produce such an a...
Constraints on a possible variation of the fine structure constant from galaxy cluster data
Holanda, R F L; Alcaniz, J S; Sanchez G., I E; Busti, V C
2015-01-01
We propose a new method to probe a possible time evolution of the fine structure constant $\alpha$ from X-ray and Sunyaev-Zeldovich measurements of the gas mass fraction ($f_{gas}$) in galaxy clusters. Taking into account a direct relation between variations of $\alpha$ and violations of the distance-duality relation, we discuss constraints on $\alpha$ for a class of dilaton runaway models. Although not yet competitive with bounds from high-$z$ quasar absorption systems, our constraints, considering a sample of 29 measurements of $f_{gas}$ in the redshift interval $0.14 < z < 0.89$, provide an independent estimate of $\alpha$ variation at low and intermediate redshifts. Furthermore, current and planned surveys will provide a larger amount of data and thus allow us to improve the limits on $\alpha$ variation obtained in the present analysis.
Advanced cluster methods for correlated-electron systems
Energy Technology Data Exchange (ETDEWEB)
Fischer, Andre
2015-04-27
In this thesis, quantum cluster methods are used to calculate electronic properties of correlated-electron systems. A special focus lies on the determination of the ground state properties of a 3/4-filled triangular lattice within the one-band Hubbard model. At this filling, the electronic density of states exhibits a so-called van Hove singularity and the Fermi surface becomes perfectly nested, causing an instability towards a variety of spin-density-wave (SDW) and superconducting states. While chiral d+id-wave superconductivity has been proposed as the ground state in the weak coupling limit, the situation towards strong interactions is unclear. Additionally, quantum cluster methods are used here to investigate the interplay of Coulomb interactions and symmetry-breaking mechanisms within the nematic phase of iron-pnictide superconductors. The transition from a tetragonal to an orthorhombic phase is accompanied by a significant change in electronic properties, while long-range magnetic order is not yet established. The driving force of this transition may be not only phonons but also magnetic or orbital fluctuations. The signatures of these scenarios are studied with quantum cluster methods to identify the most important effects. Here, cluster perturbation theory (CPT) and its variational extension, the variational cluster approach (VCA), are used to treat the respective systems on a level beyond mean-field theory. Short-range correlations are incorporated numerically exactly by exact diagonalization (ED). In the VCA, long-range interactions are included by variational optimization of a fictitious symmetry-breaking field based on a self-energy functional approach. Due to limitations of ED, cluster sizes are limited to a small number of degrees of freedom. For the 3/4-filled triangular lattice, the VCA is performed for different cluster symmetries. A strong symmetry dependence and finite-size effects make a comparison of the results from different clusters difficult.
Topologically clustering: a method for discarding mismatches
Wang, Yongtao; Zhang, Dazhi; Gao, Chenqiang; Tian, Jinwen
2007-11-01
Wide baseline stereo correspondence has become a challenging and attractive problem in computer vision and its related applications. Obtaining initial matches with a high correct ratio is a very important step of a general wide baseline stereo correspondence algorithm. Ferrari et al. suggested a voting scheme called the topological filter in [3] to discard mismatches from initial matches, but they did not give a theoretical analysis of their method, and the parameter of their scheme was uncertain. In this paper, we improve Ferrari et al.'s method based on our theoretical analysis and present a novel scheme, called topologically clustering, to discard mismatches. The proposed method has been tested on many well-known wide baseline image pairs, and the experimental results show that it can efficiently extract high-correct-ratio matches from low-correct-ratio initial matches.
Fuzzy Clustering - Principles, Methods and Examples
DEFF Research Database (Denmark)
Kroszynski, Uri; Zhou, Jianjun
1998-01-01
One of the most remarkable advances in the field of identification and control of systems (in particular mechanical systems) whose behaviour cannot be described by means of the usual mathematical models has been achieved by the application of methods of fuzzy theory. In the framework of a study about identification of "black-box" properties by analysis of system input/output data sets, we have prepared an introductory note on the principles and the most popular data classification methods used in fuzzy modeling. This introductory note also includes some examples that illustrate the use of the methods. The examples were solved by hand and served as a test bench for exploration of the MATLAB capabilities included in the Fuzzy Control Toolbox. The fuzzy clustering methods described include Fuzzy c-means (FCM), Fuzzy c-lines (FCL) and Fuzzy c-elliptotypes (FCE).
Breaking the hierarchy - a new cluster selection mechanism for hierarchical clustering methods
Directory of Open Access Journals (Sweden)
Zweig Katharina A
2009-10-01
Background: Hierarchical clustering methods like Ward's method have been used for decades to understand biological and chemical data sets. In order to get a partition of the data set, it is necessary to choose an optimal level of the hierarchy by a so-called level selection algorithm. In 2005, a new kind of hierarchical clustering method was introduced by Palla et al. that differs in two ways from Ward's method: it can be used on data for which no full similarity matrix is defined, and it can produce overlapping clusters, i.e., allow for multiple membership of items in clusters. These features are optimal for biological and chemical data sets, but until now no level selection algorithm has been published for this method. Results: In this article we provide a general selection scheme, the level-independent clustering selection method, called LInCS. With it, clusters can be selected from any level in quadratic time with respect to the number of clusters. Since hierarchically clustered data is not necessarily associated with a similarity measure, the selection is based on a graph-theoretic notion of cohesive clusters. We present results of our method on two data sets, a set of drug-like molecules and a set of protein-protein interaction (PPI) data. In both cases the method provides a clustering with very good sensitivity and specificity values according to a given reference clustering. Moreover, we can show for the PPI data set that our graph-theoretic cohesiveness measure indeed chooses biologically homogeneous clusters and disregards inhomogeneous ones in most cases. We finally discuss how the method can be generalized to other hierarchical clustering methods to allow for a level-independent cluster selection. Conclusion: Using our new cluster selection method together with the method by Palla et al. provides a new interesting clustering mechanism that allows the computation of overlapping clusters, which is especially valuable for biological and chemical data sets.
Finding Semirigid Domains in Biomolecules by Clustering Pair-Distance Variations
Directory of Open Access Journals (Sweden)
Michael Kenn
2014-01-01
Dynamic variations in the distances between pairs of atoms are used for clustering subdomains of biomolecules. We draw on a well-known target function for clustering and first show mathematically that the assignment of atoms to clusters has to be crisp, not fuzzy, as hitherto assumed. This reduces the computational load of clustering drastically, and we demonstrate results for several biomolecules relevant in immunoinformatics. Results are evaluated regarding the number of clusters, cluster size, cluster stability, and the evolution of clusters over time. Crisp clustering lends itself as an efficient tool to locate semirigid domains in the simulation of biomolecules. Such domains seem crucial for an optimum performance of subsequent statistical analyses, aiming at detecting minute motional patterns related to antigen recognition and signal transduction.
Finding semirigid domains in biomolecules by clustering pair-distance variations.
Kenn, Michael; Ribarics, Reiner; Ilieva, Nevena; Schreiner, Wolfgang
2014-01-01
Mapping the Generator Coordinate Method to the Coupled Cluster Approach
Stuber, Jason L
2015-01-01
The generator coordinate method (GCM) casts the wavefunction as an integral over a weighted set of non-orthogonal single determinantal states. In principle, this representation can be used like the configuration interaction (CI) or shell model to systematically improve the approximate wavefunction towards an exact solution. In practice, applications have generally been limited to systems with less than three degrees of freedom. This bottleneck is directly linked to the exponential computational expense associated with the numerical projection of broken-symmetry Hartree-Fock (HF) or Hartree-Fock-Bogoliubov (HFB) wavefunctions and to the use of a variational rather than a bi-variational expression for the energy. We circumvent these issues by choosing a hole-particle representation for the generator and applying algebraic symmetry projection, via the use of tensor operators and the invariant mean (operator average). The resulting GCM formulation can be mapped directly to the coupled cluster (CC) approach, leading...
MANNER OF STOCKS SORTING USING CLUSTER ANALYSIS METHODS
Directory of Open Access Journals (Sweden)
Jana Halčinová
2014-06-01
The aim of the present article is to show the possibility of using the methods of cluster analysis in the classification of stocks of finished products. Cluster analysis creates groups (clusters) of finished products according to similarity in demand, i.e., customer requirements for each product. The sorting of finished-product stocks by clusters is described with a practical example. The resulting clusters are incorporated into the draft layout of the distribution warehouse.
Fuzzy Clustering Using C-Means Method
Directory of Open Access Journals (Sweden)
Georgi Krastev
2015-05-01
Fuzzy clustering according to the fuzzy c-means algorithm is described in this paper: the problem of fuzzy clustering is discussed and the general formal concept of the fuzzy clustering analysis problem is presented. The formulation of the problem is specified and the algorithm for solving it is described.
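The fuzzy c-means iteration the abstract refers to alternates between recomputing cluster centers from membership-weighted means and recomputing memberships from inverse distances. Below is a minimal NumPy sketch of that standard scheme, not the paper's own implementation; the fuzzifier m, tolerance, and random initialization are illustrative defaults.

```python
import numpy as np

def fuzzy_c_means(X, c=2, m=2.0, max_iter=100, tol=1e-5, seed=0):
    """Minimal fuzzy c-means: alternate membership and center updates."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    U = rng.random((n, c))
    U /= U.sum(axis=1, keepdims=True)  # fuzzy memberships sum to 1 per point
    for _ in range(max_iter):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        # distances from every point to every center (small epsilon avoids /0)
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        p = 2.0 / (m - 1.0)
        # u_ik = 1 / sum_j (d_ik / d_jk)^p, vectorized
        U_new = 1.0 / (d ** p * np.sum(d ** (-p), axis=1, keepdims=True))
        if np.abs(U_new - U).max() < tol:
            U = U_new
            break
        U = U_new
    return centers, U
```

A hard assignment can be recovered afterwards with `U.argmax(axis=1)` when a crisp partition is needed.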
Fast variation method for elastic strip calculation.
Biryukov, Sergey V
2002-05-01
A new fast variation method (FVM) for determining the response of an elastic strip to stresses arbitrarily distributed on the flat side of the strip is proposed. The remaining surface of the strip may have an arbitrary form and is free of stresses. The FVM, like the well-known finite element method (FEM), starts from a variational principle; however, it does not use meshing of the strip. A comparison of FVM results with the exact analytical solution in the special case of shear stresses and a rectangular strip demonstrates excellent agreement.
VARIATIONAL METHODS OF FORMING DEPRECIATION DEDUCTIONS
Evgeniy Aleksandrovich Filatov; Liliya Gennadyevna Rudykh; Yuvenaliy Anatolievich Kiryukhin
2014-01-01
Long-term planning of activity in general, and of financial activity in particular, is one of the cornerstones of modern management. Using the authors' method to form depreciation policy reduces uncertainty in decisions connected with the development of commercial organizations. The article is devoted to finding optimal strategies for depreciation calculation by comparative analysis of the straight-line method and the methods proposed by the authors. It presents a new method of variational c...
Constraints on a possible variation of the fine structure constant from galaxy cluster data
Holanda, R. F. L.; Landau, S. J.; Alcaniz, J. S.; Sánchez G., I. E.; Busti, V. C.
2016-05-01
We propose a new method to probe a possible time evolution of the fine structure constant α from X-ray and Sunyaev-Zel'dovich measurements of the gas mass fraction (fgas) in galaxy clusters. Taking into account a direct relation between variations of α and violations of the distance-duality relation, we discuss constraints on α for a class of dilaton runaway models. Although not yet competitive with bounds from high-z quasar absorption systems, our constraints, considering a sample of 29 measurements of fgas, in the redshift interval 0.14 intermediate redshifts. Furthermore, current and planned surveys will provide a larger amount of data and thus allow the limits on α variation obtained in the present analysis to be improved.
Integrated management of thesis using clustering method
Astuti, Indah Fitri; Cahyadi, Dedy
2017-02-01
A thesis is one of the major requirements for a student pursuing a bachelor degree. In fact, finishing the thesis involves a long process including consultation, writing the manuscript, conducting the chosen method, seminar scheduling, searching for references, and appraisal by the board of mentors and examiners. Unfortunately, most students find it hard to match all the lecturers' free time so that they can sit together in a seminar room to examine a thesis. Therefore, the seminar scheduling process should be a top priority to be solved. A manual mechanism for this task no longer fulfills the need. People on campus, including students, staff, and lecturers, demand a system in which all the stakeholders can interact with each other and manage the thesis process without timetable conflicts. A branch of computer science named Management Information Systems (MIS) could be a breakthrough in dealing with thesis management. This research applies a method called clustering to distinguish certain categories using mathematical formulas. A system is then developed along with the method to create a well-managed tool providing main facilities such as seminar scheduling, consultation and review, thesis approval, assessment, and a reliable database of theses. The database plays an important role for present and future purposes.
Discrete range clustering using Monte Carlo methods
Chatterji, G. B.; Sridhar, B.
1993-01-01
For automatic obstacle avoidance guidance during rotorcraft low altitude flight, a reliable model of the nearby environment is needed. Such a model may be constructed by applying surface fitting techniques to the dense range map obtained by active sensing using radars. However, for covertness, passive sensing techniques using electro-optic sensors are desirable. As opposed to the dense range map obtained via active sensing, passive sensing algorithms produce reliable range at sparse locations, and therefore, surface fitting techniques to fill the gaps in the range measurement are not directly applicable. Both for automatic guidance and as a display for aiding the pilot, these discrete ranges need to be grouped into sets which correspond to objects in the nearby environment. The focus of this paper is on using Monte Carlo methods for clustering range points into meaningful groups. One of the aims of the paper is to explore whether simulated annealing methods offer significant advantage over the basic Monte Carlo method for this class of problems. We compare three different approaches and present application results of these algorithms to a laboratory image sequence and a helicopter flight sequence.
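The Monte Carlo grouping described above can be illustrated with a simulated-annealing toy: cluster assignments of sparse 1-D range points are perturbed one point at a time, and uphill moves are accepted with Boltzmann probability so the search can escape poor local groupings. This is a hedged sketch of the general technique, not the authors' flight-tested algorithm; the within-cluster scatter cost and geometric cooling schedule are illustrative choices.

```python
import math
import random

def within_cost(points, labels, k):
    """Sum of squared deviations from cluster means (empty clusters cost 0)."""
    cost = 0.0
    for c in range(k):
        members = [p for p, l in zip(points, labels) if l == c]
        if members:
            mu = sum(members) / len(members)
            cost += sum((p - mu) ** 2 for p in members)
    return cost

def anneal_cluster(points, k=2, steps=5000, t0=1.0, cooling=0.999, seed=1):
    """Simulated annealing over label assignments: propose single-point
    relabelings, accept uphill moves with probability exp(-dE / T)."""
    rng = random.Random(seed)
    labels = [rng.randrange(k) for _ in points]
    cost = within_cost(points, labels, k)
    t = t0
    for _ in range(steps):
        i = rng.randrange(len(points))
        old = labels[i]
        labels[i] = rng.randrange(k)
        new_cost = within_cost(points, labels, k)
        if new_cost <= cost or rng.random() < math.exp((cost - new_cost) / max(t, 1e-9)):
            cost = new_cost        # accept the move
        else:
            labels[i] = old        # reject and restore
        t *= cooling               # cool the temperature
    return labels, cost
```

Setting `t0` to zero degenerates this into the basic greedy Monte Carlo method, which is exactly the comparison the paper explores.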
A Variational Level Set Model Combined with FCMS for Image Clustering Segmentation
Directory of Open Access Journals (Sweden)
Liming Tang
2014-01-01
The fuzzy C-means clustering algorithm with spatial constraint (FCMS) is effective for image segmentation. However, it lacks essential smoothing constraints on the cluster boundaries and enough robustness to noise. Samson et al. proposed a variational level set model for image clustering segmentation, which can obtain smooth cluster boundaries and closed cluster regions due to the use of the level set scheme. However, it is very sensitive to noise since it is actually a hard C-means clustering model. In this paper, based on Samson's work, we propose a new variational level set model combined with FCMS for image clustering segmentation. Compared with FCMS clustering, the proposed model can obtain smooth cluster boundaries and closed cluster regions due to the use of the level set scheme. In addition, a block-based energy is incorporated into the energy functional, which enables the proposed model to be more robust to noise than FCMS clustering and Samson's model. Experiments on synthetic and real images are performed to assess the performance of the proposed model. Compared with some classical image segmentation models, the proposed model performs better on images contaminated by different noise levels.
Directory of Open Access Journals (Sweden)
Cooper James B
2010-03-01
Background: Clustering the information content of large high-dimensional gene expression datasets has widespread application in "omics" biology. Unfortunately, the underlying structure of these natural datasets is often fuzzy, and the computational identification of data clusters generally requires knowledge about cluster number and geometry. Results: We integrated strategies from machine learning, cartography, and graph theory into a new informatics method for automatically clustering self-organizing map ensembles of high-dimensional data. Our new method, called AutoSOME, readily identifies discrete and fuzzy data clusters without prior knowledge of cluster number or structure in diverse datasets including whole genome microarray data. Visualization of AutoSOME output using network diagrams and differential heat maps reveals unexpected variation among well-characterized cancer cell lines. Co-expression analysis of data from human embryonic and induced pluripotent stem cells using AutoSOME identifies >3400 up-regulated genes associated with pluripotency, and indicates that a recently identified protein-protein interaction network characterizing pluripotency was underestimated by a factor of four. Conclusions: By effectively extracting important information from high-dimensional microarray data without prior knowledge or the need for data filtration, AutoSOME can yield systems-level insights from whole genome microarray expression studies. Due to its generality, this new method should also have practical utility for a variety of data-intensive applications, including the results of deep sequencing experiments. AutoSOME is available for download at http://jimcooperlab.mcdb.ucsb.edu/autosome.
A Latent Variable Clustering Method for Wireless Sensor Networks
DEFF Research Database (Denmark)
Vasilev, Vladislav; Iliev, Georgi; Poulkov, Vladimir
2016-01-01
In this paper we derive a clustering method based on the Hidden Conditional Random Field (HCRF) model in order to maximize the performance of a wireless sensor network. Our novel approach to clustering in this paper is the application of an index-invariant graph that we defined in a previous work...... obtained by running simulations of a time-dynamic sensor network. The proposed method outperforms existing clustering methods, such as the Girvan-Newman algorithm, Karger's algorithm and the Spectral Clustering method, in terms of packet acceptance probability and delay.
A Variational Formulation of Dissipative Quasicontinuum Methods
Rokoš, Ondřej; Zeman, Jan; Peerlings, Ron H J
2016-01-01
Lattice systems and discrete networks with dissipative interactions are successfully employed as meso-scale models of heterogeneous solids. As the application scale generally is much larger than that of the discrete links, physically relevant simulations are computationally expensive. The QuasiContinuum (QC) method is a multiscale approach that reduces the computational cost of direct numerical simulations by fully resolving complex phenomena only in regions of interest while coarsening elsewhere. In previous work (Beex et al., J. Mech. Phys. Solids 64, 154-169, 2014), the originally conservative QC methodology was generalized to a virtual-power-based QC approach that includes local dissipative mechanisms. In this contribution, the virtual-power-based QC method is reformulated from a variational point of view, by employing the energy-based variational framework for rate-independent processes (Mielke and Roubíček, Rate-Independent Systems: Theory and Application, Springer-Verlag, 2015). By construction...
Variational Bayesian Approximation methods for inverse problems
Mohammad-Djafari, Ali
2012-09-01
Variational Bayesian Approximation (VBA) methods are recent tools for effective Bayesian computations. In this paper, these tools are used for inverse problems where the prior models include hidden variables and where the estimation of the hyperparameters also has to be addressed. In particular, two specific prior models (Student-t and mixture of Gaussian models) are considered and details of the algorithms are given.
Large-scale SNP analysis reveals clustered and continuous patterns of human genetic variation
Directory of Open Access Journals (Sweden)
Shriver Mark D
2005-06-01
Understanding the distribution of human genetic variation is an important foundation for research into the genetics of common diseases. Some of the alleles that modify common disease risk are themselves likely to be common and, thus, amenable to identification using gene-association methods. A problem with this approach is that the large sample sizes required for sufficient statistical power to detect alleles with moderate effect make gene-association studies susceptible to false-positive findings as the result of population stratification [1, 2]. Such type I errors can be eliminated by using either family-based association tests or methods that sufficiently adjust for population stratification [3-5]. These methods require the availability of genetic markers that can detect and, thus, control for sources of genetic stratification among populations. In an effort to investigate population stratification and identify appropriate marker panels, we have analysed 11,555 single nucleotide polymorphisms in 203 individuals from 12 diverse human populations. Individuals in each population cluster to the exclusion of individuals from other populations using two clustering methods. Higher-order branching and clustering of the populations are consistent with the geographic origins of populations and with previously published genetic analyses. These data provide a valuable resource for the definition of marker panels to detect and control for population stratification in population-based gene identification studies. Using three US resident populations (European-American, African-American and Puerto Rican), we demonstrate how such studies can proceed, quantifying proportional ancestry levels and detecting significant admixture structure in each of these populations.
Fuzzy Clustering Methods and their Application to Fuzzy Modeling
DEFF Research Database (Denmark)
Kroszynski, Uri; Zhou, Jianjun
1999-01-01
Fuzzy modeling techniques based upon the analysis of measured input/output data sets result in a set of rules that allow one to predict system outputs from given inputs. Fuzzy clustering methods for system modeling and identification result in relatively small rule-bases, allowing fast, yet accurate prediction of outputs. This article presents an overview of some of the most popular clustering methods, namely Fuzzy Cluster-Means (FCM) and its generalizations to Fuzzy C-Lines and Elliptotypes. The algorithms for computing cluster centers and principal directions from a training data-set are described. A method to obtain an optimized number of clusters is outlined. Based upon the cluster's characteristics, a behavioural model is formulated in terms of a rule-base and an inference engine. The article reviews several variants for the model formulation. Some limitations of the methods are listed...
A New Feature Selection Method for Text Clustering
Institute of Scientific and Technical Information of China (English)
XU Junling; XU Baowen; ZHANG Weifeng; CUI Zifeng; ZHANG Wei
2007-01-01
Feature selection methods have been successfully applied to text categorization but seldom applied to text clustering due to the unavailability of class label information. In this paper, a new feature selection method for text clustering based on expectation maximization and cluster validity is proposed. It uses a supervised feature selection method on the intermediate clustering result generated during iterative clustering to perform feature selection for text clustering; meanwhile, the Davies-Bouldin index is used to evaluate the intermediate feature subsets indirectly. Feature subsets are then selected according to the curve of the Davies-Bouldin index. Experiments are carried out on several popular datasets and the results show the advantages of the proposed method.
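The Davies-Bouldin index used above to score intermediate clusterings can be computed directly from cluster centroids and within-cluster scatters. The following is a small, generic NumPy implementation of the index itself (lower is better), not the paper's text-clustering pipeline.

```python
import numpy as np

def davies_bouldin(X, labels):
    """Davies-Bouldin index: mean over clusters of the worst ratio
    (s_i + s_j) / d_ij, where s_i is the mean distance of cluster i's
    points to its centroid and d_ij is the distance between centroids."""
    ids = np.unique(labels)
    centroids = np.array([X[labels == i].mean(axis=0) for i in ids])
    scatter = np.array([
        np.linalg.norm(X[labels == i] - centroids[k], axis=1).mean()
        for k, i in enumerate(ids)
    ])
    k = len(ids)
    worst = []
    for i in range(k):
        ratios = [(scatter[i] + scatter[j]) / np.linalg.norm(centroids[i] - centroids[j])
                  for j in range(k) if j != i]
        worst.append(max(ratios))
    return float(np.mean(worst))
```

Evaluating this index for each candidate feature subset, as the paper describes, lets the curve of index values guide the final selection without class labels.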
Fuzzy Clustering Method for Web User Based on Pages Classification
Institute of Scientific and Technical Information of China (English)
ZHAN Li-qiang; LIU Da-xin
2004-01-01
A new method for fuzzy clustering of Web users based on analysis of user interest characteristics is proposed in this article. The method first defines fuzzy page categories according to the links on the index page of the site, then computes the fuzzy degree of cross-page visits by aggregating data from the Web log. After that, using the fuzzy comprehensive evaluation method, it constructs user interest vectors according to page viewing times and frequency of hits, and derives the fuzzy similarity matrix from the interest vectors for the Web users. Finally, it obtains the clustering result through the fuzzy clustering method. The experimental results show the effectiveness of the method.
CCM: A Text Classification Method by Clustering
DEFF Research Database (Denmark)
Nizamani, Sarwat; Memon, Nasrullah; Wiil, Uffe Kock
2011-01-01
In this paper, a new Cluster-based Classification Model (CCM) for suspicious email detection and other text classification tasks is presented. Comparative experiments of the proposed model against traditional classification models and the boosting algorithm are also discussed. Experimental results show that the CCM outperforms traditional classification models as well as the boosting algorithm for the task of suspicious email detection on a terrorism-domain email dataset and topic categorization on the Reuters-21578 and 20 Newsgroups datasets. The overall finding is that applying a cluster based...
A graph clustering method for community detection in complex networks
Zhou, HongFang; Li, Jin; Li, JunHuai; Zhang, FaCun; Cui, YingAn
2017-03-01
Information mining from complex networks by identifying communities is an important problem in a number of research fields, including the social sciences, biology, physics and medicine. First, two concepts are introduced, Attracting Degree and Recommending Degree. Second, a graph clustering method, referred to as AR-Cluster, is presented for detecting community structures in complex networks. Third, a novel collaborative similarity measure is adopted to calculate node similarities. In the AR-Cluster method, vertices are grouped together based on calculated similarity under a K-Medoids framework. Extensive experimental results on two real datasets show the effectiveness of AR-Cluster.
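The K-Medoids framework that AR-Cluster builds on can be sketched as a plain alternating scheme over a precomputed distance matrix: assign each vertex to its nearest medoid, then move each medoid to the member that minimizes its cluster's total distance. This is a generic sketch; the paper's Attracting/Recommending Degree similarity is not reproduced here, so ordinary distances stand in for it.

```python
import numpy as np

def k_medoids(D, k, max_iter=50, seed=0):
    """Simple alternating K-Medoids on a precomputed distance matrix D."""
    rng = np.random.default_rng(seed)
    n = D.shape[0]
    medoids = rng.choice(n, size=k, replace=False)
    for _ in range(max_iter):
        # assign every point to its nearest medoid
        labels = np.argmin(D[:, medoids], axis=1)
        new_medoids = medoids.copy()
        for c in range(k):
            members = np.where(labels == c)[0]
            if len(members):
                # pick the member minimizing total distance to the others
                costs = D[np.ix_(members, members)].sum(axis=0)
                new_medoids[c] = members[np.argmin(costs)]
        if np.array_equal(np.sort(new_medoids), np.sort(medoids)):
            break  # converged
        medoids = new_medoids
    labels = np.argmin(D[:, medoids], axis=1)
    return medoids, labels
```

Because only the distance matrix is consumed, any similarity measure (including a collaborative one, as in the paper) can be plugged in after converting it to a dissimilarity.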
VARIATIONAL METHODS OF FORMING DEPRECIATION DEDUCTIONS
Directory of Open Access Journals (Sweden)
Evgeniy Aleksandrovich Filatov
2014-01-01
Long-term planning of activity in general, and of financial activity in particular, is one of the cornerstones of modern management. Using the authors' method to form depreciation policy reduces uncertainty in decisions connected with the development of commercial organizations. The article is devoted to finding optimal strategies for depreciation calculation by comparative analysis of the straight-line method and the methods proposed by the authors. It presents a new method of variational calculation of depreciation policy based on the handling of coefficients introduced by the authors (linear, step and correction coefficients), allowing an economic entity to reasonably form and distribute the amortization fund in accordance with the market situation.
Optimal Variational Method for Truly Nonlinear Oscillators
Directory of Open Access Journals (Sweden)
Vasile Marinca
2013-01-01
The Optimal Variational Method (OVM) is introduced and applied for calculating approximate periodic solutions of "truly nonlinear oscillators". The main advantage of this procedure is that it provides a convenient way to control the convergence of approximate solutions in a very rigorous way and allows adjustment of convergence regions where necessary. This approach does not depend upon any small or large parameters. Very good agreement was found between the approximate and numerical solutions, which proves that OVM is efficient and accurate.
Areias, C; Briz, T; Nunes, C
2015-11-01
Portugal, a medium- to low-level endemic country (21·6 cases/100 000 population in 2012), has one of the highest European Union tuberculosis (TB) incidences. Although incidence is declining progressively, the country's heterogeneity in both regional endemics and their evolution suggests the importance of a better understanding of subnational epidemiology to customize TB control efforts. We aimed to update knowledge on municipality-years pulmonary TB incidence clustering, identify areas with different time trends, and show the potential of combining complementary clustering methods in control of infectious diseases. We used national surveillance municipality-level data (mainland Portugal, 2000-2010). Space-time clustering and spatial variation in temporal trends methods were applied. Space-time critical clusters identified (P < 0·001) were still the Lisbon and Oporto regions. The global incidence declined at a 5·81% mean annual percentage change, with high space-time heterogeneity and distinct time trend clusters (P < 0·001). Municipalities with incidences declining more rapidly belonged to critical areas. In particular, the Oporto trend cluster had a consistent -8·98% mean annual percentage change. Large space-time heterogeneities were identified, with critical incidences in the greater Lisbon and Oporto regions, but declining more rapidly in these regions. Oporto showed a consistent, steeper decrease and could represent a good example of local control strategy. Combining results from these approaches gives promise for prospects for infectious disease control and the design of more effective, focused interventions.
New Constraints on Spatial Variations of the Fine Structure Constant from Clusters of Galaxies
Directory of Open Access Journals (Sweden)
Ivan De Martino
2016-12-01
We have constrained the spatial variation of the fine structure constant using multi-frequency measurements of the thermal Sunyaev-Zeldovich effect of 618 X-ray selected clusters. Although our results are not competitive with the ones from quasar absorption lines, we improved previous results from the Cosmic Microwave Background power spectrum and from galaxy clusters by factors of 10 and ∼2.5, respectively.
Iris segmentation using variational level set method
Roy, Kaushik; Bhattacharya, Prabir; Suen, Ching Y.
2011-04-01
Continuous efforts have been made to process degraded iris images for enhancement of the iris recognition performance in unconstrained situations. Recently, many researchers have focused on developing the iris segmentation techniques, which can deal with iris images in a non-cooperative environment where the probability of acquiring unideal iris images is very high due to gaze deviation, noise, blurring, and occlusion by eyelashes, eyelids, glasses, and hair. Although there have been many iris segmentation methods, most focus primarily on the accurate detection of iris images captured in a closely controlled environment. The novelty of this research effort is that we propose to apply a variational level set-based curve evolution scheme that uses a significantly larger time step to numerically solve the evolution partial differential equation (PDE) for segmentation of an unideal iris image accurately, and thereby, speeding up the curve evolution process drastically. The iris boundary represented by the variational level set may break and merge naturally during evolution, and thus, the topological changes are handled automatically. The proposed variational model is also robust against poor localization and weak iris/sclera boundaries. In order to solve the size irregularities occurring due to arbitrary shapes of the extracted iris/pupil regions, a simple method is applied based on connection of adjacent contour points. Furthermore, to reduce the noise effect, we apply a pixel-wise adaptive 2D Wiener filter. The verification and identification performance of the proposed scheme is validated on three challenging iris image datasets, namely, the ICE 2005, the WVU Unideal, and the UBIRIS Version 1.
On Comparison of Clustering Methods for Pharmacoepidemiological Data.
Feuillet, Fanny; Bellanger, Lise; Hardouin, Jean-Benoit; Victorri-Vigneau, Caroline; Sébille, Véronique
2015-01-01
The high consumption of psychotropic drugs is a public health problem. Rigorous statistical methods are needed to identify consumption characteristics in the post-marketing phase. Agglomerative hierarchical clustering (AHC) and latent class analysis (LCA) can both provide clusters of subjects with similar characteristics. The objective of this study was to compare these two methods in pharmacoepidemiology on several criteria: number of clusters, concordance, interpretation, and stability over time. On a dataset of bromazepam consumption, the two methods show good concordance. AHC is a very stable method and provides homogeneous classes. LCA is an inferential approach and seems to identify extreme deviant behavior more accurately.
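Agglomerative hierarchical clustering of the kind compared above starts from singletons and repeatedly merges the closest pair of clusters. The sketch below uses complete linkage for brevity rather than the Ward criterion the paper studies, and the O(n^3) loops are for clarity, not efficiency; the latent class model is not reproduced.

```python
import numpy as np

def complete_linkage(X, k):
    """Naive agglomerative clustering: repeatedly merge the pair of
    clusters whose maximum inter-point distance is smallest, until k
    clusters remain (complete linkage)."""
    clusters = [[i] for i in range(len(X))]
    D = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
    while len(clusters) > k:
        best, pair = np.inf, None
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                # complete linkage: distance between clusters is the
                # largest pairwise point distance
                d = max(D[i, j] for i in clusters[a] for j in clusters[b])
                if d < best:
                    best, pair = d, (a, b)
        a, b = pair
        clusters[a].extend(clusters[b])
        del clusters[b]
    labels = np.empty(len(X), dtype=int)
    for c, members in enumerate(clusters):
        labels[members] = c
    return labels
```

Stopping the merging at a chosen k corresponds to cutting the dendrogram at one level, which is exactly the partitioning step whose stability the paper evaluates.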
Rainfall variation by geostatistical interpolation method
Directory of Open Access Journals (Sweden)
Glauber Epifanio Loureiro
2013-08-01
Full Text Available This article analyses the variation of rainfall in the Tocantins-Araguaia hydrographic region in the last two decades, based upon the rain gauge stations of the ANA (Brazilian National Water Agency) HidroWeb database for the years 1983, 1993 and 2003. The information was systemized and treated with hydrologic methods such as contour mapping and ordinary kriging interpolation. The treatment considered the consistency of the data, the density of the spatial distribution of the stations, and the periods of study. The results demonstrated that the total volume of water precipitated annually did not change significantly in the 20 years analyzed. However, a significant variation occurred in its spatial distribution. Analysis of the isohyets showed a displacement of approximately 10% of the total precipitated volume at Tocantins Baixo (TOB). This displacement could be caused by global change, by anthropogenic activities, or by regional natural phenomena; however, this paper does not explore its possible causes.
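To make the interpolation step above concrete, here is a minimal ordinary kriging sketch in Python with numpy. It assumes a simple linear variogram with no nugget; a real study like this one would fit a variogram model to the station data first, so treat the variogram choice and all station values below as illustrative.

```python
import numpy as np

def ordinary_kriging(xy, z, targets, variogram=lambda h: h):
    """Ordinary kriging with an assumed variogram (here: linear, no nugget).
    xy: (n, 2) station coordinates, z: (n,) observed values,
    targets: (m, 2) points at which to estimate the field."""
    n = len(xy)
    d = np.linalg.norm(xy[:, None] - xy[None, :], axis=-1)
    # kriging system: semivariances bordered by the unbiasedness constraint
    K = np.empty((n + 1, n + 1))
    K[:n, :n] = variogram(d)
    K[n, :] = 1.0
    K[:, n] = 1.0
    K[n, n] = 0.0
    out = np.empty(len(targets))
    for i, p in enumerate(targets):
        rhs = np.append(variogram(np.linalg.norm(xy - p, axis=1)), 1.0)
        w = np.linalg.solve(K, rhs)   # weights plus Lagrange multiplier
        out[i] = w[:n] @ z
    return out

# demo: kriging is an exact interpolator, so it reproduces the station values
rng = np.random.default_rng(0)
stations = rng.random((6, 2)) * 100.0      # synthetic station coordinates (km)
rain = 1200 + 300 * rng.random(6)          # synthetic annual rainfall (mm)
estimate = ordinary_kriging(stations, rain, stations)
```

The demo checks the defining property of kriging as an exact interpolator: estimates at the station locations equal the observations.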
Progeny Clustering: A Method to Identify Biological Phenotypes
Hu, Chenyue W.; Kornblau, Steven M.; Slater, John H.; Qutub, Amina A.
2015-01-01
Estimating the optimal number of clusters is a major challenge in applying cluster analysis to any type of dataset, especially to biomedical datasets, which are high-dimensional and complex. Here, we introduce an improved method, Progeny Clustering, which is stability-based and exceptionally efficient in computing, to find the ideal number of clusters. The algorithm employs a novel Progeny Sampling method to reconstruct cluster identity, a co-occurrence probability matrix to assess clustering stability, and a set of reference datasets to overcome inherent biases in the algorithm and data space. Our method proved successful and robust when applied to two synthetic datasets (a two-dimensional dataset and a ten-dimensional dataset containing eight dimensions of pure noise), two standard biological datasets (the Iris dataset and the Rat CNS dataset), and two further biological datasets (a cell phenotype dataset and an acute myeloid leukemia (AML) reverse phase protein array (RPPA) dataset). Progeny Clustering outperformed some popular clustering evaluation methods on the ten-dimensional synthetic dataset as well as on the cell phenotype dataset, and it was the only method that successfully discovered clinically meaningful patient groupings in the AML RPPA dataset. PMID:26267476
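The stability-based idea can be illustrated with a toy sketch (this is not the authors' Progeny Sampling itself): repeatedly cluster random subsamples, accumulate how often each pair of points lands in the same cluster, and score how consistent those co-memberships are. All parameter values and the helper k-means are illustrative.

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Plain Lloyd's k-means, used only as a helper for the stability sketch."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)].copy()
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(0)
    return labels

def stability(X, k, n_rounds=20, frac=0.8, seed=0):
    """Fraction of point pairs that are consistently together or apart
    across clusterings of random subsamples (1.0 = perfectly stable)."""
    rng = np.random.default_rng(seed)
    n = len(X)
    together = np.zeros((n, n))
    sampled = np.zeros((n, n))
    for r in range(n_rounds):
        idx = rng.choice(n, int(frac * n), replace=False)
        labels = kmeans(X[idx], k, seed=r)
        co = (labels[:, None] == labels[None, :]).astype(float)
        together[np.ix_(idx, idx)] += co
        sampled[np.ix_(idx, idx)] += 1.0
    p = together[sampled > 0] / sampled[sampled > 0]
    return float(np.mean(np.maximum(p, 1.0 - p)))

# two well-separated blobs: k = 2 should look clearly more stable than k = 5
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (40, 2)), rng.normal(10, 1, (40, 2))])
s2, s5 = stability(X, 2), stability(X, 5)
```

Scanning k and picking the value with the highest (or most sharply peaked) stability is the common usage pattern for scores of this kind.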
UPRE method for total variation parameter selection
Energy Technology Data Exchange (ETDEWEB)
Wohlberg, Brendt [Los Alamos National Laboratory; Lin, Youzuo [Los Alamos National Laboratory
2008-01-01
Total Variation (TV) regularization is an important method for solving a wide variety of inverse problems in image processing. In order to optimize the reconstructed image, it is important to choose the optimal regularization parameter. The Unbiased Predictive Risk Estimator (UPRE) has been shown to give a very good estimate of this parameter for Tikhonov regularization. In this paper we propose an approach to extend the UPRE method to the TV problem. However, applying the extended UPRE is impractical for inverse problems such as deblurring, due to the large scale of the associated linear problem. We therefore also propose an approach that reduces the large-scale problem to a small one, significantly reducing computational requirements while providing a good approximation to the original problem.
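For intuition, here is a numpy sketch of UPRE parameter selection for the simpler Tikhonov case that the paper extends (the TV extension and the large-scale reduction are not reproduced). It follows the standard UPRE definition and assumes the noise level sigma is known; the synthetic problem is invented for the example.

```python
import numpy as np

def upre_tikhonov(A, b, sigma, lambdas):
    """UPRE curve for Tikhonov regularization via the SVD of A:
    UPRE(lam) = ||r||^2/n + (2*sigma^2/n)*trace(H) - sigma^2,
    with H the influence matrix and r the residual."""
    U, s, _ = np.linalg.svd(A, full_matrices=False)
    n = len(b)
    beta = U.T @ b
    perp = float(b @ b - beta @ beta)    # energy of b outside range(A)
    risks = []
    for lam in lambdas:
        f = s**2 / (s**2 + lam)          # Tikhonov filter factors
        res2 = float(np.sum(((1.0 - f) * beta) ** 2)) + perp
        risks.append(res2 / n + 2.0 * sigma**2 * np.sum(f) / n - sigma**2)
    risks = np.array(risks)
    return lambdas[int(np.argmin(risks))], risks

# small synthetic linear inverse problem with known noise level
rng = np.random.default_rng(0)
A = rng.normal(size=(60, 30))
x_true = rng.normal(size=30)
sigma = 0.1
b = A @ x_true + rng.normal(0, sigma, 60)
lambdas = np.logspace(-4, 2, 40)
lam_best, risks = upre_tikhonov(A, b, sigma, lambdas)
```

The SVD makes the whole lambda sweep cheap for small problems; the paper's contribution is precisely that this becomes impractical at deblurring scale, motivating their reduced problem.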
Using an Improved Clustering Method to Detect Anomaly Activities
Institute of Scientific and Technical Information of China (English)
LI Han; ZHANG Nan; BAO Lihui
2006-01-01
In this paper, an improved k-means based clustering method (IKCM) is proposed. By refining the initial cluster centers and adjusting the number of clusters through splitting and merging procedures, it can avoid convergence to locally optimal solutions and reduce the dependence on the preset number of clusters. The IKCM has been implemented and tested. We perform experiments on the KDD-99 data set. Comparison experiments with H-means+ have also been conducted. The results obtained in this study are very encouraging.
A Latent Variable Clustering Method for Wireless Sensor Networks
DEFF Research Database (Denmark)
Vasilev, Vladislav; Mihovska, Albena Dimitrova; Poulkov, Vladimir
2016-01-01
In this paper we derive a clustering method based on the Hidden Conditional Random Field (HCRF) model in order to maximize the performance of a wireless sensor network. Our novel approach to clustering in this paper is in the application of an index invariant graph that we defined in a previous work and...
Clustering Methods Application for Customer Segmentation to Manage Advertisement Campaign
Directory of Open Access Journals (Sweden)
Maciej Kutera
2010-10-01
Full Text Available Clustering methods are by now such well-developed algorithms for the analysis of large data collections that they are counted among standard data mining methods. They form an ever larger group of techniques, evolving quickly and finding more and more applications. In this article, our research concerning the usefulness of clustering methods in customer segmentation for managing advertisement campaigns is presented. We introduce results obtained with four selected methods, chosen because their peculiarities suggested their applicability to our purposes. One of the analyzed methods – k-means clustering with randomly selected initial cluster seeds – gave very good results in customer segmentation for managing an advertisement campaign, and these results are presented in detail in the article. In contrast, one of the methods (hierarchical average linkage) was found useless in customer segmentation. Further investigation of the benefits of clustering methods in customer segmentation for managing advertisement campaigns is worth continuing, particularly as solutions in this field can yield measurable profits for marketing activity.
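As an illustration of the k-means variant the authors favor, the following Python sketch segments synthetic customers on recency/frequency/monetary features with randomly selected initial seeds. All customer data and parameter choices here are invented for the example; they are not from the article.

```python
import numpy as np

rng = np.random.default_rng(42)
# synthetic customers: columns = recency (days), frequency, monetary value
segments = [
    rng.normal([10, 20, 500], [3, 4, 60], size=(50, 3)),   # loyal big spenders
    rng.normal([90, 2, 40], [10, 1, 10], size=(50, 3)),    # lapsed low spenders
]
X = np.vstack(segments)
X = (X - X.mean(0)) / X.std(0)        # standardize so no feature dominates

def kmeans(X, k, iters=100, seed=0):
    """Lloyd's k-means with randomly selected initial cluster seeds."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)].copy()
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
        new = np.array([X[labels == j].mean(0) if np.any(labels == j)
                        else centers[j] for j in range(k)])
        if np.allclose(new, centers):
            break
        centers = new
    return labels, centers

labels, centers = kmeans(X, k=2)
```

In a real campaign setting the cluster centers (read back in the original units) are what the marketer inspects to name and target the segments.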
Cluster Variation Study of Ordering in FCC Solid Solutions.
1980-07-01
despite the fact that the CVM represents a remarkable improvement over other approximate methods such as the molecular field and quasichemical...Brasileiro de Pesquisas Fisicas, Rio de Janeiro, Brazil, August 10, 1979. Sanchez, J. N., "Classical Approach to Order-Disorder", Instituto de Fisica
Object-Oriented Image Clustering Method Using UAS Photogrammetric Imagery
Lin, Y.; Larson, A.; Schultz-Fellenz, E. S.; Sussman, A. J.; Swanson, E.; Coppersmith, R.
2016-12-01
Unmanned Aerial Systems (UAS) have been widely used as an imaging modality to obtain remotely sensed multi-band surface imagery, and are growing in popularity due to their efficiency, ease of use, and affordability. Los Alamos National Laboratory (LANL) has employed UAS for geologic site characterization and change detection studies at a variety of field sites. The deployed UAS was equipped with a standard visible-band camera to collect imagery datasets. Based on the imagery collected, we use deep sparse algorithmic processing to detect and discriminate subtle topographic features created or impacted by subsurface activities. In this work, we develop an object-oriented remote sensing imagery clustering method for land cover classification. To improve clustering and segmentation accuracy, instead of using conventional pixel-based clustering methods, we integrate spatial information from neighboring regions to create super-pixels, avoiding salt-and-pepper noise and subsequent over-segmentation. To further improve the robustness of our clustering method, we also incorporate a custom digital elevation model (DEM) dataset, generated using a structure-from-motion (SfM) algorithm, together with the red, green, and blue (RGB) band data for clustering. In particular, we first employ agglomerative clustering to create an initial segmentation map, in which every object is treated as a single (new) pixel. Based on the new pixels obtained, we generate new features to implement another level of clustering. We apply our clustering method to the RGB+DEM datasets collected at the field site. Through binary clustering and multi-object clustering tests, we verify that our method can accurately separate vegetation from non-vegetation regions and is also able to differentiate object features on the surface.
Sequential Combination Methods forData Clustering Analysis
Institute of Scientific and Technical Information of China (English)
钱 涛; Ching Y.Suen; 唐远炎
2002-01-01
This paper proposes the use of more than one clustering method to improve clustering performance. Clustering is an optimization procedure based on a specific clustering criterion. Clustering combination can be regarded as a technique that constructs and processes multiple clustering criteria. Since the global and local clustering criteria are complementary rather than competitive, combining these two types of clustering criteria may enhance the clustering performance. In our past work, a multi-objective programming based simultaneous clustering combination algorithm has been proposed, which incorporates multiple criteria into an objective function by a weighting method, and solves this problem with constrained nonlinear optimization programming. But this algorithm has high computational complexity. Here a sequential combination approach is investigated, which first uses the global criterion based clustering to produce an initial result, then uses the local criterion based information to improve the initial result with a probabilistic relaxation algorithm or linear additive model. Compared with the simultaneous combination method, sequential combination has low computational complexity. Results on some simulated data and standard test data are reported. It appears that clustering performance improvement can be achieved at low cost through sequential combination.
Topological methods for variational problems with symmetries
Bartsch, Thomas
1993-01-01
Symmetry has a strong impact on the number and shape of solutions to variational problems. This has been observed, for instance, in the search for periodic solutions of Hamiltonian systems or of the nonlinear wave equation; when one is interested in elliptic equations on symmetric domains or in the corresponding semiflows; and when one is looking for "special" solutions of these problems. This book is concerned with Lusternik-Schnirelmann theory and Morse-Conley theory for group invariant functionals. These topological methods are developed in detail with new calculations of the equivariant Lusternik-Schnirelmann category and versions of the Borsuk-Ulam theorem for very general classes of symmetry groups. The Morse-Conley theory is applied to bifurcation problems, in particular to the bifurcation of steady states and heteroclinic orbits of O(3)-symmetric flows; and to the existence of periodic solutions near equilibria of symmetric Hamiltonian systems. Some familiarity with the usual minimax theory and basic a...
Urban Fire Risk Clustering Method Based on Fire Statistics
Institute of Scientific and Technical Information of China (English)
WU Lizhi; REN Aizhu
2008-01-01
Fire statistics and fire analysis have become important ways for us to understand the laws of fire, prevent the occurrence of fire, and improve the ability to control fire. Based on existing fire statistics, a weighted fire risk calculation method characterized by the number of fire occurrences, direct economic losses, and fire casualties was put forward. On the basis of this method, and with an improved K-means clustering algorithm, this paper established a fire risk K-means clustering model, which could better resolve the automatic classification of fire risk. Fire risk clusters should be classified by the absolute distance of the target instead of the relative distance used in the traditional clustering algorithm. Finally, to apply the established model, this paper carried out fire risk clustering on fire statistics from January 2000 to December 2004 for Shenyang, China. This research provides technical support for urban fire management.
An Effective Method of Producing Small Neutral Carbon Clusters
Institute of Scientific and Technical Information of China (English)
XIA Zhu-Hong; CHEN Cheng-Chu; HSU Yen-Chu
2007-01-01
An effective method of producing small neutral carbon clusters Cn (n = 1-6) is described. The small carbon clusters (positively or negatively charged, or neutral) are formed in a plasma produced by a high-power 532 nm pulsed laser ablating the surface of a metal Mn rod, which reacts with small hydrocarbons supplied by a pulsed valve; the neutral carbon clusters are then extracted and photo-ionized by another laser (266 nm or 355 nm) in the ionization region of a linear time-of-flight mass spectrometer. The distributions of the initial neutral carbon clusters are analysed from the ionic species appearing in the mass spectra. It is observed that the yield of small carbon clusters with the present method is about 10 times that of the traditional, widely used technique of laser vaporization of graphite.
D'Orazi, Valentina; Lugaro, Maria; Gratton, Raffaele G; Angelou, George; Bragaglia, Angela; Carretta, Eugenio; Alves-Brito, Alan; Ivans, Inese I; Masseron, Thomas; Mucciarelli, Alessio
2012-01-01
Observed chemical (anti)correlations in proton-capture elements among globular cluster stars are presently recognised as the signature of self-enrichment from now extinct, previous generations of stars. This defines the multiple population scenario. Since fluorine is also affected by proton captures, determining its abundance in globular clusters provides new and complementary clues regarding the nature of these previous generations, and supplies strong observational constraints to the chemical enrichment timescales. In this paper we present our results on near-infrared CRIRES spectroscopic observations of six cool giant stars in NGC 6656 (M22): the main objective is to derive the F content and its internal variation in this peculiar cluster, which exhibits significant changes in both light and heavy element abundances. We detected F variations across our sample beyond the measurement uncertainties and found that the F abundances are positively correlated with O and anticorrelated with Na, as expected accordi...
Variable cluster analysis method for building neural network model
Institute of Scientific and Technical Information of China (English)
王海东; 刘元东
2004-01-01
To address the problems that input variables should be reduced as much as possible while still fully explaining the output variables when building a neural network model of a complicated system, a variable selection method based on cluster analysis was investigated. A similarity coefficient describing the mutual relation of variables was defined. The methods of highest contribution rate, part replacing whole, and variable replacement are put forward and derived from information theory. Software for neural networks based on cluster analysis, which provides several methods for defining the variable similarity coefficient, clustering system variables, and evaluating variable clusters, was developed and applied to build a neural network forecast model of cement clinker quality. The results show that the network scale, training time, and prediction accuracy are all satisfactory. The practical application demonstrates that this method of selecting variables for neural networks is feasible and effective.
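A minimal sketch of the idea in Python: group variables by a similarity coefficient (here absolute Pearson correlation, one of several options such a tool could offer) and keep one representative per group as the network input. The greedy grouping and the threshold value are illustrative simplifications, not the paper's exact procedure.

```python
import numpy as np

def select_variables(X, threshold=0.8):
    """Greedy correlation grouping of variables: variables whose |corr| with a
    group seed exceeds `threshold` join that group; keep one representative
    per group (the member with the highest mean similarity to its group)."""
    corr = np.abs(np.corrcoef(X, rowvar=False))
    unassigned = list(range(corr.shape[1]))
    groups = []
    while unassigned:
        seed = unassigned.pop(0)
        group = [seed] + [j for j in unassigned if corr[seed, j] >= threshold]
        unassigned = [j for j in unassigned if j not in group]
        groups.append(group)
    reps = [g[int(np.argmax([corr[i, g].mean() for i in g]))] for g in groups]
    return groups, reps

# demo: variable 1 is a near-duplicate of variable 0; variable 2 is independent
rng = np.random.default_rng(0)
v0 = rng.normal(size=200)
v2 = rng.normal(size=200)
X = np.column_stack([v0, v0 + rng.normal(0, 0.01, 200), v2])
groups, reps = select_variables(X, threshold=0.8)
```

Feeding only the representatives to the network removes redundant inputs while the retained variables still span the information of the discarded ones.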
A dynamic fuzzy clustering method based on genetic algorithm
Institute of Scientific and Technical Information of China (English)
ZHENG Yan; ZHOU Chunguang; LIANG Yanchun; GUO Dongwei
2003-01-01
A dynamic fuzzy clustering method based on the genetic algorithm is presented. By calculating the fuzzy dissimilarity between samples, the essential associations among samples are modeled faithfully. The fuzzy dissimilarity between two samples is mapped into their Euclidean distance; that is, the high-dimensional samples are mapped into the two-dimensional plane. The mapping is optimized globally by the genetic algorithm, which adjusts the coordinates of each sample, and thus the Euclidean distance, to gradually approximate the fuzzy dissimilarity between samples. A key advantage of the proposed method is that the clustering is independent of the space distribution of the input samples, which improves flexibility and visualization. The method possesses faster convergence and more exact clustering than some typical clustering algorithms. Simulated experiments show the feasibility and availability of the proposed method.
New resampling method for evaluating stability of clusters
Directory of Open Access Journals (Sweden)
Neuhaeuser Markus
2008-01-01
Full Text Available Abstract Background Hierarchical clustering is a widely applied tool in the analysis of microarray gene expression data. The assessment of cluster stability is a major challenge in clustering procedures. Statistical methods are required to distinguish between real and random clusters. Several methods for assessing cluster stability have been published, including resampling methods such as the bootstrap. We propose a new resampling method based on continuous weights to assess the stability of clusters in hierarchical clustering. While in bootstrapping approximately one third of the original items is lost, continuous weights avoid zero elements and instead allow non-integer diagonal elements, which leads to retention of the full dimensionality of the space, i.e., each variable of the original data set is represented in the resampling sample. Results Comparison of continuous weights and bootstrapping using real datasets and simulation studies reveals the advantage of continuous weights, especially when the dataset has only few observations, few differentially expressed genes, and the fold change of differentially expressed genes is low. Conclusion We recommend the use of continuous weights in small as well as in large datasets, because according to our results they produce at least the same results as conventional bootstrapping and in some cases surpass it.
DNA splice site sequences clustering method for conservativeness analysis
Institute of Scientific and Technical Information of China (English)
Quanwei Zhang; Qinke Peng; Tao Xu
2009-01-01
DNA sequences near splice sites show remarkable conservativeness, and many researchers have contributed to the prediction of splice sites. In order to mine the underlying biological knowledge, we analyze the conservativeness of DNA splice-site adjacent sequences by clustering. First, we propose a DNA splice-site sequence clustering method based on DBSCAN, using four kinds of dissimilarity calculation methods. Then, we analyze the conservative features of the clustering results and of the experimental data set.
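To make the approach concrete, here is a small self-contained Python sketch: a DBSCAN-style clustering of fixed-length site-adjacent sequences under Hamming distance. The paper uses four dissimilarity measures and its own encoding; the single Hamming measure, the parameters, and all sequences below are made up for illustration.

```python
import numpy as np

def hamming(a, b):
    """Number of mismatching positions between two equal-length strings."""
    return sum(x != y for x, y in zip(a, b))

def dbscan(seqs, eps, min_pts):
    """Basic DBSCAN over a precomputed Hamming distance matrix.
    Returns a label per sequence; -1 marks noise."""
    n = len(seqs)
    dist = np.array([[hamming(a, b) for b in seqs] for a in seqs])
    labels = np.full(n, -1)
    visited = np.zeros(n, bool)
    cluster = 0
    for i in range(n):
        if visited[i]:
            continue
        visited[i] = True
        neigh = list(np.where(dist[i] <= eps)[0])
        if len(neigh) < min_pts:
            continue                      # not a core point; stays noise for now
        labels[i] = cluster
        queue = [j for j in neigh if j != i]
        while queue:
            j = queue.pop()
            if labels[j] == -1:
                labels[j] = cluster
            if not visited[j]:
                visited[j] = True
                jn = np.where(dist[j] <= eps)[0]
                if len(jn) >= min_pts:    # j is a core point: expand through it
                    queue.extend(jn)
        cluster += 1
    return labels

seqs = ["GTAAGTA", "GTAAGTC", "GTAAGTA",   # donor-like group (invented)
        "TTTCAGG", "TTTCAGG", "TTTCAGC",   # acceptor-like group (invented)
        "ACGACGA"]                          # outlier
labels = dbscan(seqs, eps=1, min_pts=2)
```

The density-based formulation is what lets highly conserved sequence families emerge as clusters while unconserved sequences are left as noise rather than forced into a group.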
Color Image Segmentation Method Based on Improved Spectral Clustering Algorithm
Dong Qin
2014-01-01
To address the high sparsity of image data and the problem of determining the number of clusters, we put forward a color image segmentation algorithm combining semi-supervised machine learning technology and spectral graph theory. Through the study of related theories and methods of spectral clustering algorithms, we introduce the concept of information entropy to design a method that can automatically optimize the scale parameter value. So it avoids the unstab...
Evidence for Cluster to Cluster Variations in Low-mass Stellar Rotational Evolution
Coker, Carl T.; Pinsonneault, Marc; Terndrup, Donald M.
2016-12-01
The concordance model for angular momentum evolution postulates that star-forming regions and clusters are an evolutionary sequence that can be modeled with assumptions about protostar-disk coupling, angular momentum loss from magnetized winds that saturates in a mass-dependent fashion at high rotation rates, and core-envelope decoupling for solar analogs. We test this approach by combining established data with the large h Per data set from the MONITOR project and new low-mass Pleiades data. We confirm prior results that young low-mass stars can be used to test star-disk coupling and angular momentum loss independent of the treatment of internal angular momentum transport. For slow rotators, we confirm the need for star-disk interactions to evolve the ONC to older systems, using h Per (age 13 Myr) as our natural post-disk case. There is no evidence for extremely long-lived disks as an alternative to core-envelope decoupling. However, our wind models cannot evolve rapid rotators from h Per to older systems consistently, and we find that this result is robust with respect to the choice of angular momentum loss prescription. We outline two possible solutions: either there is cosmic variance in the distribution of stellar rotation rates in different clusters or there are substantially enhanced torques in low-mass rapid rotators. We favor the former explanation and discuss observational tests that could be used to distinguish them. If the distribution of initial conditions depends on environment, models that test parameters by assuming a universal underlying distribution of initial conditions will need to be re-evaluated.
Clustering method based on data division and partition
Institute of Scientific and Technical Information of China (English)
卢志茂; 刘晨; 张春祥; 王蕾
2014-01-01
Many classical clustering algorithms do a good job within their prerequisites but do not scale well when applied to very large data sets (VLDS). In this work, a novel division and partition clustering method (DP) was proposed to solve the problem. DP cuts the source data set into data blocks and extracts the eigenvector of each data block to form the local feature set. The local feature set is used in a second round of characteristics polymerization for the source data to find the global eigenvector. Ultimately, according to the global eigenvector, the data set is assigned by the minimum-distance criterion. The experimental results show that it is more robust than conventional clustering methods. Its insensitivity to data dimensions, distribution, and the number of natural clusters gives it a wide range of applications in clustering VLDS.
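A toy Python sketch of the divide-and-partition flow: split the data into blocks, summarize each block by local features (plain per-block k-means centroids here, a simplification of the paper's eigenvector extraction), cluster the pooled local features into global centers, and finally assign every point by minimum distance.

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Small Lloyd's k-means helper; returns (labels, centers)."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)].copy()
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(0)
    return labels, centers

def dp_cluster(X, n_blocks, k_local, k_global, seed=0):
    """Divide X into blocks, summarize each block by k_local centroids,
    cluster the pooled centroids globally, then assign all points by
    minimum distance to the global centers."""
    rng = np.random.default_rng(seed)
    blocks = np.array_split(X[rng.permutation(len(X))], n_blocks)
    local = np.vstack([kmeans(b, k_local, seed=i)[1]
                       for i, b in enumerate(blocks)])
    _, global_centers = kmeans(local, k_global, seed=seed)
    labels = np.argmin(((X[:, None] - global_centers) ** 2).sum(-1), axis=1)
    return labels, global_centers

# demo on two well-separated blobs
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (60, 2)), rng.normal(12, 1, (60, 2))])
labels, centers = dp_cluster(X, n_blocks=4, k_local=2, k_global=2)
```

Only the small pooled feature set is clustered globally, which is the source of the scalability: each block can be processed independently (even on separate machines) before the cheap final pass over the data.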
An Examination of Three Spatial Event Cluster Detection Methods
Directory of Open Access Journals (Sweden)
Hensley H. Mariathas
2015-03-01
Full Text Available In spatial disease surveillance, geographic areas with large numbers of disease cases are to be identified, so that targeted investigations can be pursued. Geographic areas with high disease rates are called disease clusters, and statistical cluster detection tests are used to identify geographic areas with higher disease rates than expected by chance alone. In some situations, disease-related events rather than individuals are of interest for geographical surveillance, and methods to detect clusters of disease-related events are called event cluster detection methods. In this paper, we examine three distributional assumptions for the events in cluster detection: compound Poisson, approximate normal, and multiple hypergeometric (exact). The methods differ in the choice of distributional assumption for the potentially multiple correlated events per individual. The methods are illustrated on emergency department (ED) presentations by children and youth (age < 18 years) because of substance use in the province of Alberta, Canada, from 1 April 2007 to 31 March 2008. Simulation studies are conducted to investigate the Type I error and the power of the clustering methods.
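A much-simplified sketch of the event-cluster idea under a plain Poisson assumption (the paper's compound Poisson and exact multiple hypergeometric formulations handle the multiple-correlated-events complication that this toy version ignores). Region counts, populations, and the significance level are invented.

```python
import math

def poisson_sf(x, mu):
    """P(N >= x) for N ~ Poisson(mu), via the complementary lower tail."""
    return 1.0 - sum(math.exp(-mu) * mu**k / math.factorial(k)
                     for k in range(x))

def detect_clusters(observed, population, alpha=0.01):
    """Flag regions whose event count is improbably high under a Poisson
    model whose expectation is proportional to the region's population share."""
    total = sum(observed)
    pop = sum(population)
    flagged = []
    for i, (obs, p) in enumerate(zip(observed, population)):
        mu = total * p / pop            # expected events under the null
        if poisson_sf(obs, mu) < alpha:
            flagged.append(i)
    return flagged

# demo: four regions with equal populations; region 2 has an excess of events
observed = [5, 4, 40, 6]                 # event counts per region (made up)
population = [1000, 1000, 1000, 1000]    # population at risk per region
flagged = detect_clusters(observed, population, alpha=0.01)
```

Real surveillance systems additionally adjust for multiple testing across regions and for covariates such as age structure, which this sketch omits.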
A Clustering Method Based on the Maximum Entropy Principle
Directory of Open Access Journals (Sweden)
Edwin Aldana-Bobadilla
2015-01-01
Full Text Available Clustering is an unsupervised process to determine which unlabeled objects in a set share interesting properties. The objects are grouped into k subsets (clusters) whose elements optimize a proximity measure. Methods based on information theory have proven to be feasible alternatives. They are based on the assumption that a cluster is one subset with the minimal possible degree of "disorder". They attempt to minimize the entropy of each cluster. We propose a clustering method based on the maximum entropy principle. Such a method explores the space of all possible probability distributions of the data to find one that maximizes the entropy subject to extra conditions based on prior information about the clusters. The prior information is based on the assumption that the elements of a cluster are "similar" to each other in accordance with some statistical measure. As a consequence of such a principle, those distributions of high entropy that satisfy the conditions are favored over others. Searching the space to find the optimal distribution of objects in the clusters represents a hard combinatorial problem, which disallows the use of traditional optimization techniques. Genetic algorithms are a good alternative to solve this problem. We benchmark our method relative to the best theoretical performance, which is given by the Bayes classifier when data are normally distributed, and a multilayer perceptron network, which offers the best practical performance when data are not normal. In general, a supervised classification method will outperform a non-supervised one, since, in the first case, the elements of the classes are known a priori. In what follows, we show that our method's effectiveness is comparable to a supervised one. This clearly exhibits the superiority of our method.
Evidence for Cluster to Cluster Variations in Low-Mass Stellar Rotational Evolution
Coker, Carl T; Terndrup, Donald M
2016-01-01
A concordance model for angular momentum evolution has been developed by multiple investigators. This approach postulates that star-forming regions and clusters are an evolutionary sequence that can be modeled with assumptions about the coupling between protostars and accretion disks, angular momentum loss from magnetized winds that saturates in a mass-dependent fashion at high rotation rates, and core-envelope decoupling for solar analogs. We test this approach by combining established data with the large h Per dataset from the MONITOR project and new low-mass Pleiades data. We confirm prior results that young low-mass stars can be used to test star-disk coupling and angular momentum loss independent of the treatment of internal angular momentum transport. For slow rotators, we confirm the need for star-disk interactions to evolve the ONC to older systems, using h Per (age 13 Myr) as our natural post-disk case. Further interactions are not required to evolve slow rotators from h Per to older systems, implyi...
A New Method for Medical Image Clustering Using Genetic Algorithm
Directory of Open Access Journals (Sweden)
Akbar Shahrzad Khashandarag
2013-01-01
Full Text Available Segmentation is applied to medical images when the image brightness becomes weaker, making it difficult to recognize tissue borders. Thus, the exact segmentation of medical images is an essential process in recognizing and curing an illness, and the purpose of clustering in medical images is the recognition of damaged areas in tissues. Different techniques have been introduced for clustering in different fields such as engineering, medicine, and data mining. However, there is no standard clustering technique that presents ideal results for all imaging applications. In this paper, a new method combining a genetic algorithm and the k-means algorithm is presented for clustering medical images. In this combined technique, a variable string length genetic algorithm (VGA) is used to determine the optimal cluster centers. The proposed algorithm has been compared with the k-means clustering algorithm. The advantage of the proposed method is its accuracy in selecting the optimal cluster centers compared with the above-mentioned technique.
A new method to prepare colloids of size-controlled clusters from a matrix assembly cluster source
Cai, Rongsheng; Jian, Nan; Murphy, Shane; Bauer, Karl; Palmer, Richard E.
2017-05-01
A new method for the production of colloidal suspensions of physically deposited clusters is demonstrated. A cluster source has been used to deposit size-controlled clusters onto water-soluble polymer films, which are then dissolved to produce colloidal suspensions of clusters encapsulated with polymer molecules. This process has been demonstrated using different cluster materials (Au and Ag) and polymers (polyvinylpyrrolidone, polyvinyl alcohol, and polyethylene glycol). Scanning transmission electron microscopy of the clusters before and after colloidal dispersion confirms that the polymers act as stabilizing agents. We propose that this method is suitable for the production of biocompatible colloids of ultraprecise clusters.
A liquid drop model for embedded atom method cluster energies
Finley, C. W.; Abel, P. B.; Ferrante, J.
1996-01-01
Minimum energy configurations for homonuclear clusters containing from two to twenty-two atoms of six metals (Ag, Au, Cu, Ni, Pd, and Pt) have been calculated using the Embedded Atom Method (EAM). The average energy per atom as a function of cluster size has been fit to a liquid drop model, giving estimates of the surface and curvature energies. The liquid drop model gives a good representation of the relationship between average energy and cluster size. As a test, the resulting surface energies are compared to EAM surface energy calculations for various low-index crystal faces, with reasonable agreement.
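The liquid-drop fit described above is a linear least-squares problem: average energy per atom as a bulk (volume) term plus surface and curvature corrections scaling as n^(-1/3) and n^(-2/3). A numpy sketch with invented coefficients (not the EAM values from the paper):

```python
import numpy as np

def fit_liquid_drop(n, e_avg):
    """Least-squares fit of average cluster energy per atom to
    E(n) = e_bulk + e_surf * n**(-1/3) + e_curv * n**(-2/3)."""
    n = np.asarray(n, float)
    A = np.column_stack([np.ones_like(n), n ** (-1 / 3), n ** (-2 / 3)])
    coef, *_ = np.linalg.lstsq(A, np.asarray(e_avg, float), rcond=None)
    return coef  # (e_bulk, e_surf, e_curv)

# synthetic check: recover assumed coefficients from noiseless data
sizes = np.arange(2, 23)                       # cluster sizes, as in the paper
true = np.array([-3.5, 2.0, -0.4])             # illustrative (e_bulk, e_surf, e_curv)
e_avg = true[0] + true[1] * sizes ** (-1 / 3) + true[2] * sizes ** (-2 / 3)
coef = fit_liquid_drop(sizes, e_avg)
```

Because the model is linear in its coefficients, no nonlinear optimizer is needed; with real EAM energies the residuals of this fit would indicate how well the drop picture captures the size dependence.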
CHANGE DETECTION BY FUSING ADVANTAGES OF THRESHOLD AND CLUSTERING METHODS
Directory of Open Access Journals (Sweden)
M. Tan
2017-09-01
Full Text Available In change detection (CD) of medium-resolution remote sensing images, threshold-based and clustering-based methods are two of the most popular approaches. It is found that the expectation maximum (EM) threshold algorithm usually generates a CD map that detects almost all changes but includes many false alarms, whereas the fuzzy local information c-means algorithm (FLICM) obtains a homogeneous CD map but with some missed detections. Therefore, we design a framework that improves CD results by fusing the advantages of the threshold and clustering methods. Experimental results indicate the effectiveness of the proposed method.
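For the EM-thresholding half of such a fusion, here is a minimal 1-D two-component Gaussian mixture EM sketch, as might be applied to the pixel values of a difference image (changed vs. unchanged). The initialization, all data, and the 900/100 mixture are illustrative; the FLICM half is not reproduced.

```python
import numpy as np

def em_two_gaussians(x, iters=100):
    """EM for a two-component 1-D Gaussian mixture.
    Returns component means, variances, and mixing weights."""
    x = np.asarray(x, float)
    mu = np.percentile(x, [5, 95]).astype(float)   # crude but robust init
    var = np.array([x.var(), x.var()])
    pi = np.array([0.5, 0.5])
    for _ in range(iters):
        # E-step: responsibilities of each component for each pixel
        pdf = np.exp(-(x[:, None] - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)
        r = pi * pdf
        r /= r.sum(1, keepdims=True)
        # M-step: weighted parameter updates
        nk = r.sum(0)
        mu = (r * x[:, None]).sum(0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(0) / nk
        pi = nk / len(x)
    return mu, var, pi

# demo: mostly-unchanged pixels near 0, a changed minority near 6
rng = np.random.default_rng(0)
pixels = np.concatenate([rng.normal(0, 1, 900),    # unchanged pixels
                         rng.normal(6, 1, 100)])   # changed pixels
mu, var, pi = em_two_gaussians(pixels)
```

Classifying each pixel by its larger posterior (or thresholding where the two weighted densities cross) yields the binary CD map that is then fused with the clustering result.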
Structural variation of the ribosomal gene cluster within the class Insecta
Energy Technology Data Exchange (ETDEWEB)
Mukha, D.V.; Sidorenko, A.P.; Lazebnaya, I.V. [Vavilov Institute of General Genetics, Moscow (Russian Federation)] [and others
1995-09-01
A general estimation of ribosomal DNA variation within the class Insecta is presented. It is shown that, using blot hybridization, one can detect differences in the structure of the ribosomal gene cluster not only between genera within an order, but also between species within a genus, including sibling species. The structure of the ribosomal gene cluster of the family Coccinellidae (ladybirds) is analyzed. It is shown that cloned highly conserved regions of ribosomal DNA of Tetrahymena pyriformis can be used as probes for analyzing ribosomal genes in insects. 24 refs., 4 figs.
A short remark on fractional variational iteration method
Energy Technology Data Exchange (ETDEWEB)
He, Ji-Huan, E-mail: hejihuan@suda.edu.cn [National Engineering Laboratory for Modern Silk, College of Textile and Engineering, Soochow University, 199 Ren-ai Road, Suzhou 215123 (China)
2011-09-05
This Letter compares the classical variational iteration method with the fractional variational iteration method. The fractional complex transform is introduced to convert a fractional differential equation to its differential partner, so that its variational iteration algorithm can be simply constructed. -- Highlights: → The variational iteration method and its fractional modification are compared. → The demerits arising are overcome by the fractional complex transform. → The Letter provides a powerful tool for solving fractional differential equations.
Denissenkov, Pavel; Hartwick, David; Herwig, Falk; Weiss, Achim; Paxton, Bill
2014-01-01
Abundances of the proton-capture elements and their isotopes in globular-cluster stars correlate with each other in such a manner as if their variations were produced in high-temperature hydrogen burning at the same time in the past. In addition to these primordial abundance variations, the RGB stars in globular clusters, like their field counterparts, show the evolutionary variations of the C and N abundances and 12C/13C isotopic ratio. The latter are caused by extra mixing operating in the RGB star's radiative zone that separates the H-burning shell from the bottom of its convective envelope. We demonstrate that among the potential sources of the primordial abundance variations in globular-cluster stars proposed so far, such as the hot-bottom burning in massive AGB stars and H burning in the convective cores of supermassive and fast-rotating massive MS stars, only the supermassive MS stars with M > 10,000 Msun can explain all the abundance correlations without any fine-tuning of free parameters. We use our ...
Bhattacharya, Anindya; De, Rajat K
2010-08-01
Distance-based clustering algorithms can group genes that show similar expression values under multiple experimental conditions, but they are unable to identify groups of genes that have a similar pattern of variation in their expression values. Previously we developed an algorithm called the divisive correlation clustering algorithm (DCCA), based on the concept of correlation clustering, to tackle this situation. But this algorithm may also fail in certain cases. In order to overcome these situations, we propose a new clustering algorithm, called the average correlation clustering algorithm (ACCA), which is able to produce better clustering solutions than several existing methods. ACCA is able to find groups of genes having more common transcription factors and similar patterns of variation in their expression values. Moreover, ACCA is more efficient than DCCA with respect to execution time. Like DCCA, ACCA uses the concept of correlation clustering introduced by Bansal et al. ACCA uses the correlation matrix in such a way that all genes in a cluster have the highest average correlation values with the genes in that cluster. We have applied ACCA and some well-known conventional methods, including DCCA, to two artificial and nine gene expression datasets, and compared the performance of the algorithms. The clustering results of ACCA are found to be more significantly relevant to the biological annotations than those of the other methods. Analysis of the results shows the superiority of ACCA over some others in determining a group of genes having more common transcription factors and with similar patterns of variation in their expression profiles. Availability of the software: The software has been developed using C and Visual Basic languages, and can be executed on Microsoft Windows platforms. The software may be downloaded as a zip file from http://www.isical.ac.in/~rajat. Then it needs to be installed. Two word files (included in the zip file) need to
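The core idea, assigning each gene to the cluster with which it has the highest average correlation, can be sketched as follows (an illustrative toy in the spirit of ACCA, not the published algorithm):

```python
import numpy as np

def average_correlation_clustering(expr, k, n_iter=100):
    """Illustrative sketch: iteratively move each gene to the cluster with
    which it has the highest average correlation, computed from the
    gene-by-gene correlation matrix of the expression data."""
    n = expr.shape[0]
    corr = np.corrcoef(expr)
    labels = np.arange(n) % k          # simple deterministic initialization
    for _ in range(n_iter):
        changed = False
        for i in range(n):
            scores = np.full(k, -np.inf)
            for c in range(k):
                members = np.flatnonzero((labels == c) & (np.arange(n) != i))
                if members.size:
                    scores[c] = corr[i, members].mean()
            best = int(np.argmax(scores))
            if best != labels[i]:
                labels[i] = best
                changed = True
        if not changed:
            break
    return labels
```

On data containing two groups of genes with distinct variation patterns, the assignment converges to the two pattern groups regardless of their absolute expression levels.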
Performance Analysis of Unsupervised Clustering Methods for Brain Tumor Segmentation
Directory of Open Access Journals (Sweden)
Tushar H Jaware
2013-10-01
Full Text Available Medical image processing is a challenging and emerging field of neuroscience. The ultimate goal of medical image analysis in brain MRI is to extract important clinical features that would improve methods of diagnosis and treatment of disease. This paper focuses on methods to detect and extract brain tumors from brain MR images. MATLAB is used to design a software tool for locating brain tumors, based on unsupervised clustering methods. The K-means clustering algorithm is implemented and tested on a database of 30 images. A performance evaluation of the unsupervised clustering methods is presented.
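Intensity-based K-means segmentation of this kind can be sketched in a few lines (a minimal illustration, not the paper's MATLAB tool; initialization and iteration count are our choices):

```python
import numpy as np

def kmeans_segment(image, k=3, n_iter=20):
    """Minimal sketch of intensity-based K-means segmentation of an image.
    Centers are initialized evenly over the intensity range, then the usual
    assign/update iteration is run on the flattened pixel intensities."""
    pixels = image.reshape(-1, 1).astype(float)
    centers = np.linspace(pixels.min(), pixels.max(), k).reshape(-1, 1)
    for _ in range(n_iter):
        labels = np.abs(pixels - centers.T).argmin(axis=1)   # assign pixels
        for c in range(k):                                   # update centers
            if np.any(labels == c):
                centers[c] = pixels[labels == c].mean()
    return labels.reshape(image.shape)
```

On a brain MR slice, the k segments typically separate background, normal tissue, and high-intensity regions such as tumor candidates.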
Visualization methods for statistical analysis of microarray clusters
Directory of Open Access Journals (Sweden)
Li Kai
2005-05-01
Full Text Available Abstract Background The most common method of identifying groups of functionally related genes in microarray data is to apply a clustering algorithm. However, it is impossible to determine which clustering algorithm is most appropriate to apply, and it is difficult to verify the results of any algorithm due to the lack of a gold-standard. Appropriate data visualization tools can aid this analysis process, but existing visualization methods do not specifically address this issue. Results We present several visualization techniques that incorporate meaningful statistics that are noise-robust for the purpose of analyzing the results of clustering algorithms on microarray data. This includes a rank-based visualization method that is more robust to noise, a difference display method to aid assessments of cluster quality and detection of outliers, and a projection of high-dimensional data into a three-dimensional space in order to examine relationships between clusters. Our methods are interactive and are dynamically linked together for comprehensive analysis. Further, our approach applies to both protein and gene expression microarrays, and our architecture is scalable for use on both desktop/laptop screens and large-scale display devices. This methodology is implemented in GeneVAnD (Genomic Visual ANalysis of Datasets) and is available at http://function.princeton.edu/GeneVAnD. Conclusion Incorporating relevant statistical information into data visualizations is key for analysis of large biological datasets, particularly because of high levels of noise and the lack of a gold-standard for comparisons. We developed several new visualization techniques and demonstrated their effectiveness for evaluating cluster quality and relationships between clusters.
Directory of Open Access Journals (Sweden)
Guo Junqiao
2008-09-01
Full Text Available Abstract Background The effects of climate variations on bacillary dysentery incidence have attracted increasing concern. However, multi-collinearity among meteorological factors affects the accuracy of correlation with bacillary dysentery incidence. Methods As a remedy, a modified method combining ridge regression and hierarchical cluster analysis was proposed for investigating the effects of climate variations on bacillary dysentery incidence in northeast China. Results All weather indicators (temperatures, precipitation, evaporation, and relative humidity) showed a positive correlation with the monthly incidence of bacillary dysentery, while air pressure had a negative correlation with the incidence. Ridge regression and hierarchical cluster analysis showed that during 1987–1996, relative humidity, temperatures and air pressure affected the transmission of bacillary dysentery. During this period, all meteorological factors were divided into three categories: relative humidity and precipitation belonged to one class, temperature indexes and evaporation to another, and air pressure formed the third. Conclusion Meteorological factors have affected the transmission of bacillary dysentery in northeast China. Bacillary dysentery prevention and control would benefit from giving more consideration to local climate variations.
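The ridge remedy for multi-collinearity has a simple closed form; a minimal sketch (the predictor matrix would hold the monthly meteorological series, but the data and penalty value here are illustrative):

```python
import numpy as np

def ridge_fit(X, y, alpha=1.0):
    """Closed-form ridge regression: beta = (X'X + alpha*I)^(-1) X'y.
    The penalty alpha keeps the solution stable when predictors (e.g.
    temperature, humidity, and pressure series) are highly collinear."""
    X = np.asarray(X, dtype=float)
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(p),
                           X.T @ np.asarray(y, dtype=float))
```

With alpha near zero this reproduces ordinary least squares on well-conditioned data, while for collinear columns it still returns finite, shrunken coefficients where plain inversion would be unstable.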
A novel clustering and supervising users' profiles method
Institute of Scientific and Technical Information of China (English)
Zhu Mingfu; Zhang Hongbin; Song Fangyun
2005-01-01
To better understand different users' accessing intentions, a novel clustering and supervising method based on accessing paths is presented. This method partitions the users' interest space to express the distribution of users' interests, and directly guides the construction of web page indexes for improved performance.
Methods for analyzing cost effectiveness data from cluster randomized trials
Directory of Open Access Journals (Sweden)
Clark Allan
2007-09-01
Full Text Available Abstract Background Measurement of individuals' costs and outcomes in randomized trials allows uncertainty about cost effectiveness to be quantified. Uncertainty is expressed as probabilities that an intervention is cost effective, and confidence intervals of incremental cost effectiveness ratios. Randomizing clusters instead of individuals tends to increase uncertainty but such data are often analysed incorrectly in published studies. Methods We used data from a cluster randomized trial to demonstrate five appropriate analytic methods: (1) joint modeling of costs and effects with two-stage non-parametric bootstrap sampling of clusters then individuals, (2) joint modeling of costs and effects with Bayesian hierarchical models, and (3) linear regression of net benefits at different willingness-to-pay levels using (a) least squares regression with Huber-White robust adjustment of errors, (b) a least squares hierarchical model, and (c) a Bayesian hierarchical model. Results All five methods produced similar results, with greater uncertainty than if cluster randomization was not accounted for. Conclusion Cost effectiveness analyses alongside cluster randomized trials need to account for study design. Several theoretically coherent methods can be implemented with common statistical software.
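The two-stage non-parametric bootstrap (clusters first, then individuals within each sampled cluster) can be sketched as follows; the statistic bootstrapped here is a simple mean of individual values, standing in for net benefits:

```python
import numpy as np

def two_stage_bootstrap_means(clusters, n_rep=500, seed=0):
    """Sketch of the two-stage cluster bootstrap: resample clusters with
    replacement, then individuals within each sampled cluster, recording
    the replicate mean. `clusters` is a list of 1-D arrays, one per
    randomized cluster (e.g. individual net benefits)."""
    rng = np.random.default_rng(seed)
    reps = np.empty(n_rep)
    for r in range(n_rep):
        picked = rng.choice(len(clusters), size=len(clusters), replace=True)
        pooled = np.concatenate([
            rng.choice(clusters[c], size=len(clusters[c]), replace=True)
            for c in picked
        ])
        reps[r] = pooled.mean()
    return reps
```

Percentiles of the replicate distribution then give cluster-respecting confidence intervals, which are typically wider than those from an individual-level bootstrap.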
New clustering methods for population comparison on paternal lineages.
Juhász, Z; Fehér, T; Bárány, G; Zalán, A; Németh, E; Pádár, Z; Pamjav, H
2015-04-01
The goal of this study is to show two new clustering and visualising techniques developed to find the most typical clusters of 18-dimensional Y chromosomal haplogroup frequency distributions of 90 Western Eurasian populations. The first technique called "self-organizing cloud (SOC)" is a vector-based self-learning method derived from the Self Organising Map and non-metric Multidimensional Scaling algorithms. The second technique is a new probabilistic method called the "maximal relation probability" (MRP) algorithm, based on a probability function having its local maximal values just in the condensation centres of the input data. This function is calculated immediately from the distance matrix of the data and can be interpreted as the probability that a given element of the database has a real genetic relation with at least one of the remaining elements. We tested these two new methods by comparing their results to both each other and the k-medoids algorithm. By means of these new algorithms, we determined 10 clusters of populations based on the similarity of haplogroup composition. The results obtained represented a genetically, geographically and historically well-interpretable picture of 10 genetic clusters of populations mirroring the early spread of populations from the Fertile Crescent to the Caucasus, Central Asia, Arabia and Southeast Europe. The results show that a parallel clustering of populations using SOC and MRP methods can be an efficient tool for studying the demographic history of populations sharing common genetic footprints.
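The "relation to at least one other element" idea behind MRP can be illustrated with a toy calculation from a distance matrix. The kernel and combination rule below are our assumptions for intuition only, not the published MRP formula:

```python
import numpy as np

def relation_probability(dist, sigma=1.0):
    """Hypothetical sketch: treat p_ij = exp(-d_ij^2 / sigma^2) as a pairwise
    relation probability and combine as P(related to at least one other)
    = 1 - prod_j (1 - p_ij), computed directly from the distance matrix."""
    p_pair = np.exp(-(np.asarray(dist, dtype=float) ** 2) / sigma ** 2)
    np.fill_diagonal(p_pair, 0.0)
    return 1.0 - np.prod(1.0 - p_pair, axis=1)
```

Elements lying in a condensation centre of the data receive values near one, while isolated elements receive values near zero, mirroring the local-maximum behaviour described above.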
Directory of Open Access Journals (Sweden)
Alexander K Seewald
Full Text Available Isogenic populations of animals still show a surprisingly large amount of phenotypic variation between individuals. Using a GFP reporter that has been shown to predict longevity and resistance to stress in isogenic populations of the nematode Caenorhabditis elegans, we examined residual variation in expression of this GFP reporter. We found that when we separated the populations into brightest 3% and dimmest 3% we also saw variation in relative expression patterns that distinguished the bright and dim worms. Using a novel image processing method which is capable of directly analyzing worm images, we found that bright worms (after normalization to remove variation between bright and dim worms had expression patterns that correlated with other bright worms but that dim worms fell into two distinct expression patterns. We have analysed a small set of worms with confocal microscopy to validate these findings, and found that the activity loci in these clusters are caused by extremely bright intestine cells. We also found that the vast majority of the fluorescent signal for all worms came from intestinal cells as well, which may indicate that the activity of intestinal cells is responsible for the observed patterns. Phenotypic variation in C. elegans is still not well understood but our proposed novel method to analyze complex expression patterns offers a way to enable a better understanding.
Vinayaka : A Semi-Supervised Projected Clustering Method Using Differential Evolution
Satish Gajawada; Durga Toshniwal
2012-01-01
Differential Evolution (DE) is an algorithm for evolutionary optimization. Clustering problems have been solved by using DE based clustering methods, but these methods may fail to find clusters hidden in subspaces of high dimensional datasets. Subspace and projected clustering methods have been proposed in literature to find subspace clusters that are present in subspaces of a dataset. In this paper we propose VINAYAKA, a semi-supervised projected clustering method based on DE. In this method DE opt...
de Martino, I; Ebeling, H; Kocevski, D
2016-01-01
We propose an improved methodology to constrain spatial variations of the fine structure constant using clusters of galaxies. We use the Planck 2013 data to measure the thermal Sunyaev-Zeldovich effect at the location of 618 X-ray selected clusters. We then use a Monte Carlo Markov Chain algorithm to obtain the temperature of the Cosmic Microwave Background at the location of each galaxy cluster. When fitting three different phenomenological parameterizations allowing for monopole and dipole amplitudes in the value of the fine structure constant, we improve on the results of earlier analyses involving clusters and the CMB power spectrum, and we also find that the best-fit direction of a hypothetical dipole is compatible with the direction of other known anomalies. Although the constraining power of our current datasets does not allow us to test the indications of a dipole obtained through high-resolution optical/UV spectroscopy, our results do highlight that clusters of galaxies will be a very powerful tool to pr...
Report of a Workshop on Parallelization of Coupled Cluster Methods
Energy Technology Data Exchange (ETDEWEB)
Rodney J. Bartlett; Erik Deumens
2008-05-08
Coupled-cluster theory is now recognized as the benchmark ab initio quantum mechanical method for molecular structure and spectra. To benefit from the transition to tera- and petascale computers, coupled-cluster methods must be created to run in a scalable fashion. This Workshop, held as part of the 48th annual Sanibel meeting at St. Simons Island, GA, addressed that issue. Representatives of all the principal scientific groups who are addressing this topic were in attendance, to exchange information about the problem and to identify what needs to be done in the future. This report summarizes the conclusions of the workshop.
Variation and Commonality in Phenomenographic Research Methods
Akerlind, Gerlese S.
2012-01-01
This paper focuses on the data analysis stage of phenomenographic research, elucidating what is involved in terms of both commonality and variation in accepted practice. The analysis stage of phenomenographic research is often not well understood. This paper helps to clarify the process, initially by collecting together in one location the more…
Agent-based method for distributed clustering of textual information
Potok, Thomas E [Oak Ridge, TN; Reed, Joel W [Knoxville, TN; Elmore, Mark T [Oak Ridge, TN; Treadwell, Jim N [Louisville, TN
2010-09-28
A computer method and system for storing, retrieving and displaying information has a multiplexing agent (20) that calculates a new document vector (25) for a new document (21) to be added to the system and transmits the new document vector (25) to master cluster agents (22) and cluster agents (23) for evaluation. These agents (22, 23) perform the evaluation and return values upstream to the multiplexing agent (20) based on the similarity of the document to documents stored under their control. The multiplexing agent (20) then sends the document (21) and the document vector (25) to the master cluster agent (22), which then forwards it to a cluster agent (23) or creates a new cluster agent (23) to manage the document (21). The system also searches for stored documents according to a search query having at least one term and identifying the documents found in the search, and displays the documents in a clustering display (80) of similarity so as to indicate similarity of the documents to each other.
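The evaluation and routing step can be illustrated with a short sketch. The cosine-similarity scoring, centroid representation, and threshold below are our assumptions for illustration, not details taken from the patented system:

```python
import numpy as np

def route_document(doc_vec, centroids, threshold=0.5):
    """Sketch of document routing: score the new document vector against each
    cluster agent's centroid by cosine similarity and return the index of the
    best cluster, or None to signal that a new cluster agent should be
    created for the document."""
    doc_vec = np.asarray(doc_vec, dtype=float)
    sims = [float(np.dot(doc_vec, c) /
                  (np.linalg.norm(doc_vec) * np.linalg.norm(c)))
            for c in centroids]
    best = int(np.argmax(sims))
    return best if sims[best] >= threshold else None
```

A document similar to an existing cluster is forwarded to that cluster's agent; a sufficiently dissimilar one triggers creation of a new cluster, mirroring the agent behaviour described above.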
Cluster-in-molecule local correlation method for large systems
Institute of Scientific and Technical Information of China (English)
LI Wei; LI ShuHua
2014-01-01
A linear scaling local correlation method, the cluster-in-molecule (CIM) method, was developed in the last decade for large systems. The basic idea of the CIM method is that the electron correlation energy of a large system, within Møller-Plesset perturbation theory (MP) or coupled cluster (CC) theory, can be approximately obtained from solving the corresponding MP or CC equations of various clusters. Each of such clusters consists of a subset of localized molecular orbitals (LMOs) of the target system, and can be treated independently at various theory levels. In the present article, the main idea of the CIM method is reviewed, followed by brief descriptions of some recent developments, including its multilevel extension and different ways of constructing clusters. Then, some applications for large systems are illustrated. The CIM method is shown to be an efficient and reliable method for electron correlation calculations of large systems, including biomolecules and supramolecular complexes.
Zhang, Shu-Hua; Zhao, Ru-Xia; Li, He-Ping; Ge, Cheng-Min; Li, Gui; Huang, Qiu-Ping; Zou, Hua-Hong
2014-08-01
Using the solvothermal method, we present the comparative preparation of {[Co3Na(dmaep)3(ehbd)(N3)3]·DMF}n (1) and [Co2Na2(hmbd)4(N3)2(DMF)2] (2), where Hehbd is 3-ethoxy-2-hydroxy-benzaldehyde, Hhmbd is 3-methoxy-2-hydroxy-benzaldehyde, and Hdmaep is 2-dimethylaminomethyl-6-ethoxy-phenol, which was synthesized by an in-situ reaction. Complexes 1 and 2 were characterized by elemental analysis, IR spectroscopy, and X-ray single-crystal diffraction. Complex 1 is a novel heterometallic cluster-based 1-D chain and 2 is a heterometallic tetranuclear cluster. The {Co3IINa} and {Co2IINa2} cores display dominant ferromagnetic interaction from the nature of the binding modes through μ1,1,1-N3- (end-on, EO).
Energy Technology Data Exchange (ETDEWEB)
Mészáros, Szabolcs [ELTE Gothard Astrophysical Observatory, H-9704 Szombathely, Szent Imre Herceg st. 112 (Hungary); Martell, Sarah L. [Department of Astrophysics, School of Physics, University of New South Wales, Sydney, NSW 2052 (Australia); Shetrone, Matthew [University of Texas at Austin, McDonald Observatory, Fort Davis, TX 79734 (United States); Lucatello, Sara [INAF-Osservatorio Astronomico di Padova, vicolo dell Osservatorio 5, I-35122 Padova (Italy); Troup, Nicholas W.; Pérez, Ana E. García; Majewski, Steven R. [Department of Astronomy, University of Virginia, Charlottesville, VA 22904-4325 (United States); Bovy, Jo [Institute for Advanced Study, Einstein Drive, Princeton, NJ 08540 (United States); Cunha, Katia [University of Arizona, Tucson, AZ 85719 (United States); García-Hernández, Domingo A.; Prieto, Carlos Allende [Instituto de Astrofísica de Canarias (IAC), E-38200 La Laguna, Tenerife (Spain); Overbeek, Jamie C. [Department of Astronomy, Indiana University, Bloomington, IN 47405 (United States); Beers, Timothy C. [Department of Physics and JINA Center for the Evolution of the Elements, University of Notre Dame, Notre Dame, IN 46556 (United States); Frinchaboy, Peter M. [Texas Christian University, Fort Worth, TX 76129 (United States); Hearty, Fred R.; Schneider, Donald P. [Department of Astronomy and Astrophysics, The Pennsylvania State University, University Park, PA 16802 (United States); Holtzman, Jon [New Mexico State University, Las Cruces, NM 88003 (United States); Nidever, David L. [Department of Astronomy, University of Michigan, Ann Arbor, MI 48109 (United States); Schiavon, Ricardo P. [Astrophysics Research Institute, IC2, Liverpool Science Park, Liverpool John Moores University, 146 Brownlow Hill, Liverpool, L3 5RF (United Kingdom); and others
2015-05-15
We investigate the light-element behavior of red giant stars in northern globular clusters (GCs) observed by the SDSS-III Apache Point Observatory Galactic Evolution Experiment. We derive abundances of 9 elements (Fe, C, N, O, Mg, Al, Si, Ca, and Ti) for 428 red giant stars in 10 GCs. The intrinsic abundance range relative to measurement errors is examined, and the well-known C–N and Mg–Al anticorrelations are explored using an extreme-deconvolution code for the first time in a consistent way. We find that Mg and Al drive the population membership in most clusters, except in M107 and M71, the two most metal-rich clusters in our study, where the grouping is most sensitive to N. We also find a diversity in the abundance distributions, with some clusters exhibiting clear abundance bimodalities (for example M3 and M53) while others show extended distributions. The spread of Al abundances increases significantly as cluster average metallicity decreases as previously found by other works, which we take as evidence that low metallicity, intermediate mass AGB polluters were more common in the more metal-poor clusters. The statistically significant correlation of [Al/Fe] with [Si/Fe] in M15 suggests that 28Si leakage has occurred in this cluster. We also present C, N, and O abundances for stars cooler than 4500 K and examine the behavior of A(C+N+O) in each cluster as a function of temperature and [Al/Fe]. The scatter of A(C+N+O) is close to its estimated uncertainty in all clusters and independent of stellar temperature. A(C+N+O) exhibits small correlations and anticorrelations with [Al/Fe] in M3 and M13, but we cannot be certain about these relations given the size of our abundance uncertainties. Star-to-star variations of α-element (Si, Ca, Ti) abundances are comparable to our estimated errors in all clusters.
Non-hierarchical clustering methods on factorial subspaces
Tortora, Cristina
2011-01-01
Cluster analysis (CA) aims at finding homogeneous groups of individuals, where homogeneous refers to individuals that present similar characteristics. Many CA techniques already exist; among the non-hierarchical ones the best known, thanks to its simplicity and computational properties, is the k-means method. However, the method is unstable when the number of variables is large and when variables are correlated. This problem has led to the development of two-step methods, which perform a linear tra...
A multigrid method for variational inequalities
Energy Technology Data Exchange (ETDEWEB)
Oliveira, S.; Stewart, D.E.; Wu, W.
1996-12-31
Multigrid methods have been used with great success for solving elliptic partial differential equations. Penalty methods have been successful in solving finite-dimensional quadratic programs. In this paper these two techniques are combined to give a fast method for solving obstacle problems. A nonlinear penalized problem is solved using Newton's method for large values of a penalty parameter. Multigrid methods are used to solve the linear systems in Newton's method. The overall numerical method developed is based on an exterior penalty function, and numerical results showing the performance of the method have been obtained.
Select and Cluster: A Method for Finding Functional Networks of Clustered Voxels in fMRI
DonGiovanni, Danilo
2016-01-01
Extracting functional connectivity patterns among cortical regions in fMRI datasets is a challenge stimulating the development of effective data-driven or model based techniques. Here, we present a novel data-driven method for the extraction of significantly connected functional ROIs directly from the preprocessed fMRI data without relying on a priori knowledge of the expected activations. This method finds spatially compact groups of voxels which show a homogeneous pattern of significant connectivity with other regions in the brain. The method, called Select and Cluster (S&C), consists of two steps: first, a dimensionality reduction step based on a blind multiresolution pairwise correlation by which the subset of all cortical voxels with significant mutual correlation is selected and the second step in which the selected voxels are grouped into spatially compact and functionally homogeneous ROIs by means of a Support Vector Clustering (SVC) algorithm. The S&C method is described in detail. Its performance assessed on simulated and experimental fMRI data is compared to other methods commonly used in functional connectivity analyses, such as Independent Component Analysis (ICA) or clustering. S&C method simplifies the extraction of functional networks in fMRI by identifying automatically spatially compact groups of voxels (ROIs) involved in whole brain scale activation networks. PMID:27656202
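The first stage, selecting voxels with significant mutual correlation, can be illustrated with a small sketch. The plain correlation threshold below stands in for the paper's blind multiresolution pairwise-correlation criterion and is our simplification:

```python
import numpy as np

def select_connected_voxels(ts, r_thresh=0.5):
    """Sketch of a selection step in the spirit of S&C's first stage: keep
    voxels whose time series correlates (in absolute value) above r_thresh
    with at least one other voxel. `ts` is (n_voxels, n_timepoints)."""
    corr = np.corrcoef(ts)
    np.fill_diagonal(corr, 0.0)
    return np.flatnonzero(np.abs(corr).max(axis=1) >= r_thresh)
```

The surviving voxels would then be passed to the second stage, where a clustering algorithm groups them into spatially compact, functionally homogeneous ROIs.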
Henry, David; Dymnicki, Allison B; Mohatt, Nathaniel; Allen, James; Kelly, James G
2015-10-01
Qualitative methods potentially add depth to prevention research but can produce large amounts of complex data even with small samples. Studies conducted with culturally distinct samples often produce voluminous qualitative data but may lack sufficient sample sizes for sophisticated quantitative analysis. Currently lacking in mixed-methods research are methods allowing for more fully integrating qualitative and quantitative analysis techniques. Cluster analysis can be applied to coded qualitative data to clarify the findings of prevention studies by aiding efforts to reveal such things as the motives of participants for their actions and the reasons behind counterintuitive findings. By clustering groups of participants with similar profiles of codes in a quantitative analysis, cluster analysis can serve as a key component in mixed-methods research. This article reports two studies. In the first study, we conduct simulations to test the accuracy of cluster assignment using three different clustering methods with binary data as produced when coding qualitative interviews. Results indicated that hierarchical clustering, K-means clustering, and latent class analysis produced similar levels of accuracy with binary data and that the accuracy of these methods did not decrease with samples as small as 50. Whereas the first study explores the feasibility of using common clustering methods with binary data, the second study provides a "real-world" example using data from a qualitative study of community leadership connected with a drug abuse prevention project. We discuss the implications of this approach for conducting prevention research, especially with small samples and culturally distinct communities.
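Hierarchical clustering of binary code profiles, as in the simulations above, can be sketched with standard tools. The Jaccard distance with average linkage used below is one reasonable pairing for presence/absence codes; the specific choice is ours, not the study's:

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import pdist

def cluster_binary_codes(codes, k):
    """Sketch of hierarchical clustering of participants on binary code
    profiles: Jaccard distances between rows, average-linkage tree, then a
    cut producing at most k clusters."""
    d = pdist(np.asarray(codes, dtype=bool), metric='jaccard')
    tree = linkage(d, method='average')
    return fcluster(tree, t=k, criterion='maxclust')
```

Each row is one participant's coded interview (1 = code present); the returned labels group participants with similar code profiles, even with samples as small as those discussed above.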
Henry, David; Dymnicki, Allison B.; Mohatt, Nathaniel; Allen, James; Kelly, James G.
2016-01-01
Qualitative methods potentially add depth to prevention research, but can produce large amounts of complex data even with small samples. Studies conducted with culturally distinct samples often produce voluminous qualitative data, but may lack sufficient sample sizes for sophisticated quantitative analysis. Currently lacking in mixed methods research are methods allowing for more fully integrating qualitative and quantitative analysis techniques. Cluster analysis can be applied to coded qualitative data to clarify the findings of prevention studies by aiding efforts to reveal such things as the motives of participants for their actions and the reasons behind counterintuitive findings. By clustering groups of participants with similar profiles of codes in a quantitative analysis, cluster analysis can serve as a key component in mixed methods research. This article reports two studies. In the first study, we conduct simulations to test the accuracy of cluster assignment using three different clustering methods with binary data as produced when coding qualitative interviews. Results indicated that hierarchical clustering, K-Means clustering, and latent class analysis produced similar levels of accuracy with binary data, and that the accuracy of these methods did not decrease with samples as small as 50. Whereas the first study explores the feasibility of using common clustering methods with binary data, the second study provides a “real-world” example using data from a qualitative study of community leadership connected with a drug abuse prevention project. We discuss the implications of this approach for conducting prevention research, especially with small samples and culturally distinct communities. PMID:25946969
Analysis of protein profiles using fuzzy clustering methods
DEFF Research Database (Denmark)
Karemore, Gopal Raghunath; Ukendt, Sujatha; Rai, Lavanya
clustering methods for their classification followed by various validation measures. The clustering algorithms used for the study were K-means, K-medoid, Fuzzy C-means, Gustafson-Kessel, and Gath-Geva. The results presented in this study conclude that the protein profiles of tissue...... samples recorded by using the HPLC-LIF system and the data analyzed by clustering algorithms quite successfully classify them as belonging to normal and malignant conditions....
A PROBABILISTIC EMBEDDING CLUSTERING METHOD FOR URBAN STRUCTURE DETECTION
Directory of Open Access Journals (Sweden)
X. Lin
2017-09-01
Full Text Available Urban structure detection is a basic task in urban geography. Clustering is a core technology to detect the patterns of urban spatial structure, urban functional regions, and so on. In the big data era, diverse urban sensing datasets recording information such as human behaviour and human social activity suffer from complexity in high dimension and high noise. Unfortunately, the state-of-the-art clustering methods do not handle the high dimension and high noise issues concurrently. In this paper, a probabilistic embedding clustering method is proposed. Firstly, we come up with a Probabilistic Embedding Model (PEM) to find latent features from high dimensional urban sensing data by "learning" via a probabilistic model. With latent features, we can catch essential features hidden in high dimensional data, known as patterns; with the probabilistic model, we can also reduce the uncertainty caused by high noise. Secondly, through tuning the parameters, our model can discover two kinds of urban structure, homophily and structural equivalence, which correspond to communities with intensive interaction or to nodes playing the same roles in the urban structure. We evaluated the performance of our model by conducting experiments on real-world data in Shanghai (China), and the results proved that our method can discover these two kinds of urban structure.
Clustering Methods in Data Mining
Institute of Scientific and Technical Information of China (English)
王实; 高文
2000-01-01
In this paper we introduce clustering methods in Data Mining. Clustering has been studied extensively; in the field of Data Mining, it faces new situations. We summarize the major clustering methods and introduce four kinds of clustering methods that have been used widely in Data Mining. Finally we draw the conclusion that the distance-based partitional clustering method in data mining is a typical two-phase iteration process: 1) assign each point to a cluster; 2) update the cluster centers.
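The two-phase iteration named in the abstract can be sketched directly (a minimal k-means for illustration; initialization from the first k points is our choice):

```python
import numpy as np

def kmeans(X, k, n_iter=100):
    """Minimal k-means showing the two-phase iteration: (1) assign each point
    to the nearest center; (2) update each center as the mean of its cluster.
    Iterates until the centers stop moving."""
    centers = X[:k].copy()
    labels = np.zeros(len(X), dtype=int)
    for _ in range(n_iter):
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)                      # phase 1: assign
        new_centers = np.array([X[labels == c].mean(axis=0)
                                if np.any(labels == c) else centers[c]
                                for c in range(k)])    # phase 2: update
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    return labels, centers
```

On well-separated data the iteration converges in a few rounds to one label per group, with the centers at the group means.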
Energy Technology Data Exchange (ETDEWEB)
Zhang, Shu-Hua, E-mail: zsh720108@163.com [College of Chemistry and Bioengineering, Guilin University of Technology, Guilin 541004 (China); Zhao, Ru-Xia; Li, He-Ping; Ge, Cheng-Min; Li, Gui; Huang, Qiu-Ping [College of Chemistry and Bioengineering, Guilin University of Technology, Guilin 541004 (China); Zou, Hua-Hong, E-mail: zouhuahong@163.com [School of Chemistry and Pharmaceutical Sciences, Guangxi Normal University, Guilin 541004 (China)
2014-08-15
Using the solvothermal method, we present the comparative preparation of ([Co₃Na(dmaep)₃(ehbd)(N₃)₃]·DMF)ₙ (1) and [Co₂Na₂(hmbd)₄(N₃)₂(DMF)₂] (2), where Hehbd is 3-ethoxy-2-hydroxybenzaldehyde, Hhmbd is 3-methoxy-2-hydroxybenzaldehyde, and Hdmaep is 2-dimethylaminomethyl-6-ethoxyphenol, which was synthesized by an in-situ reaction. Complexes 1 and 2 were characterized by elemental analysis, IR spectroscopy, and single-crystal X-ray diffraction. Complex 1 is a novel heterometallic cluster-based 1-D chain and 2 is a heterometallic tetranuclear cluster. The (Co₃ᴵᴵNa) and (Co₂ᴵᴵNa₂) cores display dominant ferromagnetic interactions arising from the binding modes through μ₁,₁,₁-N₃⁻ (end-on, EO). - Graphical abstract: Two novel cobalt complexes have been prepared. Compound 1 consists of tetranuclear (Co₃ᴵᴵNa) units, which further form a 1-D chain. Compound 2 is a heterometallic tetranuclear cluster. Both complexes display dominant ferromagnetic interactions. - Highlights: • Two new heterometallic complexes have been synthesized by the solvothermal method. • The stereospecific blockade of the ligands in the synthesis system seems to be the most important synthetic parameter. • Magnetism studies show that 1 and 2 exhibit ferromagnetic interactions. • Complex 1 shows a slowing down of the magnetization but not a blocking of the magnetization.
Collection of problems proposed at International Conference on Variational Methods
Institute of Scientific and Technical Information of China (English)
2008-01-01
This collection of problems is based on the Problem Section held on May 24, 2007 during the International Conference on Variational Methods. These problems reflect various aspects of variational methods and are due to Professors Victor Bangert, Alain Chenciner, Ivar Ekeland, Nassif Ghoussoub, Zhaoli Liu, Paul Rabinowitz and Hans-Bert Rademacher.
On Self-Adaptive Method for General Mixed Variational Inequalities
Directory of Open Access Journals (Sweden)
Abdellah Bnouhachem
2008-01-01
Full Text Available We suggest and analyze a new self-adaptive method for solving general mixed variational inequalities, which can be viewed as an improvement of the method of Noor (2003). Global convergence of the new method is proved under the same assumptions as Noor's method. Some preliminary computational results are given to illustrate the efficiency of the proposed method. Since the general mixed variational inequalities include general variational inequalities, quasivariational inequalities, and nonlinear (implicit) complementarity problems as special cases, the results proved in this paper continue to hold for these problems.
Distinguishing Functional DNA Words; A Method for Measuring Clustering Levels
Moghaddasi, Hanieh; Khalifeh, Khosrow; Darooneh, Amir Hossein
2017-01-01
Functional DNA sub-sequences and genome elements are spatially clustered through the genome, just as keywords are in literary texts. Therefore, some of the methods for ranking words in texts can also be used to compare different DNA sub-sequences. In analogy with literary texts, here we claim that the distribution of distances between successive sub-sequences (words) is q-exponential, which is the distribution function in non-extensive statistical mechanics. Thus the q-parameter can be used as a measure of word clustering levels. Here, we analyzed the distribution of distances between consecutive occurrences of the 16 possible dinucleotides in human chromosomes to obtain their corresponding q-parameters. We found that CG, a biologically important two-letter word on account of its methylation, has the highest clustering level. This finding shows the predictive ability of the method in biology. We also propose that chromosome 18, with the largest value of the q-parameter for promoters of genes, is more sensitive to dietary and lifestyle factors. We extended our study to compare the genomes of selected organisms and concluded that the clustering level of CGs increases in higher evolutionary organisms compared to lower ones. PMID:28128320
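The gap statistic underlying the q-exponential fit can be computed directly: collect the distances between successive occurrences of a word such as CG, and then fit their distribution. A minimal sketch (generic, not the authors' code):

```python
def word_distances(sequence, word):
    """Distances between consecutive occurrences of `word`
    (e.g. the dinucleotide 'CG') in a DNA sequence."""
    positions = [i for i in range(len(sequence) - len(word) + 1)
                 if sequence[i:i + len(word)] == word]
    # Gaps between successive occurrence positions.
    return [b - a for a, b in zip(positions, positions[1:])]
```

The resulting list of gaps is what one would histogram and fit with a q-exponential to estimate the clustering level.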
An improved unsupervised clustering-based intrusion detection method
Hai, Yong J.; Wu, Yu; Wang, Guo Y.
2005-03-01
Practical Intrusion Detection Systems (IDSs) based on data mining face two key problems: discovering intrusion knowledge from real-time network data, and automatically updating it when new intrusions appear. Most data mining algorithms work on labeled data. In order to set up a basic data set for mining, huge volumes of network data need to be collected and labeled manually. In fact, it is rather difficult and impractical to label intrusions, which has been a major restriction for current IDSs and has limited their ability to identify all kinds of intrusion types. An improved unsupervised clustering-based intrusion model working on unlabeled training data is introduced. In this model, the center of a cluster is defined and used as a substitute for the cluster, and all cluster centers are then used to detect intrusions. In tests on the KDDCUP'99 data sets, experimental results demonstrate that our method achieves a good detection rate. Furthermore, an incremental-learning method is adopted to detect unknown-type intrusions, and it decreases the false positive rate.
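The use of cluster centers as substitutes for clusters can be illustrated as follows: a record is flagged as an intrusion when it lies far from every center learned on unlabeled traffic. This is a simplified sketch of the general idea, with an assumed Euclidean distance and a hypothetical threshold, not the paper's actual model:

```python
def detect(point, centers, threshold):
    """Flag a connection record as an intrusion if it lies farther
    than `threshold` from every cluster center learned on
    unlabeled training traffic."""
    d = min(sum((a - b) ** 2 for a, b in zip(point, c)) ** 0.5
            for c in centers)
    return d > threshold
```

Records near a learned center are treated as normal traffic; distant outliers are reported.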
Quark-gluon plasma phase transition using cluster expansion method
Syam Kumar, A. M.; Prasanth, J. P.; Bannur, Vishnu M.
2015-08-01
This study investigates phase transitions in QCD using Mayer's cluster expansion method. The interquark potential is a modified Cornell potential. The equation of state (EoS) is evaluated for a homogeneous system. The behaviour is studied by varying the temperature as well as the number of charm quarks. The results clearly show signs of a phase transition from hadrons to quark-gluon plasma (QGP).
Translationally-invariant coupled-cluster method for finite systems
Guardiola, R; Navarro, J; Portesi, M
1998-01-01
The translationally invariant formulation of the coupled-cluster method is presented here at the complete SUB(2) level for a system of nucleons treated as bosons. The correlation amplitudes are solutions of a non-linear coupled system of equations. These equations have been solved for light and medium systems, considering the central but still semi-realistic nucleon-nucleon S3 interaction.
A variational method for spectral functions
Harris, Tim; Robaina, Daniel
2016-01-01
The Generalized Eigenvalue Problem (GEVP) has been used extensively in the past in order to reliably extract energy levels from time-dependent Euclidean correlators calculated in Lattice QCD. We propose a formulation of the GEVP in frequency space. Our approach consists of applying the model-independent Backus-Gilbert method to a set of Euclidean two-point functions with common quantum numbers. A GEVP analysis in frequency space is then applied to a matrix of estimators that allows us, among other things, to obtain particular linear combinations of the initial set of operators that optimally overlap to different local regions in frequency. We apply this method to lattice data from NRQCD. This approach can be interesting both for vacuum physics as well as for finite-temperature problems.
Modelling asteroid brightness variations. I - Numerical methods
Karttunen, H.
1989-01-01
A method for generating lightcurves of asteroid models is presented. The effects of the shape of the asteroid and the scattering law of a surface element are distinctly separable, being described by chosen functions that can easily be changed. The shape is specified by means of two functions that yield the length of the radius vector and the normal vector of the surface at a given point. The general shape must be convex, but spherical concavities producing macroscopic shadowing can also be modeled.
Hybrid Steepest-Descent Methods for Triple Hierarchical Variational Inequalities
Directory of Open Access Journals (Sweden)
L. C. Ceng
2015-01-01
Full Text Available We introduce and analyze a relaxed iterative algorithm by combining Korpelevich’s extragradient method, hybrid steepest-descent method, and Mann’s iteration method. We prove that, under appropriate assumptions, the proposed algorithm converges strongly to a common element of the fixed point set of infinitely many nonexpansive mappings, the solution set of finitely many generalized mixed equilibrium problems (GMEPs, the solution set of finitely many variational inclusions, and the solution set of general system of variational inequalities (GSVI, which is just a unique solution of a triple hierarchical variational inequality (THVI in a real Hilbert space. In addition, we also consider the application of the proposed algorithm for solving a hierarchical variational inequality problem with constraints of finitely many GMEPs, finitely many variational inclusions, and the GSVI. The results obtained in this paper improve and extend the corresponding results announced by many others.
Regularization and Iterative Methods for Monotone Variational Inequalities
Directory of Open Access Journals (Sweden)
Xiubin Xu
2010-01-01
Full Text Available We provide a general regularization method for monotone variational inequalities, where the regularizer is a Lipschitz continuous and strongly monotone operator. We also introduce an iterative method as discretization of the regularization method. We prove that both regularization and iterative methods converge in norm.
Adapted G-mode Clustering Method applied to Asteroid Taxonomy
Hasselmann, Pedro H.; Carvano, Jorge M.; Lazzaro, D.
2013-11-01
The original G-mode was a clustering method developed by A. I. Gavrishin in the late 60's for geochemical classification of rocks, but it was also applied to asteroid photometry, cosmic rays, lunar samples and planetary science spectroscopy data. In this work, we used an adapted version to classify the asteroid photometry from the SDSS Moving Objects Catalog. The method works by identifying normal distributions in a multidimensional space of variables. The identification starts by locating a set of points with the smallest mutual distance in the sample, which is a problem when the data are not planar. Here we present a modified version of the G-mode algorithm, previously written in FORTRAN 77, rewritten in Python 2.7 using the NumPy, SciPy and Matplotlib packages. NumPy was used for array and matrix manipulation and Matplotlib for plot control. SciPy played an important role in speeding up G-mode: scipy.spatial.distance.mahalanobis was chosen as the distance estimator, and numpy.histogramdd was applied to find the initial seeds from which the clusters evolve. SciPy was also used to quickly produce dendrograms showing the distances among clusters. Finally, results for asteroid taxonomy and tests for different sample sizes and implementations are presented.
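The distance estimator named above is standard; for readers without SciPy at hand, the Mahalanobis distance can be hand-coded for the 2-D case (the reimplementation itself calls scipy.spatial.distance.mahalanobis, which instead expects the inverse covariance matrix):

```python
def mahalanobis2(u, v, cov):
    """Mahalanobis distance between 2-D points u and v, given a
    2x2 covariance matrix `cov` (explicit inverse for the 2-D case)."""
    (a, b), (c, d) = cov
    det = a * d - b * c
    inv = ((d / det, -b / det), (-c / det, a / det))
    dx = (u[0] - v[0], u[1] - v[1])
    # d^2 = dx^T * inv(cov) * dx
    y0 = inv[0][0] * dx[0] + inv[0][1] * dx[1]
    y1 = inv[1][0] * dx[0] + inv[1][1] * dx[1]
    return (dx[0] * y0 + dx[1] * y1) ** 0.5
```

With the identity covariance this reduces to the ordinary Euclidean distance.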
Super pixel density based clustering automatic image classification method
Xu, Mingxing; Zhang, Chuan; Zhang, Tianxu
2015-12-01
Image classification is an important means of image segmentation and data mining, and achieving rapid automated image classification has been a focus of research. In this paper, we propose an automatic image classification method based on the super-pixel density of cluster centers, which also identifies outliers. Image pixel coordinates and gray values are used to compute density and distance, enabling automatic classification and outlier extraction. Because a large number of pixels dramatically increases the computational complexity, the image is preprocessed into a small number of super-pixel sub-blocks before the density and distance calculations. We also design a normalized density-distance discrimination rule to select cluster centers automatically, whereby the image is classified and outliers are identified. Extensive experiments show that our method requires no human intervention, runs faster than the density clustering algorithm, and performs automated classification and outlier extraction effectively.
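The density-and-distance computation described above follows the density-peaks idea: each point gets a local density rho and a distance delta to the nearest denser point, and points with large rho and delta are taken as cluster centers. A 1-D sketch under these assumptions (not the paper's super-pixel implementation):

```python
def density_and_delta(points, cutoff):
    """For each point compute (rho, delta): rho = number of neighbours
    within `cutoff`; delta = distance to the nearest point of higher
    density. Points with large rho*delta are cluster-center candidates."""
    n = len(points)
    dist = [[abs(points[i] - points[j]) for j in range(n)] for i in range(n)]
    rho = [sum(1 for j in range(n) if j != i and dist[i][j] < cutoff)
           for i in range(n)]
    delta = []
    for i in range(n):
        higher = [dist[i][j] for j in range(n) if rho[j] > rho[i]]
        # The densest points get the maximum distance by convention.
        delta.append(min(higher) if higher else max(dist[i]))
    return rho, delta
```

An isolated low-density point ends up with small rho but large delta, which is how such a scheme flags outliers.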
Segmentation of MRI Volume Data Based on Clustering Method
Directory of Open Access Journals (Sweden)
Ji Dongsheng
2016-01-01
Full Text Available Here we analyze the difficulties of segmenting left ventricle MR images without tag lines, and propose an algorithm for automatic segmentation of the left ventricle (LV) internal and external profiles. We propose an Incomplete K-means and Category Optimization (IKCO) method. Initially, using the Hough transform to automatically locate the initial contour of the LV, the algorithm uses a simple approach to complete data subsampling and initial center determination. Next, according to the clustering rules, the proposed algorithm finishes the MR image segmentation. Finally, the algorithm uses a category optimization method to improve the segmentation results. Experiments show that the algorithm provides good segmentation results.
Efficient Cluster Head Selection Methods for Wireless Sensor Networks
Directory of Open Access Journals (Sweden)
Jong-Shin Chen
2010-08-01
Full Text Available The past few years have witnessed an increase in the potential use of wireless sensor networks (WSNs) in applications such as disaster management, combat field reconnaissance, border protection and security surveillance. Sensors in these applications are expected to be remotely deployed in large numbers and to operate autonomously in unattended environments. Since a WSN is composed of nodes with non-replenishable energy resources, elongating the network lifetime is the main concern. To support scalability, nodes are often grouped into disjoint clusters. Each cluster has a leader, often referred to as the cluster head (CH). A CH is responsible not only for general requests but also for assisting the general nodes in routing the sensed data to the target nodes. The power consumption of a CH is higher than that of a general (non-CH) node, so the CH selection affects the lifetime of a WSN. However, the application scenario contexts of WSNs determine the definitions of lifetime and thus impact the objective of elongating it. In this study, we classify lifetime into different types and give a corresponding CH selection method for each type to achieve the lifetime-extension objective. Simulation results demonstrate that our study can extend the lifetime for different requests of the sensor networks.
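One common CH selection criterion, choosing the node with the most residual energy in each cluster, can be sketched as follows (a generic illustration; the paper tailors the criterion to each lifetime definition):

```python
def select_cluster_heads(clusters):
    """Within each cluster, pick the node with the most residual
    energy as cluster head. Each node is a dict with hypothetical
    'id' and 'energy' fields."""
    return [max(cluster, key=lambda node: node["energy"])["id"]
            for cluster in clusters]
```

Re-running the selection each round rotates the CH role as node energies drain, spreading the extra CH load across the cluster.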
The Integral- and Intermediate-Screened Coupled-Cluster Method
Sørensen, L K
2016-01-01
We present the formulation and implementation of the integral- and intermediate-screened coupled-cluster method (IISCC). The IISCC method gives a simple and rigorous integral and intermediate screening (IIS) of the coupled-cluster method and significantly reduces the scaling for all orders of the CC hierarchy, exactly as seen for the integral-screened configuration-interaction method (ISCI). The rigorous IIS in the IISCC gives a robust and adjustable error control which should allow for the possibility of converging the energy without any loss of accuracy while retaining low or linear scaling at the same time. The derivation of the IISCC is performed in a similar fashion to that of the ISCI, where we show that the tensor contractions for the nested commutators are separable up to an overall sign and that this separability can lead to a rigorous IIS. In the nested commutators, the integrals are screened in the first tensor contraction and the intermediates are screened in all successive tensor contractions. The...
Optimal sensor placement using FRFs-based clustering method
Li, Shiqi; Zhang, Heng; Liu, Shiping; Zhang, Zhe
2016-12-01
The purpose of this work is to develop an optimal sensor placement method by selecting the most relevant degrees of freedom as actual measurement positions. Based on the observation matrix of a structure's frequency response, two optimality criteria are used to avoid information redundancy among the candidate degrees of freedom. Using principal component analysis, the frequency response matrix is decomposed into principal directions and their corresponding singular values; a relatively small number of principal directions maintains a system's dominant response information. According to the dynamic similarity of each degree of freedom, the k-means clustering algorithm is designed to classify the degrees of freedom, and the effective independence method deletes the sensors that are redundant within each cluster. Finally, two numerical examples and a modal test demonstrate the efficiency of the derived method. It is shown that the proposed method provides a way to extract sub-optimal sets and that the selected sensors are well distributed over the whole structure.
Comparing Methods for segmentation of Microcalcification Clusters in Digitized Mammograms
Moradmand, Hajar; Targhi, Hossein Khazaei
2012-01-01
The appearance of microcalcifications in mammograms is one of the early signs of breast cancer, so early detection of microcalcification clusters (MCCs) in mammograms can be helpful for cancer diagnosis and better treatment of breast cancer. In this paper a computer method is proposed to support radiologists in detecting MCCs in digital mammography. First, in order to facilitate and improve the detection step, mammogram images are enhanced with wavelet transformation and morphological operations. Then, for segmentation of suspicious MCCs, two methods are investigated: adaptive thresholding and watershed segmentation. Finally, the MCC areas detected by the different algorithms are compared to find out which segmentation method is more appropriate for extracting MCCs from mammograms.
A cluster-based method for marine sensitive object extraction and representation
Xue, Cunjin; Dong, Qing; Qin, Lijuan
2015-08-01
Within the context of global change, marine sensitive factors or Marine Essential Climate Variables have been defined by many projects, and their sensitive spatial regions and time phases play significant roles in regional sea-air interactions and in better understanding their dynamic processes. In this paper, we propose a cluster-based method for marine sensitive region extraction and representation. The method includes a kernel expansion algorithm for extracting marine sensitive regions, and a field-object triple form, an integration of the object-oriented and field-based models, for representing marine sensitive objects. Firstly, the method recognizes ENSO-related spatial patterns using empirical orthogonal decomposition of long-term marine sensitive factors and correlation analysis with multiple ENSO indices. The cluster kernel, defined by statistics of the spatial patterns, is initialized to carry out spatial expansion and cluster mergence with spatial neighborhoods recursively; then all related lattices with similar behavior are merged into marine sensitive regions. After this, the field-object triple form is used to represent the marine sensitive objects, both as discrete objects with a precise extent and boundary, and as a continuous field with variations dependent on spatial location. Finally, marine sensitive objects for sea surface temperature are extracted, represented and analyzed as a case study, which demonstrates the effectiveness and efficiency of the proposed method.
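The kernel-expansion step can be pictured as region growing on a lattice: starting from a seed cell, neighbouring cells with similar behaviour are absorbed recursively. A minimal sketch with an assumed similarity-by-value criterion (the paper's kernel is defined by statistics of the spatial patterns, not raw values):

```python
from collections import deque

def grow_region(grid, seed, tol):
    """Expand a cluster kernel over a 2-D lattice: starting from `seed`,
    recursively absorb 4-neighbours whose value differs from the seed
    value by less than `tol`."""
    rows, cols = len(grid), len(grid[0])
    base = grid[seed[0]][seed[1]]
    region, queue = {seed}, deque([seed])
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and (nr, nc) not in region
                    and abs(grid[nr][nc] - base) < tol):
                region.add((nr, nc))
                queue.append((nr, nc))
    return region
```

The breadth-first expansion stops at cells whose behaviour differs from the kernel, yielding a connected sensitive region.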
Phase Diagram of the Frustrated Square-Lattice Hubbard Model: Variational Cluster Approach
Misumi, Kazuma; Kaneko, Tatsuya; Ohta, Yukinori
2016-06-01
The variational cluster approximation is used to study the frustrated Hubbard model at half filling defined on the two-dimensional square lattice with anisotropic next-nearest-neighbor hopping parameters. We calculate the ground-state phase diagrams of the model in a wide parameter space for a variety of lattice geometries, including square, crossed-square, and triangular lattices. We examine the Mott metal-insulator transition and show that, in the Mott insulating phase, magnetic phases with Néel, collinear, and spiral orders appear in relevant parameter regions, and in an intermediate region between these phases, a nonmagnetic insulating phase caused by the quantum fluctuations in the geometrically frustrated spin degrees of freedom emerges.
Implicit particle methods and their connection with variational data assimilation
Atkins, Ethan; Chorin, Alexandre J
2012-01-01
The implicit particle filter is a sequential Monte Carlo method for data assimilation that guides the particles to high-probability regions via a sequence of steps that includes minimizations. We present a new and more general derivation of this approach and extend the method to particle smoothing as well as to data assimilation for perfect models. We show that the minimizations required by implicit particle methods are similar to the ones encountered in variational data assimilation, and we explore the connection of implicit particle methods with variational data assimilation. In particular, we argue that existing variational codes can be converted into implicit particle methods at low cost, often yielding better estimates that are also equipped with quantitative measures of uncertainty. A detailed example is presented.
Discrete Direct Methods in the Fractional Calculus of Variations
Pooseh, Shakoor; Almeida, Ricardo; Torres, Delfim F. M.
2012-01-01
Finite differences, as a subclass of direct methods in the calculus of variations, consist in discretizing the objective functional using appropriate approximations for the derivatives that appear in the problem. This article generalizes the same idea for fractional variational problems. We consider a minimization problem with a Lagrangian that depends on the left Riemann–Liouville fractional derivative. Using the Grünwald–Letnikov definition, we approximate the objective functional in...
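The Grünwald–Letnikov definition named above admits a simple discretization: the order-α derivative is approximated by a weighted backward sum whose coefficients follow the recurrence c_0 = 1, c_k = c_{k-1}(k − 1 − α)/k. A minimal sketch (a generic implementation of the definition, not the article's discretized functional):

```python
def gl_derivative(f, t, alpha, h, n):
    """Gruenwald-Letnikov approximation of the order-alpha fractional
    derivative of f at t, using n backward steps of size h:
    D^alpha f(t) ~ h**(-alpha) * sum_k c_k f(t - k*h),
    with c_0 = 1 and c_k = c_{k-1} * (k - 1 - alpha) / k."""
    c, total = 1.0, f(t)
    for k in range(1, n + 1):
        c *= (k - 1 - alpha) / k
        total += c * f(t - k * h)
    return total / h ** alpha
```

For α = 1 the coefficients collapse to (1, −1, 0, 0, …), so the sum reduces to the ordinary backward difference: applied to f(t) = t² at t = 1 it returns a value close to 2.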
Variational Principles and Methods in Theoretical Physics and Chemistry
Nesbet, Robert K.
2005-07-01
Preface; Part I. Classical Mathematics and Physics: 1. History of variational theory; 2. Classical mechanics; 3. Applied mathematics; Part II. Bound States in Quantum Mechanics: 4. Time-independent quantum mechanics; 5. Independent-electron models; 6. Time-dependent theory and linear response; Part III. Continuum States and Scattering Theory: 7. Multiple scattering theory for molecules and solids; 8. Variational methods for continuum states; 9. Electron-impact rovibrational excitation of molecules; Part IV. Field Theories: 10. Relativistic Lagrangian theories.
Nucleon matrix elements using the variational method in lattice QCD
Dragos, Jack; Kamleh, Waseem; Leinweber, Derek B; Nakamura, Yoshifumi; Rakow, Paul E L; Schierholz, Gerrit; Young, Ross D; Zanotti, James M
2016-01-01
The extraction of hadron matrix elements in lattice QCD using the standard two- and three-point correlation functions demands careful attention to systematic uncertainties. One of the most commonly studied sources of systematic error is contamination from excited states. We apply the variational method to calculate the axial vector current $g_{A}$, the scalar current $g_{S}$ and the quark momentum fraction $\langle x \rangle$ of the nucleon, and we compare the results to the more commonly used summation and two-exponential fit methods. The results demonstrate that the variational approach offers a more efficient and robust method for the determination of nucleon matrix elements.
Directory of Open Access Journals (Sweden)
Veloso Germany Gonçalves
2001-01-01
Full Text Available Episodic paroxysmal hemicrania (EPH) is a rare disorder characterized by frequent, daily attacks of short-lived, unilateral headache with accompanying ipsilateral autonomic features. EPH has attack periods which last weeks to months, separated by remission intervals lasting months to years; however, a seasonal variation has never been reported in EPH. We report a new case of EPH with a clear seasonal pattern: a 32-year-old woman with a right-sided headache for 17 years. Pain occurred with a seasonal variation, with bouts lasting one month (usually in the first months of the year) and remission periods lasting around 11 months. During the bouts she had headaches three to five times per day, lasting from 15 to 30 minutes, without any particular period preference. There were no precipitating or aggravating factors. Ipsilateral tearing and conjunctival injection accompanied the pain. Previous treatments provided no pain relief. She responded completely to indomethacin 75 mg daily. After three years, the pain recurred with longer attack duration and was relieved only with prednisone. We also propose a new hypothesis: the EPH-cluster headache continuum.
Some Implicit Methods for Solving Harmonic Variational Inequalities
Directory of Open Access Journals (Sweden)
Muhammad Aslam Noor
2016-08-01
Full Text Available In this paper, we use the auxiliary principle technique to suggest an implicit method for solving harmonic variational inequalities. It is shown that the convergence of the proposed method requires only pseudomonotonicity of the operator, which is a weaker condition than monotonicity.
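For contrast with the implicit scheme, the simplest explicit iterative method for a variational inequality ⟨T(x*), y − x*⟩ ≥ 0 for all y in C is the projection iteration x_{k+1} = P_C(x_k − ρT(x_k)). A 1-D sketch under an assumed strongly monotone operator (this is the classical method, not the auxiliary-principle scheme of the paper):

```python
def solve_vi(T, project, x0, rho=0.1, iters=1000):
    """Fixed-point projection iteration for the variational inequality
    <T(x*), y - x*> >= 0 for all y in C:
        x_{k+1} = P_C(x_k - rho * T(x_k))."""
    x = x0
    for _ in range(iters):
        x = project(x - rho * T(x))
    return x
```

With T(x) = x − 2 on C = [0, 1], the unconstrained zero lies outside C, and the iteration converges to the boundary solution x* = 1, where ⟨T(x*), y − x*⟩ ≥ 0 holds for all y in C.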
Tu, Xiaoguang; Gao, Jingjing; Zhu, Chongjing; Cheng, Jie-Zhi; Ma, Zheng; Dai, Xin; Xie, Mei
2016-12-01
Though numerous segmentation algorithms have been proposed to segment brain tissue from magnetic resonance (MR) images, few of them consider combining tissue segmentation and bias field correction into a unified framework while simultaneously removing noise. In this paper, we present a new unified MR image segmentation algorithm whereby tissue segmentation, bias correction and noise reduction are integrated within the same energy model. Our method introduces a total variation term into the coherent local intensity clustering criterion function. To solve the nonconvex problem with respect to the membership functions, we add auxiliary variables to the energy function so that Chambolle's fast dual projection method can be used, and the optimal segmentation and bias field estimation are achieved simultaneously through the reciprocal iteration. Experimental results show that the proposed method has a salient advantage over the other three baseline methods in both tissue segmentation and bias correction, and that noise is significantly reduced when the method is applied to highly noise-corrupted images. Moreover, benefiting from the fast convergence of the proposed solution, our method is less time-consuming and robust to parameter settings.
A Comparison of Methods for Player Clustering via Behavioral Telemetry
DEFF Research Database (Denmark)
Drachen, Anders; Thurau, Christian; Sifa, Rafet
2013-01-01
The analysis of user behavior in digital games has been aided by the introduction of user telemetry in game development, which provides unprecedented access to quantitative data on user behavior from the installed game clients of the entire population of players. Player behavior telemetry datasets can be exceptionally complex, with features recorded for a varying population of users over a temporal segment that can reach years in duration. Categorization of behaviors, whether through descriptive methods (e.g. segmentation) or unsupervised/supervised learning techniques, is valuable for finding patterns in the behavioral data, and developing profiles that are actionable to game developers. There are numerous methods for unsupervised clustering of user behavior, e.g. k-means/c-means, Nonnegative Matrix Factorization, or Principal Component Analysis. Although all yield behavior categorizations...
Relativistic extended coupled cluster method for magnetic hyperfine structure constant
Sasmal, Sudip; Nayak, Malaya K; Vaval, Nayana; Pal, Sourav
2015-01-01
This article deals with the general implementation of the 4-component spinor relativistic extended coupled-cluster (ECC) method to calculate first-order properties of atoms and molecules in their open-shell ground-state configurations. The implemented relativistic ECC is employed to calculate the hyperfine structure (HFS) constants of alkali metal atoms (Li, Na, K, Rb and Cs), singly charged alkaline earth metal atoms (Be+, Mg+, Ca+ and Sr+) and molecules (BeH, MgF and CaH). We have compared our ECC results with calculations based on the restricted active space configuration interaction (RAS-CI) method. Our results are in better agreement with the available experimental values than the RAS-CI values.
Directory of Open Access Journals (Sweden)
Fu Yuhua
2016-08-01
Full Text Available By using Neutrosophy and the Quad-stage Method, the expansions of comparative literature include comparative social sciences clusters, comparative natural sciences clusters, comparative interdisciplinary sciences clusters, and so on. Among them, the comparative social sciences clusters include comparative literature, comparative history, comparative philosophy, and so on; the comparative natural sciences clusters include comparative mathematics, comparative physics, comparative chemistry, comparative medicine, comparative biology, and so on.
A Method for Clustering Web Attacks Using Edit Distance
Petrovic, Slobodan; Alvarez, Gonzalo
2003-01-01
Cluster analysis often serves as the initial step in the process of data classification. In this paper, the problem of clustering input data of different lengths is considered. The edit distance, the minimum number of elementary edit operations needed to transform one vector into another, is used. A heuristic for clustering unequal-length vectors, analogous to the well-known k-means algorithm, is described and analyzed. This heuristic determines cluster centroids expanding shorter vectors to the l...
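The edit distance used as the clustering metric is the standard Levenshtein dynamic program; a minimal sketch:

```python
def edit_distance(u, v):
    """Minimum number of insertions, deletions and substitutions
    transforming u into v (row-by-row dynamic programming)."""
    prev = list(range(len(v) + 1))
    for i, a in enumerate(u, 1):
        cur = [i]
        for j, b in enumerate(v, 1):
            cur.append(min(prev[j] + 1,              # deletion
                           cur[j - 1] + 1,           # insertion
                           prev[j - 1] + (a != b)))  # substitution
        prev = cur
    return prev[-1]
```

Because the metric is defined on sequences of any length, it lets a k-means-like heuristic compare the unequal-length attack vectors directly.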
Methods of regional innovative clusters forming and development programs elaboration
Marchuk, Olha
2013-01-01
The aim of the article is to select programmes for the formation and development of innovative cluster structures. An analysis of the preconditions for the formation of innovative clusters in the regions of Ukraine was carried out. Two types of programmes were suggested for the implementation of cluster policy at the regional level.
The Additional Interpolators Method for Variational Analysis in Lattice QCD
Schiel, Rainer W
2015-01-01
In this paper, I describe the Additional Interpolators Method, a new technique for variational analysis in lattice QCD. It is shown to be an effective method that uses additional interpolators to remove backward-in-time running states that would otherwise contaminate the signal. A proof of principle, which also makes use of the Time-Shift Trick (Generalized Pencil-of-Functions method), is delivered with an example on a $64^4$ lattice close to the physical pion mass.
Gould, S H
2012-01-01
Purely mathematical treatment offers simple exposition of general theory of variational methods with special reference to the vibrating plate. No math beyond basic calculus. Includes exercises. 1957 edition.
A Grouping Method of Distribution Substations Using Cluster Analysis
Ohtaka, Toshiya; Iwamoto, Shinichi
Recently, it has been considered to group distribution substations together for evaluating the reinforcement planning of distribution systems. However, the grouping is carried out according to the knowledge and experience of an expert in charge of distribution systems, and subjective human judgment can make the grouping ambiguous. Therefore, a method for imitating the grouping by the expert has been desired in order to carry out a systematic grouping with numerical corroboration. In this paper, we propose a grouping method for distribution substations using cluster analysis based on the interconnected power between the distribution substations. Moreover, we consider geographical constraints such as rivers, roads, business office boundaries and branch boundaries, and also examine a method for adjusting the interconnected power. Simulations are carried out to verify the validity of the proposed method using an example system. From the simulation results, we find that imitation of the expert's grouping becomes possible by considering the geographical constraints and adjusting the interconnected power, and that the calculation time and number of iterations can be greatly reduced by introducing the local and tabu search methods.
Lestari, D.; Raharjo, D.; Bustamam, A.; Abdillah, B.; Widhianto, W.
2017-07-01
Dengue virus consists of 10 different constituent proteins and is classified into 4 major serotypes (DEN-1 to DEN-4). This study was designed to perform clustering on 30 protein sequences of dengue virus taken from the Virus Pathogen Database and Analysis Resource (VIPR) using the Regularized Markov Clustering (R-MCL) algorithm, and then we analyze the results. Implemented in Python 3.4, the R-MCL algorithm produces 8 clusters, with more than one centroid in several clusters. The number of centroids indicates the density level of interaction. Protein interactions that are connected in a tissue form a complex protein that serves as a specific biological process unit. The analysis shows that R-MCL clustering produces clusters of the dengue virus family based on the similarity of roles of their constituent proteins, regardless of serotype.
The Local Maximum Clustering Method and Its Application in Microarray Gene Expression Data Analysis
Directory of Open Access Journals (Sweden)
Chen Yidong
2004-01-01
Full Text Available An unsupervised data clustering method, called the local maximum clustering (LMC) method, is proposed for identifying clusters in experimental data sets based on research interest. A magnitude property is defined according to research purposes, and data sets are clustered around each local maximum of the magnitude property. By properly defining a magnitude property, this method can overcome many difficulties in microarray data clustering, such as reduced projection in similarities, noise, and arbitrary gene distribution. To critically evaluate the performance of this clustering method in comparison with other methods, we designed three model data sets with known cluster distributions and applied the LMC method as well as the hierarchical clustering method, the k-means clustering method, and the self-organized map method to these model data sets. The results show that the LMC method produces the most accurate clustering results. As an example of application, we applied the method to cluster the leukemia samples reported in the microarray study of Golub et al. (1999).
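One plausible reading of the LMC idea, clustering points around local maxima of a magnitude property, can be sketched as follows; the neighbourhood size `k` and all names are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def local_maximum_clusters(points, magnitude, k=3):
    # Assign each point to its nearest higher-magnitude neighbour among its
    # k nearest neighbours; points with no higher neighbour are local maxima
    # of the magnitude property and seed their own clusters.
    n = len(points)
    dist = np.linalg.norm(points[:, None] - points[None, :], axis=-1)
    parent = np.arange(n)
    for i in range(n):
        order = np.argsort(dist[i])[1:k + 1]   # k nearest neighbours (skip self)
        higher = [j for j in order if magnitude[j] > magnitude[i]]
        if higher:
            parent[i] = higher[0]
    # Follow parent pointers up to each local maximum to obtain labels.
    labels = np.empty(n, dtype=int)
    for i in range(n):
        j = i
        while parent[j] != j:
            j = parent[j]
        labels[i] = j
    return labels
```

With a density-like magnitude, each cluster is the basin of attraction of one local maximum, which matches the abstract's description of clustering "around each local maximum of the magnitude property".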
A Data Cleansing Method for Clustering Large-scale Transaction Databases
Loh, Woong-Kee; Kang, Jun-Gyu
2010-01-01
In this paper, we emphasize the need for data cleansing when clustering large-scale transaction databases and propose a new data cleansing method that improves clustering quality and performance. We evaluate our data cleansing method through a series of experiments. As a result, the clustering quality and performance were significantly improved by up to 165% and 330%, respectively.
A dynamic hierarchical clustering method for trajectory-based unusual video event detection.
Jiang, Fan; Wu, Ying; Katsaggelos, Aggelos K
2009-04-01
The proposed unusual video event detection method is based on unsupervised clustering of object trajectories, which are modeled by hidden Markov models (HMM). The novelty of the method includes a dynamic hierarchical process incorporated in the trajectory clustering algorithm to prevent model overfitting and a 2-depth greedy search strategy for efficient clustering.
SST data assimilation experiments using an adaptive variational method
Institute of Scientific and Technical Information of China (English)
Anonymous
2002-01-01
An adaptive variational data assimilation method is proposed by Zhu and Kamachi[1]. This method can adaptively adjust the model state without knowing explicitly the model error covariance matrix. The method enables very flexible ways to form reduced-order problems. A proper reduced-order problem not only reduces the computational burden but also leads to corrections that are more consistent with the model dynamics, which tends to produce better forecasts. These features make the adaptive variational method a good candidate for SST data assimilation, because the model error of an ocean model is usually difficult to estimate. We applied this method to an SST data assimilation problem using the LOTUS data sets and an ocean mixed layer model (Mellor-Yamada level 2.5). Results of assimilation experiments showed good skill in improving subsurface temperatures by assimilating surface observations alone.
Image Clustering Method Based on Density Maps Derived from Self-Organizing Mapping: SOM
Directory of Open Access Journals (Sweden)
Kohei Arai
2012-07-01
Full Text Available A new method for image clustering with density maps derived from Self-Organizing Maps (SOM) is proposed, together with a clarification of the learning processes during the construction of clusters. It is found that the proposed SOM-based image clustering method gives much better clustering results for both simulated and real satellite imagery data. It is also found that the separability among clusters of the proposed method is 16% greater than that of the existing k-means clustering. In accordance with the experimental results with a Landsat-5 TM image, more than 20,000 iterations are required for convergence of the SOM learning processes.
Computer vision analysis of image motion by variational methods
Mitiche, Amar
2014-01-01
This book presents a unified view of image motion analysis under the variational framework. Variational methods, rooted in physics and mechanics, but appearing in many other domains, such as statistics, control, and computer vision, address a problem from an optimization standpoint, i.e., they formulate it as the optimization of an objective function or functional. The methods of image motion analysis described in this book use the calculus of variations to minimize (or maximize) an objective functional which transcribes all of the constraints that characterize the desired motion variables. The book addresses the four core subjects of motion analysis: motion estimation, detection, tracking, and three-dimensional interpretation. Each topic is covered in a dedicated chapter. The presentation is prefaced by an introductory chapter which discusses the purpose of motion analysis. Further, a chapter is included which gives the basic tools and formulae related to curvature, Euler-Lagrange equations, unconstrained de...
VARIATION METHOD FOR ACOUSTIC WAVE IMAGING OF TWO DIMENSIONAL TARGETS
Institute of Scientific and Technical Information of China (English)
冯文杰; 邹振祝
2003-01-01
A new approach to acoustic wave imaging was investigated. Using Green function theory, a system of integral equations linking the wave number perturbation function with the wave field was first deduced. By taking variations on these integral equations, an inversion equation was further obtained, reflecting the relation between small variations of the wave number perturbation function and those of the scattering field. Finally, the perturbation functions of some identical targets were reconstructed, and some properties of the novel method, including convergence speed, inversion accuracy, and the abilities to resist random noise and identify complex targets, were discussed. Results of numerical simulation show that the method based on the variational principle has great theoretical and applicable value for quantitative nondestructive evaluation.
Displacement of Building Cluster Using Field Analysis Method
Institute of Scientific and Technical Information of China (English)
Al Tinghua
2003-01-01
This paper presents a field-based method to deal with the displacement of a building cluster driven by street widening. The compression of the street boundary produces a force that pushes the buildings inward, and the propagation of this force is a decay process. To describe this phenomenon, field theory is introduced with an isoline representation model. On the basis of the skeleton of the Delaunay triangulation, a displacement field is built in which the propagation force is related to the degree of adjacency to the street boundary. The study offers the computation of the displacement direction and offset distance for building displacement. The vector operation is performed on the basis of gradient and other field concepts.
Directory of Open Access Journals (Sweden)
Mahapatra Rajendra
2011-06-01
Full Text Available Abstract Background Public health interventions are increasingly evaluated using cluster-randomised trials in which groups rather than individuals are allocated randomly to treatment and control arms. Outcomes for individuals within the same cluster are often more correlated than outcomes for individuals in different clusters. This needs to be taken into account in sample size estimations for planned trials, but most estimates of intracluster correlation for perinatal health outcomes come from hospital-based studies and may therefore not reflect outcomes in the community. In this study we report estimates for perinatal health outcomes from community-based trials to help researchers plan future evaluations. Methods We estimated the intracluster correlation and the coefficient of variation for a range of outcomes using data from five community-based cluster randomised controlled trials in three low-income countries: India, Bangladesh and Malawi. We also performed a simulation exercise to investigate the impact of cluster size and number of clusters on the reliability of estimates of the coefficient of variation for rare outcomes. Results Estimates of intracluster correlation for mortality outcomes were lower than those for process outcomes, with narrower confidence intervals throughout for trials with larger numbers of clusters. Estimates of intracluster correlation for maternal mortality were particularly variable with large confidence intervals. Stratified randomisation had the effect of reducing estimates of intracluster correlation. The simulation exercise showed that estimates of intracluster correlation are much less reliable for rare outcomes such as maternal mortality. The size of the cluster had a greater impact than the number of clusters on the reliability of estimates for rare outcomes. Conclusions The breadth of intracluster correlation estimates reported here in terms of outcomes and contexts will help researchers plan future
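For equal-sized clusters, the one-way ANOVA estimator is a common way to compute the intracluster correlation discussed in this abstract; a sketch under that equal-size assumption (not necessarily the estimator used in the trials):

```python
import numpy as np

def anova_icc(clusters):
    # One-way ANOVA estimator of the intracluster correlation (ICC) for
    # equal-sized clusters; `clusters` is a list of arrays of individual
    # outcomes, one array per cluster.
    k = len(clusters)
    m = len(clusters[0])             # common cluster size (assumed equal)
    grand = np.mean(np.concatenate(clusters))
    means = np.array([np.mean(c) for c in clusters])
    msb = m * np.sum((means - grand) ** 2) / (k - 1)     # between-cluster mean square
    msw = sum(np.sum((c - mu) ** 2) for c, mu in zip(clusters, means)) \
          / (k * (m - 1))                                # within-cluster mean square
    return (msb - msw) / (msb + (m - 1) * msw)
```

An ICC near 1 means outcomes within a cluster are nearly identical (as for strongly clustered process outcomes), while an ICC near 0 means clustering adds little correlation, which is why the abstract's mortality outcomes, with lower ICCs, need smaller design-effect corrections than process outcomes.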
Das, T. P.; Pink, R. H.; Dubey, Archana; Scheicher, R. H.; Chow, Lee
2011-03-01
As part of our continuing test of the accuracy of the variational methods Variational Hartree-Fock Many-Body Perturbation Theory (VHFMBPT) and Variational Density Functional Theory (VDFT) for the study of energy- and wave-function-dependent properties in molecular and solid state systems, we are studying the magnetic hyperfine interactions in the ground state of the sodium atom for comparison by these methods with the available results from experiment 1 and the linked-cluster many-body perturbation theory (LCMBPT) for atoms 2, which has provided very accurate results for the one-electron and many-electron contributions and total hyperfine constants in atomic systems. Comparison will also be made with the corresponding results obtained already from the VHFMBPT and VDFT methods in lithium 3 to draw general conclusions about the nature of possible improvements needed for the variational methods. 1. M. Arditi and R. T. Carver, Phys. Rev. 109, 1012 (1958); 2. T. Lee, N. C. Dutta, and T. P. Das, Hyperfine Structure of Sodium, Phys. Rev. A 1, 995 (1970); 3. Third Joint HFI-NQI International Conference on Hyperfine Interactions, CERN, Geneva, September 2010.
Energy Technology Data Exchange (ETDEWEB)
Simmerer, Jennifer; Ivans, Inese I.; Filler, Dan [Department of Physics and Astronomy, University of Utah, Salt Lake City, UT 84112 (United States); Francois, Patrick [Paris-Meudon Observatory, France and Universite de Picardie Jules Verne, F-80080 Amiens (France); Charbonnel, Corinne [Geneva Observatory, University of Geneva, Chemin des Maillettes 51, CH-1290 Versoix (Switzerland); Monier, Richard [Laboratoire Hippolyte Fizeau, Universite Nice Sophia Antipolis, Parc Valrose, F-06000 Nice (France); James, Gaeel, E-mail: jennifer@physics.utah.edu, E-mail: iii@physics.utah.edu, E-mail: dan.filler@utah.edu, E-mail: patrick.francois@obspm.fr, E-mail: corinne.charbonnel@unige.ch, E-mail: richard.monier@unice.fr, E-mail: gjames@eso.org [European Southern Observatory, Karl-Schwarzschild-Strasse 2, D-85748 Garching bei Munchen (Germany)
2013-02-10
We present the metallicity as traced by the abundance of iron in the retrograde globular cluster NGC 3201, measured from high-resolution, high signal-to-noise spectra of 24 red giant branch stars. A spectroscopic analysis reveals a spread in [Fe/H] in the cluster stars at least as large as 0.4 dex. Star-to-star metallicity variations are supported both through photometry and through a detailed examination of spectra. We find no correlation between iron abundance and distance from the cluster core, as might be inferred from recent photometric studies. NGC 3201 is the lowest mass halo cluster to date to contain stars with significantly different [Fe/H] values.
Study on Application of the Hypervirial Theorem in the Variational Method
Institute of Scientific and Technical Information of China (English)
DING Yi-Bing; LI Xue-Qian; et al.
2002-01-01
We discuss a methodological problem which is crucially important for solving the Schrödinger equation in terms of the variational method. We present a complete analysis of the application of the hypervirial theorem for judging the quality of the trial wavefunction without invoking the precise solutions.
Optimal Variational Asymptotic Method for Nonlinear Fractional Partial Differential Equations.
Baranwal, Vipul K; Pandey, Ram K; Singh, Om P
2014-01-01
We propose an optimal variational asymptotic method to solve time-fractional nonlinear partial differential equations. In the proposed method, an arbitrary number of auxiliary parameters γ0, γ1, γ2, … and auxiliary functions H0(x), H1(x), H2(x), … are introduced in the correction functional of the standard variational iteration method. The optimal values of these parameters are obtained by minimizing the square residual error. To test the method, we apply it to solve two important classes of nonlinear partial differential equations: (1) the fractional advection-diffusion equation with a nonlinear source term and (2) the fractional Swift-Hohenberg equation. Only a few iterations are required to achieve fairly accurate solutions of both the first and second problems.
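For reference, the correction functional of the standard variational iteration method, into which the auxiliary parameters and functions are introduced, has the generic form below; the exact placement of the γi and Hi(x) in the paper may differ from this sketch:

```latex
% Standard VIM correction functional; \lambda(s) is the Lagrange multiplier
% and \tilde{u}_n denotes the restricted variation.  The paper introduces
% auxiliary parameters \gamma_i and functions H_i(x) into this correction term.
\[
  u_{n+1}(x,t) = u_n(x,t)
    + \int_0^{t} \lambda(s)\,\bigl[\mathcal{L}\,u_n(x,s)
    + \mathcal{N}\,\tilde{u}_n(x,s) - g(x,s)\bigr]\,\mathrm{d}s
\]
% The optimal parameter values minimize the square residual of the n-th
% approximation over the domain \Omega and time horizon T:
\[
  R = \int_{\Omega}\!\!\int_0^{T}
    \bigl[\mathcal{L}\,u_n + \mathcal{N}\,u_n - g\bigr]^2
    \,\mathrm{d}t\,\mathrm{d}x
\]
```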
Discrete gradient methods for solving variational image regularisation models
Grimm, V.; McLachlan, Robert I.; McLaren, David I.; Quispel, G. R. W.; Schönlieb, C.-B.
2017-07-01
Discrete gradient methods are well-known methods of geometric numerical integration, which preserve the dissipation of gradient systems. In this paper we show that this property of discrete gradient methods can be interesting in the context of variational models for image processing, that is where the processed image is computed as a minimiser of an energy functional. Numerical schemes for computing minimisers of such energies are desired to inherit the dissipative property of the gradient system associated to the energy and consequently guarantee a monotonic decrease of the energy along iterations, avoiding situations in which more computational work might lead to less optimal solutions. Under appropriate smoothness assumptions on the energy functional we prove that discrete gradient methods guarantee a monotonic decrease of the energy towards stationary states, and we promote their use in image processing by exhibiting experiments with convex and non-convex variational models for image deblurring, denoising, and inpainting.
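The dissipation-preserving idea can be illustrated in one dimension with the mean-value discrete gradient; all names are illustrative, and the paper's schemes operate on image-sized gradient systems rather than scalars:

```python
def discrete_gradient_step(V, x, tau, tol=1e-12):
    # One step of discrete gradient descent (1-D) with the mean-value
    # discrete gradient DG(x, y) = (V(y) - V(x)) / (y - x), solving the
    # implicit update y = x - tau * DG(x, y) by fixed-point iteration.
    # At the fixed point, V(y) - V(x) = DG*(y - x) = -(y - x)**2 / tau <= 0,
    # so the energy decreases monotonically regardless of the step size.
    y = x - tau * (V(x + 1e-8) - V(x)) / 1e-8   # forward-difference initial guess
    for _ in range(200):
        dg = (V(y) - V(x)) / (y - x) if y != x else 0.0
        y_new = x - tau * dg
        if abs(y_new - y) < tol:
            break
        y = y_new
    return y
```

The implicit update costs a small inner solve, but in exchange the energy decrease is exact rather than holding only for sufficiently small steps, which is the property the paper exploits for variational image models.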
Structure of raft-model membrane by using the inverse contrast variation neutron scattering method
Energy Technology Data Exchange (ETDEWEB)
Hirai, Mitsuhiro [Department of Physics, Gunma University, Maebashi 371-8510 (Japan); Hirai, Harutaka [Department of Physics, Gunma University, Maebashi 371-8510 (Japan); Koizumi, Masaharu [Department of Physics, Gunma University, Maebashi 371-8510 (Japan); Kasahara, Kohji [Tokyo Metropolitan Institute of Medical Science, Tokyo 113-8613 (Japan); Yuyama, Kohei [Tokyo Metropolitan Institute of Medical Science, Tokyo 113-8613 (Japan); Suzuki, Naoko [Tokyo Metropolitan Institute of Medical Science, Tokyo 113-8613 (Japan)
2006-11-15
By means of the inverse contrast variation method in small-angle neutron scattering, we have studied the structure of a small unilamellar vesicle (SUV) composed of ganglioside, cholesterol and dipalmitoyl-phosphocholine. The SUV treated has a lipid composition similar to that of a plasma membrane with microdomains, so-called rafts. The present results indicate an asymmetric distribution of lipid components within the bilayer of the vesicle, that is, a predominant distribution of ganglioside and cholesterol in the outer leaflet of the vesicle bilayer. The deviation from linearity in a pseudo-Stuhrmann plot strongly suggests the presence of a large heterogeneity of lipid composition in the bilayer, namely a clustering of ganglioside and cholesterol molecules. This deviation is enhanced by temperature elevation, meaning that ganglioside-cholesterol clusters become larger with the liquid-ordered (Lo) phase being maintained.
Cluster analysis of diurnal variations in BC concentration from Multi-Angle Absorption Photometer
Han, Y.; KIM, C.; Park, J.; Choi, Y.; Ghim, Y.
2013-12-01
Black carbon (BC) is emitted from the incomplete combustion of carbon-containing fuels, such as fossil fuels (diesel and coal) and biomass burning (forest fires and burning of agricultural waste). We have measured BC concentration using a MAAP (Multi-Angle Absorption Photometer, Model 5012, Thermo Scientific) during the past few years. The measurement site is on the rooftop of a five-story building on a hill (37.02 °N, 127.16 °E, 167 m above sea level), about 35 km southeast of Seoul; there are no major emission sources nearby except a 4-lane road running about 1.4 km to the west. Previous studies reveal that the effects of vehicle emissions are not as direct as at urban sites, whereas those of biomass burning are widespread. Diurnal variations of BC concentration are classified using cluster analysis. Typical patterns are determined to identify the primary emissions and their effects on the concentration level. High-concentration episodes are discriminated, and the major factors that influence the evolution of the episodes are investigated.
THE VARIATIONAL PRINCIPLE AND APPLICATION OF NUMERICAL MANIFOLD METHOD
Institute of Scientific and Technical Information of China (English)
骆少明; 张湘伟; 蔡永昌
2001-01-01
The physical-cover-oriented variational principle of the numerical manifold method (NMM) for the analysis of linear elastic static problems was put forward according to the displacement model and the characteristics of the numerical manifold method. The theoretical calculation formulations and the controlling equation of NMM were derived. As an example, a plate with a hole in the center is calculated, and the results show that the solution precision and efficiency of NMM are satisfactory.
Coupled-cluster methods for core-hole dynamics
Picon, Antonio; Cheng, Lan; Hammond, Jeff R.; Stanton, John F.; Southworth, Stephen H.
2014-05-01
Coupled cluster (CC) is a powerful numerical method used in quantum chemistry in order to take into account electron correlation with high accuracy and size consistency. In the CC framework, excited, ionized, and electron-attached states can be described by the equation of motion (EOM) CC technique. However, bringing CC methods to describe molecular dynamics induced by x rays is challenging. X rays have the special feature of interacting with core-shell electrons that are close to the nucleus. Core-shell electrons can be ionized or excited to a valence shell, leaving a core-hole that will decay very fast (e.g. 2.4 fs for K-shell of Ne) by emitting photons (fluorescence process) or electrons (Auger process). Both processes are a clear manifestation of a many-body effect, involving electrons in the continuum in the case of Auger processes. We review our progress of developing EOM-CC methods for core-hole dynamics. Results of the calculations will be compared with measurements on core-hole decays in atomic Xe and molecular XeF2. This work is funded by the Office of Basic Energy Sciences, Office of Science, U.S. Department of Energy, under Contract No. DE-AC02-06CH11357.
Generalized Method of Variational Analysis for 3-D Flow
Institute of Scientific and Technical Information of China (English)
兰伟仁; 黄思训; 项杰
2004-01-01
The generalized method of variational analysis (GMVA) suggested for 2-D wind observations by Huang et al. is extended to 3-D cases. Just as in 2-D cases, the regularization idea is applied. But due to the complexity of the 3-D cases, the vertical vorticity is taken as a stable functional. The results indicate that wind observations can be both variationally optimized and filtered. The efficiency of GMVA is also checked in a numerical test. Finally, 3-D wind observations with random disturbances are manipulated by GMVA after being filtered.
Elastic scattering of positronium: Application of the confined variational method
Zhang, Junyi
2012-08-01
We demonstrate for the first time that the phase shift in elastic positronium-atom scattering can be precisely determined by the confined variational method, in spite of the fact that the Hamiltonian includes an unphysical confining potential acting on the center of mass of the positron and one of the atomic electrons. As an example, we study the S-wave elastic scattering for the positronium-hydrogen scattering system, where the existing 4% discrepancy between the Kohn variational calculation and the R-matrix calculation is resolved. © Copyright EPLA, 2012.
Minimizers with discontinuous velocities for the electromagnetic variational method
de Luca, Jayme
2010-08-01
The electromagnetic two-body problem has neutral differential delay equations of motion that, for generic boundary data, can have solutions with discontinuous derivatives. If one wants to use these neutral differential delay equations with arbitrary boundary data, solutions with discontinuous derivatives must be expected and allowed. Surprisingly, Wheeler-Feynman electrodynamics has a boundary value variational method for which minimizer trajectories with discontinuous derivatives are also expected, as we show here. The variational method defines continuous trajectories with piecewise defined velocities and accelerations, and electromagnetic fields defined by the Euler-Lagrange equations on trajectory points. Here we use the piecewise defined minimizers with the Liénard-Wiechert formulas to define generalized electromagnetic fields almost everywhere (but on sets of points of zero measure where the advanced/retarded velocities and/or accelerations are discontinuous). Along with this generalization we formulate the generalized absorber hypothesis that the far fields vanish asymptotically almost everywhere and show that localized orbits with far fields vanishing almost everywhere must have discontinuous velocities on sewing chains of breaking points. We give the general solution for localized orbits with vanishing far fields by solving a (linear) neutral differential delay equation for these far fields. We discuss the physics of orbits with discontinuous derivatives, stressing the differences from the variational methods of classical mechanics and the existence of a spinorial four-current associated with the generalized variational electrodynamics.
Motion estimation using point cluster method and Kalman filter.
Senesh, M; Wolf, A
2009-05-01
The most frequently used method in three-dimensional human gait analysis involves placing markers on the skin of the analyzed segment. This introduces a significant artifact, which strongly influences the bone position and orientation and joint kinematic estimates. In this study, we tested and evaluated the effect of adding a Kalman filter procedure to the previously reported point cluster technique (PCT) in the estimation of a rigid body motion. We demonstrated the procedures by motion analysis of a compound planar pendulum from indirect opto-electronic measurements of markers attached to an elastic appendage that is restrained to slide along the rigid body long axis. The elastic frequency is close to the pendulum frequency, as in the biomechanical problem, where the soft tissue frequency content is similar to the actual movement of the bones. Comparison of the real pendulum angle to that obtained by several estimation procedures (PCT, Kalman filter followed by PCT, and low-pass filter followed by PCT) enables evaluation of the accuracy of the procedures. When comparing the maximal amplitude, no effect was noted by adding the Kalman filter; however, a closer look at the signal revealed that the estimated angle based only on the PCT method was very noisy with fluctuations, while the estimated angle based on the Kalman filter followed by the PCT was a smooth signal. It was also noted that the instantaneous frequencies obtained from the estimated angle based on the PCT method are more dispersed than those obtained from the estimated angle based on the Kalman filter followed by the PCT method. Addition of a Kalman filter to the PCT method in the estimation procedure of rigid body motion results in a smoother signal that better represents the real motion, with less signal distortion than when using a digital low-pass filter. Furthermore, it can be concluded that adding a Kalman filter to the PCT procedure substantially reduces the dispersion of the maximal and minimal
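The smoothing role the Kalman filter plays here can be illustrated with a minimal scalar filter for a random-walk state model; the noise variances `q` and `r` are illustrative values, not the study's:

```python
def kalman_smooth_1d(measurements, q=1e-4, r=0.04):
    # Minimal scalar Kalman filter for a random-walk state model:
    #   x_k = x_{k-1} + w_k,  w_k ~ N(0, q)   (process noise)
    #   z_k = x_k + v_k,      v_k ~ N(0, r)   (measurement noise)
    x, p = measurements[0], 1.0      # initial state estimate and variance
    estimates = []
    for z in measurements:
        p = p + q                    # predict: variance grows by process noise
        k = p / (p + r)              # Kalman gain
        x = x + k * (z - x)          # update with the innovation z - x
        p = (1 - k) * p
        estimates.append(x)
    return estimates
```

Unlike a fixed low-pass filter, the gain `k` adapts to the relative confidence in model and measurement, which is why the study finds less signal distortion with the Kalman stage.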
Nucleon matrix elements using the variational method in lattice QCD
Dragos, J.; Horsley, R.; Kamleh, W.; Leinweber, D. B.; Nakamura, Y.; Rakow, P. E. L.; Schierholz, G.; Young, R. D.; Zanotti, J. M.
2016-10-01
The extraction of hadron matrix elements in lattice QCD using the standard two- and three-point correlator functions demands careful attention to systematic uncertainties. One of the most commonly studied sources of systematic error is contamination from excited states. We apply the variational method to calculate the axial vector current gA, the scalar current gS, the tensor current gT and the quark momentum fraction ⟨x⟩ of the nucleon, and we compare the results to the more commonly used summation and two-exponential fit methods. The results demonstrate that the variational approach offers a more efficient and robust method for the determination of nucleon matrix elements.
Popescu, Bogdan
2013-01-01
A growing fraction of Simple Stellar Population (SSP) models, in an aim to create more realistic simulations capable of including stochastic variation in their outputs, begin their simulations with a distribution of discrete stars following a power-law function of masses. Careful attention is needed to create a correctly sampled Initial Mass Function (IMF), and in this contribution we provide a solid mathematical method, called MASSCLEAN IMF Sampling, for doing so. We then use our method to perform 10 million MASSCLEAN Monte Carlo stellar cluster simulations to determine the most massive star in a mass distribution as a function of the total mass of the cluster. We find that a maximum mass range is predicted, not a single maximum mass. This maximum mass range is (a) dependent on the total mass of the cluster and (b) independent of an upper stellar mass limit, $M_{limit}$, for unsaturated clusters, and comes out naturally using our IMF sampling method. We then turn our analysis around, now starting with our new $...
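For a single power law, correctly sampling an IMF reduces to inverse-transform sampling of a truncated power-law distribution; a sketch assuming a Salpeter-like exponent (the MASSCLEAN procedure itself is more elaborate):

```python
import numpy as np

def sample_imf(n, alpha=2.35, m_min=0.1, m_max=150.0, seed=0):
    # Inverse-transform sampling of a truncated power-law IMF,
    # dN/dm ∝ m**(-alpha) on [m_min, m_max].  The CDF of a truncated
    # power law inverts in closed form:
    #   m(u) = (m_min**a + u * (m_max**a - m_min**a))**(1/a),  a = 1 - alpha.
    rng = np.random.default_rng(seed)
    u = rng.random(n)
    a = 1.0 - alpha                  # exponent of the integrated IMF
    lo, hi = m_min ** a, m_max ** a
    return (lo + u * (hi - lo)) ** (1.0 / a)
```

Drawing stars this way until the summed mass reaches the cluster's total mass yields, over repeated realizations, a distribution of most-massive stars rather than a single value, consistent with the maximum mass range described above.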
Efficiency of a Multi-Reference Coupled Cluster method
Giner, Emmanuel; Scemama, Anthony; Malrieu, Jean Paul
2015-01-01
The multi-reference Coupled Cluster method first proposed by Meller et al. (J. Chem. Phys. 1996) has been implemented and tested. Guess values of the amplitudes of the single and double excitations (the ${\hat T}$ operator) on top of the references are extracted from the knowledge of the coefficients of the Multi-Reference Singles and Doubles Configuration Interaction (MRSDCI) matrix. The multiple parentage problem is solved by scaling these amplitudes on the interaction between the references and the Singles and Doubles. Then one proceeds to a dressing of the MRSDCI matrix under the effect of the Triples and Quadruples, the coefficients of which are estimated from the action of ${\hat T}^2$. This dressing follows the logic of the intermediate effective Hamiltonian formalism. The dressed MRSDCI matrix is diagonalized and the process is iterated to convergence. The method is tested on a series of benchmark systems from Complete Active Spaces (CAS) involving 2 or 4 active electrons up to bond breakings. The...
A COMPARISON OF DIFFERENT CONTRACTION METHODS FOR MONOTONE VARIATIONAL INEQUALITIES
Institute of Scientific and Technical Information of China (English)
Bingsheng He; Xiang Wang; Junfeng Yang
2009-01-01
It is interesting to compare the efficiency of two methods when their computational loads in each iteration are equal. In this paper, two classes of contraction methods for monotone variational inequalities are studied in a unified framework. The methods of both classes can be viewed as prediction-correction methods, which generate the same test vector in the prediction step and adopt the same step-size rule in the correction step. The only difference is that they use different search directions. The computational loads of each iteration of the different classes are equal. Our analysis explains theoretically why one class of the contraction methods usually outperforms the other class. It is demonstrated that many known methods belong to these two classes of methods. Finally, the presented numerical results demonstrate the validity of our analysis.
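A minimal instance of the prediction-correction family discussed here is the extragradient scheme for a monotone variational inequality over a convex set C; the step size and test problems are illustrative:

```python
import numpy as np

def extragradient_vi(F, project, x0, tau=0.1, iters=500):
    # Extragradient (prediction-correction) method for the monotone VI
    #   find x* in C with F(x*)^T (x - x*) >= 0 for all x in C:
    #   prediction: y = P_C(x - tau * F(x))
    #   correction: x = P_C(x - tau * F(y))
    # Both steps use the same step-size rule; they differ only in the
    # search direction, F(x) versus F(y), as in the two classes compared.
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        y = project(x - tau * F(x))   # prediction step
        x = project(x - tau * F(y))   # correction step
    return x
```

On a rotational (skew) operator, where plain projected gradient iteration diverges, the corrected direction still yields convergence, which is the kind of behaviour the paper's unified analysis explains.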
Multilevel Analysis Methods for Partially Nested Cluster Randomized Trials
Sanders, Elizabeth A.
2011-01-01
This paper explores multilevel modeling approaches for 2-group randomized experiments in which a treatment condition involving clusters of individuals is compared to a control condition involving only ungrouped individuals, otherwise known as partially nested cluster randomized designs (PNCRTs). Strategies for comparing groups from a PNCRT in the…
Hudjimartsu, S. A.; Djatna, T.; Ambarwari, A.; Apriliantono
2017-01-01
Forest fires in Indonesia occur frequently in the dry season, and almost all of them are caused by human activity. Their impacts include the loss of biodiversity, pollution hazards, and economic harm to surrounding communities. Preventing fires requires suitable analysis methods, one of which is spatial-temporal clustering. Spatial-temporal clustering groups the data so that the resulting clusters can serve as initial information for fire prevention. To analyze the fires, hotspot data were used as an early indicator of fire spots. Hotspot data consist of spatial and temporal dimensions and can be processed using spatial-temporal clustering with the Kulldorff Scan Statistic (KSS). This research demonstrates the effectiveness of the KSS method for clustering spatial hotspots in a case study within Riau Province, producing two types of clusters: a most-likely cluster and secondary clusters. These clusters can be used as early fire warning information.
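The core of the Kulldorff scan statistic is a Poisson log-likelihood ratio evaluated per candidate window; a minimal sketch follows (the scan over space-time cylinders and the Monte Carlo significance test are omitted, and the toy numbers are illustrative):

```python
import math

def kulldorff_llr(c, C, E):
    """Poisson log-likelihood ratio of the Kulldorff scan statistic for one
    candidate window: c observed cases inside, C total cases, E expected
    cases inside under the null (0 < E < C).
    Nonzero only for excess risk (c > E)."""
    if c <= E:
        return 0.0
    llr = c * math.log(c / E)
    if C > c:
        llr += (C - c) * math.log((C - c) / (C - E))
    return llr

# toy example: a candidate window holds 30 of 100 hotspots,
# while only 15 would be expected under spatial homogeneity
llr = kulldorff_llr(c=30, C=100, E=15.0)
```

The window maximizing this ratio is the most-likely cluster; remaining non-overlapping windows with high ratios are the secondary clusters mentioned in the abstract.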
Variational denoising method for electronic speckle pattern interferometry
Institute of Scientific and Technical Information of China (English)
Fang Zhang; Wenyao Liu; Chen Tang; Jinjiang Wang; Li Ren
2008-01-01
Traditional speckle fringe patterns by electronic speckle pattern interferometry (ESPI) are inherently noisy and of limited visibility, so denoising is the key problem in ESPI. We present a variational denoising method for ESPI. This method transforms image denoising into minimizing an appropriate penalized energy function and solving a partial differential equation. We test the proposed method on both computer-simulated and experimental speckle correlation fringes. The results show that this technique is capable of significantly improving the quality of fringe patterns. It works well as a pre-processing step for ESPI fringe patterns.
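A minimal sketch of the idea: a smoothed total-variation energy minimized by explicit gradient descent on its Euler-Lagrange PDE. The energy, parameters, and discretization are illustrative assumptions, not the authors' exact formulation:

```python
import numpy as np

def tv_denoise(f, lam=0.2, tau=0.01, eps=0.1, n_iter=300):
    """Explicit gradient flow for a smoothed TV energy:
    u_t = div( grad u / sqrt(|grad u|^2 + eps^2) ) - lam * (u - f)."""
    u = f.copy()
    for _ in range(n_iter):
        ux = np.roll(u, -1, axis=1) - u          # forward differences
        uy = np.roll(u, -1, axis=0) - u
        mag = np.sqrt(ux ** 2 + uy ** 2 + eps ** 2)
        px, py = ux / mag, uy / mag              # normalized gradient field
        # backward-difference divergence of (px, py)
        div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
        u = u + tau * (div - lam * (u - f))
    return u

rng = np.random.default_rng(0)
clean = np.zeros((32, 32))
clean[8:24, 8:24] = 1.0                          # a square test object
noisy = clean + 0.3 * rng.standard_normal(clean.shape)
denoised = tv_denoise(noisy)
```

The smoothing parameter `eps` keeps the diffusion coefficient bounded so the explicit scheme stays stable; edge-like structures (fringes) diffuse more slowly than flat noisy regions.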
Directory of Open Access Journals (Sweden)
Susan Worner
2013-09-01
Full Text Available For greater preparedness, pest risk assessors are required to prioritise long lists of pest species with potential to establish and cause significant impact in an endangered area. Such prioritization is often qualitative, subjective, and sometimes biased, relying mostly on expert and stakeholder consultation. In recent years, cluster-based analyses have been used to investigate regional pest species assemblages or pest profiles to indicate the risk of new organism establishment. Such an approach is based on the premise that the co-occurrence of well-known global invasive pest species in a region is not random, and that the pest species profile or assemblage integrates complex functional relationships that are difficult to tease apart. In other words, the assemblage can help identify and prioritise species that pose a threat in a target region. A computational intelligence method called a Kohonen self-organizing map (SOM), a type of artificial neural network, was the first clustering method applied to analyse assemblages of invasive pests. The SOM is a well-known dimension reduction and visualization method, especially useful for high-dimensional data that more conventional clustering methods may not analyse suitably. Like all clustering algorithms, the SOM can give details of clusters that identify regions with similar pest assemblages, as well as possible donor and recipient regions. More importantly, the SOM connection weights that result from the analysis can be used to rank the strength of association of each species within each regional assemblage. Species with high weights that are not already established in the target region are identified as high risk. However, the SOM analysis is only the first step in a process to assess risk, to be used alongside or incorporated within other measures. Here we illustrate the application of SOM analyses in a range of contexts in invasive species risk assessment, and discuss other clustering methods such as k
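A minimal Kohonen SOM training loop of the kind described; the grid size, decay schedules, and the toy "assemblage" data are illustrative assumptions, not the study's settings:

```python
import numpy as np

def train_som(data, grid=(5, 5), n_iter=2000, lr0=0.5, sigma0=2.0, seed=0):
    """Train a 2-D Kohonen self-organizing map with exponentially
    decaying learning rate and neighborhood radius."""
    rng = np.random.default_rng(seed)
    h, w = grid
    weights = rng.random((h, w, data.shape[1]))
    yy, xx = np.mgrid[0:h, 0:w]               # grid coords for neighborhoods
    for t in range(n_iter):
        x = data[rng.integers(len(data))]
        # best-matching unit (BMU)
        d2 = ((weights - x) ** 2).sum(axis=2)
        by, bx = np.unravel_index(d2.argmin(), d2.shape)
        frac = t / n_iter
        lr = lr0 * np.exp(-3 * frac)
        sigma = sigma0 * np.exp(-3 * frac)
        # Gaussian neighborhood around the BMU on the map grid
        nb = np.exp(-((yy - by) ** 2 + (xx - bx) ** 2) / (2 * sigma ** 2))
        weights += lr * nb[:, :, None] * (x - weights)
    return weights

# toy "pest assemblage" data: two well-separated blobs in 3-D feature space
rng = np.random.default_rng(1)
data = np.vstack([rng.normal(0.2, 0.05, (100, 3)),
                  rng.normal(0.8, 0.05, (100, 3))])
W = train_som(data)
```

After training, the final `weights` play the role of the connection weights discussed above: for each map unit they rank how strongly each input feature (species) is associated with that region of the map.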
Swarm: robust and fast clustering method for amplicon-based studies
Directory of Open Access Journals (Sweden)
Frédéric Mahé
2014-09-01
Full Text Available Popular de novo amplicon clustering methods suffer from two fundamental flaws: arbitrary global clustering thresholds, and input-order dependency induced by centroid selection. Swarm was developed to address these issues by first clustering nearly identical amplicons iteratively using a local threshold, and then by using clusters’ internal structure and amplicon abundances to refine its results. This fast, scalable, and input-order independent approach reduces the influence of clustering parameters and produces robust operational taxonomic units.
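The local-threshold, input-order-independent growth step can be sketched as follows. This toy uses Hamming distance on equal-length strings in place of the edit distance used on real amplicons, and omits Swarm's abundance-based refinement:

```python
def hamming(a, b):
    """Mismatch count for equal-length sequences (a stand-in for the
    edit distance used on real amplicons)."""
    return sum(x != y for x, y in zip(a, b)) if len(a) == len(b) else max(len(a), len(b))

def swarm_like(amplicons, d=1):
    """Grow clusters by iteratively linking amplicons within a local
    distance threshold d of any current cluster member.  Because clusters
    are connected components under the d-link relation, the result does
    not depend on input order."""
    pool = set(amplicons)
    clusters = []
    while pool:
        seed = pool.pop()
        cluster, frontier = [seed], [seed]
        while frontier:
            nxt = [a for a in pool if any(hamming(a, f) <= d for f in frontier)]
            pool -= set(nxt)
            cluster.extend(nxt)
            frontier = nxt
        clusters.append(sorted(cluster))
    return clusters

reads = ["ACGT", "ACGA", "ACAA", "TTTT", "TTTA"]
otus = swarm_like(reads)   # the ACG* chain links into one cluster, TTT* another
```

Note how "ACGT" and "ACAA" end up in the same cluster despite being 2 mismatches apart: the local threshold d=1 chains them through "ACGA", which is exactly the behavior that removes the arbitrary global threshold.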
Variational grand-canonical electronic structure method for open systems.
Jacobi, Shlomit; Baer, Roi
2005-07-22
An ab initio method is developed for the variational grand-canonical molecular electronic structure of open systems, based on the Gibbs-Peierls-Bogoliubov inequality. We describe the theory and a practical method for performing the calculations within standard quantum chemistry codes using Gaussian basis sets. The computational effort scales similarly to the ground-state Hartree-Fock method. The quality of the approximation is studied on a hydrogen molecule by comparing to the exact Gibbs free energy, computed using full configuration-interaction calculations. We find the approximation quite accurate, with errors similar to those of the Hartree-Fock method for ground-state (zero-temperature) calculations. A further demonstration is given of the temperature effects on the bending potential curve for water. Some future directions and applications of the method are discussed. Several appendices give the mathematical and algorithmic details of the method.
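The inequality the method rests on can be written out explicitly; the form below is the standard textbook grand-canonical bound with a trial Hamiltonian $\hat{H}_0$ (the notation is ours, not the paper's):

```latex
% Gibbs-Peierls-Bogoliubov bound: for any trial Hamiltonian H_0,
% the exact grand potential A is bounded by the trial grand potential A_0
% plus the trial-ensemble average of the Hamiltonian difference.
A \le A_0 + \langle \hat{H} - \hat{H}_0 \rangle_0 ,
\qquad
\langle \cdot \rangle_0 = \operatorname{Tr}\!\left[\hat{\rho}_0 \,\cdot\,\right],
\qquad
\hat{\rho}_0 = \frac{e^{-\beta(\hat{H}_0 - \mu \hat{N})}}
                    {\operatorname{Tr}\, e^{-\beta(\hat{H}_0 - \mu \hat{N})}} .
```

Minimizing the right-hand side over the parameters of $\hat{H}_0$ yields the variational grand-canonical approximation; at $T \to 0$ this reduces to the familiar ground-state variational principle, which is consistent with the Hartree-Fock-like error behavior reported in the abstract.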
THE CONTROL VARIATIONAL METHOD FOR ELASTIC CONTACT PROBLEMS
Directory of Open Access Journals (Sweden)
Mircea Sofonea
2010-07-01
Full Text Available We consider a multivalued equation of the form Ay + F(y) ∋ f in a real Hilbert space, where A is a linear operator and F represents the (Clarke) subdifferential of some function. We prove existence and uniqueness results of the solution by using the control variational method. The main idea in this method is to minimize the energy functional associated to the nonlinear equation by arguments of optimal control theory. Then we consider a general mathematical model describing the contact between a linearly elastic body and an obstacle which leads to a variational formulation as above, for the displacement field. We apply the abstract existence and uniqueness results to prove the unique weak solvability of the corresponding contact problem. Finally, we present examples of contact and friction laws for which our results work.
A variational method in out-of-equilibrium physical systems.
Pinheiro, Mario J
2013-12-09
We propose a new variational principle for out-of-equilibrium dynamical systems that is fundamentally based on the method of Lagrange multipliers applied to the total entropy of an ensemble of particles. However, we use the fundamental equation of thermodynamics in terms of differential forms, considering U and S as 0-forms. We obtain a set of two first-order differential equations that reveal the same formal symplectic structure shared by classical mechanics, fluid mechanics and thermodynamics. From this approach, a topological torsion current emerges of the form $\varepsilon_{ijk}A^{j}\omega^{k}$, where $A^{j}$ and $\omega^{k}$ denote the components of the vector potential (gravitational and/or electromagnetic) and where $\omega$ denotes the angular velocity of the accelerated frame. We derive a special form of the Umov-Poynting theorem for rotating gravito-electromagnetic systems. The variational method is then applied to clarify the working mechanism of particular devices.
Minimizers with discontinuous velocities for the electromagnetic variational method
De Luca, Jayme
2010-01-01
The electromagnetic two-body problem has neutral-delay equations of motion that, for generic boundary data, can have solutions with discontinuous derivatives. If one wants to use these neutral-delay equations with arbitrary boundary data, solutions with discontinuous derivatives must be expected and allowed. Surprisingly, the Wheeler-Feynman electrodynamics has a variational method with mixed-type boundary conditions for which minimizer trajectories with discontinuous derivatives are also expected, as we show here. The variational method defines continuous trajectories with piecewise-defined velocities and accelerations, with electromagnetic fields defined by the equations of motion on trajectory points. Here we use the piecewise-defined minimizers with the Liénard-Wiechert formulas to define generalized electromagnetic fields almost everywhere (but on sets of points of zero measure where the advanced/retarded velocities and/or accelerations are discontinuous). Al...
Spurious singularities in the generalized Newton variational method
Energy Technology Data Exchange (ETDEWEB)
Apagyi, B.; Levay, P. (Quantum Theory Group, Institute of Physics, Technical University of Budapest, H-1521 Budapest (Hungary)); Ladanyi, K. (Institute for Theoretical Physics, Roland Eoetvoes University, H-1088 Budapest (Hungary))
1991-12-01
The generalized Newton variational method is applied to the static-exchange approximation of the electron--hydrogen-atom scattering. Slater-type basis functions are employed to expand the amplitude density. Spurious singularities are encountered in both scattering processes. The width of the unphysical singularities is broader in the case of singlet scattering. Anomalous poles appear in narrow regions of the scale parameter and are in evident correlation with the zeros of the determinant of the free-particle Green's operator. As a by-product, simple least-squares extension of the generalized Newton variational method is developed in order to avoid spurious singularities and to recognize whether or not the convergence is of secondary nature.
Directory of Open Access Journals (Sweden)
T. Karlsson
2004-07-01
Full Text Available Cluster multipoint measurements of the electric and magnetic fields from a crossing of auroral field lines at an altitude of 4 R_E are used to show that it is possible to resolve the ambiguity of temporal versus spatial variations in the fields. We show that the largest electric fields (of the order of 300 mV/m when mapped down to the ionosphere) are of a quasi-static nature, unipolar, associated with upward electron beams, stable on a time scale of at least half a minute, and located in two regions of downward current. We conclude that they are the high-altitude analogues of the intense return-current/black-aurora electric field structures observed at lower altitudes by Freja and FAST. In between these structures there are temporal fluctuations, which are likely to be downward-travelling Alfvén waves. The periods of these waves are 20-40 s, which is not consistent with periods associated with either the Alfvénic ionospheric resonator, typical field line resonances, or substorm-onset-related Pi2 oscillations. The multipoint measurements enable us to estimate a lower limit to the perpendicular wavelength of the Alfvén waves of the order of 120 km, which suggests that the perpendicular wavelength is similar to the dimension of the region between the two quasi-static structures. This might indicate that the Alfvén waves are ducted within a waveguide, where the quasi-static structures are associated with the gradients making up this waveguide.
Variations in CCL3L gene cluster sequence and non-specific gene copy numbers
Directory of Open Access Journals (Sweden)
Edberg Jeffrey C
2010-03-01
Full Text Available Abstract Background Copy number variations (CNVs) of the gene CC chemokine ligand 3-like 1 (CCL3L1) have been implicated in HIV-1 susceptibility, but the association has been inconsistent. CCL3L1 shares homology with a cluster of genes localized to chromosome 17q12, namely CCL3, CCL3L2, and CCL3L3. These genes are involved in host defense and inflammatory processes. Several CNV assays have been developed for the CCL3L1 gene. Findings Through pairwise and multiple alignments of these genes, we have shown that the homology between these genes ranges from 50% to 99% in complete gene sequences and from 70% to 100% in the exonic regions, with CCL3L1 and CCL3L3 being identical. By use of MEGA 4 and BioEdit, we aligned sense primers, anti-sense primers, and probes used in several previously described assays against pre-multiple alignments of all four chemokine genes. Each set of probes and primers aligned and matched with overlapping sequences in at least two of the four genes, indicating that previously utilized RT-PCR-based CNV assays are not specific for CCL3L1 alone. The four available assays measured median copy numbers of 2 and 3-4 in European Americans and African Americans, respectively. The concordance between the assays ranged from 0.44 to 0.83, suggesting individual discordant calls and inconsistencies between the assays and the expected gene coverage from the known sequence. Conclusions This indicates that some of the inconsistencies in the association studies could be due to assays that provide heterogeneous results. Sequence information to determine the CNV of the three genes separately would allow testing whether their association with the pathogenesis of a human disease or phenotype is affected by an individual gene or by a combination of these genes.
Rayleigh-Ritz variation method and connected-moments expansions
Energy Technology Data Exchange (ETDEWEB)
Amore, Paolo [Facultad de Ciencias, Universidad de Colima, Bernal DIaz del Castillo 340, Colima (Mexico); Fernandez, Francisco M [INIFTA (UNLP, CCT La Plata-CONICET), Division Quimica Teorica, Blvd 113 S/N, Sucursal 4, Casilla de Correo 16, 1900 La Plata (Argentina)], E-mail: fernande@quimica.unlp.edu.ar
2009-11-15
We compare the connected-moments expansion (CMX) with the Rayleigh-Ritz variational method in the Krylov space (RRK). As a benchmark model we choose the same two-dimensional anharmonic oscillator already treated earlier by means of the CMX. Our results show that the RRK converges more smoothly than the CMX. We also discuss the fact that the CMX is size consistent while the RRK is not.
A variational sinc collocation method for strong-coupling problems
Energy Technology Data Exchange (ETDEWEB)
Amore, Paolo [Facultad de Ciencias, Universidad de Colima, Bernal Diaz del Castillo 340, Colima (Mexico)
2006-06-02
We have devised a variational sinc collocation method (VSCM) which can be used to obtain accurate numerical solutions to many strong-coupling problems. Sinc functions with an optimal grid spacing are used to solve the linear and nonlinear Schrödinger equations and a lattice $\phi^4$ model in (1+1) dimensions. Our results indicate that errors decrease exponentially with the number of grid points and that a limited numerical effort is needed to reach high precision. (letter to the editor)
Solutions of fractional diffusion equations by variation of parameters method
Directory of Open Access Journals (Sweden)
Mohyud-Din Syed Tauseef
2015-01-01
Full Text Available This article is devoted to establish a novel analytical solution scheme for the fractional diffusion equations. Caputo’s formulation followed by the variation of parameters method has been employed to obtain the analytical solutions. Following the derived analytical scheme, solution of the fractional diffusion equation for several initial functions has been obtained. Graphs are plotted to see the physical behavior of obtained solutions.
Progress and applications of the variational nodal method
Energy Technology Data Exchange (ETDEWEB)
Carrico, C.B. [Argonne National Lab., IL (United States); Palmiotti, G. [CEA Centre d`Etudes Nucleaires de Cadarache, 13 - Saint-Paul-lez-Durance (France); Lewis, E.E. [Northwestern Univ., Evanston, IL (United States). Dept. of Mechanical Engineering
1995-07-01
This paper summarizes current progress and developments with the variational nodal method (VNM) and its implementation within the DIF3D code suite. After a brief development of the mathematical basis for the VNM, results from two three-dimensional benchmarks are presented for a variety of computers. Then current applications of the VNM are discussed, including diffusion theory calculations, burnup calculations, highly heterogeneous cores, higher-order spherical harmonics approximations, perturbation theory, and heterogeneous nodes.
Unified CFD Methods Via Flowfield-Dependent Variation Theory
Chung, T. J.; Schunk, Greg; Canabal, Francisco; Heard, Gary
1999-01-01
This paper addresses the flowfield-dependent variation (FDV) methods, in which complex physical phenomena are taken into account in the final form of the partial differential equations to be solved, so that finite difference methods (FDM) or finite element methods (FEM) themselves will not dictate the physics, but rather are no more than options for how to discretize between adjacent nodal points or within an element. The variation parameters introduced in the formulation are calculated from the current flowfield based on changes of Mach numbers, Reynolds numbers, Peclet numbers, and Damkohler numbers between adjacent nodal points, which play many significant roles, such as adjusting the governing equations (hyperbolic, parabolic, and/or elliptic), resolving various physical phenomena, and controlling the accuracy and stability of the numerical solution. The theory is verified by a number of example problems addressing the physical implications of the variation parameters, which resemble the flowfield itself, the shock-capturing mechanism, and transitions and interactions between inviscid/viscous, compressible/incompressible, and laminar/turbulent flows.
Total-variation-based methods for gravitational wave denoising
Torres, Alejandro; Font, José A; Ibáñez, José M
2014-01-01
We describe new methods for denoising and detection of gravitational waves embedded in additive Gaussian noise. The methods are based on Total Variation denoising algorithms. These algorithms, which do not need any a priori information about the signals, have been originally developed and fully tested in the context of image processing. To illustrate the capabilities of our methods we apply them to two different types of numerically-simulated gravitational wave signals, namely bursts produced from the core collapse of rotating stars and waveforms from binary black hole mergers. We explore the parameter space of the methods to find the set of values best suited for denoising gravitational wave signals under different conditions such as waveform type and signal-to-noise ratio. Our results show that noise from gravitational wave signals can be successfully removed with our techniques, irrespective of the signal morphology or astrophysical origin. We also combine our methods with spectrograms and show how those c...
Observations on variational and projector Monte Carlo methods.
Umrigar, C J
2015-10-28
Variational Monte Carlo and various projector Monte Carlo (PMC) methods are presented in a unified manner. Similarities and differences between the methods and choices made in designing the methods are discussed. Both methods where the Monte Carlo walk is performed in a discrete space and methods where it is performed in a continuous space are considered. It is pointed out that the usual prescription for importance sampling may not be advantageous depending on the particular quantum Monte Carlo method used and the observables of interest, so alternate prescriptions are presented. The nature of the sign problem is discussed for various versions of PMC methods. A prescription for an exact PMC method in real space, i.e., a method that does not make a fixed-node or similar approximation and does not have a finite basis error, is presented. This method is likely to be practical for systems with a small number of electrons. Approximate PMC methods that are applicable to larger systems and go beyond the fixed-node approximation are also discussed.
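A minimal variational Monte Carlo example in the spirit of the discussion, for the 1-D harmonic oscillator with a Gaussian trial function; the system and sampling parameters are illustrative, not taken from the paper:

```python
import numpy as np

def vmc_energy(alpha, n_steps=20000, step=1.0, seed=0):
    """Variational Monte Carlo for the 1-D harmonic oscillator
    (hbar = m = omega = 1) with trial wavefunction psi = exp(-alpha x^2).
    Metropolis sampling of |psi|^2; returns the mean local energy
    E_L(x) = alpha + x^2 * (1/2 - 2*alpha^2)."""
    rng = np.random.default_rng(seed)
    x = 0.0
    energies = []
    for i in range(n_steps):
        x_new = x + step * rng.uniform(-1, 1)
        # Metropolis acceptance with ratio |psi(x_new)/psi(x)|^2
        if rng.random() < np.exp(-2 * alpha * (x_new ** 2 - x ** 2)):
            x = x_new
        if i > 1000:                      # discard burn-in samples
            energies.append(alpha + x ** 2 * (0.5 - 2 * alpha ** 2))
    return float(np.mean(energies))

E = vmc_energy(0.5)      # alpha = 0.5 makes the trial function exact: E_L = 0.5
E_off = vmc_energy(0.4)  # a suboptimal alpha gives a higher mean energy
```

At the exact variational minimum the local energy is constant, so its Monte Carlo variance vanishes; this zero-variance property is a standard diagnostic in VMC.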
Directory of Open Access Journals (Sweden)
Kohei Arai
2013-07-01
Full Text Available Cluster analysis aims at identifying groups of similar objects and therefore helps to discover the distribution of patterns and interesting correlations in data sets. In this paper, we propose to provide a consistent partitioning of a dataset that allows identifying cluster patterns of any shape in numerical clustering, convex or non-convex. The method is based on a layered structure representation obtained from the distance and angle of the numerical data to the centroid, and on an iterative cluster construction that merges clusters using the nearest-neighbor distance between them. Encouraging results show the effectiveness of the proposed technique.
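The nearest-neighbor merging step can be sketched as plain single-linkage agglomeration; the layered distance-and-angle representation of the paper is omitted, so this illustrates only the merging half of the method:

```python
import numpy as np

def agglomerate(points, k):
    """Merge clusters by repeatedly joining the pair with the smallest
    nearest-neighbor (single-linkage) distance until k clusters remain."""
    clusters = [[i] for i in range(len(points))]
    while len(clusters) > k:
        best = None
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                # single-linkage distance between clusters a and b
                d = min(np.linalg.norm(points[i] - points[j])
                        for i in clusters[a] for j in clusters[b])
                if best is None or d < best[0]:
                    best = (d, a, b)
        _, a, b = best
        clusters[a].extend(clusters[b])   # merge the closest pair
        del clusters[b]
    return clusters

pts = np.array([[0.0, 0.0], [0.1, 0.0], [0.2, 0.1],   # a dense chain
                [3.0, 3.0], [3.1, 2.9]])              # a second group
groups = agglomerate(pts, k=2)
```

Because single linkage chains through nearest neighbors, it recovers non-convex, elongated clusters, which is the property the abstract emphasizes.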
Guo, Jingyu; Tian, Dehua; McKinney, Brett A.; Hartman, John L.
2010-06-01
Interactions between genetic and/or environmental factors are ubiquitous, affecting the phenotypes of organisms in complex ways. Knowledge about such interactions is becoming rate-limiting for our understanding of human disease and other biological phenomena. Phenomics refers to the integrative analysis of how all genes contribute to phenotype variation, entailing genome and organism level information. A systems biology view of gene interactions is critical for phenomics. Unfortunately the problem is intractable in humans; however, it can be addressed in simpler genetic model systems. Our research group has focused on the concept of genetic buffering of phenotypic variation, in studies employing the single-cell eukaryotic organism, S. cerevisiae. We have developed a methodology, quantitative high throughput cellular phenotyping (Q-HTCP), for high-resolution measurements of gene-gene and gene-environment interactions on a genome-wide scale. Q-HTCP is being applied to the complete set of S. cerevisiae gene deletion strains, a unique resource for systematically mapping gene interactions. Genetic buffering is the idea that comprehensive and quantitative knowledge about how genes interact with respect to phenotypes will lead to an appreciation of how genes and pathways are functionally connected at a systems level to maintain homeostasis. However, extracting biologically useful information from Q-HTCP data is challenging, due to the multidimensional and nonlinear nature of gene interactions, together with a relative lack of prior biological information. Here we describe a new approach for mining quantitative genetic interaction data called recursive expectation-maximization clustering (REMc). We developed REMc to help discover phenomic modules, defined as sets of genes with similar patterns of interaction across a series of genetic or environmental perturbations. Such modules are reflective of buffering mechanisms, i.e., genes that play a related role in the maintenance
Ing, Alex; Schwarzbauer, Christian
2014-01-01
Functional connectivity has become an increasingly important area of research in recent years. At a typical spatial resolution, approximately 300 million connections link each voxel in the brain with every other. This pattern of connectivity is known as the functional connectome. Connectivity is often compared between experimental groups and conditions. Standard methods used to control the type 1 error rate are likely to be insensitive when comparisons are carried out across the whole connectome, due to the huge number of statistical tests involved. To address this problem, two new cluster based methods – the cluster size statistic (CSS) and cluster mass statistic (CMS) – are introduced to control the family wise error rate across all connectivity values. These methods operate within a statistical framework similar to the cluster based methods used in conventional task based fMRI. Both methods are data driven, permutation based and require minimal statistical assumptions. Here, the performance of each procedure is evaluated in a receiver operator characteristic (ROC) analysis, utilising a simulated dataset. The relative sensitivity of each method is also tested on real data: BOLD (blood oxygen level dependent) fMRI scans were carried out on twelve subjects under normal conditions and during the hypercapnic state (induced through the inhalation of 6% CO2 in 21% O2 and 73%N2). Both CSS and CMS detected significant changes in connectivity between normal and hypercapnic states. A family wise error correction carried out at the individual connection level exhibited no significant changes in connectivity. PMID:24906136
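The permutation logic behind the cluster mass statistic can be sketched on a toy 1-D "connectome". Real use requires graph adjacency over connections and the paper's exact statistic, so the adjacency, threshold, and data here are illustrative assumptions:

```python
import numpy as np

def cluster_masses(stat, thresh):
    """Masses (sums) of contiguous runs of the statistic above threshold."""
    masses, run = [], 0.0
    for s in stat:
        if s > thresh:
            run += s
        elif run:
            masses.append(run)
            run = 0.0
    if run:
        masses.append(run)
    return masses

def cluster_mass_test(group_a, group_b, thresh=2.0, n_perm=500, seed=0):
    """Permutation test on the maximum cluster-mass statistic (CMS),
    controlling the family-wise error rate across all positions."""
    rng = np.random.default_rng(seed)
    data = np.vstack([group_a, group_b])
    labels = np.array([0] * len(group_a) + [1] * len(group_b))

    def tmap(lab):
        a, b = data[lab == 0], data[lab == 1]
        se = np.sqrt(a.var(0, ddof=1) / len(a) + b.var(0, ddof=1) / len(b))
        return np.abs(a.mean(0) - b.mean(0)) / se   # two-sided t-like map

    observed = cluster_masses(tmap(labels), thresh)
    # null distribution of the maximum cluster mass under label permutation
    null_max = np.array([max(cluster_masses(tmap(rng.permutation(labels)), thresh),
                             default=0.0) for _ in range(n_perm)])
    # family-wise corrected p-value for each observed cluster
    return [(m, float((null_max >= m).mean())) for m in observed]

rng = np.random.default_rng(2)
A = rng.standard_normal((12, 50))
B = rng.standard_normal((12, 50))
B[:, 20:30] += 1.5          # a contiguous block of altered "connections"
results = cluster_mass_test(A, B)
```

As in the abstract, comparing each cluster's mass against the permutation null of the maximum mass controls the family-wise error rate, whereas the same effect tested connection-by-connection with a per-connection correction would often be missed.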
Wieling, Martijn; Shackleton, Robert G.; Nerbonne, John
2013-01-01
This study explores the linguistic application of bipartite spectral graph partitioning, a graph-theoretic technique that simultaneously identifies clusters of similar localities as well as clusters of features characteristic of those localities. We compare the results using this approach with previ
Sun, Xu; Yang, Lina; Gao, Lianru; Zhang, Bing; Li, Shanshan; Li, Jun
2015-01-01
Center-oriented hyperspectral image clustering methods have been widely applied to hyperspectral remote sensing image processing; however, the drawbacks are obvious, including the over-simplicity of computing models and underutilized spatial information. In recent years, some studies have been conducted trying to improve this situation. We introduce the artificial bee colony (ABC) and Markov random field (MRF) algorithms to propose an ABC-MRF-cluster model to solve the problems mentioned above. In this model, a typical ABC algorithm framework is adopted, in which cluster centers and the iterated conditional modes algorithm's results are considered as feasible solutions and objective functions separately, and MRF is modified to be capable of dealing with the clustering problem. Finally, four datasets and two indices are used to show that the application of the ABC-cluster and ABC-MRF-cluster methods could help to obtain better image accuracy than conventional methods. Specifically, the ABC-cluster method is superior when used for a higher power of spectral discrimination, whereas the ABC-MRF-cluster method can provide better results when used for the adjusted Rand index. In experiments on simulated images with different signal-to-noise ratios, ABC-cluster and ABC-MRF-cluster showed good stability.
Discrete Direct Methods in the Fractional Calculus of Variations
Pooseh, Shakoor; Torres, Delfim F M
2012-01-01
Finite differences, as a subclass of direct methods in the calculus of variations, consist in discretizing the objective functional using appropriate approximations for derivatives that appear in the problem. This article generalizes the same idea for fractional variational problems. We consider a minimization problem with a Lagrangian that depends only on the left Riemann-Liouville fractional derivative. Using Grunwald-Letnikov definition, we approximate the objective functional in an equispaced grid as a multi-variable function of the values of the unknown function on mesh points. The problem is then transformed to an ordinary static optimization problem. The solution to the latter problem gives an approximation to the original fractional problem on mesh points.
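The Grünwald-Letnikov discretization the article builds on can be illustrated directly; the test function and step size are illustrative choices:

```python
import math

def gl_weights(alpha, n):
    """Grunwald-Letnikov coefficients w_k = (-1)^k C(alpha, k),
    computed with the recurrence w_k = w_{k-1} * (k - 1 - alpha) / k."""
    w = [1.0]
    for k in range(1, n + 1):
        w.append(w[-1] * (k - 1 - alpha) / k)
    return w

def gl_derivative(f, alpha, x, h=1e-3):
    """Left Riemann-Liouville fractional derivative of order alpha at x
    (lower terminal 0), approximated on an equispaced grid of spacing h
    via the Grunwald-Letnikov definition."""
    n = int(round(x / h))
    w = gl_weights(alpha, n)
    return sum(w[k] * f(x - k * h) for k in range(n + 1)) / h ** alpha

# check against the exact result D^{1/2} t = t^{1/2} / Gamma(3/2) at t = 1
x, alpha = 1.0, 0.5
approx = gl_derivative(lambda t: t, alpha, x)
exact = x ** (1 - alpha) / math.gamma(2 - alpha)   # = 2 / sqrt(pi)
```

Evaluating the objective functional at every mesh point with such sums is exactly what turns the fractional variational problem into the finite-dimensional static optimization problem described in the abstract.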
Storm surge model based on variational data assimilation method
Institute of Scientific and Technical Information of China (English)
Shi-li HUANG; Jian XU; De-guan WANG; Dong-yan LU
2010-01-01
By combining computation and observation information, the variational data assimilation method has the ability to eliminate errors caused by the uncertainty of parameters in practical forecasting. It was applied to a storm surge model based on unstructured grids with high spatial resolution, meant for improving the forecasting accuracy of the storm surge. By controlling the wind stress drag coefficient, the variation-based model was developed and validated through data assimilation tests in an actual storm surge induced by a typhoon. In the data assimilation tests, the model accurately identified the wind stress drag coefficient and obtained results close to the true state. Then, the actual storm surge induced by Typhoon 0515 was forecast by the developed model, and the results demonstrate its efficiency in practical application.
A convergent overlapping domain decomposition method for total variation minimization
Fornasier, Massimo
2010-06-22
In this paper we are concerned with the analysis of convergent sequential and parallel overlapping domain decomposition methods for the minimization of functionals formed by a discrepancy term with respect to the data and a total variation constraint. To our knowledge, this is the first successful attempt of addressing such a strategy for the nonlinear, nonadditive, and nonsmooth problem of total variation minimization. We provide several numerical experiments, showing the successful application of the algorithm for the restoration of 1D signals and 2D images in interpolation/inpainting problems, respectively, and in a compressed sensing problem, for recovering piecewise constant medical-type images from partial Fourier ensembles. © 2010 Springer-Verlag.
Self-Adaptive Implicit Methods for Monotone Variant Variational Inequalities
Directory of Open Access Journals (Sweden)
Ge Zhili
2009-01-01
Full Text Available The efficiency of the implicit method proposed by He (1999) depends heavily on a parameter that varies from problem to problem; each problem has its own "suitable" parameter, which is difficult to find. In this paper, we present a modified implicit method that adjusts the parameter automatically in each iteration, based on information from previous iterates. To improve the performance of the algorithm, an inexact version is proposed, in which the subproblem is solved only approximately. Under conditions as mild as those for variational inequalities, we prove the global convergence of both the exact and inexact versions of the new method. We also present several preliminary numerical results, which demonstrate that the self-adaptive implicit method, especially the inexact version, is efficient and robust.
A clustering method of Chinese medicine prescriptions based on modified firefly algorithm.
Yuan, Feng; Liu, Hong; Chen, Shou-Qiang; Xu, Liang
2016-12-01
This paper aims to study a clustering method for Chinese medicine (CM) medical cases. The traditional K-means clustering algorithm has shortcomings, such as the dependence of its results on the selection of initial values and entrapment in local optima, when processing prescriptions from CM medical cases. Therefore, a new clustering method based on the collaboration of the firefly algorithm and the simulated annealing algorithm was proposed. This algorithm dynamically determines the iterations of the firefly algorithm and the sampling of the simulated annealing algorithm according to fitness changes, and increases the diversity of the swarm by expanding the scope of the sudden jump, thereby effectively avoiding premature convergence. The results of confirmatory experiments on CM medical cases suggest that, compared with the traditional K-means clustering algorithm, this method greatly improves individual diversity and the obtained clustering results; the computed results have a certain reference value for cluster analysis of CM prescriptions.
New Nuclear Equation of State for Core-Collapse Supernovae with the Variational Method
Directory of Open Access Journals (Sweden)
Togashi H.
2014-03-01
Full Text Available We report the current status of our project to construct a new nuclear equation of state (EOS) with the variational method for core-collapse supernova (SN) simulations. Starting from a realistic nuclear Hamiltonian, the EOS for uniform nuclear matter is constructed with the cluster variational method; for non-uniform nuclear matter, the EOS is calculated with the Thomas-Fermi method. The obtained thermodynamic quantities of uniform matter are in good agreement with those from more sophisticated Fermi hypernetted chain variational calculations, and the phase diagrams constructed so far are close to those of the Shen-EOS. The structure of neutron stars calculated with this EOS at zero temperature is consistent with recent observational data, and the maximum neutron star mass is slightly larger than that with the Shen-EOS. Using the present EOS of uniform nuclear matter, we also perform a 1D simulation of core-collapse supernovae with a simplified prescription of adiabatic hydrodynamics. The stellar core with the present EOS is more compact than that with the Shen-EOS, and correspondingly, the explosion energy in this simulation is larger than that with the Shen-EOS.
Lee, Sharon X; McLachlan, Geoffrey J; Pyne, Saumyadipta
2016-01-01
We present an algorithm for modeling flow cytometry data in the presence of large inter-sample variation. Large-scale cytometry datasets often exhibit some within-class variation due to technical effects such as instrumental differences and variations in data acquisition, as well as subtle biological heterogeneity within the class of samples. Failure to account for such variations in the model may lead to inaccurate matching of populations across a batch of samples and poor performance in classification of unlabeled samples. In this paper, we describe the Joint Clustering and Matching (JCM) procedure for simultaneous segmentation and alignment of cell populations across multiple samples. Under the JCM framework, a multivariate mixture distribution is used to model the distribution of the expressions of a fixed set of markers for each cell in a sample, such that the components in the mixture model may correspond to the various populations of cells with similar marker expressions (that is, clusters) in the composition of the sample. For each class of samples, an overall class template is formed by the adoption of random-effects terms to model the inter-sample variation within a class. The construction of a parametric template for each class allows for direct quantification of the differences between the template and each sample, and also between each pair of samples, both within and between classes. The classification of a new unclassified sample is then undertaken by assigning it to the class whose template density is closest to the sample's fitted mixture density. For illustration, we use a symmetric form of the Kullback-Leibler divergence as the distance measure between two densities, but other distance measures can also be applied. We show and demonstrate on four real datasets how the JCM procedure can be used to carry out the tasks of automated clustering and alignment of cell
A Bayesian cluster analysis method for single-molecule localization microscopy data.
Griffié, Juliette; Shannon, Michael; Bromley, Claire L; Boelen, Lies; Burn, Garth L; Williamson, David J; Heard, Nicholas A; Cope, Andrew P; Owen, Dylan M; Rubin-Delanchy, Patrick
2016-12-01
Cell function is regulated by the spatiotemporal organization of the signaling machinery, and a key facet of this is molecular clustering. Here, we present a protocol for the analysis of clustering in data generated by 2D single-molecule localization microscopy (SMLM)-for example, photoactivated localization microscopy (PALM) or stochastic optical reconstruction microscopy (STORM). Three features of such data can cause standard cluster analysis approaches to be ineffective: (i) the data take the form of a list of points rather than a pixel array; (ii) there is a non-negligible unclustered background density of points that must be accounted for; and (iii) each localization has an associated uncertainty in regard to its position. These issues are overcome using a Bayesian, model-based approach. Many possible cluster configurations are proposed and scored against a generative model, which assumes Gaussian clusters overlaid on a completely spatially random (CSR) background, before every point is scrambled by its localization precision. We present the process of generating simulated and experimental data that are suitable to our algorithm, the analysis itself, and the extraction and interpretation of key cluster descriptors such as the number of clusters, cluster radii and the number of localizations per cluster. Variations in these descriptors can be interpreted as arising from changes in the organization of the cellular nanoarchitecture. The protocol requires no specific programming ability, and the processing time for one data set, typically containing 30 regions of interest, is ∼18 h; user input takes ∼1 h.
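The generative model described above (Gaussian clusters overlaid on a completely spatially random background, with every point jittered by its localization precision) is easy to simulate. The sketch below is an illustrative data generator under assumed parameter values, not the authors' protocol code:

```python
import numpy as np

def simulate_smlm(n_clusters=5, pts_per_cluster=50, cluster_sd=20.0,
                  n_background=300, fov=3000.0, loc_precision=10.0, seed=0):
    """Generate a synthetic 2D SMLM-style point set: Gaussian clusters
    overlaid on a completely spatially random (CSR) background, with
    every localization jittered by its (here constant) precision."""
    rng = np.random.default_rng(seed)
    centers = rng.uniform(0, fov, size=(n_clusters, 2))
    clustered = np.vstack([
        c + cluster_sd * rng.standard_normal((pts_per_cluster, 2))
        for c in centers
    ])
    background = rng.uniform(0, fov, size=(n_background, 2))
    points = np.vstack([clustered, background])
    # scramble each localization by its localization precision
    points += loc_precision * rng.standard_normal(points.shape)
    labels = np.concatenate([np.repeat(np.arange(n_clusters), pts_per_cluster),
                             np.full(n_background, -1)])   # -1 = background
    return points, labels, centers

pts, labels, centers = simulate_smlm()
print(pts.shape)  # (550, 2)
```

Such synthetic data, where ground-truth cluster membership is known, is what allows a model-based cluster analysis to be validated before it is applied to experimental localizations.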
Classification of excessive domestic water consumption using Fuzzy Clustering Method
Zairi Zaidi, A.; Rasmani, Khairul A.
2016-08-01
Demand for clean and treated water is increasing all over the world. Therefore it is crucial to conserve water for better use and to avoid unnecessary, excessive consumption or wastage of this natural resource. Classification of excessive domestic water consumption is a difficult task due to the complexity in determining the amount of water usage per activity, especially as the data is known to vary between individuals. In this study, classification of excessive domestic water consumption is carried out using a well-known Fuzzy C-Means (FCM) clustering algorithm. Consumer data containing information on daily, weekly and monthly domestic water usage was employed for the purpose of classification. Using the same dataset, the result produced by the FCM clustering algorithm is compared with the result obtained from a statistical control chart. The finding of this study demonstrates the potential use of the FCM clustering algorithm for the classification of domestic consumer water consumption data.
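A minimal numpy sketch of the Fuzzy C-Means algorithm described above (a textbook implementation, not the authors' code; the data and parameters are illustrative assumptions):

```python
import numpy as np

def fuzzy_c_means(X, c=2, m=2.0, max_iter=300, tol=1e-5, seed=0):
    """Basic Fuzzy C-Means: alternate between updating the fuzzy
    membership matrix U (n x c, rows sum to 1) and the cluster centers."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    U = rng.random((n, c))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(max_iter):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        # squared distances from every point to every center
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        d2 = np.fmax(d2, 1e-12)
        # standard FCM membership update: u_ik proportional to d_ik^(-2/(m-1))
        inv = d2 ** (-1.0 / (m - 1.0))
        U_new = inv / inv.sum(axis=1, keepdims=True)
        if np.abs(U_new - U).max() < tol:
            U = U_new
            break
        U = U_new
    return centers, U

# two well-separated blobs
rng = np.random.default_rng(1)
X = np.vstack([rng.standard_normal((100, 2)),
               rng.standard_normal((100, 2)) + 6.0])
centers, U = fuzzy_c_means(X, c=2)
print(np.linalg.norm(centers[0] - centers[1]) > 4.0)
```

Unlike hard K-means, each consumer record receives a degree of membership in every cluster, which is useful when "excessive" usage shades gradually into normal usage.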
An explicit four-dimensional variational data assimilation method
Institute of Scientific and Technical Information of China (English)
QIU ChongJian; ZHANG Lei; SHAO AiMei
2007-01-01
A new data assimilation method called the explicit four-dimensional variational (4DVAR) method is proposed. In this method, the singular value decomposition (SVD) is used to construct the orthogonal basis vectors from a forecast ensemble in a 4D space. The basis vectors represent not only the spatial structure of the analysis variables but also the temporal evolution. After the analysis variables are expressed by a truncated expansion of the basis vectors in the 4D space, the control variables in the cost function appear explicitly, so that the adjoint model, which is used to derive the gradient of cost function with respect to the control variables, is no longer needed. The new technique significantly simplifies the data assimilation process. The advantage of the proposed method is demonstrated by several experiments using a shallow water numerical model and the results are compared with those of the conventional 4DVAR. It is shown that when the observation points are very dense, the conventional 4DVAR is better than the proposed method. However, when the observation points are sparse, the proposed method performs better. The sensitivity of the proposed method with respect to errors in the observations and the numerical model is lower than that of the conventional method.
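The adjoint-free idea described above can be sketched on a toy problem: build an orthogonal basis from an ensemble by SVD, then note that the cost function becomes quadratic in the expansion coefficients, so its minimizer is an explicit least-squares solution. Everything below (sizes, the synthetic ensemble and truth, the observation operator) is an illustrative assumption, not the authors' shallow water setup:

```python
import numpy as np

rng = np.random.default_rng(0)
n_state, n_ens, n_trunc = 200, 30, 10   # 4D state size, ensemble size, truncation

# forecast ensemble: columns are space-time ("4D") state vectors
ensemble = rng.standard_normal((n_state, n_ens))
mean = ensemble.mean(axis=1, keepdims=True)
anomalies = ensemble - mean

# SVD gives orthogonal basis vectors spanning the ensemble subspace
U, s, Vt = np.linalg.svd(anomalies, full_matrices=False)
basis = U[:, :n_trunc]                   # truncated expansion basis

# synthetic truth inside the subspace, observed sparsely with noise
coef_true = rng.standard_normal(n_trunc)
truth = mean[:, 0] + basis @ coef_true
obs_idx = rng.choice(n_state, size=40, replace=False)
obs = truth[obs_idx] + 0.01 * rng.standard_normal(40)

# cost J(c) = ||H(mean + basis c) - obs||^2 is quadratic in c, so the
# minimizer is a linear least-squares solution -- no adjoint model needed
H = basis[obs_idx, :]
c, *_ = np.linalg.lstsq(H, obs - mean[obs_idx, 0], rcond=None)
analysis = mean[:, 0] + basis @ c

print(np.linalg.norm(analysis - truth) / np.linalg.norm(truth))
```

The sparse-observation advantage reported in the paper is visible in this structure: only as many coefficients as retained basis vectors need to be constrained, not the full state.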
Engineering practice variation through provider agreement: a cluster-randomized feasibility trial
Directory of Open Access Journals (Sweden)
McCarren M
2014-10-01
Full Text Available Madeline McCarren,1 Elaine L Twedt,1 Faizmohamed M Mansuri,2 Philip R Nelson,3 Brian T Peek3 1Pharmacy Benefits Management Services, Department of Veterans Affairs, Hines, IL, 2Wilkes-Barre VA Medical Center, Wilkes-Barre, PA, 3Charles George VA Medical Center, Asheville, NC, USA Purpose: Minimal-risk randomized trials that can be embedded in practice could facilitate learning health-care systems. A cluster-randomized design was proposed to compare treatment strategies by assigning clusters (eg, providers) to “favor” a particular drug, with providers retaining autonomy for specific patients. Patient informed consent might be waived, broadening inclusion. However, it is not known whether providers will adhere to the assignment or whether institutional review boards will waive consent. We evaluated the feasibility of this trial design. Subjects and methods: Agreeable providers were randomized to “favor” either hydrochlorothiazide or chlorthalidone when starting patients on thiazide-type therapy for hypertension. The assignment applied when the provider had already decided to start a thiazide, and providers could deviate from the strategy as needed. Prescriptions were aggregated to produce a provider strategy-adherence rate. Results: All four institutional review boards waived documentation of patient consent. Providers (n=18) followed their assigned strategy for most of their new thiazide prescriptions (n=138 patients). In the “favor hydrochlorothiazide” group, there was 99% adherence to that strategy. In the “favor chlorthalidone” group, chlorthalidone comprised 77% of new thiazide starts, up from 1% in the pre-study period. When the assigned strategy was followed, dosing in the recommended range was 48% for hydrochlorothiazide (25–50 mg/day) and 100% for chlorthalidone (12.5–25.0 mg/day). Providers were motivated to participate by a desire to contribute to a comparative effectiveness study. A study promotional mug, provider information
Šubelj, Lovro; Waltman, Ludo
2015-01-01
Clustering methods are applied regularly in the bibliometric literature to identify research areas or scientific fields. These methods are for instance used to group publications into clusters based on their relations in a citation network. In the network science literature, many clustering methods, often referred to as graph partitioning or community detection techniques, have been developed. Focusing on the problem of clustering the publications in a citation network, we present a systematic comparison of the performance of a large number of these clustering methods. Using a number of different citation networks, some of them relatively small and others very large, we extensively study the statistical properties of the results provided by different methods. In addition, we also carry out an expert-based assessment of the results produced by different methods. The expert-based assessment focuses on publications in the field of scientometrics. Our findings seem to indicate that there is a trade-off between di...
PPA BASED PREDICTION-CORRECTION METHODS FOR MONOTONE VARIATIONAL INEQUALITIES
Institute of Scientific and Technical Information of China (English)
He Bingsheng; Jiang Jianlin; Qian Maijian; Xu Ya
2005-01-01
In this paper we study proximal point algorithm (PPA) based prediction-correction (PC) methods for monotone variational inequalities. Each iteration of these methods consists of a prediction and a correction. The predictors are produced by inexact PPA steps. The new iterates are then updated by a correction using the PPA formula. We present two profit functions which serve two purposes. First, we show that the profit functions are tight lower bounds of the improvements obtained in each iteration; based on this conclusion we obtain the convergence inexactness restrictions for the prediction step. Second, we show that the profit functions are quadratically dependent upon the step lengths; thus the optimal step lengths are obtained in the correction step. In the last part of the paper we compare the strengths of different methods based on their inexactness restrictions.
Multibeam Antennas Array Pattern Synthesis Using a Variational Method
Directory of Open Access Journals (Sweden)
F. T. Bendimerad
2007-06-01
Full Text Available In this paper a new method is described for multibeam antenna synthesis in which both the amplitude and the phase of each radiating element are design variables. The developed optimization method makes it possible to solve the synthesis problem and to satisfy all the constraints imposed on the radiation pattern. Two approaches for visualizing satellite antenna radiation patterns are presented. Gain-level contours drawn over a geographical map give the clearest qualitative information. A three-dimensional (3D) surface plot displays the qualitative shape of the radiation pattern more naturally. The simulation results have shown the power, precision, and speed of the variational method with respect to the constraints imposed on the radiation pattern of the multibeam antenna network.
An integral nodal variational method for multigroup criticality calculations
Energy Technology Data Exchange (ETDEWEB)
Lewis, E.E. [Northwestern Univ., Evanston, IL (United States). Dept. of Mechanical Engineering]. E-mail: e-lewis@northwestern.edu; Smith, M.A.; Palmiotti, G. [Argonne National Lab., IL (United States)]. E-mail: masmith@ra.anl.gov; gpalmiotti@ra.anl.gov; Tsoulfanidis, N. [Missouri Univ., Rolla, MO (United States). Dept. of Nuclear Engineering]. E-mail: tsoul@umr.edu
2003-07-01
An integral formulation of the variational nodal method is presented and applied to a series of benchmark criticality problems. The method combines an integral transport treatment of the even-parity flux within the spatial node with an odd-parity spherical harmonics expansion of the Lagrange multipliers at the node interfaces. The response matrices that result from this formulation are compatible with those in the VARIANT code at Argonne National Laboratory. Either homogeneous or heterogeneous nodes may be employed. In general, for calculations requiring higher-order angular approximations, the integral method yields solutions with comparable accuracy while requiring substantially less CPU time and memory than the standard spherical harmonics expansion using the same spatial approximations. (author)
Method for discovering relationships in data by dynamic quantum clustering
Energy Technology Data Exchange (ETDEWEB)
Weinstein, Marvin; Horn, David
2014-10-28
Data clustering is provided according to a dynamical framework based on quantum mechanical time evolution of states corresponding to data points. To expedite computations, we can approximate the time-dependent Hamiltonian formalism by a truncated calculation within a set of Gaussian wave-functions (coherent states) centered around the original points. This allows for analytic evaluation of the time evolution of all such states, opening up the possibility of exploration of relationships among data-points through observation of varying dynamical-distances among points and convergence of points into clusters. This formalism may be further supplemented by preprocessing, such as dimensional reduction through singular value decomposition and/or feature filtering.
Method for discovering relationships in data by dynamic quantum clustering
Energy Technology Data Exchange (ETDEWEB)
Weinstein, Marvin; Horn, David
2017-05-09
Data clustering is provided according to a dynamical framework based on quantum mechanical time evolution of states corresponding to data points. To expedite computations, we can approximate the time-dependent Hamiltonian formalism by a truncated calculation within a set of Gaussian wave-functions (coherent states) centered around the original points. This allows for analytic evaluation of the time evolution of all such states, opening up the possibility of exploration of relationships among data-points through observation of varying dynamical-distances among points and convergence of points into clusters. This formalism may be further supplemented by preprocessing, such as dimensional reduction through singular value decomposition and/or feature filtering.
Unified MOSFET Short Channel Factor Using Variational Method
Institute of Scientific and Technical Information of China (English)
陈文松; 田立林; 李志坚
2000-01-01
A new natural gate length scale for MOSFETs is presented using the variational method. A comparison of short channel effects is conducted for the uniform-channel-doping bulk MOSFET, intrinsic-channel-doping bulk MOSFET, SOI MOSFET, and double-gated MOSFET, and the results are verified by 2D numerical simulation. Taking all the 2D effects of the front gate dielectric, back gate dielectric, and silicon film into account, the validity of the electrical equivalent oxide thickness is investigated with this model, which shows that it is valid only when the gate dielectric constant is relatively small.
Meszaros, Szabolcs; Shetrone, Matthew; Lucatello, Sara; Troup, Nicholas W; Bovy, Jo; Cunha, Katia; Garcia-Hernandez, Domingo A; Overbeek, Jamie C; Prieto, Carlos Allende; Beers, Timothy C; Frinchaboy, Peter M; Perez, Ana E Garcia; Hearty, Fred R; Holtzman, Jon; Majewski, Steven R; Nidever, David L; Schiavon, Ricardo P; Schneider, Donald P; Sobeck, Jennifer S; Smith, Verne V; Zamora, Olga; Zasowski, Gail
2015-01-01
We investigate the light-element behavior of red giant stars in Northern globular clusters (GCs) observed by the SDSS-III Apache Point Observatory Galactic Evolution Experiment (APOGEE). We derive abundances of nine elements (Fe, C, N, O, Mg, Al, Si, Ca, and Ti) for 428 red giant stars in 10 globular clusters. The intrinsic abundance range relative to measurement errors is examined, and the well-known C-N and Mg-Al anticorrelations are explored using an extreme-deconvolution code for the first time in a consistent way. We find that Mg and Al drive the population membership in most clusters, except in M107 and M71, the two most metal-rich clusters in our study, where the grouping is most sensitive to N. We also find a diversity in the abundance distributions, with some clusters exhibiting clear abundance bimodalities (for example M3 and M53) while others show extended distributions. The spread of Al abundances increases significantly as cluster average metallicity decreases as previously found by other works, ...
Webb, Jeremy J.; Vesperini, Enrico
2016-10-01
We make use of N-body simulations to determine the relationship between two observable parameters that are used to quantify mass segregation and energy equipartition in star clusters. Mass segregation can be quantified by measuring how the slope of a cluster's stellar mass function α changes with clustercentric distance r, and then calculating δ_α = dα(r)/d ln(r/r_m), where r_m is the cluster's half-mass radius. The degree of energy equipartition in a cluster is quantified by η, which is a measure of how stellar velocity dispersion σ depends on stellar mass m via σ(m) ∝ m^(-η). Through a suite of N-body star cluster simulations with a range of initial sizes, binary fractions, orbits, black hole retention fractions, and initial mass functions, we present the co-evolution of δ_α and η. We find that measurements of the global η are strongly affected by the radial dependence of σ and mean stellar mass, and the relationship between η and δ_α depends mainly on the cluster's initial conditions and the tidal field. Within r_m, where these effects are minimized, we find that η and δ_α initially share a linear relationship. However, once the degree of mass segregation increases such that the radial dependence of σ and mean stellar mass become a factor within r_m, or the cluster undergoes core collapse, the relationship breaks down. We propose a method for determining η within r_m from an observational measurement of δ_α. In cases where η and δ_α can be measured independently, this new method offers a way of measuring the cluster's dynamical state.
Webb, Jeremy J.; Vesperini, Enrico
2017-01-01
We make use of N-body simulations to determine the relationship between two observable parameters that are used to quantify mass segregation and energy equipartition in star clusters. Mass segregation can be quantified by measuring how the slope of a cluster's stellar mass function α changes with clustercentric distance r, and then calculating δ_α = dα(r)/d ln(r/r_m), where r_m is the cluster's half-mass radius. The degree of energy equipartition in a cluster is quantified by η, which is a measure of how stellar velocity dispersion σ depends on stellar mass m via σ(m) ∝ m^(-η). Through a suite of N-body star cluster simulations with a range of initial sizes, binary fractions, orbits, black hole retention fractions, and initial mass functions, we present the co-evolution of δ_α and η. We find that measurements of the global η are strongly affected by the radial dependence of σ and mean stellar mass, and the relationship between η and δ_α depends mainly on the cluster's initial conditions and the tidal field. Within r_m, where these effects are minimized, we find that η and δ_α initially share a linear relationship. However, once the degree of mass segregation increases such that the radial dependence of σ and mean stellar mass become a factor within r_m, or the cluster undergoes core collapse, the relationship breaks down. We propose a method for determining η within r_m from an observational measurement of δ_α. In cases where η and δ_α can be measured independently, this new method offers a way of measuring the cluster's dynamical state.
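The σ(m) ∝ m^(-η) measurement described above can be illustrated on synthetic data: draw stellar velocities with a known η, bin the stars by mass, and recover η from a log-log fit of the binned dispersions. This is an illustrative toy under assumed parameter values, not the authors' N-body analysis:

```python
import numpy as np

rng = np.random.default_rng(0)
eta_true, sigma0 = 0.3, 10.0

# synthetic cluster: masses and 1D velocities with sigma(m) = sigma0 * m^(-eta)
masses = rng.uniform(0.2, 1.0, size=20000)
velocities = sigma0 * masses**(-eta_true) * rng.standard_normal(masses.size)

# bin stars by mass, measure the velocity dispersion in each bin,
# then fit log(sigma) = log(sigma0) - eta * log(m)
bins = np.linspace(0.2, 1.0, 9)
log_m, log_sigma = [], []
for lo, hi in zip(bins[:-1], bins[1:]):
    sel = (masses >= lo) & (masses < hi)
    log_m.append(np.log(masses[sel].mean()))
    log_sigma.append(np.log(velocities[sel].std()))
slope, intercept = np.polyfit(log_m, log_sigma, 1)
eta_est = -slope
print(round(eta_est, 2))
```

In real clusters the same fit is complicated by the radial dependence of σ and of the mean stellar mass, which is exactly the systematic effect the paper quantifies.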
The coupled cluster method and entanglement in three fermion systems
Lévay, Péter; Nagy, Szilvia; Pipek, János; Sárosi, Gábor
2017-01-01
The Coupled Cluster (CC) and full CI expansions are studied for three fermions with six and seven modes. Surprisingly, the CC expansion is tailor-made to characterize the usual stochastic local operations and classical communication (SLOCC) entanglement classes. This means that the notion of a SLOCC transformation shows up quite naturally as one relating the CC and CI expansions, and going from the CI expansion to the CC one is equivalent to obtaining a form for the state where the structure of the entanglement classes is transparent. In this picture, entanglement is characterized by the parameters of the cluster operators describing transitions from occupied states to singles, doubles, and triples of non-occupied ones. Using the CC parametrization of states in the seven-mode case, we give a simple formula for the unique SLOCC invariant J. Then we consider a perturbation problem featuring a state from the unique SLOCC class characterized by J ≠ 0. For this state, with entanglement generated by doubles, we investigate the phenomenon of changing the entanglement type due to the perturbing effect of triples. We show that there are states with real amplitudes such that their entanglement, encoded into configurations of clusters of doubles, is protected from errors generated by triples. Finally, we put forward a proposal to use the parameters of the cluster operator describing transitions to doubles for entanglement characterization. Compared to the usual SLOCC classes, this provides a coarse-grained approach to fermionic entanglement.
An Empirical Comparison of Variable Standardization Methods in Cluster Analysis.
Schaffer, Catherine M.; Green, Paul E.
1996-01-01
The common marketing research practice of standardizing the columns of a persons-by-variables data matrix prior to clustering the entities corresponding to the rows was evaluated with 10 large-scale data sets. Results indicate that the column standardization practice may be problematic for some kinds of data that marketing researchers used for…
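The column standardization practice under evaluation amounts to z-scoring each variable before clustering, so that variables measured on large scales do not dominate the distance computations. A minimal sketch with illustrative data:

```python
import numpy as np

def standardize_columns(X):
    """Scale each column (variable) to zero mean and unit variance,
    the common pre-clustering practice evaluated in the study."""
    return (X - X.mean(axis=0)) / X.std(axis=0)

# two variables on very different scales
X = np.array([[1.0, 1000.0],
              [2.0, 3000.0],
              [3.0, 2000.0]])
Z = standardize_columns(X)
print(Z.mean(axis=0), Z.std(axis=0))  # columns now have ~0 mean, unit sd
```

The study's point is that this transformation is not always benign: it reweights variables and can change which cluster structure a distance-based algorithm finds.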
Galhenage, Randima P; Xie, Kangmin; Diao, Weijian; Tengco, John Meynard M; Seuser, Grant S; Monnier, John R; Chen, Donna A
2015-11-14
Bimetallic Pt-Ru clusters have been grown on highly ordered pyrolytic graphite (HOPG) surfaces by vapor deposition and by electroless deposition. These studies help to bridge the material gap between well-characterized vapor deposited clusters and electrolessly deposited clusters, which are better suited for industrial catalyst preparation. In the vapor deposition experiments, bimetallic clusters were formed by the sequential deposition of Pt on Ru or Ru on Pt. Seed clusters of the first metal were grown on HOPG surfaces that were sputtered with Ar(+) to introduce defects, which act as nucleation sites for Pt or Ru. On the unmodified HOPG surface, both Pt and Ru clusters preferentially nucleated at the step edges, whereas on the sputtered surface, clusters with relatively uniform sizes and spatial distributions were formed. Low energy ion scattering experiments showed that the surface compositions of the bimetallic clusters are Pt-rich, regardless of the order of deposition, indicating that the interdiffusion of metals within the clusters is facile at room temperature. Bimetallic clusters on sputtered HOPG were prepared by the electroless deposition of Pt on Ru seed clusters from a Pt(+2) solution using dimethylamine borane as the reducing agent at pH 11 and 40 °C. After exposure to the electroless deposition bath, Pt was selectively deposited on Ru, as demonstrated by the detection of Pt on the surface by XPS, and the increase in the average cluster height without an increase in the number of clusters, indicating that Pt atoms are incorporated into the Ru seed clusters. Electroless deposition of Ru on Pt seed clusters was also achieved, but it should be noted that this deposition method is extremely sensitive to the presence of other metal ions in solution that have a higher reduction potential than the metal ion targeted for deposition.
Novel crystal timing calibration method based on total variation
Yu, Xingjian; Isobe, Takashi; Watanabe, Mitsuo; Liu, Huafeng
2016-11-01
A novel crystal timing calibration method based on total variation (TV), abbreviated as ‘TV merge’, has been developed for a high-resolution positron emission tomography (PET) system. The proposed method was developed for a system with a large number of crystals and can provide timing calibration at the crystal level. In the proposed method, the timing calibration process was formulated as a linear problem, and a TV constraint was added to the linear equation to robustly optimize the timing resolution. Moreover, to solve the computer memory problem associated with calculating the timing calibration factors for systems with a large number of crystals, a merge component was used for obtaining the crystal-level timing calibration values. In contrast with other conventional methods, data measured from a standard cylindrical phantom filled with a radioisotope solution were sufficient for performing a high-precision crystal-level timing calibration. In this paper, both simulation and experimental studies were performed to demonstrate the effectiveness and robustness of the TV merge method. We compare the timing resolutions of a 22Na point source, located in the field of view (FOV) of the brain PET system, obtained with various calibration techniques. After implementing the TV merge method, the timing resolution improved from 3.34 ns full width at half maximum (FWHM) to 2.31 ns FWHM.
A variational Bayesian method to inverse problems with impulsive noise
Jin, Bangti
2012-01-01
We propose a novel numerical method for solving inverse problems subject to impulsive noises which possibly contain a large number of outliers. The approach is of Bayesian type, and it exploits a heavy-tailed t distribution for the data noise to achieve robustness with respect to outliers. A hierarchical model with all hyper-parameters automatically determined from the given data is described. An algorithm of variational type is developed by minimizing the Kullback-Leibler divergence between the true posterior distribution and a separable approximation. The numerical method is illustrated on several one- and two-dimensional linear and nonlinear inverse problems arising from heat conduction, including estimating boundary temperature, heat flux and heat transfer coefficient. The results show its robustness to outliers and the fast and steady convergence of the algorithm. © 2011 Elsevier Inc.
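The robustness mechanism of a heavy-tailed t noise model can be sketched in a simpler setting: EM for linear regression with Student-t noise, where latent precision weights automatically downweight outliers. This is a generic sketch, not the paper's hierarchical variational algorithm; the degrees of freedom nu and the data are illustrative assumptions:

```python
import numpy as np

def t_robust_regression(A, y, nu=1.0, iters=50):
    """EM for linear regression with Student-t noise: the E-step computes
    latent precision weights w_i that downweight outliers; the M-step is
    a weighted least-squares fit plus a scale update."""
    n = len(y)
    x = np.linalg.lstsq(A, y, rcond=None)[0]       # ordinary LS start
    sigma2 = np.var(y - A @ x)
    for _ in range(iters):
        r = y - A @ x
        w = (nu + 1.0) / (nu + r**2 / sigma2)      # E-step: expected weights
        Aw = A * w[:, None]
        x = np.linalg.solve(A.T @ Aw, Aw.T @ y)    # M-step: weighted LS
        r = y - A @ x
        sigma2 = np.sum(w * r**2) / n              # M-step: scale
    return x

# straight line with 10% gross outliers
rng = np.random.default_rng(0)
A = np.column_stack([np.ones(100), np.linspace(0.0, 1.0, 100)])
x_true = np.array([1.0, 2.0])
y = A @ x_true + 0.05 * rng.standard_normal(100)
y[::10] += 5.0                                     # impulsive outliers
x_robust = t_robust_regression(A, y)
x_ls = np.linalg.lstsq(A, y, rcond=None)[0]
print(np.abs(x_robust - x_true).max() < np.abs(x_ls - x_true).max())
```

Under a Gaussian noise model the outliers drag the fit; under the t model their expected precisions shrink, which is the same effect the paper exploits for impulsive noise in inverse problems.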
Introduction to the variational and diffusion Monte Carlo methods
Toulouse, Julien; Umrigar, C J
2015-01-01
We provide a pedagogical introduction to the two main variants of real-space quantum Monte Carlo methods for electronic-structure calculations: variational Monte Carlo (VMC) and diffusion Monte Carlo (DMC). Assuming no prior knowledge of the subject, we review in depth the Metropolis-Hastings algorithm used in VMC for sampling the square of an approximate wave function, discussing details important for applications to electronic systems. We also review in detail the more sophisticated DMC algorithm within the fixed-node approximation, introduced to avoid the infamous fermionic sign problem, which allows one to sample a more accurate approximation to the ground-state wave function. Throughout this review, we discuss the statistical methods used for evaluating expectation values and statistical uncertainties. In particular, we show how to estimate nonlinear functions of expectation values and their statistical uncertainties.
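The VMC half of the review can be illustrated on the 1-D harmonic oscillator, where the trial wave function and local energy are known in closed form; the trial exponent, proposal step size, and sample counts below are arbitrary choices for this sketch.

```python
import numpy as np

# VMC sketch for H = -1/2 d^2/dx^2 + x^2/2 (1-D harmonic oscillator, atomic
# units) with trial psi = exp(-a x^2 / 2).
rng = np.random.default_rng(2)
a = 0.9

def log_psi2(x):
    return -a * x ** 2                 # log |psi(x)|^2 up to a constant

def local_energy(x):
    # E_L = -psi''/(2 psi) + V = a/2 + x^2 (1 - a^2)/2
    return a / 2 + x ** 2 * (1 - a ** 2) / 2

x, samples = 0.0, []
for step in range(20000):
    x_new = x + rng.normal(scale=1.0)                         # symmetric proposal
    if np.log(rng.random()) < log_psi2(x_new) - log_psi2(x):  # Metropolis test
        x = x_new
    if step > 1000:                                           # discard burn-in
        samples.append(local_energy(x))
energy = float(np.mean(samples))   # variational: >= 0.5, the exact ground state
```

For a = 1 the trial is exact and the local energy is constant at 0.5 with zero variance; this zero-variance property is one reason VMC wave-function optimization targets the variance as well as the energy.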
Light and Heavy Element Abundance Variations in the Outer Halo Globular Cluster NGC 6229
Johnson, Christian I.; Caldwell, Nelson; Rich, R. Michael; Walker, Matthew G.
2017-10-01
NGC 6229 is a relatively massive outer halo globular cluster that is primarily known for exhibiting a peculiar bimodal horizontal branch morphology. Given the paucity of spectroscopic data on this cluster, we present a detailed chemical composition analysis of 11 red giant branch members based on high resolution (R ≈ 38,000), high S/N (>100) spectra obtained with the MMT-Hectochelle instrument. We find the cluster to have a mean heliocentric radial velocity of -138.1 ± 1.0 km s^-1, a small velocity dispersion of 3.8 (+1.0, -0.7) km s^-1, and a relatively low mass-to-light ratio (M/L_V)_⊙ = 0.82 (+0.49, -0.28). The cluster is moderately metal-poor, with a mean [Fe/H] = -1.13 dex and a modest dispersion of 0.06 dex. However, 18% (2/11) of the stars in our sample have strongly enhanced [La, Nd/Fe] ratios that are correlated with a small (∼0.05 dex) increase in [Fe/H]. NGC 6229 shares several chemical signatures with M75, NGC 1851, and the intermediate metallicity populations of ω Cen, which lead us to conclude that NGC 6229 is a lower mass iron-complex cluster. The light elements exhibit the classical (anti-)correlations that extend up to Si, but the cluster possesses a large gap in the O-Na plane that separates first and second generation stars. NGC 6229 also has unusually low [Na, Al/Fe] abundances that are consistent with an accretion origin. A comparison with M54 and other Sagittarius clusters suggests that NGC 6229 could also be the remnant core of a former dwarf spheroidal galaxy.
The initial conditions of observed star clusters - I. Method description and validation
Pijloo, J T; Alexander, P E R; Gieles, M; Larsen, S S; Groot, P J; Devecchi, B
2015-01-01
We have coupled a fast, parametrized star cluster evolution code to a Markov Chain Monte Carlo code to determine the distribution of probable initial conditions of observed star clusters, which may serve as a starting point for future $N$-body calculations. In this paper we validate our method by applying it to a set of star clusters which have been studied in detail numerically with $N$-body simulations and Monte Carlo methods: the Galactic globular clusters M4, 47 Tucanae, NGC 6397, M22, $\\omega$ Centauri, Palomar 14 and Palomar 4, the Galactic open cluster M67, and the M31 globular cluster G1. For each cluster we derive a distribution of initial conditions that, after evolution up to the cluster's current age, evolves to the currently observed conditions. We find that there is a connection between the morphology of the distribution of initial conditions and the dynamical age of a cluster and that a degeneracy in the initial half-mass radius towards small radii is present for clusters which have undergone a...
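The coupling of a fast forward model to MCMC can be reduced to a toy version: a parametrized "evolution code" maps an initial mass to a present-day mass, and a Metropolis chain recovers the distribution of initial conditions consistent with the observation. The exponential mass-loss model and all numbers are invented for the sketch.

```python
import numpy as np

# Infer the initial mass m0 of a "cluster" from its observed present-day mass.
rng = np.random.default_rng(3)

def evolve(m0, age=10.0, tau=30.0):
    return m0 * np.exp(-age / tau)     # stand-in for the fast evolution code

m_obs, sigma = evolve(5.0), 0.1        # "observed" present-day mass

def log_post(m0):
    if m0 <= 0:
        return -np.inf                 # flat positive prior
    return -0.5 * ((evolve(m0) - m_obs) / sigma) ** 2

chain, m0 = [], 1.0
for _ in range(20000):
    prop = m0 + rng.normal(scale=0.3)  # random-walk proposal
    if np.log(rng.random()) < log_post(prop) - log_post(m0):
        m0 = prop
    chain.append(m0)
posterior = np.array(chain[5000:])     # discard burn-in
```

The posterior spread, not just its mean, is the point: it is exactly such a distribution of initial conditions that the paper proposes as a starting point for N-body runs.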
Directory of Open Access Journals (Sweden)
Oleg A. Donichev
2013-01-01
Full Text Available The article describes the main problems in the formation of innovation clusters in the regions, and the role and importance of government in these issues. The main socio-economic and innovation performance characteristics of the region are analyzed to determine its potential for creating an innovative economic cluster. Methods for detecting potential areas for the formation of such clusters are developed.
A cluster merging method for time series microarray with production values.
Chira, Camelia; Sedano, Javier; Camara, Monica; Prieto, Carlos; Villar, Jose R; Corchado, Emilio
2014-09-01
A challenging task in time-course microarray data analysis is to cluster genes meaningfully while combining the information provided by multiple replicates covering the same key time points. This paper proposes a novel cluster merging method to accomplish this goal, obtaining groups of highly correlated genes. The main idea behind the proposed method is to start from clusterings created from the individual temporal series (representing different biological replicates measured at the same time points) and to merge them by taking into account the frequency with which two genes are assembled together in each clustering. The gene groups at the level of individual time series are generated using several shape-based clustering methods. This study is focused on a real-world time-series microarray task with the aim of finding co-expressed genes related to the production and growth of a certain bacterium. The shape-based clustering methods used at the level of individual time series rely on identifying similar gene expression patterns over time, which, in some models, are further matched to the pattern of production/growth. The proposed cluster merging method is able to produce meaningful gene groups which can be naturally ranked by the level of agreement on the clustering among individual time series. The list of clusters and genes is further sorted based on the information correlation coefficient and on new problem-specific relevance measures. Computational experiments and results of the cluster merging method are analyzed from a biological perspective and compared with the clustering generated from the mean value of the time series using the same shape-based algorithm.
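The merging step itself can be shown in isolation. Given per-replicate clusterings of the same genes, genes are merged when they co-cluster frequently; the hand-made replicate labelings below are stand-ins for the output of the shape-based clustering step, and the 2/3 threshold is an assumption.

```python
import numpy as np
from itertools import combinations

# Per-replicate cluster labels for genes g0..g5 (labels need not match across runs).
labelings = np.array([
    [0, 0, 0, 1, 1, 2],   # replicate 1
    [0, 0, 0, 1, 1, 1],   # replicate 2
    [1, 1, 1, 0, 0, 2],   # replicate 3
])
n_genes = labelings.shape[1]

# Frequency with which each gene pair lands in the same cluster.
freq = np.zeros((n_genes, n_genes))
for labels in labelings:
    freq += labels[:, None] == labels[None, :]
freq /= len(labelings)

# Merge pairs that co-cluster in at least 2/3 of the replicates (union-find).
parent = list(range(n_genes))
def find(i):
    while parent[i] != i:
        parent[i] = parent[parent[i]]
        i = parent[i]
    return i
for i, j in combinations(range(n_genes), 2):
    if freq[i, j] >= 2 / 3:
        parent[find(i)] = find(j)
groups = {}
for g in range(n_genes):
    groups.setdefault(find(g), []).append(g)
```

The co-assembly frequency also gives the natural ranking the abstract mentions: groups whose pairs have frequency near 1 are supported by all replicates.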
A SPATIAL CLUSTER METHOD SUPPORTED BY GIS FOR URBAN-SUBURBAN-RURAL CLASSIFICATION
Institute of Scientific and Technical Information of China (English)
ZHOU De-min; XU Jian-chun; John RADKE; MU Lan
2004-01-01
This study constructs a preliminary spatial analysis method for building an urban-suburban-rural classification in a sample area of central California and for providing the distribution characteristics of each category. On this basis, further studies, such as regional patterns of residential wood burning emissions (PM2.5, the term used for a mixture of solid particles and liquid droplets found in the air, refers to particulate matter 2.5 μm or smaller in size), could be carried out for the residential wood combustion project. Demographic and infrastructure data with spatial characteristics were processed by integrating a Geographic Information System (GIS) with a statistical method (cluster analysis), and then output as a category map. The approach gives a quantitative, multi-variable description of the major variations in characteristics among urban, suburban, and rural areas, and improves the TIGER urban-rural classification scheme by adding a suburban category. Based on free public GIS data, the spatial analysis method provides an easy and practical tool for geographic researchers, environmental planners, urban/regional planners, and administrators to delineate different categories of regional function at specific locations and to extract the spatial distribution information they need. Furthermore, it allows future adjustment of some parameters as the method is implemented in different regions or with various eco-social models.
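An illustrative stand-in for the GIS-plus-cluster-analysis step is k-means on census-style features; the two features (population density, road density), their units, and all numbers are invented, whereas the study used real demographic and infrastructure layers.

```python
import numpy as np

# Three synthetic classes of census blocks, clustered into k = 3 groups.
rng = np.random.default_rng(4)
urban    = rng.normal([5000.0, 12.0], [300.0, 1.0], size=(50, 2))
suburban = rng.normal([1200.0,  6.0], [150.0, 1.0], size=(50, 2))
rural    = rng.normal([  50.0,  1.0], [ 20.0, 0.5], size=(50, 2))
X = np.vstack([urban, suburban, rural])
X = (X - X.mean(0)) / X.std(0)          # standardize before clustering

k = 3
centers = X[[0, 50, 100]].copy()        # one seed point per presumed class
for _ in range(50):                     # plain Lloyd iterations
    labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
    centers = np.array([X[labels == c].mean(0) for c in range(k)])
```

Standardizing first matters: population density spans orders of magnitude more than road density, and without rescaling it would dominate the Euclidean distances.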
Comparison of three methods for the estimation of cross-shock electric potential using Cluster data
Directory of Open Access Journals (Sweden)
Y. Hobara
2011-05-01
Full Text Available Cluster four-point measurements provide a comprehensive dataset for the separation of temporal and spatial variations, which is crucial for the calculation of the cross-shock electrostatic potential using electric field measurements. While Cluster is probably the best suited among present and past spacecraft missions to provide such a separation at the terrestrial bow shock, it is far from ideal for a study of the cross-shock potential, since only two components of the electric field are measured, in the spacecraft spin plane. The present paper is devoted to the comparison of three different techniques that can be used to estimate the potential with this limitation. The first technique takes into account only the projection of the measured components onto the shock normal. The second uses the ideal MHD condition E·B = 0 to estimate the third electric field component. The last method is based on the structure of the electric field in the Normal Incidence Frame (NIF), in which only the potential component along the shock normal and the motional electric field exist. All three approaches are used to estimate the potential for a single crossing of the terrestrial bow shock that took place on 31 March 2001. Surprisingly, all three methods lead to the same order of magnitude for the cross-shock potential. It is argued that the third method should lead to the most reliable results. The effect of shock normal inaccuracy is investigated for this particular shock crossing. The resulting electrostatic potential appears too high in comparison with theoretical results for low Mach number shocks. This shows the variability of the potential, interpreted in the frame of the non-stationary shock model.
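The second technique is simple enough to sketch directly: with only the spin-plane components Ex, Ey measured, the ideal-MHD condition E·B = 0 supplies Ez. The field values below are synthetic stand-ins for Cluster electric and magnetic field data.

```python
import numpy as np

# Build fields that satisfy E = -v x B (so E.B = 0 exactly), hide Ez, recover it.
rng = np.random.default_rng(5)
B = rng.normal(size=(100, 3)) + np.array([0.0, 0.0, 5.0])  # keep Bz away from 0
v = rng.normal(size=(100, 3))
E_full = np.cross(-v, B)              # E = -v x B, perpendicular to B
Ex, Ey = E_full[:, 0], E_full[:, 1]   # the two "measured" components

Ez = -(Ex * B[:, 0] + Ey * B[:, 1]) / B[:, 2]   # third component from E.B = 0
```

The division by Bz makes the reconstruction ill-conditioned whenever B lies close to the spin plane, which is consistent with the abstract's argument that the NIF-based method is the more reliable of the three.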
An extended affinity propagation clustering method based on different data density types.
Zhao, XiuLi; Xu, WeiXiang
2015-01-01
The affinity propagation (AP) algorithm, as a novel clustering method, does not require users to specify initial cluster centers in advance: it regards all data points equally as potential exemplars (cluster centers) and forms clusters entirely according to the similarity among the data points. But in many cases different high-density areas exist within the same data set, meaning the data are not distributed homogeneously. In such situations the AP algorithm cannot group the data points into ideal clusters. In this paper, we propose an extended AP clustering algorithm to deal with this problem. There are two steps in our method: first, the data set is partitioned into several data density types according to the nearest distances of each data point; then the AP clustering method is used to group the data points into clusters within each data density type. Two experiments are carried out to evaluate the performance of our algorithm: one uses an artificial data set and the other a real seismic data set. The experimental results show that groups are obtained more accurately by our algorithm than by OPTICS or by the AP clustering algorithm itself.
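The first step of the two-step scheme can be sketched on its own: split a non-homogeneous data set into density types via nearest-neighbour distances. Step two would then run affinity propagation within each type (e.g. with sklearn.cluster.AffinityPropagation); the median threshold and the two-type split are assumptions of this sketch.

```python
import numpy as np

# One tightly packed and one loosely packed region in the same data set.
rng = np.random.default_rng(6)
dense  = rng.normal(0.0, 0.05, size=(60, 2))    # tightly packed region
sparse = rng.normal(5.0, 1.0,  size=(60, 2))    # loosely packed region
X = np.vstack([dense, sparse])

D = np.sqrt(((X[:, None] - X[None, :]) ** 2).sum(-1))
np.fill_diagonal(D, np.inf)
nn = D.min(axis=1)                    # nearest-neighbour distance per point

threshold = np.median(nn)             # two density types for this sketch
density_type = (nn > threshold).astype(int)     # 0 = dense, 1 = sparse
```

Running AP separately per density type avoids the failure mode the abstract describes, where a single global preference value cannot suit both dense and sparse regions at once.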
Improving Energy Efficient Clustering Method for Wireless Sensor Network
Directory of Open Access Journals (Sweden)
Md. Imran Hossain
2013-08-01
Full Text Available Wireless sensor networks have recently emerged as an important computing platform. These sensors are power-limited and have limited computing resources, so sensor energy has to be managed wisely in order to maximize the lifetime of the network. Simply speaking, LEACH requires knowledge of the energy of every node in the network topology used. In LEACH, the threshold that selects the cluster head is fixed, so the protocol does not account for different network topology environments. We propose the IELP algorithm, which selects cluster heads using different thresholds. The new cluster-head selection probability incorporates the initial energy and the number of neighbor nodes. On a rotation basis, a head-set member receives data from the neighboring nodes and transmits the aggregated results to the distant base station. For a given number of data-collecting sensor nodes, the number of control and management nodes can be systematically adjusted to reduce the energy consumption, which increases the network lifetime. The simulation results show that the performance of IELP improves on LEACH by 39% and on SEP by 20% in an area of 100 m * 100 m for m = 0.1 and α = 2, where m is the fraction of advanced nodes and α is the additional energy factor between advanced and normal nodes.
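LEACH's rotating cluster-head threshold is standard and can be written down directly; the IELP-style election probability below, scaled by residual energy and neighbour count, is a guess reconstructed from the abstract, not the paper's actual formula.

```python
# LEACH threshold plus a hypothetical IELP-style weighting.
def leach_threshold(p, r, is_candidate=True):
    """Threshold T(n) for a node that has not yet served as head this epoch,
    in round r, for a desired cluster-head fraction p."""
    if not is_candidate:
        return 0.0
    return p / (1 - p * (r % int(round(1 / p))))

def ielp_probability(p, energy, init_energy, n_neighbors, max_neighbors):
    # Hypothetical weighting: favour nodes with more residual energy and
    # more neighbours (the two factors the abstract says IELP combines).
    return p * (energy / init_energy) * (0.5 + 0.5 * n_neighbors / max_neighbors)
```

With p = 0.1, the LEACH threshold rises from 0.1 in the first round of an epoch toward 1 in the last round, guaranteeing every candidate eventually serves as head once per epoch.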
Spectral methods and cluster structure in correlation-based networks
Heimo, Tapio; Tibély, Gergely; Saramäki, Jari; Kaski, Kimmo; Kertész, János
2008-10-01
We investigate how in complex systems the eigenpairs of the matrices derived from the correlations of multichannel observations reflect the cluster structure of the underlying networks. For this we use daily return data from the NYSE and focus specifically on the spectral properties of weight matrices $W_{ij}=|C_{ij}|-\delta_{ij}$ and diffusion matrices $D_{ij}=W_{ij}/s_j-\delta_{ij}$, where $C$ is the correlation matrix and $s_i=\sum_j W_{ij}$ is the strength of node $i$. The eigenvalues (and corresponding eigenvectors) of the weight matrix are ranked in descending order. As in the earlier observations, the first eigenvector stands for a measure of the market correlations. Its components are, to first approximation, equal to the strengths of the nodes and there is a second order, roughly linear, correction. The high ranking eigenvectors, excluding the highest ranking one, are usually assigned to market sectors and industrial branches. Our study shows that both for weight and diffusion matrices the eigenpair analysis is not capable of easily deducing the cluster structure of the network without a priori knowledge. In addition we have studied the clustering of stocks using the asset graph approach with and without spectrum based noise filtering. It turns out that asset graphs are quite insensitive to noise and there is no sharp percolation transition as a function of the ratio of bonds included, thus no natural threshold value for that ratio seems to exist. We suggest that these observations can be of use for other correlation based networks as well.
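The core eigenpair construction can be reproduced on synthetic data: one-factor "market" returns with heterogeneous loadings stand in for the NYSE data, and the leading eigenvector of the weight matrix is compared with the node strengths, as in the first-approximation statement above.

```python
import numpy as np

# Synthetic one-factor returns: r_i(t) = beta_i * market(t) + idiosyncratic noise.
rng = np.random.default_rng(7)
beta = np.linspace(0.3, 1.0, 30)                    # assumed market loadings
returns = beta[:, None] * rng.normal(size=500) + 0.7 * rng.normal(size=(30, 500))

C = np.corrcoef(returns)
W = np.abs(C) - np.eye(len(C))      # weight matrix W_ij = |C_ij| - delta_ij
s = W.sum(axis=1)                   # node strengths s_i = sum_j W_ij

vals, vecs = np.linalg.eigh(W)      # eigh returns eigenvalues in ascending order
v1 = vecs[:, -1] * np.sign(vecs[:, -1].sum())   # leading ("market") eigenvector
```

With a single dominant factor, the components of the leading eigenvector track the node strengths closely, which is the "market mode" behaviour the abstract describes.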
A NEW METHOD TO QUANTIFY X-RAY SUBSTRUCTURES IN CLUSTERS OF GALAXIES
Energy Technology Data Exchange (ETDEWEB)
Andrade-Santos, Felipe; Lima Neto, Gastao B.; Lagana, Tatiana F. [Departamento de Astronomia, Instituto de Astronomia, Geofisica e Ciencias Atmosfericas, Universidade de Sao Paulo, Geofisica e Ciencias Atmosfericas, Rua do Matao 1226, Cidade Universitaria, 05508-090 Sao Paulo, SP (Brazil)
2012-02-20
We present a new method to quantify substructures in clusters of galaxies, based on the analysis of the intensity of structures. This analysis is done on a residual image, the result of subtracting from the X-ray image a surface brightness model obtained by fitting a two-dimensional analytical model (β-model or Sérsic profile) with elliptical symmetry. Our method is applied to 34 clusters observed by the Chandra X-ray Observatory that are in the redshift range z = 0.02-0.2 and have a signal-to-noise ratio (S/N) greater than 100. We present the calibration of the method and the relations between the substructure level and physical quantities, such as the mass, X-ray luminosity, temperature, and cluster redshift. We use our method to separate the clusters into two sub-samples of high and low substructure levels. We conclude, using Monte Carlo simulations, that the method recovers the true amount of substructure very well for clusters with small angular core radii (with respect to the whole image size) and good-S/N observations. We find no evidence of correlation between the substructure level and physical properties of the clusters such as gas temperature, X-ray luminosity, and redshift; however, the analysis suggests a trend between the substructure level and cluster mass. The scaling relations for the two sub-samples (high- and low-substructure-level clusters) are different (they present an offset, i.e., at a fixed mass or temperature, low-substructure clusters tend to be more X-ray luminous), which is an important result for cosmological tests using the mass-luminosity relation to obtain the cluster mass function, since these rely on the assumption that clusters do not present different scaling relations according to their dynamical state.
A Variational Method in Out of Equilibrium Physical Systems
Pinheiro, Mario J
2012-01-01
A variational principle is further developed for out-of-equilibrium dynamical systems by using the concept of maximum entropy. With this new formulation, a set of two first-order differential equations is obtained, revealing the same formal symplectic structure shared by classical mechanics, fluid mechanics, and thermodynamics. In particular, an extended equation of motion for a rotating dynamical system is obtained, from which a kind of topological torsion current emerges, of the form $\epsilon_{ijk} A_j \omega_k$, with $A_j$ and $\omega_k$ denoting components of the vector potential (gravitational and/or electromagnetic) and of the angular velocity of the accelerated frame. In addition, a special form of the Umov-Poynting theorem for rotating gravito-electromagnetic systems is derived, and a general condition of equilibrium for a rotating plasma is obtained. The variational method is then applied to clarify the working mechanism of some particular devices, such as the Bennett pinch and vacuum a...
A Total Variation-Based Reconstruction Method for Dynamic MRI
Directory of Open Access Journals (Sweden)
Germana Landi
2008-01-01
Full Text Available In recent years, total variation (TV) regularization has become a popular and powerful tool for image restoration and enhancement. In this work, we apply TV minimization to improve the quality of dynamic magnetic resonance images. Dynamic magnetic resonance imaging is an increasingly popular clinical technique used to monitor spatio-temporal changes in tissue structure. Fast data acquisition is necessary in order to capture the dynamic process. Most commonly, the requirement of high temporal resolution is fulfilled by sacrificing spatial resolution, so the numerical methods have to address the problem of image reconstruction from limited Fourier data. One of the most successful techniques for dynamic imaging applications is the reduced-encoding imaging by generalized-series reconstruction method of Liang and Lauterbur. However, even though this method utilizes a priori data for optimal image reconstruction, the produced dynamic images are degraded by truncation artifacts, most notably Gibbs ringing, due to the low spatial resolution of the data. We use a TV regularization strategy to reduce these truncation artifacts in the dynamic images. The resulting TV minimization problem is solved by the fixed-point iteration method of Vogel and Oman. Results of test problems with simulated and real data are presented to illustrate the effectiveness of the proposed approach in reducing the truncation artifacts of the reconstructed images.
Birkholz, Adam B; Schlegel, H Bernhard
2015-12-28
The development of algorithms to optimize reaction pathways between reactants and products is an active area of study. Existing algorithms typically describe the path as a discrete series of images (chain of states) which are moved downhill toward the path, using various reparameterization schemes, constraints, or fictitious forces to maintain a uniform description of the reaction path. The Variational Reaction Coordinate (VRC) method is a novel approach that finds the reaction path by minimizing the variational reaction energy (VRE) of Quapp and Bofill. The VRE is the line integral of the gradient norm along a path between reactants and products and minimization of VRE has been shown to yield the steepest descent reaction path. In the VRC method, we represent the reaction path by a linear expansion in a set of continuous basis functions and find the optimized path by minimizing the VRE with respect to the linear expansion coefficients. Improved convergence is obtained by applying constraints to the spacing of the basis functions and coupling the minimization of the VRE to the minimization of one or more points along the path that correspond to intermediates and transition states. The VRC method is demonstrated by optimizing the reaction path for the Müller-Brown surface and by finding a reaction path passing through 5 transition states and 4 intermediates for a 10 atom Lennard-Jones cluster.
Banik, Subrata; Pal, Sourav; Prasad, M Durga
2010-10-12
An effective operator approach based on the coupled cluster method is described and applied to calculate vibrational expectation values and absolute transition matrix elements. Coupled cluster linear response theory (CCLRT) is used to calculate excited states. The convergence pattern of these properties with the rank of the excitation operator is studied. The method is applied to the water molecule. An Arponen-type double similarity transformation in the extended coupled cluster (ECCM) framework is also used to generate an effective operator, and the convergence pattern of these properties is compared to that of the normal coupled cluster method (NCCM). It is found that the coupled cluster method provides an accurate description of these quantities for low-lying vibrational excited states. The ECCM provides a significant improvement for the calculation of the transition matrix elements.
Subspace Correction Methods for Total Variation and $\\ell_1$-Minimization
Fornasier, Massimo
2009-01-01
This paper is concerned with the numerical minimization of energy functionals in Hilbert spaces involving convex constraints coinciding with a seminorm for a subspace. The optimization is realized by alternating minimizations of the functional on a sequence of orthogonal subspaces. On each subspace an iterative proximity-map algorithm is implemented via oblique thresholding, which is the main new tool introduced in this work. We provide convergence conditions for the algorithm in order to compute minimizers of the target energy. Analogous results are derived for a parallel variant of the algorithm. Applications are presented in domain decomposition methods for degenerate elliptic PDEs arising in total variation minimization and in accelerated sparse recovery algorithms based on $\ell_1$-minimization. We include numerical examples which show efficient solutions to classical problems in signal and image processing. © 2009 Society for Industrial and Applied Mathematics.
Fast Second Degree Total Variation Method for Image Compressive Sensing.
Liu, Pengfei; Xiao, Liang; Zhang, Jun
2015-01-01
This paper presents a computationally efficient algorithm for image compressive sensing reconstruction using second degree total variation (HDTV2) regularization. First, an equivalent formulation of the HDTV2 functional is derived, which can be expressed as a weighted L1-L2 mixed norm of second degree image derivatives under the spectral decomposition framework. Second, using this equivalent formulation of HDTV2, we introduce an efficient forward-backward splitting (FBS) scheme to solve the HDTV2-based image reconstruction model. Furthermore, from the averaged non-expansive operator point of view, we give a detailed analysis of the convergence of the proposed FBS algorithm. Experiments on medical images demonstrate that the proposed method outperforms several fast algorithms for the TV and HDTV2 reconstruction models in terms of peak signal-to-noise ratio (PSNR), structural similarity index (SSIM), and convergence speed.
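The FBS iteration itself is generic: a gradient (forward) step on the smooth data term followed by a proximal (backward) step on the regularizer. The sketch below uses the simpler L1 prox (soft thresholding) in place of the paper's HDTV2 prox, and all data and parameters are synthetic.

```python
import numpy as np

# min 0.5 ||A x - b||^2 + lam * ||x||_1 via forward-backward splitting (ISTA).
rng = np.random.default_rng(8)
n = 50
x_true = np.zeros(n)
x_true[[5, 17, 33]] = [2.0, -1.5, 1.0]          # sparse ground truth
A = rng.normal(size=(80, n)) / np.sqrt(80)      # compressive measurements
b = A @ x_true + 0.01 * rng.normal(size=80)

lam = 0.02
t = 1.0 / np.linalg.norm(A, 2) ** 2   # step from the gradient's Lipschitz bound
x = np.zeros(n)
for _ in range(500):
    g = x - t * A.T @ (A @ x - b)                           # forward step
    x = np.sign(g) * np.maximum(np.abs(g) - t * lam, 0.0)   # backward (prox) step
```

Replacing the soft-threshold line with the prox of another convex regularizer (such as HDTV2) leaves the rest of the scheme, and its averaged-operator convergence argument, unchanged.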
Leveraging long sequencing reads to investigate R-gene clustering and variation in sugar beet
Host-pathogen interactions are of prime importance to modern agriculture. Plants utilize various types of resistance genes to mitigate pathogen damage. Identification of the specific gene responsible for a specific resistance can be difficult due to duplication and clustering within R-gene families....
Clustering Methods; Part IV of Scientific Report No. ISR-18, Information Storage and Retrieval...
Cornell Univ., Ithaca, NY. Dept. of Computer Science.
Two papers are included as Part Four of this report on Salton's Magical Automatic Retriever of Texts (SMART) project report. The first paper: "A Controlled Single Pass Classification Algorithm with Application to Multilevel Clustering" by D. B. Johnson and J. M. Laferente presents a single pass clustering method which compares favorably…
Dumenci, Levent; Windle, Michael
2001-01-01
Used Monte Carlo methods to evaluate the adequacy of cluster analysis for recovering group membership based on simulated latent growth curve (LGC) models. Cluster analysis failed to recover growth subtypes adequately when the difference between growth curves was in shape only. Discusses circumstances under which it was more successful. (SLD)
A method of using cluster analysis to study statistical dependence in multivariate data
Borucki, W. J.; Card, D. H.; Lyle, G. C.
1975-01-01
A technique is presented that uses both cluster analysis and a Monte Carlo significance test of clusters to discover associations between variables in multidimensional data. The method is applied to an example of a noisy function in three-dimensional space, to a sample from a mixture of three bivariate normal distributions, and to the well-known Fisher's Iris data.
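The combination of a clustering statistic with a Monte Carlo significance test can be sketched compactly; the mean nearest-neighbour distance below stands in for the paper's cluster statistic, and the sample sizes and null distribution are invented.

```python
import numpy as np

# Compare a clustering statistic on the data against structure-free samples.
rng = np.random.default_rng(9)
clustered = np.vstack([rng.normal(0.0, 0.05, (30, 2)),
                       rng.normal(1.0, 0.05, (30, 2))])   # two tight clumps

def mean_nn_distance(X):
    D = np.sqrt(((X[:, None] - X[None, :]) ** 2).sum(-1))
    np.fill_diagonal(D, np.inf)
    return D.min(axis=1).mean()

stat = mean_nn_distance(clustered)
null = np.array([mean_nn_distance(rng.uniform(-0.2, 1.2, (60, 2)))
                 for _ in range(200)])        # Monte Carlo null ensemble
p_value = (null <= stat).mean()   # fraction of random sets at least as clustered
```

A small p-value indicates that clumping this strong is unlikely under the no-structure null, which is the association-detection logic the abstract describes.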
Clustering of hydrological data: a review of methods for runoff predictions in ungauged basins
Dogulu, Nilay; Kentel, Elcin
2017-04-01
There is a great body of research that has looked into the challenge of hydrological prediction in ungauged basins, as driven by the Prediction in Ungauged Basins (PUB) initiative of the International Association of Hydrological Sciences (IAHS). Transfer of hydrological information (e.g. model parameters, flow signatures) from gauged to ungauged catchments, often referred to as "regionalization", is the main objective and benefits from the identification of hydrologically homogeneous regions. Within this context, indirect representation of hydrologic similarity for ungauged catchments, which is not a straightforward task due to the absence of streamflow measurements and insufficient knowledge of hydrologic behavior, has been explored in the literature. To this aim, clustering methods have been widely adopted. While most studies employ hard clustering techniques such as hierarchical (divisive or agglomerative) clustering, there have been more recent attempts taking advantage of fuzzy set theory (fuzzy clustering) and nonlinear methods (e.g. self-organizing maps). The research findings from this fundamental task of the hydrologic sciences have revealed the value of different clustering methods for an improved understanding of catchment hydrology. However, despite these advancements, challenges, and with them opportunities, remain for research on clustering for regionalization purposes. The present work provides an overview of clustering techniques and their applications in hydrology, with a focus on regionalization for the PUB problem. Identifying their advantages and disadvantages, we discuss the potential of innovative clustering methods and reflect on future challenges in view of the research objectives of the PUB initiative.
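Of the soft-clustering options the review covers, fuzzy c-means is compact enough to sketch in full; the two "catchment descriptor" blobs and all settings below are synthetic.

```python
import numpy as np

# Minimal fuzzy c-means: soft memberships U instead of hard labels.
rng = np.random.default_rng(10)
X = np.vstack([rng.normal(0.0, 0.3, (40, 2)),   # e.g. flashy catchments
               rng.normal(3.0, 0.3, (40, 2))])  # e.g. baseflow-dominated ones
c, m = 2, 2.0                                   # number of clusters, fuzzifier
U = rng.dirichlet(np.ones(c), size=len(X))      # random initial memberships
for _ in range(100):
    Um = U ** m
    centers = (Um.T @ X) / Um.sum(axis=0)[:, None]      # weighted centroids
    D = np.sqrt(((X[:, None] - centers[None]) ** 2).sum(-1)) + 1e-12
    # u_ik = 1 / sum_j (d_ik / d_ij)^(2/(m-1))
    U = 1.0 / ((D[:, :, None] / D[:, None, :]) ** (2 / (m - 1))).sum(axis=2)
```

For regionalization, the soft memberships are the attraction: an ungauged catchment near a cluster boundary keeps partial membership in both regions instead of being forced into one, so information can be transferred from both.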
K2: A new method for the detection of galaxy clusters based on CFHTLS multicolor images
Thanjavur, Karun; Crampton, David
2009-01-01
We have developed a new method, K2, optimized for the detection of galaxy clusters in multicolor images. Based on the Red Sequence approach, K2 detects clusters as simultaneous enhancements in both color and position. The detection significance is robustly determined through extensive Monte Carlo simulations and through comparison with available cluster catalogs based on two different optical methods and on X-ray data. K2 also provides quantitative estimates of the candidate clusters' richness and photometric redshifts. Initially, K2 was applied to 161 sq deg of two-color gri images from the CFHTLS-Wide data. Our simulations show that the false detection rate, at our selected threshold, is only ~1%, and that the cluster catalogs are ~80% complete up to a redshift of 0.6 for Fornax-like and richer clusters and to z ~0.3 for poorer clusters. Based on the Terapix T05 release gri photometric catalogs, 35 clusters/sq deg are detected, with 1-2 Fornax-like or richer clusters every two square degrees. Catalogs co...
Genetic variations and haplotype diversity of the UGT1 gene cluster in the Chinese population.
Directory of Open Access Journals (Sweden)
Jing Yang
Full Text Available Vertebrates require tremendous molecular diversity to defend against numerous small hydrophobic chemicals. UDP-glucuronosyltransferases (UGTs) are a large family of detoxification enzymes that glucuronidate xenobiotics and endobiotics, facilitating their excretion from the body. The UGT1 gene cluster contains a tandem array of variable first exons, each preceded by a specific promoter, and a common set of downstream constant exons, similar to the genomic organization of the protocadherin (Pcdh), immunoglobulin, and T-cell receptor gene clusters. To assist pharmacogenomics studies in Chinese, we sequenced nine first exons, promoter and intronic regions, and five common exons of the UGT1 gene cluster in a population sample of 253 unrelated Chinese individuals. We identified 101 polymorphisms and found 15 novel SNPs. We then computed allele frequencies for each polymorphism and reconstructed their linkage disequilibrium (LD) map. The UGT1 cluster can be divided into five linkage blocks: Block 9 (UGT1A9), Block 9/7/6 (UGT1A9, UGT1A7, and UGT1A6), Block 5 (UGT1A5), Block 4/3 (UGT1A4 and UGT1A3), and Block 3' UTR. Furthermore, we inferred haplotypes and selected their tagSNPs. Finally, comparing our data with those of three other populations of the HapMap project revealed ethnic specificity of the UGT1 genetic diversity in Chinese. These findings have important implications for future molecular genetic studies of the UGT1 gene cluster as well as for personalized medical therapies in Chinese.
Method for exploratory cluster analysis and visualisation of single-trial ERP ensembles.
Williams, N J; Nasuto, S J; Saddy, J D
2015-07-30
The validity of ensemble averaging on event-related potential (ERP) data has been questioned, due to its assumption that the ERP is identical across trials. Thus, there is a need for preliminary testing for cluster structure in the data. We propose a complete pipeline for the cluster analysis of ERP data. To increase the signal-to-noise ratio (SNR) of the raw single trials, we used a denoising method based on Empirical Mode Decomposition (EMD). Next, we used a bootstrap-based method to determine the number of clusters, through a measure called the Stability Index (SI). We then used a clustering algorithm based on a Genetic Algorithm (GA) to define initial cluster centroids for subsequent k-means clustering. Finally, we visualised the clustering results through a scheme based on Principal Component Analysis (PCA). After validating the pipeline on simulated data, we tested it on data from two experiments - a P300 speller paradigm on a single subject and a language processing study on 25 subjects. Results revealed evidence for the existence of 6 clusters in one experimental condition from the language processing study. Further, a two-way chi-square test revealed an influence of subject on cluster membership. Our analysis operates on denoised single trials, the number of clusters is determined in a principled manner, and the results are presented through an intuitive visualisation. Given the cluster structure in some experimental conditions, we suggest application of cluster analysis as a preliminary step before ensemble averaging. Copyright © 2015 Elsevier B.V. All rights reserved.
Directory of Open Access Journals (Sweden)
D. A. Viattchenin
2009-01-01
Full Text Available A method for constructing a subset of labeled objects to be used in a heuristic possibilistic clustering algorithm with partial supervision is proposed in the paper. The method is based on preprocessing the data with a heuristic possibilistic clustering algorithm that uses the transitive closure of a fuzzy tolerance relation. The method's efficiency is demonstrated by an illustrative example.
Directory of Open Access Journals (Sweden)
Zhang Xinmin
2011-05-01
Full Text Available Abstract Background In highly copy-number-variable (CNV) regions such as the human defensin gene locus, comprehensive assessment of sequence variations is challenging. PCR approaches are practically restricted to tiny fractions, and next-generation sequencing (NGS) of whole individual genomes, e.g., by the 1000 Genomes Project, is constrained by affordable sequencing depth. Combining target enrichment with NGS may represent a feasible approach. Results As a proof of principle, we enriched a ~850 kb section comprising the CNV defensin gene cluster DEFB, the invariable DEFA part, and 11 control regions from two genomes by sequence capture and sequenced it by 454 technology. 6,651 differences to the human reference genome were found. Comparison to HapMap genotypes revealed sensitivities and specificities in the range of 94% to 99% for the identification of variations. Rigorous filtering using error probabilities revealed 2,886 unique single nucleotide variations (SNVs), including 358 putative novel ones. DEFB copy number (CN) determinations by haplotype ratios were in agreement with alternative methods. Conclusion Although currently labor-intensive and costly, target-enriched NGS provides a powerful tool for the comprehensive assessment of SNVs in highly polymorphic CNV regions of individual genomes. Furthermore, it reveals considerable amounts of putative novel variations and simultaneously allows CN estimation.
A two-stage method for microcalcification cluster segmentation in mammography by deformable models
Energy Technology Data Exchange (ETDEWEB)
Arikidis, N.; Kazantzi, A.; Skiadopoulos, S.; Karahaliou, A.; Costaridou, L., E-mail: costarid@upatras.gr [Department of Medical Physics, School of Medicine, University of Patras, Patras 26504 (Greece); Vassiou, K. [Department of Anatomy, School of Medicine, University of Thessaly, Larissa 41500 (Greece)
2015-10-15
Purpose: Segmentation of microcalcification (MC) clusters in x-ray mammography is a difficult task for radiologists. Accurate segmentation is a prerequisite for quantitative image analysis of MC clusters and subsequent feature extraction and classification in computer-aided diagnosis schemes. Methods: In this study, a two-stage semiautomated segmentation method for MC clusters is investigated. The first stage is targeted to accurate and time-efficient segmentation of the majority of the particles of a MC cluster, by means of a level set method. The second stage is targeted to shape refinement of selected individual MCs, by means of an active contour model. Both methods are applied in the framework of a rich scale-space representation, provided by the wavelet transform at integer scales. Segmentation reliability of the proposed method, in terms of inter- and intraobserver agreement, was evaluated in a case sample of 80 MC clusters originating from the digital database for screening mammography, corresponding to 4 morphology types (punctate: 22, fine linear branching: 16, pleomorphic: 18, and amorphous: 24) of MC clusters, assessing radiologists' segmentations quantitatively by two distance metrics (Hausdorff distance, HDIST_cluster; average of minimum distance, AMINDIST_cluster) and the area overlap measure (AOM_cluster). The effect of the proposed segmentation method on MC cluster characterization accuracy was evaluated in a case sample of 162 pleomorphic MC clusters (72 malignant and 90 benign). Ten MC cluster features, targeted to capture morphologic properties of individual MCs in a cluster (area, major length, perimeter, compactness, and spread), were extracted, and a correlation-based feature selection method yielded a feature subset to feed into a support vector machine classifier. Classification performance of the MC cluster features was estimated by means of the area under the receiver operating characteristic curve (Az ± Standard Error) utilizing
The Swift UVOT Stars Survey: I. Methods and Test Clusters
Siegel, Michael H; Linevsky, Jacquelyn S; Bond, Howard E; Holland, Stephen T; Hoversten, Erik A; Berrier, Joshua L; Breeveld, Alice A; Brown, Peter J; Gronwall, Caryl A
2014-01-01
We describe the motivations and background of a large survey of nearby stellar populations using the Ultraviolet Optical Telescope (UVOT) aboard the Swift Gamma-Ray Burst Mission. UVOT, with its wide field, NUV sensitivity, and 2.″3 spatial resolution, is uniquely suited to studying nearby stellar populations and providing insight into the NUV properties of hot stars and the contribution of those stars to the integrated light of more distant stellar populations. We review the state of UV stellar photometry, outline the survey, and address problems specific to wide- and crowded-field UVOT photometry. We present color-magnitude diagrams of the nearby open clusters M 67, NGC 188, and NGC 2539, and the globular cluster M 79. We demonstrate that UVOT can easily discern the young- and intermediate-age main sequences, blue stragglers, and hot white dwarfs, producing results consistent with previous studies. We also find that it characterizes the blue horizontal branch of M 79 and easily identifies a known post-asymptotic giant branch star.
The swift UVOT stars survey. I. Methods and test clusters
Energy Technology Data Exchange (ETDEWEB)
Siegel, Michael H.; Porterfield, Blair L.; Linevsky, Jacquelyn S.; Bond, Howard E.; Hoversten, Erik A.; Berrier, Joshua L.; Gronwall, Caryl A. [Department of Astronomy and Astrophysics, The Pennsylvania State University, 525 Davey Laboratory, University Park, PA 16802 (United States); Holland, Stephen T. [Space Telescope Science Institute, 3700 San Martin Drive, Baltimore, MD 21218 (United States); Breeveld, Alice A. [Mullard Space Science Laboratory, University College London, Holmbury St. Mary, Dorking, Surrey RH5 6NT (United Kingdom); Brown, Peter J., E-mail: siegel@astro.psu.edu, E-mail: blp14@psu.edu, E-mail: heb11@psu.edu, E-mail: caryl@astro.psu.edu, E-mail: sholland@stsci.edu, E-mail: aab@mssl.ucl.ac.uk, E-mail: grbpeter@yahoo.com [George P. and Cynthia Woods Mitchell Institute for Fundamental Physics and Astronomy, Texas A. and M. University, Department of Physics and Astronomy, 4242 TAMU, College Station, TX 77843 (United States)
2014-12-01
We describe the motivations and background of a large survey of nearby stellar populations using the Ultraviolet Optical Telescope (UVOT) on board the Swift Gamma-Ray Burst Mission. UVOT, with its wide field, near-UV sensitivity, and 2.″3 spatial resolution, is uniquely suited to studying nearby stellar populations and providing insight into the near-UV properties of hot stars and the contribution of those stars to the integrated light of more distant stellar populations. We review the state of UV stellar photometry, outline the survey, and address problems specific to wide- and crowded-field UVOT photometry. We present color–magnitude diagrams of the nearby open clusters M67, NGC 188, and NGC 2539, and the globular cluster M79. We demonstrate that UVOT can easily discern the young- and intermediate-age main sequences, blue stragglers, and hot white dwarfs, producing results consistent with previous studies. We also find that it characterizes the blue horizontal branch of M79 and easily identifies a known post-asymptotic giant branch star.
Fast optimization of binary clusters using a novel dynamic lattice searching method.
Wu, Xia; Cheng, Wen
2014-09-28
Global optimization of binary clusters has been a difficult task despite much effort and many efficient methods. To address the two element types in binary clusters (i.e., the homotop problem), two classes of virtual dynamic lattices are constructed and a modified dynamic lattice searching (DLS) method, the binary DLS (BDLS) method, is developed. However, it was found that BDLS can only be utilized for the optimization of binary clusters of small sizes, because the homotop problem is hard to solve without an atomic exchange operation. Therefore, the iterated local search (ILS) method is adopted to solve the homotop problem, and an efficient method based on BDLS and ILS, named BDLS-ILS, is presented for global optimization of binary clusters. To assess the efficiency of the proposed method, binary Lennard-Jones clusters with up to 100 atoms are investigated. Results show that the method is efficient. Furthermore, the BDLS-ILS method is also adopted to study the geometrical structures of (AuPd)79 clusters with DFT-fit parameters of the Gupta potential.
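The objective being minimized in such a benchmark is the total pair energy of a binary Lennard-Jones cluster, where each pair interaction takes its epsilon/sigma parameters from the species of the two atoms. A minimal sketch follows; the parameter values are illustrative, not those used in the paper:

```python
# Energy of a binary Lennard-Jones cluster: each pair interaction uses
# epsilon/sigma parameters chosen by the species of the two atoms.
# Parameter values here are illustrative, not taken from the paper.

EPS = {("A", "A"): 1.0, ("B", "B"): 0.8, ("A", "B"): 0.9, ("B", "A"): 0.9}
SIG = {("A", "A"): 1.0, ("B", "B"): 1.1, ("A", "B"): 1.05, ("B", "A"): 1.05}

def lj_energy(atoms):
    """atoms: list of (species, (x, y, z)) tuples."""
    e = 0.0
    for i in range(len(atoms)):
        for j in range(i + 1, len(atoms)):
            (si, pi), (sj, pj) = atoms[i], atoms[j]
            r2 = sum((a - b) ** 2 for a, b in zip(pi, pj))
            sr6 = (SIG[(si, sj)] ** 2 / r2) ** 3  # (sigma/r)^6
            e += 4.0 * EPS[(si, sj)] * (sr6 ** 2 - sr6)
    return e

# An A-A dimer at the pair-potential minimum r = 2^(1/6) * sigma
# has energy exactly -epsilon.
dimer = [("A", (0.0, 0.0, 0.0)), ("A", (2 ** (1 / 6), 0.0, 0.0))]
```

The homotop problem the abstract mentions is visible even here: swapping the species labels of two atoms changes the energy without moving any coordinates, which is why an exchange move (as in ILS) is needed.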
An empirical method to cluster objective nebulizer adherence data among adults with cystic fibrosis
Directory of Open Access Journals (Sweden)
Hoo ZH
2017-03-01
Full Text Available Zhe H Hoo,1,2 Michael J Campbell,1 Rachael Curley,1,2 Martin J Wildman1,2 1School of Health and Related Research (ScHARR), University of Sheffield, 2Sheffield Adult Cystic Fibrosis Centre, Northern General Hospital, Sheffield, UK Background: The purpose of using preventative inhaled treatments in cystic fibrosis is to improve health outcomes. Therefore, understanding the relationship between adherence to treatment and health outcome is crucial. Temporal variability, as well as the absolute magnitude of adherence, affects health outcomes, and there is likely to be a threshold effect in the relationship between adherence and outcomes. We therefore propose a pragmatic algorithm-based method for clustering objective nebulizer adherence data to better understand this relationship and, potentially, to guide clinical decisions. Methods to cluster adherence data: This clustering method consists of three related steps. The first step is to split adherence data for the previous 12 months into four 3-monthly sections. The second step is to calculate mean adherence for each section and to score the section based on mean adherence. The third step is to aggregate the individual scores to determine the final cluster ("cluster 1" = very low adherence; "cluster 2" = low adherence; "cluster 3" = moderate adherence; "cluster 4" = high adherence), taking into account the adherence trend as represented by the sequential individual scores. The individual scores should be displayed along with the final cluster for clinicians to fully understand the adherence data. Three illustrative cases: We present three cases to illustrate the use of the proposed clustering method. Conclusion: This pragmatic clustering method can deal with adherence data of variable duration (i.e., it can be used even if 12 months' worth of data are unavailable) and can cluster adherence data in real time. Empirical support for some of the clustering parameters is not yet available, but the suggested
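The three-step method described above translates almost directly into code. In this sketch the score thresholds and the aggregation rule are hypothetical placeholders (the abstract itself notes that empirical support for the clustering parameters is still pending, and the paper additionally weighs the trend across sections):

```python
# Sketch of the three-step adherence clustering described above:
# split up to 12 months into 3-month sections, score each section's
# mean adherence, then aggregate the scores into a final cluster.
# The thresholds and the aggregation rule are hypothetical.

def score_section(mean_adherence):
    # Hypothetical thresholds mapping mean adherence (%) to a score 1-4.
    if mean_adherence < 25:
        return 1   # very low
    if mean_adherence < 50:
        return 2   # low
    if mean_adherence < 80:
        return 3   # moderate
    return 4       # high

def cluster_adherence(monthly_adherence):
    """monthly_adherence: up to 12 monthly % values, oldest first."""
    # Step 1: split into 3-month sections (tolerates shorter records).
    sections = [monthly_adherence[i:i + 3]
                for i in range(0, len(monthly_adherence), 3)]
    # Step 2: score each section by its mean adherence.
    scores = [score_section(sum(s) / len(s)) for s in sections]
    # Step 3: aggregate -- here simply the rounded mean of the scores;
    # the paper also takes the trend across sections into account.
    final = round(sum(scores) / len(scores))
    return scores, final

scores, cluster = cluster_adherence(
    [90, 85, 95, 80, 82, 78, 30, 20, 25, 10, 5, 0])
```

Displaying `scores` alongside `cluster`, as the abstract recommends, makes a declining pattern like [4, 4, 2, 1] visible even though the aggregate lands in the moderate cluster.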
Directory of Open Access Journals (Sweden)
R. Yulita Molliq
2012-01-01
Full Text Available In this study, the fractional Rosenau-Hynam equation is considered. We implement two relatively new analytical techniques, the variational iteration method and the homotopy perturbation method, for solving this equation. The fractional derivatives are described in the Caputo sense. The two methods can be used as alternative approaches for obtaining analytic and approximate solutions of the fractional Rosenau-Hynam equation. In these schemes, the solution takes the form of a convergent series with easily computable components. The present methods perform extremely well in terms of efficiency and simplicity.
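For orientation, the variational iteration method builds successive approximations through a correction functional; in its standard integer-order form (a textbook sketch, not quoted from this paper) it reads:

```latex
u_{n+1}(t) = u_n(t) + \int_0^{t} \lambda(\xi)\,
  \bigl[\mathcal{L}u_n(\xi) + \mathcal{N}\tilde{u}_n(\xi) - g(\xi)\bigr]\,d\xi
```

where $\mathcal{L}$ and $\mathcal{N}$ are the linear and nonlinear parts of the equation, $g$ is the source term, $\lambda$ is a Lagrange multiplier identified via variational theory, and $\tilde{u}_n$ denotes a restricted variation; for Caputo-fractional equations the integral operator is adapted to the fractional order.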
Xu, Zhiqiang
2017-02-16
Attributed graph clustering, also known as community detection on attributed graphs, has attracted much interest recently due to the ubiquity of attributed graphs in real life. Many algorithms have been proposed for this problem, either distance based or model based. However, model selection in attributed graph clustering has not been well addressed; most existing algorithms assume the cluster number to be known a priori. In this paper, we propose two efficient approaches for attributed graph clustering with automatic model selection. The first approach is a popular Bayesian nonparametric method, while the second is an asymptotic method based on a recently proposed model selection criterion, the factorized information criterion. Experimental results on both synthetic and real datasets demonstrate that our approaches for attributed graph clustering with automatic model selection significantly outperform the state-of-the-art algorithm.
Comment on “Variational Iteration Method for Fractional Calculus Using He’s Polynomials”
Directory of Open Access Journals (Sweden)
Ji-Huan He
2012-01-01
boundary value problems. This note concludes that the method is a modified variational iteration method using He’s polynomials. A standard variational iteration algorithm for fractional differential equations is suggested.
Šubelj, Lovro; van Eck, Nees Jan; Waltman, Ludo
2016-01-01
Clustering methods are applied regularly in the bibliometric literature to identify research areas or scientific fields. These methods are for instance used to group publications into clusters based on their relations in a citation network. In the network science literature, many clustering methods, often referred to as graph partitioning or community detection techniques, have been developed. Focusing on the problem of clustering the publications in a citation network, we present a systematic comparison of the performance of a large number of these clustering methods. Using a number of different citation networks, some of them relatively small and others very large, we extensively study the statistical properties of the results provided by different methods. In addition, we also carry out an expert-based assessment of the results produced by different methods. The expert-based assessment focuses on publications in the field of scientometrics. Our findings seem to indicate that there is a trade-off between different properties that may be considered desirable for a good clustering of publications. Overall, map equation methods appear to perform best in our analysis, suggesting that these methods deserve more attention from the bibliometric community.
Variational methods in electron-atom scattering theory
Nesbet, Robert K
1980-01-01
The investigation of scattering phenomena is a major theme of modern physics. A scattered particle provides a dynamical probe of the target system. The practical problem of interest here is the scattering of a low energy electron by an N-electron atom. It has been difficult in this area of study to achieve theoretical results that are even qualitatively correct, yet quantitative accuracy is often needed as an adjunct to experiment. The present book describes a quantitative theoretical method, or class of methods, that has been applied effectively to this problem. Quantum mechanical theory relevant to the scattering of an electron by an N-electron atom, which may gain or lose energy in the process, is summarized in Chapter 1. The variational theory itself is presented in Chapter 2, both as currently used and in forms that may facilitate future applications. The theory of multichannel resonance and threshold effects, which provide a rich structure to observed electron-atom scattering data, is presented in Cha...
Improved method for the feature extraction of laser scanner using genetic clustering
Institute of Scientific and Technical Information of China (English)
Yu Jinxia; Cai Zixing; Duan Zhuohua
2008-01-01
Feature extraction from the range images provided by a ranging sensor is a key issue of pattern recognition. To automatically extract the environmental features sensed by a 2D laser scanner, an improved method based on genetic clustering, VGA clustering, is presented. By integrating the spatial neighbouring information of the range data into the fuzzy clustering algorithm, a weighted fuzzy clustering algorithm (WFCA) is introduced in place of the standard clustering algorithm to realize feature extraction for the laser scanner. Because the number of clusters is unknown in advance, several validation index functions are used to estimate the validity of different clustering algorithms, and one validation index is selected as the fitness function of the genetic algorithm so as to determine the correct number of clusters automatically. At the same time, an improved genetic algorithm, IVGA, built on VGA is proposed to escape the local optima of the clustering algorithm; it works by increasing population diversity and improving the elitist genetic operators to enhance local search capacity and quicken convergence. Comparison with other algorithms demonstrates the effectiveness of the introduced method.
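The fuzzy clustering core that a weighted variant like WFCA builds on is the standard fuzzy c-means membership update; the spatial-neighbour weighting itself is specific to the paper and not reproduced here. A minimal sketch of that update:

```python
# Standard fuzzy c-means membership update (the basis that a weighted
# variant such as WFCA modifies): the membership of a point in cluster i
# falls off with its relative distance to centroid i.  m > 1 is the
# fuzzifier; m = 2 is the common default.

def fcm_memberships(point, centroids, m=2.0):
    d = [sum((a - b) ** 2 for a, b in zip(point, c)) ** 0.5
         for c in centroids]
    if any(di == 0.0 for di in d):  # point coincides with a centroid
        return [1.0 if di == 0.0 else 0.0 for di in d]
    exp = 2.0 / (m - 1.0)
    return [1.0 / sum((d[i] / d[j]) ** exp for j in range(len(d)))
            for i in range(len(d))]

# A point 1 unit from one centroid and 3 units from the other gets
# memberships in the ratio 9:1 (for m = 2).
u = fcm_memberships((1.0, 0.0), [(0.0, 0.0), (4.0, 0.0)])
```

Memberships always sum to 1 across clusters, which is what lets validation indices compare partitions of different cluster counts, as in the genetic selection step above.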
Arnup, Sarah J; McKenzie, Joanne E; Hemming, Karla; Pilcher, David; Forbes, Andrew B
2017-08-15
In a cluster randomised crossover (CRXO) design, a sequence of interventions is assigned to a group, or 'cluster', of individuals. Each cluster receives each intervention in a separate period of time, forming 'cluster-periods'. Sample size calculations for CRXO trials need to account for both the cluster randomisation and crossover aspects of the design. Formulae are available for the two-period, two-intervention, cross-sectional CRXO design; however, implementation of these formulae is known to be suboptimal. The aims of this tutorial are to illustrate the intuition behind the design and to provide guidance on performing sample size calculations. Graphical illustrations are used to describe the effect of the cluster randomisation and crossover aspects of the design on the correlation between individual responses in a CRXO trial. Sample size calculations for binary and continuous outcomes are illustrated using parameters estimated from the Australia and New Zealand Intensive Care Society Adult Patient Database (ANZICS-APD) for patient mortality and length of stay (LOS). The similarity between individual responses in a CRXO trial can be understood in terms of three components of variation: variation in the cluster mean response; variation in the cluster-period mean response; and variation between individual responses within a cluster-period; or equivalently in terms of the correlation between individual responses in the same cluster-period (within-cluster within-period correlation, WPC) and between individual responses in the same cluster but in different periods (within-cluster between-period correlation, BPC). The BPC lies between zero and the WPC. When the WPC and BPC are equal, the precision gained by the crossover aspect of the CRXO design equals the precision lost by cluster randomisation. When the BPC is zero there is no advantage of a CRXO over a parallel-group cluster randomised trial. Sample size calculations illustrate that small changes in the specification of
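One commonly quoted design effect for the two-period cross-sectional CRXO design, relative to an individually randomised parallel trial, is DE = 1 + (m − 1)·WPC − m·BPC, where m is the number of individuals per cluster-period. The exact formulae are in the tutorial itself; the expression below is an assumed sketch for illustration, consistent with the abstract's statement that BPC = 0 removes any advantage over a parallel cluster trial (it then reduces to the familiar 1 + (m − 1)·ICC factor):

```python
# Hedged sketch of a CRXO sample size inflation.  The design-effect
# formula is an assumption stated in the lead-in, not quoted from the
# tutorial; check it against the paper before use.
import math

def crxo_design_effect(m, wpc, bpc):
    """m: individuals per cluster-period; wpc/bpc as defined above."""
    return 1.0 + (m - 1) * wpc - m * bpc

def crxo_sample_size(n_individual, m, wpc, bpc):
    """Inflate an individually randomised sample size by the design effect."""
    return math.ceil(n_individual * crxo_design_effect(m, wpc, bpc))

# With bpc = 0 the design effect reduces to the parallel cluster-trial
# factor 1 + (m - 1) * icc; a nonzero bpc recovers some precision.
```

A usage example: `crxo_design_effect(50, 0.05, 0.0)` gives the full parallel-trial inflation of 3.45, while raising BPC toward the WPC shrinks the inflation, mirroring the WPC/BPC trade-off the abstract describes.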
Morgan, Katy E; Forbes, Andrew B; Keogh, Ruth H; Jairath, Vipul; Kahan, Brennan C
2017-01-30
In cluster randomised cross-over (CRXO) trials, clusters receive multiple treatments in a randomised sequence over time. In such trials, there is usually correlation between patients in the same cluster. In addition, within a cluster, patients in the same period may be more similar to each other than to patients in other periods. We demonstrate that it is necessary to account for these correlations in the analysis to obtain correct Type I error rates. We then use simulation to compare different methods of analysing a binary outcome from a two-period CRXO design. Our simulations demonstrated that hierarchical models without random effects for period-within-cluster, which do not account for any extra within-period correlation, performed poorly, with greatly inflated Type I errors in many scenarios. In scenarios where extra within-period correlation was present, a hierarchical model with random effects for cluster and period-within-cluster only had correct Type I errors when there were large numbers of clusters; with small numbers of clusters, the error rate was inflated. We also found that generalised estimating equations did not give correct error rates in any scenarios considered. An unweighted cluster-level summary regression performed best overall, maintaining an error rate close to 5% for all scenarios, although it lost power when extra within-period correlation was present, especially for small numbers of clusters. Results from our simulation study show that it is important to model both levels of clustering in CRXO trials, and that any extra within-period correlation should be accounted for. Copyright © 2016 John Wiley & Sons, Ltd.
Using the SaTScan method to detect local malaria clusters for guiding malaria control programmes
Directory of Open Access Journals (Sweden)
Kok Gerdalize
2009-04-01
Full Text Available Abstract Background Mpumalanga Province, South Africa is a low malaria transmission area that is subject to malaria epidemics. SaTScan methodology was used by the malaria control programme to detect local malaria clusters to assist disease control planning. The third season of case cluster identification overlapped with the first season of implementing an outbreak identification and response system in the area. Methods SaTScan™ software, using the Kulldorff method of retrospective space-time permutation and the Bernoulli purely spatial model, was used to identify malaria clusters from definitively confirmed individual cases in seven towns over three malaria seasons. Following passive case reporting at health facilities during the 2002 to 2005 seasons, active case detection was carried out in the communities; this assisted in determining the probable source of infection. The distribution and statistical significance of the clusters were explored by means of Monte Carlo replication of data sets under the null hypothesis, with more than 999 replications to ensure adequate power for defining clusters. Results and discussion SaTScan detected five purely spatial clusters and two space-time clusters during the study period. There was strong concordance between recognized local clustering of cases and outbreak declaration in specific towns. Both Albertsnek and Thambokulu reported malaria outbreaks in the same season as space-time clusters. This synergy may allow mutual validation of the two systems, in confirming outbreaks demanding additional resources and in identifying clusters at the local level to better target resources. Conclusion Exploring the clustering of cases assisted with the planning of public health activities, including mobilizing health workers and resources. Where appropriate, additional indoor residual spraying, focal larviciding and health promotion activities were also carried out.
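At the core of the Bernoulli scan statistic used here is a log-likelihood ratio computed for each candidate window and then ranked against Monte Carlo replications under the null. A hedged sketch of that per-window statistic (my reading of the standard Bernoulli formulation, not code from the study):

```python
# Hedged sketch of the Bernoulli scan statistic: the log-likelihood
# ratio for one candidate window containing c cases among n people,
# out of C cases among N people overall.  Significance would then be
# judged by ranking the best window's LLR against >= 999 Monte Carlo
# replications under the null, as described in the abstract.
import math

def xlogx(x):
    return x * math.log(x) if x > 0 else 0.0  # convention: 0*log 0 = 0

def bernoulli_llr(c, n, C, N):
    """LLR for a window (c of n cases) versus outside (C-c of N-n)."""
    if (C - c) / (N - n) >= c / n:
        return 0.0  # only elevated-risk windows are candidate clusters
    alt = (xlogx(c) + xlogx(n - c) - xlogx(n)
           + xlogx(C - c) + xlogx((N - n) - (C - c)) - xlogx(N - n))
    null = xlogx(C) + xlogx(N - C) - xlogx(N)
    return alt - null
```

For example, a town with the background case rate scores 0, and the score grows with the excess of observed over expected cases inside the window.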
Energy Technology Data Exchange (ETDEWEB)
Gianturco, F.A.; De Lara-Castells, M.P. [Univ. of Rome (Italy)
1996-10-05
Several modelings of exchange and correlation forces which can be carried out using density functional theory (DFT) methods have been analyzed to study their efficiency and reliability when evaluating possible competing structures of helium ionic clusters of increasing size. This study examines He_n^+ systems with n from 1 to 7 and compares the present calculations with earlier evaluations that used more conventional, and more computationally intensive, configuration interaction (CI) approaches. The present results indicate that it is indeed possible to strike a fruitful balance between reduction of computational times and quality of the ensuing structural information. 62 refs., 1 fig., 8 tabs.
Proposing Cluster_Similarity Method in Order to Find as Much Better Similarities in Databases
Feizi-Derakhshi, Mohammad-Reza
2011-01-01
Different ways of entering data into databases result in duplicate records that increase database size; this is a fact we cannot easily ignore, and several methods are used to address it. In this paper, we have tried to increase the accuracy of duplicate detection by using cluster similarity instead of the direct similarity of fields: clustering is performed on the fields of the database, and the similarity degree of records is obtained from the clustering of their fields. By using the information present in the database, this method obtains a more logical similarity for deficient information; overall, the cluster similarity method improved results by 24% compared with previous methods.
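The idea of comparing records through field clusters rather than direct string equality can be illustrated as follows. The clustering key used here (lowercased first three letters) is a crude stand-in for the paper's actual field clustering, chosen only to make the mechanism concrete:

```python
# Illustrative sketch of cluster similarity for duplicate detection:
# field values are first grouped into clusters, and two records are
# then compared by whether their field values share a cluster rather
# than by exact string equality.  The clustering key below is a
# hypothetical stand-in for the paper's field clustering.

def field_cluster(value):
    return value.strip().lower()[:3]  # crude stand-in clustering key

def record_similarity(rec_a, rec_b):
    """Fraction of corresponding fields whose values share a cluster."""
    same = sum(field_cluster(a) == field_cluster(b)
               for a, b in zip(rec_a, rec_b))
    return same / len(rec_a)

a = ("Jonathan Smith", "New York", "Engineer")
b = ("Jon Smith", "NEW YORK CITY", "Engineering")
```

Direct field comparison scores these two records as entirely different, while the cluster-level comparison recognizes them as a likely duplicate pair.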
K-Profiles: A Nonlinear Clustering Method for Pattern Detection in High Dimensional Data
Directory of Open Access Journals (Sweden)
Kai Wang
2015-01-01
Full Text Available With modern technologies such as microarray, deep sequencing, and liquid chromatography-mass spectrometry (LC-MS), it is possible to measure the expression levels of thousands of genes/proteins simultaneously to unravel important biological processes. A first step toward elucidating hidden patterns and understanding the massive data is the application of clustering techniques. Nonlinear relations, which in contrast to linear correlations have mostly gone unexploited, are prevalent in high-throughput data. In many cases, nonlinear relations can model the biological relationship more precisely and reflect critical patterns in the biological systems. Using the general dependency measure, Distance Based on Conditional Ordered List (DCOL), that we introduced before, we designed the nonlinear K-profiles clustering method, which can be seen as the nonlinear counterpart of the K-means clustering algorithm. The method has a built-in statistical testing procedure that ensures genes not belonging to any cluster do not impact the estimation of cluster profiles. Results from extensive simulation studies showed that K-profiles clustering not only outperformed the traditional linear K-means algorithm, but also performed significantly better than our previous General Dependency Hierarchical Clustering (GDHC) algorithm. We further analyzed a gene expression dataset, on which K-profiles clustering generated biologically meaningful results.
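The DCOL measure underlying the method can be sketched in one common formulation (a hedged reading, not the paper's exact definition): order one variable's values by the other variable, then average the absolute differences between neighbouring values. Any strong relation, linear or not, makes neighbours-in-x close in y, so DCOL is small:

```python
# Hedged sketch of a DCOL-style dependency measure: sort y by x, then
# average the absolute jumps between consecutive y values.  A strong
# (possibly nonlinear) relation gives small jumps; noise gives large ones.

def dcol(x, y):
    ordered = [yi for _, yi in sorted(zip(x, y))]
    return sum(abs(b - a) for a, b in zip(ordered, ordered[1:])) / (len(y) - 1)

xs = [0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0]
nonlinear = [(v - 1.5) ** 2 for v in xs]       # smooth U-shaped relation
noise = [2.25, 0.0, 1.0, 2.0, 0.25, 1.5, 0.5]  # same values, shuffled
```

Note that the U-shaped relation here has essentially zero linear correlation with `xs`, yet its DCOL score is clearly smaller than that of the shuffled series, which is exactly the kind of pattern K-means would miss and K-profiles is built to capture.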
Directory of Open Access Journals (Sweden)
Nicoló Musmeci
Full Text Available We quantify the amount of information filtered by different hierarchical clustering methods on correlations between stock returns, comparing the clustering structure with the underlying industrial activity classification. We apply, for the first time to financial data, a novel hierarchical clustering approach, the Directed Bubble Hierarchical Tree, and we compare it with other methods including the Linkage and k-medoids. By taking the industrial sector classification of stocks as a benchmark partition, we evaluate how the different methods retrieve this classification. The results show that the Directed Bubble Hierarchical Tree can outperform other methods, being able to retrieve more information with fewer clusters. Moreover, we show that the economic information is hidden at different levels of the hierarchical structures depending on the clustering method. The dynamical analysis on a rolling window also reveals that the different methods show different degrees of sensitivity to events affecting financial markets, like crises. These results can be of interest for all applications of clustering methods to portfolio optimization and risk hedging.
Musmeci, Nicoló; Aste, Tomaso; Di Matteo, T
2015-01-01
We quantify the amount of information filtered by different hierarchical clustering methods on correlations between stock returns, comparing the clustering structure with the underlying industrial activity classification. We apply, for the first time to financial data, a novel hierarchical clustering approach, the Directed Bubble Hierarchical Tree, and we compare it with other methods including the Linkage and k-medoids. By taking the industrial sector classification of stocks as a benchmark partition, we evaluate how the different methods retrieve this classification. The results show that the Directed Bubble Hierarchical Tree can outperform other methods, being able to retrieve more information with fewer clusters. Moreover, we show that the economic information is hidden at different levels of the hierarchical structures depending on the clustering method. The dynamical analysis on a rolling window also reveals that the different methods show different degrees of sensitivity to events affecting financial markets, like crises. These results can be of interest for all applications of clustering methods to portfolio optimization and risk hedging.
Discontinuous Galerkin Method for Total Variation Minimization on one-dimensional Inpainting Problem
Wang, Xijian
2011-01-01
This paper is concerned with the numerical minimization of energy functionals in $BV(\Omega)$ (the space of functions of bounded variation) involving total variation, for the one-dimensional gray-scale inpainting problem. Applications are shown using the finite element method and the discontinuous Galerkin method for total variation minimization. We include numerical examples which show the different images recovered by these two methods.
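A typical member of this class of functionals, stated here as background rather than as the paper's exact formulation, is the standard total-variation inpainting energy:

```latex
\min_{u \in BV(\Omega)} \; \int_{\Omega} |Du|
  \;+\; \frac{\lambda}{2} \int_{\Omega \setminus D} (u - f)^2 \, dx ,
```

where $f$ is the observed image, $D \subset \Omega$ is the inpainting region on which data are missing, and $\lambda > 0$ weights fidelity to the known data; the total-variation term $\int_{\Omega}|Du|$ is what places the problem in $BV(\Omega)$ and allows edges to be preserved across the inpainted region.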
A comparison of four clustering methods for brain expression microarray data
Directory of Open Access Journals (Sweden)
Owen Michael J
2008-11-01
Full Text Available Abstract Background DNA microarrays, which determine the expression levels of tens of thousands of genes from a sample, are an important research tool. However, the volume of data they produce can be an obstacle to interpretation of the results. Clustering the genes on the basis of similarity of their expression profiles can simplify the data, and potentially provides an important source of biological inference, but these methods have not been tested systematically on datasets from complex human tissues. In this paper, four clustering methods, CRC, k-means, ISA and memISA, are applied to three brain expression datasets. The results are compared on speed, gene coverage and GO enrichment. The effects of combining the clusters produced by each method are also assessed. Results k-means outperforms the other methods, with 100% gene coverage and GO enrichments only slightly exceeded by memISA and ISA. Those two methods produce greater GO enrichments on the datasets used, but at the cost of much lower gene coverage, fewer clusters produced, and speed. The clusters they find are largely different to those produced by k-means. Combining clusters produced by k-means and memISA or ISA leads to increased GO enrichment and number of clusters produced (compared to k-means alone), without negatively impacting gene coverage. memISA can also find potentially disease-related clusters. In two independent dorsolateral prefrontal cortex datasets, it finds three overlapping clusters that are either enriched for genes associated with schizophrenia, genes differentially expressed in schizophrenia, or both. Two of these clusters are enriched for genes of the MAP kinase pathway, suggesting a possible role for this pathway in the aetiology of schizophrenia. Conclusion Considered alone, k-means clustering is the most effective of the four methods on typical microarray brain expression datasets. However, memISA and ISA can add extra high-quality clusters to the set produced
Directory of Open Access Journals (Sweden)
Mehmet Tarik Atay
2013-01-01
Full Text Available The Variational Iteration Method (VIM) and the Modified Variational Iteration Method (MVIM) are used to find solutions of systems of stiff ordinary differential equations for both linear and nonlinear problems. Some examples are given to illustrate the accuracy and effectiveness of these methods, and we compare our results with exact results. In earlier studies of stiff ordinary differential equations, problems were solved by the Adomian Decomposition Method, VIM, and the Homotopy Perturbation Method. Comparisons with exact solutions reveal that VIM and MVIM are easier to implement; in fact, they are promising methods for various systems of linear and nonlinear stiff ordinary differential equations. Furthermore, VIM, or in some cases MVIM, gives exact solutions in linear cases and, depending on the stiffness ratio of the system to be solved, very satisfactory solutions compared to exact solutions in nonlinear cases.
Institute of Scientific and Technical Information of China (English)
梁立孚
1999-01-01
By using involutory transformations, the classical variational principle of general mechanics, the Hamiltonian principle of two kinds of variables, is advanced, and by using the undetermined Lagrangian multiplier method, generalized variational principles and generalized variational principles with subsidiary conditions are established. The stationary conditions of the various kinds of variational principles are derived and the related problems are discussed.
Variational methods applied to problems of diffusion and reaction
Strieder, William
1973-01-01
This monograph is an account of some problems involving diffusion or diffusion with simultaneous reaction that can be illuminated by the use of variational principles. It was written during a period that included sabbatical leaves of one of us (W.S.) at the University of Minnesota and the other (R.A.) at the University of Cambridge, and we are grateful to the Petroleum Research Fund for helping to support the former and the Guggenheim Foundation for making possible the latter. We would also like to thank Stephen Prager for getting us together in the first place and for showing how interesting and useful these methods can be. We have also benefitted from correspondence with Dr. A. M. Arthurs of the University of York and from the counsel of Dr. B. D. Coleman, the general editor of this series. Table of Contents: Chapter 1, Introduction and Preliminaries: 1.1 General Survey; 1.2 Phenomenological Descriptions of Diffusion and Reaction; 1.3 Correlation Functions for Random Suspensions; 1.4 Mean Free ...
Institute of Scientific and Technical Information of China (English)
JEONG Myeong-ho; JANG Yong-ll; PARK Soon-young; BAE Hae-young
2004-01-01
A shared-nothing spatial database cluster is a system that provides continuous service even if a failure happens in any node, so efficient recovery from system failure is very important. Generally, the existing method recovers a failed node by using both the cluster log and the local log. This method, however, causes several problems that increase the communication cost and the size of the cluster log. This paper proposes a novel recovery method that uses recently updated record information in a shared-nothing spatial database cluster. The proposed technique utilizes the update information of records and pointers to the actual data. This reduces the log size and the communication cost. Consequently, it reduces the recovery time of a failed node owing to less processing of update operations.
A method for context-based adaptive QRS clustering in real-time
Castro, Daniel; Presedo, Jesús
2014-01-01
Continuous follow-up of heart condition through long-term electrocardiogram monitoring is an invaluable tool for diagnosing some cardiac arrhythmias. In such a context, providing tools for quickly locating alterations of normal conduction patterns is mandatory and still remains an open issue. This work presents a real-time method for adaptive clustering of QRS complexes from multilead ECG signals that provides the set of QRS morphologies appearing during an ECG recording. The method processes the QRS complexes sequentially, grouping them into a dynamic set of clusters based on the information content of the temporal context. The clusters are represented by templates which evolve over time and adapt to changes in QRS morphology. Rules to create, merge and remove clusters are defined, along with techniques for noise detection in order to avoid their proliferation. To cope with beat misalignment, Derivative Dynamic Time Warping is used. The proposed method has been validated against the MIT-BIH Arrhythmia Database and...
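Beat alignment by time warping can be sketched as follows. Note that this is plain DTW on synthetic Gaussian "beats", not the Derivative DTW variant or real ECG data used in the paper:

```python
import numpy as np

def dtw_distance(a, b):
    """Classic dynamic time warping distance between two 1-D sequences."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

t = np.linspace(0.0, 1.0, 50)
beat = np.exp(-((t - 0.50) ** 2) / 0.005)     # idealized QRS-like bump
shifted = np.exp(-((t - 0.55) ** 2) / 0.005)  # same morphology, misaligned
# Warping absorbs the misalignment, so DTW beats a rigid point-wise distance.
print(dtw_distance(beat, shifted) < np.abs(beat - shifted).sum())
```

This robustness to misalignment is why a warping distance, rather than a plain sample-wise distance, is used when comparing beats to cluster templates.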
The Local Variational Multiscale Method for Turbulence Simulation.
Energy Technology Data Exchange (ETDEWEB)
Collis, Samuel Scott; Ramakrishnan, Srinivas
2005-05-01
Accurate and efficient turbulence simulation in complex geometries is a formidable challenge. Traditional methods are often limited by low accuracy and/or restrictions to simple geometries. We explore the merger of Discontinuous Galerkin (DG) spatial discretizations with Variational Multi-Scale (VMS) modeling, termed Local VMS (LVMS), to overcome these limitations. DG spatial discretizations support arbitrarily high-order accuracy on unstructured grids amenable to complex geometries. Furthermore, the high-order, hierarchical representation within DG provides a natural framework for a priori scale separation crucial for VMS implementation. We show that the combined benefits of DG and VMS within the LVMS method lead to a promising new approach to LES for use in complex geometries. The efficacy of LVMS for turbulence simulation is assessed by application to fully-developed turbulent channel flow. First, a detailed spatial resolution study is undertaken to record the effects of the DG discretization on turbulence statistics. Here, the local hp-refinement capabilities of DG are exploited to obtain reliable low-order statistics efficiently. Likewise, resolution guidelines for simulating wall-bounded turbulence using DG are established. We also explore the influence of enforcing Dirichlet boundary conditions indirectly through numerical fluxes in DG, which allows the solution to jump (slip) at the channel walls. These jumps are effective in simulating the influence of the wall commensurate with the local resolution, and this feature of DG is effective in mitigating near-wall resolution requirements. In particular, we show that by locally modifying the numerical viscous flux used at the wall, we are able to regulate the near-wall slip through a penalty that leads to improved shear-stress predictions. This work demonstrates the potential of the numerical viscous flux to act as a numerically consistent wall-model, and this success warrants future research. As in any high-order numerical method some
Source clustering in the Hi-GAL survey determined using a minimum spanning tree method
Beuret, M.; Billot, N.; Cambrésy, L.; Eden, D. J.; Elia, D.; Molinari, S.; Pezzuto, S.; Schisano, E.
2017-01-01
Aims: We investigate the clustering of the far-infrared sources from the Herschel infrared Galactic Plane Survey (Hi-GAL) in the Galactic longitude range of -71 to 67 deg. These clumps, and their spatial distribution, are an imprint of the original conditions within a molecular cloud. This will produce a catalogue of over-densities. Methods: The minimum spanning tree (MST) method was used to identify the over-densities in two dimensions. The catalogue was further refined by folding in heliocentric distances, resulting in more reliable over-densities, which are cluster candidates. Results: We found 1633 over-densities with more than ten members. Of these, 496 are defined as cluster candidates because of the reliability of the distances, with a further 1137 potential cluster candidates. The spatial distributions of the cluster candidates are different in the first and fourth quadrants, with all clusters following the spiral structure of the Milky Way. The cluster candidates are fractal. The clump mass functions of the clustered and isolated sources are statistically indistinguishable from each other and are consistent with Kroupa's initial mass function. Hi-GAL is a key project of the Herschel Space Observatory survey (Pilbratt et al. 2010) and uses the PACS (Poglitsch et al. 2010) and SPIRE (Griffin et al. 2010) cameras in parallel mode. The catalogues of cluster candidates and potential clusters are only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (http://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/597/A114
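The MST over-density step can be illustrated in two dimensions with toy source positions. The critical edge length of 0.3 and the cluster geometry below are arbitrary choices for the sketch, not Hi-GAL values:

```python
import numpy as np
from scipy.spatial import distance_matrix
from scipy.sparse.csgraph import connected_components, minimum_spanning_tree

rng = np.random.default_rng(1)
# Two compact synthetic clumps plus a sparse uniform background (toy data).
points = np.vstack([rng.normal(0.0, 0.05, (20, 2)),
                    rng.normal(3.0, 0.05, (20, 2)),
                    rng.uniform(-1.0, 4.0, (10, 2))])

# Build the MST of the full point set, then cut edges above a critical length;
# the surviving connected components are the candidate over-densities.
mst = minimum_spanning_tree(distance_matrix(points, points)).toarray()
mst[mst > 0.3] = 0.0
n_comp, labels = connected_components(mst + mst.T, directed=False)
sizes = np.bincount(labels)
print((sizes >= 10).sum())  # over-densities with at least ten members
```

Cutting the longest MST edges separates compact groups from the sparse background without assuming any cluster shape, which is the appeal of the method for clump catalogues.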
Wild; Blankley
2000-01-01
Four different two-dimensional fingerprint types (MACCS, Unity, BCI, and Daylight) and nine methods of selecting optimal cluster levels from the output of a hierarchical clustering algorithm were evaluated for their ability to select clusters that represent chemical series present in some typical examples of chemical compound data sets. The methods were evaluated using a Ward's clustering algorithm on subsets of the publicly available National Cancer Institute HIV data set, as well as with compounds from our corporate data set. We make a number of observations and recommendations about the choice of fingerprint type and cluster level selection methods for use in this type of clustering
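A minimal stand-in for Ward's clustering of binary fingerprints follows, using synthetic bit vectors rather than MACCS/Unity/BCI/Daylight keys, and a fixed two-cluster cut instead of the paper's cluster-level selection methods:

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

rng = np.random.default_rng(2)
# Synthetic binary "fingerprints": two series, each with its own fixed bit
# block plus sparse random bits (stand-ins for real 2-D structural keys).
series1 = (rng.random((15, 64)) < 0.1) | (np.arange(64) < 16)
series2 = (rng.random((15, 64)) < 0.1) | (np.arange(64) >= 48)
fps = np.vstack([series1, series2]).astype(float)

Z = linkage(fps, method="ward")                  # Ward's hierarchical clustering
labels = fcluster(Z, t=2, criterion="maxclust")  # select a two-cluster level
print(sorted(set(labels)))
```

In practice the hard part, as the abstract stresses, is choosing *which* level of the dendrogram to cut, not computing the hierarchy itself.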
Safner, T.; Miller, M.P.; McRae, B.H.; Fortin, M.-J.; Manel, S.
2011-01-01
Recently, techniques available for identifying clusters of individuals or boundaries between clusters using genetic data from natural populations have expanded rapidly. Consequently, there is a need to evaluate these different techniques. We used spatially-explicit simulation models to compare three spatial Bayesian clustering programs and two edge detection methods. Spatially-structured populations were simulated where a continuous population was subdivided by barriers. We evaluated the ability of each method to correctly identify boundary locations while varying: (i) time after divergence, (ii) strength of isolation by distance, (iii) level of genetic diversity, and (iv) amount of gene flow across barriers. To further evaluate the methods' effectiveness in detecting genetic clusters in natural populations, we used previously published data on North American pumas and a European shrub. Our results show that with simulated and empirical data, the Bayesian spatial clustering algorithms outperformed direct edge detection methods. All methods incorrectly detected boundaries in the presence of strong patterns of isolation by distance. Based on this finding, we support the application of Bayesian spatial clustering algorithms for boundary detection in empirical datasets, with necessary tests for the influence of isolation by distance. © 2011 by the authors; licensee MDPI, Basel, Switzerland.
Furihata, Daisuke
2010-01-01
Nonlinear Partial Differential Equations (PDEs) have become increasingly important in the description of physical phenomena. Unlike Ordinary Differential Equations, PDEs can be used to effectively model multidimensional systems. The methods put forward in Discrete Variational Derivative Method concentrate on a new class of "structure-preserving numerical equations" which improves the qualitative behaviour of the PDE solutions and allows for stable computing. The authors have also taken care to present their methods in an accessible manner, which means that the book will be useful to engineers
Mannila, Maria Nastase; Eriksson, Per; Lundman, Pia; Samnegård, Ann; Boquist, Susanna; Ericsson, Carl-Göran; Tornvall, Per; Hamsten, Anders; Silveira, Angela
2005-03-01
Fibrinogen has consistently been recognized as an independent predictor of myocardial infarction (MI). Multiple mechanisms link fibrinogen to MI; therefore disentangling the factors underlying variation in plasma fibrinogen concentration is essential. Candidate regions in the fibrinogen gamma (FGG), alpha (FGA) and beta (FGB) genes were screened for single nucleotide polymorphisms (SNPs). Several novel SNPs were detected in the FGG and FGA genes in addition to the previously known SNPs in the fibrinogen genes. Tight linkage disequilibrium extending over various physical distances was observed between most SNPs. Consequently, eight SNPs were chosen and determined in 377 postinfarction patients and 387 healthy individuals. None of the SNPs were associated with plasma fibrinogen concentration or MI. Haplotype analyses revealed a consistent pattern of haplotypes associated with variation in risk of MI. Of the four haplotypes inferred using the FGA -58G>A and FGG 1299 +79T>C SNPs, the most frequent haplotype, FGG-FGA*1 (prevalence 46.6%), was associated with increased risk of MI (OR 1.51; 95%CI 1.18, 1.93), whereas the least frequent haplotype, FGG-FGA*4 (11.8%), was associated with lower risk of MI (OR 0.79 95%CI 0.64, 0.98). In conclusion, fibrinogen haplotypes, but not SNPs in isolation, are associated with variation in risk of MI.
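The reported odds ratios can be reproduced in form (though not in value) from a 2x2 haplotype-by-status table; the counts below are hypothetical, chosen only to show the standard Woolf logit confidence interval:

```python
import math

# Hypothetical 2x2 table (invented counts, not the study's data):
# haplotype carriers / non-carriers among cases and controls.
a, b = 200, 177   # cases:    carriers, non-carriers
c, d = 160, 227   # controls: carriers, non-carriers

odds_ratio = (a * d) / (b * c)
se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)   # std. error of log(OR)
ci_low = math.exp(math.log(odds_ratio) - 1.96 * se)
ci_high = math.exp(math.log(odds_ratio) + 1.96 * se)
print(f"OR {odds_ratio:.2f} (95% CI {ci_low:.2f}, {ci_high:.2f})")
```

An interval excluding 1.0, as for FGG-FGA*1 above, is what supports reading the haplotype as a risk (or protective) factor.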
Hamilton City Board of Education (Ontario).
Suggestions for studying the topic of variation of individuals and objects (balls) to help develop elementary school students' measurement, comparison, classification, evaluation, and data collection and recording skills are made. General suggestions of variables that can be investigated are made for the study of human variation. Twelve specific…
Stenning, D. C.; Wagner-Kaiser, R.; Robinson, E.; van Dyk, D. A.; von Hippel, T.; Sarajedini, A.; Stein, N.
2016-07-01
We develop a Bayesian model for globular clusters composed of multiple stellar populations, extending earlier statistical models for open clusters composed of simple (single) stellar populations. Specifically, we model globular clusters with two populations that differ in helium abundance. Our model assumes a hierarchical structuring of the parameters in which physical properties—age, metallicity, helium abundance, distance, absorption, and initial mass—are common to (i) the cluster as a whole or to (ii) individual populations within a cluster, or are unique to (iii) individual stars. An adaptive Markov chain Monte Carlo (MCMC) algorithm is devised for model fitting that greatly improves convergence relative to its precursor non-adaptive MCMC algorithm. Our model and computational tools are incorporated into an open-source software suite known as BASE-9. We use numerical studies to demonstrate that our method can recover parameters of two-population clusters, and also show how model misspecification can potentially be identified. As a proof of concept, we analyze the two stellar populations of globular cluster NGC 5272 using our model and methods. (BASE-9 is available from GitHub: https://github.com/argiopetech/base/releases).
A new method to search for high-redshift clusters using photometric redshifts
Energy Technology Data Exchange (ETDEWEB)
Castignani, G.; Celotti, A. [SISSA, Via Bonomea 265, I-34136 Trieste (Italy); Chiaberge, M.; Norman, C., E-mail: castigna@sissa.it [Space Telescope Science Institute, 3700 San Martin Drive, Baltimore, MD 21218 (United States)
2014-09-10
We describe a new method (Poisson probability method, PPM) to search for high-redshift galaxy clusters and groups by using photometric redshift information and galaxy number counts. The method relies on Poisson statistics and is primarily introduced to search for megaparsec-scale environments around a specific beacon. The PPM is tailored to both the properties of the FR I radio galaxies in the Chiaberge et al. sample, which are selected within the COSMOS survey, and to the specific data set used. We test the efficiency of our method of searching for cluster candidates against simulations. Two different approaches are adopted. (1) We use two z ∼ 1 X-ray detected cluster candidates found in the COSMOS survey and we shift them to higher redshift up to z = 2. We find that the PPM detects the cluster candidates up to z = 1.5, and it correctly estimates both the redshift and size of the two clusters. (2) We simulate spherically symmetric clusters of different size and richness, and we locate them at different redshifts (i.e., z = 1.0, 1.5, and 2.0) in the COSMOS field. We find that the PPM detects the simulated clusters within the considered redshift range with a statistical 1σ redshift accuracy of ∼0.05. The PPM is an efficient alternative method for high-redshift cluster searches that may also be applied to both present and future wide field surveys such as SDSS Stripe 82, LSST, and Euclid. Accurate photometric redshifts and a survey depth similar or better than that of COSMOS (e.g., I < 25) are required.
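The core of a Poisson over-density test can be sketched in a few lines. The background mean and observed count below are hypothetical, and the PPM itself involves considerably more machinery (photometric-redshift binning, beacon-centred apertures):

```python
from scipy.stats import poisson

# Hypothetical counts: cells of the field average 5 photo-z-selected
# galaxies, while the cell around the beacon contains 14.
background_mean = 5.0
observed = 14

# Chance probability of >= observed counts under Poisson statistics
# (the survival function gives the upper tail).
p_value = poisson.sf(observed - 1, background_mean)
print(p_value)
```

A small tail probability flags the cell as a candidate over-density rather than a background fluctuation.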
Variations in Decision-Making Profiles by Age and Gender: A Cluster-Analytic Approach.
Delaney, Rebecca; Strough, JoNell; Parker, Andrew M; de Bruin, Wandi Bruine
2015-10-01
Using cluster-analysis, we investigated whether rational, intuitive, spontaneous, dependent, and avoidant styles of decision making (Scott & Bruce, 1995) combined to form distinct decision-making profiles that differed by age and gender. Self-report survey data were collected from 1,075 members of RAND's American Life Panel (56.2% female, 18-93 years, Mage = 53.49). Three decision-making profiles were identified: affective/experiential, independent/self-controlled, and an interpersonally-oriented dependent profile. Older people were less likely to be in the affective/experiential profile and more likely to be in the independent/self-controlled profile. Women were less likely to be in the affective/experiential profile and more likely to be in the interpersonally-oriented dependent profile. Interpersonally-oriented profiles are discussed as an overlooked but important dimension of how people make important decisions.
A new method to search for high redshift clusters using photometric redshifts
Castignani, Gianluca; Celotti, Annalisa; Norman, Colin
2014-01-01
We describe a new method (Poisson Probability Method, PPM) to search for high redshift galaxy clusters and groups by using photometric redshift information and galaxy number counts. The method relies on Poisson statistics and is primarily introduced to search for Mpc-scale environments around a specific beacon. The PPM is tailored to both the properties of the FR I radio galaxies in the Chiaberge et al. (2009) sample, which are selected within the COSMOS survey, and to the specific dataset used. We test the efficiency of our method of searching for cluster candidates against simulations. Two different approaches are adopted. i) We use two z~1 X-ray detected cluster candidates found in the COSMOS survey and we shift them to higher redshift up to z=2. We find that the PPM detects the cluster candidates up to z=1.5, and it correctly estimates both the redshift and size of the two clusters. ii) We simulate spherically symmetric clusters of different size and richness, and we locate them at different redshifts (i.e...
An efficient method of key-frame extraction based on a cluster algorithm.
Zhang, Qiang; Yu, Shao-Pei; Zhou, Dong-Sheng; Wei, Xiao-Peng
2013-12-18
This paper proposes a novel method of key-frame extraction for use with motion capture data. This method is based on an unsupervised cluster algorithm. First, the motion sequence is clustered into two classes by the similarity distance of the adjacent frames so that the thresholds needed in the next step can be determined adaptively. Second, a dynamic cluster algorithm called ISODATA is used to cluster all the frames and the frames nearest to the center of each class are automatically extracted as key-frames of the sequence. Unlike many other clustering techniques, the present improved cluster algorithm can automatically address different motion types without any need for specified parameters from users. The proposed method is capable of summarizing motion capture data reliably and efficiently. The present work also provides a meaningful comparison between the results of the proposed key-frame extraction technique and other previous methods. These results are evaluated in terms of metrics that measure reconstructed motion and the mean absolute error value, which are derived from the reconstructed data and the original data.
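ISODATA is not widely available in standard libraries, so this sketch substitutes plain k-means for the dynamic clustering step while keeping the nearest-to-centre key-frame rule; the motion data are synthetic poses, not real capture data:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)
# Toy "motion capture" data: 90 frames x 12 joint coordinates,
# three distinct poses held for 30 frames each, with small jitter.
poses = rng.normal(0.0, 1.0, (3, 12))
frames = np.repeat(poses, 30, axis=0) + rng.normal(0.0, 0.05, (90, 12))

km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(frames)
# Key-frame per cluster: the frame nearest to that cluster's centre.
key_frames = sorted(int(np.argmin(np.linalg.norm(frames - c, axis=1)))
                    for c in km.cluster_centers_)
print(key_frames)
```

The point of ISODATA in the paper is precisely that the number of clusters need not be fixed in advance, which the fixed `n_clusters=3` here sidesteps for brevity.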
A novel experimental method for the measurement of the caloric curves of clusters
Chirot, Fabien; Zamith, Sébastien; Labastie, Pierre; L'Hermite, Jean-Marc; 10.1063/1.3000628
2008-01-01
A novel experimental scheme has been developed in order to measure the heat capacity of mass selected clusters. It is based on controlled sticking of atoms on clusters. This allows one to construct the caloric curve, thus determining the melting temperature and the latent heat of fusion in the case of first-order phase transitions. This method is model-free. It is transferable to many systems since the energy is brought to clusters through sticking collisions. As an example, it has been applied to Na_90^+ and Na_140^+. Our results are in good agreement with previous measurements.
Rastgarpour, Maryam; Shanbehzadeh, Jamshid; Soltanian-Zadeh, Hamid
2014-08-01
Medical images are more affected by intensity inhomogeneity than by noise and outliers. This has a great impact on the efficiency of region-based image segmentation methods, because they rely on homogeneity of intensities in the regions of interest. Meanwhile, initialization and configuration of controlling parameters affect the performance of level set segmentation. To address these problems, this paper proposes a new hybrid method that integrates a local region-based level set method with a variation of fuzzy clustering. Specifically, it takes an information fusion approach based on a coarse-to-fine framework that seamlessly fuses local spatial information and gray level information with the information of the local region-based level set method. Also, the controlling parameters of the level set are directly computed from the fuzzy clustering result. This approach has valuable benefits such as automation, no need for prior knowledge about the region of interest (ROI), robustness to intensity inhomogeneity, automatic adjustment of controlling parameters, insensitivity to initialization, and satisfactory accuracy. The contribution of this paper is thus to provide these advantages together, which has not previously been done for inhomogeneous medical images. The proposed method was tested on several medical images from different modalities for performance evaluation. Experimental results confirm its effectiveness in segmenting medical images in comparison with similar methods.
Variational methods to estimate terrestrial ecosystem model parameters
Delahaies, Sylvain; Roulstone, Ian
2016-04-01
Carbon is at the basis of the chemistry of life. Its ubiquity in the Earth system is the result of complex recycling processes. Present in the atmosphere in the form of carbon dioxide, it is absorbed by marine and terrestrial ecosystems and stored within living biomass and decaying organic matter. Then soil chemistry and a non-negligible amount of time transform the dead matter into fossil fuels. Throughout this cycle, carbon dioxide is released into the atmosphere through respiration and combustion of fossil fuels. Model-data fusion techniques allow us to combine our understanding of these complex processes with an ever-growing amount of observational data to help improve models and predictions. The data assimilation linked ecosystem carbon (DALEC) model is a simple box model simulating the carbon budget allocation for terrestrial ecosystems. Over the last decade several studies have demonstrated the relative merit of various inverse modelling strategies (MCMC, EnKF, 4DVAR) to estimate model parameters and initial carbon stocks for DALEC and to quantify the uncertainty in the predictions. Despite its simplicity, DALEC represents the basic processes at the heart of more sophisticated models of the carbon cycle. Using adjoint-based methods we study inverse problems for DALEC with various data streams (8-day MODIS LAI, monthly MODIS LAI, NEE). The framework of constrained optimization allows us to incorporate ecological common sense into the variational framework. We use resolution matrices to study the nature of the inverse problems and to obtain data importance and information content for the different types of data. We study how varying the time step affects the solutions, and we show how "spin up" naturally improves the conditioning of the inverse problems.
An adaptive spatial clustering method for automatic brain MR image segmentation
Institute of Scientific and Technical Information of China (English)
Jingdan Zhang; Daoqing Dai
2009-01-01
In this paper, an adaptive spatial clustering method is presented for automatic brain MR image segmentation, based on a competitive learning algorithm, the self-organizing map (SOM). We use a pattern recognition approach in terms of feature generation and classifier design. Firstly, a multi-dimensional feature vector is constructed using local spatial information. Then, an adaptive spatial growing hierarchical SOM (ASGHSOM) is proposed as the classifier, which is an extension of SOM, fusing multi-scale segmentation with the competitive learning clustering algorithm to overcome the problem of overlapping grey-scale intensities on boundary regions. Furthermore, an adaptive spatial distance is integrated with ASGHSOM, in which local spatial information is considered in the clustering process to reduce the noise effect and the classification ambiguity. Our proposed method is validated by extensive experiments using both simulated and real MR data with varying noise levels, and is compared with state-of-the-art algorithms.
Atmospheric Cluster Dynamics Code: a flexible method for solution of the birth-death equations
Directory of Open Access Journals (Sweden)
M. J. McGrath
2012-03-01
Full Text Available The Atmospheric Cluster Dynamics Code (ACDC) is presented and explored. This program was created to study the first steps of atmospheric new particle formation by examining the formation of molecular clusters from atmospherically relevant molecules. The program models the cluster kinetics by explicit solution of the birth–death equations, using an efficient computer script for their generation and the MATLAB ode15s routine for their solution. Through the use of evaporation rate coefficients derived from formation free energies calculated by quantum chemical methods for clusters containing dimethylamine or ammonia and sulphuric acid, we have explored the effect of changing various parameters at atmospherically relevant monomer concentrations. We have included in our model clusters with 0–4 base molecules and 0–4 sulphuric acid molecules for which we have commensurable quantum chemical data. The tests demonstrate that large effects can be seen for even small changes in different parameters, due to the non-linearity of the system. In particular, changing the temperature had a significant impact on the steady-state concentrations of all clusters, while the boundary effects (allowing clusters to grow to sizes beyond the largest cluster that the code keeps track of, or forbidding such processes), coagulation sink terms, non-monomer collisions, sticking probabilities and monomer concentrations did not show as large effects under the conditions studied. Removal of coagulation sink terms prevented the system from reaching the steady state when all the initial cluster concentrations were set to the default value of 1 m^{−3}, which is probably an effect caused by studying only relatively small cluster sizes.
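A miniature birth-death system (monomers and dimers only, with hypothetical rate constants and an explicit coagulation-sink term, which the abstract notes is needed to reach a steady state) can be integrated with SciPy in place of the MATLAB ode15s routine used by ACDC:

```python
from scipy.integrate import solve_ivp

# Hypothetical rate constants in arbitrary units (not ACDC's
# quantum-chemistry-derived coefficients).
beta, gamma, sink, source = 1e-3, 0.5, 0.1, 1.0

def birth_death(t, c):
    """Two-cluster birth-death system: monomers c[0], dimers c[1]."""
    c1, c2 = c
    coll = beta * c1 * c1   # monomer-monomer collision (dimer birth)
    evap = gamma * c2       # dimer evaporation (dimer death)
    return [source - 2.0 * coll + 2.0 * evap - sink * c1,
            coll - evap - sink * c2]

# Integrate long enough for the system to relax to its steady state.
sol = solve_ivp(birth_death, (0.0, 1e4), [1.0, 0.0], rtol=1e-8, atol=1e-10)
c1_ss, c2_ss = sol.y[:, -1]
print(c1_ss, c2_ss)
```

Without the `sink` terms the total monomer content grows without bound under a constant source, mirroring the abstract's observation that removing the coagulation sink prevents a steady state.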
Cluster analysis of European Y-chromosomal STR haplotypes using the discrete Laplace method
DEFF Research Database (Denmark)
Andersen, Mikkel Meyer; Eriksen, Poul Svante; Morling, Niels
2014-01-01
The European Y-chromosomal short tandem repeat (STR) haplotype distribution has previously been analysed in various ways. Here, we introduce a new way of analysing population substructure using a new method based on clustering within the discrete Laplace exponential family that models... The method can be used for cluster analysis to further validate the discrete Laplace method. A very important practical fact is that the calculations can be performed on a normal computer. We identified two sub-clusters of the Eastern and Western European Y-STR haplotypes, similar to results of previous... studies. We also compared pairwise distances (between geographically separated samples) with those obtained using the AMOVA method and found good agreement. Further analyses that are impossible with AMOVA were made using the discrete Laplace method: analysis of the homogeneity in two different ways...
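The distribution at the heart of the method is the discrete Laplace pmf, P(X = k) = ((1 - p)/(1 + p)) p^|k-μ|, used to model STR allele distances around a central haplotype. The following check that it sums to one is a sketch of this building block only, not of the clustering itself:

```python
# Discrete Laplace pmf: P(X = k) = ((1 - p) / (1 + p)) * p**abs(k - mu),
# where 0 < p < 1 controls the spread around the central value mu.
def discrete_laplace_pmf(k, p, mu=0):
    return (1.0 - p) / (1.0 + p) * p ** abs(k - mu)

p = 0.4
# Truncated sum over the integers; the tail beyond |k| = 50 is negligible.
total = sum(discrete_laplace_pmf(k, p) for k in range(-50, 51))
print(total)  # ~1.0: the pmf sums to one over the integers
```
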
Smoothed Particle Inference: A Kilo-Parametric Method for X-ray Galaxy Cluster Modeling
Peterson, J. R.; Marshall, P. J.; Andersson, K.
2005-01-01
We propose an ambitious new method that models the intracluster medium in clusters of galaxies as a set of X-ray emitting smoothed particles of plasma. Each smoothed particle is described by a handful of parameters including temperature, location, size, and elemental abundances. Hundreds to thousands of these particles are used to construct a model cluster of galaxies, with the appropriate complexity estimated from the data quality. This model is then compared iteratively with X-ray data in t...
A multi-sequential number-theoretic optimization algorithm using clustering methods
Institute of Scientific and Technical Information of China (English)
XU Qing-song; LIANG Yi-zeng; HOU Zhen-ting
2005-01-01
A multi-sequential number-theoretic optimization method based on clustering was developed and applied to the optimization of functions with many local extrema. Details of the procedure to generate the clusters and the sequential schedules are given. The algorithm was assessed by comparing its performance with a generalized simulated annealing algorithm on a difficult instructive example and a D-optimum experimental design problem. Based on these two examples, the presented algorithm is shown to be more effective and reliable.
Directory of Open Access Journals (Sweden)
Issam SAHMOUDI
2013-12-01
Full Text Available Document clustering is a branch of a larger area of scientific study known as data mining, which is an unsupervised classification used to find structure in a collection of unlabeled data. The useful information in the documents can be accompanied by a large amount of noise words when using Full Text Representation, which will negatively affect the result of the clustering process. So there is great need to eliminate the noise words, keeping just the useful information, in order to enhance the quality of the clustering results. This problem occurs to different degrees for any language, such as English, European languages, Hindi, Chinese, and Arabic. To overcome this problem, in this paper, we propose a new and efficient keyphrase extraction method based on the suffix tree data structure (KpST); the extracted keyphrases are then used in the clustering process instead of Full Text Representation. The proposed method for keyphrase extraction is language independent and therefore may be applied to any language. In this investigation, we are interested in dealing with the Arabic language, which is one of the most complex languages. To evaluate our method, we conduct an experimental study on Arabic documents using the most popular clustering approach of hierarchical algorithms: the agglomerative hierarchical algorithm with seven linkage techniques and a variety of distance functions and similarity measures to perform the Arabic document clustering task. The obtained results show that our method for extracting keyphrases increases the quality of the clustering results. We also propose to study the effect of using stemming on the testing dataset, clustering it with the same document clustering techniques and similarity/distance measures.
Statistical Methods for Studying Genetic Variation in Populations
2012-08-01
... all the measurements at a particular marker for all individuals. We use the Bayesian Information Criterion (BIC) [Schwarz, 1978] to determine... dimensionality models in combination with an information criterion [Akaike, 1974; Gao et al., 2011; Schwarz, 1978] to decide the number of ancestral... Hong Gao, Katarzyna Bryc, and Carlos D. Bustamante. On identifying the optimal number of population clusters via the deviance information criterion.
A semantics-based method for clustering of Chinese web search results
Zhang, Hui; Wang, Deqing; Wang, Li; Bi, Zhuming; Chen, Yong
2014-01-01
Information explosion is a critical challenge to the development of modern information systems. In particular, when the application of an information system is over the Internet, the amount of information on the web has been increasing exponentially and rapidly. Search engines, such as Google and Baidu, are essential tools for people to find information on the Internet. Valuable information, however, is still likely to be submerged in the ocean of search results from those tools. By automatically clustering the results into different groups based on subjects, a search engine with a clustering feature allows users to select the most relevant results quickly. In this paper, we propose an online semantics-based method to cluster Chinese web search results. First, we employ the generalised suffix tree to extract the longest common substrings (LCSs) from search snippets. Second, we use HowNet to calculate the similarities of the words derived from the LCSs, and extract the most representative features by constructing the vocabulary chain. Third, we construct a vector of text features and calculate snippets' semantic similarities. Finally, we improve the Chameleon algorithm to cluster snippets. Extensive experimental results have shown that the proposed algorithm outperforms the suffix tree clustering method and other traditional clustering methods.
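The longest-common-substring step can be imitated with the standard library. Here difflib's SequenceMatcher is a stand-in for the generalised suffix tree actually used, and the snippets are invented:

```python
from difflib import SequenceMatcher

# Invented search snippets sharing one phrase (toy stand-ins for real
# search-engine result snippets).
a = "clustering of web search results by topic"
b = "grouping web search results into topics"

# Longest common substring between the two snippets.
m = SequenceMatcher(None, a, b).find_longest_match(0, len(a), 0, len(b))
print(a[m.a:m.a + m.size].strip())  # "web search results"
```

A suffix tree finds all such common substrings across many snippets in linear time, which is why it, rather than pairwise matching, is used in the paper.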
Non-parametric method for measuring gas inhomogeneities from X-ray observations of galaxy clusters
Morandi, Andrea; Cui, Wei
2013-01-01
We present a non-parametric method to measure inhomogeneities in the intracluster medium (ICM) from X-ray observations of galaxy clusters. Analyzing mock Chandra X-ray observations of simulated clusters, we show that our new method enables accurate recovery of the 3D gas density and gas clumping factor profiles out to large radii of galaxy clusters. We then apply this method to Chandra X-ray observations of Abell 1835 and present the first determination of the gas clumping factor from X-ray cluster data. We find that the gas clumping factor in Abell 1835 increases with radius and reaches ~2-3 at r=R_{200}. This is in good agreement with the predictions of hydrodynamical simulations, but significantly below the values inferred from recent Suzaku observations. We further show that the radially increasing gas clumping factor flattens the derived entropy profile of the ICM and affects the physical interpretation of the cluster gas structure, especially at large cluster-centric radii. Our...
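The gas clumping factor referred to above is conventionally defined as C = &lt;n²&gt;/&lt;n&gt;², where n is the gas density; a minimal sketch of the definition (not of the paper's non-parametric deprojection):

```python
def clumping_factor(density):
    """Gas clumping factor C = <n^2> / <n>^2 over a set of density
    samples (e.g. within a radial shell). C = 1 for a perfectly
    homogeneous medium; C > 1 indicates clumping."""
    n = len(density)
    mean = sum(density) / n
    mean_sq = sum(d * d for d in density) / n
    return mean_sq / (mean * mean)
```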
Directory of Open Access Journals (Sweden)
Jibing Wu
2017-01-01
Full Text Available Clustering analysis is a basic and essential method for mining heterogeneous information networks, which consist of multiple types of objects and rich semantic relations among different object types. Heterogeneous information networks are ubiquitous in real-world applications, such as bibliographic networks and social media networks. Unfortunately, most existing approaches, such as spectral clustering, are designed to analyze homogeneous information networks, which are composed of only one type of objects and links. Some recent studies have focused on heterogeneous information networks and yielded promising results, such as RankClus and NetClus. However, they often assume that heterogeneous information networks follow simple schemas, such as a bityped network schema or a star network schema. To overcome these limitations, we model the heterogeneous information network as a tensor without any restriction on the network schema. A tensor CP decomposition method is then adapted to formulate the clustering problem in heterogeneous information networks. Further, we develop two stochastic gradient descent algorithms, SGDClus and SOSClus, which cluster multityped objects simultaneously and effectively. Experimental results on both synthetic datasets and a real-world dataset demonstrate that the proposed clustering framework can model heterogeneous information networks efficiently and outperforms state-of-the-art clustering methods.
MHCcluster, a method for functional clustering of MHC molecules
DEFF Research Database (Denmark)
Thomsen, Martin Christen Frølund; Lundegaard, Claus; Buus, Søren;
2013-01-01
binding specificity. The method has a flexible web interface that allows the user to include any MHC of interest in the analysis. The output consists of a static heat map and graphical tree-based visualizations of the functional relationship between MHC variants and a dynamic TreeViewer interface where...
A comparison of clustering methods for writer identification and verification
Bulacu, M.L.; Schomaker, L.R.B.
2005-01-01
An effective method for writer identification and verification is based on assuming that each writer acts as a stochastic generator of ink-trace fragments, or graphemes. The probability distribution of these simple shapes in a given handwriting sample is characteristic for the writer and is computed
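A writer's grapheme histogram can be compared to a query sample with a standard histogram distance; the chi-square distance below is one common choice for this kind of shape-usage distribution (the specific distance measure is an assumption for illustration, not necessarily the one used in the paper):

```python
def chi2_distance(p, q, eps=1e-12):
    """Chi-square distance between two normalised grapheme-usage
    histograms p and q. Smaller values mean the two handwriting
    samples use similar ink-trace fragments; eps avoids 0/0 bins."""
    return 0.5 * sum((a - b) ** 2 / (a + b + eps) for a, b in zip(p, q))
```

For identification, the query sample would be assigned to the enrolled writer whose histogram minimises this distance.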
Adaptive cluster sampling: An efficient method for assessing inconspicuous species
Andrea M. Silletti; Joan Walker
2003-01-01
Restorationists typically evaluate the success of a project by estimating the population sizes of species that have been planted or seeded. Because a total census is rarely feasible, they must rely on sampling methods for population estimates. However, traditional random sampling designs may be inefficient for species that, for one reason or another, are challenging to...
Source clustering in the Hi-GAL survey determined using a minimum spanning tree method
Beuret, Maxime; Cambrésy, Laurent; Eden, David J; Elia, Davide; Molinari, Sergio; Pezzuto, Stefano; Schisano, Eugenio
2016-01-01
The aim is to investigate the clustering of far-infrared sources from the Herschel infrared Galactic Plane Survey (Hi-GAL) in the Galactic longitude range of -71 to 67 deg. These clumps, and their spatial distribution, are an imprint of the original conditions within a molecular cloud, and the analysis produces a catalogue of over-densities. The minimum spanning tree (MST) method was used to identify the over-densities in two dimensions. The catalogue was further refined by folding in heliocentric distances, resulting in more reliable over-densities, which are cluster candidates. We found 1,633 over-densities with more than ten members. Of these, 496 are defined as cluster candidates because of the reliability of the distances, with a further 1,137 potential cluster candidates. The spatial distributions of the cluster candidates are different in the first and fourth quadrants, with all clusters following the spiral structure of the Milky Way. The cluster candidates are fractal. The clump mass functions of the c...
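The MST over-density search can be sketched with a Kruskal-style union-find: sources joined by edges no longer than a critical separation form the candidate over-densities (a generic 2-D sketch; the actual critical length and source coordinates come from the survey analysis and are not reproduced here):

```python
import math

def mst_clusters(points, cut_length):
    """MST / single-linkage clustering: connect points whose mutual
    distance is at most cut_length (equivalent to building the minimum
    spanning tree and removing edges longer than the cut). Returns the
    resulting groups of point indices, i.e. the over-density candidates."""
    n = len(points)
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    edges = sorted(
        (math.dist(points[i], points[j]), i, j)
        for i in range(n) for j in range(i + 1, n)
    )
    for d, i, j in edges:
        if d > cut_length:
            break  # all remaining edges exceed the critical separation
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
    groups = {}
    for i in range(n):
        groups.setdefault(find(i), []).append(i)
    return list(groups.values())
```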
Robustness of serial clustering of extratropical cyclones to the choice of tracking method
Directory of Open Access Journals (Sweden)
Joaquim G. Pinto
2016-07-01
Full Text Available Cyclone clusters are a frequent synoptic feature in the Euro-Atlantic area. Recent studies have shown that serial clustering of cyclones generally occurs on both flanks and downstream regions of the North Atlantic storm track, while cyclones tend to occur more regularly on the western side of the North Atlantic basin near Newfoundland. This study explores the sensitivity of serial clustering to the choice of cyclone tracking method using cyclone track data from 15 methods derived from ERA-Interim data (1979–2010. Clustering is estimated by the dispersion (ratio of variance to mean of winter [December–February (DJF)] cyclone passages near each grid point over the Euro-Atlantic area. The mean number of cyclone counts and their variance are compared between methods, revealing considerable differences, particularly for the latter. Results show that all tracking methods qualitatively capture similar large-scale spatial patterns of underdispersion and overdispersion over the study region. The quantitative differences can primarily be attributed to the differences in the variance of cyclone counts between the methods. Nevertheless, overdispersion is statistically significant for almost all methods over parts of the eastern North Atlantic and Western Europe, and is therefore considered a robust feature. The influence of the North Atlantic Oscillation (NAO on cyclone clustering displays a similar pattern for all tracking methods, with one maximum near Iceland and another between the Azores and Iberia. The differences in variance between methods are not related to different sensitivities to the NAO, which can account for over 50% of the clustering in some regions. We conclude that the general features of underdispersion and overdispersion of extratropical cyclones over the North Atlantic and Western Europe are robust to the choice of tracking method. The same is true for the influence of the NAO on cyclone dispersion.
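The dispersion used here is the ratio of the variance to the mean of seasonal cyclone counts; values above 1 indicate clustering (overdispersion) relative to a Poisson process, values below 1 regularity (some studies quote var/mean - 1 instead; the plain ratio is sketched below):

```python
def dispersion(counts):
    """Dispersion statistic psi = var/mean of cyclone counts per season
    at a grid point. psi > 1: overdispersion (serial clustering);
    psi < 1: underdispersion (regular occurrence); psi = 1: Poisson."""
    n = len(counts)
    mean = sum(counts) / n
    var = sum((c - mean) ** 2 for c in counts) / n
    return var / mean
```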
A Sensitive Attribute-based Clustering Method for k-anonymization
Bhaladhare, Pawan R
2012-01-01
In medical organizations, large amounts of personal data are collected and analyzed by data miners or researchers. However, the collected data may contain sensitive information, such as a patient's specific disease, and should be kept confidential. Hence, the analysis of such data must include safeguards against threats to individual privacy. In this context, greater emphasis is now placed on privacy-preserving algorithms in data mining research. One such approach, anonymization, can protect private information, but valuable information can be lost in the process. The main challenge is therefore to minimize information loss during anonymization. The proposed method groups similar data together based on the sensitive attribute and then anonymizes each group. Our experimental results show that the proposed method offers better outcomes with respect to information loss and execution time.
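A minimal sketch of the idea of grouping records by sensitive attribute before anonymising the quasi-identifiers (the record fields and the age-generalisation rule below are hypothetical, for illustration only; they are not taken from the paper):

```python
from collections import defaultdict

def cluster_by_sensitive(records, sensitive_key):
    """Group records that share a sensitive attribute value, so that
    anonymisation can be applied per group."""
    groups = defaultdict(list)
    for rec in records:
        groups[rec[sensitive_key]].append(rec)
    return dict(groups)

def generalize_age(group):
    """Example quasi-identifier generalisation: replace exact ages in a
    group with the group's common range, trading precision for privacy."""
    ages = [r["age"] for r in group]
    span = f"{min(ages)}-{max(ages)}"
    return [{**r, "age": span} for r in group]
```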
Perturbative vs. variational methods in the study of carbon nanotubes
DEFF Research Database (Denmark)
Cornean, Horia; Pedersen, Thomas Garm; Ricaud, Benjamin
2007-01-01
Recent two-photon photo-luminescence experiments give accurate data for the ground and first excited excitonic energies at different nanotube radii. In this paper we compare the analytic approximations proved in [CDR] with a standard variational approach. We show an excellent agreement at suffic...
An effective trust-based recommendation method using a novel graph clustering algorithm
Moradi, Parham; Ahmadian, Sajad; Akhlaghian, Fardin
2015-10-01
Recommender systems are programs that aim to provide personalized recommendations to users for specific items (e.g. music, books) in online sharing communities or on e-commerce sites. Collaborative filtering methods are important and widely accepted types of recommender systems that generate recommendations based on the ratings of like-minded users. On the other hand, these systems confront several inherent issues, such as the data sparsity and cold start problems, caused by too few ratings relative to the unknowns that need to be predicted. Incorporating trust information into collaborative filtering systems is an attractive approach to resolving these problems. In this paper, we present a model-based collaborative filtering method that applies a novel graph clustering algorithm and also considers trust statements. In the proposed method, the problem space is first represented as a graph, and a sparsest-subgraph-finding algorithm is applied to the graph to find the initial cluster centers. Then, the proposed graph clustering algorithm is performed to obtain the appropriate user/item clusters. Finally, the identified clusters are used as a set of neighbors to recommend unseen items to the current active user. Experimental results based on three real-world datasets demonstrate that the proposed method outperforms several state-of-the-art recommender system methods.
A New Method to Quantify X-ray Substructures in Clusters of Galaxies
Andrade-Santos, Felipe; Laganá, Tatiana Ferraz
2011-01-01
We present a new method to quantify substructures in clusters of galaxies, based on the analysis of the intensity of structures. This analysis is done in a residual image, the result of subtracting a surface brightness model, obtained by fitting a two-dimensional analytical model (beta-model or Sérsic profile) with elliptical symmetry, from the X-ray image. Our method is applied to 34 clusters observed with the Chandra X-ray Observatory that are in the redshift range 0.02
Improved fuzzy identification method based on Hough transformation and fuzzy clustering
Institute of Scientific and Technical Information of China (English)
刘福才; 路平立; 潘江华; 裴润
2004-01-01
This paper presents an approach to the identification of a fuzzy model of a SISO system. The initial values of the cluster centers are identified by the Hough transformation, which accounts for the linearity and continuity of the given input-output data. For identification of the premise-part parameters, we use the fuzzy C-means clustering method. The consequent parameters are identified by recursive least squares. This method not only makes the approximation more accurate, but also simplifies the computation and is more easily realized. Finally, simulation shows that this method is useful for the identification of a fuzzy model.
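The premise-part identification uses fuzzy C-means; a bare 1-D sketch of the FCM update loop is below (in the paper the initial centers come from the Hough transformation, whereas here they are supplied by the caller):

```python
def fuzzy_c_means_1d(data, centers, m=2.0, iters=50):
    """Plain fuzzy-C-means on 1-D data. `m` is the fuzzifier; `centers`
    are the initial cluster centers (Hough-seeded in the paper).
    Returns the converged centers and the membership matrix u[i][k]."""
    for _ in range(iters):
        u = []
        for x in data:
            dists = [abs(x - c) + 1e-12 for c in centers]
            row = []
            for dk in dists:
                # standard FCM membership: u_k = 1 / sum_j (d_k/d_j)^(2/(m-1))
                denom = sum((dk / dj) ** (2.0 / (m - 1.0)) for dj in dists)
                row.append(1.0 / denom)
            u.append(row)
        # update each center as the membership-weighted mean of the data
        centers = [
            sum((u[i][k] ** m) * data[i] for i in range(len(data)))
            / sum(u[i][k] ** m for i in range(len(data)))
            for k in range(len(centers))
        ]
    return centers, u
```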
Directory of Open Access Journals (Sweden)
Burhan Ergen
2014-01-01
Full Text Available This paper proposes two edge detection methods for medical images by integrating the advantages of the Gabor wavelet transform (GWT) and unsupervised clustering algorithms. The GWT is used to enhance the edge information in an image while suppressing noise. Following this, the k-means and fuzzy c-means (FCM) clustering algorithms are used to convert a gray-level image into a binary image. The proposed methods are tested using medical images obtained through Computed Tomography (CT) and Magnetic Resonance Imaging (MRI) devices, and a phantom image. The results show that the proposed methods are successful for edge detection, even in noisy cases.
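The grey-to-binary conversion step can be sketched as two-class k-means on grey levels (a 1-D sketch over a flat pixel list; the paper applies k-means and FCM to Gabor-enhanced 2-D CT/MRI images):

```python
def kmeans_binarize(pixels, iters=20):
    """Two-class k-means on grey levels: assign each pixel to the nearer
    of two centers, then recompute the centers as class means. Returns
    0/1 labels per pixel plus the final centers, i.e. a data-driven
    binarisation threshold."""
    c0, c1 = min(pixels), max(pixels)  # simple extreme-value seeding
    lab = [0] * len(pixels)
    for _ in range(iters):
        lab = [0 if abs(p - c0) <= abs(p - c1) else 1 for p in pixels]
        g0 = [p for p, l in zip(pixels, lab) if l == 0]
        g1 = [p for p, l in zip(pixels, lab) if l == 1]
        if g0:
            c0 = sum(g0) / len(g0)
        if g1:
            c1 = sum(g1) / len(g1)
    return lab, (c0, c1)
```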
Ocak, Mahir E
2012-01-01
Firstly, a sequential symmetry adaptation procedure is derived for semidirect product groups. This procedure is then used in the development of a new method, named Monomer Basis Representation (MBR), for calculating the vibration-rotation-tunneling (VRT) spectra of molecular clusters. The method is based on generating an optimized basis for each monomer in the cluster as a linear combination of primitive basis functions, and then using the sequential symmetry adaptation procedure to generate a small symmetry-adapted basis for the solution of the full problem. Given an optimized basis for each monomer, applying the sequential symmetry adaptation procedure as-is leads to a generalized eigenvalue problem instead of a standard eigenvalue problem. In this paper, the MBR method is developed as a solution to that problem, such that it generates an orthogonal optimized basis for the cluster being studied regardless of...
Energy Technology Data Exchange (ETDEWEB)
Girardi, E.; Ruggieri, J.M. [CEA Cadarache (DER/SPRC/LEPH), 13 - Saint-Paul-lez-Durance (France). Dept. d' Etudes des Reacteurs; Santandrea, S. [CEA Saclay, Dept. Modelisation de Systemes et Structures DM2S/SERMA/LENR, 91 - Gif sur Yvette (France)
2005-07-01
This paper describes a recently-developed extension of our 'Multi-methods, multi-domains' (MM-MD) method for the solution of the multigroup transport equation. Based on a domain decomposition technique, our approach allows us to treat the one-group equation by cooperatively employing several numerical methods together. In this work, we describe the coupling between the Method of Characteristics (integro-differential equation, unstructured meshes) and the Variational Nodal Method (even-parity equation, Cartesian meshes). The coupling method is then applied to the benchmark model of the Phebus experimental facility (CEA Cadarache). Our domain decomposition method gives us the capability to employ a very fine mesh to describe a particular fuel bundle with an appropriate numerical method (MOC), while using a much larger mesh size in the rest of the core, in conjunction with a coarse-mesh method (VNM). This application shows the benefits of our MM-MD approach in terms of accuracy and computing time: the domain decomposition method allows us to reduce the CPU time while preserving good accuracy of the neutronic indicators: reactivity, core-to-bundle power coupling coefficient and flux error. (authors)
The tidal tails of globular cluster Palomar 5 based on the neural networks method
Institute of Scientific and Technical Information of China (English)
Hu Zou; Zhen-Yu WU; Jun Ma; Xu Zhou
2009-01-01
The sixth Data Release (DR6) of the Sloan Digital Sky Survey (SDSS) provides more photometric regions, new features and more accurate data around the globular cluster Palomar 5. A new method, Back Propagation Neural Network (BPNN), is used to estimate the cluster membership probability in order to detect its tidal tails. Cluster and field stars, used for training the networks, are extracted over a 40×20 deg² field by color-magnitude diagrams (CMDs). The best BPNNs, with two hidden layers and a Levenberg-Marquardt (LM) training algorithm, are determined by the chosen cluster and field samples. The membership probabilities of stars in the whole field are obtained with the BPNNs, and contour maps of the probability distribution show that one tail extends 5.42° to the north of the cluster and another extends 3.77° to the south. The tails are similar to those detected by Odenkirchen et al., but no more debris from the cluster is found to the northeast in the sky. The radial density profiles are investigated both along the tails and near the cluster center. Quite a few substructures are discovered in the tails. The number density profile of the cluster is fitted with the King model and the tidal radius is determined as 14.28'. However, the King model cannot fit the observed profile in the outer regions (R > 8') because of the tidal tails generated by the tidal force. Luminosity functions of the cluster and the tidal tails are calculated, which confirm that the tails originate from Palomar 5.
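The King model fitted above has the empirical surface-density form f(r) = k[(1 + (r/r_c)²)^(-1/2) - (1 + (r_t/r_c)²)^(-1/2)]² for r < r_t and zero outside, with core radius r_c and tidal radius r_t (14.28' for Palomar 5 in this work). A direct evaluation (the parameter values in the usage are hypothetical):

```python
def king_profile(r, k, r_c, r_t):
    """Empirical King (1962) number-density profile.

    r   : projected radius
    k   : normalisation
    r_c : core radius
    r_t : tidal radius (profile drops to zero there)
    """
    if r >= r_t:
        return 0.0
    term = (1.0 + (r / r_c) ** 2) ** -0.5 - (1.0 + (r_t / r_c) ** 2) ** -0.5
    return k * term * term
```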
New Clustering Method in High-Dimensional Space Based on Hypergraph-Models
Institute of Scientific and Technical Information of China (English)
CHEN Jian-bin; WANG Shu-jing; SONG Han-tao
2006-01-01
To overcome the limitation of traditional clustering algorithms, which fail to produce meaningful clusters in high-dimensional, sparse and binary-valued data sets, a new method based on a hypergraph model is proposed. The hypergraph model maps the relationships present in the original high-dimensional data into a hypergraph. A hyperedge represents the similarity of the attribute-value distribution between two points. A hypergraph partitioning algorithm is used to find a partitioning of the vertices such that the corresponding data items in each partition are highly related and the weight of the hyperedges cut by the partitioning is minimized. The quality of the clustering result can be evaluated by the intra-cluster singularity value. Analysis and experimental results demonstrate that this approach is applicable and effective over a wide range of settings.
The IMACS Cluster Building Survey. I. Description of the Survey and Analysis Methods
Oemler, Augustus; Gladders, Michael G; Rigby, Jane R; Bai, Lei; Kelson, Daniel; Villanueva, Edward; Fritz, Jacopo; Rieke, George; Poggianti, Bianca M; Vulcani, Benedetta
2013-01-01
The IMACS Cluster Building Survey uses the wide field spectroscopic capabilities of the IMACS spectrograph on the 6.5m Baade Telescope to survey the large-scale environment surrounding rich intermediate-redshift clusters of galaxies. The goal is to understand the processes which may be transforming star-forming field galaxies into quiescent cluster members as groups and individual galaxies fall into the cluster from the surrounding supercluster. This first paper describes the survey: the data taking and reduction methods. We provide new calibrations of star formation rates derived from optical and infrared spectroscopy and photometry. We demonstrate that there is a tight relation between the observed star formation rate per unit B luminosity, and the ratio of the extinctions of the stellar continuum and the optical emission lines. With this, we can obtain accurate extinction-corrected colors of galaxies. Using these colors as well as other spectral measures, we determine new criteria for the existence of ongo...
A Load Balancing Algorithm Based on Maximum Entropy Methods in Homogeneous Clusters
Directory of Open Access Journals (Sweden)
Long Chen
2014-10-01
Full Text Available In order to solve the problems of ill-balanced task allocation, long response time, low throughput and poor performance when a cluster system assigns tasks, we introduce the thermodynamic concept of entropy into load balancing algorithms. This paper proposes a new load balancing algorithm for homogeneous clusters based on the Maximum Entropy Method (MEM). By calculating the entropy of the system and using the maximum entropy principle to ensure that each scheduling and migration step follows the increasing tendency of the entropy, the system can reach the load-balanced state as soon as possible, shorten task execution time and achieve high performance. The results of simulation experiments show that this algorithm outperforms traditional algorithms in both the time to reach load balance and its extent in a homogeneous cluster system. It also offers a novel approach to the load balancing problem of homogeneous cluster systems.
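The entropy-guided scheduling rule can be sketched as follows: compute the Shannon entropy of the normalised node-load distribution and accept only migrations that increase it, since the balanced state maximises the entropy (unit-sized tasks and a greedy single-move search are simplifying assumptions, not the paper's full algorithm):

```python
import math

def load_entropy(loads):
    """Shannon entropy of the normalised node-load distribution.
    Maximal (log n) when all n nodes carry equal load."""
    total = sum(loads)
    ps = [l / total for l in loads if l > 0]
    return -sum(p * math.log(p) for p in ps)

def best_migration(loads):
    """Greedy rule: try every single unit-task move between nodes and
    keep the one that increases the entropy most; return None when no
    move helps (i.e. the system is already balanced)."""
    best, move = load_entropy(loads), None
    for i in range(len(loads)):
        for j in range(len(loads)):
            if i == j or loads[i] == 0:
                continue
            trial = list(loads)
            trial[i] -= 1
            trial[j] += 1
            h = load_entropy(trial)
            if h > best:
                best, move = h, (i, j)
    return move, best
```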
Barker, Daniel; D'Este, Catherine; Campbell, Michael J; McElduff, Patrick
2017-03-09
Stepped wedge cluster randomised trials frequently involve a relatively small number of clusters. The most common frameworks used to analyse data from these types of trials are generalised estimating equations and generalised linear mixed models. A topic of much research into these methods has been their application to cluster randomised trial data and, in particular, the number of clusters required to make reasonable inferences about the intervention effect. However, for stepped wedge trials, which have been claimed by many researchers to have a statistical power advantage over the parallel cluster randomised trial, the minimum number of clusters required has not been investigated. We conducted a simulation study where we considered the most commonly used methods suggested in the literature to analyse cross-sectional stepped wedge cluster randomised trial data. We compared the per cent bias, the type I error rate and power of these methods in a stepped wedge trial setting with a binary outcome, where there are few clusters available and when the appropriate adjustment for a time trend is made, which by design may be confounding the intervention effect. We found that the generalised linear mixed modelling approach is the most consistent when few clusters are available. We also found that none of the common analysis methods for stepped wedge trials were both unbiased and maintained a 5% type I error rate when there were only three clusters. Of the commonly used analysis approaches, we recommend the generalised linear mixed model for small stepped wedge trials with binary outcomes. We also suggest that in a stepped wedge design with three steps, at least two clusters be randomised at each step, to ensure that the intervention effect estimator maintains the nominal 5% significance level and is also reasonably unbiased.
Cluster Analysis of the Newcastle Electronic Corpus of Tyneside English: A Comparison of Methods
Moisl, Hermann; Jones, Val
2005-01-01
This article examines the feasibility of an empirical approach to sociolinguistic analysis of the Newcastle Electronic Corpus of Tyneside English using exploratory multivariate methods. It addresses a known problem with one class of such methods, hierarchical cluster analysis: that different clusteri...
A Cluster-based Method to Map Urban Area from DMSP/OLS Nightlights
Energy Technology Data Exchange (ETDEWEB)
Zhou, Yuyu; Smith, Steven J.; Elvidge, Christopher; Zhao, Kaiguang; Thomson, Allison M.; Imhoff, Marc L.
2014-05-05
Accurate information on urban areas at regional and global scales is important for both the science and policy-making communities. The Defense Meteorological Satellite Program/Operational Linescan System (DMSP/OLS) nighttime stable light data (NTL) provide a potential way to map urban areas and their dynamics economically and in a timely manner. In this study, we developed a cluster-based method to estimate optimal thresholds and map urban extents from the DMSP/OLS NTL data in five major steps: data preprocessing, urban cluster segmentation, logistic model development, threshold estimation, and urban extent delineation. Unlike previous fixed-threshold methods, which suffer from over- and under-estimation, our method estimates the optimal thresholds from the cluster size and the overall nightlight magnitude within the cluster, so the thresholds vary between clusters. Two large countries with different urbanization patterns, the United States and China, were selected for mapping urban extents with the proposed method. The result indicates that urbanized area occupies about 2% of the total land area in the US, ranging from below 0.5% to above 10% at the state level, and less than 1% in China, ranging from below 0.1% to about 5% at the province level, with some municipalities as high as 10%. The derived thresholds and urban extents were evaluated using high-resolution land cover data at the cluster and regional levels. We found that our method can map urban areas in both countries efficiently and accurately. Compared to previous threshold techniques, our method reduces the over- and under-estimation issues when mapping urban extent over a large area. More importantly, it shows potential for mapping global urban extents and their temporal dynamics from the DMSP/OLS NTL data in a timely, cost-effective way.
Directory of Open Access Journals (Sweden)
I. Crawford
2015-07-01
Full Text Available In this paper we present improved methods for discriminating and quantifying Primary Biological Aerosol Particles (PBAP) by applying hierarchical agglomerative cluster analysis to multi-parameter ultraviolet-light-induced fluorescence (UV-LIF) spectrometer data. The methods employed in this study can be applied to data sets in excess of 1×10⁶ points on a desktop computer, allowing for each fluorescent particle in a dataset to be explicitly clustered. This reduces the potential for misattribution found in subsampling and comparative attribution methods used in previous approaches, improving our capacity to discriminate and quantify PBAP meta-classes. We evaluate the performance of several hierarchical agglomerative cluster analysis linkages and data normalisation methods using laboratory samples of known particle types and an ambient dataset. Fluorescent and non-fluorescent polystyrene latex spheres were sampled with a Wideband Integrated Bioaerosol Spectrometer (WIBS-4), where the optical size, asymmetry factor and fluorescent measurements were used as inputs to the analysis package. It was found that the Ward linkage with z-score or range normalisation performed best, correctly attributing 98 and 98.1 % of the data points respectively. The best performing methods were applied to the BEACHON-RoMBAS ambient dataset, where it was found that the z-score and range normalisation methods yield similar results, with each method producing clusters representative of fungal spores and bacterial aerosol, consistent with previous results. The z-score result was compared to clusters generated with previous approaches (WIBS AnalysiS Program, WASP), where we observe that the subsampling and comparative attribution method employed by WASP results in the overestimation of the fungal spore concentration by a factor of 1.5 and the underestimation of bacterial aerosol concentration by a factor of 5. We suggest that this is likely due to errors arising from misattribution
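The two best-performing normalisation schemes named above are standard and easy to sketch; each would be applied per measured parameter before computing the Ward-linkage distances (a generic sketch, not the paper's analysis package):

```python
def z_score(xs):
    """Z-score normalisation: shift to zero mean, scale to unit
    (population) standard deviation."""
    n = len(xs)
    mean = sum(xs) / n
    sd = (sum((x - mean) ** 2 for x in xs) / n) ** 0.5
    return [(x - mean) / sd for x in xs]

def range_norm(xs):
    """Range (min-max) normalisation onto [0, 1]."""
    lo, hi = min(xs), max(xs)
    return [(x - lo) / (hi - lo) for x in xs]
```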
Directory of Open Access Journals (Sweden)
I. Crawford
2015-11-01
Full Text Available In this paper we present improved methods for discriminating and quantifying primary biological aerosol particles (PBAPs) by applying hierarchical agglomerative cluster analysis to multi-parameter ultraviolet-light-induced fluorescence (UV-LIF) spectrometer data. The methods employed in this study can be applied to data sets in excess of 1 × 10⁶ points on a desktop computer, allowing for each fluorescent particle in a data set to be explicitly clustered. This reduces the potential for misattribution found in subsampling and comparative attribution methods used in previous approaches, improving our capacity to discriminate and quantify PBAP meta-classes. We evaluate the performance of several hierarchical agglomerative cluster analysis linkages and data normalisation methods using laboratory samples of known particle types and an ambient data set. Fluorescent and non-fluorescent polystyrene latex spheres were sampled with a Wideband Integrated Bioaerosol Spectrometer (WIBS-4), where the optical size, asymmetry factor and fluorescent measurements were used as inputs to the analysis package. It was found that the Ward linkage with z-score or range normalisation performed best, correctly attributing 98 and 98.1 % of the data points respectively. The best-performing methods were applied to the BEACHON-RoMBAS (Bio–hydro–atmosphere interactions of Energy, Aerosols, Carbon, H2O, Organics and Nitrogen–Rocky Mountain Biogenic Aerosol Study) ambient data set, where it was found that the z-score and range normalisation methods yield similar results, with each method producing clusters representative of fungal spores and bacterial aerosol, consistent with previous results. The z-score result was compared to clusters generated with previous approaches (WIBS AnalysiS Program, WASP), where we observe that the subsampling and comparative attribution method employed by WASP results in the overestimation of the fungal spore concentration by a factor of 1.5 and the
Orsi, Rebecca
2017-02-01
Concept mapping is now a commonly-used technique for articulating and evaluating programmatic outcomes. However, research regarding validity of knowledge and outcomes produced with concept mapping is sparse. The current study describes quantitative validity analyses using a concept mapping dataset. We sought to increase the validity of concept mapping evaluation results by running multiple cluster analysis methods and then using several metrics to choose from among solutions. We present four different clustering methods based on analyses using the R statistical software package: partitioning around medoids (PAM), fuzzy analysis (FANNY), agglomerative nesting (AGNES) and divisive analysis (DIANA). We then used the Dunn and Davies-Bouldin indices to assist in choosing a valid cluster solution for a concept mapping outcomes evaluation. We conclude that the validity of the outcomes map is high, based on the analyses described. Finally, we discuss areas for further concept mapping methods research.
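The Dunn index used above to compare cluster solutions is the ratio of the smallest inter-cluster distance to the largest within-cluster diameter (higher is better). The sketch below assumes single-linkage inter-cluster distances and complete diameters, which is one common variant of the index:

```python
def dunn_index(clusters, dist):
    """Dunn index = min inter-cluster distance / max cluster diameter.

    clusters : list of clusters, each a list of points
    dist     : pairwise distance function
    Larger values indicate compact, well-separated clusters.
    """
    inter = min(
        dist(a, b)
        for i in range(len(clusters)) for j in range(i + 1, len(clusters))
        for a in clusters[i] for b in clusters[j]
    )
    diam = max(
        (dist(a, b) for c in clusters for a in c for b in c if a != b),
        default=0.0,
    )
    return inter / diam if diam else float("inf")
```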
Scattering cluster wave functions on the lattice using the adiabatic projection method
Rokash, Alexander; Elhatisari, Serdar; Lee, Dean; Epelbaum, Evgeny; Krebs, Hermann
2015-01-01
The adiabatic projection method is a general framework for studying scattering and reactions on the lattice. It provides a low-energy effective theory for clusters which becomes exact in the limit of large Euclidean projection time. Previous studies have used the adiabatic projection method to extract scattering phase shifts from finite periodic-box energy levels using Lüscher's method. In this paper we demonstrate that scattering observables can be computed directly from asymptotic cluster wave functions. For a variety of examples in one and three spatial dimensions, we extract elastic phase shifts from asymptotic cluster standing waves corresponding to spherical wall boundary conditions. We find this approach of extracting scattering wave functions from the adiabatic Hamiltonian to be less sensitive to small stochastic and systematic errors than using periodic-box energy levels.
Directory of Open Access Journals (Sweden)
Savita Agrawal
2015-11-01
Full Text Available In recent decades, image segmentation has proved its applicability in various areas such as satellite image processing, medical image processing, and many more. In the present scenario, researchers try to develop hybrid image segmentation techniques to generate efficient segmentations. Owing to the development of parallel programming, the lattice Boltzmann method (LBM) has attracted much attention as a fast alternative approach for solving partial differential equations. In this work, an energy functional is first designed based on the fuzzy c-means objective function, incorporating a bias field that accounts for the intensity inhomogeneity of real-world images. Using the gradient descent method, the corresponding level set equations are obtained, from which a fuzzy external force is deduced for the LBM solver based on the model by Zhao. The method is fast, robust against noise, independent of the position of the initial contour, effective in the presence of intensity inhomogeneity, highly parallelizable, and can detect objects with or without edges. When implementing segmentation techniques defined for gray images, researchers usually determine single-channel segments of the images and superimpose the single-channel segment information on color images. This idea leads to color image segmentation using single-channel segments of multi-channel images. Though widely adopted, this approach does not provide a complete, true segmentation of multi-channel (i.e., color) images, because a color image contains three different channels for the red, green, and blue components. Hence, segmenting a color image from single-channel segment information alone will lose important segment regions of the color image. To overcome this problem, this work starts with the development of enhanced level set segmentation for single-channel images using fuzzy clustering and the lattice Boltzmann method. For the
Energy Technology Data Exchange (ETDEWEB)
Zayed, Elsayed M.E. [Dept. of Mathematics, Zagazig Univ. (Egypt); Abdel Rahman, Hanan M. [Dept. of Basic Sciences, Higher Technological Inst., Tenth of Ramadan City (Egypt)
2010-01-15
In this article, two powerful analytical methods called the variational iteration method (VIM) and the variational homotopy perturbation method (VHPM) are introduced to obtain the exact and the numerical solutions of the (2+1)-dimensional Korteweg-de Vries-Burgers (KdVB) equation and the (1+1)-dimensional Sharma-Tasso-Olver equation. The main objective of the present article is to propose alternative methods of solution which avoid linearization and physically unrealistic assumptions. The results show that these methods are very efficient and convenient and can be applied to a large class of nonlinear problems. (orig.)
Identification of rural landscape classes through a GIS clustering method
Directory of Open Access Journals (Sweden)
Irene Diti
2013-09-01
Full Text Available The paper presents a methodology aimed at supporting the rural planning process. The analysis of the state of the art of local and regional policies focused on rural and suburban areas, and the study of the scientific literature in the field of spatial analysis methodologies, have allowed the definition of the basic concept of the research. The proposed method, developed in a GIS, is based on spatial metrics selected and defined to cover various agricultural, environmental, and socio-economic components. The specific goal of the proposed methodology is to identify homogeneous extra-urban areas through their objective characterization at different scales. Once areas with intermediate urban-rural characters have been identified, the analysis is then focused on the more detailed definition of periurban agricultural areas. The synthesis of the results of the analysis of the various landscape components is achieved through an original interpretative key which aims to quantify the potential impacts of rural areas on the urban system. This paper presents the general framework of the methodology and some of the main results of its first implementation through an Italian case study.
An Efficient Initialization Method for K-Means Clustering of Hyperspectral Data
Alizade Naeini, A.; Jamshidzadeh, A.; Saadatseresht, M.; Homayouni, S.
2014-10-01
K-means is definitely the most frequently used partitional clustering algorithm in the remote sensing community. Unfortunately, due to its gradient descent nature, this algorithm is highly sensitive to the initial placement of cluster centers. This problem is exacerbated for high-dimensional data such as hyperspectral remotely sensed imagery. To tackle this problem, in this paper, the spectral signatures of the endmembers in the image scene are extracted and used as the initial positions of the cluster centers. For this purpose, in the first step, a Neyman-Pearson detection theory based eigen-thresholding method (i.e., the HFC method) is employed to estimate the number of endmembers in the image. Afterwards, the spectral signatures of the endmembers are obtained using the Minimum Volume Enclosing Simplex (MVES) algorithm. Eventually, these spectral signatures are used to initialize the k-means clustering algorithm. The proposed method is implemented on a hyperspectral dataset acquired by the ROSIS sensor with 103 spectral bands over the Pavia University campus, Italy. For comparative evaluation, two other commonly used initialization methods (i.e., the Bradley & Fayyad (BF) and Random methods) are implemented and compared. The confusion matrix, overall accuracy and Kappa coefficient are employed to assess the methods' performance. The evaluations demonstrate that the proposed solution outperforms the other initialization methods and can be applied for unsupervised classification of hyperspectral imagery for land-cover mapping.
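The initialization idea — seeding k-means with extracted endmember signatures instead of random picks — can be sketched with plain Lloyd iterations. This is a generic sketch under our own naming; the HFC and MVES steps that produce the initial centers are not reproduced:

```python
import numpy as np

def kmeans(X, init_centers, n_iter=100, tol=1e-6):
    """Lloyd iteration started from supplied centers (e.g. endmember
    spectral signatures rather than random samples)."""
    centers = np.asarray(init_centers, dtype=float).copy()
    for _ in range(n_iter):
        # assign each sample to its nearest center
        d = np.linalg.norm(X[:, None] - centers[None], axis=-1)
        labels = d.argmin(axis=1)
        # recompute centers; keep a center in place if its cluster emptied
        new = np.array([X[labels == k].mean(axis=0) if np.any(labels == k)
                        else centers[k] for k in range(len(centers))])
        if np.linalg.norm(new - centers) < tol:
            centers = new
            break
        centers = new
    return labels, centers
```

With a deterministic, physically meaningful initialization like this, the usual sensitivity of k-means to its starting centers is removed, which is the point the abstract makes.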
Generalized Variational Principle for Long Water-Wave Equation by He's Semi-Inverse Method
Directory of Open Access Journals (Sweden)
Weimin Zhang
2009-01-01
Full Text Available Variational principles for nonlinear partial differential equations have come to play an important role in mathematics and physics. However, it is well known that not every nonlinear partial differential equation admits a variational formula. In this paper, He's semi-inverse method is used to construct a family of variational principles for the long water-wave problem.
Variational space-time (dis)continuous Galerkin method for linear free surface waves
Ambati, V.R.; Vegt, van der J.J.W.; Bokhove, O.
2008-01-01
A new variational (dis)continuous Galerkin finite element method is presented for the linear free surface gravity water wave equations. We formulate the space-time finite element discretization based on a variational formulation analogous to Luke's variational principle. The linear algebraic system
Colour based fire detection method with temporal intensity variation filtration
Trambitckii, K.; Anding, K.; Musalimov, V.; Linß, G.
2015-02-01
The development of video and computing technologies and of computer vision makes automatic fire detection from video information possible. Within that project, different algorithms were implemented to find a more efficient way of detecting fire. This article describes a colour-based fire detection algorithm. Colour information alone, however, is not enough to detect fire properly, mainly because the scene may contain many objects whose colour is similar to that of fire. The temporal intensity variation of pixels, averaged over a series of several frames, is therefore used to separate such objects from fire. The algorithm works robustly and was realised as a computer program using the OpenCV library.
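The filtration idea — accept a pixel as fire only if it is both fire-coloured and flickering across a series of frames — might be sketched as follows. The RGB thresholds and the variance threshold are illustrative placeholders, not values from the article:

```python
import numpy as np

def fire_mask(frames, rgb_thresh=(180, 100), var_thresh=10.0):
    """Hypothetical sketch: flag pixels that look fire-coloured
    (red channel high, blue channel low -- thresholds illustrative)
    AND whose intensity varies strongly over the frame series."""
    frames = np.asarray(frames, dtype=float)      # shape (T, H, W, 3)
    mean = frames.mean(axis=0)                    # per-pixel mean colour
    colour = (mean[..., 0] > rgb_thresh[0]) & (mean[..., 2] < rgb_thresh[1])
    intensity = frames.mean(axis=-1)              # (T, H, W) grayscale
    flicker = intensity.std(axis=0) > var_thresh  # temporal variation gate
    return colour & flicker
```

The temporal gate is what removes static fire-coloured objects (a red wall, a traffic sign) that a colour-only detector would flag.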
Some new mathematical methods for variational objective analysis
Wahba, Grace; Johnson, Donald R.
1994-01-01
Numerous results were obtained relevant to remote sensing, variational objective analysis, and data assimilation. A list of publications relevant in whole or in part is attached. The principal investigator gave many invited lectures, disseminating the results to the meteorological community as well as the statistical community. A list of invited lectures at meetings is attached, as well as a list of departmental colloquia at various universities and institutes.
Genetic variation within the interleukin-1 gene cluster and ischemic stroke.
Olsson, Sandra; Holmegaard, Lukas; Jood, Katarina; Sjögren, Marketa; Engström, Gunnar; Lövkvist, Håkan; Blomstrand, Christian; Norrving, Bo; Melander, Olle; Lindgren, Arne; Jern, Christina
2012-09-01
Evidence is emerging that inflammation plays a key role in the pathophysiology of ischemic stroke (IS). The aim of this study was to investigate whether genetic variation in the interleukin-1α, interleukin-1β, and interleukin-1 receptor antagonist genes (IL1A, IL1B, and IL1RN) is associated with IS and/or any etiologic subtype of IS. Twelve tagSNPs were analyzed in the Sahlgrenska Academy Study on Ischemic Stroke (SAHLSIS), which comprises 844 patients with IS and 668 control subjects. IS subtypes were defined according to the Trial of Org 10172 in Acute Stroke Treatment criteria in SAHLSIS. The Lund Stroke Register and the Malmö Diet and Cancer study were used as a replication sample for overall IS (in total 3145 patients and 1793 control subjects). The single nucleotide polymorphism rs380092 in IL1RN showed an association with overall IS in SAHLSIS (OR, 1.21; 95% CI, 1.02-1.43; P=0.03), which was replicated in the Lund Stroke Register and the Malmö Diet and Cancer study sample. An association was also detected in all samples combined (OR, 1.12; 95% CI, 1.04-1.21; P=0.03). Three single nucleotide polymorphisms in IL1RN (including rs380092) were nominally associated with the subtype of cryptogenic stroke in SAHLSIS, but the statistical significance did not remain after correction for multiple testing. Furthermore, increased plasma levels of interleukin-1 receptor antagonist were observed in the subtype of cryptogenic stroke compared with controls. This comprehensive study, based on a tagSNP approach and replication, presents support for the role of IL1RN in overall IS.
Clustering of attitudes towards obesity: a mixed methods study of Australian parents and children
2013-01-01
Background Current population-based anti-obesity campaigns often target individuals based on either weight or socio-demographic characteristics, and give a ‘mass’ message about personal responsibility. There is a recognition that attempts to influence attitudes and opinions may be more effective if they resonate with the beliefs that different groups have about the causes of, and solutions for, obesity. Limited research has explored how attitudinal factors may inform the development of both upstream and downstream social marketing initiatives. Methods Computer-assisted face-to-face interviews were conducted with 159 parents and 184 of their children (aged 9–18 years old) in two Australian states. A mixed methods approach was used to assess attitudes towards obesity, and elucidate why different groups held various attitudes towards obesity. Participants were quantitatively assessed on eight dimensions relating to the severity and extent, causes and responsibility, possible remedies, and messaging strategies. Cluster analysis was used to determine attitudinal clusters. Participants were also able to qualify each answer. Qualitative responses were analysed both within and across attitudinal clusters using a constant comparative method. Results Three clusters were identified. Concerned Internalisers (27% of the sample) judged that obesity was a serious health problem, that Australia had among the highest levels of obesity in the world and that prevalence was rapidly increasing. They situated the causes and remedies for the obesity crisis in individual choices. Concerned Externalisers (38% of the sample) held similar views about the severity and extent of the obesity crisis. However, they saw responsibility and remedies as a societal rather than an individual issue. The final cluster, the Moderates, which contained significantly more children and males, believed that obesity was not such an important public health issue, and judged the extent of obesity to be
Sweeney, Timothy E; Chen, Albert C; Gevaert, Olivier
2015-11-19
In order to discover new subsets (clusters) of a data set, researchers often use algorithms that perform unsupervised clustering, namely, the algorithmic separation of a dataset into some number of distinct clusters. Deciding whether a particular separation (or number of clusters, K) is correct is a sort of 'dark art', with multiple techniques available for assessing the validity of unsupervised clustering algorithms. Here, we present a new technique for unsupervised clustering that uses multiple clustering algorithms, multiple validity metrics, and progressively bigger subsets of the data to produce an intuitive 3D map of cluster stability that can help determine the optimal number of clusters in a data set, a technique we call COmbined Mapping of Multiple clUsteriNg ALgorithms (COMMUNAL). COMMUNAL locally optimizes algorithms and validity measures for the data being used. We show its application to simulated data with a known K, and then apply this technique to several well-known cancer gene expression datasets, showing that COMMUNAL provides new insights into clustering behavior and stability in all tested cases. COMMUNAL is shown to be a useful tool for determining K in complex biological datasets, and is freely available as a package for R.
Color image segmentation using watershed and Nyström method based spectral clustering
Bai, Xiaodong; Cao, Zhiguo; Yu, Zhenghong; Zhu, Hu
2011-11-01
Color image segmentation has drawn a lot of attention recently. In order to improve the efficiency of spectral clustering in color image segmentation, a novel two-stage color image segmentation method is proposed. In the first stage, we use a vector gradient approach to detect color image gradient information, and watershed transformation to get the pre-segmentation result. In the second stage, Nyström extension based spectral clustering is used to get the final result. To verify the proposed algorithm, it is applied to color images from the Berkeley Segmentation Dataset. Experiments show our method can bring promising results and reduce the runtime significantly.
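The Nyström extension that makes spectral clustering affordable here can be sketched in isolation: approximate the eigenvectors of the full affinity matrix from only its columns for a small sample of points. This is a generic sketch of the technique (our names and RBF affinity), not the paper's two-stage pipeline:

```python
import numpy as np

def nystrom_embedding(X, idx, sigma=1.0):
    """Nystrom extension: approximate leading eigenvectors of the full
    RBF affinity matrix using only the sampled rows/columns idx."""
    S = X[idx]
    # affinity blocks: sample-sample (W) and all-sample (C)
    W = np.exp(-np.sum((S[:, None] - S[None]) ** 2, axis=-1) / (2 * sigma**2))
    C = np.exp(-np.sum((X[:, None] - S[None]) ** 2, axis=-1) / (2 * sigma**2))
    vals, vecs = np.linalg.eigh(W)          # eigenpairs of the small block
    U = C @ vecs @ np.diag(1.0 / vals)      # extend eigenvectors to all points
    return U[:, ::-1], vals[::-1]           # leading components first
```

The implied approximation of the full affinity is K ≈ C W⁻¹ Cᵀ, which is exact on the sampled rows and columns; running k-means on the leading columns of U completes a Nyström spectral clustering.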
Energy Technology Data Exchange (ETDEWEB)
Brabec, Jiri; Banik, Subrata; Kowalski, Karol; Pittner, Jiří
2016-10-28
The implementation details of the universal state-selective (USS) multi-reference coupled cluster (MRCC) formalism with singles and doubles (USS(2)) are discussed on the example of several benchmark systems. We demonstrate that the USS(2) formalism is capable of improving accuracies of state specific multi-reference coupled-cluster (MRCC) methods based on the Brillouin-Wigner and Mukherjee’s sufficiency conditions. Additionally, it is shown that the USS(2) approach significantly alleviates problems associated with the lack of invariance of MRCC theories upon the rotation of active orbitals. We also discuss the perturbative USS(2) formulations that significantly reduce numerical overhead of the full USS(2) method.
Clustering method and representative feeder selection for the California solar initiative
Energy Technology Data Exchange (ETDEWEB)
Broderick, Robert Joseph; Williams, Joseph R.; Munoz-Ramos, Karina
2014-02-01
The screening process for DG interconnection procedures needs to be improved in order to increase the PV deployment level on the distribution grid. A significant improvement in the current screening process could be achieved by finding a method to classify the feeders in a utility service territory and determine the sensitivity of particular groups of distribution feeders to the impacts of high PV deployment levels. This report describes the utility distribution feeder characteristics in California for a large dataset of 8,163 feeders and summarizes the California feeder population, including the range of characteristics identified and those most important to hosting capacity. The report describes the set of feeders identified for modeling and analysis as well as the feeders identified for the control group. The report presents a method for separating a utility's distribution feeders into unique clusters using the k-means clustering algorithm, an approach for choosing the feeder variables to be utilized in the clustering process, and a method for determining the optimal number of representative clusters.
A NEW SELF-ADAPTIVE ITERATIVE METHOD FOR GENERAL MIXED QUASI VARIATIONAL INEQUALITIES
Institute of Scientific and Technical Information of China (English)
Abdellah Bnouhachem; Mohamed Khalfaoui; Hafida Benazza
2008-01-01
The general mixed quasi variational inequality containing a nonlinear term ψ is a useful and important generalization of variational inequalities. The projection method cannot be applied to solve this problem due to the presence of the nonlinear term. It is well known that variational inequalities involving the nonlinear term ψ are equivalent to fixed point problems and resolvent equations. In this article, the authors use these alternative equivalent formulations to suggest and analyze a new self-adaptive iterative method for solving general mixed quasi variational inequalities. Global convergence of the new method is proved. An example is given to illustrate the efficiency of the proposed method.
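The equivalence the abstract leans on can be written in one standard form (our notation, assumed rather than copied from the paper): with ψ a convex function and T the underlying operator,

```latex
% mixed variational inequality: find u such that
\langle T(u),\, v - u\rangle + \psi(v) - \psi(u) \;\ge\; 0 \qquad \forall v,
% equivalent fixed-point / resolvent form, for any \rho > 0:
u \;=\; J_{\psi}\!\left[\,u - \rho\, T(u)\,\right],
\qquad J_{\psi} = (I + \rho\,\partial\psi)^{-1},
```

where $J_{\psi}$ is the resolvent of the subdifferential $\partial\psi$. When $\psi$ is the indicator function of a convex set, $J_{\psi}$ reduces to the projection, which is why the plain projection method fails once a genuine nonlinear term is present.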
The Tidal Tails of Globular Cluster Palomar 5 Based on Neural Networks Method
Zou, H; Ma, J; Zhou, X
2009-01-01
The Sixth Data Release (DR6) of the Sloan Digital Sky Survey (SDSS) provides more photometric regions, new features and more accurate data around the globular cluster Palomar 5. A new method, the Back Propagation Neural Network (BPNN), is used to estimate the probability of cluster membership in order to detect its tidal tails. Cluster and field stars, used for training the networks, are extracted over a 40×20 deg² field by color-magnitude diagrams (CMDs). The best BPNNs, with two hidden layers and the Levenberg-Marquardt (LM) training algorithm, are determined by the chosen cluster and field samples. The membership probabilities of stars in the whole field are obtained with the BPNNs, and contour maps of the probability distribution show a tail extending 5.42° to the north of the cluster and a tail extending 3.77° to the south. The whole tails are similar to those detected by Odenkirchen et al. (2003), but no more distant debris of the cluster is found to the northeast on the sky. The radial density profiles are investigated both alon...
A VARIATIONAL EXPECTATION-MAXIMIZATION METHOD FOR THE INVERSE BLACK BODY RADIATION PROBLEM
Institute of Scientific and Technical Information of China (English)
Jiantao Cheng; Tie Zhou
2008-01-01
The inverse black body radiation problem, which is to reconstruct the area temperature distribution from the measurement of the power spectrum distribution, is a well-known ill-posed problem. In this paper, a variational expectation-maximization (EM) method is developed and its convergence is studied. Numerical experiments demonstrate that the variational EM method is more efficient and accurate than the traditional methods, including the Tikhonov regularization method, the Landweber method and the conjugate gradient method.
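For orientation, the usual statement of the inverse black body radiation problem (standard physics, not taken from this paper) is a Fredholm integral equation of the first kind: the measured power spectrum is the Planck kernel integrated against the unknown area-temperature distribution $a(T)$:

```latex
W(\nu) \;=\; \frac{2 h \nu^{3}}{c^{2}} \int_{0}^{\infty}
\frac{a(T)}{e^{\,h\nu/(k_B T)} - 1}\, \mathrm{d}T .
```

Recovering $a(T)$ from measured $W(\nu)$ is ill-posed because the smooth kernel strongly damps high-frequency components of $a(T)$, which is why regularized or EM-type iterative methods are needed.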
Directory of Open Access Journals (Sweden)
Suvorova L.A.
2017-01-01
Full Text Available The cluster approach is considered by the authors as a tool to ensure the accelerated development of the country's industrial complex. In the article the authors examine the problem of forming a model of cluster development in high-tech sectors of industry and methods for evaluating its economic effectiveness. Unlike traditional approaches, the authors identify a cluster unit as the main structural element of the development model of the innovative industrial cluster, regarding a cluster unit as one member of the cluster (i.e. one enterprise). This point of view differs from the opinion of modern scientists, who view a cluster unit as a complex of enterprises operating within the cluster. The purpose of the study was the development and economic evaluation of a model of cluster development. In this research the authors examined a cluster of industrial biotechnologies. They have developed and proposed a model of the development of the cluster of industrial biotechnologies: the Non-commercial Partnership (NP) “The biotechnology cluster of the Kirov region”. This model takes into account the peculiarities of innovative production. The authors have calculated the absolute and relative effect of clustering, taking into account the effectiveness and profitability indicators of the cluster units' activities within the cluster in question and the evaluation of the project activity of the cluster. Thus the authors have proved the economic effectiveness of the proposed model of cluster development. The research results allow us to conclude that the designed model of the development of the NP “The biotechnology cluster of the Kirov region” provides a steady growth trend of positive economic effect from the corporate activities of the enterprises within the cluster and an increase in the region's competitiveness in the production of high-tech industrial products.
Directory of Open Access Journals (Sweden)
Savita Agrawal
2014-05-01
Full Text Available In recent decades, image segmentation has proved its applicability in various areas such as satellite image processing, medical image processing, and many more. In the present scenario, researchers try to develop hybrid image segmentation techniques to generate efficient segmentations. Owing to the development of parallel programming, the lattice Boltzmann method (LBM) has attracted much attention as a fast alternative approach for solving partial differential equations. In this work, an energy functional is first designed based on the fuzzy c-means objective function, incorporating a bias field that accounts for the intensity inhomogeneity of real-world images. Using the gradient descent method, the corresponding level set equations are obtained, from which a fuzzy external force is deduced for the LBM solver based on the model by Zhao. The method is fast, robust against noise, independent of the position of the initial contour, effective in the presence of intensity inhomogeneity, highly parallelizable, and can detect objects with or without edges. When implementing segmentation techniques defined for gray images, researchers usually determine single-channel segments of the images and superimpose the single-channel segment information on color images. This idea leads to color image segmentation using single-channel segments of multi-channel images. Though widely adopted, this approach does not provide a complete, true segmentation of multi-channel (i.e., color) images, because a color image contains three different channels for the red, green, and blue components. Hence, segmenting a color image from single-channel segment information alone will lose important segment regions of the color image. To overcome this problem, this work starts with the development of enhanced level set segmentation for single-channel images using fuzzy clustering and the lattice Boltzmann method. For the
An analytical method for Mathieu oscillator based on method of variation of parameter
Li, Xianghong; Hou, Jingyu; Chen, Jufeng
2016-08-01
A simple but very accurate analytical method for the forced Mathieu oscillator is proposed, based on the idea of the method of variation of parameter. Assuming that the time-varying parameter in the Mathieu oscillator is constant, one can easily obtain an accurate analytical solution. The approximate analytical solution for the Mathieu oscillator is then established by substituting the periodic time-varying parameter for the constant one in the obtained solution. In order to verify the correctness and precision of the proposed analytical method, first-order and ninth-order approximate solutions by the harmonic balance method (HBM) are also presented. Comparisons show that the results of the proposed method agree very well with those of numerical simulation. Moreover, the precision of the proposed method is not only higher than that of the first-order HBM approximation, but also better than that of the ninth-order HBM approximation over large ranges of the system parameters.
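As we read the abstract, the construction can be sketched for the unforced part of the Mathieu equation (the symbols $\delta$, $\varepsilon$, $\omega$ are our notation, not the paper's):

```latex
% Mathieu oscillator with the parameter frozen at a constant c:
\ddot{x} + c\,x = 0
\quad\Longrightarrow\quad
x(t) = A\cos(\sqrt{c}\,t) + B\sin(\sqrt{c}\,t).
% The approximation then restores the periodic parameter inside this solution:
c \;\to\; \delta + \varepsilon\cos\omega t
\quad\Longrightarrow\quad
x(t) \approx A\cos\!\bigl(\sqrt{\delta + \varepsilon\cos\omega t}\;t\bigr)
           + B\sin\!\bigl(\sqrt{\delta + \varepsilon\cos\omega t}\;t\bigr).
```

The freeze-then-restore step is what the abstract calls substituting the periodic time-varying parameter for the constant one in the exact constant-parameter solution.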
Open-Source Sequence Clustering Methods Improve the State Of the Art.
Kopylova, Evguenia; Navas-Molina, Jose A; Mercier, Céline; Xu, Zhenjiang Zech; Mahé, Frédéric; He, Yan; Zhou, Hong-Wei; Rognes, Torbjørn; Caporaso, J Gregory; Knight, Rob
2016-01-01
Sequence clustering is a common early step in amplicon-based microbial community analysis, when raw sequencing reads are clustered into operational taxonomic units (OTUs) to reduce the run time of subsequent analysis steps. Here, we evaluated the performance of recently released state-of-the-art open-source clustering software products, namely, OTUCLUST, Swarm, SUMACLUST, and SortMeRNA, against current principal options (UCLUST and USEARCH) in QIIME, hierarchical clustering methods in mothur, and USEARCH's most recent clustering algorithm, UPARSE. All the latest open-source tools showed promising results, reporting up to 60% fewer spurious OTUs than UCLUST, indicating that the underlying clustering algorithm can vastly reduce the number of these derived OTUs. Furthermore, we observed that stringent quality filtering, such as is done in UPARSE, can cause a significant underestimation of species abundance and diversity, leading to incorrect biological results. Swarm, SUMACLUST, and SortMeRNA have been included in the QIIME 1.9.0 release. IMPORTANCE Massive collections of next-generation sequencing data call for fast, accurate, and easily accessible bioinformatics algorithms to perform sequence clustering. A comprehensive benchmark is presented, including open-source tools and the popular USEARCH suite. Simulated, mock, and environmental communities were used to analyze sensitivity, selectivity, species diversity (alpha and beta), and taxonomic composition. The results demonstrate that recent clustering algorithms can significantly improve accuracy and preserve estimated diversity without the application of aggressive filtering. Moreover, these tools are all open source, apply multiple levels of multithreading, and scale to the demands of modern next-generation sequencing data, which is essential for the analysis of massive multidisciplinary studies such as the Earth Microbiome Project (EMP) (J. A. Gilbert, J. K. Jansson, and R. Knight, BMC Biol 12:69, 2014, http
Genetic variation in the toll-like receptor gene cluster (TLR10-TLR1-TLR6) and prostate cancer risk.
Stevens, Victoria L; Hsing, Ann W; Talbot, Jeffrey T; Zheng, Siqun Lilly; Sun, Jielin; Chen, Jinbo; Thun, Michael J; Xu, Jianfeng; Calle, Eugenia E; Rodriguez, Carmen
2008-12-01
Toll-like receptors (TLRs) are key players in the innate immune system and initiate the inflammatory response to foreign pathogens such as bacteria, fungi and viruses. The proposed role of chronic inflammation in prostate carcinogenesis has prompted investigation into the association of common genetic variation in TLRs with the risk of this cancer. We investigated the role of common SNPs in a gene cluster encoding the TLR10, TLR6 and TLR1 proteins in prostate cancer etiology among 1,414 cancer cases and 1,414 matched controls from the Cancer Prevention Study II Nutrition Cohort. Twenty-eight SNPs, which included the majority of the common nonsynonymous SNPs in the 54-kb gene region and haplotype-tagging SNPs that defined 5 specific haplotype blocks, were genotyped and their association with prostate cancer risk determined. Two SNPs in TLR10 [I369L (rs11096955) and N241H (rs11096957)] and 4 SNPs in TLR1 [N248S (rs4833095), S26L (rs5743596), rs5743595 and rs5743551] were associated with a statistically significant reduced risk of prostate cancer of 29-38% (for the homozygous variant genotype). The association of these SNPs was similar when the analysis was limited to cases with advanced prostate cancer. Haplotype analysis and linkage disequilibrium findings revealed that the 6 associated SNPs were not independent and represent a single association with reduced prostate cancer risk (OR = 0.55, 95% CI: 0.33, 0.90). Our study suggests that a common haplotype in the TLR10-TLR1-TLR6 gene cluster influences prostate cancer risk and clearly supports the need for further investigation of TLR genes in other populations.
A quaternion-based spectral clustering method for color image segmentation
Li, Xiang; Jin, Lianghai; Liu, Hong; He, Zeng
2011-11-01
Spectral clustering methods have been widely used in image segmentation. A key issue in spectral clustering is how to build the affinity matrix. When applied to color image segmentation, most existing methods either use a Euclidean metric to define the affinity matrix, or first convert the color images into gray-level images and then use the gray-level images to construct the affinity matrix (the component-wise method). However, it is known that Euclidean distances cannot represent color differences well, and the component-wise method does not consider the correlation between color channels. In this paper, we propose a new method to produce the affinity matrix, in which the color images are first represented in quaternion form and then the similarities between color pixels are measured by a quaternion rotation (QR) mechanism. The experimental results show the superiority of the new method.
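The quaternion machinery underlying a QR mechanism can be sketched as follows: a color pixel is embedded as a pure quaternion (zero scalar part, RGB as the vector part) and transformed via the Hamilton product. This is a generic sketch of the representation; the paper's exact similarity definition is not reproduced here:

```python
import numpy as np

def qmul(p, q):
    """Hamilton product of quaternions given as (w, x, y, z)."""
    w1, x1, y1, z1 = p
    w2, x2, y2, z2 = q
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def rotate_color(rgb, axis, angle):
    """Rotate an RGB triple, embedded as a pure quaternion, about a
    unit axis by the given angle via q p q*."""
    axis = np.asarray(axis, float) / np.linalg.norm(axis)
    q = np.concatenate([[np.cos(angle / 2)], np.sin(angle / 2) * axis])
    qc = q * np.array([1, -1, -1, -1])            # quaternion conjugate
    p = np.concatenate([[0.0], np.asarray(rgb, float)])
    return qmul(qmul(q, p), qc)[1:]               # back to a 3-vector
```

A rotation-based similarity would then relate two pixels through the quaternion rotation mapping one color vector toward the other, treating the three channels jointly rather than component-wise.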
Analysis of cost data in a cluster-randomized, controlled trial: comparison of methods
DEFF Research Database (Denmark)
Sokolowski, Ineta; Ørnbøl, Eva; Rosendal, Marianne
studies have used non-valid analysis of skewed data. We propose two different methods to compare mean cost in two groups. Firstly, we use a non-parametric bootstrap method where the re-sampling takes place on two levels in order to take into account the cluster effect. Secondly, we proceed with a log...... We consider health care data from a cluster-randomized intervention study in primary care to test whether the average health care costs among study patients differ between the two groups. The problems of analysing cost data are that most data are severely skewed. Median instead of mean...... is commonly used for skewed distributions. For health care data, however, we need to recover the total cost in a given patient population. Thus, we focus, on making inferences on population means. Furthermore, a problem of clustered data is added as data related to patients in primary care are organized...
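The first proposal, a two-level non-parametric bootstrap, can be sketched as below: clusters (e.g. practices) are resampled with replacement, then patients are resampled within each sampled cluster, so the resulting interval reflects the cluster effect. This is an illustrative sketch of the general technique, not the study's exact implementation; function and variable names are ours.

```python
import numpy as np

def cluster_bootstrap_ci(costs, clusters, n_boot=2000, alpha=0.05, seed=0):
    """Two-level bootstrap percentile CI for a mean cost with clustered data."""
    rng = np.random.default_rng(seed)
    ids = np.unique(clusters)
    groups = [costs[clusters == c] for c in ids]
    means = np.empty(n_boot)
    for b in range(n_boot):
        # Level 1: resample whole clusters with replacement.
        chosen = rng.integers(0, len(groups), size=len(groups))
        sample = []
        for g in chosen:
            grp = groups[g]
            # Level 2: resample patients within the sampled cluster.
            sample.append(rng.choice(grp, size=len(grp), replace=True))
        means[b] = np.concatenate(sample).mean()
    lo, hi = np.quantile(means, [alpha / 2, 1 - alpha / 2])
    return means.mean(), (lo, hi)
```

Comparing the mean cost of two trial arms amounts to running this on each arm (or bootstrapping the difference directly) and inspecting the intervals.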
Form gene clustering method about pan-ethnic-group products based on emotional semantic
Chen, Dengkai; Ding, Jingjing; Gao, Minzhuo; Ma, Danping; Liu, Donghui
2016-09-01
The use of pan-ethnic-group product form knowledge primarily depends on a designer's subjective experience, without user participation. The majority of studies focus primarily on detecting the perceptual demands of consumers for the target product category. A form gene clustering method for pan-ethnic-group products based on emotional semantics is constructed. Consumers' perceptual images of pan-ethnic-group products are obtained by means of product form gene extraction and coding and computer-aided product form clustering technology. A case study of form gene clustering for typical pan-ethnic-group products indicates that the method is feasible. This paper opens up a new direction for product form design, improving the agility of the product design process in the era of Industry 4.0.
Directory of Open Access Journals (Sweden)
Lu-Chuan Ceng
2013-01-01
Full Text Available We introduce Mann-type extragradient methods for a general system of variational inequalities with solutions of a multivalued variational inclusion and common fixed points of a countable family of nonexpansive mappings in real smooth Banach spaces. Here the Mann-type extragradient methods are based on Korpelevich’s extragradient method and Mann iteration method. We first consider and analyze a Mann-type extragradient algorithm in the setting of uniformly convex and 2-uniformly smooth Banach space and then another Mann-type extragradient algorithm in a smooth and uniformly convex Banach space. Under suitable assumptions, we derive some weak and strong convergence theorems. The results presented in this paper improve, extend, supplement, and develop the corresponding results announced in the earlier and very recent literature.
Variational methods for crystalline microstructure analysis and computation
Dolzmann, Georg
2003-01-01
Phase transformations in solids typically lead to surprising mechanical behaviour with far reaching technological applications. The mathematical modeling of these transformations in the late 80s initiated a new field of research in applied mathematics, often referred to as mathematical materials science, with deep connections to the calculus of variations and the theory of partial differential equations. This volume gives a brief introduction to the essential physical background, in particular for shape memory alloys and a special class of polymers (nematic elastomers). Then the underlying mathematical concepts are presented with a strong emphasis on the importance of quasiconvex hulls of sets for experiments, analytical approaches, and numerical simulations.
Detecting and extracting clusters in atom probe data: A simple, automated method using Voronoi cells
Energy Technology Data Exchange (ETDEWEB)
Felfer, P., E-mail: peter.felfer@sydney.edu.au [Australian Centre for Microscopy and Microanalysis, The University of Sydney, NSW 2006 (Australia); School of Aerospace, Mechanical and Mechatronic Engineering, The University of Sydney, NSW 2006 (Australia); Ceguerra, A.V., E-mail: anna.ceguerra@sydney.edu.au [Australian Centre for Microscopy and Microanalysis, The University of Sydney, NSW 2006 (Australia); School of Aerospace, Mechanical and Mechatronic Engineering, The University of Sydney, NSW 2006 (Australia); Ringer, S.P., E-mail: simon.ringer@sydney.edu.au [Australian Centre for Microscopy and Microanalysis, The University of Sydney, NSW 2006 (Australia); School of Aerospace, Mechanical and Mechatronic Engineering, The University of Sydney, NSW 2006 (Australia); Cairney, J.M., E-mail: julie.cairney@sydney.edu.au [Australian Centre for Microscopy and Microanalysis, The University of Sydney, NSW 2006 (Australia); School of Aerospace, Mechanical and Mechatronic Engineering, The University of Sydney, NSW 2006 (Australia)
2015-03-15
The analysis of the formation of clusters in solid solutions is one of the most common uses of atom probe tomography. Here, we present a method where we use the Voronoi tessellation of the solute atoms and its geometric dual, the Delaunay triangulation to test for spatial/chemical randomness of the solid solution as well as extracting the clusters themselves. We show how the parameters necessary for cluster extraction can be determined automatically, i.e. without user interaction, making it an ideal tool for the screening of datasets and the pre-filtering of structures for other spatial analysis techniques. Since the Voronoi volumes are closely related to atomic concentrations, the parameters resulting from this analysis can also be used for other concentration based methods such as iso-surfaces. - Highlights: • Cluster analysis of atom probe data can be significantly simplified by using the Voronoi cell volumes of the atomic distribution. • Concentration fields are defined on a single atomic basis using Voronoi cells. • All parameters for the analysis are determined by optimizing the separation probability of bulk atoms vs clustered atoms.
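The core quantity in this approach, the Voronoi cell volume of each solute atom, is straightforward to compute with standard tools; small cells correspond to high local atomic concentration, which is what gets thresholded to separate clustered atoms from the matrix. The sketch below uses SciPy's `Voronoi` and `ConvexHull` and simply skips unbounded boundary cells; it illustrates the geometric ingredient only, not the paper's automatic parameter determination.

```python
import numpy as np
from scipy.spatial import Voronoi, ConvexHull

def voronoi_volumes(points):
    """Volume of each bounded Voronoi cell; np.inf for unbounded cells."""
    vor = Voronoi(points)
    vols = np.full(len(points), np.inf)
    for i, region_idx in enumerate(vor.point_region):
        region = vor.regions[region_idx]
        if -1 in region or len(region) == 0:
            continue  # cell extends to infinity; skip it
        vols[i] = ConvexHull(vor.vertices[region]).volume
    return vols
```

Since the inverse cell volume is a single-atom concentration estimate, thresholding `1 / vols` is one way to flag candidate cluster atoms before spatial grouping.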
A Fast Variational Method for the Construction of Resolution Adaptive C(2)-Smooth Molecular Surfaces.
Bajaj, Chandrajit L; Xu, Guoliang; Zhang, Qin
2009-05-01
We present a variational approach to smooth molecular (proteins, nucleic acids) surface constructions, starting from atomic coordinates, as available from the protein and nucleic-acid data banks. Molecular dynamics (MD) simulations traditionally used in understanding protein and nucleic-acid folding processes are based on molecular force fields, and require smooth models of these molecular surfaces. To accelerate MD simulations, a popular methodology is to employ coarse-grained molecular models, which represent clusters of atoms with similar physical properties by pseudo-atoms, resulting in coarser-resolution molecular surfaces. We consider generation of these mixed-resolution or adaptive molecular surfaces. Our approach starts from deriving a general-form second-order geometric partial differential equation in the level-set formulation, by minimizing a first-order energy functional which additionally includes a regularization term to minimize the occurrence of chemically infeasible molecular surface pockets or tunnel-like artifacts. To achieve even higher computational efficiency, a fast cubic B-spline C(2) interpolation algorithm is also utilized. A narrow band, tri-cubic B-spline level-set method is then used to provide C(2) smooth and resolution adaptive molecular surfaces.
Variational iteration method for solving non-linear partial differential equations
Energy Technology Data Exchange (ETDEWEB)
Hemeda, A.A. [Department of Mathematics, Faculty of Science, University of Tanta, Tanta (Egypt)], E-mail: aahemeda@yahoo.com
2009-02-15
In this paper, we shall use the variational iteration method to solve some problems of non-linear partial differential equations (PDEs), such as the combined KdV-MKdV equation and the Camassa-Holm equation. The variational iteration method is superior to other non-linear methods, such as the perturbation methods, in that it does not depend on small parameters, and it therefore finds wide application in non-linear problems without linearization or small perturbation. In this method, the problems are initially approximated with possible unknowns; then a correction functional is constructed by a general Lagrange multiplier, which can be identified optimally via variational theory.
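The correction-functional mechanism is easy to see on a simple ODE. For u' = 1 + u^2 with u(0) = 0 (exact solution tan t), the Lagrange multiplier is lambda = -1, and each iterate is obtained from u_{n+1}(t) = u_n(t) - \int_0^t (u_n'(s) - 1 - u_n(s)^2) ds. The sketch below applies this with SymPy (our own toy example, not one of the paper's PDEs):

```python
import sympy as sp

t, s = sp.symbols('t s')

def vim_step(u):
    """One variational-iteration correction step with lambda = -1:
    u_{n+1}(t) = u_n(t) - integral_0^t (u_n'(s) - 1 - u_n(s)^2) ds."""
    residual = sp.diff(u, t) - 1 - u ** 2
    return sp.expand(u - sp.integrate(residual.subs(t, s), (s, 0, t)))

u = sp.Integer(0)  # initial guess satisfying u(0) = 0
for _ in range(3):
    u = vim_step(u)
# u is now t + t**3/3 + 2*t**5/15 + t**7/63, which agrees with the
# Maclaurin series of tan(t) through the t**5 term.
```

Three iterations already reproduce the exact solution's series to fifth order, illustrating why the method converges quickly without linearization.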
Quantum Monte Carlo diagonalization method as a variational calculation
Energy Technology Data Exchange (ETDEWEB)
Mizusaki, Takahiro; Otsuka, Takaharu [Tokyo Univ. (Japan). Dept. of Physics; Honma, Michio
1997-05-01
A stochastic method for performing large-scale shell model calculations is presented, which utilizes the auxiliary field Monte Carlo technique and diagonalization method. This method overcomes the limitation of the conventional shell model diagonalization and can extremely widen the feasibility of shell model calculations with realistic interactions for spectroscopic study of nuclear structure. (author)
Directory of Open Access Journals (Sweden)
Yonghan Choi
2014-01-01
Full Text Available An adjoint sensitivity-based data assimilation (ASDA method is proposed and applied to a heavy rainfall case over the Korean Peninsula. The heavy rainfall case, which occurred on 26 July 2006, caused torrential rainfall over the central part of the Korean Peninsula. The mesoscale convective system (MCS related to the heavy rainfall was classified as training line/adjoining stratiform (TL/AS-type for the earlier period, and back building (BB-type for the later period. In the ASDA method, an adjoint model is run backwards with forecast-error gradient as input, and the adjoint sensitivity of the forecast error to the initial condition is scaled by an optimal scaling factor. The optimal scaling factor is determined by minimising the observational cost function of the four-dimensional variational (4D-Var method, and the scaled sensitivity is added to the original first guess. Finally, the observations at the analysis time are assimilated using a 3D-Var method with the improved first guess. The simulated rainfall distribution is shifted northeastward compared to the observations when no radar data are assimilated or when radar data are assimilated using the 3D-Var method. The rainfall forecasts are improved when radar data are assimilated using the 4D-Var or ASDA method. Simulated atmospheric fields such as horizontal winds, temperature, and water vapour mixing ratio are also improved via the 4D-Var or ASDA method. Due to the improvement in the analysis, subsequent forecasts appropriately simulate the observed features of the TL/AS- and BB-type MCSs and the corresponding heavy rainfall. The computational cost associated with the ASDA method is significantly lower than that of the 4D-Var method.
Bustamam, A.; Aldila, D.; Fatimah, Arimbi, M. D.
2017-07-01
One of the most widely used clustering methods, owing to its robustness, is the Self-Organizing Map (SOM). This paper discusses the application of the SOM method to Human Papillomavirus (HPV) DNA, the main cause of cervical cancer, the most dangerous cancer in developing countries. We use 18 types of HPV DNA based on the newest complete genomes. Using the open-source program R, the clustering process separates the 18 HPV types into two clusters: two types in the first cluster and 16 in the second. The 18 HPV types are then analyzed according to the malignancy of the virus (how difficult it is to cure). The two HPV types in the first cluster can be classified as tame HPV, while the 16 in the second cluster are classified as vicious HPV.
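The SOM training rule behind such analyses is compact: each sample pulls its best-matching unit (BMU) and, with Gaussian falloff, that unit's neighbours on the map grid. The minimal NumPy sketch below shows the generic online SOM (the study itself used an R pipeline on genome-derived features; grid size, decay schedules, and names here are our assumptions):

```python
import numpy as np

def train_som(X, grid=(4, 4), epochs=50, lr0=0.5, sigma0=2.0, seed=0):
    """Minimal 2-D Self-Organizing Map trained with the classic online rule."""
    rng = np.random.default_rng(seed)
    h, w = grid
    W = rng.normal(size=(h * w, X.shape[1]))  # unit weight vectors
    coords = np.array([(i, j) for i in range(h) for j in range(w)], float)
    for e in range(epochs):
        frac = e / epochs
        lr = lr0 * (1 - frac)               # learning rate decays linearly
        sigma = sigma0 * (1 - frac) + 0.5   # neighbourhood radius shrinks
        for x in X[rng.permutation(len(X))]:
            bmu = np.argmin(((W - x) ** 2).sum(1))
            # Gaussian neighbourhood on the map grid around the BMU.
            g = np.exp(-((coords - coords[bmu]) ** 2).sum(1) / (2 * sigma ** 2))
            W += lr * g[:, None] * (x - W)
    return W

def quantization_error(X, W):
    """Mean distance from each sample to its best-matching unit."""
    return np.mean([np.min(((W - x) ** 2).sum(1)) ** 0.5 for x in X])
```

After training, samples mapping to nearby units form the clusters; for sequence data, `X` would hold numeric features (e.g. k-mer counts) derived from each genome.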
von der Linden, Anja; Applegate, Douglas E; Kelly, Patrick L; Allen, Steven W; Ebeling, Harald; Burchat, Patricia R; Burke, David L; Donovan, David; Morris, R Glenn; Blandford, Roger; Erben, Thomas; Mantz, Adam
2012-01-01
This is the first in a series of papers in which we measure accurate weak-lensing masses for 51 of the most X-ray luminous galaxy clusters known at redshifts 0.15
Variational iteration method for Bratu-like equation arising in electrospinning.
He, Ji-Huan; Kong, Hai-Yan; Chen, Rou-Xi; Hu, Ming-sheng; Chen, Qiao-ling
2014-05-25
This paper points out that the so-called enhanced variational iteration method (Colantoni & Boubaker, 2014) for a nonlinear equation arising in electrospinning and the vibration-electrospinning process is the standard variational iteration method. An effective algorithm using variational iteration algorithm-II is suggested for the Bratu-like equation arising in electrospinning. A suitable choice of initial guess yields a relatively accurate solution in one or a few iterations.
Variational space-time (dis)continuous Galerkin method for nonlinear free surface waves
Gagarina, E; Vegt, van der, N.F.A.; Ambati, V.R.; Bokhove, O.
2013-01-01
A new variational finite element method is developed for nonlinear free surface gravity water waves. This method also handles waves generated by a wave maker. Its formulation stems from Miles' variational principle for water waves together with a space-time finite element discretization that is continuous in space and discontinuous in time. The key features of this formulation are: (i) a discrete variational approach that gives rise to conservation of discrete energy and phase space and prese...
Directory of Open Access Journals (Sweden)
Leuze Michael
2009-07-01
Full Text Available Abstract Background The Centers for Disease Control and Prevention's (CDC's BioSense system provides near-real time situational awareness for public health monitoring through analysis of electronic health data. Determination of anomalous spatial and temporal disease clusters is a crucial part of the daily disease monitoring task. Our study focused on finding useful anomalies at manageable alert rates according to available BioSense data history. Methods The study dataset included more than 3 years of daily counts of military outpatient clinic visits for respiratory and rash syndrome groupings. We applied four spatial estimation methods in implementations of space-time scan statistics cross-checked in Matlab and C. We compared the utility of these methods according to the resultant background cluster rate (a false alarm surrogate and sensitivity to injected cluster signals. The comparison runs used a spatial resolution based on the facility zip code in the patient record and a finer resolution based on the residence zip code. Results Simple estimation methods that account for day-of-week (DOW data patterns yielded a clear advantage both in background cluster rate and in signal sensitivity. A 28-day baseline gave the most robust results for this estimation; the preferred baseline is long enough to remove daily fluctuations but short enough to reflect recent disease trends and data representation. Background cluster rates were lower for the rash syndrome counts than for the respiratory counts, likely because of seasonality and the large scale of the respiratory counts. Conclusion The spatial estimation method should be chosen according to characteristics of the selected data streams. In this dataset with strong day-of-week effects, the overall best detection performance was achieved using subregion averages over a 28-day baseline stratified by weekday or weekend/holiday behavior. Changing the estimation method for particular scenarios involving
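The winning estimator in this comparison is simple to state: the expected count for a day is the average of the same weekday's counts within the trailing 28-day baseline (four values per weekday). The sketch below implements that stratified baseline expectation; the anomaly statistic would then compare observed counts against these expectations (a simplified version of the study's estimator, with weekday taken as index mod 7 for illustration).

```python
import numpy as np

def dow_baseline_expectation(counts, baseline=28):
    """Expected daily count from a trailing baseline stratified by day-of-week.

    expected[i] is the mean of same-weekday counts in the previous
    `baseline` days (4 values for a 28-day window); NaN until enough
    history has accumulated.
    """
    counts = np.asarray(counts, float)
    expected = np.full(len(counts), np.nan)
    for i in range(baseline, len(counts)):
        window = np.arange(i - baseline, i)
        same_dow = window[(window % 7) == (i % 7)]  # weekday = index mod 7
        expected[i] = counts[same_dow].mean()
    return expected
```

A 28-day window is long enough to smooth daily noise yet short enough to track recent disease trends, which is why it proved most robust here; weekend/holiday behaviour can be handled by stratifying those days separately.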
Application of Grey Relational Cluster Method in Muon Tomography for Materials Detection
Institute of Scientific and Technical Information of China (English)
无
2011-01-01
When the number of particles is small, grey system theory is well suited to problems with small samples and incomplete information. The grey relational cluster method is applied to materials detection in muon tomography research.
Directory of Open Access Journals (Sweden)
Lee Yun-Shien
2008-03-01
Full Text Available Abstract Background The hierarchical clustering tree (HCT) with a dendrogram [1] and the singular value decomposition (SVD) with a dimension-reduced representative map [2] are popular methods for two-way sorting the gene-by-array matrix map employed in gene expression profiling. While HCT dendrograms tend to optimize local coherent clustering patterns, SVD leading eigenvectors usually identify better global grouping and transitional structures. Results This study proposes a flipping mechanism for a conventional agglomerative HCT using a rank-two ellipse (R2E), an improved SVD algorithm for sorting purposes (seriation) by Chen [3], as an external reference. While HCTs always produce permutations with good local behaviour, the rank-two ellipse seriation gives the best global grouping patterns and smooth transitional trends. The resulting algorithm automatically integrates the desirable properties of each method so that users have access to a clustering and visualization environment for gene expression profiles that preserves coherent local clusters and identifies global grouping trends. Conclusion We demonstrate, through four examples, that the proposed method not only possesses better numerical and statistical properties, it also provides more meaningful biomedical insights than other sorting algorithms. We suggest that sorted proximity matrices for genes and arrays, in addition to the gene-by-array expression matrix, can greatly aid in the search for comprehensive understanding of gene expression structures. Software for the proposed methods can be obtained at http://gap.stat.sinica.edu.tw/Software/GAP.
Non-Hierarchical Clustering as a method to analyse an open-ended ...
African Journals Online (AJOL)
Apple
tests, provide instructors with tools to probe students' conceptual knowledge of various fields of science and ... quantitative non-hierarchical clustering analysis method known as k-means (Everitt, Landau, Leese & Stahl, ...... undergraduate engineering students in creating ... mathematics-formal reasoning and the contextual.
A novel PPGA-based clustering analysis method for business cycle indicator selection
Institute of Scientific and Technical Information of China (English)
Dabin ZHANG; Lean YU; Shouyang WANG; Yingwen SONG
2009-01-01
A new clustering analysis method based on the pseudo parallel genetic algorithm (PPGA) is proposed for business cycle indicator selection. In the proposed method, the category of each indicator is coded by real numbers, and some illegal chromosomes are repaired by the identification and restoration of empty classes. Two mutation operators, namely the discrete random mutation operator and the optimal direction mutation operator, are designed to balance the local convergence speed and the global convergence performance, and are then combined with a migration strategy and an insertion strategy. For the purpose of verification and illustration, the proposed method is compared with the K-means clustering algorithm and standard genetic algorithms via a numerical simulation experiment. The experimental result shows the feasibility and effectiveness of the new PPGA-based clustering analysis algorithm. Meanwhile, the proposed clustering analysis algorithm is also applied to select business cycle indicators to examine the status of the macro economy. Empirical results demonstrate that the proposed method can effectively and correctly select leading indicators, coincident indicators, and lagging indicators to reflect the business cycle, which is extremely operational for macro-economy administrative managers and business decision-makers.
Advanced methods in the fractional calculus of variations
Malinowska, Agnieszka B; Torres, Delfim F M
2015-01-01
This brief presents a general unifying perspective on the fractional calculus. It brings together results of several recent approaches in generalizing the least action principle and the Euler–Lagrange equations to include fractional derivatives. The dependence of Lagrangians on generalized fractional operators as well as on classical derivatives is considered along with still more general problems in which integer-order integrals are replaced by fractional integrals. General theorems are obtained for several types of variational problems for which recent results developed in the literature can be obtained as special cases. In particular, the authors offer necessary optimality conditions of Euler–Lagrange type for the fundamental and isoperimetric problems, transversality conditions, and Noether symmetry theorems. The existence of solutions is demonstrated under Tonelli type conditions. The results are used to prove the existence of eigenvalues and corresponding orthogonal eigenfunctions of fractional Stur...
1987-06-26
AD-A184 687, Mathematical Sciences Institute. Annotated computer output ... introduction to the use of mixture models in clustering. Cornell University Biometrics Unit Technical Report BU-920-M and Mathematical Sciences Institute ... mixture method and two comparable methods from SAS. Cornell University Biometrics Unit Technical Report BU-921-M and Mathematical Sciences Institute.
The Application of High-Level Iterative Coupled-Cluster Methods to the Cytosine Molecule
Energy Technology Data Exchange (ETDEWEB)
Kowalski, Karol; Valiev, Marat
2008-06-19
The need for inclusion of higher-order correlation effects for an adequate description of the excitation energies of the DNA bases became clear in the last few years. In particular, we demonstrated that there is a sizable effect of triply excited configurations, estimated in a non-iterative manner, on the coupled-cluster excitation energies of the cytosine molecule in a DNA environment. In this paper we discuss the accuracy of the non-iterative methods for biologically relevant systems in a realistic environment in comparison with iterative formulations that explicitly include the effect of triply excited clusters.
A simple and fast method to determine the parameters for fuzzy c-means cluster analysis
DEFF Research Database (Denmark)
Schwämmle, Veit; Jensen, Ole Nørregaard
2010-01-01
MOTIVATION: Fuzzy c-means clustering is widely used to identify cluster structures in high-dimensional datasets, such as those obtained in DNA microarray and quantitative proteomics experiments. One of its main limitations is the lack of a computationally fast method to set optimal values...... on the main properties of the dataset. Taking the dimension of the set and the number of objects as input values instead of evaluating the entire dataset allows us to propose a functional relationship determining the fuzzifier directly. This result speaks strongly against using a predefined fuzzifier...
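For context, the fuzzifier m that the paper shows how to set is the exponent in the standard fuzzy c-means iteration sketched below, which alternates weighted centroid updates with the membership update u_{ki} = 1 / sum_j (d_{ki}/d_{kj})^{2/(m-1)}. This is the generic algorithm only, with m fixed at the common default of 2 for illustration; the paper's contribution is a closed-form choice of m from the dataset's dimension and size, which is not reproduced here.

```python
import numpy as np

def fuzzy_cmeans(X, c, m=2.0, iters=100, seed=0):
    """Plain fuzzy c-means clustering with fuzzifier m."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(1, keepdims=True)      # memberships sum to 1 per object
    for _ in range(iters):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        d = np.maximum(d, 1e-12)      # guard against zero distances
        # u_{ki} = 1 / sum_j (d_{ki} / d_{kj})^(2 / (m - 1))
        U = 1.0 / ((d[:, :, None] / d[:, None, :]) ** (2.0 / (m - 1))).sum(2)
    return centers, U
```

Larger m makes memberships softer; as m approaches 1 the algorithm approaches hard k-means, which is why choosing m from the data rather than by convention matters.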
Three-step relaxed hybrid steepest-descent methods for variational inequalities
Institute of Scientific and Technical Information of China (English)
无
2007-01-01
The classical variational inequality problem with a Lipschitzian and strongly monotone operator on a nonempty closed convex subset in a real Hilbert space is studied. A new three-step relaxed hybrid steepest-descent method for this class of variational inequalities is introduced. Strong convergence of this method is established under suitable assumptions imposed on the algorithm parameters.
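The simplest member of the steepest-descent family that the three-step relaxed hybrid scheme refines is the projection iteration x <- P_C(x - lambda F(x)), which converges to the unique solution of VI(F, C) when F is Lipschitz and strongly monotone and the step size is suitable. A minimal sketch (our own illustrative example, not the paper's algorithm):

```python
import numpy as np

def projected_descent_vi(F, project, x0, lam=0.1, iters=500):
    """Basic projected steepest descent for the variational inequality
    VI(F, C): find x* in C with <F(x*), y - x*> >= 0 for all y in C."""
    x = np.asarray(x0, float)
    for _ in range(iters):
        x = project(x - lam * F(x))   # gradient-like step, then project onto C
    return x
```

For a separable strongly monotone operator F(x) = a*x - b on the box C = [0,1]^n, the solution is simply clip(b/a, 0, 1), which makes the iteration easy to verify.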
Variational space–time (dis)continuous Galerkin method for nonlinear free surface water waves
Gagarina, E.; Ambati, V.R.; Vegt, van der J.J.W.; Bokhove, O.
2014-01-01
A new variational finite element method is developed for nonlinear free surface gravity water waves using the potential flow approximation. This method also handles waves generated by a wave maker. Its formulation stems from Miles’ variational principle for water waves together with a finite element
Variational space-time (dis)continuous Galerkin method for nonlinear free surface waves
Gagarina, E.; Vegt, van der J.J.W.; Ambati, V.R.; Bokhove, O.
2013-01-01
A new variational finite element method is developed for nonlinear free surface gravity water waves. This method also handles waves generated by a wave maker. Its formulation stems from Miles' variational principle for water waves together with a space-time finite element discretization that is cont
On the Finite Convergence of Newton-type Methods for P0 Affine Variational Inequalities
Institute of Scientific and Technical Information of China (English)
Li Ping ZHANG; Wen Xun XING
2007-01-01
Based on the techniques used in non-smooth Newton methods and regularized smoothing Newton methods, a Newton-type algorithm is proposed for solving the P0 affine variational inequality problem. Under mild conditions, the algorithm can find an exact solution of the P0 affine variational inequality problem in finite steps. Preliminary numerical results indicate that the algorithm is promising.
A method for clustering of miRNA sequences using fragmented programming
Ivashchenko, Anatoly; Pyrkova, Anna; Niyazova, Raigul
2016-01-01
Clustering of miRNA sequences is an important problem in molecular genetics and associated cellular biology. Thousands of such sequences are known today through advances in sophisticated molecular tools, sequencing techniques, computational resources and rule-based mathematical models. Analysis of such large-scale miRNA sequences for inferring patterns towards deducing cellular function is a great challenge in modern molecular biology. Therefore, it is of interest to develop mathematical models specific to miRNA sequences. The process is to group (cluster) such miRNA sequences using well-defined known features. We describe a method for clustering of miRNA sequences using fragmented programming. Subsequently, we illustrate the utility of the model using a dendrogram (a tree diagram) for publicly known A. thaliana miRNA nucleotide sequences towards the inference of observed conserved patterns PMID:27212839
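The overall workflow, encoding each sequence as a numeric feature vector and building a dendrogram by agglomerative clustering, can be sketched with standard tools. Below we use simple k-mer counts and SciPy's hierarchical clustering in place of the paper's fragmented-programming formulation; the feature choice and parameters are our assumptions for illustration.

```python
import itertools
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

def kmer_features(seq, k=2, alphabet='ACGU'):
    """Count vector over all k-mers of the RNA alphabet: a simple numeric
    encoding so standard clustering can compare miRNA sequences."""
    kmers = [''.join(p) for p in itertools.product(alphabet, repeat=k)]
    index = {km: i for i, km in enumerate(kmers)}
    v = np.zeros(len(kmers))
    for i in range(len(seq) - k + 1):
        v[index[seq[i:i + k]]] += 1
    return v

def cluster_sequences(seqs, n_clusters=2):
    """Agglomerative clustering of sequences via their k-mer profiles.
    The linkage matrix Z is exactly what a dendrogram plot would draw."""
    X = np.array([kmer_features(s) for s in seqs])
    Z = linkage(X, method='average')
    return fcluster(Z, n_clusters, criterion='maxclust')
```

Passing `Z` to `scipy.cluster.hierarchy.dendrogram` renders the tree diagram used in the paper's illustration.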
Clustered iterative stochastic ensemble method for multi-modal calibration of subsurface flow models
Elsheikh, Ahmed H.
2013-05-01
A novel multi-modal parameter estimation algorithm is introduced. Parameter estimation is an ill-posed inverse problem that might admit many different solutions. This is attributed to the limited amount of measured data used to constrain the inverse problem. The proposed multi-modal model calibration algorithm uses an iterative stochastic ensemble method (ISEM) for parameter estimation. ISEM employs an ensemble of directional derivatives within a Gauss-Newton iteration for nonlinear parameter estimation. ISEM is augmented with a clustering step based on k-means algorithm to form sub-ensembles. These sub-ensembles are used to explore different parts of the search space. Clusters are updated at regular intervals of the algorithm to allow merging of close clusters approaching the same local minima. Numerical testing demonstrates the potential of the proposed algorithm in dealing with multi-modal nonlinear parameter estimation for subsurface flow models. © 2013 Elsevier B.V.
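The distinctive clustering step, splitting the ensemble into k-means sub-ensembles and periodically merging clusters whose centroids approach the same local minimum, can be sketched as follows. This is a standalone illustration of that step only (the Gauss-Newton ISEM update is omitted); the merge tolerance and all names are our assumptions.

```python
import numpy as np
from scipy.cluster.vq import kmeans2

def merged_subensembles(params, k=4, merge_tol=1.0, seed=0):
    """Split an ensemble of parameter vectors into k-means sub-ensembles,
    then merge clusters whose centroids are closer than merge_tol."""
    centers, labels = kmeans2(params, k, seed=seed, minit='++')
    # Greedy merge: relabel any cluster to the first earlier cluster
    # whose centroid lies within merge_tol of it.
    remap = np.arange(k)
    for i in range(k):
        for j in range(i):
            if np.linalg.norm(centers[i] - centers[j]) < merge_tol:
                remap[i] = remap[j]
                break
    return remap[labels]
```

In the full algorithm each surviving sub-ensemble would then receive its own directional-derivative update, so distinct modes of the posterior are explored in parallel.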
Parallel recovery method in shared-nothing spatial database cluster system
Institute of Scientific and Technical Information of China (English)
YOU Byeong-seob; KIM Myung-keun; ZOU Yong-gui; BAE Hae-young
2004-01-01
A shared-nothing spatial database cluster system provides high availability, since a replicated node can continue service even if a node in the cluster system crashes. However, if the failed node is not recovered quickly, overall system performance decreases, since the other nodes must process the queries that the failed node would have handled. The recovery of the cluster system is therefore very important for providing stable service. In most previously proposed techniques, external logs are recorded in all nodes even when no failed node exists, so update transactions are processed slowly. Recovery time of the failed node also increases, since a single storage for the whole database is used to record external logs in each node. We therefore propose a parallel recovery method for recovering the failed node quickly.
Kafieh, Rahele; Mehridehnavi, Alireza
2013-01-01
In this study, we considered some competitive learning methods including hard competitive learning and soft competitive learning with/without fixed network dimensionality for reliability analysis in microarrays. In order to have a more extensive view, and keeping in mind that competitive learning methods aim at error minimization or entropy maximization (different kinds of function optimization), we decided to investigate the abilities of mixture decomposition schemes. Therefore, we assert that this study covers the algorithms based on function optimization with particular insistence on different competitive learning methods. The destination is finding the most powerful method according to a pre-specified criterion determined with numerical methods and matrix similarity measures. Furthermore, we should provide an indication showing the intrinsic ability of the dataset to form clusters before we apply a clustering algorithm. Therefore, we proposed Hopkins statistic as a method for finding the intrinsic ability of a data to be clustered. The results show the remarkable ability of Rayleigh mixture model in comparison with other methods in reliability analysis task. PMID:24083134
Galaxy Cluster Mass Reconstruction Project: I. Methods and first results on galaxy-based techniques
Old, L; Pearce, F R; Croton, D; Muldrew, S I; Muñoz-Cuartas, J C; Gifford, D; Gray, M E; von der Linden, A; Mamon, G A; Merrifield, M R; Müller, V; Pearson, R J; Ponman, T J; Saro, A; Sepp, T; Sifón, C; Tempel, E; Tundo, E; Wang, Y O; Wojtak, R
2014-01-01
This paper is the first in a series in which we perform an extensive comparison of various galaxy-based cluster mass estimation techniques that utilise the positions, velocities and colours of galaxies. Our primary aim is to test the performance of these cluster mass estimation techniques on a diverse set of models that will increase in complexity. We begin by providing participating methods with data from a simple model that delivers idealised clusters, enabling us to quantify the underlying scatter intrinsic to these mass estimation techniques. The mock catalogue is based on a Halo Occupation Distribution (HOD) model that assumes spherical Navarro, Frenk and White (NFW) haloes truncated at R_200, with no substructure nor colour segregation, and with isotropic, isothermal Maxwellian velocities. We find that, above 10^14 M_solar, recovered cluster masses are correlated with the true underlying cluster mass with an intrinsic scatter of typically a factor of two. Below 10^14 M_solar, the scatter rises as the nu...
Directory of Open Access Journals (Sweden)
Wen Liu
2016-12-01
Full Text Available Indoor positioning technologies have boomed recently because of the growing commercial interest in indoor location-based service (ILBS). Due to the absence of satellite signals in the Global Navigation Satellite System (GNSS), various technologies have been proposed for indoor applications. Among them, Wi-Fi fingerprinting has been attracting much interest from researchers because of its pervasive deployment, flexibility and robustness to dense cluttered indoor environments. One challenge, however, is the deployment of Access Points (AP), which has a significant influence on the system positioning accuracy. This paper concentrates on WLAN-based fingerprinting indoor location by analyzing the AP deployment influence, and studying the advantages of coordinate-based clustering compared to traditional RSS-based clustering. A coordinate-based clustering method for indoor fingerprinting location, named Smallest-Enclosing-Circle-based (SEC), is then proposed, aiming at reducing the positioning error lying in the AP deployment and improving robustness to dense cluttered environments. All measurements are conducted in indoor public areas, such as the National Center For the Performing Arts (as Test-bed 1) and the XiDan Joy City (Floors 1 and 2, as Test-bed 2), and results show that the SEC clustering algorithm can improve system positioning accuracy by about 32.7% for Test-bed 1, 71.7% for Test-bed 2 Floor 1 and 73.7% for Test-bed 2 Floor 2 compared with traditional RSS-based clustering algorithms such as K-means.
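The geometric primitive the SEC method is named after, the smallest circle enclosing a set of 2-D reference points, is easy to compute for the handful of points in one fingerprinting cluster. The minimal enclosing circle is determined by either 2 or 3 of the points, so a brute-force search over those support sets suffices (a sketch of the primitive only; O(n^4), fine for small n, and not the paper's clustering pipeline):

```python
import itertools
import numpy as np

def smallest_enclosing_circle(pts):
    """Brute-force smallest enclosing circle of 2-D points: (center, radius)."""
    pts = np.asarray(pts, float)
    best_r, best_c = np.inf, None

    def covers(c, r):
        return np.all(np.linalg.norm(pts - c, axis=1) <= r + 1e-9)

    # Candidate circles with a point pair as diameter.
    for a, b in itertools.combinations(pts, 2):
        c = (a + b) / 2
        r = np.linalg.norm(a - b) / 2
        if r < best_r and covers(c, r):
            best_r, best_c = r, c
    # Candidate circumcircles of point triples.
    for a, b, q in itertools.combinations(pts, 3):
        d = 2 * (a[0] * (b[1] - q[1]) + b[0] * (q[1] - a[1]) + q[0] * (a[1] - b[1]))
        if abs(d) < 1e-12:
            continue  # collinear triple has no circumcircle
        ux = ((a @ a) * (b[1] - q[1]) + (b @ b) * (q[1] - a[1]) + (q @ q) * (a[1] - b[1])) / d
        uy = ((a @ a) * (q[0] - b[0]) + (b @ b) * (a[0] - q[0]) + (q @ q) * (b[0] - a[0])) / d
        c = np.array([ux, uy])
        r = np.linalg.norm(a - c)
        if r < best_r and covers(c, r):
            best_r, best_c = r, c
    return best_c, best_r
```

For larger point sets, Welzl's randomized algorithm computes the same circle in expected linear time.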
Assessing the Eutrophication of Shengzhong Reservoir Based on Grey Clustering Method
Institute of Scientific and Technical Information of China (English)
Pan An; Hu Lihui; Li Tesong; Li Chengzhu
2009-01-01
Reservoir water environment is a grey system. The grey clustering method is applied to assessing the reservoir water environment, establishing a relatively complete model suitable for reservoir eutrophication evaluation and appropriately evaluating the quality of reservoir water, providing evidence for reservoir management. According to Chiua's lake and reservoir eutrophication criteria and the characteristics of eutrophication in China, as well as certain evaluation indices, the degree of eutrophication is classified into six categories, with grey-class whitening weight functions used to represent the classification boundaries, to determine the clustering weight and clustering coefficient of each index in the grey classes and the classification of each clustering object. The comprehensive evaluation of reservoir eutrophication is established on this foundation, with Sichuan Shengzhong Reservoir as the survey object and analysis of the data obtained from several typical monitoring points there in 2006. It is found that eutrophication at Tiebian Power Generation Station, Guoyuanchang and Dashiqiao Bridge is the heaviest, Tielusi and Qinggangya the second, and Lijiaba the least. The eutrophication of this reservoir is closely related to the irrational exploitation in its surrounding areas, especially to the aggravation of non-point source pollution and the increase of net-culture fishing. It is therefore feasible to use grey clustering in environmental quality evaluation, and the key lies in the correct division of the grey whitening functions.
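The whitening-weight machinery described in the abstract above can be sketched in a few lines. The class boundaries and index below are invented for illustration (the paper's six classes and criteria are not reproduced here): a triangular whitening weight function maps an index value to a degree of membership in each class, and the sample joins the class with the largest clustering coefficient.

```python
def whiten(x, lo, peak, hi):
    """Triangular whitening weight function: 0 outside (lo, hi), 1 at the peak."""
    if x <= lo or x >= hi:
        return 0.0
    if x <= peak:
        return (x - lo) / (peak - lo)
    return (hi - x) / (hi - peak)

# Hypothetical grey-class boundaries for one index (e.g. total phosphorus, mg/L).
classes = {
    "oligotrophic": (0.0, 0.005, 0.02),
    "mesotrophic": (0.005, 0.02, 0.05),
    "eutrophic": (0.02, 0.05, 0.12),
}

def classify(x):
    """Clustering coefficient per class; the sample joins the argmax class."""
    sigma = {name: whiten(x, *bounds) for name, bounds in classes.items()}
    return max(sigma, key=sigma.get), sigma

label, sigma = classify(0.045)
print(label)  # eutrophic
```

In the full method, each monitoring point aggregates such coefficients over all evaluation indices, weighted by clustering weights, before the class assignment is made.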
Liu, Wen; Fu, Xiao; Deng, Zhongliang
2016-12-02
Indoor positioning technologies have boomed recently because of the growing commercial interest in indoor location-based services (ILBS). In the absence of satellite signals from the Global Navigation Satellite System (GNSS), various technologies have been proposed for indoor applications. Among them, Wi-Fi fingerprinting has attracted much interest from researchers because of its pervasive deployment, flexibility and robustness to dense, cluttered indoor environments. One challenge, however, is the deployment of Access Points (AP), which has a significant influence on system positioning accuracy. This paper concentrates on WLAN-based fingerprinting indoor location by analyzing the influence of AP deployment and studying the advantages of coordinate-based clustering over traditional RSS-based clustering. A coordinate-based clustering method for indoor fingerprinting location, named Smallest-Enclosing-Circle-based (SEC), is then proposed, aiming at reducing the positioning error arising from AP deployment and improving robustness to dense, cluttered environments. All measurements were conducted in indoor public areas, such as the National Center For the Performing Arts (as Test-bed 1) and the XiDan Joy City (Floors 1 and 2, as Test-bed 2), and results show that the SEC clustering algorithm can improve system positioning accuracy by about 32.7% for Test-bed 1, 71.7% for Test-bed 2 Floor 1 and 73.7% for Test-bed 2 Floor 2 compared with traditional RSS-based clustering algorithms such as K-means.
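The geometric core of the SEC idea, the smallest enclosing circle of a set of reference points, can be sketched with a brute-force search over pairs and triples (Welzl's randomized algorithm is the standard linear-time choice; this O(n^4) version is fine for a handful of points). The AP coordinates are invented for the demo.

```python
import itertools
import math

def circle_two(p, q):
    # Circle with the segment pq as diameter.
    cx, cy = (p[0] + q[0]) / 2, (p[1] + q[1]) / 2
    return (cx, cy, math.dist(p, q) / 2)

def circle_three(a, b, c):
    # Circumcircle via perpendicular-bisector intersection; None if collinear.
    d = 2 * (a[0] * (b[1] - c[1]) + b[0] * (c[1] - a[1]) + c[0] * (a[1] - b[1]))
    if abs(d) < 1e-12:
        return None
    ux = ((a[0]**2 + a[1]**2) * (b[1] - c[1]) + (b[0]**2 + b[1]**2) * (c[1] - a[1])
          + (c[0]**2 + c[1]**2) * (a[1] - b[1])) / d
    uy = ((a[0]**2 + a[1]**2) * (c[0] - b[0]) + (b[0]**2 + b[1]**2) * (a[0] - c[0])
          + (c[0]**2 + c[1]**2) * (b[0] - a[0])) / d
    return (ux, uy, math.dist((ux, uy), a))

def contains(circ, pts, eps=1e-9):
    cx, cy, r = circ
    return all(math.dist((cx, cy), p) <= r + eps for p in pts)

def smallest_enclosing_circle(pts):
    # Brute force over all pairs and triples; the optimum is determined by <= 3 points.
    best = None
    for p, q in itertools.combinations(pts, 2):
        c = circle_two(p, q)
        if contains(c, pts) and (best is None or c[2] < best[2]):
            best = c
    for a, b, c3 in itertools.combinations(pts, 3):
        c = circle_three(a, b, c3)
        if c and contains(c, pts) and (best is None or c[2] < best[2]):
            best = c
    return best

aps = [(0, 0), (1, 0), (0, 1), (1, 1)]  # hypothetical AP positions
cx, cy, r = smallest_enclosing_circle(aps)
print(round(cx, 3), round(cy, 3), round(r, 4))  # 0.5 0.5 0.7071
```

The SEC method itself builds clusters from such circles over reference-point coordinates; the clustering logic is not reproduced here.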
Directory of Open Access Journals (Sweden)
Yong-Ju Yang
2013-01-01
Full Text Available The local fractional variational iteration method for the local fractional Laplace equation is investigated in this paper. The operators are described in the sense of local fractional operators. The obtained results reveal that the method is very effective.
DEFF Research Database (Denmark)
Harris, Abigail K P; Williamson, Neil R; Slater, Holly
2004-01-01
The prodigiosin biosynthesis gene cluster (pig cluster) from two strains of Serratia (S. marcescens ATCC 274 and Serratia sp. ATCC 39006) has been cloned, sequenced and expressed in heterologous hosts. Sequence analysis of the respective pig clusters revealed 14 ORFs in S. marcescens ATCC 274 and...
IP2P K-means: an efficient method for data clustering on sensor networks
Directory of Open Access Journals (Sweden)
Peyman Mirhadi
2013-03-01
Full Text Available Many wireless sensor network applications require data gathering as the most important part of their operations. There are increasing demands for innovative methods to improve energy efficiency and to prolong the network lifetime. Clustering is considered an efficient topology control method in wireless sensor networks, which can increase network scalability and lifetime. This paper presents a method, IP2P K-means – Improved P2P K-means, which uses efficient leveling in its clustering approach, reduces false labeling and restricts the necessary communication among sensors, which saves more energy. The proposed method is examined in Network Simulator Ver. 2 (NS2), and the preliminary results show that the algorithm works effectively and relatively precisely.
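For readers unfamiliar with the baseline being improved, the standard Lloyd iteration underlying K-means can be sketched as follows. This is the generic centralized algorithm, not the paper's distributed P2P protocol; the points and the deterministic initialization are illustrative only.

```python
def kmeans(points, k, iters=20):
    """Plain Lloyd iteration: alternate nearest-centre assignment and mean update."""
    centers = list(points[:k])  # simple deterministic init (k-means++ is better in practice)
    for _ in range(iters):
        # Assignment step: each point joins its nearest centre.
        groups = [[] for _ in range(k)]
        for p in points:
            i = min(range(k),
                    key=lambda i: sum((a - b) ** 2 for a, b in zip(p, centers[i])))
            groups[i].append(p)
        # Update step: each centre moves to the mean of its members.
        for i, g in enumerate(groups):
            if g:
                centers[i] = tuple(sum(c) / len(g) for c in zip(*g))
    return centers, groups

pts = [(0.1, 0.0), (0.0, 0.2), (5.0, 5.1), (5.2, 4.9)]
centers, groups = kmeans(pts, 2)
print(sorted(len(g) for g in groups))  # [2, 2]
```

The P2P variant distributes the assignment and update steps across sensor nodes to cut communication; that machinery is beyond this sketch.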
Directory of Open Access Journals (Sweden)
Qunyi Xie
2016-01-01
Full Text Available Content-based image retrieval has recently become an important research topic and has been widely used for managing images from repositories. In this article, we address an efficient technique, called MNGS, which integrates multiview constrained nonnegative matrix factorization (NMF) and Gaussian mixture model (GMM)-based spectral clustering for image retrieval. In the proposed methodology, the multiview NMF scheme provides competitive sparse representations of underlying images through decomposition of a similarity-preserving matrix that is formed by fusing multiple features from different visual aspects. In particular, the proposed method merges manifold constraints into the standard NMF objective function to impose an orthogonality constraint on the basis matrix and satisfy the structure preservation requirement of the coefficient matrix. To manipulate the clustering method on sparse representations, this paper develops a GMM-based spectral clustering method in which the Gaussian components are regrouped in spectral space, which significantly improves the retrieval effectiveness. In this way, image retrieval of the whole database translates to a nearest-neighbour search in the cluster containing the query image. Simultaneously, this study investigates the proof of convergence of the objective function and the analysis of the computational complexity. Experimental results on three standard image datasets reveal the advantages that can be achieved with the proposed retrieval scheme.
Methods for accurate analysis of galaxy clustering on non-linear scales
Vakili, Mohammadjavad
2017-01-01
Measurements of galaxy clustering with low-redshift galaxy surveys provide a sensitive probe of cosmology and the growth of structure. Parameter inference with galaxy clustering relies on computation of likelihood functions, which requires estimation of the covariance matrix of the observables used in our analyses. Therefore, accurate estimation of the covariance matrices serves as one of the key ingredients in precise cosmological parameter inference. This requires generation of a large number of independent galaxy mock catalogs that accurately describe the statistical distribution of galaxies over a wide range of physical scales. We present a fast method based on low-resolution N-body simulations and an approximate galaxy biasing technique for generating mock catalogs. Using a reference catalog that was created using the high-resolution Big-MultiDark N-body simulation, we show that our method is able to produce catalogs that describe galaxy clustering at percent-level accuracy down to highly non-linear scales in both real space and redshift space. In most large-scale structure analyses, modeling of galaxy bias on non-linear scales is performed assuming a halo model. Clustering of dark matter halos has been shown to depend on halo properties beyond mass, such as halo concentration, a phenomenon referred to as assembly bias. Standard large-scale structure studies assume that halo mass alone is sufficient to characterize the connection between galaxies and halos. However, modeling of galaxy bias can face systematic effects if the number of galaxies is correlated with other halo properties. Using the Small MultiDark-Planck high-resolution N-body simulation and the clustering measurements of the Sloan Digital Sky Survey DR7 main galaxy sample, we investigate the extent to which the dependence of galaxy bias on halo concentration can improve our modeling of galaxy clustering.
Yan, Donghui; Jordan, Michael I
2011-01-01
Inspired by Random Forests (RF) in the context of classification, we propose a new clustering ensemble method---Cluster Forests (CF). Geometrically, CF randomly probes a high-dimensional data cloud to obtain "good local clusterings" and then aggregates via spectral clustering to obtain cluster assignments for the whole dataset. The search for good local clusterings is guided by a cluster quality measure $\kappa$. CF progressively improves each local clustering in a fashion that resembles tree growth in RF. Empirical studies on several real-world datasets under two different performance metrics show that CF compares favorably to its competitors. Theoretical analysis shows that the $\kappa$ criterion grows each local clustering in a desirable way---it is "noise-resistant." A closed-form expression is obtained for the mis-clustering rate of spectral clustering under a perturbation model, which yields new insights into some aspects of spectral clustering.
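The aggregation step, turning many "good local clusterings" into one consensus partition, can be illustrated with a co-association matrix. CF itself aggregates via spectral clustering; the connected-components consensus below is a simpler stand-in, and the toy labelings are invented.

```python
from itertools import combinations

def co_association(labelings, n):
    """Fraction of base clusterings that put each pair of items together."""
    M = [[0.0] * n for _ in range(n)]
    for labels in labelings:
        for i, j in combinations(range(n), 2):
            if labels[i] == labels[j]:
                M[i][j] += 1
                M[j][i] += 1
    total = len(labelings)
    return [[v / total for v in row] for row in M]

def consensus(labelings, n, tau=0.5):
    """Link pairs whose co-association exceeds tau; clusters are the
    connected components of the resulting graph (union-find)."""
    M = co_association(labelings, n)
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for i, j in combinations(range(n), 2):
        if M[i][j] > tau:
            parent[find(i)] = find(j)
    return [find(i) for i in range(n)]

# Three noisy base clusterings of five items; items 0-2 mostly co-occur.
runs = [[0, 0, 0, 1, 1], [0, 0, 1, 1, 1], [2, 2, 2, 3, 3]]
print(consensus(runs, 5))  # e.g. [2, 2, 2, 4, 4]: {0,1,2} vs {3,4}
```

Note that cluster labels are arbitrary identifiers; only the induced partition matters, which is why ensemble methods compare co-membership rather than raw labels.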
Directory of Open Access Journals (Sweden)
Javad Aramideh
2014-11-01
Full Text Available Wireless sensor networks have attracted the attention of researchers because of their abundant applications. One of the important issues in these networks is the limitation on energy consumption, which directly affects the lifetime of the network. One of the main recent approaches to this problem is clustering. This paper presents a clustering method that operates in two stages: in the first stage, candidate cluster-head nodes are identified with a fuzzy method, and in the second stage, the cluster head is selected from among the candidates using cellular learning automata. The advantage of this clustering method is that clustering is based on three main parameters: the number of neighbors, the energy level of each node, and the distance between each node and the sink node, which results in the selection of the best nodes as candidate cluster heads. Network connectivity is also evaluated in the second stage of cluster-head determination. Therefore, more energy is conserved by determining suitable cluster heads and creating balanced clusters in the network, and consequently the lifetime of the network increases.
Comparison of clustering methods for high-dimensional single-cell flow and mass cytometry data.
Weber, Lukas M; Robinson, Mark D
2016-12-01
Recent technological developments in high-dimensional flow cytometry and mass cytometry (CyTOF) have made it possible to detect expression levels of dozens of protein markers in thousands of cells per second, allowing cell populations to be characterized in unprecedented detail. Traditional data analysis by "manual gating" can be inefficient and unreliable in these high-dimensional settings, which has led to the development of a large number of automated analysis methods. Methods designed for unsupervised analysis use specialized clustering algorithms to detect and define cell populations for further downstream analysis. Here, we have performed an up-to-date, extensible performance comparison of clustering methods for high-dimensional flow and mass cytometry data. We evaluated methods using several publicly available data sets from experiments in immunology, containing both major and rare cell populations, with cell population identities from expert manual gating as the reference standard. Several methods performed well, including FlowSOM, X-shift, PhenoGraph, Rclusterpp, and flowMeans. Among these, FlowSOM had extremely fast runtimes, making this method well-suited for interactive, exploratory analysis of large, high-dimensional data sets on a standard laptop or desktop computer. These results extend previously published comparisons by focusing on high-dimensional data and including new methods developed for CyTOF data. R scripts to reproduce all analyses are available from GitHub (https://github.com/lmweber/cytometry-clustering-comparison), and pre-processed data files are available from FlowRepository (FR-FCM-ZZPH), allowing our comparisons to be extended to include new clustering methods and reference data sets. © 2016 The Authors. Cytometry Part A published by Wiley Periodicals, Inc. on behalf of ISAC.
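Comparisons like the one above score each method's partition against the manual-gating reference; one standard external metric for such scoring is the adjusted Rand index (the paper's exact metrics may differ). A from-scratch sketch:

```python
from math import comb
from collections import Counter

def adjusted_rand(truth, pred):
    """Adjusted Rand index between two flat partitions (chance-corrected)."""
    n = len(truth)
    contingency = Counter(zip(truth, pred))
    a = Counter(truth)
    b = Counter(pred)
    sum_ij = sum(comb(v, 2) for v in contingency.values())
    sum_a = sum(comb(v, 2) for v in a.values())
    sum_b = sum(comb(v, 2) for v in b.values())
    expected = sum_a * sum_b / comb(n, 2)
    max_index = (sum_a + sum_b) / 2
    return (sum_ij - expected) / (max_index - expected)

# Label permutations do not matter: identical partitions score 1.0.
print(adjusted_rand([0, 0, 1, 1], [1, 1, 0, 0]))  # 1.0
```

Because it is chance-corrected, the index can go negative for partitions that agree less than random relabelings would, which makes it a stricter yardstick than raw pairwise agreement.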
Blanchard, Philippe
2015-01-01
The second edition of this textbook presents the basic mathematical knowledge and skills that are needed for courses on modern theoretical physics, such as those on quantum mechanics, classical and quantum field theory, and related areas. The authors stress that learning mathematical physics is not a passive process and include numerous detailed proofs, examples, and over 200 exercises, as well as hints linking mathematical concepts and results to the relevant physical concepts and theories. All of the material from the first edition has been updated, and five new chapters have been added on such topics as distributions, Hilbert space operators, and variational methods. The text is divided into three main parts. Part I is a brief introduction to distribution theory, in which elements from the theories of ultradistributions and hyperfunctions are considered in addition to some deeper results for Schwartz distributions, thus providing a comprehensive introduction to the theory of generalized functions. P...
Leukocyte telomere length variation due to DNA extraction method.
Denham, Joshua; Marques, Francine Z; Charchar, Fadi J
2014-12-04
Telomere length is indicative of biological age. Shorter telomeres have been associated with several disease and health states. There are inconsistencies throughout the literature in relative telomere length measured by quantitative PCR (qPCR) when different extraction methods or kits are used. We quantified whole-blood leukocyte telomere length using the telomere to single copy gene (T/S) ratio by qPCR in 20 young (18-25 yrs) men after extracting DNA using three common extraction methods: the Lahiri and Nurnberger (high salt) method, the PureLink Genomic DNA Mini kit (Life Technologies) and the QiaAmp DNA Mini kit (Qiagen). Telomere length differences among DNA extracted with the three methods were assessed by one-way analysis of variance (ANOVA). DNA purity differed between the extraction methods used (P=0.01). Telomere length was affected by the DNA extraction method used (P=0.01). Telomeres extracted using the Lahiri and Nurnberger method (mean T/S ratio: 2.43, range: 1.57-3.02) and the PureLink Genomic DNA Mini Kit (mean T/S ratio: 2.57, range: 2.24-2.80) did not differ (P=0.13). Likewise, QiaAmp- and PureLink-extracted telomeres were not statistically different (P=0.14). The Lahiri-extracted telomeres, however, were significantly shorter than those extracted using the QiaAmp DNA Mini Kit (mean T/S ratio: 2.71, range: 2.32-3.02; P=0.003). DNA purity was associated with telomere length. There are discrepancies between the lengths of leukocyte telomeres extracted from the same individuals according to the DNA extraction method used. DNA purity could be responsible for the discrepancy in telomere length, but this will require validation studies. We recommend using the same DNA extraction kit when quantifying leukocyte telomere length by qPCR, or when comparing different cohorts, to avoid erroneous associations between telomere length and traits of interest.
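A common way to compute the relative T/S ratio from qPCR cycle thresholds is the delta-Ct form, assuming 100% amplification efficiency; this sketch is that textbook formula, not the paper's exact pipeline, and the Ct values are invented.

```python
def ts_ratio(ct_telomere, ct_single_copy):
    """Relative T/S ratio from qPCR cycle thresholds, assuming 100% efficiency:
    T/S = 2^-(Ct_telomere - Ct_single_copy)."""
    return 2.0 ** -(ct_telomere - ct_single_copy)

# Hypothetical triplicate Ct values for one sample:
ct_t = [12.1, 12.0, 12.2]   # telomere assay
ct_s = [13.3, 13.2, 13.4]   # single-copy reference gene
mean = lambda xs: sum(xs) / len(xs)
print(round(ts_ratio(mean(ct_t), mean(ct_s)), 3))  # ~2.297
```

A lower telomere Ct relative to the single-copy gene means more telomeric template, hence a T/S ratio above 1; cross-kit comparisons go wrong precisely because extraction-dependent purity shifts these Ct values.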
Exploring systematic lesson variation : A teaching method in mathematics
Russell, Laurence
2015-01-01
The study presented in this licentiate thesis explores the student perception of systematically varied lessons, a specific lesson design and exploratory method of teaching mathematics in comprehensive school. Two 9th grade classes are taught in the curriculum defined mathematical core content sections of algebra and mathematical relationships and rate of change but by two different methods. One class is taught using a conventional textbook approach and the other with a series of systematicall...
Directory of Open Access Journals (Sweden)
William E Stutz
Full Text Available Genes of the vertebrate major histocompatibility complex (MHC) are of great interest to biologists because of their important role in immunity and disease, and their extremely high levels of genetic diversity. Next generation sequencing (NGS) technologies are quickly becoming the method of choice for high-throughput genotyping of multi-locus templates like MHC in non-model organisms. Previous approaches to genotyping MHC genes using NGS technologies suffer from two problems: (1) a "gray zone" where low-frequency alleles and high-frequency artifacts can be difficult to disentangle, and (2) a similar-sequence problem, where very similar alleles can be difficult to distinguish as two distinct alleles. Here we present a new method for genotyping MHC loci, Stepwise Threshold Clustering (STC), that addresses these problems by taking full advantage of the increase in sequence data provided by NGS technologies. Unlike previous approaches for genotyping MHC with NGS data that attempt to classify individual sequences as alleles or artifacts, STC uses a quasi-Dirichlet clustering algorithm to cluster similar sequences at increasing levels of sequence similarity. By applying frequency- and similarity-based criteria to clusters rather than individual sequences, STC is able to successfully identify clusters of sequences that correspond to individual or similar alleles present in the genomes of individual samples. Furthermore, STC does not require duplicate runs of all samples, increasing the number of samples that can be genotyped in a given project. We show how the STC method works using a single sample library. We then apply STC to 295 threespine stickleback (Gasterosteus aculeatus) samples from four populations and show that neighboring populations differ significantly in MHC allele pools. We show that STC is a reliable, accurate, efficient, and flexible method for genotyping MHC that will be of use to biologists interested in a variety of downstream applications.
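The core idea, clustering reads at increasing similarity thresholds so that sequencing artifacts fall into their parent allele's cluster, can be illustrated with single-linkage clustering over a threshold sweep. STC's quasi-Dirichlet clustering and frequency criteria are not reproduced; the similarity measure and toy reads below are stand-ins.

```python
from difflib import SequenceMatcher
from itertools import combinations

def sim(a, b):
    # Crude similarity stand-in; real pipelines use alignment scores.
    return SequenceMatcher(None, a, b).ratio()

def single_linkage(seqs, t):
    """Union-find single linkage: join reads whose similarity is >= t."""
    parent = list(range(len(seqs)))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for i, j in combinations(range(len(seqs)), 2):
        if sim(seqs[i], seqs[j]) >= t:
            parent[find(i)] = find(j)
    groups = {}
    for i, s in enumerate(seqs):
        groups.setdefault(find(i), []).append(s)
    return list(groups.values())

reads = ["ACGTACGTAC", "ACGTACGTAA", "TTTTGGGGCC", "TTTTGGGGCA"]
# Sweep thresholds from loose to strict, as the stepwise idea suggests:
for t in (0.6, 0.95):
    print(t, len(single_linkage(reads, t)))  # 0.6 -> 2 clusters, 0.95 -> 4
```

At the loose threshold, near-identical reads (true allele plus one-base artifacts) collapse into one cluster per allele; at stricter thresholds the clusters split, and the level at which a cluster persists is informative about whether it is an allele or noise.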
Dogulu, Nilay; Solomatine, Dimitri; Lal Shrestha, Durga
2014-05-01
Within the context of flood forecasting, assessment of predictive uncertainty has become a necessity for most modelling studies in operational hydrology. There are several uncertainty analysis and/or prediction methods available in the literature; however, most of them rely on normality and homoscedasticity assumptions for the model residuals that occur in reproducing the observed data. This study focuses on a statistical method that analyzes model residuals without such assumptions, based on a clustering approach: Uncertainty Estimation based on local Errors and Clustering (UNEEC). The aim of this work is to provide a comprehensive evaluation of the UNEEC method's performance with respect to the clustering approach employed within its methodology. This is done by analyzing the normality of model residuals and comparing uncertainty analysis results (for the 50% and 90% confidence levels) with those obtained from uniform interval and quantile regression methods. An important part of the basis on which the methods are compared is the analysis of data clusters representing different hydrometeorological conditions. The validation measures used are PICP, MPI, ARIL and NUE where necessary. A new validation measure linking the prediction interval to the (hydrological) model quality, the weighted mean prediction interval (WMPI), is also proposed for comparing the methods more effectively. The case study is the Brue catchment, located in the South West of England. A different parametrization of the method than in its previous application in Shrestha and Solomatine (2008) is used, i.e. past error values in addition to discharge and effective rainfall are considered. The results show that UNEEC's notable characteristic, applying clustering to predictor data in which catchment behaviour information is encapsulated, contributes to the increased accuracy of the method's results for varying flow conditions. Besides, classifying data so that extreme flow events are individually
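Of the validation measures named above (PICP, MPI, ARIL, NUE), PICP is the simplest to state: the fraction of observations that fall inside their prediction intervals. A minimal sketch with invented numbers:

```python
def picp(obs, lower, upper):
    """Prediction Interval Coverage Probability: the share of observations
    that fall inside their prediction intervals."""
    inside = sum(l <= o <= u for o, l, u in zip(obs, lower, upper))
    return inside / len(obs)

# For a well-calibrated 90% interval, PICP should be close to 0.9.
print(picp([1.0, 2.0, 3.0, 10.0], [0, 1, 2, 3], [2, 3, 4, 5]))  # 0.75
```

Coverage alone is not enough, which is why it is paired with sharpness measures such as MPI (mean width of the intervals): a very wide interval trivially achieves high PICP.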
Directory of Open Access Journals (Sweden)
C. Ünlü
2013-01-01
Full Text Available A modification of the variational iteration method (VIM) for solving systems of nonlinear fractional-order differential equations is proposed. The fractional derivatives are described in the Caputo sense. The solutions of fractional differential equations (FDE) obtained using the traditional variational iteration method give good approximations in the neighborhood of the initial position. The main advantage of the present method is that it can accelerate the convergence of the iterative approximate solutions relative to the approximate solutions obtained using the traditional variational iteration method. Illustrative examples are presented to show the validity of this modification.
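The correction functional at the heart of the VIM can be stated generically (the notation here is a standard textbook form, not taken from the abstract): for a fractional equation $ {}^{C}D^{\alpha} u(t) = N u(t) + g(t) $ with the Caputo derivative, each iterate corrects the previous one by a weighted residual,

```latex
u_{n+1}(t) = u_n(t) + \int_0^t \lambda(s)
  \left[ {}^{C}D^{\alpha}_{s}\, u_n(s) - N \tilde{u}_n(s) - g(s) \right] \mathrm{d}s ,
```

where $\lambda$ is a Lagrange multiplier identified via variational theory and $\tilde{u}_n$ denotes a restricted variation. The modification proposed in the abstract changes how these iterates are generated so that accuracy extends beyond a neighbourhood of the initial position.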
Multiclass Total Variation Clustering
2014-12-01
[Only fragments of this record's body survive: figure-caption text comparing a solution (before thresholding) and the solution f_4 from LSD [1], each plotted over the fours and nines; a section heading "3.2 Transductive Framework"; and experimental notes stating that the methods of [11] and [3] were run with default parameters, that code from [19] was used to test each NMF algorithm, that each non-recursive algorithm (including LSD [1] and NMFR) was allowed 10000 iterations, and that previously reported results of LSD in particular were significantly improved upon.]
Ibrahim, A. H.; Tiwari, S. N.; Smith, R. E.
1997-01-01
Variational methods (VM) sensitivity analysis is employed to derive the costate (adjoint) equations, the transversality conditions, and the functional sensitivity derivatives. In the derivation of the sensitivity equations, the variational methods use the generalized calculus of variations, in which the variable boundary is considered as the design function. The converged solution of the state equations, together with the converged solution of the costate equations, is integrated along the domain boundary to uniquely determine the functional sensitivity derivatives with respect to the design function. The application of the variational methods to aerodynamic shape optimization problems is demonstrated for internal flow problems in the supersonic Mach number range. The study shows that, while maintaining the accuracy of the functional sensitivity derivatives within a reasonable range for engineering prediction purposes, the variational methods show a substantial gain in computational efficiency, i.e., computer time and memory, when compared with finite-difference sensitivity analysis.
Implementation of the parametric variation method in an EMTP program
DEFF Research Database (Denmark)
Holdyk, Andrzej; Holbøll, Joachim
2013-01-01
of parameters in an electric system. The proposed method allows varying any parameter of a circuit, including the simulation settings and exploits the specific structure of the ATP-EMTP software. In the implementation of the method, Matlab software is used to control the execution of the ATP solver. Two...... examples are shown, for both time domain and frequency domain studies, where the sensitivity of maximum overvoltages at transformer terminals and the admittance resonances in a radial of an offshore wind farm to a change of the collection grid cable parameters is investigated....
Are fragment-based quantum chemistry methods applicable to medium-sized water clusters?
Yuan, Dandan; Shen, Xiaoling; Li, Wei; Li, Shuhua
2016-06-28
Fragment-based quantum chemistry methods are either based on the many-body expansion or the inclusion-exclusion principle. To compare the applicability of these two categories of methods, we have systematically evaluated the performance of the generalized energy based fragmentation (GEBF) method (J. Phys. Chem. A, 2007, 111, 2193) and the electrostatically embedded many-body (EE-MB) method (J. Chem. Theory Comput., 2007, 3, 46) for medium-sized water clusters (H2O)n (n = 10, 20, 30). Our calculations demonstrate that the GEBF method provides uniformly accurate ground-state energies for 10 low-energy isomers of three water clusters under study at a series of theory levels, while the EE-MB method (with one water molecule as a fragment and without using the cutoff distance) shows a poor convergence for (H2O)20 and (H2O)30 when the basis set contains diffuse functions. Our analysis shows that the neglect of the basis set superposition error for each subsystem has little effect on the accuracy of the GEBF method, but leads to much less accurate results for the EE-MB method. The accuracy of the EE-MB method can be dramatically improved by using an appropriate cutoff distance and using two water molecules as a fragment. For (H2O)30, the average deviation of the EE-MB method truncated up to the three-body level calculated using this strategy (relative to the conventional energies) is about 0.003 hartree at the M06-2X/6-311++G** level, while the deviation of the GEBF method with a similar computational cost is less than 0.001 hartree. The GEBF method is demonstrated to be applicable for electronic structure calculations of water clusters at any basis set.
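The many-body-expansion side of the comparison truncates the cluster energy at low orders. On a toy system whose interactions are strictly pairwise, the two-body truncation is exact, which makes for a simple self-check; the fragment "self-energies" and geometry below are invented, not quantum-chemical values.

```python
from itertools import combinations

# Toy "cluster": fragments with fixed self-energies and a pairwise attraction,
# so truncating the many-body expansion at two-body is exact by construction.
def energy(frags):
    e = sum(f["self"] for f in frags)
    for a, b in combinations(frags, 2):
        e += -1.0 / abs(a["x"] - b["x"])
    return e

def mbe2(frags):
    """Two-body many-body expansion: E = sum_i E_i + sum_{i<j} (E_ij - E_i - E_j)."""
    e1 = [energy([f]) for f in frags]
    total = sum(e1)
    for (i, a), (j, b) in combinations(enumerate(frags), 2):
        total += energy([a, b]) - e1[i] - e1[j]
    return total

waters = [{"x": float(i), "self": -76.0} for i in range(4)]
print(abs(mbe2(waters) - energy(waters)) < 1e-9)  # True
```

In real water clusters, three-body and higher polarization terms are nonzero, which is exactly where the EE-MB truncation error studied in the abstract comes from; embedding charges and larger fragments shrink the neglected terms.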
A Three-Step Spatial-Temporal Clustering Method for Human Activity Pattern Analysis
Huang, W.; Li, S.; Xu, S.
2016-06-01
How people move in cities and what they do in various locations at different times form human activity patterns. Human activity patterns play a key role in urban planning, traffic forecasting, public health and safety, emergency response, friend recommendation, and so on. Therefore, scholars from different fields, such as social science, geography, transportation, physics and computer science, have made great efforts in modelling and analysing human activity patterns or human mobility patterns. One of the essential tasks in such studies is to find the locations or places where individuals stay to perform some kind of activity before further activity pattern analysis. In the era of Big Data, the emergence of social media along with wearable devices enables human activity data to be collected more easily and efficiently. Furthermore, the dimensionality of the accessible human activity data has been extended from two or three dimensions (space or space-time) to four dimensions (space, time and semantics). More specifically, not only the location and time at which people stay are collected, but also what people "say" in a location at a time can be obtained. The characteristics of these datasets shed new light on the analysis of human mobility, for which new methodologies should accordingly be developed. Traditional methods such as neural networks, statistics and clustering have been applied to study human activity patterns using geosocial media data. Among them, clustering methods have been widely used to analyse spatiotemporal patterns. However, to the best of our knowledge, few clustering algorithms are specifically developed for handling datasets that contain spatial, temporal and semantic aspects all together. In this work, we propose a three-step human activity clustering method based on space, time and semantics to fill this gap. One-year Twitter data, posted in Toronto, Canada, is used to test the clustering-based method. The results show that the
A Three-Step Spatial-Temporal-Semantic Clustering Method for Human Activity Pattern Analysis
Directory of Open Access Journals (Sweden)
W. Huang
2016-06-01
Full Text Available How people move in cities and what they do in various locations at different times form human activity patterns. Human activity patterns play a key role in urban planning, traffic forecasting, public health and safety, emergency response, friend recommendation, and so on. Therefore, scholars from different fields, such as social science, geography, transportation, physics and computer science, have made great efforts in modelling and analysing human activity patterns or human mobility patterns. One of the essential tasks in such studies is to find the locations or places where individuals stay to perform some kind of activity before further activity pattern analysis. In the era of Big Data, the emergence of social media along with wearable devices enables human activity data to be collected more easily and efficiently. Furthermore, the dimensionality of the accessible human activity data has been extended from two or three dimensions (space or space-time) to four dimensions (space, time and semantics). More specifically, not only the location and time at which people stay are collected, but also what people “say” in a location at a time can be obtained. The characteristics of these datasets shed new light on the analysis of human mobility, for which new methodologies should accordingly be developed. Traditional methods such as neural networks, statistics and clustering have been applied to study human activity patterns using geosocial media data. Among them, clustering methods have been widely used to analyse spatiotemporal patterns. However, to the best of our knowledge, few clustering algorithms are specifically developed for handling datasets that contain spatial, temporal and semantic aspects all together. In this work, we propose a three-step human activity clustering method based on space, time and semantics to fill this gap. One-year Twitter data, posted in Toronto, Canada, is used to test the clustering-based method. The
AptaCluster - A Method to Cluster HT-SELEX Aptamer Pools and Lessons from its Application.
Hoinka, Jan; Berezhnoy, Alexey; Sauna, Zuben E; Gilboa, Eli; Przytycka, Teresa M
2014-01-01
Systematic Evolution of Ligands by EXponential Enrichment (SELEX) is a well-established experimental procedure to identify aptamers, synthetic single-stranded (ribo)nucleic acid molecules that bind to a given molecular target. Recently, new sequencing technologies have revolutionized the SELEX protocol by allowing for deep sequencing of the selection pools after each cycle. The emergence of High Throughput SELEX (HT-SELEX) has opened the field to new computational opportunities and challenges that are yet to be addressed. To aid the analysis of the results of HT-SELEX and to advance the understanding of the selection process itself, we developed AptaCluster. This algorithm allows for efficient clustering of whole HT-SELEX aptamer pools, a task that could not be accomplished with traditional clustering algorithms due to the enormous size of such datasets. We performed HT-SELEX with Interleukin 10 receptor alpha chain (IL-10RA) as the target molecule and used AptaCluster to analyze the resulting sequences. AptaCluster allowed for the first survey of the relationships between sequences in different selection rounds and revealed previously unappreciated properties of the SELEX protocol. As the first tool of this kind, AptaCluster enables novel ways to analyze and to optimize the HT-SELEX procedure. Our AptaCluster algorithm is available as a very fast multiprocessor implementation upon request.
A new method to assign galaxy cluster membership using photometric redshifts
Castignani, Gianluca
2016-01-01
We introduce a new effective strategy to assign group and cluster membership probabilities $P_{mem}$ to galaxies using photometric redshift information. Large dynamical ranges both in halo mass and cosmic time are considered. The method takes the magnitude distribution of both cluster and field galaxies as well as the radial distribution of galaxies in clusters into account using a non-parametric formalism and relies on Bayesian inference to take photometric redshift uncertainties into account. We successfully test the method against 1,208 galaxy clusters within redshifts $z=0.05-2.55$ and masses $10^{13.29-14.80}~M_\\odot$ drawn from wide field simulated galaxy mock catalogs developed for the Euclid mission. Median purity $(55^{+17}_{-15})\\%$ and completeness $(95^{+5}_{-10})\\%$ are reached for galaxies brighter than 0.25$L_\\ast$ within $r_{200}$ of each simulated halo and for a statistical photometric redshift accuracy $\\sigma((z_s-z_p)/(1+z_s))=0.03$. The mean values $\\overline{\\mathsf{p}}=56\\%$ and $\\overl...
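The shape of such a membership estimator can be sketched as a cluster-vs-field likelihood ratio, with the cluster term weighted by photo-z proximity and a projected radial profile. This is an illustrative toy, not the paper's Bayesian non-parametric estimator, and every number and the profile below are invented.

```python
import math

def p_mem(z_phot, sigma_z, z_cluster, r, profile, n_field):
    """Toy membership probability: cluster vs field likelihood ratio, with the
    cluster term weighted by a Gaussian photo-z factor and a radial profile."""
    w_z = math.exp(-0.5 * ((z_phot - z_cluster) / (sigma_z * (1 + z_cluster))) ** 2)
    cluster = w_z * profile(r)
    return cluster / (cluster + n_field)

# Hypothetical projected galaxy-density profile and uniform field density:
profile = lambda r: 1.0 / (r + 0.1)
print(round(p_mem(0.51, 0.03, 0.50, 0.2, profile, n_field=0.5), 2))  # 0.87
```

As expected from the construction, the probability decays for galaxies far from the cluster centre or with photometric redshifts many sigma from the cluster redshift, which is what drives the purity/completeness trade-off quoted in the abstract.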
Directory of Open Access Journals (Sweden)
Deepa Devasenapathy
2015-01-01
Full Text Available Traffic in road networks is increasing to an ever greater extent. Good knowledge of network traffic can minimize congestion using information pertaining to the road network obtained with the aid of communal callers, pavement detectors, and so on. With these methods, low-featured information is generated with respect to the user in the road network. Although the existing schemes obtain urban traffic information, they fail to calculate the energy drain rate of nodes and to strike a balance between overhead and the quality of the routing protocol, which poses a great challenge. Thus, an energy-efficient cluster-based vehicle detection in road networks using the intention numeration method (CVDRN-IN) is developed. Initially, sensor nodes that detect a vehicle are grouped into separate clusters. Next, we approximate the node energy drain rate for each cluster using a polynomial regression function. In addition, the total node energy is estimated by taking the integral over the area. Finally, enhanced data aggregation is performed to reduce the amount of data transmission using a digital signature tree. The experimental performance is evaluated with the Dodgers loop sensor data set from the UCI repository, and the method outperforms existing work in terms of energy consumption, clustering efficiency, and node drain rate.
Pre-crash scenarios at road junctions: A clustering method for car crash data.
Nitsche, Philippe; Thomas, Pete; Stuetz, Rainer; Welsh, Ruth
2017-08-22
Given the recent advancements in autonomous driving functions, one of the main challenges is safe and efficient operation in complex traffic situations such as road junctions. There is a need for comprehensive testing, either in virtual simulation environments or on real-world test tracks. This paper presents a novel data analysis method including the preparation, analysis and visualization of car crash data, to identify the critical pre-crash scenarios at T- and four-legged junctions as a basis for testing the safety of automated driving systems. The presented method employs k-medoids to cluster historical junction crash data into distinct partitions and then applies the association rules algorithm to each cluster to specify the driving scenarios in more detail. The dataset used consists of 1056 junction crashes in the UK, which were exported from the in-depth "On-the-Spot" database. The study resulted in thirteen crash clusters for T-junctions, and six crash clusters for crossroads. Association rules revealed common crash characteristics, which were the basis for the scenario descriptions. The results support existing findings on road junction accidents and provide benchmark situations for safety performance tests in order to reduce the possible number of parameter combinations. Copyright © 2017 Elsevier Ltd. All rights reserved.
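The partitioning step can be sketched with a naive PAM-style swap search. Plain Euclidean distance on toy 2-D points stands in for whatever crash-attribute dissimilarity the study used; this is an illustration of k-medoids, not the authors' implementation:

```python
# Minimal k-medoids (naive swap search): repeatedly try replacing a medoid
# with a non-medoid point and keep any swap that lowers the total cost.

def dist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def total_cost(points, medoids):
    """Sum of each point's distance to its nearest medoid."""
    return sum(min(dist(p, m) for m in medoids) for p in points)

def k_medoids(points, k):
    medoids = list(points[:k])  # simple deterministic initialisation
    while True:
        best, swap = total_cost(points, medoids), None
        for i in range(k):
            for p in points:
                if p in medoids:
                    continue
                trial = medoids[:i] + [p] + medoids[i + 1:]
                c = total_cost(points, trial)
                if c < best:
                    best, swap = c, trial
        if swap is None:
            return medoids
        medoids = swap

pts = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10)]
print(sorted(k_medoids(pts, 2)))  # one medoid per blob
```

Unlike k-means, the cluster representatives are actual data points, which is why k-medoids suits categorical crash records with a custom dissimilarity.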
Directory of Open Access Journals (Sweden)
Cui Jia
2017-05-01
Full Text Available With the purpose of reinforcing the correlation analysis of risk assessment threat factors, a dynamic assessment method for safety risks based on particle filtering is proposed, which takes threat analysis as its core. Based on risk assessment standards, the method selects threat indicators, applies a particle filtering algorithm to calculate the influence weights of the threat indicators, and determines information system risk levels by combining them with state estimation theory. In order to improve the computational efficiency of the particle filtering algorithm, the k-means clustering algorithm is introduced into it: by clustering all particles and operating on each cluster centroid as a representative, the amount of computation is reduced. Empirical results indicate that the method can reasonably capture the mutual dependence and influence among risk elements. Under circumstances of limited information, it provides a scientific basis for formulating a risk management and control strategy.
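The particle-reduction idea reads, in a minimal 1-D sketch (illustrative only, not the paper's filter): cluster the particles with k-means and let each centroid, carrying the summed weight of its members, stand in for its cluster before the filter update.

```python
# Toy k-means reduction of a weighted particle set (1-D).

def kmeans_1d(xs, k, iters=20):
    """Deterministic 1-D k-means with spread-out initial centers."""
    centers = sorted(xs)[::max(1, len(xs) // k)][:k]
    for _ in range(iters):
        groups = [[] for _ in centers]
        for x in xs:
            i = min(range(len(centers)), key=lambda j: abs(x - centers[j]))
            groups[i].append(x)
        centers = [sum(g) / len(g) if g else c for g, c in zip(groups, centers)]
    return centers

def reduce_particles(particles, weights, k):
    """Replace particles by k centroids; each centroid absorbs its members' weight."""
    centers = kmeans_1d(particles, k)
    new_w = [0.0] * len(centers)
    for x, wt in zip(particles, weights):
        i = min(range(len(centers)), key=lambda j: abs(x - centers[j]))
        new_w[i] += wt
    return centers, new_w

parts = [0.0, 0.1, 0.2, 5.0, 5.1, 5.2]
print(reduce_particles(parts, [1 / 6] * 6, 2))  # 2 centroids, weights sum to 1
```

The total weight is conserved, so the reduced set remains a valid (coarser) approximation of the posterior.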
Kellner, Ronny; Hanschke, Christian; Begerow, Dominik
2014-01-01
The maintenance of an intimate interaction between plant-biotrophic fungi and their hosts over evolutionary time involves strong selection and adaptive evolution of virulence-related genes. The highly specialised maize pathogen Ustilago maydis is credited with a high evolutionary capacity to overcome host resistances due to its high rates of sexual recombination, large population sizes and long-distance dispersal. Unlike most studied fungus-plant interactions, the U. maydis - Zea mays pathosystem lacks a typical gene-for-gene interaction. Instead, it deploys a large set of secreted fungal virulence factors that are mostly organised in gene clusters. Their contribution to virulence has been experimentally demonstrated, but their genetic diversity within U. maydis remains poorly understood. Here, we report on the intraspecific diversity of 34 potential virulence factor genes of U. maydis. We analysed their sequence polymorphisms in 17 isolates of U. maydis from Europe, North America and Latin America. We focused on gene cluster 2A, associated with virulence attenuation, cluster 19A, which is crucial for virulence, and the cluster-independent effector gene pep1. Although higher than in four house-keeping genes, the overall levels of intraspecific genetic variation of virulence clusters 2A and 19A, and of pep1, are remarkably low and commensurate with the levels of 14 studied non-virulence genes. In addition, each gene is present in all studied isolates, and synteny in cluster 2A is conserved. Furthermore, 7 out of 34 virulence genes contain either no polymorphisms or only synonymous substitutions among all isolates. However, genetic variation of clusters 2A and 19A each resolves the large-scale population structure of U. maydis, indicating subpopulations with decreased gene flow. Hence, the genetic diversity of these virulence-related genes largely reflects the demographic history of U. maydis populations.
Threshold selection for classification of MR brain images by clustering method
Energy Technology Data Exchange (ETDEWEB)
Moldovanu, Simona [Faculty of Sciences and Environment, Department of Chemistry, Physics and Environment, Dunărea de Jos University of Galaţi, 47 Domnească St., 800008, Romania, Phone: +40 236 460 780 (Romania); Dumitru Moţoc High School, 15 Milcov St., 800509, Galaţi (Romania); Obreja, Cristian; Moraru, Luminita, E-mail: luminita.moraru@ugal.ro [Faculty of Sciences and Environment, Department of Chemistry, Physics and Environment, Dunărea de Jos University of Galaţi, 47 Domnească St., 800008, Romania, Phone: +40 236 460 780 (Romania)
2015-12-07
Given a grey-intensity image, our method detects the optimal threshold for a suitable binarization of MR brain images. In MR brain image processing, the grey levels of pixels belonging to the object are not substantially different from the grey levels belonging to the background. Threshold optimization is an effective tool to separate objects from the background and, further, in classification applications. This paper gives a detailed investigation of the selection of thresholds. Our method does not use the well-known methods for binarization. Instead, we perform a simple threshold optimization which, in turn, allows the best classification of the analyzed images into healthy and multiple sclerosis cases. The dissimilarity (or the distance between classes) has been established using a clustering method based on dendrograms. We tested our method using two classes of images: 20 T2-weighted and 20 proton density (PD)-weighted scans from two healthy subjects and from two patients with multiple sclerosis. For each image and for each threshold, the number of white pixels (i.e., the area of white objects in the binary image) has been determined. These pixel numbers represent the objects in the clustering operation. The following optimum threshold values are obtained: T = 80 for PD images and T = 30 for T2-weighted images. Each threshold clearly separates the clusters belonging to the studied groups, healthy subjects and patients with multiple sclerosis.
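A hedged sketch of the threshold search: the paper's dendrogram-based dissimilarity is replaced here by a simple mean-gap separation score over white-pixel counts, an illustrative simplification on toy 2x2 "images":

```python
# Toy threshold optimization: for each candidate threshold, count white
# pixels per binarized image, then score how well the counts separate the
# two groups. The score is an illustrative stand-in for the paper's
# dendrogram-based class distance.

def white_pixels(image, t):
    """Number of pixels above threshold t."""
    return sum(1 for row in image for px in row if px > t)

def best_threshold(group_a, group_b, candidates):
    def score(t):
        a = [white_pixels(img, t) for img in group_a]
        b = [white_pixels(img, t) for img in group_b]
        spread = max(max(a) - min(a), max(b) - min(b), 1)
        return abs(sum(a) / len(a) - sum(b) / len(b)) / spread
    return max(candidates, key=score)

# Toy data: "lesion" images contain extra mid-intensity pixels.
healthy = [[[10, 20], [30, 200]], [[12, 18], [35, 210]]]
lesion = [[[10, 90], [95, 200]], [[15, 88], [92, 205]]]
print(best_threshold(healthy, lesion, range(0, 255, 10)))
```

The chosen threshold lands between the tissue intensities of the two groups, which is exactly the separation property the study exploits for T = 80 (PD) and T = 30 (T2w).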
Health state evaluation of shield tunnel SHM using fuzzy cluster method
Zhou, Fa; Zhang, Wei; Sun, Ke; Shi, Bin
2015-04-01
Shield tunnel SHM is currently developing rapidly, while massive monitoring data processing and quantitative health grading remain a real challenge, since multiple sensors of different types are employed in an SHM system. This paper addresses the fuzzy cluster method, based on fuzzy equivalence relationships, for the health evaluation of shield tunnel SHM. The method was optimized by exporting the FSV map to automatically generate the threshold value. A new holistic health score (HHS) was proposed and its effectiveness was validated by conducting a pilot test. A case study on the Nanjing Yangtze River Tunnel is presented to apply this method. Three types of indicators, namely soil pressure, pore pressure and steel strain, were used to develop the evaluation set U. The clustering results were verified by analyzing the engineering geological conditions; the applicability and validity of the proposed method were also demonstrated. Besides, the advantage of multi-factor evaluation over a single-factor model is discussed using the proposed HHS. This investigation indicates that the fuzzy cluster method and HHS are capable of characterizing the fuzziness of tunnel health and help to clarify uncertainties in tunnel health evaluation.
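The fuzzy-equivalence machinery can be sketched directly: take a fuzzy similarity matrix, close it under max-min composition to obtain a fuzzy equivalence relation, then cut it at a level lambda to obtain crisp clusters. The matrix below is toy data, not tunnel indicators:

```python
# Fuzzy-equivalence clustering sketch: transitive closure by repeated
# max-min composition, then a lambda-cut to form crisp clusters.

def max_min(r, s):
    """Max-min composition of two fuzzy relations (square matrices)."""
    n = len(r)
    return [[max(min(r[i][k], s[k][j]) for k in range(n)) for j in range(n)]
            for i in range(n)]

def transitive_closure(r):
    """Iterate r -> r o r until a fixed point (a fuzzy equivalence relation)."""
    while True:
        r2 = max_min(r, r)
        if r2 == r:
            return r
        r = r2

def lambda_cut(r, lam):
    """Cluster label of each element: the smallest index related to it at level lam."""
    n = len(r)
    return [min(i for i in range(n) if r[i][j] >= lam) for j in range(n)]

# Toy similarity matrix for 4 monitored quantities (diagonal = 1).
rel = [[1.0, 0.8, 0.2, 0.2],
       [0.8, 1.0, 0.2, 0.2],
       [0.2, 0.2, 1.0, 0.9],
       [0.2, 0.2, 0.9, 1.0]]
tc = transitive_closure(rel)
print(lambda_cut(tc, 0.5))  # two clusters at this cut level
```

Varying lambda sweeps out the dendrogram-like hierarchy the paper uses; the FSV-map step amounts to choosing lambda automatically.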
Iterative and variational homogenization methods for filled elastomers
Goudarzi, Taha
Elastomeric composites have increasingly proved invaluable in commercial technological applications due to their unique mechanical properties, especially their ability to undergo large reversible deformation in response to a variety of stimuli (e.g., mechanical forces, electric and magnetic fields, changes in temperature). Modern advances in organic materials science have revealed that elastomeric composites also hold tremendous potential to enable new high-end technologies, especially as the next generation of sensors and actuators, featuring low cost together with biocompatibility and processability into arbitrary shapes. This potential calls for an in-depth investigation of the macroscopic mechanical/physical behavior of elastomeric composites directly in terms of their microscopic behavior, with the objective of creating the knowledge base needed to guide their bottom-up design. The purpose of this thesis is to generate a mathematical framework to describe, explain, and predict the macroscopic nonlinear elastic behavior of filled elastomers, arguably the most prominent class of elastomeric composites, directly in terms of the behavior of their constituents --- i.e., the elastomeric matrix and the filler particles --- and their microstructure --- i.e., the content, size, shape, and spatial distribution of the filler particles. This is accomplished via a combination of novel iterative and variational homogenization techniques capable of accounting for interphasial phenomena and finite deformations. Exact and approximate analytical solutions for the fundamental nonlinear elastic response of dilute suspensions of rigid spherical particles (either firmly bonded or bonded through finite-size interphases) in Gaussian rubber are first generated. These results are in turn utilized to construct approximate solutions for the nonlinear elastic response of non-Gaussian elastomers filled with a random distribution of rigid particles (again, either firmly
Variational space-time (dis)continuous Galerkin method for linear free surface waves
Ambati, V.R.; van der Vegt, N.F.A.; Bokhove, O.
2008-01-01
A new variational (dis)continuous Galerkin finite element method is presented for the linear free surface gravity water wave equations. We formulate the space-time finite element discretization based on a variational formulation analogous to Luke's variational principle. The linear algebraic system of equations resulting from the finite element discretization is symmetric with a very compact stencil. To build and solve these equations, we have employed the PETSc package in which a block sparse ma...
Renny, Joseph S; Tomasevich, Laura L; Tallmadge, Evan H; Collum, David B
2013-11-11
Applications of the method of continuous variations (MCV or the Method of Job) to problems of interest to organometallic chemists are described. MCV provides qualitative and quantitative insights into the stoichiometries underlying association of m molecules of A and n molecules of B to form A(m)B(n) . Applications to complex ensembles probe associations that form metal clusters and aggregates. Job plots in which reaction rates are monitored provide relative stoichiometries in rate-limiting transition structures. In a specialized variant, ligand- or solvent-dependent reaction rates are dissected into contributions in both the ground states and transition states, which affords insights into the full reaction coordinate from a single Job plot. Gaps in the literature are identified and critiqued.
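The core of a Job plot is easily sketched. Assuming quantitative (strong-binding) formation of A_mB_n, a simplifying assumption not required by the full method, the complex concentration at fixed total concentration peaks at mole fraction m/(m+n):

```python
# Job-plot sketch under a strong-binding assumption: the concentration of
# A_m B_n is limited by whichever component runs out first, so the plot
# peaks at mole fraction x = m / (m + n).

def complex_conc(x, m, n, c=1.0):
    """Concentration of A_m B_n at A mole fraction x, total concentration c."""
    return min(x * c / m, (1 - x) * c / n)

def job_peak(m, n, steps=1000):
    """Mole fraction maximizing the complex concentration."""
    xs = [i / steps for i in range(steps + 1)]
    return max(xs, key=lambda x: complex_conc(x, m, n))

print(job_peak(1, 1))  # peak near 0.5 for a 1:1 complex
print(job_peak(2, 1))  # peak near 2/3 for a 2:1 complex
```

Reading the stoichiometry off the peak position is exactly the qualitative insight the abstract describes; with finite binding constants the peak broadens but stays at m/(m+n).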
An application of the KNND method for detecting nearby open clusters based on Gaia-DR1
Gao, Xin-Hua
2017-05-01
This paper presents a preliminary test of the k-th nearest neighbor distance (KNND) method for detecting nearby open clusters based on Gaia-DR1. We select 38 386 nearby stars (< 100 {pc}) from the Gaia-DR1 catalog, and then use the KNND method to detect overdense regions in three-dimensional space. We find two overdense regions (the Hyades and Coma Berenices (Coma Ber) open clusters), and obtain 57 reliable cluster members. Based on these cluster members, the distances to the Hyades and Coma Ber clusters are determined to be 46.0±0.2 and 83.5±0.3 pc, respectively. Our results demonstrate that the KNND method can be used to detect open clusters based on a large volume of astrometry data.
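The KNND criterion itself is compact: a point in an overdense region has a small distance to its k-th nearest neighbour. A brute-force toy version follows (illustrative; a survey-scale implementation would need spatial indexing, and the distance cut here is arbitrary):

```python
# Brute-force k-th nearest neighbour distance (KNND) overdensity detector.

def kth_nn_dist(points, idx, k):
    """Distance from points[idx] to its k-th nearest neighbour."""
    p = points[idx]
    d = sorted(sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5
               for j, q in enumerate(points) if j != idx)
    return d[k - 1]

def overdense(points, k=3, cut=1.0):
    """Indices of points whose k-th NN distance is below the cut."""
    return [i for i in range(len(points)) if kth_nn_dist(points, i, k) < cut]

field = [(0, 0, 0), (0.1, 0, 0), (0, 0.1, 0), (0.1, 0.1, 0),  # tight clump
         (5, 5, 5), (9, 1, 4), (2, 8, 7)]                      # sparse field
print(overdense(field))  # indices of the clump members
```

On Gaia-scale catalogs the same logic runs over a k-d tree; the clump members survive the cut while isolated field stars do not.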
Barnes, J.; Dekel, A.; Efstathiou, G.; Frenk, C. S.
1985-01-01
The cluster correlation function $\xi_c(r)$ is compared with the particle correlation function $\xi(r)$ in cosmological N-body simulations with a wide range of initial conditions. The experiments include scale-free initial conditions, pancake models with a coherence length in the initial density field, and hybrid models. Three N-body techniques and two cluster-finding algorithms are used. In scale-free models with white noise initial conditions, $\xi_c$ and $\xi$ are essentially identical. In scale-free models with more power on large scales, it is found that the amplitude of $\xi_c$ increases with cluster richness; in this case the clusters give a biased estimate of the particle correlations. In the pancake and hybrid models (with n = 0 or 1), $\xi_c$ is steeper than $\xi$, but the cluster correlation length exceeds that of the points by less than a factor of 2, independent of cluster richness. Thus the high amplitude of $\xi_c$ found in studies of rich clusters of galaxies is inconsistent with white noise and pancake models and may indicate a primordial fluctuation spectrum with substantial power on large scales.
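The correlation-function comparison rests on pair counting. As a hedged illustration (not the paper's N-body pipeline), the sketch below estimates xi(r) by comparing data pair counts with the expectation for a uniform distribution in a cubic box; edge effects are deliberately ignored:

```python
import math

# Toy correlation-function estimate: xi(r) = DD(r) / EE(r) - 1, where EE is
# the expected pair count for a uniform (Poisson) distribution in the box.
# Edge effects are ignored, so this is only a rough illustration.

def pair_counts(points, edges):
    """Histogram of pair separations into the bins given by edges."""
    counts = [0] * (len(edges) - 1)
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            r = math.dist(points[i], points[j])
            for b in range(len(counts)):
                if edges[b] <= r < edges[b + 1]:
                    counts[b] += 1
    return counts

def xi(points, edges, box):
    n = len(points)
    dd = pair_counts(points, edges)
    vol = box ** 3
    out = []
    for b, c in enumerate(dd):
        shell = 4 / 3 * math.pi * (edges[b + 1] ** 3 - edges[b] ** 3)
        expected = n * (n - 1) / 2 * shell / vol
        out.append(c / expected - 1 if expected > 0 else 0.0)
    return out

# Two tight pairs in a 10^3 box: strong clustering at small separations.
pts = [(0, 0, 0), (0.1, 0, 0), (5, 5, 5), (5.1, 5, 5)]
print(xi(pts, [0.0, 0.5], 10.0))  # large positive value in the first bin
```

Comparing such an estimate computed for clusters against the one computed for particles is, in miniature, the $\xi_c$ versus $\xi$ comparison of the abstract.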
[Variation of growth in monozygotic twins analyzed by longitudinal method].
Tzatcheva, L S; Kadanoff, D D; Paskova, D G
1981-01-01
On the basis of a "genetic model" of MZ twins, a longitudinal investigation was carried out over a period of 12 years. A global dimension (body height) and two basic body proportions (the frontal anterior length of the trunk and the length of the lower limb) were traced by the method of percentage deviation between the twin partners. During the whole period of growth the global dimension remains nearly equal between partners, while the body proportions vary within certain limits.
Sarro, L M; Berihuete, A; Bertin, E; Moraux, E; Bouvier, J; Cuillandre, J -C; Barrado, D; Solano, E
2014-01-01
We present a new technique designed to take full advantage of the high dimensionality (photometric, astrometric, temporal) of the DANCe survey to derive self-consistent and robust membership probabilities of the Pleiades cluster. We aim at developing a methodology to infer membership probabilities to the Pleiades cluster from the DANCe multidimensional astro-photometric data set in a consistent way throughout the entire derivation. The determination of the membership probabilities has to be applicable to censored data and must incorporate the measurement uncertainties into the inference procedure. We use Bayes' theorem and a curvilinear forward model for the likelihood of the measurements of cluster members in the colour-magnitude space, to infer posterior membership probabilities. The distribution of the cluster members proper motions and the distribution of contaminants in the full multidimensional astro-photometric space is modelled with a mixture-of-Gaussians likelihood. We analyse several representation ...
Application of Variational Iteration Method to Fractional Hyperbolic Partial Differential Equations
Directory of Open Access Journals (Sweden)
Fadime Dal
2009-01-01
Full Text Available The solution of the fractional hyperbolic partial differential equation is obtained by means of the variational iteration method. Our numerical results are compared with those obtained by the modified Gauss elimination method. They reveal that the technique introduced here is very effective, convenient, and quite accurate for one-dimensional fractional hyperbolic partial differential equations. Application of the variational iteration technique to this problem has shown the rapid convergence of the sequence constructed by this method to the exact solution.
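The variational iteration idea can be made concrete on a simpler test problem. The sketch below is an illustrative assumption, not the fractional equation of the abstract: it applies the correction functional with Lagrange multiplier lambda = -1 to u' + u = 0, u(0) = 1, representing iterates as polynomial coefficient lists.

```python
# Variational iteration for u' + u = 0, u(0) = 1, with multiplier -1:
#   u_{n+1}(t) = u_n(t) - integral_0^t (u_n'(s) + u_n(s)) ds.
# Polynomials are coefficient lists [a0, a1, ...]; the iterates build up
# the Taylor series of exp(-t), showing the rapid convergence of the scheme.

def deriv(p):
    """Derivative of a polynomial coefficient list."""
    return [i * c for i, c in enumerate(p)][1:] or [0.0]

def integ(p):
    """Integral from 0 to t of a polynomial coefficient list."""
    return [0.0] + [c / (i + 1) for i, c in enumerate(p)]

def add(p, q, sign=1.0):
    """p + sign * q for coefficient lists of possibly different length."""
    n = max(len(p), len(q))
    p, q = p + [0.0] * (n - len(p)), q + [0.0] * (n - len(q))
    return [a + sign * b for a, b in zip(p, q)]

def vim_step(u):
    residual = add(deriv(u), u)            # u' + u
    return add(u, integ(residual), -1.0)   # subtract its integral

u = [1.0]                                  # initial guess u_0 = 1
for _ in range(4):
    u = vim_step(u)
print(u)  # coefficients approach 1, -1, 1/2, -1/6, ... (series of exp(-t))
```

Each iteration adds one correct Taylor term, which is the "rapid convergence of the sequence" that the abstract reports for the fractional case.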
Michel, Pierre; Baumstarck, Karine; Boyer, Laurent; Fernandez, Oscar; Flachenecker, Peter; Pelletier, Jean; Loundou, Anderson; Ghattas, Badih; Auquier, Pascal
2017-01-01
To enhance the use of quality of life (QoL) measures in clinical practice, it is pertinent to help clinicians interpret QoL scores. The aim of this study was to define clusters of QoL levels from a specific questionnaire (MusiQoL) for multiple sclerosis (MS) patients using a new method of interpretable clustering based on unsupervised binary trees and to test the validity regarding clinical and functional outcomes. In this international, multicenter, cross-sectional study, patients with MS were classified using a hierarchical top-down method of Clustering using Unsupervised Binary Trees. The clustering tree was built using the 9 dimension scores of the MusiQoL in 2 stages, growing and tree reduction (pruning and joining). A 3-group structure was considered, as follows: "high," "moderate," and "low" QoL levels. Clinical and QoL data were compared between the 3 clusters. A total of 1361 patients were analyzed: 87 were classified with "low," 1173 with "moderate," and 101 with "high" QoL levels. The clustering showed satisfactory properties, including repeatability (using bootstrap) and discriminancy (using factor analysis). The 3 clusters consistently differentiated patients based on sociodemographic and clinical characteristics, and the QoL scores were assessed using a generic questionnaire, ensuring the clinical validity of the clustering. The study suggests that Clustering using Unsupervised Binary Trees is an original, innovative, and relevant classification method to define clusters of QoL levels in MS patients.
Charbonnel, C
2016-01-01
Long-lived stars in GCs exhibit chemical peculiarities with respect to their halo counterparts. In particular, Na-enriched stars are identified as belonging to a second stellar population born from cluster material contaminated by the H-burning ashes of a first stellar population. Their presence and numbers in different locations of the CMDs provide important constraints on the self-enrichment scenarios. In particular, the ratio of Na-poor to Na-rich stars on the AGB has recently been found to vary strongly from cluster to cluster, while it is relatively constant on the RGB. We investigate the impact of both age and metallicity on the theoretical Na spread along the AGB within the framework of the fast-rotating massive stars scenario for GC self-enrichment. (to be continued)
The Evaluation of Lane-Changing Behavior in Urban Traffic Stream with Fuzzy Clustering Method
Directory of Open Access Journals (Sweden)
Ali Abdi
2012-11-01
Full Text Available We present a method for the evaluation of lane-changing behavior in an urban traffic stream with a fuzzy clustering method. Drivers' lane-changing tendencies, which have marked effects on traffic, are regarded as a major variable in traffic engineering. Consequently, various algorithms have been presented, but most lane-changing models are developed from lane information and vehicle movement obtained mainly by image processing, and little attention is given to the characteristics of the driver. Lane changes divide into two types: the first is the compulsory lane change, including changing lanes to turn left or right; the second is the optional lane change, made to improve driving conditions, with overtaking a low-speed car as a typical example. In this study, driver information was obtained through the focused group discussion method so that drivers' personality traits could be taken into consideration. Drivers were then divided into four groups by means of clustering algorithms; of the algorithms considered, fuzzy-type clustering proved the more suitable method for classifying drivers based on lane-changing behavior. Applying this classification to different lane-change scenarios in Iran yielded the following results: the percentages of drivers in the four groups are 17.5%, 35%, 20%, and 27.5%, respectively.
Anda, E.; Chiappe, G.; Busser, C.; Davidovich, M.; Martins, G.; H-Meisner, F.; Dagotto, E.
2008-03-01
A numerical algorithm to study transport properties of highly correlated local structures is proposed. The method, dubbed the Logarithmic Discretization Embedded Cluster Approximation (LDECA), consists of diagonalizing a finite cluster containing the many-body terms of the Hamiltonian and embedding it into the rest of the system, combined with Wilson's ideas of a logarithmic discretization of the representation of the Hamiltonian. LDECA's rapid convergence eliminates finite-size effects commonly present in the embedding cluster approximation (ECA) method. The physics associated with both one embedded dot and a string of two dots side-coupled to leads is discussed. In the former case, our results accurately agree with Bethe ansatz (BA) data, while in the latter, the results are framed in the conceptual background of a two-stage Kondo problem. A diagrammatic expansion provides the theoretical foundation for the method. It is argued that LDECA allows for the study of complex problems that are beyond the reach of currently available numerical methods.
Convolution-variation separation method for efficient modeling of optical lithography.
Liu, Shiyuan; Zhou, Xinjiang; Lv, Wen; Xu, Shuang; Wei, Haiqing
2013-07-01
We propose a general method called convolution-variation separation (CVS) to enable efficient optical imaging calculations without sacrificing accuracy when simulating images for a wide range of process variations. The CVS method is derived from first principles using a series expansion, which consists of a set of predetermined basis functions weighted by a set of predetermined expansion coefficients. The basis functions are independent of the process variations and thus may be computed and stored in advance, while the expansion coefficients depend only on the process variations. Optical image simulations for defocus and aberration variations with applications in robust inverse lithography technology and lens aberration metrology have demonstrated the main concept of the CVS method.
Cluster detection methods applied to the Upper Cape Cod cancer data
Directory of Open Access Journals (Sweden)
Ozonoff David
2005-09-01
Full Text Available Background: A variety of statistical methods have been suggested to assess the degree and/or the location of spatial clustering of disease cases. However, there is relatively little in the literature devoted to comparison and critique of different methods. Most of the available comparative studies rely on simulated data rather than real data sets. Methods: We have chosen three methods currently used for examining spatial disease patterns: the M-statistic of Bonetti and Pagano; the Generalized Additive Model (GAM) method as applied by Webster; and Kulldorff's spatial scan statistic. We apply these statistics to analyze breast cancer data from the Upper Cape Cancer Incidence Study using three different latency assumptions. Results: The three different latency assumptions produced three different spatial patterns of cases and controls. For the 20-year latency assumption, all three methods generally concur. However, for the 15-year latency and no-latency assumptions, the methods produce different results when testing for global clustering. Conclusion: The comparative analysis of real data sets by different statistical methods provides insight into directions for further research. We suggest a research program designed around examining real data sets to guide focused investigation of relevant features using simulated data, for the purpose of understanding how to interpret statistical methods applied to epidemiological data with a spatial component.
Institute of Scientific and Technical Information of China (English)
MO Jia-qi; LIN Yi-hua; WANG Hui
2005-01-01
Atmospheric physics involves very complicated natural phenomena, so the basic models of the sea-air oscillator must be simplified and solved by approximate methods. The variational iteration method is a simple and valid such method. In this paper the coupled system for a sea-air oscillator model of interdecadal climate fluctuations is considered. Firstly, by introducing a set of functions and computing the variations, the Lagrange multipliers are obtained. Then, the generalized expressions of variational iteration are constructed. Finally, by selecting an appropriate initial iteration from the iteration expressions, successive approximations to the solution of the sea-air oscillator model are obtained.
A von Neumann Alternating Method for Finding Common Solutions to Variational Inequalities
Censor, Yair; Reich, Simeon
2012-01-01
Modifying von Neumann's alternating projections algorithm, we obtain an alternating method for solving the recently introduced Common Solutions to Variational Inequalities Problem (CSVIP). For simplicity, we mainly confine our attention to the two-set CSVIP, which entails finding common solutions to two unrelated variational inequalities in Hilbert space.
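Von Neumann's alternating projections, the starting point the authors modify, can be sketched for two concrete convex sets; the unit disc and a half-plane below are illustrative choices, not the sets of the paper:

```python
# Alternating projections onto two closed convex sets: the unit disc and
# the half-plane x >= 0.8. When the intersection is nonempty, the iterates
# converge to a point in it.

def project_disc(p):
    """Nearest point of the unit disc."""
    n = (p[0] ** 2 + p[1] ** 2) ** 0.5
    return p if n <= 1 else (p[0] / n, p[1] / n)

def project_halfplane(p):
    """Nearest point of the half-plane x >= 0.8."""
    return (max(p[0], 0.8), p[1])

def alternate(p, iters=200):
    for _ in range(iters):
        p = project_halfplane(project_disc(p))
    return p

print(alternate((3.0, 2.0)))  # lands in the intersection of the two sets
```

The CSVIP method replaces the metric projections with resolvent-type steps for the two variational inequalities, but the alternation pattern is the same.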
A Variational Iteration Solving Method for a Class of Generalized Boussinesq Equations
Institute of Scientific and Technical Information of China (English)
MO Jia-Qi
2009-01-01
We study a generalized nonlinear Boussinesq equation by introducing a proper functional and constructing the variational iteration sequence with a suitable initial approximation. The approximate solution for the solitary wave of the Boussinesq equation is obtained with the variational iteration method.
Tapp, H.S.; Radonjic, M.; Kemsley, E.K.; Thissen, U.
2012-01-01
Genomics-based technologies produce large amounts of data. To interpret the results and identify the most important variates related to phenotypes of interest, various multivariate regression and variate selection methods are used. Although inspected for statistical performance, the relevance of mul
The Sorting Methods of Support Vector Clustering Based on Boundary Extraction and Category Utility
Directory of Open Access Journals (Sweden)
Chen Weigao
2016-01-01
Full Text Available To address the low accuracy and high computational complexity of classifying unknown radar signals, an unsupervised Support Vector Clustering (SVC) method based on boundary extraction and Category Utility (CU) is studied. Analysis of the SVC principle shows that only the boundary points of a data set contribute to the extracted support vectors. Therefore, to reduce the data set, and with it the computational complexity, the algorithm first extracts the boundary data through local normal vectors. The optimal parameters are then selected using CU. Finally, the different categories are distinguished and the sorting results obtained by Cone Cluster Labelling (CCL) and Depth-First Search (DFS). Comparison of simulation results shows that the proposed method, based on boundary extraction and CU, has good time effectiveness: it not only improves classification accuracy but also greatly reduces computational complexity.
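The boundary-extraction step can be sketched with a crude stand-in for the local-normal-vector idea: a point whose k nearest neighbours lie mostly to one side of it (a long mean-offset vector) is treated as boundary data. This is an illustrative approximation, not the paper's algorithm; the values of k and the threshold are arbitrary choices here.

```python
import numpy as np

def boundary_points(X, k=8, threshold=0.5):
    """Flag points whose k nearest neighbours lie mostly on one side,
    i.e. whose mean neighbour offset (a crude local normal) is long."""
    n = len(X)
    flags = np.zeros(n, dtype=bool)
    for i in range(n):
        d = np.linalg.norm(X - X[i], axis=1)
        nbrs = np.argsort(d)[1:k + 1]           # skip the point itself
        offset = (X[nbrs] - X[i]).mean(axis=0)  # points inward at the boundary
        mean_dist = d[nbrs].mean()
        flags[i] = np.linalg.norm(offset) > threshold * mean_dist
    return flags

# integer-grid sampling of a disc of radius 5: interior points have
# symmetric neighbourhoods, rim points do not
pts = np.array([(x, y) for x in range(-5, 6) for y in range(-5, 6)
                if x * x + y * y <= 25], dtype=float)
flags = boundary_points(pts)
```

Feeding only the flagged points to the SVC stage is what cuts the cost, since the discarded interior points would not have become support vectors anyway.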
THEORETICAL AND NUMERICAL COMPARISON ON DOUBLE-PROJECTION METHODS FOR VARIATIONAL INEQUALITIES
Institute of Scientific and Technical Information of China (English)
WANG Yiju; SUN Wenyu
2003-01-01
Recently, double projection methods for solving variational inequalities have received much attention due to their fewer projection times at each iteration. In this paper, we unify these double projection methods within two unified frameworks, which contain the existing double projection methods as special cases. On the basis of this unification, theoretical and numerical comparison between these double projection methods is presented.
Scalable fault tolerant algorithms for linear-scaling coupled-cluster electronic structure methods.
Energy Technology Data Exchange (ETDEWEB)
Leininger, Matthew L.; Nielsen, Ida Marie B.; Janssen, Curtis L.
2004-10-01
By means of coupled-cluster theory, molecular properties can be computed with an accuracy often exceeding that of experiment. The high-degree polynomial scaling of the coupled-cluster method, however, remains a major obstacle in the accurate theoretical treatment of mainstream chemical problems, despite tremendous progress in computer architectures. Although it has long been recognized that this super-linear scaling is non-physical, the development of efficient reduced-scaling algorithms for massively parallel computers has not been realized. We here present a locally correlated, reduced-scaling, massively parallel coupled-cluster algorithm. A sparse data representation for handling distributed, sparse multidimensional arrays has been implemented along with a set of generalized contraction routines capable of handling such arrays. The parallel implementation entails a coarse-grained parallelization, reducing interprocessor communication and distributing the largest data arrays but replicating as many arrays as possible without introducing memory bottlenecks. The performance of the algorithm is illustrated by several series of runs for glycine chains using a Linux cluster with an InfiniBand interconnect.
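For context, the coupled-cluster wave function referred to here is the exponential ansatz, whose truncations set the polynomial scaling the abstract mentions (conventional CCSD scales as O(N^6) and CCSD(T) as O(N^7) in system size N):

```latex
|\Psi\rangle = e^{\hat{T}}\,|\Phi_0\rangle,
\qquad
\hat{T} = \hat{T}_1 + \hat{T}_2 + \cdots,
\qquad
\hat{T}_2 = \tfrac{1}{4}\sum_{ijab} t_{ij}^{ab}\,
a_a^{\dagger} a_b^{\dagger} a_j a_i .
```

Locally correlated formulations exploit the short-range decay of electron correlation to restrict the amplitude sums, which is what makes the reduced-scaling algorithm of the abstract possible.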
A Method for Traffic Congestion Clustering Judgment Based on Grey Relational Analysis
Directory of Open Access Journals (Sweden)
Yingya Zhang
2016-05-01
Full Text Available Traffic congestion clustering judgment is a fundamental problem in the study of traffic jam warning. However, it is not satisfactory to judge traffic congestion degrees using only vehicle speed. In this paper, we collect traffic flow information with three properties (traffic flow velocity, traffic flow density, and traffic volume) of urban trunk roads, which is used to judge the traffic congestion degree. We first define a grey relational clustering model by leveraging grey relational analysis and rough set theory to mine relationships of multidimensional-attribute information. Then, we propose a grey relational membership degree rank clustering algorithm (GMRC) to determine clustering priority and further analyze the urban traffic congestion degree. Our experimental results show that the average accuracy of the GMRC algorithm is 24.9% greater than that of the K-means algorithm and 30.8% greater than that of the Fuzzy C-Means (FCM) algorithm. Furthermore, we find that our method can be more conducive to dynamic traffic warnings.
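Deng's grey relational coefficient, the quantity underlying grey relational analysis, can be sketched as follows. The reference sequence and the distinguishing coefficient ρ = 0.5 are the conventional textbook choices, and the normalized traffic profiles below are invented for illustration, not the paper's data.

```python
import numpy as np

def grey_relational_grades(reference, comparisons, rho=0.5):
    """Deng's grey relational grade of each comparison sequence against a
    reference sequence; Delta_min/Delta_max are global over all sequences."""
    ref = np.asarray(reference, dtype=float)
    comp = np.asarray(comparisons, dtype=float)
    delta = np.abs(comp - ref)                 # deviation sequences
    dmin, dmax = delta.min(), delta.max()
    if dmax == 0.0:                            # all sequences identical
        return np.ones(len(comp))
    coeff = (dmin + rho * dmax) / (delta + rho * dmax)
    return coeff.mean(axis=1)                  # grade = mean coefficient

# made-up normalized (velocity, density, volume) profiles
ideal_congestion = [1.0, 1.0, 1.0]
roads = [[0.9, 0.8, 1.0],                      # close to the reference
         [0.2, 0.1, 0.3]]                      # far from the reference
grades = grey_relational_grades(ideal_congestion, roads)
```

Ranking roads by their grade against a reference congestion profile is the kind of "membership degree rank" the GMRC algorithm builds on.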
The IMACS Cluster Building Survey. I. Description of the Survey and Analysis Methods
Oemler Jr., Augustus; Dressler, Alan; Gladders, Michael G.; Rigby, Jane R.; Bai, Lei; Kelson, Daniel; Villanueva, Edward; Fritz, Jacopo; Rieke, George; Poggianti, Bianca M.;
2013-01-01
The IMACS Cluster Building Survey uses the wide-field spectroscopic capabilities of the IMACS spectrograph on the 6.5 m Baade Telescope to survey the large-scale environment surrounding rich intermediate-redshift clusters of galaxies. The goal is to understand the processes which may be transforming star-forming field galaxies into quiescent cluster members as groups and individual galaxies fall into the cluster from the surrounding supercluster. This first paper describes the survey: the data taking and reduction methods. We provide new calibrations of star formation rates (SFRs) derived from optical and infrared spectroscopy and photometry. We demonstrate that there is a tight relation between the observed SFR per unit B luminosity and the ratio of the extinctions of the stellar continuum and the optical emission lines. With this, we can obtain accurate extinction-corrected colors of galaxies. Using these colors as well as other spectral measures, we determine new criteria for the existence of ongoing and recent starbursts in galaxies.
Systems and methods for producing metal clusters; functionalized surfaces; and droplets including solvated metal ions
Energy Technology Data Exchange (ETDEWEB)
Cooks, Robert Graham; Li, Anyin; Luo, Qingjie
2017-08-01
The invention generally relates to systems and methods for producing metal clusters; functionalized surfaces; and droplets including solvated metal ions. In certain aspects, the invention provides methods that involve providing a metal and a solvent. The methods additionally involve applying voltage to the solvated metal to thereby produce solvent droplets including ions of the metal containing compound, and directing the solvent droplets including the metal ions to a target. In certain embodiments, once at the target, the metal ions can react directly or catalyze reactions.
The DSUBm approximation scheme for the coupled cluster method and applications to quantum magnets
Directory of Open Access Journals (Sweden)
R.F. Bishop
2009-01-01
Full Text Available A new approximate scheme, DSUBm, is described for the coupled cluster method. We apply it to two well-studied spin-1/2 Heisenberg antiferromagnetic spin-lattice models, namely the XXZ and the XY models on the square lattice in two dimensions. Results are obtained in each case for the ground-state energy, the sublattice magnetization and the quantum critical point. They are in good agreement with those from such alternative methods as spin-wave theory, series expansions, exact diagonalization techniques, quantum Monte Carlo methods and those from the CCM using the LSUBm scheme.
Oh, Sang Young; Lee, Minho; Seo, Joon Beom; Kim, Namkug; Lee, Sang Min; Lee, Jae Seung; Oh, Yeon Mok
2017-01-01
A novel approach of size-based emphysema clustering has been developed, and the size variation and collapse of holes in emphysema clusters are evaluated at inspiratory and expiratory computed tomography (CT). Thirty patients were visually evaluated for the size-based emphysema clustering technique, and a total of 72 patients were evaluated for analysis of emphysema-hole collapse in this study. A new approach for the size differentiation of emphysema holes was developed using the length scale, Gaussian low-pass filtering, and an iteration approach. The volumetric CT results of the emphysema patients were then analyzed using the new method, and deformable registration was carried out between inspiratory and expiratory CT. Blind visual evaluations of EI by two readers had significant correlations with the classification using the size-based emphysema clustering method (r-values of reader 1: 0.186, 0.890, 0.915, and 0.941; reader 2: 0.540, 0.667, 0.919, and 0.942). The results of collapse of emphysema holes using deformable registration were compared with the pulmonary function test (PFT) parameters using Pearson's correlation test. The mean extents of the low-attenuation area (LAA) and of the emphysema-hole size classes were analyzed; size variation and collapse of emphysema holes may be useful for understanding the dynamic collapse of emphysema and its functional relation.
Local Correlation Calculations Using Standard and Renormalized Coupled-Cluster Methods
Piecuch, Piotr; Li, Wei; Gour, Jeffrey
2009-03-01
Local correlation variants of the coupled-cluster (CC) theory with singles and doubles (CCSD) and CC methods with singles, doubles, and non-iterative triples, including CCSD(T) and the completely renormalized CR-CC(2,3) approach, are developed. The main idea of the resulting CIM-CCSD, CIM-CCSD(T), and CIM-CR-CC(2,3) methods is the observation that the total correlation energy of a large system can be obtained as a sum of contributions from the occupied orthonormal localized molecular orbitals and their respective occupied and unoccupied orbital domains. The CIM-CCSD, CIM-CCSD(T), and CIM-CR-CC(2,3) algorithms are characterized by linear scaling of the total CPU time with the system size and embarrassingly parallel execution. By comparing the results of the canonical and CIM-CC calculations for normal alkanes and water clusters, it is demonstrated that the CIM-CCSD, CIM-CCSD(T), and CIM-CR-CC(2,3) approaches recover the corresponding canonical CC correlation energies to within 0.1% or so, while offering savings in the computer effort by orders of magnitude. By examining the dissociation of dodecane into C11H23 and CH3 and several lowest-energy structures of the (H2O)n clusters, it is shown that the CIM-CC methods accurately reproduce the relative energetics of the corresponding canonical CC calculations.
Lü, Xiaozhou; Xie, Kai; Xue, Dongfeng; Zhang, Feng; Qi, Liang; Tao, Yebo; Li, Teng; Bao, Weimin; Wang, Songlin; Li, Xiaoping; Chen, Renjie
2017-10-01
Micro-capacitance sensors are widely applied in industrial applications for the measurement of mechanical variations. The measurement accuracy of micro-capacitance sensors is highly dependent on the capacitance measurement circuit. To overcome the inability of commonly used methods to directly measure capacitance variation and deal with the conflict between the measurement range and accuracy, this paper presents a capacitance variation measurement method which is able to measure the output capacitance variation (relative value) of the micro-capacitance sensor with a continuously variable measuring range. We present the principles and analyze the non-ideal factors affecting this method. To implement the method, we developed a capacitance variation measurement circuit and carried out experiments to test the circuit. The result shows that the circuit is able to measure a capacitance variation range of 0–700 pF linearly with a maximum relative accuracy of 0.05% and a capacitance range of 0–2 nF (with a baseline capacitance of 1 nF) with a constant resolution of 0.03%. The circuit is proposed as a new method to measure capacitance and is expected to have applications in micro-capacitance sensors for measuring capacitance variation with a continuously variable measuring range.
Pivot method for global optimization: A study of structures and phase changes in water clusters
Nigra, Pablo Fernando
In this thesis, we have carried out a study of water clusters. The research work has been developed in two stages. In the first stage, we have investigated the properties of water clusters at zero temperature by means of global optimization. The clusters were modeled by using two well-known pairwise potentials with distinct characteristics. One is the Matsuoka-Clementi-Yoshimine (MCY) potential, an ab initio fitted function based on a rigid-molecule model; the other is the Stillinger-Rahman (SR) potential, an empirical function based on a flexible-molecule model. The algorithm used for the global optimization of the clusters was the pivot method, which was developed in our group. The results have shown that, under certain conditions, the pivot method may yield optimized structures which are related to one another in such a way that they seem to form structural families. The structures in a family can be thought of as formed from the aggregation of single units. The particular types of structures we have found are quasi-one-dimensional tubes built from stacking cyclic units such as tetramers, pentamers, and hexamers. The binding energies of these tubes form sequences that span smooth curves with clear asymptotic behavior; therefore, we have also studied the sequences by applying the Bulirsch-Stoer (BST) algorithm to accelerate convergence. In the second stage of the research work, we have studied the thermodynamic properties of a typical water cluster at finite temperatures. The selected cluster was the water octamer, which exhibits a definite solid-liquid phase change. The water octamer also has several low-lying energy cubic structures with large energetic barriers that cause ergodicity breaking in regular Monte Carlo simulations. For that reason we have simulated the octamer using parallel tempering Monte Carlo combined with the multihistogram method. This has permitted us to calculate the heat capacity from very low temperatures up to T = 230 K. We
Directory of Open Access Journals (Sweden)
Reilly John J
2005-06-01
Full Text Available Abstract Background: Advances in miniature sensor technology have led to the development of wearable systems that allow one to monitor motor activities in the field. A variety of classifiers have been proposed in the past, but little has been done toward developing systematic approaches to assess the feasibility of discriminating the motor tasks of interest and to guide the choice of the classifier architecture. Methods: A technique is introduced to address this problem according to a hierarchical framework, and its use is demonstrated for the application of detecting motor activities in patients with chronic obstructive pulmonary disease (COPD) undergoing pulmonary rehabilitation. Accelerometers were used to collect data for 10 different classes of activity. Features were extracted to capture essential properties of the data set and reduce the dimensionality of the problem at hand. Cluster measures were utilized to find natural groupings in the data set and then construct a hierarchy of the relationships between clusters to guide the process of merging clusters that are too similar to distinguish reliably. The hierarchy provides a means to assess whether the benefits of merging for classifier performance outweigh the loss of resolution incurred through merging. Results: Analysis of the COPD data set demonstrated that motor tasks related to ambulation can be reliably discriminated from tasks performed in a seated position with the legs in motion or stationary, using two features derived from one accelerometer. Classifying motor tasks within the category of activities related to ambulation requires more advanced techniques. While in certain cases all the tasks could be accurately classified, in others merging clusters associated with different motor tasks was necessary. When merging clusters, it was found that the proposed method could lead to more than 12% improvement in classifier accuracy while retaining resolution of 4 tasks. Conclusion: Hierarchical
Sherrill, Delsey M; Moy, Marilyn L; Reilly, John J; Bonato, Paolo
2005-01-01
Background: Advances in miniature sensor technology have led to the development of wearable systems that allow one to monitor motor activities in the field. A variety of classifiers have been proposed in the past, but little has been done toward developing systematic approaches to assess the feasibility of discriminating the motor tasks of interest and to guide the choice of the classifier architecture. Methods: A technique is introduced to address this problem according to a hierarchical framework, and its use is demonstrated for the application of detecting motor activities in patients with chronic obstructive pulmonary disease (COPD) undergoing pulmonary rehabilitation. Accelerometers were used to collect data for 10 different classes of activity. Features were extracted to capture essential properties of the data set and reduce the dimensionality of the problem at hand. Cluster measures were utilized to find natural groupings in the data set and then construct a hierarchy of the relationships between clusters to guide the process of merging clusters that are too similar to distinguish reliably. The hierarchy provides a means to assess whether the benefits of merging for classifier performance outweigh the loss of resolution incurred through merging. Results: Analysis of the COPD data set demonstrated that motor tasks related to ambulation can be reliably discriminated from tasks performed in a seated position with the legs in motion or stationary, using two features derived from one accelerometer. Classifying motor tasks within the category of activities related to ambulation requires more advanced techniques. While in certain cases all the tasks could be accurately classified, in others merging clusters associated with different motor tasks was necessary. When merging clusters, it was found that the proposed method could lead to more than 12% improvement in classifier accuracy while retaining resolution of 4 tasks. Conclusion: Hierarchical clustering methods are relevant
27 CFR 22.22 - Alternate methods or procedures; and emergency variations from requirements.
2010-04-01
... OF TAX-FREE ALCOHOL Administrative Provisions Authorities § 22.22 Alternate methods or procedures..., conditions or limitations set forth in the approval, authority for the variation from requirements...
27 CFR 20.22 - Alternate methods or procedures; and emergency variations from requirements.
2010-04-01
... OF DENATURED ALCOHOL AND RUM Administrative Provisions Authorities § 20.22 Alternate methods or... forth in the approval, authority for the variation from requirements is automatically terminated and...
Directory of Open Access Journals (Sweden)
Ali Sevimlican
2010-01-01
Full Text Available He's variational iteration method (VIM) is used for solving space- and time-fractional telegraph equations. Numerical examples are presented in this paper. The obtained results show that VIM is effective and convenient.
A Novel Method for Analyzing and Interpreting GCM Results Using Clustered Climate Regimes
Hoffman, F. M.; Hargrove, W. W.; Erickson, D. J.; Oglesby, R. J.
2003-12-01
A high-performance parallel clustering algorithm has been developed for analyzing and comparing climate model results and long time series climate measurements. Designed to identify biases and detect trends in disparate climate change data sets, this tool combines and simplifies large temporally-varying data sets from atmospheric measurements to multi-century climate model output. Clustering is a statistical procedure which provides an objective method for grouping multivariate conditions into a set of states or regimes within a given level of statistical tolerance. The groups or clusters--statistically defined across space and through time--possess centroids which represent the synoptic conditions of observations or model results contained in each state no matter when or where they occurred. The clustering technique was applied to five business-as-usual (BAU) scenarios from the Parallel Climate Model (PCM). Three fields of significance (surface temperature, precipitation, and soil moisture) were clustered from 2000 through 2098. Our analysis shows an increase in spatial area occupied by the cluster or climate regime which typifies desert regions (i.e., an increase in desertification) and a decrease in the spatial area occupied by the climate regime typifying winter-time high latitude perma-frost regions. The same analysis subsequently applied to the ensemble as a whole demonstrates the consistency and variability of trends from each ensemble member. The patterns of cluster changes can be used to show predicted variability in climate on global and continental scales. Novel three-dimensional phase space representations of these climate regimes show the portion of this phase space occupied by the land surface at all points in space and time. Any single spot on the globe will exist in one of these climate regimes at any single point in time, and by incrementing time, that same spot will trace out a trajectory or orbit among these climate regimes in phase space. When a
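The regime-clustering idea can be sketched with a plain Lloyd's-algorithm k-means over (surface temperature, precipitation, soil moisture) state vectors. The two synthetic "regimes" below are invented for illustration, and the deterministic farthest-point seeding is a simplification; it is not the initialization of the parallel implementation described above.

```python
import numpy as np

def kmeans(X, k, iters=50):
    """Lloyd's algorithm with deterministic farthest-point seeding."""
    centroids = [X[0]]
    for _ in range(k - 1):
        d = np.min([np.linalg.norm(X - c, axis=1) for c in centroids], axis=0)
        centroids.append(X[d.argmax()])        # farthest point from current seeds
    centroids = np.array(centroids)
    for _ in range(iters):
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)          # assign each state to a regime
        for j in range(k):
            if (labels == j).any():
                centroids[j] = X[labels == j].mean(axis=0)
    return labels, centroids

# synthetic grid-cell states: (temperature, precipitation, soil moisture)
rng = np.random.default_rng(1)
dry = rng.normal([30.0, 0.5, 0.1], 0.2, size=(50, 3))   # invented desert-like regime
wet = rng.normal([10.0, 5.0, 0.8], 0.2, size=(50, 3))   # invented moist regime
X = np.vstack([dry, wet])
labels, cents = kmeans(X, 2)
```

Tracking how many grid cells fall into each centroid over time is the simplest version of the desertification trend analysis described in the abstract.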
Directory of Open Access Journals (Sweden)
Muhammad Aslam Noor
2008-01-01
Full Text Available We suggest and analyze a technique combining the variational iteration method and the homotopy perturbation method. This method is called the variational homotopy perturbation method (VHPM). We use this method for solving higher-dimensional initial boundary value problems with variable coefficients. The developed algorithm is quite efficient and is practically well suited for use in these problems. The proposed scheme finds the solution without any discretization, transformation, or restrictive assumptions and avoids round-off errors. Several examples are given to check the reliability and efficiency of the proposed technique.
Partial differential equations with variable exponents variational methods and qualitative analysis
Radulescu, Vicentiu D
2015-01-01
Partial Differential Equations with Variable Exponents: Variational Methods and Qualitative Analysis provides researchers and graduate students with a thorough introduction to the theory of nonlinear partial differential equations (PDEs) with a variable exponent, particularly those of elliptic type. The book presents the most important variational methods for elliptic PDEs described by nonhomogeneous differential operators and containing one or more power-type nonlinearities with a variable exponent. The authors give a systematic treatment of the basic mathematical theory and constructive meth
Variational Iteration Method for Singular Perturbation Initial Value Problems with Delays
Directory of Open Access Journals (Sweden)
Yongxiang Zhao
2014-01-01
Full Text Available The variational iteration method (VIM) is applied to solve singular perturbation initial value problems with delays (SPIVPDs). Some convergence results of VIM for solving SPIVPDs are given. The obtained sequence of iterates is based on the use of general Lagrange multipliers; the multipliers in the functionals can be identified by variational theory. Moreover, the numerical examples show the efficiency of the method.
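The general Lagrange-multiplier construction referred to here can be written out explicitly. For the linear test operator u′ + a u (an illustrative special case, not the delay equations of the paper), the stationarity conditions identify the multiplier in closed form:

```latex
u_{n+1}(t) = u_n(t) + \int_0^{t} \lambda(s)\,
  \bigl[u_n'(s) + a\,u_n(s) - g(s)\bigr]\,\mathrm{d}s,
\qquad
\delta u_{n+1} = 0 \;\Rightarrow\;
\lambda'(s) = a\,\lambda(s),\quad 1 + \lambda(t) = 0,
\quad\Longrightarrow\quad
\lambda(s) = -\,e^{\,a(s-t)} .
```

Substituting the identified multiplier back into the correction functional yields the iteration whose convergence is analyzed in the paper.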
Combining Few-Body Cluster Structures with Many-Body Mean-Field Methods
Hove, D.; Garrido, E.; Jensen, A. S.; Sarriguren, P.; Fynbo, H. O. U.; Fedorov, D. V.; Zinner, N. T.
2017-03-01
Nuclear cluster physics implicitly assumes a distinction between groups of degrees of freedom, that is, the (frozen) intrinsic and the (explicitly treated) relative cluster motion. We formulate a realistic and practical method to describe the coupled motion of these two sets of degrees of freedom. We derive a coupled set of differential equations for the system using the phenomenologically adjusted effective in-medium Skyrme type of nucleon-nucleon interaction. We select a two-nucleon plus core system where the mean-field approximation corresponding to the Skyrme interaction is used for the core. A hyperspherical adiabatic expansion of the Faddeev equations is used for the relative cluster motion. We specifically compare both the structure and the decay mechanism found from the traditional three-body calculations with the result using the new boundary condition provided by the full microscopic structure at small distance. The extended Hilbert space guarantees an improved wave function compared to both mean-field and three-body solutions. We investigate the structures and decay mechanism of ^{22}C (^{20}C+n+n). In conclusion, we have developed a method combining nuclear few- and many-body techniques without losing the descriptive power of each approximation at medium-to-large and small distances, respectively. The coupled set of equations is solved self-consistently, and both structure and dynamic evolution are studied.
Clustering as an EDA Method: The Case of Pedestrian Directional Flow Behavior
Directory of Open Access Journals (Sweden)
Ma. Regina E. Estuar
2010-01-01
Full Text Available Given the data of pedestrian trajectories in NTXY format, three clustering methods, K-Means, Expectation Maximization (EM), and Affinity Propagation, were utilized as Exploratory Data Analysis (EDA) to find the pattern of pedestrian directional flow behavior. The analysis begins without a prior notion of the structure of the pattern and consequently infers the structure of the directional flow pattern. Significant similarities in patterns for both individual and instantaneous walking angles based on the EDA method are reported and explained in case studies.
Singlet-triplet gaps in substituted carbenes predicted from block-correlated coupled cluster method
Institute of Scientific and Technical Information of China (English)
2008-01-01
The block correlated coupled cluster (BCCC) method, with the complete active-space self-consistent-field (CASSCF) reference function, has been applied to investigating the singlet-triplet gaps in several substituted carbenes including four halocarbenes (CHCl, CF2, CCl2, and CBr2) and two hydroxycarbenes (CHOH and C(OH)2). A comparison of our results with the experimental data and other theoretical estimates shows that the present approach can provide quantitative descriptions for all the studied carbenes. It is demonstrated that the CAS-BCCC method is a promising theoretical tool for calculating the electronic structures of diradicals.
Masiero, Joseph R; Bauer, J M; Grav, T; Nugent, C R; Stevenson, R
2013-01-01
Using albedos from WISE/NEOWISE to separate distinct albedo groups within the Main Belt asteroids, we apply the Hierarchical Clustering Method to these subpopulations and identify dynamically associated clusters of asteroids. While this survey is limited to the ~35% of known Main Belt asteroids that were detected by NEOWISE, we present the families linked from these objects as higher confidence associations than can be obtained from dynamical linking alone. We find that over one-third of the observed population of the Main Belt is represented in the high-confidence cores of dynamical families. The albedo distribution of family members differs significantly from the albedo distribution of background objects in the same region of the Main Belt; however, interpretation of this effect is complicated by the incomplete identification of lower-confidence family members. In total we link 38,298 asteroids into 76 distinct families. This work represents a critical step necessary to debias the albedo and size distributio...
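The Hierarchical Clustering Method links asteroids whose mutual distance in proper-element space falls below a cutoff. The sketch below uses the commonly quoted Zappalà-style metric coefficients (5/4, 2, 2) with a single-linkage union-find pass; the metric normalization and units (GM = 1 rather than the m/s velocity metric used in practice), the cutoff, and the toy orbital elements are all illustrative assumptions, not values from this survey.

```python
import numpy as np

def hcm_distance(el1, el2, GM=1.0):
    """Zappala-style metric over proper elements (a, e, sin i)."""
    a1, e1, si1 = el1
    a2, e2, si2 = el2
    na = np.sqrt(GM / a1)                  # orbital-velocity scale at body 1
    return na * np.sqrt(1.25 * ((a1 - a2) / a1) ** 2
                        + 2.0 * (e1 - e2) ** 2
                        + 2.0 * (si1 - si2) ** 2)

def hcm_families(elements, cutoff):
    """Single-linkage clustering: union any pair closer than the cutoff."""
    n = len(elements)
    parent = list(range(n))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path compression
            i = parent[i]
        return i
    for i in range(n):
        for j in range(i + 1, n):
            if hcm_distance(elements[i], elements[j]) < cutoff:
                ri, rj = find(i), find(j)
                if ri != rj:
                    parent[ri] = rj
    return [find(i) for i in range(n)]

# invented proper elements: a tight three-member "family" plus a distant pair
els = [(2.300, 0.100, 0.100), (2.301, 0.101, 0.100), (2.299, 0.100, 0.101),
       (3.100, 0.200, 0.250), (3.101, 0.201, 0.250)]
labels = hcm_families(els, cutoff=0.01)
```

Intersecting the linked groups with the WISE/NEOWISE albedo groups is the extra filtering step that raises the confidence of the family cores in this work.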
A hybrid SPH/N-body method for star cluster simulations
Hubber, D A; Smith, R; Goodwin, S P
2013-01-01
We present a new hybrid Smoothed Particle Hydrodynamics (SPH)/N-body method for modelling the collisional stellar dynamics of young clusters in a live gas background. By deriving the equations of motion from Lagrangian mechanics we obtain a formally conservative combined SPH/N-body scheme. The SPH gas particles are integrated with a 2nd order Leapfrog, and the stars with a 4th order Hermite scheme. Our new approach is intended to bridge the divide between the detailed, but expensive, full hydrodynamical simulations of star formation, and pure N-body simulations of gas-free star clusters. We have implemented this hybrid approach in the SPH code SEREN (Hubber et al. 2011) and perform a series of simple tests to demonstrate the fidelity of the algorithm and its conservation properties. We investigate and present resolution criteria to adequately resolve the density field and to prevent strong numerical scattering effects. Future developments will include a more sophisticated treatment of binaries.
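The 2nd-order leapfrog used for the SPH particles can be sketched in its kick-drift-kick form (the stars would use the 4th-order Hermite instead). The softened direct-sum gravity and the circular two-body setup below are illustrative choices, not SEREN's implementation.

```python
import numpy as np

def accel(pos, mass, G=1.0, eps=1e-3):
    """Direct-sum softened gravitational acceleration on every particle."""
    acc = np.zeros_like(pos)
    for i in range(len(pos)):
        dr = pos - pos[i]
        r2 = (dr ** 2).sum(axis=1) + eps ** 2
        r2[i] = 1.0                               # dummy to avoid 0-division
        inv_r3 = r2 ** -1.5
        inv_r3[i] = 0.0                           # no self-force
        acc[i] = G * (mass[:, None] * dr * inv_r3[:, None]).sum(axis=0)
    return acc

def leapfrog_kdk(pos, vel, mass, dt, steps):
    """2nd-order kick-drift-kick leapfrog; symplectic for a fixed dt."""
    acc = accel(pos, mass)
    for _ in range(steps):
        vel += 0.5 * dt * acc                     # half kick
        pos += dt * vel                           # drift
        acc = accel(pos, mass)
        vel += 0.5 * dt * acc                     # half kick
    return pos, vel

# circular two-body test: unit central mass, massless satellite at r = 1, v = 1
pos = np.array([[0.0, 0.0], [1.0, 0.0]])
vel = np.array([[0.0, 0.0], [0.0, 1.0]])
mass = np.array([1.0, 0.0])
dt = 0.001
pos, vel = leapfrog_kdk(pos, vel, mass, dt, steps=int(round(2 * np.pi / dt)))
```

After one orbital period the satellite returns close to its starting point, the kind of long-term conservation behavior that motivates using a symplectic integrator for the gas particles.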
A novel intrusion detection method based on OCSVM and K-means recursive clustering
Directory of Open Access Journals (Sweden)
Leandros A. Maglaras
2015-01-01
Full Text Available In this paper we present an intrusion detection module capable of detecting malicious network traffic in a SCADA (Supervisory Control and Data Acquisition) system, based on the combination of a One-Class Support Vector Machine (OCSVM) with RBF kernel and recursive k-means clustering. Important parameters of OCSVM, such as the Gaussian width σ and the parameter ν, affect the performance of the classifier. Tuning of these parameters is of great importance in order to avoid false positives and overfitting. The combination of OCSVM with recursive k-means clustering leads the proposed intrusion detection module to distinguish real alarms from possible attacks regardless of the values of the parameters σ and ν, making it ideal for real-time intrusion detection mechanisms for SCADA systems. Extensive simulations have been conducted with datasets extracted from small and medium sized HTB SCADA testbeds, in order to compare the accuracy, false alarm rate and execution time against the baseline OCSVM method.
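The recursive k-means half of such a scheme can be sketched as a bisecting procedure: keep splitting a cluster of alarm feature vectors in two until each cluster is tight. The spread criterion, the deterministic 2-means seeding, and the synthetic alarm vectors below are illustrative assumptions; they are not the paper's exact recursion or its SCADA features.

```python
import numpy as np

def two_means(X, iters=20):
    """Deterministic 2-means: seed with X[0] and the point farthest from it."""
    c = np.array([X[0], X[np.linalg.norm(X - X[0], axis=1).argmax()]])
    for _ in range(iters):
        labels = np.linalg.norm(X[:, None] - c[None], axis=2).argmin(axis=1)
        for j in (0, 1):
            if (labels == j).any():
                c[j] = X[labels == j].mean(axis=0)
    return labels

def recursive_kmeans(X, spread_tol, depth=0, max_depth=5):
    """Split a set of alarm vectors until each cluster is tight enough."""
    spread = np.linalg.norm(X - X.mean(axis=0), axis=1).mean()
    if spread <= spread_tol or len(X) < 4 or depth >= max_depth:
        return [X]
    labels = two_means(X)
    if len(set(labels.tolist())) < 2:          # degenerate split, stop
        return [X]
    return (recursive_kmeans(X[labels == 0], spread_tol, depth + 1, max_depth) +
            recursive_kmeans(X[labels == 1], spread_tol, depth + 1, max_depth))

# made-up alarm feature vectors: two well-separated behaviors
rng = np.random.default_rng(0)
blob_a = rng.normal([0.0, 0.0], 0.1, size=(10, 2))
blob_b = rng.normal([10.0, 10.0], 0.1, size=(10, 2))
clusters = recursive_kmeans(np.vstack([blob_a, blob_b]), spread_tol=1.0)
```

Inspecting which of the resulting tight clusters the OCSVM-flagged traffic falls into is the step that separates real alarms from parameter-induced false positives.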
Directory of Open Access Journals (Sweden)
Ping Zhang
2014-01-01
Full Text Available The variational multiscale element-free Galerkin method is extended to simulate Stokes flow problems in a circular cavity as an irregular geometry. The method combines Hughes's variational multiscale formulation with the element-free Galerkin method, and thus inherits the advantages of both variational multiscale and meshless methods. Meanwhile, a simple technique is adopted to impose the essential boundary conditions, which makes it easy to solve problems with complex domains. Finally, two examples are solved and good results are obtained as compared with analytical and numerical solutions, which demonstrates that the proposed method is an attractive approach for solving incompressible fluid flow problems in terms of accuracy and stability, even for complex irregular boundaries.