WorldWideScience

Sample records for cluster variation method

  1. The Cluster Variation Method: A Primer for Neuroscientists.

    Science.gov (United States)

    Maren, Alianna J

    2016-09-30

    Effective Brain-Computer Interfaces (BCIs) require that the time-varying activation patterns of 2-D neural ensembles be modelled. The cluster variation method (CVM) offers a means for the characterization of 2-D local pattern distributions. This paper provides neuroscientists and BCI researchers with a CVM tutorial that will help them to understand how the CVM statistical thermodynamics formulation can model 2-D pattern distributions expressing structural and functional dynamics in the brain. The premise is that local-in-time free energy minimization works alongside neural connectivity adaptation, supporting the development and stabilization of consistent stimulus-specific responsive activation patterns. The equilibrium distribution of local patterns, or configuration variables, is defined in terms of a single interaction enthalpy parameter (h) for the case of an equiprobable distribution of bistate (neural/neural ensemble) units. Thus, either one enthalpy parameter (or two, for the case of non-equiprobable distribution) yields equilibrium configuration variable values. Modeling 2-D neural activation distribution patterns with the representational layer of a computational engine, we can thus correlate variational free energy minimization with specific configuration variable distributions. The CVM triplet configuration variables also map well to the notion of an M = 3 functional motif. This paper addresses the special case of an equiprobable unit distribution, for which an analytic solution can be found.
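
    The sketch below (not taken from the paper) illustrates the kind of free-energy minimization the CVM performs, using only the lowest-order pair (Bethe) approximation for an equiprobable two-state system; the coordination number, the enthalpy parameter h, and the temperature are illustrative assumptions, and the paper's triplet and square configuration variables are not reproduced.

```python
# Minimal sketch (not the paper's CVM): equilibrium configuration variables for an
# equiprobable two-state system in the pair (Bethe) approximation, the lowest-order
# member of the cluster-variation hierarchy. All names and values are assumptions.
import numpy as np
from scipy.optimize import minimize_scalar

z = 4          # lattice coordination number (2-D square grid, assumed)
h = 1.0        # interaction enthalpy penalising unlike (A-B) nearest-neighbour pairs
T = 1.5        # temperature in units where k_B = 1

def free_energy(y_ab):
    """Free energy per unit as a function of the unlike-pair probability y_ab.

    Unit probabilities are fixed at x_A = x_B = 0.5 (equiprobable case), so the
    like-pair probabilities follow from consistency: y_aa = y_bb = 0.5 - y_ab.
    """
    y_aa = 0.5 - y_ab
    pairs = np.array([y_aa, y_ab, y_ab, y_aa])           # AA, AB, BA, BB
    sites = np.array([0.5, 0.5])
    energy = (z / 2) * h * 2 * y_ab                      # bonds per site times unlike-pair enthalpy
    s_pair = -np.sum(pairs * np.log(np.clip(pairs, 1e-12, None)))
    s_site = -np.sum(sites * np.log(sites))
    entropy = (z / 2) * s_pair - (z - 1) * s_site        # Bethe (pair) entropy per site
    return energy - T * entropy

res = minimize_scalar(free_energy, bounds=(1e-6, 0.5 - 1e-6), method="bounded")
print(f"equilibrium unlike-pair probability y_AB = {res.x:.4f}")
print(f"free energy per unit F = {res.fun:.4f}")
```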

  2. The Cluster Variation Method: A Primer for Neuroscientists

    Directory of Open Access Journals (Sweden)

    Alianna J. Maren

    2016-09-01

    Full Text Available Effective Brain–Computer Interfaces (BCIs) require that the time-varying activation patterns of 2-D neural ensembles be modelled. The cluster variation method (CVM) offers a means for the characterization of 2-D local pattern distributions. This paper provides neuroscientists and BCI researchers with a CVM tutorial that will help them to understand how the CVM statistical thermodynamics formulation can model 2-D pattern distributions expressing structural and functional dynamics in the brain. The premise is that local-in-time free energy minimization works alongside neural connectivity adaptation, supporting the development and stabilization of consistent stimulus-specific responsive activation patterns. The equilibrium distribution of local patterns, or configuration variables, is defined in terms of a single interaction enthalpy parameter (h) for the case of an equiprobable distribution of bistate (neural/neural ensemble) units. Thus, either one enthalpy parameter (or two, for the case of non-equiprobable distribution) yields equilibrium configuration variable values. Modeling 2-D neural activation distribution patterns with the representational layer of a computational engine, we can thus correlate variational free energy minimization with specific configuration variable distributions. The CVM triplet configuration variables also map well to the notion of an M = 3 functional motif. This paper addresses the special case of an equiprobable unit distribution, for which an analytic solution can be found.

  3. A model-based clustering method to detect infectious disease transmission outbreaks from sequence variation.

    Directory of Open Access Journals (Sweden)

    Rosemary M McCloskey

    2017-11-01

    Full Text Available Clustering infections by genetic similarity is a popular technique for identifying potential outbreaks of infectious disease, in part because sequences are now routinely collected for clinical management of many infections. A diverse set of nonparametric clustering methods has been developed for this purpose. These methods are generally intuitive, rapid to compute, and readily scale with large data sets. However, we have found that nonparametric clustering methods can be biased towards identifying clusters of diagnosis (where individuals are sampled sooner post-infection) rather than the clusters of rapid transmission that are meant to be potential foci for public health efforts. We develop a fundamentally new approach to genetic clustering based on fitting a Markov-modulated Poisson process (MMPP), which represents the evolution of transmission rates along the tree relating different infections. We evaluated this model-based method alongside five nonparametric clustering methods using both simulated and actual HIV sequence data sets. For simulated clusters of rapid transmission, the MMPP clustering method obtained higher mean sensitivity (85%) and specificity (91%) than the nonparametric methods. When we applied these clustering methods to published sequences from a study of HIV-1 genetic clusters in Seattle, USA, we found that the MMPP method categorized about half (46%) as many individuals to clusters compared to the other methods. Furthermore, the mean internal branch lengths that approximate transmission rates were significantly shorter in clusters extracted using MMPP, but not by other methods. We determined that the computing time for the MMPP method scaled linearly with the size of trees, requiring about 30 seconds for a tree of 1,000 tips and about 20 minutes for 50,000 tips on a single computer. This new approach to genetic clustering has significant implications for the application of pathogen sequence analysis to public health, where

  4. Application of the cluster variation method to ordering in an interstitital solid solution

    DEFF Research Database (Denmark)

    Pekelharing, Marjon I.; Böttger, Amarante; Somers, Marcel A. J.

    1999-01-01

    The tetrahedron approximation of the cluster variation method (CVM) was applied to describe the ordering on the fcc interstitial sublattice of gamma-Fe[N] and gamma'-Fe4N1-x. A Lennard-Jones potential was used to describe the dominantly strain-induced interactions, caused by misfitting of the N atoms in the interstitial octahedral sites. The gamma-Fe[N]/gamma'-Fe4N1-x miscibility gap, short range ordering (SRO), and long-range ordering (LRO) of nitrogen in gamma-Fe[N] and gamma'-Fe4N1-x, respectively, and lattice parameters of gamma and gamma' were calculated. For the first time, N distribution parameters, as calculated by CVM, were compared directly to Mössbauer data for specific surroundings of Fe atoms.

  5. Cycle-Based Cluster Variational Method for Direct and Inverse Inference

    Science.gov (United States)

    Furtlehner, Cyril; Decelle, Aurélien

    2016-08-01

    Large scale inference problems of practical interest can often be addressed with the help of Markov random fields. This requires, in principle, solving two related problems: the first is to find offline the parameters of the MRF from empirical data (inverse problem); the second (direct problem) is to set up the inference algorithm to make it as precise, robust and efficient as possible. In this work we address both the direct and inverse problem with mean-field methods of statistical physics, going beyond the Bethe approximation and the associated belief propagation algorithm. We elaborate on the idea that loop corrections to belief propagation can be dealt with in a systematic way on pairwise Markov random fields, by using the elements of a cycle basis to define regions in a generalized belief propagation setting. For the direct problem, the region graph is specified in such a way as to avoid feedback loops as much as possible by selecting a minimal cycle basis. Following this line we are led to propose a two-level algorithm, where a belief propagation algorithm is run alternately at the level of each cycle and at the inter-region level. Next we observe that the inverse problem can be addressed region by region independently, with one small inverse problem per region to be solved. It turns out that each elementary inverse problem on the loop geometry can be solved efficiently. In particular, in the random Ising context we propose two complementary methods based respectively on fixed point equations and on a one-parameter log likelihood function minimization. Numerical experiments confirm the effectiveness of this approach both for the direct and inverse MRF inference. Heterogeneous problems of size up to 10^5 are addressed in a reasonable computational time, notably with better convergence properties than ordinary belief propagation.
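
    As context for the cycle-based construction, the sketch below implements ordinary (Bethe-level) loopy belief propagation on a tiny pairwise binary MRF, the baseline that the region-based method generalises; the graph, potentials, and iteration count are illustrative assumptions, not the authors' setup.

```python
# Minimal sketch: ordinary loopy belief propagation on a small pairwise binary MRF.
# This is the Bethe-level baseline that the cycle-based region method goes beyond;
# the graph, potentials and iteration count are illustrative assumptions.
import numpy as np

edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]        # a 4-node graph with a chord
n, states = 4, 2
unary = np.exp(0.3 * np.array([[+1, -1], [-1, +1], [+1, -1], [-1, +1]]))  # psi_i(x_i)
coupling = np.exp(0.5 * np.array([[+1, -1], [-1, +1]]))                   # psi_ij(x_i, x_j)

# messages m[(i, j)][x_j], one per directed edge, initialised uniformly
messages = {(i, j): np.ones(states) / states for a, b in edges for i, j in [(a, b), (b, a)]}
neighbours = {i: [j for a, b in edges for i2, j in [(a, b), (b, a)] if i2 == i] for i in range(n)}

for _ in range(100):
    new = {}
    for (i, j), _m in messages.items():
        prod = unary[i].copy()
        for k in neighbours[i]:
            if k != j:
                prod *= messages[(k, i)]
        msg = coupling.T @ prod          # sum over x_i of psi_ij(x_i, x_j) * prod(x_i)
        new[(i, j)] = msg / msg.sum()
    messages = new

for i in range(n):
    belief = unary[i].copy()
    for k in neighbours[i]:
        belief *= messages[(k, i)]
    belief /= belief.sum()
    print(f"node {i}: marginal estimate {belief.round(3)}")
```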

  6. Variation in verb cluster interruption

    NARCIS (Netherlands)

    Hendriks, Lotte

    2014-01-01

    Except for finite verbs in main clauses, verbs in Standard Dutch cluster together in a clause-final position. In certain Dutch dialects, non-verbal material can occur within this verb cluster (Verhasselt 1961; Koelmans 1965, among many others). These dialects vary with respect to which types of

  7. Variational cluster perturbation theory for Bose-Hubbard models

    International Nuclear Information System (INIS)

    Koller, W; Dupuis, N

    2006-01-01

    We discuss the application of the variational cluster perturbation theory (VCPT) to the Mott-insulator-to-superfluid transition in the Bose-Hubbard model. We show how the VCPT can be formulated in such a way that it gives a translation invariant excitation spectrum, free of spurious gaps, despite the fact that it formally breaks translation invariance. The phase diagram and the single-particle Green function in the insulating phase are obtained for one-dimensional systems. When the chemical potential of the cluster is taken as a variational parameter, the VCPT reproduces the dimensional dependence of the phase diagram even for one-site clusters. We find a good quantitative agreement with the results of the density-matrix renormalization group when the number of sites in the cluster becomes of order 10. The extension of the method to the superfluid phase is discussed.

  8. Document clustering methods, document cluster label disambiguation methods, document clustering apparatuses, and articles of manufacture

    Science.gov (United States)

    Sanfilippo, Antonio [Richland, WA; Calapristi, Augustin J [West Richland, WA; Crow, Vernon L [Richland, WA; Hetzler, Elizabeth G [Kennewick, WA; Turner, Alan E [Kennewick, WA

    2009-12-22

    Document clustering methods, document cluster label disambiguation methods, document clustering apparatuses, and articles of manufacture are described. In one aspect, a document clustering method includes providing a document set comprising a plurality of documents, providing a cluster comprising a subset of the documents of the document set, using a plurality of terms of the documents, providing a cluster label indicative of subject matter content of the documents of the cluster, wherein the cluster label comprises a plurality of word senses, and selecting one of the word senses of the cluster label.
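
    The sketch below is not the patented method described in this record; it only illustrates the underlying idea of clustering a document set and deriving cluster-label candidates from the highest-weight centroid terms. The word-sense disambiguation step is omitted, and the documents and cluster count are illustrative assumptions.

```python
# Minimal sketch (not the patented method): cluster documents with TF-IDF + k-means
# and label each cluster with its top centroid terms. Documents and the number of
# clusters are illustrative assumptions.
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

docs = [
    "nuclear reactor core neutron flux",
    "reactor fuel assembly neutron transport",
    "galaxy cluster redshift survey",
    "galaxy redshift clustering measurements",
]
vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(docs)

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
terms = vectorizer.get_feature_names_out()

for c in range(km.n_clusters):
    top = km.cluster_centers_[c].argsort()[::-1][:3]       # highest-weight terms in the centroid
    print(f"cluster {c} label candidates: {[terms[t] for t in top]}")
```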

  9. Conformable variational iteration method

    Directory of Open Access Journals (Sweden)

    Omer Acan

    2017-02-01

    Full Text Available In this study, we introduce the conformable variational iteration method based on the newly defined fractional derivative called the conformable fractional derivative. This new method is applied to two fractional-order ordinary differential equations. To illustrate the solutions obtained with this method, linear homogeneous and non-linear non-homogeneous fractional ordinary differential equations are selected. The results obtained are compared with the exact solutions, and their graphs are plotted to demonstrate the efficiency and accuracy of the method.
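
    For reference, the conformable fractional derivative the method builds on is commonly defined as below (standard definition from the literature, not quoted from this record), together with the generic variational-iteration correction functional; here λ(s) is the Lagrange multiplier and L, N are the linear and nonlinear operators of the equation.

```latex
% Conformable fractional derivative of order \alpha \in (0,1] (standard definition),
% and the generic variational-iteration correction functional it is combined with.
T_\alpha(f)(t) \;=\; \lim_{\varepsilon \to 0}
  \frac{f\!\left(t + \varepsilon\, t^{1-\alpha}\right) - f(t)}{\varepsilon},
  \qquad t > 0,
\qquad\text{so that } T_\alpha(f)(t) = t^{1-\alpha} f'(t) \text{ for differentiable } f.

u_{n+1}(t) \;=\; u_n(t) + \int_0^{t} \lambda(s)\,
  \bigl[L u_n(s) + N \tilde{u}_n(s) - g(s)\bigr]\, \mathrm{d}s .
```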

  10. Semi-supervised clustering methods.

    Science.gov (United States)

    Bair, Eric

    2013-01-01

    Cluster analysis methods seek to partition a data set into homogeneous subgroups. They are useful in a wide variety of applications, including document processing and modern genetics. Conventional clustering methods are unsupervised, meaning that there is no outcome variable nor is anything known about the relationship between the observations in the data set. In many situations, however, information about the clusters is available in addition to the values of the features. For example, the cluster labels of some observations may be known, or certain observations may be known to belong to the same cluster. In other cases, one may wish to identify clusters that are associated with a particular outcome variable. This review describes several clustering algorithms (known as "semi-supervised clustering" methods) that can be applied in these situations. The majority of these methods are modifications of the popular k-means clustering method, and several of them will be described in detail. A brief description of some other semi-supervised clustering algorithms is also provided.
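
    One common semi-supervised variant covered by such reviews is seeded k-means, where centroids are initialised from a few labelled observations before ordinary k-means runs on all the data. The sketch below illustrates this idea; the data, labels, and cluster count are synthetic assumptions, not an example from the review.

```python
# Minimal sketch of seeded k-means, a common semi-supervised clustering variant:
# centroids are initialised from a few labelled ("seed") observations, then ordinary
# k-means runs on the full data set. Data and labels are synthetic and illustrative.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(5, 1, (50, 2))])   # two blobs
seed_idx = np.array([0, 1, 50, 51])                                      # a few labelled points
seed_lab = np.array([0, 0, 1, 1])

# one centroid per known cluster, averaged over its seed members
init = np.vstack([X[seed_idx[seed_lab == c]].mean(axis=0) for c in np.unique(seed_lab)])

km = KMeans(n_clusters=2, init=init, n_init=1).fit(X)
print("cluster sizes:", np.bincount(km.labels_))
```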

  11. Splines and variational methods

    CERN Document Server

    Prenter, P M

    2008-01-01

    One of the clearest available introductions to variational methods, this text requires only a minimal background in calculus and linear algebra. Its self-contained treatment explains the application of theoretic notions to the kinds of physical problems that engineers regularly encounter. The text's first half concerns approximation theoretic notions, exploring the theory and computation of one- and two-dimensional polynomial and other spline functions. Later chapters examine variational methods in the solution of operator equations, focusing on boundary value problems in one and two dimensions.

  12. Semi-supervised clustering methods

    Science.gov (United States)

    Bair, Eric

    2013-01-01

    Cluster analysis methods seek to partition a data set into homogeneous subgroups. They are useful in a wide variety of applications, including document processing and modern genetics. Conventional clustering methods are unsupervised, meaning that there is no outcome variable nor is anything known about the relationship between the observations in the data set. In many situations, however, information about the clusters is available in addition to the values of the features. For example, the cluster labels of some observations may be known, or certain observations may be known to belong to the same cluster. In other cases, one may wish to identify clusters that are associated with a particular outcome variable. This review describes several clustering algorithms (known as “semi-supervised clustering” methods) that can be applied in these situations. The majority of these methods are modifications of the popular k-means clustering method, and several of them will be described in detail. A brief description of some other semi-supervised clustering algorithms is also provided. PMID:24729830

  13. Multicritical phase diagrams of the ferromagnetic spin-3/2 Blume-Emery-Griffiths model with repulsive biquadratic coupling including metastable phases: The cluster variation method and the path probability method with the point distribution

    Energy Technology Data Exchange (ETDEWEB)

    Keskin, Mustafa [Department of Physics, Erciyes University, 38039 Kayseri (Turkey)], E-mail: keskin@erciyes.edu.tr; Canko, Osman [Department of Physics, Erciyes University, 38039 Kayseri (Turkey)

    2008-01-15

    We study the thermal variations of the ferromagnetic spin-3/2 Blume-Emery-Griffiths (BEG) model with repulsive biquadratic coupling by using the lowest approximation of the cluster variation method (LACVM) in the absence and presence of the external magnetic field. We obtain metastable and unstable branches of the order parameters besides the stable branches, and the phase transitions of these branches are investigated extensively. The classification of the stable, metastable and unstable states is made by comparing the free energy values of these states. We also study the dynamics of the model by using the path probability method (PPM) with the point distribution in order to make sure that we find and define the metastable and unstable branches of the order parameters completely and correctly. We present the metastable phase diagrams in addition to the equilibrium phase diagrams in the (kT/J, K/J) and (kT/J, D/J) planes. It is found that the metastable phase diagrams always exist at low temperatures, which is consistent with experimental and theoretical works.

  14. Multicritical phase diagrams of the ferromagnetic spin-3/2 Blume-Emery-Griffiths model with repulsive biquadratic coupling including metastable phases: The cluster variation method and the path probability method with the point distribution

    International Nuclear Information System (INIS)

    Keskin, Mustafa; Canko, Osman

    2008-01-01

    We study the thermal variations of the ferromagnetic spin-3/2 Blume-Emery-Griffiths (BEG) model with repulsive biquadratic coupling by using the lowest approximation of the cluster variation method (LACVM) in the absence and presence of the external magnetic field. We obtain metastable and unstable branches of the order parameters besides the stable branches, and the phase transitions of these branches are investigated extensively. The classification of the stable, metastable and unstable states is made by comparing the free energy values of these states. We also study the dynamics of the model by using the path probability method (PPM) with the point distribution in order to make sure that we find and define the metastable and unstable branches of the order parameters completely and correctly. We present the metastable phase diagrams in addition to the equilibrium phase diagrams in the (kT/J, K/J) and (kT/J, D/J) planes. It is found that the metastable phase diagrams always exist at low temperatures, which is consistent with experimental and theoretical works.

  15. Integration K-Means Clustering Method and Elbow Method For Identification of The Best Customer Profile Cluster

    Science.gov (United States)

    Syakur, M. A.; Khotimah, B. K.; Rochman, E. M. S.; Satoto, B. D.

    2018-04-01

    Clustering is a data mining technique used to analyse data that exhibit variation in large quantities. Clustering is the process of grouping data into clusters, so that each cluster contains data that are as similar as possible to one another and as different as possible from the objects in other clusters. SMEs in Indonesia have a variety of customers, but they lack a mapping of these customers, so they do not know which customers are loyal and which are not. Customer mapping is a grouping of customer profiles to facilitate the analysis and policy of SMEs in the production of goods, especially batik sales. The researchers use a combination of the K-Means method with the elbow method to improve the efficiency and effectiveness of K-Means when processing large amounts of data. K-Means clustering is a local optimization method that is sensitive to the selection of the starting positions of the cluster centroids, so a bad choice of starting centroids causes the K-Means clustering algorithm to produce high errors and poor cluster results. The K-Means algorithm also has problems in determining the best number of clusters, so the elbow method is used to find the best number of clusters for K-Means. The results show that the elbow method produces the same best number of clusters K for different amounts of data, and this value is then used as the default for the characteristic (profiling) process in the case study. Evaluation of K-Means based on SSE values over the 500 batik-visitor data yields the best clusters: the SSE shows a sharp decrease at K = 3, so K = 3 is taken as the cut-off point and the best number of clusters.
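
    The sketch below illustrates the elbow heuristic described in the record: run k-means over a range of K, record the sum of squared errors (SSE), and pick the K where the decrease flattens out. Synthetic blobs stand in for the customer profiles used in the study; the value ranges are assumptions.

```python
# Minimal sketch of the elbow heuristic: run k-means for a range of K, record the sum
# of squared errors (SSE, sklearn's inertia_), and look for the K where the decrease
# flattens out. Synthetic blobs stand in for the customer-profile data of the study.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=500, centers=3, cluster_std=1.0, random_state=0)

sse = {}
for k in range(1, 9):
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
    sse[k] = km.inertia_

for k in sorted(sse):
    drop = sse[k - 1] - sse[k] if k > 1 else float("nan")
    print(f"K = {k}: SSE = {sse[k]:10.1f}   decrease from K-1 = {drop:10.1f}")
# The "elbow" is the K where the decrease becomes small relative to the earlier drops
# (K = 3 for these synthetic blobs, matching the cut-off reported in the record).
```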

  16. Clustering methods for the optimization of atomic cluster structure

    Science.gov (United States)

    Bagattini, Francesco; Schoen, Fabio; Tigli, Luca

    2018-04-01

    In this paper, we propose a revised global optimization method and apply it to large scale cluster conformation problems. In the 1990s, the so-called clustering methods were considered among the most efficient general purpose global optimization techniques; however, their usage has quickly declined in recent years, mainly due to the inherent difficulties of clustering approaches in large dimensional spaces. Inspired by the machine learning literature, we redesigned clustering methods in order to deal with molecular structures in a reduced feature space. Our aim is to show that by suitably choosing a good set of geometrical features coupled with a very efficient descent method, an effective optimization tool is obtained which is capable of finding, with a very high success rate, all known putative optima for medium size clusters without any prior information, both for Lennard-Jones and Morse potentials. The main result is that, beyond being a reliable approach, the proposed method, based on the idea of starting a computationally expensive deep local search only when it seems worth doing so, is capable of saving a huge amount of searches with respect to an analogous algorithm which does not employ a clustering phase. In this paper, we are not claiming the superiority of the proposed method compared to specific, refined, state-of-the-art procedures, but rather indicating a quite straightforward way to save local searches by means of a clustering scheme working in a reduced variable space, which might prove useful when included in many modern methods.
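
    The sketch below shows only the inner ingredient of such methods: a single local descent minimisation of the Lennard-Jones energy of a small atomic cluster. The clustering-in-feature-space machinery that decides when to launch these searches is not reproduced, and the cluster size and starting geometry are assumptions.

```python
# Minimal sketch of the inner ingredient: a local (descent) minimisation of the
# Lennard-Jones energy of a small atomic cluster with SciPy's BFGS. The feature-space
# clustering that schedules such searches is not reproduced here.
import numpy as np
from scipy.optimize import minimize

def lj_energy(flat_coords):
    """Total Lennard-Jones energy (epsilon = sigma = 1) of an N-atom configuration."""
    x = flat_coords.reshape(-1, 3)
    e = 0.0
    for i in range(len(x)):
        for j in range(i + 1, len(x)):
            r2 = np.sum((x[i] - x[j]) ** 2)
            inv6 = 1.0 / r2 ** 3
            e += 4.0 * (inv6 ** 2 - inv6)
    return e

rng = np.random.default_rng(1)
n_atoms = 7
x0 = rng.uniform(-1.5, 1.5, size=3 * n_atoms)           # random starting geometry

res = minimize(lj_energy, x0, method="BFGS")
print(f"local minimum energy for {n_atoms} atoms: {res.fun:.4f}")
# The known putative global minimum for LJ7 is about -16.505; a single descent from a
# random start often lands in a higher local minimum, which is why global strategies
# such as the clustering approach above are needed.
```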

  17. Variational linear algebraic equations method

    International Nuclear Information System (INIS)

    Moiseiwitsch, B.L.

    1982-01-01

    A modification of the linear algebraic equations method is described which ensures a variational bound on the phaseshifts for potentials having a definite sign at all points. The method is illustrated by the elastic scattering of s-wave electrons by the static field of atomic hydrogen. (author)

  18. Statistical analysis of activation and reaction energies with quasi-variational coupled-cluster theory

    Science.gov (United States)

    Black, Joshua A.; Knowles, Peter J.

    2018-06-01

    The performance of quasi-variational coupled-cluster (QV) theory applied to the calculation of activation and reaction energies has been investigated. A statistical analysis of results obtained for six different sets of reactions has been carried out, and the results have been compared to those from standard single-reference methods. In general, the QV methods lead to increased activation energies and larger absolute reaction energies compared to those obtained with traditional coupled-cluster theory.

  19. Variational methods in molecular modeling

    CERN Document Server

    2017-01-01

    This book presents tutorial overviews for many applications of variational methods to molecular modeling. Topics discussed include the Gibbs-Bogoliubov-Feynman variational principle, square-gradient models, classical density functional theories, self-consistent-field theories, phase-field methods, Ginzburg-Landau and Helfrich-type phenomenological models, dynamical density functional theory, and variational Monte Carlo methods. Illustrative examples are given to facilitate understanding of the basic concepts and quantitative prediction of the properties and rich behavior of diverse many-body systems ranging from inhomogeneous fluids, electrolytes and ionic liquids in micropores, colloidal dispersions, liquid crystals, polymer blends, lipid membranes, microemulsions, magnetic materials and high-temperature superconductors. All chapters are written by leading experts in the field and illustrated with tutorial examples for their practical applications to specific subjects. With emphasis placed on physical unders...

  20. The smart cluster method. Adaptive earthquake cluster identification and analysis in strong seismic regions

    Science.gov (United States)

    Schaefer, Andreas M.; Daniell, James E.; Wenzel, Friedemann

    2017-07-01

    Earthquake clustering is an essential part of almost any statistical analysis of spatial and temporal properties of seismic activity. The nature of earthquake clusters and subsequent declustering of earthquake catalogues plays a crucial role in determining the magnitude-dependent earthquake return period and its respective spatial variation for probabilistic seismic hazard assessment. This study introduces the Smart Cluster Method (SCM), a new methodology to identify earthquake clusters, which uses an adaptive point process for spatio-temporal cluster identification. It utilises the magnitude-dependent spatio-temporal earthquake density to adjust the search properties, subsequently analyses the identified clusters to determine directional variation and adjusts its search space with respect to directional properties. In the case of rapid subsequent ruptures like the 1992 Landers sequence or the 2010-2011 Darfield-Christchurch sequence, a reclassification procedure is applied to disassemble subsequent ruptures using near-field searches, nearest neighbour classification and temporal splitting. The method is capable of identifying and classifying earthquake clusters in space and time. It has been tested and validated using earthquake data from California and New Zealand. A total of more than 1500 clusters have been found in both regions since 1980 with Mmin = 2.0. Utilising the knowledge of cluster classification, the method has been adjusted to provide an earthquake declustering algorithm, which has been compared to existing methods. Its performance is comparable to established methodologies. The analysis of earthquake clustering statistics leads to various new and updated correlation functions, e.g. for ratios between mainshock and strongest aftershock and general aftershock activity metrics.

  1. Linking numbers and variational method

    International Nuclear Information System (INIS)

    Oda, I.; Yahikozawa, S.

    1989-09-01

    The ordinary and generalized linking numbers for two surfaces of dimension p and n-p-1 in an n dimensional manifold are derived. We use a variational method based on the properties of topological quantum field theory in order to derive them. (author). 13 refs, 2 figs

  2. An Experimental Observation of Axial Variation of Average Size of Methane Clusters in a Gas Jet

    International Nuclear Information System (INIS)

    Ji-Feng, Han; Chao-Wen, Yang; Jing-Wei, Miao; Jian-Feng, Lu; Meng, Liu; Xiao-Bing, Luo; Mian-Gong, Shi

    2010-01-01

    Axial variation of average size of methane clusters in a gas jet produced by supersonic expansion of methane through a cylindrical nozzle of 0.8 mm in diameter is observed using a Rayleigh scattering method. The scattered light intensity exhibits a power scaling on the backing pressure ranging from 16 to 50 bar, and the power is strongly Z dependent varying from 8.4 (Z = 3 mm) to 5.4 (Z = 11 mm), which is much larger than that of the argon cluster. The scattered light intensity versus axial position shows that the position of 5 mm has the maximum signal intensity. The estimation of the average cluster size on axial position Z indicates that the cluster growth process goes forward until the maximum average cluster size is reached at Z = 9 mm, and the average cluster size will decrease gradually for Z > 9 mm

  3. Microscopic Electron Variations Measured Simultaneously By The Cluster Spacecraft

    Science.gov (United States)

    Buckley, A. M.; Carozzi, T. D.; Gough, M. P.; Beloff, N.

    Data is used from the Particle Correlator experiments running on each of the four Cluster spacecraft so as to determine common microscopic behaviour in the electron population observed over the macroscopic Cluster separations. The Cluster particle correlator experiments operate by forming on board Auto Correlation Functions (ACFs) generated from short time series of electron counts obtained, as a function of electron energy, from the PEACE HEEA sensor. The information on the microscopic variation of the electron flux covers the frequency range DC up to 41 kHz (encompassing typical electron plasma frequencies and electron gyro frequencies and their harmonics); the electron energy range is that covered by the PEACE HEEA sensor (within the range 1 eV to 26 keV). Results are presented of coherent electron structures observed simultaneously by the four spacecraft in the differing plasma interaction regions and boundaries encountered by Cluster. As an aid to understanding the plasma interactions, use is made of numerical simulations which model both the underlying statistical properties of the electrons and also the manner in which particle correlator experiments operate.
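
    The sketch below illustrates the basic quantity such particle-correlator experiments form on board: the autocorrelation function of a short electron-count time series. The counts here are synthetic (Poisson noise plus a weak periodic modulation) and purely illustrative, not Cluster data.

```python
# Minimal sketch: autocorrelation function (ACF) of a short time series of electron
# counts, the basic on-board quantity of a particle-correlator experiment. The counts
# are synthetic (Poisson noise with a weak periodic modulation) and illustrative only.
import numpy as np

rng = np.random.default_rng(0)
n, f_mod = 1024, 0.05                                    # samples and modulation frequency (per sample)
rate = 10.0 * (1.0 + 0.3 * np.sin(2 * np.pi * f_mod * np.arange(n)))
counts = rng.poisson(rate)

x = counts - counts.mean()
acf = np.correlate(x, x, mode="full")[n - 1:]            # lags 0 .. n-1
acf /= acf[0]                                            # normalise so that ACF(0) = 1

for lag in (0, 5, 10, 20, int(1 / f_mod)):
    print(f"lag {lag:3d}: ACF = {acf[lag]:+.3f}")
# A peak near lag = 1/f_mod reveals the coherent modulation buried in the counting noise.
```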

  4. Variational methods for field theories

    Energy Technology Data Exchange (ETDEWEB)

    Ben-Menahem, S.

    1986-09-01

    Four field theory models are studied: Periodic Quantum Electrodynamics (PQED) in (2 + 1) dimensions, free scalar field theory in (1 + 1) dimensions, the Quantum XY model in (1 + 1) dimensions, and the (1 + 1) dimensional Ising model in a transverse magnetic field. The last three parts deal exclusively with variational methods; the PQED part involves mainly the path-integral approach. The PQED calculation results in a better understanding of the connection between electric confinement through monopole screening, and confinement through tunneling between degenerate vacua. This includes a better quantitative agreement for the string tensions in the two approaches. Free field theory is used as a laboratory for a new variational blocking-truncation approximation, in which the high-frequency modes in a block are truncated to wave functions that depend on the slower background modes (Born-Oppenheimer approximation). This "adiabatic truncation" method gives very accurate results for ground-state energy density and correlation functions. Various adiabatic schemes, with one variable kept per site and then two variables per site, are used. For the XY model, several trial wave functions for the ground state are explored, with an emphasis on the periodic Gaussian. A connection is established with the vortex Coulomb gas of the Euclidean path integral approach. The approximations used are taken from the realms of statistical mechanics (mean field approximation, transfer-matrix methods) and of quantum mechanics (iterative blocking schemes). In developing blocking schemes based on continuous variables, problems due to the periodicity of the model were solved. Our results exhibit an order-disorder phase transition. The transfer-matrix method is used to find a good (non-blocking) trial ground state for the Ising model in a transverse magnetic field in (1 + 1) dimensions.

  5. Comparing the performance of biomedical clustering methods

    DEFF Research Database (Denmark)

    Wiwie, Christian; Baumbach, Jan; Röttger, Richard

    2015-01-01

    Identifying groups of similar objects is a popular first step in biomedical data analysis, but it is error-prone and impossible to perform manually. Many computational methods have been developed to tackle this problem. Here we assessed 13 well-known methods using 24 data sets ranging from gene expression to protein domains. Performance was judged on the basis of 13 common cluster validity indices. We developed a clustering analysis platform, ClustEval (http://clusteval.mpi-inf.mpg.de), to promote streamlined evaluation, comparison and reproducibility of clustering results in the future. This allowed us to objectively evaluate the performance of all tools on all data sets with up to 1,000 different parameter sets each, resulting in a total of more than 4 million calculated cluster validity indices. We observed that there was no universal best performer, but on the basis of this wide

  6. The polarizable embedding coupled cluster method

    DEFF Research Database (Denmark)

    Sneskov, Kristian; Schwabe, Tobias; Kongsted, Jacob

    2011-01-01

    We formulate a new combined quantum mechanics/molecular mechanics (QM/MM) method based on a self-consistent polarizable embedding (PE) scheme. For the description of the QM region, we apply the popular coupled cluster (CC) method detailing the inclusion of electrostatic and polarization effects...

  7. METHOD OF CONSTRUCTION OF GENETIC DATA CLUSTERS

    Directory of Open Access Journals (Sweden)

    N. A. Novoselova

    2016-01-01

    Full Text Available The paper presents a method for constructing genetic data clusters (functional modules) using randomized matrices. To build the functional modules, the selection and analysis of the eigenvalues of the gene-profile correlation matrix is performed. The principal components corresponding to the eigenvalues that differ significantly from those obtained for a randomly generated correlation matrix are used for the analysis. Each selected principal component forms a gene cluster. In a comparative experiment with analogous methods, the proposed method shows an advantage in allocating statistically significant clusters of different sizes, the ability to filter out non-informative genes, and the ability to extract biologically interpretable functional modules matching the real data structure.

  8. Revisiting the variation of clustering coefficient of biological networks suggests new modular structure.

    Science.gov (United States)

    Hao, Dapeng; Ren, Cong; Li, Chuanxing

    2012-05-01

    A central idea in biology is the hierarchical organization of cellular processes. A commonly used method to identify the hierarchical modular organization of a network relies on detecting a global signature known as variation of clustering coefficient (so-called modularity scaling). Although several studies have suggested other possible origins of this signature, it is still widely used nowadays to identify hierarchical modularity, especially in the analysis of biological networks. Therefore, a further and systematic investigation of this signature for different types of biological networks is necessary. We analyzed a variety of biological networks and found that the commonly used signature of hierarchical modularity is actually the reflection of spoke-like topology, suggesting a different view of network architecture. We proved that the existence of super-hubs is the reason that the clustering coefficient of a node follows a particular scaling law with degree k in metabolic networks. To study the modularity of biological networks, we systematically investigated the relationship between repulsion of hubs and variation of clustering coefficient. We provided direct evidence for repulsion between hubs being the underlying origin of the variation of clustering coefficient, and found that for biological networks having no anti-correlation between hubs, such as the gene co-expression network, the clustering coefficient does not show a dependence on degree. Here we have shown that the variation of clustering coefficient is neither sufficient nor exclusive for a network to be hierarchical. Our results suggest the existence of spoke-like modules as opposed to the "deterministic model" of hierarchical modularity, and suggest the need to reconsider the organizational principle of biological hierarchy.
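
    The sketch below computes the signature discussed in the record, the average clustering coefficient C(k) as a function of node degree k, using a Barabási-Albert random graph as a stand-in for a biological network; a real metabolic or co-expression network would be loaded in its place.

```python
# Minimal sketch of the signature discussed in the record: the average clustering
# coefficient C(k) as a function of node degree k. A Barabasi-Albert random graph is
# an illustrative stand-in for a biological network.
import networkx as nx
from collections import defaultdict

G = nx.barabasi_albert_graph(n=2000, m=3, seed=0)

clustering = nx.clustering(G)
by_degree = defaultdict(list)
for node, c in clustering.items():
    by_degree[G.degree(node)].append(c)

for k in sorted(by_degree)[:10]:
    cs = by_degree[k]
    print(f"k = {k:3d}: <C(k)> = {sum(cs) / len(cs):.3f}  ({len(cs)} nodes)")
# A clear decrease of <C(k)> with k (roughly C(k) ~ 1/k) is the pattern usually read as
# hierarchical modularity; the record argues it can instead reflect spoke-like hubs.
```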

  9. Revisiting the variation of clustering coefficient of biological networks suggests new modular structure

    Directory of Open Access Journals (Sweden)

    Hao Dapeng

    2012-05-01

    Full Text Available Abstract Background A central idea in biology is the hierarchical organization of cellular processes. A commonly used method to identify the hierarchical modular organization of a network relies on detecting a global signature known as variation of clustering coefficient (so-called modularity scaling). Although several studies have suggested other possible origins of this signature, it is still widely used nowadays to identify hierarchical modularity, especially in the analysis of biological networks. Therefore, a further and systematic investigation of this signature for different types of biological networks is necessary. Results We analyzed a variety of biological networks and found that the commonly used signature of hierarchical modularity is actually the reflection of spoke-like topology, suggesting a different view of network architecture. We proved that the existence of super-hubs is the reason that the clustering coefficient of a node follows a particular scaling law with degree k in metabolic networks. To study the modularity of biological networks, we systematically investigated the relationship between repulsion of hubs and variation of clustering coefficient. We provided direct evidence for repulsion between hubs being the underlying origin of the variation of clustering coefficient, and found that for biological networks having no anti-correlation between hubs, such as the gene co-expression network, the clustering coefficient does not show a dependence on degree. Conclusions Here we have shown that the variation of clustering coefficient is neither sufficient nor exclusive for a network to be hierarchical. Our results suggest the existence of spoke-like modules as opposed to the “deterministic model” of hierarchical modularity, and suggest the need to reconsider the organizational principle of biological hierarchy.

  10. Population clustering based on copy number variations detected from next generation sequencing data.

    Science.gov (United States)

    Duan, Junbo; Zhang, Ji-Gang; Wan, Mingxi; Deng, Hong-Wen; Wang, Yu-Ping

    2014-08-01

    Copy number variations (CNVs) can be used as significant biomarkers, and next generation sequencing (NGS) provides high-resolution detection of these CNVs. But how to extract features from CNVs and further apply them to genomic studies such as population clustering has become a big challenge. In this paper, we propose a novel method for population clustering based on CNVs from NGS. First, CNVs are extracted from each sample to form a feature matrix. Then, this feature matrix is decomposed into the source matrix and weight matrix with non-negative matrix factorization (NMF). The source matrix consists of common CNVs that are shared by all the samples from the same group, and the weight matrix indicates the corresponding level of CNVs from each sample. Therefore, using NMF of CNVs one can differentiate samples from different ethnic groups, i.e. perform population clustering. To validate the approach, we applied it to the analysis of both simulation data and two real data sets from the 1000 Genomes Project. The results on simulation data demonstrate that the proposed method can recover the true common CNVs with high quality. The results of the first real data analysis show that the proposed method can cluster two family trios with different ancestries into two ethnic groups, and the results of the second real data analysis show that the proposed method can be applied to the whole genome with a large sample size consisting of multiple groups. Both results demonstrate the potential of the proposed method for population clustering.
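
    The sketch below illustrates the decomposition step described in the record: a non-negative CNV feature matrix is factorised with NMF into shared CNV patterns and per-sample weights, and samples are grouped by their dominant pattern. The matrix here is synthetic and the dimensions are illustrative assumptions.

```python
# Minimal sketch of the NMF step: a non-negative CNV feature matrix (samples x genomic
# bins) is factorised into a source matrix of shared CNV patterns and a per-sample weight
# matrix; samples are then grouped by their dominant pattern. Synthetic, illustrative data.
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
# two "populations", each sharing a different block of copy-number gains
group1 = np.abs(rng.normal(0.1, 0.05, (10, 40)))
group1[:, :10] += 1.0
group2 = np.abs(rng.normal(0.1, 0.05, (10, 40)))
group2[:, 20:30] += 1.0
features = np.vstack([group1, group2])                  # 20 samples x 40 CNV features

model = NMF(n_components=2, init="nndsvda", random_state=0, max_iter=500)
weights = model.fit_transform(features)                 # per-sample weights
sources = model.components_                             # shared CNV patterns

labels = weights.argmax(axis=1)                         # assign each sample to its dominant pattern
print("inferred group labels:", labels)
```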

  11. Radionuclide identification using subtractive clustering method

    International Nuclear Information System (INIS)

    Farias, Marcos Santana; Mourelle, Luiza de Macedo

    2011-01-01

    Radionuclide identification is crucial to planning protective measures in emergency situations. This paper presents the application of a method for a classification system of radioactive elements with a fast and efficient response. To achieve this goal, the application of the subtractive clustering algorithm is proposed. The proposed application can be implemented in reconfigurable hardware, a flexible medium for implementing digital hardware circuits. (author)
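
    The sketch below implements subtractive clustering in the commonly cited Chiu (1994) style. The radii and stopping ratio are common default choices, and synthetic 2-D points stand in for the spectral features; every value here is an assumption, not the authors' configuration or hardware implementation.

```python
# Minimal sketch of subtractive clustering in the style of Chiu (1994). The radii and
# stopping ratio are common default choices, and synthetic 2-D points stand in for the
# gamma-spectrum features; treat every value as an assumption.
import numpy as np

def subtractive_clustering(X, r_a=0.5, ratio=0.15):
    r_b = 1.5 * r_a
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)      # pairwise squared distances
    potential = np.exp(-4.0 * d2 / r_a ** 2).sum(axis=1)      # initial potential of each point
    first_peak, centers = potential.max(), []
    while True:
        c = potential.argmax()
        if potential[c] < ratio * first_peak:
            break
        centers.append(X[c])
        potential = potential - potential[c] * np.exp(-4.0 * d2[c] / r_b ** 2)  # suppress neighbours
    return np.array(centers)

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(m, 0.1, (30, 2)) for m in ([0, 0], [1, 1], [0, 1])])
print("found centers:\n", subtractive_clustering(X).round(2))
```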

  12. Metallicity Variations in the Type II Globular Cluster NGC 6934

    Science.gov (United States)

    Marino, A. F.; Yong, D.; Milone, A. P.; Piotto, G.; Lundquist, M.; Bedin, L. R.; Chené, A.-N.; Da Costa, G.; Asplund, M.; Jerjen, H.

    2018-06-01

    The Hubble Space Telescope photometric survey of Galactic globular clusters (GCs) has revealed a peculiar “chromosome map” for NGC 6934. In addition to a typical sequence, similar to that observed in Type I GCs, NGC 6934 displays additional stars on the red side, analogous to the anomalous Type II GCs, as defined in our previous work. We present a chemical abundance analysis of four red giants in this GC. Two stars are located on the chromosome map sequence common to all GCs, and another two lie on the additional sequence. We find (i) star-to-star Fe variations, with the two anomalous stars being enriched by ∼0.2 dex; because of our small sample, this difference is at the ∼2.5σ level; (ii) no evidence for variations in the slow neutron-capture abundances over Fe, at odds with what is often observed in anomalous Type II GCs, e.g., M 22 and ω Centauri; and (iii) no large variations in the light elements C, O, and Na, compatible with the locations of the targets on the lower part of the chromosome map, where such variations are not expected. Since the analyzed stars are homogeneous in light elements, the only way to reproduce the photometric splits on the sub-giant (SGB) and red giant (RGB) branches is to assume that red RGB/faint SGB stars are enhanced in [Fe/H] by ∼0.2. This fact corroborates the spectroscopic evidence of a metallicity variation in NGC 6934. The observed chemical pattern resembles only partially the other Type II GCs, suggesting that NGC 6934 might belong either to a third class of GCs, or be a link between normal Type I and anomalous Type II GCs. Based on observations with the NASA/ESA Hubble Space Telescope, obtained at the Space Telescope Science Institute, which is operated by AURA, Inc., under NASA contract NAS 5-26555. This paper includes data gathered with the 6.5 m Magellan Telescopes located at Las Campanas Observatory, Chile, and Gemini Telescope at Canada–France–Hawaii Telescope.

  13. 3D variational brain tumor segmentation using Dirichlet priors on a clustered feature set.

    Science.gov (United States)

    Popuri, Karteek; Cobzas, Dana; Murtha, Albert; Jägersand, Martin

    2012-07-01

    Brain tumor segmentation is a required step before any radiation treatment or surgery. When performed manually, segmentation is time consuming and prone to human errors. Therefore, there have been significant efforts to automate the process. But, automatic tumor segmentation from MRI data is a particularly challenging task. Tumors have a large diversity in shape and appearance with intensities overlapping the normal brain tissues. In addition, an expanding tumor can also deflect and deform nearby tissue. In our work, we propose an automatic brain tumor segmentation method that addresses these last two difficult problems. We use the available MRI modalities (T1, T1c, T2) and their texture characteristics to construct a multidimensional feature set. Then, we extract clusters which provide a compact representation of the essential information in these features. The main idea in this work is to incorporate these clustered features into the 3D variational segmentation framework. In contrast to previous variational approaches, we propose a segmentation method that evolves the contour in a supervised fashion. The segmentation boundary is driven by the learned region statistics in the cluster space. We incorporate prior knowledge about the normal brain tissue appearance during the estimation of these region statistics. In particular, we use a Dirichlet prior that discourages the clusters from the normal brain region to be in the tumor region. This leads to a better disambiguation of the tumor from brain tissue. We evaluated the performance of our automatic segmentation method on 15 real MRI scans of brain tumor patients, with tumors that are inhomogeneous in appearance, small in size and in proximity to the major structures in the brain. Validation with the expert segmentation labels yielded encouraging results: Jaccard (58%), Precision (81%), Recall (67%), Hausdorff distance (24 mm). Using priors on the brain/tumor appearance, our proposed automatic 3D variational

  14. Recent advances in coupled-cluster methods

    CERN Document Server

    Bartlett, Rodney J

    1997-01-01

    Today, coupled-cluster (CC) theory has emerged as the most accurate, widely applicable approach for the correlation problem in molecules. Furthermore, the correct scaling of the energy and wavefunction with size (i.e. extensivity) recommends it for studies of polymers and crystals as well as molecules. CC methods have also paid dividends for nuclei, and for certain strongly correlated systems of interest in field theory.In order for CC methods to have achieved this distinction, it has been necessary to formulate new, theoretical approaches for the treatment of a variety of essential quantities

  15. Advanced cluster methods for correlated-electron systems

    Energy Technology Data Exchange (ETDEWEB)

    Fischer, Andre

    2015-04-27

    In this thesis, quantum cluster methods are used to calculate electronic properties of correlated-electron systems. A special focus lies in the determination of the ground state properties of a 3/4 filled triangular lattice within the one-band Hubbard model. At this filling, the electronic density of states exhibits a so-called van Hove singularity and the Fermi surface becomes perfectly nested, causing an instability towards a variety of spin-density-wave (SDW) and superconducting states. While chiral d+id-wave superconductivity has been proposed as the ground state in the weak coupling limit, the situation towards strong interactions is unclear. Additionally, quantum cluster methods are used here to investigate the interplay of Coulomb interactions and symmetry-breaking mechanisms within the nematic phase of iron-pnictide superconductors. The transition from a tetragonal to an orthorhombic phase is accompanied by a significant change in electronic properties, while long-range magnetic order is not established yet. The driving force of this transition may not only be phonons but also magnetic or orbital fluctuations. The signatures of these scenarios are studied with quantum cluster methods to identify the most important effects. Here, cluster perturbation theory (CPT) and its variational extension, the variational cluster approach (VCA), are used to treat the respective systems on a level beyond mean-field theory. Short-range correlations are incorporated numerically exactly by exact diagonalization (ED). In the VCA, long-range interactions are included by variational optimization of a fictitious symmetry-breaking field based on a self-energy functional approach. Due to limitations of ED, cluster sizes are limited to a small number of degrees of freedom. For the 3/4 filled triangular lattice, the VCA is performed for different cluster symmetries. A strong symmetry dependence and finite-size effects make a comparison of the results from different clusters difficult

  16. Constraints on a possible variation of the fine structure constant from galaxy cluster data

    Energy Technology Data Exchange (ETDEWEB)

    Holanda, R.F.L. [Departamento de Física, Universidade Estadual da Paraíba, 58429-500, Campina Grande – PB (Brazil); Landau, S.J.; Sánchez G, I.E. [Departamento de Física, Facultad de Ciencias Exactas y Naturales, Universidad de Buenos Aires, and IFIBA, CONICET, Ciudad Universitaria – PabI, Buenos Aires 1428 (Argentina); Alcaniz, J.S. [Departamento de Astronomia, Observatório Nacional, 20921-400, Rio de Janeiro – RJ (Brazil); Busti, V.C., E-mail: holanda@uepb.edu.br, E-mail: slandau@df.uba.ar, E-mail: alcaniz@on.br, E-mail: isg.cos@gmail.com, E-mail: vinicius.busti@astro.iag.usp.br [Departamento de Física Matemática, Instituto de Física, Universidade de São Paulo, CP 66318, 05508-090, São Paulo – SP (Brazil)

    2016-05-01

    We propose a new method to probe a possible time evolution of the fine structure constant α from X-ray and Sunyaev-Zel'dovich measurements of the gas mass fraction (f_gas) in galaxy clusters. Taking into account a direct relation between variations of α and violations of the distance-duality relation, we discuss constraints on α for a class of dilaton runaway models. Although not yet competitive with bounds from high-z quasar absorption systems, our constraints, considering a sample of 29 measurements of f_gas in the redshift interval 0.14 < z < 0.89, provide an independent estimate of α variation at low and intermediate redshifts. Furthermore, current and planned surveys will provide a larger amount of data and thus allow us to improve the limits on α variation obtained in the present analysis.

  17. The adjoint variational nodal method

    International Nuclear Information System (INIS)

    Laurin-Kovitz, K.; Lewis, E.E.

    1993-01-01

    The widespread use of nodal methods for reactor core calculations in both diffusion and transport approximations has created a demand for the corresponding adjoint solutions as a prerequisite for performing perturbation calculations. With some computational methods, however, the solution of the adjoint problem presents a difficulty; the physical adjoint obtained by discretizing the adjoint equation is not the same as the mathematical adjoint obtained by taking the transpose of the coefficient matrix, which results from the discretization of the forward equation. This difficulty arises, in particular, when interface current nodal methods based on quasi-one-dimensional solution of the diffusion or transport equation are employed. The mathematical adjoint is needed to perform perturbation calculations. The utilization of existing nodal computational algorithms, however, requires the physical adjoint. As a result, similarity transforms or related techniques must be utilized to relate physical and mathematical adjoints. Thus far, such techniques have been developed only for diffusion theory

  18. Unbiased methods for removing systematics from galaxy clustering measurements

    Science.gov (United States)

    Elsner, Franz; Leistedt, Boris; Peiris, Hiranya V.

    2016-02-01

    Measuring the angular clustering of galaxies as a function of redshift is a powerful method for extracting information from the three-dimensional galaxy distribution. The precision of such measurements will dramatically increase with ongoing and future wide-field galaxy surveys. However, these are also increasingly sensitive to observational and astrophysical contaminants. Here, we study the statistical properties of three methods proposed for controlling such systematics (template subtraction, basic mode projection, and extended mode projection), all of which make use of externally supplied template maps designed to characterize and capture the spatial variations of potential systematic effects. Based on a detailed mathematical analysis, and in agreement with simulations, we find that the template subtraction method in its original formulation returns biased estimates of the galaxy angular clustering. We derive closed-form expressions that should be used to correct results for this shortcoming. Turning to the basic mode projection algorithm, we prove it to be free of any bias, whereas we conclude that results computed with extended mode projection are biased. Within a simplified setup, we derive analytical expressions for the bias and discuss the options for correcting it in more realistic configurations. Common to all three methods is an increased estimator variance induced by the cleaning process, albeit at different levels. These results enable unbiased high-precision clustering measurements in the presence of spatially varying systematics, an essential step towards realizing the full potential of current and planned galaxy surveys.

  19. Membership determination of open clusters based on a spectral clustering method

    Science.gov (United States)

    Gao, Xin-Hua

    2018-06-01

    We present a spectral clustering (SC) method aimed at segregating reliable members of open clusters in multi-dimensional space. The SC method is a non-parametric clustering technique that performs cluster division using eigenvectors of the similarity matrix; no prior knowledge of the clusters is required. This method is more flexible in dealing with multi-dimensional data compared to other methods of membership determination. We use this method to segregate the cluster members of five open clusters (Hyades, Coma Ber, Pleiades, Praesepe, and NGC 188) in five-dimensional space; fairly clean cluster members are obtained. We find that the SC method can capture a small number of cluster members (weak signal) from a large number of field stars (heavy noise). Based on these cluster members, we compute the mean proper motions and distances for the Hyades, Coma Ber, Pleiades, and Praesepe clusters, and our results are in general quite consistent with the results derived by other authors. The test results indicate that the SC method is highly suitable for segregating cluster members of open clusters based on high-precision multi-dimensional astrometric data such as Gaia data.
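
    The sketch below illustrates the idea in the record: spectral clustering in a multi-dimensional astrometric space to separate a compact group of probable cluster members from field stars. Synthetic proper motions and parallaxes stand in for Gaia-like data; the feature set, scaling, and cluster count are illustrative assumptions, not the author's exact pipeline.

```python
# Minimal sketch: spectral clustering in a multi-dimensional astrometric space to
# separate probable open-cluster members from field stars. Synthetic Gaia-like data;
# the feature set, scaling and cluster count are illustrative assumptions.
import numpy as np
from sklearn.cluster import SpectralClustering
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
members = rng.normal([10.0, -5.0, 2.0], [0.3, 0.3, 0.1], (100, 3))   # pmRA, pmDec, parallax
field = rng.normal([0.0, 0.0, 1.0], [8.0, 8.0, 0.8], (900, 3))       # diffuse field stars
X = StandardScaler().fit_transform(np.vstack([members, field]))

sc = SpectralClustering(n_clusters=2, affinity="nearest_neighbors", n_neighbors=15,
                        assign_labels="kmeans", random_state=0)
labels = sc.fit_predict(X)
print("group sizes:", np.bincount(labels))               # the compact group is the candidate cluster
```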

  20. Unusual clustering of coefficients of variation in published articles from a medical biochemistry department in India.

    Science.gov (United States)

    Hudes, Mark L; McCann, Joyce C; Ames, Bruce N

    2009-03-01

    A simple statistical method is described to test whether data are consistent with minimum statistical variability expected in a biological experiment. The method is applied to data presented in data tables in a subset of 84 articles among more than 200 published by 3 investigators in a small medical biochemistry department at a major university in India and to 29 "control" articles selected by key word PubMed searches. Major conclusions include: 1) unusual clustering of coefficients of variation (CVs) was observed for data from the majority of articles analyzed that were published by the 3 investigators from 2000-2007; unusual clustering was not observed for data from any of their articles examined that were published between 1992 and 1999; and 2) among a group of 29 control articles retrieved by PubMed key word, title, or title/abstract searches, unusually clustered CVs were observed in 3 articles. Two of these articles were coauthored by 1 of the 3 investigators, and 1 was from the same university but a different department. We are unable to offer a statistical or biological explanation for the unusual clustering observed.

  1. Variational method for integrating radial gradient field

    Science.gov (United States)

    Legarda-Saenz, Ricardo; Brito-Loeza, Carlos; Rivera, Mariano; Espinosa-Romero, Arturo

    2014-12-01

    We propose a variational method for integrating information obtained from circular fringe pattern. The proposed method is a suitable choice for objects with radial symmetry. First, we analyze the information contained in the fringe pattern captured by the experimental setup and then move to formulate the problem of recovering the wavefront using techniques from calculus of variations. The performance of the method is demonstrated by numerical experiments with both synthetic and real data.

  2. Hybrid Tracking Algorithm Improvements and Cluster Analysis Methods.

    Science.gov (United States)

    1982-02-26

    UPGMA), and Ward's method. Ling's papers describe a (k,r) clustering method. Each of these methods has individual characteristics which make them... Reference 7), UPGMA is probably the most frequently used clustering strategy. UPGMA tries to group new points into an existing cluster by using an
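
    For reference, the sketch below runs the two agglomerative strategies named in this record, group-average linkage (UPGMA) and Ward's method, using SciPy's hierarchical-clustering routines on synthetic data; the (k,r) method of Ling is not reproduced.

```python
# Minimal sketch of the agglomerative strategies named in the record: group-average
# linkage (UPGMA) and Ward's method, via SciPy's hierarchical clustering. Synthetic data.
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.5, (20, 2)), rng.normal(4, 0.5, (20, 2))])

for name, method in (("UPGMA (average linkage)", "average"), ("Ward", "ward")):
    Z = linkage(X, method=method)                        # condensed linkage matrix
    labels = fcluster(Z, t=2, criterion="maxclust")      # cut the tree into two clusters
    print(f"{name}: cluster sizes {np.bincount(labels)[1:]}")
```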

  3. MANNER OF STOCKS SORTING USING CLUSTER ANALYSIS METHODS

    Directory of Open Access Journals (Sweden)

    Jana Halčinová

    2014-06-01

    Full Text Available The aim of the present article is to show the possibility of using the methods of cluster analysis in the classification of stocks of finished products. Cluster analysis creates groups (clusters) of finished products according to similarity in demand, i.e. the customer requirements for each product. The manner of sorting stocks of finished products into clusters is described with a practical example. The resulting clusters are incorporated into the draft layout of the distribution warehouse.

  4. Cluster temperature. Methods for its measurement and stabilization

    International Nuclear Information System (INIS)

    Makarov, G N

    2008-01-01

    Cluster temperature is an important material parameter essential to many physical and chemical processes involving clusters and cluster beams. Because of the diverse methods by which clusters can be produced, excited, and stabilized, and also because of the widely ranging values of atomic and molecular binding energies (approximately from 10^-5 to 10 eV) and numerous energy relaxation channels in clusters, cluster temperature (internal energy) ranges from 10^-3 to about 10^8 K. This paper reviews research on cluster temperature and describes methods for its measurement and stabilization. The role of cluster temperature in and its influence on physical and chemical processes is discussed. Results on the temperature dependence of cluster properties are presented. The way in which cluster temperature relates to cluster structure and to atomic and molecular interaction potentials in clusters is addressed. Methods for strong excitation of clusters and channels for their energy relaxation are discussed. Some applications of clusters and cluster beams are considered. (reviews of topical problems)

  5. Clustering Methods Application for Customer Segmentation to Manage Advertisement Campaign

    OpenAIRE

    Maciej Kutera; Mirosława Lasek

    2010-01-01

    Clustering methods have recently become such advanced algorithms for the analysis of large data collections that they are now counted among data mining methods. Clustering methods form a larger and larger group of methods, evolving quickly and finding more and more varied applications. In the article, our research concerning the usefulness of clustering methods in customer segmentation to manage an advertisement campaign is presented. We introduce results obtained by using four sel...

  6. Integrated management of thesis using clustering method

    Science.gov (United States)

    Astuti, Indah Fitri; Cahyadi, Dedy

    2017-02-01

    A thesis is one of the major requirements for students pursuing their bachelor degree. Finishing the thesis involves a long process including consultation, writing the manuscript, conducting the chosen method, seminar scheduling, searching for references, and appraisal by the board of mentors and examiners. Unfortunately, most students find it hard to match all the lecturers' free time so that they can sit together in a seminar room to examine the thesis. Therefore, the seminar scheduling process should be the top priority to be solved. A manual mechanism for this task no longer fulfills the need. People on campus, including students, staff, and lecturers, demand a system in which all the stakeholders can interact with each other and manage the thesis process without timetable conflicts. A branch of computer science named Management Information System (MIS) could be a breakthrough in dealing with thesis management. This research applies a method called clustering to distinguish certain categories using mathematical formulas. A system is then developed along with the method to create a well-managed tool providing the main facilities such as seminar scheduling, consultation and review, thesis approval, assessment, and a reliable database of theses. The database plays an important role for present and future purposes.

  7. A Variational Level Set Model Combined with FCMS for Image Clustering Segmentation

    Directory of Open Access Journals (Sweden)

    Liming Tang

    2014-01-01

    Full Text Available The fuzzy C means clustering algorithm with spatial constraint (FCMS) is effective for image segmentation. However, it lacks essential smoothing constraints on the cluster boundaries and enough robustness to noise. Samson et al. proposed a variational level set model for image clustering segmentation, which can obtain smooth cluster boundaries and closed cluster regions due to the use of the level set scheme. However, it is very sensitive to noise since it is actually a hard C means clustering model. In this paper, based on Samson's work, we propose a new variational level set model combined with FCMS for image clustering segmentation. Compared with FCMS clustering, the proposed model can obtain smooth cluster boundaries and closed cluster regions due to the use of the level set scheme. In addition, a block-based energy is incorporated into the energy functional, which enables the proposed model to be more robust to noise than FCMS clustering and Samson's model. Some experiments on synthetic and real images are performed to assess the performance of the proposed model. Compared with some classical image segmentation models, the proposed model performs better on images contaminated by different noise levels.

  8. Performance Analysis of Entropy Methods on K Means in Clustering Process

    Science.gov (United States)

    Dicky Syahputra Lubis, Mhd.; Mawengkang, Herman; Suwilo, Saib

    2017-12-01

    K Means is a non-hierarchical data clustering method that attempts to partition data into one or more clusters/groups, so that data with the same characteristics are grouped into the same cluster and data with different characteristics are grouped into other clusters. The purpose of this clustering is to minimize an objective function set in the clustering process, which generally attempts to minimize variation within a cluster and maximize the variation between clusters. However, a main disadvantage of this method is that the number k is often not known beforehand. Furthermore, randomly chosen starting points may place two initial centroids very close to each other. Therefore, the entropy method is used to determine the starting points for K Means; this method can assign weights and take a decision from a set of alternatives. Entropy is able to investigate the harmony in discrimination among a multitude of data sets, and the criterion with the highest variation receives the highest weight. The entropy method can thus help the K Means process by determining the starting points, which are usually chosen at random, so that the clustering converges in fewer iterations than standard K Means. Using the postoperative-patient dataset from the UCI Machine Learning Repository, with only 12 records as a worked example, the entropy-seeded method reaches the desired end result in only 2 iterations.
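    The record describes seeding K Means with entropy-derived weights but does not spell out the exact seeding rule. The Python sketch below is one plausible reading, shown only to illustrate the idea: classical entropy weights are computed per attribute, records are scored by the weighted sum, and the highest-scoring records are used as initial centroids. The scoring rule, the toy data, and the choice of k = 2 are assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

def entropy_weights(X):
    """Classical entropy weights: attributes whose values are more
    diversified across records receive a larger weight."""
    P = X / X.sum(axis=0)                    # column-wise proportions
    P = np.where(P > 0, P, 1e-12)
    H = -(P * np.log(P)).sum(axis=0) / np.log(len(X))   # normalized entropy
    d = 1.0 - H                              # degree of diversification
    return d / d.sum()

rng = np.random.default_rng(1)
X = rng.random((12, 4))                      # 12 records, 4 attributes (toy data)

w = entropy_weights(X)
scores = X @ w                               # entropy-weighted score per record

# Use the two highest-scoring records as the initial centroids for k = 2.
init = X[np.argsort(scores)[-2:]]
km = KMeans(n_clusters=2, init=init, n_init=1, random_state=0).fit(X)
print("weights:", np.round(w, 3), "labels:", km.labels_)
```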

  9. First-principles cluster variation calculations of tetragonal-cubic transition in ZrO2

    International Nuclear Information System (INIS)

    Mohri, Tetsuo; Chen, Ying; Kiyokane, Naoya

    2013-01-01

    Highlights: ► The cluster variation method is extended to study displacive transitions. ► Electronic structure total energy calculations are performed on ZrO2. ► The tetragonal-cubic transition is studied within the framework of an order-disorder transition. -- Abstract: We attempt to extend the basic idea of the continuous displacement cluster variation method (CDCVM) to the study of a displacive phase transition. As a preliminary study, we focus on the cubic to tetragonal transition in ZrO2, in which oxygen atoms on the cubic lattice are displaced alternately in opposite directions (upward and downward) along the tetragonal axis. Within the CDCVM, displaced atoms are regarded as different atomic species, and two distinguishable atoms, A-oxygen (shifting upward) and B-oxygen (shifting downward), are introduced in the description of the free energy. FLAPW electronic structure total energy calculations are performed to extract effective interaction energies among displaced oxygen atoms, and by combining them with the CDCVM, the transition temperature is calculated from first principles

  10. AutoSOME: a clustering method for identifying gene expression modules without prior knowledge of cluster number

    Directory of Open Access Journals (Sweden)

    Cooper James B

    2010-03-01

    Full Text Available Abstract Background Clustering the information content of large high-dimensional gene expression datasets has widespread application in "omics" biology. Unfortunately, the underlying structure of these natural datasets is often fuzzy, and the computational identification of data clusters generally requires knowledge about cluster number and geometry. Results We integrated strategies from machine learning, cartography, and graph theory into a new informatics method for automatically clustering self-organizing map ensembles of high-dimensional data. Our new method, called AutoSOME, readily identifies discrete and fuzzy data clusters without prior knowledge of cluster number or structure in diverse datasets including whole genome microarray data. Visualization of AutoSOME output using network diagrams and differential heat maps reveals unexpected variation among well-characterized cancer cell lines. Co-expression analysis of data from human embryonic and induced pluripotent stem cells using AutoSOME identifies >3400 up-regulated genes associated with pluripotency, and indicates that a recently identified protein-protein interaction network characterizing pluripotency was underestimated by a factor of four. Conclusions By effectively extracting important information from high-dimensional microarray data without prior knowledge or the need for data filtration, AutoSOME can yield systems-level insights from whole genome microarray expression studies. Due to its generality, this new method should also have practical utility for a variety of data-intensive applications, including the results of deep sequencing experiments. AutoSOME is available for download at http://jimcooperlab.mcdb.ucsb.edu/autosome.

  11. Heterogeneous treatment in the variational nodal method

    International Nuclear Information System (INIS)

    Fanning, T.H.

    1995-01-01

    The variational nodal transport method is reduced to its diffusion form and generalized for the treatment of heterogeneous nodes while maintaining nodal balances. Adapting variational methods to heterogeneous nodes requires the ability to integrate over a node with discontinuous cross sections. In this work, integrals are evaluated using composite Gaussian quadrature rules, which permit accurate integration while minimizing computing time. Allowing structure within a nodal solution scheme avoids some of the need for cross section homogenization, and more accurately defines the intra-nodal flux shape. Ideally, any desired heterogeneity can be constructed within the node; but in reality, the finite set of basis functions limits the practical resolution to which fine detail can be defined within the node. Preliminary comparison tests show that the heterogeneous variational nodal method provides satisfactory results, even if some improvements are needed for very difficult configurations

  12. A variational synthesis nodal discrete ordinates method

    International Nuclear Information System (INIS)

    Favorite, J.A.; Stacey, W.M.

    1999-01-01

    A self-consistent nodal approximation method for computing discrete ordinates neutron flux distributions has been developed from a variational functional for neutron transport theory. The advantage of the new nodal method formulation is that it is self-consistent in its definition of the homogenized nodal parameters, the construction of the global nodal equations, and the reconstruction of the detailed flux distribution. The efficacy of the method is demonstrated by two-dimensional test problems

  13. Efficient computation of the elastography inverse problem by combining variational mesh adaption and a clustering technique

    International Nuclear Information System (INIS)

    Arnold, Alexander; Bruhns, Otto T; Reichling, Stefan; Mosler, Joern

    2010-01-01

    This paper is concerned with an efficient implementation suitable for the elastography inverse problem. More precisely, the novel algorithm allows us to compute the unknown stiffness distribution in soft tissue by means of the measured displacement field by considerably reducing the numerical cost compared to previous approaches. This is realized by combining and further elaborating variational mesh adaption with a clustering technique similar to those known from digital image compression. Within the variational mesh adaption, the underlying finite element discretization is only locally refined if this leads to a considerable improvement of the numerical solution. Additionally, the numerical complexity is reduced by the aforementioned clustering technique, in which the parameters describing the stiffness of the respective soft tissue are sorted according to a predefined number of intervals. By doing so, the number of unknowns associated with the elastography inverse problem can be chosen explicitly. A positive side effect of this method is the reduction of artificial noise in the data (smoothing of the solution). The performance and the rate of convergence of the resulting numerical formulation are critically analyzed by numerical examples.

  14. The variational cellular method - the code implementation

    International Nuclear Information System (INIS)

    Rosato, A.; Lima, M.A.P.

    1980-12-01

    The process to determine the potential energy curve for diatomic molecules by the Variational Cellular Method is discussed. An analysis of the determination of the electronic eigenenergies and the electrostatic energy of these molecules is made. An explanation of the input data and their meaning is also presented. (Author) [pt]

  15. Variational method for lattice spectroscopy with ghosts

    International Nuclear Information System (INIS)

    Burch, Tommy; Hagen, Christian; Gattringer, Christof; Glozman, Leonid Ya.; Lang, C.B.

    2006-01-01

    We discuss the variational method used in lattice spectroscopy calculations. In particular we address the role of ghost contributions which appear in quenched or partially quenched simulations and have a nonstandard euclidean time dependence. We show that the ghosts can be separated from the physical states. Our result is illustrated with numerical data for the scalar meson

  16. Homological methods, representation theory, and cluster algebras

    CERN Document Server

    Trepode, Sonia

    2018-01-01

    This text presents six mini-courses, all devoted to interactions between representation theory of algebras, homological algebra, and the new ever-expanding theory of cluster algebras. The interplay between the topics discussed in this text will continue to grow and this collection of courses stands as a partial testimony to this new development. The courses are useful for any mathematician who would like to learn more about this rapidly developing field; the primary aim is to engage graduate students and young researchers. Prerequisites include knowledge of some noncommutative algebra or homological algebra. Homological algebra has always been considered as one of the main tools in the study of finite-dimensional algebras. The strong relationship with cluster algebras is more recent and has quickly established itself as one of the important highlights of today’s mathematical landscape. This connection has been fruitful to both areas—representation theory provides a categorification of cluster algebras, wh...

  17. CCM: A Text Classification Method by Clustering

    DEFF Research Database (Denmark)

    Nizamani, Sarwat; Memon, Nasrullah; Wiil, Uffe Kock

    2011-01-01

    In this paper, a new Cluster based Classification Model (CCM) for suspicious email detection and other text classification tasks is presented. Comparative experiments of the proposed model against traditional classification models and the boosting algorithm are also discussed. Experimental results... show that the CCM outperforms traditional classification models as well as the boosting algorithm for the task of suspicious email detection on a terrorism domain email dataset and topic categorization on the Reuters-21578 and 20 Newsgroups datasets. The overall finding is that applying a cluster based...

  18. Single pass kernel k-means clustering method

    Indian Academy of Sciences (India)

    In unsupervised classification, the kernel k-means clustering method has been shown to perform better than the conventional k-means clustering method in ... 518501, India; Department of Computer Science and Engineering, Jawaharlal Nehru Technological University, Anantapur College of Engineering, Anantapur 515002, India ...

  19. Improving local clustering based top-L link prediction methods via asymmetric link clustering information

    Science.gov (United States)

    Wu, Zhihao; Lin, Youfang; Zhao, Yiji; Yan, Hongyan

    2018-02-01

    Networks can represent a wide range of complex systems, such as social, biological and technological systems. Link prediction is one of the most important problems in network analysis, and has attracted much research interest recently. Many link prediction methods have been proposed to solve this problem with various techniques. We can note that clustering information plays an important role in solving the link prediction problem. In previous literatures, we find node clustering coefficient appears frequently in many link prediction methods. However, node clustering coefficient is limited to describe the role of a common-neighbor in different local networks, because it cannot distinguish different clustering abilities of a node to different node pairs. In this paper, we shift our focus from nodes to links, and propose the concept of asymmetric link clustering (ALC) coefficient. Further, we improve three node clustering based link prediction methods via the concept of ALC. The experimental results demonstrate that ALC-based methods outperform node clustering based methods, especially achieving remarkable improvements on food web, hamster friendship and Internet networks. Besides, comparing with other methods, the performance of ALC-based methods are very stable in both globalized and personalized top-L link prediction tasks.
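    For orientation, the node-clustering baseline that this record sets out to improve can be written in a few lines with NetworkX: score a candidate link by the summed clustering coefficients of its common neighbours and rank the top-L non-edges. The sketch below shows only that baseline, not the asymmetric link clustering (ALC) coefficient itself, and the choice of the karate-club test graph and of L = 5 are assumptions.

```python
import networkx as nx

def cclp_score(G, u, v, cc):
    """Sum of node clustering coefficients over common neighbours of (u, v)."""
    return sum(cc[z] for z in nx.common_neighbors(G, u, v))

G = nx.karate_club_graph()
cc = nx.clustering(G)                        # node clustering coefficients

# Rank non-existent links by the score and keep the top L as predictions.
L = 5
candidates = list(nx.non_edges(G))
ranked = sorted(candidates, key=lambda e: cclp_score(G, e[0], e[1], cc),
                reverse=True)
print(ranked[:L])
```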

  20. Prediction of Solvent Physical Properties using the Hierarchical Clustering Method

    Science.gov (United States)

    Recently a QSAR (Quantitative Structure Activity Relationship) method, the hierarchical clustering method, was developed to estimate acute toxicity values for large, diverse datasets. This methodology has now been applied to estimate solvent physical properties including sur...

  1. New Constraints on Spatial Variations of the Fine Structure Constant from Clusters of Galaxies

    Directory of Open Access Journals (Sweden)

    Ivan De Martino

    2016-12-01

    Full Text Available We have constrained the spatial variation of the fine structure constant using multi-frequency measurements of the thermal Sunyaev-Zeldovich effect of 618 X-ray selected clusters. Although our results are not competitive with the ones from quasar absorption lines, we improve on previous results from the Cosmic Microwave Background power spectrum and from galaxy clusters by factors of 10 and ∼2.5, respectively.

  2. A Web service substitution method based on service cluster nets

    Science.gov (United States)

    Du, YuYue; Gai, JunJing; Zhou, MengChu

    2017-11-01

    Service substitution is an important research topic in the fields of Web services and service-oriented computing. This work presents a novel method to analyse and substitute Web services. A new concept, called a Service Cluster Net Unit, is proposed based on Web service clusters. A service cluster is converted into a Service Cluster Net Unit. Then it is used to analyse whether the services in the cluster can satisfy some service requests. Meanwhile, the substitution methods of an atomic service and a composite service are proposed. The correctness of the proposed method is proved, and the effectiveness is shown and compared with the state-of-the-art method via an experiment. It can be readily applied to e-commerce service substitution to meet the business automation needs.

  3. Fuzzy C-means method for clustering microarray data.

    Science.gov (United States)

    Dembélé, Doulaye; Kastner, Philippe

    2003-05-22

    Clustering analysis of data from DNA microarray hybridization studies is essential for identifying biologically relevant groups of genes. Partitional clustering methods such as K-means or self-organizing maps assign each gene to a single cluster. However, these methods do not provide information about the influence of a given gene on the overall shape of clusters. Here we apply a fuzzy partitioning method, Fuzzy C-means (FCM), to attribute cluster membership values to genes. A major problem in applying the FCM method for clustering microarray data is the choice of the fuzziness parameter m. We show that the commonly used value m = 2 is not appropriate for some data sets, and that optimal values for m vary widely from one data set to another. We propose an empirical method, based on the distribution of distances between genes in a given data set, to determine an adequate value for m. By setting threshold levels for the membership values, genes which are tightly associated with a given cluster can be selected. Using a yeast cell cycle data set as an example, we show that this selection increases the overall biological significance of the genes within the cluster. Supplementary text and Matlab functions are available at http://www-igbmc.u-strasbg.fr/fcm/
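    A compact NumPy sketch of the fuzzy C-means updates referred to here may be useful: memberships are inverse-distance weights controlled by the fuzziness parameter m, and centroids are membership-weighted means. The synthetic data, the fixed iteration count, and the example value m = 2 are assumptions; the record's point is precisely that m should be tuned per data set rather than fixed at 2.

```python
import numpy as np

def fuzzy_c_means(X, c=3, m=2.0, n_iter=100, seed=0):
    """Minimal fuzzy C-means: returns the membership matrix U (n x c)
    and the cluster centroids (c x d)."""
    rng = np.random.default_rng(seed)
    U = rng.dirichlet(np.ones(c), size=len(X))          # initial memberships
    for _ in range(n_iter):
        Um = U ** m                                     # fuzzified memberships
        centroids = (Um.T @ X) / Um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2) + 1e-12
        w = d ** (-2.0 / (m - 1.0))                     # inverse-distance weights
        U = w / w.sum(axis=1, keepdims=True)
    return U, centroids

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(mu, 0.3, size=(50, 2)) for mu in (0.0, 2.0, 4.0)])

# m = 2 is the textbook default; the record argues m should be tuned per data set.
U, V = fuzzy_c_means(X, c=3, m=2.0)
print("largest membership of the first five points:", np.round(U.max(axis=1)[:5], 2))
```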

  4. Progeny Clustering: A Method to Identify Biological Phenotypes

    Science.gov (United States)

    Hu, Chenyue W.; Kornblau, Steven M.; Slater, John H.; Qutub, Amina A.

    2015-01-01

    Estimating the optimal number of clusters is a major challenge in applying cluster analysis to any type of dataset, especially to biomedical datasets, which are high-dimensional and complex. Here, we introduce an improved method, Progeny Clustering, which is stability-based and exceptionally efficient in computing, to find the ideal number of clusters. The algorithm employs a novel Progeny Sampling method to reconstruct cluster identity, a co-occurrence probability matrix to assess the clustering stability, and a set of reference datasets to overcome inherent biases in the algorithm and data space. Our method was shown successful and robust when applied to two synthetic datasets (datasets of two-dimensions and ten-dimensions containing eight dimensions of pure noise), two standard biological datasets (the Iris dataset and Rat CNS dataset) and two biological datasets (a cell phenotype dataset and an acute myeloid leukemia (AML) reverse phase protein array (RPPA) dataset). Progeny Clustering outperformed some popular clustering evaluation methods in the ten-dimensional synthetic dataset as well as in the cell phenotype dataset, and it was the only method that successfully discovered clinically meaningful patient groupings in the AML RPPA dataset. PMID:26267476

  5. Variations in 137Cs activity concentrations in individual fruitbodies in clusters of Cantharellus tubaeformis

    International Nuclear Information System (INIS)

    Nikolova, I.; Johanson, K.J.

    1999-01-01

    Fruitbodies of Cantharellus tubaeformis grown in clusters were collected from a normal Swedish coniferous forest. In the laboratory, individual fruitbodies were transferred into plastic vials and the 137Cs activity concentrations were determined and expressed as Bq kg^-1 fresh weight. After drying at 55°, the 137Cs levels were recalculated and expressed as Bq kg^-1 dry weight. Large variations of 137Cs activity concentrations between individual fruitbodies within the clusters were observed. In 1995, the 137Cs levels of individual fruitbodies ranged from 9,194 to 164,811 Bq kg^-1 in one cluster and from 2,338 to 38,377 Bq kg^-1 in another. The mean values for these two clusters were 90,294 and 13,556 Bq kg^-1, respectively. In 1998, the mean value for eight clusters ranged from 26,373 to 67,281 Bq kg^-1. The largest variation between individual fruitbodies within a cluster was from 11,875 to 107,160 Bq kg^-1. Refs. 10 (author)

  6. Photon energy dependent intensity variations observed in Auger spectra of free argon clusters

    International Nuclear Information System (INIS)

    Lundwall, M; Lindblad, A; Bergersen, H; Rander, T; Oehrwall, G; Tchaplyguine, M; Peredkov, S; Svensson, S; Bjoerneholm, O

    2006-01-01

    Photon energy dependent intensity variations are experimentally observed in the L2,3M2,3M2,3 Auger spectra of argon clusters. Two cluster sizes are examined in the present study. Extrinsic scattering effects, both elastic and inelastic, involving the photoelectron are discussed and suggested as the explanation of the variations in the Auger signal. The atoms in the first few coordination shells surrounding the core-ionized atom are proposed to be the main targets for the scattering processes

  7. A Latent Variable Clustering Method for Wireless Sensor Networks

    DEFF Research Database (Denmark)

    Vasilev, Vladislav; Iliev, Georgi; Poulkov, Vladimir

    2016-01-01

    In this paper we derive a clustering method based on the Hidden Conditional Random Field (HCRF) model in order to maximize the performance of a wireless sensor. Our novel approach to clustering in this paper is in the application of an index invariant graph that we defined in a previous work and...

  8. Single pass kernel k-means clustering method

    Indian Academy of Sciences (India)

    paper proposes a simple and faster version of the kernel k-means clustering ... It has been considered as an important tool ... On the other hand, kernel-based clustering methods, like kernel k-means clustering, ... available at the UCI machine learning repository (Murphy 1994). ... All the data sets have only numeric valued features.

  9. Clustering Methods Application for Customer Segmentation to Manage Advertisement Campaign

    Directory of Open Access Journals (Sweden)

    Maciej Kutera

    2010-10-01

    Full Text Available Clustering methods have recently become such advanced algorithms for the analysis of large data collections that they are now counted among data mining methods. Clustering methods form a larger and larger group of methods, evolving quickly and finding more and more varied applications. In the article, our research concerning the usefulness of clustering methods in customer segmentation to manage an advertisement campaign is presented. We introduce results obtained by using four selected methods, chosen because their peculiarities suggested applicability to our purposes. One of the analyzed methods, k-means clustering with randomly selected initial cluster seeds, gave very good results in customer segmentation for managing an advertisement campaign, and these results are presented in detail in the article. In contrast, one of the methods (hierarchical average linkage) was found useless for customer segmentation. Further investigation of the benefits of clustering methods in customer segmentation to manage advertisement campaigns is worth continuing, particularly because solutions in this field can give measurable profits for marketing activity.
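    As a minimal illustration of the k-means segmentation variant that worked well in this study, the Python sketch below standardizes a hypothetical customer table and partitions it with scikit-learn's KMeans using randomly chosen initial seeds. The attribute names, the sample data, and the choice of four segments are assumptions, not the study's settings.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
# Hypothetical customer attributes: [age, yearly spend, visits per month].
customers = rng.normal(loc=[35, 1200, 4], scale=[10, 400, 2], size=(200, 3))

X = StandardScaler().fit_transform(customers)        # put attributes on one scale
km = KMeans(n_clusters=4, init="random", n_init=10, random_state=0)
segments = km.fit_predict(X)

for s in range(4):
    profile = customers[segments == s].mean(axis=0)
    print(f"segment {s}: mean [age, spend, visits] = {np.round(profile, 1)}")
```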

  10. Temporal super resolution using variational methods

    DEFF Research Database (Denmark)

    Keller, Sune Høgild; Lauze, Francois Bernard; Nielsen, Mads

    2010-01-01

    Temporal super resolution (TSR) is the ability to convert video from one frame rate to another and is as such a key functionality in modern video processing systems. A higher frame rate than what is recorded is desired for high frame rate displays, for super slow-motion, and for video/film format... observed when watching video on large and bright displays, where the motion of high-contrast edges often seems jerky and unnatural. A novel motion compensated (MC) TSR algorithm using variational methods for both optical flow calculation and the actual new frame interpolation is presented. The flow...

  11. EVIDENCE FOR CLUSTER TO CLUSTER VARIATIONS IN LOW-MASS STELLAR ROTATIONAL EVOLUTION

    International Nuclear Information System (INIS)

    Coker, Carl T.; Pinsonneault, Marc; Terndrup, Donald M.

    2016-01-01

    The concordance model for angular momentum evolution postulates that star-forming regions and clusters are an evolutionary sequence that can be modeled with assumptions about protostar–disk coupling, angular momentum loss from magnetized winds that saturates in a mass-dependent fashion at high rotation rates, and core-envelope decoupling for solar analogs. We test this approach by combining established data with the large h Per data set from the MONITOR project and new low-mass Pleiades data. We confirm prior results that young low-mass stars can be used to test star–disk coupling and angular momentum loss independent of the treatment of internal angular momentum transport. For slow rotators, we confirm the need for star–disk interactions to evolve the ONC to older systems, using h Per (age 13 Myr) as our natural post-disk case. There is no evidence for extremely long-lived disks as an alternative to core-envelope decoupling. However, our wind models cannot evolve rapid rotators from h Per to older systems consistently, and we find that this result is robust with respect to the choice of angular momentum loss prescription. We outline two possible solutions: either there is cosmic variance in the distribution of stellar rotation rates in different clusters or there are substantially enhanced torques in low-mass rapid rotators. We favor the former explanation and discuss observational tests that could be used to distinguish them. If the distribution of initial conditions depends on environment, models that test parameters by assuming a universal underlying distribution of initial conditions will need to be re-evaluated.

  12. EVIDENCE FOR CLUSTER TO CLUSTER VARIATIONS IN LOW-MASS STELLAR ROTATIONAL EVOLUTION

    Energy Technology Data Exchange (ETDEWEB)

    Coker, Carl T.; Pinsonneault, Marc; Terndrup, Donald M., E-mail: coker@astronomy.ohio-state.edu, E-mail: pinsono@astronomy.ohio-state.edu, E-mail: terndrup@astronomy.ohio-state.edu [Department of Astronomy, The Ohio State University, Columbus, OH 43210 (United States)

    2016-12-10

    The concordance model for angular momentum evolution postulates that star-forming regions and clusters are an evolutionary sequence that can be modeled with assumptions about protostar–disk coupling, angular momentum loss from magnetized winds that saturates in a mass-dependent fashion at high rotation rates, and core-envelope decoupling for solar analogs. We test this approach by combining established data with the large h Per data set from the MONITOR project and new low-mass Pleiades data. We confirm prior results that young low-mass stars can be used to test star–disk coupling and angular momentum loss independent of the treatment of internal angular momentum transport. For slow rotators, we confirm the need for star–disk interactions to evolve the ONC to older systems, using h Per (age 13 Myr) as our natural post-disk case. There is no evidence for extremely long-lived disks as an alternative to core-envelope decoupling. However, our wind models cannot evolve rapid rotators from h Per to older systems consistently, and we find that this result is robust with respect to the choice of angular momentum loss prescription. We outline two possible solutions: either there is cosmic variance in the distribution of stellar rotation rates in different clusters or there are substantially enhanced torques in low-mass rapid rotators. We favor the former explanation and discuss observational tests that could be used to distinguish them. If the distribution of initial conditions depends on environment, models that test parameters by assuming a universal underlying distribution of initial conditions will need to be re-evaluated.

  13. The relationship between supplier networks and industrial clusters: an analysis based on the cluster mapping method

    Directory of Open Access Journals (Sweden)

    Ichiro IWASAKI

    2010-06-01

    Full Text Available Michael Porter’s concept of competitive advantages emphasizes the importance of regional cooperation of various actors in order to gain competitiveness on globalized markets. Foreign investors may play an important role in forming such cooperation networks. Their local suppliers tend to concentrate regionally. They can form, together with local institutions of education, research, financial and other services, development agencies, the nucleus of cooperative clusters. This paper deals with the relationship between supplier networks and clusters. Two main issues are discussed in more detail: the interest of multinational companies in entering regional clusters and the spillover effects that may stem from their participation. After the discussion on the theoretical background, the paper introduces a relatively new analytical method: “cluster mapping” - a method that can spot regional hot spots of specific economic activities with cluster building potential. Experience with the method was gathered in the US and in the European Union. After the discussion on the existing empirical evidence, the authors introduce their own cluster mapping results, which they obtained by using a refined version of the original methodology.

  14. An Examination of Three Spatial Event Cluster Detection Methods

    Directory of Open Access Journals (Sweden)

    Hensley H. Mariathas

    2015-03-01

    Full Text Available In spatial disease surveillance, geographic areas with large numbers of disease cases are to be identified, so that targeted investigations can be pursued. Geographic areas with high disease rates are called disease clusters and statistical cluster detection tests are used to identify geographic areas with higher disease rates than expected by chance alone. In some situations, disease-related events rather than individuals are of interest for geographical surveillance, and methods to detect clusters of disease-related events are called event cluster detection methods. In this paper, we examine three distributional assumptions for the events in cluster detection: compound Poisson, approximate normal and multiple hypergeometric (exact. The methods differ on the choice of distributional assumption for the potentially multiple correlated events per individual. The methods are illustrated on emergency department (ED presentations by children and youth (age < 18 years because of substance use in the province of Alberta, Canada, during 1 April 2007, to 31 March 2008. Simulation studies are conducted to investigate Type I error and the power of the clustering methods.

  15. Microscopic description of nuclear few-body systems with the stochastic variational method

    International Nuclear Information System (INIS)

    Suzuki, Yasuyuki

    2000-01-01

    A simple gambling procedure called the stochastic variational method can be applied, together with appropriate variational trial functions, to solve a few-body system in which the correlation between the constituents plays an important role in determining its structure. The usefulness of the method is tested by comparison with other accurate solutions for Coulombic systems. Examples of application shown here include few-nucleon systems interacting with realistic forces and few-cluster systems with the Pauli principle taken into account properly. These examples confirm the power of the stochastic variational method. There still remain many problems in extending the method to systems consisting of more particles. (author)

  16. Test computations on the dynamical evolution of star clusters. [Fluid dynamic method

    Energy Technology Data Exchange (ETDEWEB)

    Angeletti, L; Giannone, P. (Rome Univ. (Italy))

    1977-01-01

    Test calculations have been carried out on the evolution of star clusters using the fluid-dynamical method devised by Larson (1970). Large systems of stars have been considered, with specific concern for globular clusters. With reference to the analogous 'standard' model by Larson, the influence on the results of varying in turn the various free parameters (cluster mass, star mass, tidal radius, mass concentration of the initial model) has been studied. Furthermore, the partial release of some simplifying assumptions with regard to the relaxation time and the distribution of the 'target' stars has been considered. The change of the structural properties is discussed, and the variation of the evolutionary time scale is outlined. An indicative agreement of the results obtained here with structural properties of globular clusters as deduced from previous theoretical models is pointed out.

  17. The resonating group method three cluster approach to the ground state 9 Li nucleus structure

    International Nuclear Information System (INIS)

    Filippov, G.F.; Pozdnyakov, Yu.A.; Terenetsky, K.O.; Verbitsky, V.P.

    1994-01-01

    The three-cluster approach for light atomic nuclei is formulated in the framework of the algebraic version of the resonating group method. The overlap integral and Hamiltonian matrix elements on generating functions are obtained for the 9Li nucleus. All cluster nucleon permutations of 9Li permitted by the Pauli principle were taken into account in the calculations. The results obtained can easily be generalised to any three-cluster system up to 12C. The matrix elements obtained in this work were used in variational calculations of the energetic and geometric characteristics of the 9Li ground state. It is shown that the 9Li ground state does not correspond to the shell-model limit and has a pronounced three-cluster structure. (author). 16 refs., 4 tab., 2 figs

  18. Sensitivity evaluation of dynamic speckle activity measurements using clustering methods

    International Nuclear Information System (INIS)

    Etchepareborda, Pablo; Federico, Alejandro; Kaufmann, Guillermo H.

    2010-01-01

    We evaluate and compare the use of competitive neural networks, self-organizing maps, the expectation-maximization algorithm, K-means, and fuzzy C-means techniques as partitional clustering methods, when the sensitivity of the activity measurement of dynamic speckle images needs to be improved. The temporal history of the acquired intensity generated by each pixel is analyzed in a wavelet decomposition framework, and it is shown that the mean energy of its corresponding wavelet coefficients provides a suited feature space for clustering purposes. The sensitivity obtained by using the evaluated clustering techniques is also compared with the well-known methods of Konishi-Fujii, weighted generalized differences, and wavelet entropy. The performance of the partitional clustering approach is evaluated using simulated dynamic speckle patterns and also experimental data.
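    The feature construction described here, the mean wavelet-coefficient energy of each pixel's intensity time history followed by partitional clustering, can be sketched with PyWavelets and scikit-learn. The stack below is random noise standing in for a recorded speckle sequence, and the wavelet family, decomposition level, and two-cluster choice are assumptions rather than the paper's settings.

```python
import numpy as np
import pywt
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Random stand-in for a recorded sequence: 64 frames of a 32 x 32 speckle image,
# giving one intensity time series per pixel.
stack = rng.random((64, 32, 32))
series = stack.reshape(64, -1).T                     # (n_pixels, n_frames)

def wavelet_energy(signal, wavelet="db4", level=3):
    """Mean energy of the detail coefficients at each decomposition level."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    return [float(np.mean(c ** 2)) for c in coeffs[1:]]   # skip the approximation band

features = np.array([wavelet_energy(s) for s in series])
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)
activity_map = labels.reshape(32, 32)                # low- vs high-activity regions
print(activity_map[:4])
```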

  19. Momentum-space cluster dual-fermion method

    Science.gov (United States)

    Iskakov, Sergei; Terletska, Hanna; Gull, Emanuel

    2018-03-01

    Recent years have seen the development of two types of nonlocal extensions to the single-site dynamical mean field theory. On one hand, cluster approximations, such as the dynamical cluster approximation, recover short-range momentum-dependent correlations nonperturbatively. On the other hand, diagrammatic extensions, such as the dual-fermion theory, recover long-ranged corrections perturbatively. The correct treatment of both strong short-ranged and weak long-ranged correlations within the same framework is therefore expected to lead to a quick convergence of results, and offers the potential of obtaining smooth self-energies in nonperturbative regimes of phase space. In this paper, we present an exact cluster dual-fermion method based on an expansion around the dynamical cluster approximation. Unlike previous formulations, our method does not employ a coarse-graining approximation to the interaction, which we show to be the leading source of error at high temperature, and converges to the exact result independently of the size of the underlying cluster. We illustrate the power of the method with results for the second-order cluster dual-fermion approximation to the single-particle self-energies and double occupancies.

  20. Polarizable Density Embedding Coupled Cluster Method

    DEFF Research Database (Denmark)

    Hršak, Dalibor; Olsen, Jógvan Magnus Haugaard; Kongsted, Jacob

    2018-01-01

    by an embedding potential consisting of a set of fragment densities obtained from calculations on isolated fragments with a quantum-chemistry method such as Hartree-Fock (HF) or Kohn-Sham density functional theory (KS-DFT) and dressed with a set of atom-centered anisotropic dipole-dipole polarizabilities...

  1. Method for detecting clusters of possible uranium deposits

    International Nuclear Information System (INIS)

    Conover, W.J.; Bement, T.R.; Iman, R.L.

    1978-01-01

    When a two-dimensional map contains points that appear to be scattered somewhat at random, a question that often arises is whether groups of points that appear to cluster are merely exhibiting ordinary behavior, which one can expect with any random distribution of points, or whether the clusters are too pronounced to be attributable to chance alone. A method for detecting clusters along a straight line is applied to the two-dimensional map of 214Bi anomalies observed as part of the National Uranium Resource Evaluation Program in the Lubbock, Texas, region. Some exact probabilities associated with this method are computed and compared with two approximate methods. The two methods for approximating probabilities work well in the cases examined and can be used when it is not feasible to obtain the exact probabilities
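    The exact probabilities in this record are specific to the authors' derivation, but the general flavour of a one-dimensional cluster test can be illustrated with a sliding-window scan statistic and a Monte Carlo reference distribution. Everything in the sketch (positions, window width, number of simulations) is an assumption used only for illustration.

```python
import numpy as np

def max_window_count(x, width):
    """Largest number of points falling inside any interval of the given width."""
    x = np.sort(x)
    return max(np.searchsorted(x, xi + width, side="right") - i
               for i, xi in enumerate(x))

rng = np.random.default_rng(3)
anomalies = rng.uniform(0, 100, size=40)        # hypothetical positions along a line
observed = max_window_count(anomalies, width=10.0)

# Monte Carlo estimate of how often pure chance produces a cluster this tight.
sims = [max_window_count(rng.uniform(0, 100, size=40), 10.0) for _ in range(2000)]
p_value = float(np.mean([s >= observed for s in sims]))
print(f"observed max count: {observed}, approximate p-value: {p_value:.3f}")
```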

  2. Average correlation clustering algorithm (ACCA) for grouping of co-regulated genes with similar pattern of variation in their expression values.

    Science.gov (United States)

    Bhattacharya, Anindya; De, Rajat K

    2010-08-01

    Distance based clustering algorithms can group genes that show similar expression values under multiple experimental conditions. They are unable to identify a group of genes that have a similar pattern of variation in their expression values. Previously we developed an algorithm called the divisive correlation clustering algorithm (DCCA) to tackle this situation, which is based on the concept of correlation clustering. But this algorithm may also fail in certain cases. In order to overcome these situations, we propose a new clustering algorithm, called the average correlation clustering algorithm (ACCA), which is able to produce a better clustering solution than that produced by some others. ACCA is able to find groups of genes having more common transcription factors and a similar pattern of variation in their expression values. Moreover, ACCA is more efficient than DCCA with respect to execution time. Like DCCA, ACCA uses the concept of correlation clustering introduced by Bansal et al. ACCA uses the correlation matrix in such a way that all genes in a cluster have the highest average correlation values with the genes in that cluster. We have applied ACCA and some well-known conventional methods, including DCCA, to two artificial and nine gene expression datasets, and compared the performance of the algorithms. The clustering results of ACCA are found to be significantly more relevant to the biological annotations than those of the other methods. Analysis of the results shows the superiority of ACCA over some others in determining a group of genes having more common transcription factors and a similar pattern of variation in their expression profiles. Availability of the software: The software has been developed using the C and Visual Basic languages, and can be executed on Microsoft Windows platforms. The software may be downloaded as a zip file from http://www.isical.ac.in/~rajat. Then it needs to be installed. Two word files (included in the zip file) need to
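    A toy version of the ACCA assignment rule, in the spirit of the description above, is sketched below: each gene is repeatedly moved to the cluster with which its average Pearson correlation is highest until the labels stop changing. This is not the published implementation (which is in C and Visual Basic); the random expression matrix, the fixed k, and the simple convergence loop are assumptions.

```python
import numpy as np

def acca_like(expr, k=3, n_iter=50, seed=0):
    """Toy reassignment loop in the spirit of ACCA: each gene moves to the
    cluster with which its average Pearson correlation is highest."""
    corr = np.corrcoef(expr)                         # gene-by-gene correlations
    rng = np.random.default_rng(seed)
    labels = rng.integers(k, size=len(expr))
    for _ in range(n_iter):
        means = np.full((len(expr), k), -np.inf)
        for c in range(k):
            members = labels == c
            if members.any():
                means[:, c] = corr[:, members].mean(axis=1)
        new_labels = means.argmax(axis=1)
        if np.array_equal(new_labels, labels):       # converged
            break
        labels = new_labels
    return labels

expr = np.random.default_rng(7).normal(size=(60, 10))   # 60 genes, 10 conditions
print(acca_like(expr, k=3))
```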

  3. Performance Analysis of Unsupervised Clustering Methods for Brain Tumor Segmentation

    Directory of Open Access Journals (Sweden)

    Tushar H Jaware

    2013-10-01

    Full Text Available Medical image processing is among the most challenging and emerging fields of neuroscience. The ultimate goal of medical image analysis in brain MRI is to extract important clinical features that would improve methods of diagnosis & treatment of disease. This paper focuses on methods to detect & extract brain tumours from brain MR images. MATLAB is used to design a software tool for locating brain tumours, based on unsupervised clustering methods. The K-Means clustering algorithm is implemented & tested on a database of 30 images. A performance evaluation of the unsupervised clustering methods is presented.

  4. A novel clustering and supervising users' profiles method

    Institute of Scientific and Technical Information of China (English)

    Zhu Mingfu; Zhang Hongbin; Song Fangyun

    2005-01-01

    To better understand different users' accessing intentions, a novel clustering and supervising method based on accessing paths is presented. This method divides the users' interest space to express the distribution of users' interests, and directly instructs the construction of the web page index for improved performance.

  5. Investigating the effects of climate variations on bacillary dysentery incidence in northeast China using ridge regression and hierarchical cluster analysis

    Directory of Open Access Journals (Sweden)

    Guo Junqiao

    2008-09-01

    Full Text Available Abstract Background The effects of climate variations on bacillary dysentery incidence have gained increasing concern. However, the multi-collinearity among meteorological factors affects the accuracy of correlation with bacillary dysentery incidence. Methods As a remedy, a modified method combining ridge regression and hierarchical cluster analysis was proposed for investigating the effects of climate variations on bacillary dysentery incidence in northeast China. Results All weather indicators, temperatures, precipitation, evaporation and relative humidity showed a positive correlation with the monthly incidence of bacillary dysentery, while air pressure had a negative correlation with the incidence. Ridge regression and hierarchical cluster analysis showed that during 1987-1996, relative humidity, temperatures and air pressure affected the transmission of bacillary dysentery. During this period, all meteorological factors were divided into three categories: relative humidity and precipitation belonged to one class, temperature indexes and evaporation belonged to another class, and air pressure formed the third class. Conclusion Meteorological factors have affected the transmission of bacillary dysentery in northeast China. Bacillary dysentery prevention and control would benefit from giving more consideration to local climate variations.
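    The two-step analysis described here, ridge regression to cope with collinear weather indicators plus hierarchical clustering of the indicators themselves, can be sketched with scikit-learn and SciPy. The synthetic monthly data, the factor names, the ridge penalty, and the 1 - |r| distance used for the factor clustering are assumptions, not the study's settings.

```python
import numpy as np
import pandas as pd
from scipy.cluster.hierarchy import fcluster, linkage
from sklearn.linear_model import Ridge

rng = np.random.default_rng(5)
# Hypothetical monthly data: 120 months x 6 weather indicators.
factors = ["temperature", "precipitation", "evaporation",
           "humidity", "air_pressure", "sunshine"]
X = pd.DataFrame(rng.normal(size=(120, 6)), columns=factors)
incidence = 2.0 * X["temperature"] + 1.5 * X["humidity"] + rng.normal(size=120)

# Ridge regression tolerates the collinearity among meteorological factors.
ridge = Ridge(alpha=1.0).fit(X, incidence)
print(dict(zip(factors, np.round(ridge.coef_, 2))))

# Hierarchical clustering groups factors that behave alike (1 - |r| as distance).
dist = 1.0 - np.abs(np.corrcoef(X.values.T))
condensed = dist[np.triu_indices(len(factors), k=1)]
tree = linkage(condensed, method="average")
print(dict(zip(factors, fcluster(tree, t=3, criterion="maxclust"))))
```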

  6. Variational methods for chemical and nuclear reactions

    International Nuclear Information System (INIS)

    Crawford, O.H.

    1977-01-01

    All the variational functionals are derived which satisfy certain criteria of suitability for molecular and nuclear scattering, below the threshold energy for three-body breakup. The existence and uniqueness of solutions are proven. The most general suitable functional is specialized, by particular values of its parameters, to Kohn's tan(eta), Kato's cot(eta - theta), the inverse Kohn cot(eta), Kohn's S matrix, our S matrix, Lane and Robson's functional, and several new functionals, an infinite number of which are contained in the general expression. Four general ways of deriving algebraic methods from a given functional are discussed, and illustrated with specific algebraic results. These include equations of Lane and Robson and of Kohn, the fundamental R matrix relation, and new equations. The relative configuration space is divided as in the Wigner R matrix theory, and trial wavefunctions are needed for only the region where all the particles are interacting. In addition, a version of the general functional is presented which does not require any division of space

  7. Image Registration Using Single Cluster PHD Methods

    Science.gov (United States)

    Campbell, M.; Schlangen, I.; Delande, E.; Clark, D.

    Cadets in the Department of Physics at the United States Air Force Academy are using the technique of slitless spectroscopy to analyze the spectra from geostationary satellites during glint season. The equinox periods of the year are particularly favorable for earth-based observers to detect specular reflections off satellites (glints), which have been observed in the past using broadband photometry techniques. Three seasons of glints were observed and analyzed for multiple satellites, as measured across the visible spectrum using a diffraction grating on the Academy’s 16-inch, f/8.2 telescope. It is clear from the results that the glint maximum wavelength decreases relative to the time periods before and after the glint, and that the spectral reflectance during the glint is less like a blackbody. These results are consistent with the presumption that solar panels are the predominant source of specular reflection. The glint spectra are also quantitatively compared to different blackbody curves and the solar spectrum by means of absolute differences and standard deviations. Our initial analysis appears to indicate a potential method of determining relative power capacity.

  8. Vinayaka : A Semi-Supervised Projected Clustering Method Using Differential Evolution

    OpenAIRE

    Satish Gajawada; Durga Toshniwal

    2012-01-01

    Differential Evolution (DE) is an algorithm for evolutionary optimization. Clustering problems have been solved by using DE based clustering methods, but these methods may fail to find clusters hidden in subspaces of high dimensional datasets. Subspace and projected clustering methods have been proposed in the literature to find subspace clusters that are present in subspaces of a dataset. In this paper we propose VINAYAKA, a semi-supervised projected clustering method based on DE. In this method DE opt...

  9. A CONSTRAINT ON BROWN DWARF FORMATION VIA EJECTION: RADIAL VARIATION OF THE STELLAR AND SUBSTELLAR MASS FUNCTION OF THE YOUNG OPEN CLUSTER IC 2391

    International Nuclear Information System (INIS)

    Boudreault, S.; Bailer-Jones, C. A. L.

    2009-01-01

    We present the stellar and substellar mass function (MF) of the open cluster IC 2391, plus its radial dependence, and use this to put constraints on the formation mechanism of brown dwarfs (BDs). Our multi-band optical and infrared photometric survey with spectroscopic follow-up covers 11 deg^2, making it the largest survey of this cluster to date. We observe a radial variation in the MF over the range 0.072-0.3 Msun, but no significant variation in the MF below the substellar boundary across the three cluster radius intervals analyzed. This lack of radial variation at low masses is what we would expect in the ejection scenario for BD formation, although considering that IC 2391 has an age about three times older than its crossing time, we expect that BDs with a velocity greater than the escape velocity have already escaped the cluster. Alternatively, the variation in the MF of the stellar objects could be an indication that they have undergone mass segregation via dynamical evolution. We also observe a significant variation across the cluster in the color of the (background) field star locus in color-magnitude diagrams and conclude that this is due to variable background extinction in the Galactic plane. From our preliminary spectroscopic follow-up, to confirm BD status and cluster membership, we find that all candidates are M dwarfs (in either the field or the cluster), demonstrating the efficiency of our photometric selection method in avoiding contaminants (e.g., red giants). About half of our photometric candidates for which we have spectra are spectroscopically confirmed as cluster members; two are new spectroscopically confirmed BD members of IC 2391.

  10. Kernel method for clustering based on optimal target vector

    International Nuclear Information System (INIS)

    Angelini, Leonardo; Marinazzo, Daniele; Pellicoro, Mario; Stramaglia, Sebastiano

    2006-01-01

    We introduce Ising models, suitable for dichotomic clustering, with couplings that are (i) both ferro- and anti-ferromagnetic and (ii) dependent on the whole data set and not only on pairs of samples. Couplings are determined by exploiting the notion of the optimal target vector, introduced here as a link between kernel supervised and unsupervised learning. The effectiveness of the method is shown in the case of the well-known iris data set and in benchmarks of gene expression levels, where it works better than existing methods for dichotomic clustering

  11. Agent-based method for distributed clustering of textual information

    Science.gov (United States)

    Potok, Thomas E [Oak Ridge, TN; Reed, Joel W [Knoxville, TN; Elmore, Mark T [Oak Ridge, TN; Treadwell, Jim N [Louisville, TN

    2010-09-28

    A computer method and system for storing, retrieving and displaying information has a multiplexing agent (20) that calculates a new document vector (25) for a new document (21) to be added to the system and transmits the new document vector (25) to master cluster agents (22) and cluster agents (23) for evaluation. These agents (22, 23) perform the evaluation and return values upstream to the multiplexing agent (20) based on the similarity of the document to documents stored under their control. The multiplexing agent (20) then sends the document (21) and the document vector (25) to the master cluster agent (22), which then forwards it to a cluster agent (23) or creates a new cluster agent (23) to manage the document (21). The system also searches for stored documents according to a search query having at least one term and identifying the documents found in the search, and displays the documents in a clustering display (80) of similarity so as to indicate similarity of the documents to each other.
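    The routing decision at the heart of this agent architecture, comparing a new document vector against the vectors held by existing cluster agents and either routing the document or creating a new agent, amounts to a similarity test with a creation threshold. The Python sketch below uses cosine similarity and a hypothetical threshold purely for illustration; the similarity measure and the threshold value are assumptions, not taken from the record.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two document vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Hypothetical vectors held by existing cluster agents, plus a new document.
cluster_agents = {"energy": np.array([0.9, 0.1, 0.2]),
                  "biology": np.array([0.1, 0.8, 0.3])}
new_document = np.array([0.2, 0.7, 0.4])

scores = {name: cosine(new_document, vec) for name, vec in cluster_agents.items()}
best, similarity = max(scores.items(), key=lambda kv: kv[1])

THRESHOLD = 0.6   # assumed cut-off below which a new cluster agent is created
if similarity >= THRESHOLD:
    print(f"route document to cluster agent '{best}' (similarity {similarity:.2f})")
else:
    print("create a new cluster agent for this document")
```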

  12. EXPLORING ANTICORRELATIONS AND LIGHT ELEMENT VARIATIONS IN NORTHERN GLOBULAR CLUSTERS OBSERVED BY THE APOGEE SURVEY

    Energy Technology Data Exchange (ETDEWEB)

    Mészáros, Szabolcs [ELTE Gothard Astrophysical Observatory, H-9704 Szombathely, Szent Imre Herceg st. 112 (Hungary); Martell, Sarah L. [Department of Astrophysics, School of Physics, University of New South Wales, Sydney, NSW 2052 (Australia); Shetrone, Matthew [University of Texas at Austin, McDonald Observatory, Fort Davis, TX 79734 (United States); Lucatello, Sara [INAF-Osservatorio Astronomico di Padova, vicolo dell Osservatorio 5, I-35122 Padova (Italy); Troup, Nicholas W.; Pérez, Ana E. García; Majewski, Steven R. [Department of Astronomy, University of Virginia, Charlottesville, VA 22904-4325 (United States); Bovy, Jo [Institute for Advanced Study, Einstein Drive, Princeton, NJ 08540 (United States); Cunha, Katia [University of Arizona, Tucson, AZ 85719 (United States); García-Hernández, Domingo A.; Prieto, Carlos Allende [Instituto de Astrofísica de Canarias (IAC), E-38200 La Laguna, Tenerife (Spain); Overbeek, Jamie C. [Department of Astronomy, Indiana University, Bloomington, IN 47405 (United States); Beers, Timothy C. [Department of Physics and JINA Center for the Evolution of the Elements, University of Notre Dame, Notre Dame, IN 46556 (United States); Frinchaboy, Peter M. [Texas Christian University, Fort Worth, TX 76129 (United States); Hearty, Fred R.; Schneider, Donald P. [Department of Astronomy and Astrophysics, The Pennsylvania State University, University Park, PA 16802 (United States); Holtzman, Jon [New Mexico State University, Las Cruces, NM 88003 (United States); Nidever, David L. [Department of Astronomy, University of Michigan, Ann Arbor, MI 48109 (United States); Schiavon, Ricardo P. [Astrophysics Research Institute, IC2, Liverpool Science Park, Liverpool John Moores University, 146 Brownlow Hill, Liverpool, L3 5RF (United Kingdom); and others

    2015-05-15

    We investigate the light-element behavior of red giant stars in northern globular clusters (GCs) observed by the SDSS-III Apache Point Observatory Galactic Evolution Experiment. We derive abundances of 9 elements (Fe, C, N, O, Mg, Al, Si, Ca, and Ti) for 428 red giant stars in 10 GCs. The intrinsic abundance range relative to measurement errors is examined, and the well-known C–N and Mg–Al anticorrelations are explored using an extreme-deconvolution code for the first time in a consistent way. We find that Mg and Al drive the population membership in most clusters, except in M107 and M71, the two most metal-rich clusters in our study, where the grouping is most sensitive to N. We also find a diversity in the abundance distributions, with some clusters exhibiting clear abundance bimodalities (for example M3 and M53) while others show extended distributions. The spread of Al abundances increases significantly as cluster average metallicity decreases as previously found by other works, which we take as evidence that low metallicity, intermediate mass AGB polluters were more common in the more metal-poor clusters. The statistically significant correlation of [Al/Fe] with [Si/Fe] in M15 suggests that 28Si leakage has occurred in this cluster. We also present C, N, and O abundances for stars cooler than 4500 K and examine the behavior of A(C+N+O) in each cluster as a function of temperature and [Al/Fe]. The scatter of A(C+N+O) is close to its estimated uncertainty in all clusters and independent of stellar temperature. A(C+N+O) exhibits small correlations and anticorrelations with [Al/Fe] in M3 and M13, but we cannot be certain about these relations given the size of our abundance uncertainties. Star-to-star variations of α-element (Si, Ca, Ti) abundances are comparable to our estimated errors in all clusters.

  13. Schroedinger's variational method of quantization revisited

    International Nuclear Information System (INIS)

    Yasue, K.

    1980-01-01

    Schroedinger's original quantization procedure is revisited in the light of Nelson's stochastic framework of quantum mechanics. It is clarified why Schroedinger's proposal of a variational problem led us to a true description of quantum mechanics. (orig.)

  14. A cluster approximation for the transfer-matrix method

    International Nuclear Information System (INIS)

    Surda, A.

    1990-08-01

    A cluster approximation for the transfer-matrix method is formulated. The calculation of the partition function of lattice models is transformed to a nonlinear mapping problem. The method yields the free energy, correlation functions and the phase diagrams for a large class of lattice models. The high accuracy of the method is exemplified by the calculation of the critical temperature of the Ising model. (author). 14 refs, 2 figs, 1 tab

  15. The range of variation of the mass of the most massive star in stellar clusters derived from 35 million Monte Carlo simulations

    Energy Technology Data Exchange (ETDEWEB)

    Popescu, Bogdan; Hanson, M. M., E-mail: bogdan.popescu@uc.edu, E-mail: margaret.hanson@uc.edu [Department of Physics, University of Cincinnati, P.O. Box 210011, Cincinnati, OH 45221-0011 (United States)

    2014-01-01

    A growing fraction of simple stellar population models, in an aim to create more realistic simulations capable of including stochastic variation in their outputs, begin their simulations with a distribution of discrete stars following a power-law function of masses. Careful attention is needed to create a correctly sampled initial mass function (IMF), and here we provide a solid mathematical method, called MASSCLEAN IMF Sampling, for doing so. We use our method to perform 10 million MASSCLEAN Monte Carlo stellar cluster simulations to determine the most massive star in a mass distribution as a function of the total mass of the cluster. We find that a maximum mass range is predicted, not a single maximum mass. This range is (1) dependent on the total mass of the cluster and (2) independent of an upper stellar mass limit, M_limit, for unsaturated clusters and emerges naturally from our IMF sampling method. We then turn our analysis around, starting with our new database of 25 million simulated clusters, to constrain the highest mass star from the observed integrated colors of a sample of 40 low-mass Large Magellanic Cloud stellar clusters of known age and mass. Finally, we present an analytical description of the maximum mass range of the most massive star as a function of the cluster's total mass and present a new M_max–M_cluster relation.

  16. The range of variation of the mass of the most massive star in stellar clusters derived from 35 million Monte Carlo simulations

    International Nuclear Information System (INIS)

    Popescu, Bogdan; Hanson, M. M.

    2014-01-01

    A growing fraction of simple stellar population models, in an aim to create more realistic simulations capable of including stochastic variation in their outputs, begin their simulations with a distribution of discrete stars following a power-law function of masses. Careful attention is needed to create a correctly sampled initial mass function (IMF), and here we provide a solid mathematical method, called MASSCLEAN IMF Sampling, for doing so. We use our method to perform 10 million MASSCLEAN Monte Carlo stellar cluster simulations to determine the most massive star in a mass distribution as a function of the total mass of the cluster. We find that a maximum mass range is predicted, not a single maximum mass. This range is (1) dependent on the total mass of the cluster and (2) independent of an upper stellar mass limit, M_limit, for unsaturated clusters and emerges naturally from our IMF sampling method. We then turn our analysis around, starting with our new database of 25 million simulated clusters, to constrain the highest mass star from the observed integrated colors of a sample of 40 low-mass Large Magellanic Cloud stellar clusters of known age and mass. Finally, we present an analytical description of the maximum mass range of the most massive star as a function of the cluster's total mass and present a new M_max–M_cluster relation.
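    The core sampling step described here (drawing discrete stellar masses from a power-law IMF until a target cluster mass is reached, then recording the most massive star over many Monte Carlo realizations) can be sketched as follows. This is a minimal illustration with an assumed single-slope Salpeter-like IMF, assumed mass limits and cluster mass, and a simple stop-when-filled rule; it is not the MASSCLEAN IMF Sampling code.

```python
import numpy as np

def sample_imf(total_mass, alpha=2.35, m_min=0.1, m_max=150.0, rng=None):
    """Draw discrete stellar masses from a single-slope power-law IMF
    (dN/dm ~ m**-alpha) until the cluster's total mass is reached."""
    rng = np.random.default_rng() if rng is None else rng
    masses = []
    accumulated = 0.0
    k = 1.0 - alpha  # exponent used in the inverse-CDF transform
    while accumulated < total_mass:
        u = rng.random()
        # inverse CDF of a truncated power law on [m_min, m_max]
        m = (u * (m_max**k - m_min**k) + m_min**k) ** (1.0 / k)
        masses.append(m)
        accumulated += m
    return np.array(masses)

# Monte Carlo estimate of the range spanned by the most massive star
# for a cluster of a given total mass (values here are illustrative).
rng = np.random.default_rng(42)
m_cluster = 500.0  # solar masses, assumed example value
max_masses = [sample_imf(m_cluster, rng=rng).max() for _ in range(1000)]
lo, hi = np.percentile(max_masses, [10, 90])
print(f"most massive star: median {np.median(max_masses):.1f}, "
      f"10-90% range {lo:.1f}-{hi:.1f} Msun")
```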

  17. Fuzzy Clustering Methods and their Application to Fuzzy Modeling

    DEFF Research Database (Denmark)

    Kroszynski, Uri; Zhou, Jianjun

    1999-01-01

    Fuzzy modeling techniques based upon the analysis of measured input/output data sets result in a set of rules that allow prediction of system outputs from given inputs. Fuzzy clustering methods for system modeling and identification result in relatively small rule-bases, allowing fast, yet accurate … An illustrative synthetic example is analyzed, and prediction accuracy measures are compared between the different variants …

  18. No Evidence of Chemical Abundance Variations in the Intermediate-age Cluster NGC 1783

    Science.gov (United States)

    Zhang, Hao; de Grijs, Richard; Li, Chengyuan; Wu, Xiaohan

    2018-02-01

    We have analyzed multi-passband photometric observations, obtained with the Hubble Space Telescope, of the massive (1.8 × 10⁵ M⊙), intermediate-age (1.8 Gyr-old) Large Magellanic Cloud star cluster NGC 1783. The morphology of the cluster’s red giant branch does not exhibit a clear broadening beyond its intrinsic width; the observed width is consistent with that owing to photometric uncertainties alone and independent of the photometric selection boundaries we applied to obtain our sample of red giant stars. The color dispersion of the cluster’s red giant stars around the best-fitting ridgeline is 0.062 ± 0.009 mag, which is equivalent to the width of 0.080 ± 0.001 mag derived from artificial simple stellar population tests, that is, tests based on single-age, single-metallicity stellar populations. NGC 1783 is comparably as massive as other star clusters that show clear evidence of multiple stellar populations. After incorporating mass-loss recipes from its current age of 1.8 Gyr to an age of 6 Gyr, NGC 1783 is expected to remain as massive as some other clusters that host clear multiple populations at these intermediate ages. If we were to assume that mass is an important driver of multiple population formation, then NGC 1783 should have exhibited clear evidence of chemical abundance variations. However, our results support the absence of any chemical abundance variations in NGC 1783.

  19. Dynamic analysis of clustered building structures using substructures methods

    International Nuclear Information System (INIS)

    Leimbach, K.R.; Krutzik, N.J.

    1989-01-01

    The dynamic substructure approach to the building cluster on a common base mat starts with the generation of Ritz-vectors for each building on a rigid foundation. The base mat plus the foundation soil is subjected to kinematic constraint modes, for example constant, linear, quadratic or cubic constraints. These constraint modes are also imposed on the buildings. By enforcing kinematic compatibility of the complete structural system on the basis of the constraint modes, a reduced Ritz model of the complete cluster is obtained. This reduced model can now be analyzed by modal time history or response spectrum methods.

  20. Clustering Methods with Qualitative Data: a Mixed-Methods Approach for Prevention Research with Small Samples.

    Science.gov (United States)

    Henry, David; Dymnicki, Allison B; Mohatt, Nathaniel; Allen, James; Kelly, James G

    2015-10-01

    Qualitative methods potentially add depth to prevention research but can produce large amounts of complex data even with small samples. Studies conducted with culturally distinct samples often produce voluminous qualitative data but may lack sufficient sample sizes for sophisticated quantitative analysis. Currently lacking in mixed-methods research are methods allowing for more fully integrating qualitative and quantitative analysis techniques. Cluster analysis can be applied to coded qualitative data to clarify the findings of prevention studies by aiding efforts to reveal such things as the motives of participants for their actions and the reasons behind counterintuitive findings. By clustering groups of participants with similar profiles of codes in a quantitative analysis, cluster analysis can serve as a key component in mixed-methods research. This article reports two studies. In the first study, we conduct simulations to test the accuracy of cluster assignment using three different clustering methods with binary data as produced when coding qualitative interviews. Results indicated that hierarchical clustering, K-means clustering, and latent class analysis produced similar levels of accuracy with binary data and that the accuracy of these methods did not decrease with samples as small as 50. Whereas the first study explores the feasibility of using common clustering methods with binary data, the second study provides a "real-world" example using data from a qualitative study of community leadership connected with a drug abuse prevention project. We discuss the implications of this approach for conducting prevention research, especially with small samples and culturally distinct communities.

  1. Clustering Methods with Qualitative Data: A Mixed Methods Approach for Prevention Research with Small Samples

    Science.gov (United States)

    Henry, David; Dymnicki, Allison B.; Mohatt, Nathaniel; Allen, James; Kelly, James G.

    2016-01-01

    Qualitative methods potentially add depth to prevention research, but can produce large amounts of complex data even with small samples. Studies conducted with culturally distinct samples often produce voluminous qualitative data, but may lack sufficient sample sizes for sophisticated quantitative analysis. Currently lacking in mixed methods research are methods allowing for more fully integrating qualitative and quantitative analysis techniques. Cluster analysis can be applied to coded qualitative data to clarify the findings of prevention studies by aiding efforts to reveal such things as the motives of participants for their actions and the reasons behind counterintuitive findings. By clustering groups of participants with similar profiles of codes in a quantitative analysis, cluster analysis can serve as a key component in mixed methods research. This article reports two studies. In the first study, we conduct simulations to test the accuracy of cluster assignment using three different clustering methods with binary data as produced when coding qualitative interviews. Results indicated that hierarchical clustering, K-Means clustering, and latent class analysis produced similar levels of accuracy with binary data, and that the accuracy of these methods did not decrease with samples as small as 50. Whereas the first study explores the feasibility of using common clustering methods with binary data, the second study provides a “real-world” example using data from a qualitative study of community leadership connected with a drug abuse prevention project. We discuss the implications of this approach for conducting prevention research, especially with small samples and culturally distinct communities. PMID:25946969
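    A minimal sketch of the kind of simulation reported in the first study (clustering binary code matrices from a small sample with two of the three methods mentioned, then scoring recovery of the known groups) might look like the following; the data generator, group sizes, code probabilities, and the use of scikit-learn are assumptions, and latent class analysis is omitted.

```python
import numpy as np
from sklearn.cluster import KMeans, AgglomerativeClustering
from sklearn.metrics import adjusted_rand_score

rng = np.random.default_rng(0)

# Simulate binary "code" data for a small sample (e.g. 50 interviewees,
# 20 codes) with two latent groups that differ in code probabilities.
n_per_group, n_codes = 25, 20
p_group = [0.2, 0.6]
X = np.vstack([rng.random((n_per_group, n_codes)) < p for p in p_group]).astype(float)
truth = np.repeat([0, 1], n_per_group)

# Hierarchical (Ward) clustering and k-means on the binary matrix.
hier = AgglomerativeClustering(n_clusters=2).fit_predict(X)
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

print("hierarchical ARI:", adjusted_rand_score(truth, hier))
print("k-means ARI:     ", adjusted_rand_score(truth, km))
```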

  2. A Probabilistic Embedding Clustering Method for Urban Structure Detection

    Science.gov (United States)

    Lin, X.; Li, H.; Zhang, Y.; Gao, L.; Zhao, L.; Deng, M.

    2017-09-01

    Urban structure detection is a basic task in urban geography. Clustering is a core technology for detecting patterns of urban spatial structure, urban functional regions, and so on. In the big data era, diverse urban sensing datasets recording information such as human behaviour and social activity suffer from both high dimensionality and high noise, and state-of-the-art clustering methods do not handle these two issues concurrently. In this paper, a probabilistic embedding clustering method is proposed. First, we introduce a Probabilistic Embedding Model (PEM) to find latent features in high-dimensional urban sensing data by "learning" via a probabilistic model. The latent features capture the essential patterns hidden in high-dimensional data, and the probabilistic model also reduces the uncertainty caused by high noise. Second, by tuning the parameters, our model can discover two kinds of urban structure, homophily and structural equivalence, that is, communities with intensive interaction or communities playing the same roles in the urban structure. We evaluated the performance of our model through experiments on real-world data; experiments with real data from Shanghai (China) showed that our method can discover both kinds of urban structure.

  3. A PROBABILISTIC EMBEDDING CLUSTERING METHOD FOR URBAN STRUCTURE DETECTION

    Directory of Open Access Journals (Sweden)

    X. Lin

    2017-09-01

    Full Text Available Urban structure detection is a basic task in urban geography. Clustering is a core technology for detecting patterns of urban spatial structure, urban functional regions, and so on. In the big data era, diverse urban sensing datasets recording information such as human behaviour and social activity suffer from both high dimensionality and high noise, and state-of-the-art clustering methods do not handle these two issues concurrently. In this paper, a probabilistic embedding clustering method is proposed. First, we introduce a Probabilistic Embedding Model (PEM) to find latent features in high-dimensional urban sensing data by “learning” via a probabilistic model. The latent features capture the essential patterns hidden in high-dimensional data, and the probabilistic model also reduces the uncertainty caused by high noise. Second, by tuning the parameters, our model can discover two kinds of urban structure, homophily and structural equivalence, that is, communities with intensive interaction or communities playing the same roles in the urban structure. We evaluated the performance of our model through experiments on real-world data; experiments with real data from Shanghai (China) showed that our method can discover both kinds of urban structure.

  4. Variation and Commonality in Phenomenographic Research Methods

    Science.gov (United States)

    Akerlind, Gerlese S.

    2012-01-01

    This paper focuses on the data analysis stage of phenomenographic research, elucidating what is involved in terms of both commonality and variation in accepted practice. The analysis stage of phenomenographic research is often not well understood. This paper helps to clarify the process, initially by collecting together in one location the more…

  5. Application of a Light-Front Coupled Cluster Method

    International Nuclear Information System (INIS)

    Chabysheva, S.S.; Hiller, J.R.

    2012-01-01

    As a test of the new light-front coupled-cluster method in a gauge theory, we apply it to the nonperturbative construction of the dressed-electron state in QED, for an arbitrary covariant gauge, and compute the electron's anomalous magnetic moment. The construction illustrates the spectator and Fock-sector independence of vertex and self-energy contributions and indicates resolution of the difficulties with uncanceled divergences that plague methods based on Fock-space truncation. (author)

  6. A Clustering Method for Data in Cylindrical Coordinates

    Directory of Open Access Journals (Sweden)

    Kazuhisa Fujita

    2017-01-01

    Full Text Available We propose a new clustering method for data in cylindrical coordinates based on the k-means. The goal of the k-means family is to maximize an optimization function, which requires a similarity. Thus, we need a new similarity to obtain the new clustering method for data in cylindrical coordinates. In this study, we first derive a new similarity for the new clustering method by assuming a particular probabilistic model. A data point in cylindrical coordinates has radius, azimuth, and height. We assume that the azimuth is sampled from a von Mises distribution and the radius and the height are independently generated from isotropic Gaussian distributions. We derive the new similarity from the log likelihood of the assumed probability distribution. Our experiments demonstrate that the proposed method using the new similarity can appropriately partition synthetic data defined in cylindrical coordinates. Furthermore, we apply the proposed method to color image quantization and show that the method successfully quantizes a color image with respect to the hue element.
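    The similarity described in this abstract follows directly from the assumed probabilistic model: a von Mises term for the azimuth plus isotropic Gaussian terms for radius and height. The sketch below computes such a log-likelihood-based similarity and assigns points to the best-matching cluster center; the concentration and variance parameters, the fixed centers, and the toy data are assumptions rather than the authors' exact algorithm.

```python
import numpy as np

def log_similarity(r, theta, z, center, kappa=2.0, sigma_r=1.0, sigma_z=1.0):
    """Log-likelihood of a point (r, theta, z) in cylindrical coordinates
    under a cluster model: von Mises azimuth, Gaussian radius and height."""
    r0, theta0, z0 = center
    von_mises = kappa * np.cos(theta - theta0)        # up to an additive constant
    gauss_r = -0.5 * ((r - r0) / sigma_r) ** 2
    gauss_z = -0.5 * ((z - z0) / sigma_z) ** 2
    return von_mises + gauss_r + gauss_z

def assign(points, centers, **kw):
    """Assign each (r, theta, z) point to the cluster with highest similarity."""
    scores = np.array([[log_similarity(*p, c, **kw) for c in centers] for p in points])
    return scores.argmax(axis=1)

# Toy usage: two clusters separated mainly in azimuth.
points = np.array([[1.0, 0.1, 0.0], [1.2, 0.2, 0.1], [1.0, 3.0, 0.0]])
centers = [(1.0, 0.0, 0.0), (1.0, np.pi, 0.0)]
print(assign(points, centers))   # expected assignment: [0, 0, 1]
```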

  7. Multistep Hybrid Extragradient Method for Triple Hierarchical Variational Inequalities

    Directory of Open Access Journals (Sweden)

    Zhao-Rong Kong

    2013-01-01

    Full Text Available We consider a triple hierarchical variational inequality problem (THVIP), that is, a variational inequality problem defined over the set of solutions of another variational inequality problem which is defined over the intersection of the fixed point set of a strict pseudocontractive mapping and the solution set of the classical variational inequality problem. Moreover, we propose a multistep hybrid extragradient method to compute the approximate solutions of the THVIP and present the convergence analysis of the sequence generated by the proposed method. We also derive a solution method for solving a system of hierarchical variational inequalities (SHVI), that is, a system of variational inequalities defined over the intersection of the fixed point set of a strict pseudocontractive mapping and the solution set of the classical variational inequality problem. Under very mild conditions, it is proven that the sequence generated by the proposed method converges strongly to a unique solution of the SHVI.

  8. Using the Screened Coulomb Potential to Illustrate the Variational Method

    Science.gov (United States)

    Zuniga, Jose; Bastida, Adolfo; Requena, Alberto

    2012-01-01

    The screened Coulomb potential, or Yukawa potential, is used to illustrate the application of the single and linear variational methods. The trial variational functions are expressed in terms of Slater-type functions, for which the integrals needed to carry out the variational calculations are easily evaluated in closed form. The variational…
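    The single-parameter variational calculation referred to here can be reproduced numerically without the closed-form Slater integrals: take a trial function exp(-a r), evaluate the energy expectation value in the Yukawa potential -exp(-λr)/r (atomic units), and minimize over a. The quadrature-based sketch below is an illustration only, and the screening parameter value is an assumption.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import minimize_scalar

LAM = 0.2  # assumed screening parameter (atomic units)

def energy(a, lam=LAM):
    """Variational energy <T> + <V> for trial psi = exp(-a r)
    in the Yukawa potential V(r) = -exp(-lam*r)/r (atomic units)."""
    norm, _ = quad(lambda r: np.exp(-2 * a * r) * r**2, 0, np.inf)
    kinetic = 0.5 * a**2                      # <T> for a 1s-type exponential
    pot_int, _ = quad(lambda r: np.exp(-2 * a * r) * np.exp(-lam * r) * r, 0, np.inf)
    potential = -pot_int / norm
    return kinetic + potential

# Minimize the energy over the single variational parameter a.
res = minimize_scalar(energy, bounds=(0.05, 3.0), method="bounded")
print("optimal exponent a = %.4f, variational energy E = %.4f Ha" % (res.x, res.fun))
```

    For λ = 0 the sketch recovers the hydrogen result (a = 1, E = -0.5 Ha), which is a convenient sanity check on the quadrature.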

  9. A method of clustering observers with different visual characteristics

    Energy Technology Data Exchange (ETDEWEB)

    Niimi, Takanaga [Nagoya University School of Health Sciences, Department of Radiological Technology, 1-1-20 Daiko-minami, Higashi-ku, Nagoya 461-8673 (Japan); Imai, Kuniharu [Nagoya University School of Health Sciences, Department of Radiological Technology, 1-1-20 Daiko-minami, Higashi-ku, Nagoya 461-8673 (Japan); Ikeda, Mitsuru [Nagoya University School of Health Sciences, Department of Radiological Technology, 1-1-20 Daiko-minami, Higashi-ku, Nagoya 461-8673 (Japan); Maeda, Hisatoshi [Nagoya University School of Health Sciences, Department of Radiological Technology, 1-1-20 Daiko-minami, Higashi-ku, Nagoya 461-8673 (Japan)

    2006-01-15

    Evaluation of observer's image perception in medical images is important, and yet has not been performed because it is difficult to quantify visual characteristics. In the present study, we investigated the observer's image perception by clustering a group of 20 observers. Images of a contrast-detail (C-D) phantom, which had cylinders of 10 rows and 10 columns with different diameters and lengths, were acquired with an X-ray screen-film system with fixed exposure conditions. A group of 10 films were prepared for visual evaluations. Sixteen radiological technicians, three radiologists and one medical physicist participated in the observation test. All observers read the phantom radiographs on a transillumination image viewer with room lights off. The detectability was defined as the shortest cylinder length whose border the observers could recognize against the background, and was recorded using the number of columns. The detectability was calculated as the average of 10 readings for each observer, and plotted for different phantom diameters. The unweighted pair-group method using arithmetic averages (UPGMA) was adopted for clustering. The observers were clustered into two groups: one group selected objects with a demarcation from the vicinity, and the other group searched for the objects with their eyes constrained. This study showed the usefulness of the clustering method for selecting personnel with similar perceptual predispositions when a C-D phantom was used in image quality control.

  10. A method of clustering observers with different visual characteristics

    International Nuclear Information System (INIS)

    Niimi, Takanaga; Imai, Kuniharu; Ikeda, Mitsuru; Maeda, Hisatoshi

    2006-01-01

    Evaluation of observer's image perception in medical images is important, and yet has not been performed because it is difficult to quantify visual characteristics. In the present study, we investigated the observer's image perception by clustering a group of 20 observers. Images of a contrast-detail (C-D) phantom, which had cylinders of 10 rows and 10 columns with different diameters and lengths, were acquired with an X-ray screen-film system with fixed exposure conditions. A group of 10 films were prepared for visual evaluations. Sixteen radiological technicians, three radiologists and one medical physicist participated in the observation test. All observers read the phantom radiographs on a transillumination image viewer with room lights off. The detectability was defined as the shortest cylinder length whose border the observers could recognize against the background, and was recorded using the number of columns. The detectability was calculated as the average of 10 readings for each observer, and plotted for different phantom diameters. The unweighted pair-group method using arithmetic averages (UPGMA) was adopted for clustering. The observers were clustered into two groups: one group selected objects with a demarcation from the vicinity, and the other group searched for the objects with their eyes constrained. This study showed the usefulness of the clustering method for selecting personnel with similar perceptual predispositions when a C-D phantom was used in image quality control.
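    UPGMA is average-linkage agglomerative clustering, so the grouping step described in this record can be sketched with standard tools: represent each observer by a vector of average detectability readings across phantom diameters and merge observers by average linkage on Euclidean distances. The detectability matrix below is fabricated for illustration; it is not the study's data.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Rows: observers; columns: average detectability (shortest recognizable
# cylinder length, in column numbers) for each phantom diameter.
# These numbers are illustrative, not the study's data.
rng = np.random.default_rng(1)
group_a = 4.0 + 0.3 * rng.standard_normal((10, 10))   # e.g. "demarcation" readers
group_b = 5.5 + 0.3 * rng.standard_normal((10, 10))   # e.g. "constrained-eye" readers
detectability = np.vstack([group_a, group_b])

# UPGMA = average-linkage agglomerative clustering.
Z = linkage(detectability, method="average", metric="euclidean")
labels = fcluster(Z, t=2, criterion="maxclust")
print(labels)   # observers split into two perceptual groups
```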

  11. A multigrid method for variational inequalities

    Energy Technology Data Exchange (ETDEWEB)

    Oliveira, S.; Stewart, D.E.; Wu, W.

    1996-12-31

    Multigrid methods have been used with great success for solving elliptic partial differential equations. Penalty methods have been successful in solving finite-dimensional quadratic programs. In this paper these two techniques are combined to give a fast method for solving obstacle problems. A nonlinear penalized problem is solved using Newton's method for large values of a penalty parameter. Multigrid methods are used to solve the linear systems in Newton's method. The overall numerical method developed is based on an exterior penalty function, and numerical results showing the performance of the method have been obtained.

  12. Rolling Element Bearing Performance Degradation Assessment Using Variational Mode Decomposition and Gath-Geva Clustering Time Series Segmentation

    Directory of Open Access Journals (Sweden)

    Yaolong Li

    2017-01-01

    Full Text Available Focusing on the issue of rolling element bearing (REB) performance degradation assessment (PDA), a solution based on variational mode decomposition (VMD) and Gath-Geva clustering time series segmentation (GGCTSS) is proposed. VMD is a new decomposition method. Unlike recursive decomposition methods, for example, empirical mode decomposition (EMD), local mean decomposition (LMD), and local characteristic-scale decomposition (LCD), VMD needs a priori parameters. In this paper, we propose a method, based on a genetic algorithm, to optimize the parameters in VMD, namely the number of decomposition modes and the moderate bandwidth constraint. Executing VMD with the acquired parameters yields the BLIMFs. By taking the envelope of the BLIMFs, the sensitive BLIMFs are selected, and the amplitude of the defect frequency (ADF) is taken as a degradation feature. The performance degradation assessment is then obtained with Gath-Geva clustering time series segmentation. Finally, the method is applied to two run-to-failure datasets. The results indicate that the extracted feature depicts the degradation process precisely.

  13. Variational iteration method for one dimensional nonlinear thermoelasticity

    International Nuclear Information System (INIS)

    Sweilam, N.H.; Khader, M.M.

    2007-01-01

    This paper applies the variational iteration method to solve the Cauchy problem arising in one-dimensional nonlinear thermoelasticity. The advantage of this method is that it avoids the difficulty of calculating the Adomian polynomials required by the Adomian decomposition method. The numerical results of this method are compared with the exact solution of an artificial model to show the efficiency of the method. The approximate solutions show that the variational iteration method is a powerful mathematical tool for solving nonlinear problems.
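    For readers unfamiliar with the variational iteration method, its correction functional can be demonstrated symbolically on a much simpler nonlinear initial-value problem than the thermoelastic system treated here. The sketch below applies VIM to u' + u² = 0 with u(0) = 1 (exact solution 1/(1 + t)), using the standard Lagrange multiplier λ = -1 for a first-order equation; the test equation and the number of iterations are assumptions chosen only for illustration.

```python
import sympy as sp

t, s = sp.symbols("t s")

def vim_step(u, lam=-1):
    """One VIM correction: u_{n+1}(t) = u_n(t) + integral_0^t lam*(u_n' + u_n**2) ds."""
    residual = sp.diff(u, t) + u**2          # residual of u' + u^2 = 0
    integrand = lam * residual.subs(t, s)
    return sp.expand(u + sp.integrate(integrand, (s, 0, t)))

u = sp.Integer(1)        # initial approximation from u(0) = 1
for n in range(3):
    u = vim_step(u)
    print(f"u_{n+1}(t) =", u)

exact = 1 / (1 + t)
print("error at t=0.2:", float(u.subs(t, 0.2) - exact.subs(t, 0.2)))
```

    Each iteration adds higher-order terms of the exact Taylor series 1 - t + t² - t³ + …, which is the sense in which the correction functional converges without any discretization.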

  14. Survey Shows Variation in Ph.D. Methods Training.

    Science.gov (United States)

    Steeves, Leslie; And Others

    1983-01-01

    Reports on a 1982 survey of journalism graduate studies indicating considerable variation in research methods requirements and emphases in 23 universities offering doctoral degrees in mass communication. (HOD)

  15. Time dependent variational method in quantum mechanics

    International Nuclear Information System (INIS)

    Torres del Castillo, G.F.

    1987-01-01

    Using the fact that the solutions to the time-dependent Schrödinger equation can be obtained from a variational principle, by restricting the evolution of the state vector to some surface in the corresponding Hilbert space, approximations to the exact solutions can be obtained, which are determined by equations similar to Hamilton's equations. It is shown that, in order for the approximate evolution to be well defined on a given surface, the imaginary part of the inner product restricted to the surface must be non-singular. (author)

  16. A Trajectory Regression Clustering Technique Combining a Novel Fuzzy C-Means Clustering Algorithm with the Least Squares Method

    Directory of Open Access Journals (Sweden)

    Xiangbing Zhou

    2018-04-01

    Full Text Available Rapidly growing GPS (Global Positioning System) trajectories hide much valuable information, such as city road planning, urban travel demand, and population migration. In order to mine the hidden information and to capture better clustering results, a trajectory regression clustering method (an unsupervised trajectory clustering method) is proposed to reduce local information loss of the trajectory and to avoid getting stuck in the local optimum. Using this method, we first define our new concept of trajectory clustering and construct a novel partitioning (angle-based partitioning) method of line segments; second, the Lagrange-based method and Hausdorff-based K-means++ are integrated in fuzzy C-means (FCM) clustering, which are used to maintain the stability and the robustness of the clustering process; finally, least squares regression model is employed to achieve regression clustering of the trajectory. In our experiment, the performance and effectiveness of our method is validated against real-world taxi GPS data. When comparing our clustering algorithm with the partition-based clustering algorithms (K-means, K-median, and FCM), our experimental results demonstrate that the presented method is more effective and generates a more reasonable trajectory.
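    As background for the FCM component used in this work, the standard fuzzy C-means alternating updates (membership step and center step) can be written compactly as below. This is plain FCM on point data with assumed toy blobs, not the trajectory-segment, Hausdorff/Lagrange-augmented variant the paper proposes.

```python
import numpy as np

def fuzzy_c_means(X, c=3, m=2.0, n_iter=100, tol=1e-5, seed=0):
    """Plain fuzzy C-means: alternate the membership and center updates."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)             # memberships sum to 1 per point
    for _ in range(n_iter):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        dist = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        inv = dist ** (-2.0 / (m - 1))            # u_ik proportional to d_ik^(-2/(m-1))
        U_new = inv / inv.sum(axis=1, keepdims=True)
        if np.abs(U_new - U).max() < tol:
            return centers, U_new
        U = U_new
    return centers, U

# Toy usage on three Gaussian blobs.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(loc, 0.3, (50, 2)) for loc in ([0, 0], [3, 0], [0, 3])])
centers, U = fuzzy_c_means(X, c=3)
print(np.round(centers, 2))
```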

  17. Clustering method to process signals from a CdZnTe detector

    International Nuclear Information System (INIS)

    Zhang, Lan; Takahashi, Hiroyuki; Fukuda, Daiji; Nakazawa, Masaharu

    2001-01-01

    The poor mobility of holes in a compound semiconductor detector results in the imperfect collection of the primary charge deposited in the detector. Furthermore, the fluctuation of the charge loss efficiency due to the change in the hole collection path length seriously degrades the energy resolution of the detector. Since the charge collection efficiency varies with the signal waveform, we can expect the improvement of the energy resolution through a proper waveform signal processing method. We developed a new digital signal processing technique, a clustering method which derives typical patterns containing the information on the real situation inside a detector from measured signals. The obtained typical patterns for the detector are then used for the pattern matching method. Measured signals are classified through analyzing the practical waveform variation due to charge trapping, the electric field, crystal defects, etc. Signals with similar shape are placed into the same cluster. For each cluster we calculate an average waveform as a reference pattern. Using these reference patterns obtained from all the clusters, we can classify other measured signal waveforms from the same detector. Then signals are independently processed according to the classified category and form corresponding spectra. Finally these spectra are merged into one spectrum by multiplying normalization coefficients. The effectiveness of this method was verified with a 2 mm thick CdZnTe detector and a ¹³⁷Cs gamma-ray source. The obtained energy resolution was improved to about 8 keV (FWHM). Because the clustering method is only related to the measured waveforms, it can be applied to any type and size of detector and is compatible with any type of filtering method. (author)
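    The essential loop described above (group measured waveforms by shape, take the per-cluster average as a reference pattern, then classify subsequent pulses against those references) can be sketched as follows. Synthetic pulses with varying rise times stand in for real CdZnTe signals, and k-means on peak-normalized waveforms replaces the authors' specific clustering criterion; all of these choices are assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 200)

def pulse(rise):
    """Toy preamplifier-like pulse; 'rise' mimics a varying hole-collection path."""
    return (1 - np.exp(-t / rise)) + 0.02 * rng.standard_normal(t.size)

waveforms = np.array([pulse(rise) for rise in rng.uniform(0.02, 0.3, 500)])

# Normalize shapes (unit peak) so clustering responds to shape, not amplitude.
shapes = waveforms / waveforms.max(axis=1, keepdims=True)

# Group similar shapes and build an average reference pattern per cluster.
km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(shapes)
references = np.array([shapes[km.labels_ == k].mean(axis=0) for k in range(4)])

def classify(waveform):
    """Assign a new pulse to the nearest reference pattern."""
    shape = waveform / waveform.max()
    return int(np.argmin(np.linalg.norm(references - shape, axis=1)))

print(classify(pulse(0.05)), classify(pulse(0.25)))   # typically different clusters
```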

  18. On Self-Adaptive Method for General Mixed Variational Inequalities

    Directory of Open Access Journals (Sweden)

    Abdellah Bnouhachem

    2008-01-01

    Full Text Available We suggest and analyze a new self-adaptive method for solving general mixed variational inequalities, which can be viewed as an improvement of the method of Noor (2003). Global convergence of the new method is proved under the same assumptions as Noor's method. Some preliminary computational results are given to illustrate the efficiency of the proposed method. Since the general mixed variational inequalities include general variational inequalities, quasivariational inequalities, and nonlinear (implicit) complementarity problems as special cases, results proved in this paper continue to hold for these problems.

  19. A Modified Alternating Direction Method for Variational Inequality Problems

    International Nuclear Information System (INIS)

    Han, D.

    2002-01-01

    The alternating direction method is an attractive method for solving large-scale variational inequality problems whenever the subproblems can be solved efficiently. However, the subproblems are still variational inequality problems, which are as structurally difficult to solve as the original one. To overcome this disadvantage, in this paper we propose a new alternating direction method for solving a class of nonlinear monotone variational inequality problems. In each iteration the method just makes an orthogonal projection to a simple set and some function evaluations. We report some preliminary computational results to illustrate the efficiency of the method
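    As a concrete anchor for the projection-type iterations discussed in these records, the sketch below applies the classical extragradient scheme (a prediction projection followed by a correction projection) to a small monotone affine operator over a box, where the projection is simple clipping. The operator, feasible set, and step size are assumptions, and this is the textbook method rather than the modified alternating direction scheme proposed in the paper.

```python
import numpy as np

# Monotone but asymmetric affine operator F(x) = M x + q.
M = np.array([[2.0, 1.0], [-1.0, 2.0]])   # positive definite symmetric part -> monotone
q = np.array([-1.0, -1.0])
F = lambda x: M @ x + q

# Feasible set C: a simple box, so the projection is just clipping.
proj = lambda x: np.clip(x, 0.0, 2.0)

x = np.zeros(2)
gamma = 0.2                               # step size below 1/Lipschitz constant
for _ in range(200):
    y = proj(x - gamma * F(x))            # prediction step
    x_new = proj(x - gamma * F(y))        # correction step
    if np.linalg.norm(x_new - x) < 1e-10:
        x = x_new
        break
    x = x_new

print("approximate VI solution:", x,
      "residual:", np.linalg.norm(x - proj(x - F(x))))
```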

  20. A New Waveform Signal Processing Method Based on Adaptive Clustering-Genetic Algorithms

    International Nuclear Information System (INIS)

    Noha Shaaban; Fukuzo Masuda; Hidetsugu Morota

    2006-01-01

    We present a fast digital signal processing method for the numerical analysis of individual pulses from CdZnTe compound semiconductor detectors, using a Maxi-Mini Distance Algorithm and a Genetic Algorithm based discrimination technique. A parametric approach has been used to classify the discriminated waveforms into a set of clusters, each with a similar signal shape and a corresponding pulse height spectrum. A corrected total pulse height spectrum was obtained by applying a normalization factor to the full-energy peak of each cluster, yielding a marked improvement in the energy spectrum characteristics. This method was applied successfully to both simulated and real measured data, and it can be applied to any detector that suffers from signal shape variation. (authors)

  1. A Comparison of Methods for Player Clustering via Behavioral Telemetry

    DEFF Research Database (Denmark)

    Drachen, Anders; Thurau, C.; Sifa, R.

    2013-01-01

The analysis of user behavior in digital games has been aided by the introduction of user telemetry in game development, which provides unprecedented access to quantitative data on user behavior from the installed game clients of the entire population of players. Player behavior telemetry datasets … patterns in the behavioral data, and developing profiles that are actionable to game developers. There are numerous methods for unsupervised clustering of user behavior, e.g. k-means/c-means, Nonnegative Matrix Factorization, or Principal Component Analysis. Although all yield behavior categorizations …, interpretation of the resulting categories in terms of actual play behavior can be difficult if not impossible. In this paper, a range of unsupervised techniques are applied together with Archetypal Analysis to develop behavioral clusters from playtime data of 70,014 World of Warcraft players, covering a five …

  2. Cluster monte carlo method for nuclear criticality safety calculation

    International Nuclear Information System (INIS)

    Pei Lucheng

    1984-01-01

    One of the most important applications of the Monte Carlo method is the calculation of nuclear criticality safety. The fair source game problem was presented at almost the same time as the Monte Carlo method was first applied to nuclear criticality safety calculations. In such problems the source iteration cost is to be reduced as much as possible, or no source iteration is needed at all. These problems all belong to the class of fair source game problems, among which the optimal source game requires no source iteration. Although the single-neutron Monte Carlo method solves the problem without source iteration, it has an apparent shortcoming: it does so only in the asymptotic sense. In this work, a new Monte Carlo method, called the cluster Monte Carlo method, is given to address this problem further.

  3. Hybrid Steepest-Descent Methods for Triple Hierarchical Variational Inequalities

    Directory of Open Access Journals (Sweden)

    L. C. Ceng

    2015-01-01

    Full Text Available We introduce and analyze a relaxed iterative algorithm by combining Korpelevich’s extragradient method, hybrid steepest-descent method, and Mann’s iteration method. We prove that, under appropriate assumptions, the proposed algorithm converges strongly to a common element of the fixed point set of infinitely many nonexpansive mappings, the solution set of finitely many generalized mixed equilibrium problems (GMEPs), the solution set of finitely many variational inclusions, and the solution set of general system of variational inequalities (GSVI), which is just a unique solution of a triple hierarchical variational inequality (THVI) in a real Hilbert space. In addition, we also consider the application of the proposed algorithm for solving a hierarchical variational inequality problem with constraints of finitely many GMEPs, finitely many variational inclusions, and the GSVI. The results obtained in this paper improve and extend the corresponding results announced by many others.

  4. Excitonic Order and Superconductivity in the Two-Orbital Hubbard Model: Variational Cluster Approach

    Science.gov (United States)

    Fujiuchi, Ryo; Sugimoto, Koudai; Ohta, Yukinori

    2018-06-01

    Using the variational cluster approach based on the self-energy functional theory, we study the possible occurrence of excitonic order and superconductivity in the two-orbital Hubbard model with intra- and inter-orbital Coulomb interactions. It is known that an antiferromagnetic Mott insulator state appears in the regime of strong intra-orbital interaction, a band insulator state appears in the regime of strong inter-orbital interaction, and an excitonic insulator state appears between them. In addition to these states, we find that the s±-wave superconducting state appears in the small-correlation regime, and the d_{x²-y²}-wave superconducting state appears on the boundary of the antiferromagnetic Mott insulator state. We calculate the single-particle spectral function of the model and compare the band gap formation due to the superconducting and excitonic orders.

  5. LIGHT-ELEMENT ABUNDANCE VARIATIONS AT LOW METALLICITY: THE GLOBULAR CLUSTER NGC 5466

    International Nuclear Information System (INIS)

    Shetrone, Matthew; Martell, Sarah L.; Wilkerson, Rachel; Adams, Joshua; Siegel, Michael H.; Smith, Graeme H.; Bond, Howard E.

    2010-01-01

    We present low-resolution (R ≅ 850) spectra for 67 asymptotic giant branch (AGB), horizontal branch, and red giant branch (RGB) stars in the low-metallicity globular cluster NGC 5466, taken with the VIRUS-P integral-field spectrograph at the 2.7 m Harlan J. Smith telescope at McDonald Observatory. Sixty-six stars are confirmed, and one rejected, as cluster members based on radial velocity, which we measure to an accuracy of 16 km s⁻¹ via template-matching techniques. CN and CH band strengths have been measured for 29 RGB and AGB stars in NGC 5466, and the band-strength indices measured from VIRUS-P data show close agreement with those measured from Keck/LRIS spectra previously taken for five of our target stars. We also determine carbon abundances from comparisons with synthetic spectra. The RGB stars in our data set cover a range in absolute V magnitude from +2 to -3, which permits us to study the rate of carbon depletion on the giant branch as well as the point of its onset. The data show a clear decline in carbon abundance with rising luminosity above the luminosity function 'bump' on the giant branch, and also a subdued range in CN band strength, suggesting ongoing internal mixing in individual stars but minor or no primordial star-to-star variation in light-element abundances.

  6. Ancestral Variations of the PCDHG Gene Cluster Predispose to Dyslexia in a Multiplex Family

    Directory of Open Access Journals (Sweden)

    Teesta Naskar

    2018-02-01

    Full Text Available Dyslexia is a heritable neurodevelopmental disorder characterized by difficulties in reading and writing. In this study, we describe the identification of a set of 17 polymorphisms located across a 1.9 Mb region on chromosome 5q31.3, encompassing genes of the PCDHG cluster, TAF7, PCDH1 and ARHGAP26, dominantly inherited with dyslexia in a multi-incident family. Strikingly, the non-risk forms of seven variations of the PCDHG cluster are preponderant in the human lineage, while the risk alleles are ancestral and conserved from Neanderthals to non-human primates. Four of these seven ancestral variations (c.460A > C [p.Ile154Leu], c.541G > A [p.Ala181Thr], c.2036G > C [p.Arg679Pro] and c.2059A > G [p.Lys687Glu]) result in amino acid alterations. p.Ile154Leu and p.Ala181Thr are present at the EC2:EC3 interacting interface of γA3-PCDH and γA4-PCDH, respectively, and might affect trans-homophilic interaction and hence neuronal connectivity. p.Arg679Pro and p.Lys687Glu are present within the linker region connecting the trans-membrane to the extracellular domain. Sequence analysis indicated the importance of p.Ile154, p.Arg679 and p.Lys687 in maintaining class specificity. Thus the observed association of PCDHG genes encoding neural adhesion proteins reinforces the hypothesis of aberrant neuronal connectivity in the pathophysiology of dyslexia. Additionally, the striking conservation of the identified variants indicates a role of PCDHG in the evolution of highly specialized cognitive skills critical to reading.

  7. Symmetrized partial-wave method for density-functional cluster calculations

    International Nuclear Information System (INIS)

    Averill, F.W.; Painter, G.S.

    1994-01-01

    The computational advantage and accuracy of the Harris method is linked to the simplicity and adequacy of the reference-density model. In an earlier paper, we investigated one way the Harris functional could be extended to systems outside the limits of weakly interacting atoms by making the charge density of the interacting atoms self-consistent within the constraints of overlapping spherical atomic densities. In the present study, a method is presented for augmenting the interacting atom charge densities with symmetrized partial-wave expansions on each atomic site. The added variational freedom of the partial waves leads to a scheme capable of giving exact results within a given exchange-correlation approximation while maintaining many of the desirable convergence and stability properties of the original Harris method. Incorporation of the symmetry of the cluster in the partial-wave construction further reduces the level of computational effort. This partial-wave cluster method is illustrated by its application to the dimer C 2 , the hypothetical atomic cluster Fe 6 Al 8 , and the benzene molecule

  8. Method of removing crud deposited on fuel element clusters

    International Nuclear Information System (INIS)

    Yokota, Tokunobu; Yashima, Akira; Tajima, Jun-ichiro.

    1982-01-01

    Purpose: To enable easy removal of crud deposited on the surfaces of fuel elements. Method: An operator manipulates a pole from above a platform, engages the longitudinal flange of the cover with the opening at the upper end of a channel box and starts up a suction pump. The suction of the pump is set such that the water flow rate within the channel box becomes greater than the operational flow rate in the channel box of the fuel element clusters during reactor operation. This enables crud deposited on the surface of individual fuel elements to be removed easily and rapidly without detaching the channel box. (Moriyama, K.)

  9. Determining wood chip size: image analysis and clustering methods

    Directory of Open Access Journals (Sweden)

    Paolo Febbi

    2013-09-01

    Full Text Available One of the standard methods for the determination of the size distribution of wood chips is the oscillating screen method (EN 15149-1:2010). Recent literature demonstrated how image analysis could return highly accurate measures of the dimensions defined for each individual particle, and could promote a new method depending on the geometrical shape to determine the chip size in a more accurate way. A sample of wood chips (8 litres) was sieved through horizontally oscillating sieves, using five different screen hole diameters (3.15, 8, 16, 45, 63 mm); the wood chips were sorted in decreasing size classes and the mass of all fractions was used to determine the size distribution of the particles. Since the chip shape and size influence the sieving results, Wang's theory, which concerns the geometric forms, was considered. A cluster analysis on the shape descriptors (Fourier descriptors) and size descriptors (area, perimeter, Feret diameters, eccentricity) was applied to observe the chips distribution. The UPGMA algorithm was applied on the Euclidean distance. The obtained dendrogram shows a group separation in accordance with the original three sieving fractions. A comparison has been made between the traditional sieve and clustering results. This preliminary result shows how the image analysis-based method has a high potential for the characterization of wood chip size distribution and could be further investigated. Moreover, this method could be implemented in an online detection machine for chip size characterization. An improvement of the results is expected by using supervised multivariate methods that utilize known class memberships. The main objective of the future activities will be to shift the analysis from a 2-dimensional method to a 3-dimensional acquisition process.

  10. Discrete variational methods and their application to electronic structures

    International Nuclear Information System (INIS)

    Ellis, D.E.

    1987-01-01

    Some general concepts concerning Discrete Variational methods are developed and applied to problems of determination of electronic spectra, charge densities and bonding of free molecules, surface-chemisorbed species and bulk solids. (M.W.O.) [pt

  11. A convergent overlapping domain decomposition method for total variation minimization

    KAUST Repository

    Fornasier, Massimo; Langer, Andreas; Schönlieb, Carola-Bibiane

    2010-01-01

    In this paper we are concerned with the analysis of convergent sequential and parallel overlapping domain decomposition methods for the minimization of functionals formed by a discrepancy term with respect to the data and a total variation

  12. Episodic paroxysmal hemicrania with seasonal variation: case report and the EPH-cluster headache continuum hypothesis

    Directory of Open Access Journals (Sweden)

    Veloso Germany Gonçalves

    2001-01-01

    Full Text Available Episodic paroxysmal hemicrania (EPH) is a rare disorder characterized by frequent, daily attacks of short-lived, unilateral headache with accompanying ipsilateral autonomic features. EPH has attack periods which last weeks to months, separated by remission intervals lasting months to years; however, a seasonal variation has never been reported in EPH. We report a new case of EPH with a clear seasonal pattern: a 32-year-old woman with a right-sided headache for 17 years. Pain occurred with a seasonal variation, with bouts lasting one month (usually in the first months of the year) and remission periods lasting around 11 months. During these periods she had headaches three to five times per day, lasting from 15 to 30 minutes, without any particular period preference. There were no precipitating or aggravating factors. Tearing and conjunctival injection accompanied the pain ipsilaterally. Previous treatments provided no pain relief. She completely responded to indomethacin 75 mg daily. After three years, the pain recurred with longer attack duration and was only relieved with prednisone. We also propose a new hypothesis: the EPH-cluster headache continuum.

  13. Exploring gravitational lensing model variations in the Frontier Fields galaxy clusters

    Science.gov (United States)

    Harris James, Nicholas John; Raney, Catie; Brennan, Sean; Keeton, Charles

    2018-01-01

    Multiple groups have been working on modeling the mass distributions of the six lensing galaxy clusters in the Hubble Space Telescope Frontier Fields data set. The magnification maps produced from these mass models will be important for the future study of the lensed background galaxies, but there exists significant variation in the different groups’ models and magnification maps. We explore the use of two-dimensional histograms as a tool for visualizing these magnification map variations. Using a number of simple, one- or two-halo singular isothermal sphere models, we explore the features that are produced in 2D histogram model comparisons when parameters such as halo mass, ellipticity, and location are allowed to vary. Our analysis demonstrates the potential of 2D histograms as a means of observing the full range of differences between the Frontier Fields groups’ models. This work has been supported by funding from National Science Foundation grants PHY-1560077 and AST-1211385, and from the Space Telescope Science Institute.

  14. Solution of problems in calculus of variations via He's variational iteration method

    International Nuclear Information System (INIS)

    Tatari, Mehdi; Dehghan, Mehdi

    2007-01-01

    In the modeling of a large class of problems in science and engineering, the minimization of a functional appears. Finding the solution of these problems requires solving the corresponding ordinary differential equations, which are generally nonlinear. In recent years He's variational iteration method has attracted a lot of attention from researchers for solving nonlinear problems. This method finds the solution of the problem without any discretization of the equation. Since this method gives a closed-form solution of the problem and avoids round-off errors, it can be considered an efficient method for solving various kinds of problems. In this research He's variational iteration method is employed to solve some problems in the calculus of variations. Some examples are presented to show the efficiency of the proposed technique.

  15. Simple method to calculate percolation, Ising and Potts clusters

    International Nuclear Information System (INIS)

    Tsallis, C.

    1981-01-01

    A procedure ('break-collapse method') is introduced which considerably simplifies the calculation of two- or multirooted clusters like those commonly appearing in real-space renormalization group (RG) treatments of bond percolation, and of pure and random Ising and Potts problems. The method is illustrated through two applications for the q-state Potts ferromagnet. The first of them concerns a RG calculation of the critical exponent ν for the isotropic square lattice: numerical consistency is obtained (particularly for q→0) with the den Nijs conjecture. The second application is a compact reformulation of the standard star-triangle and duality transformations which provide the exact critical temperature for the anisotropic triangular and honeycomb lattices. (Author) [pt

  16. Expanding Comparative Literature into Comparative Sciences Clusters with Neutrosophy and Quad-stage Method

    Directory of Open Access Journals (Sweden)

    Fu Yuhua

    2016-08-01

    Full Text Available By using Neutrosophy and Quad-stage Method, the expansions of comparative literature include: comparative social sciences clusters, comparative natural sciences clusters, comparative interdisciplinary sciences clusters, and so on. Among them, comparative social sciences clusters include: comparative literature, comparative history, comparative philosophy, and so on; comparative natural sciences clusters include: comparative mathematics, comparative physics, comparative chemistry, comparative medicine, comparative biology, and so on.

  17. bcl::Cluster: A method for clustering biological molecules coupled with visualization in the Pymol Molecular Graphics System.

    Science.gov (United States)

    Alexander, Nathan; Woetzel, Nils; Meiler, Jens

    2011-02-01

    Clustering algorithms are used as data analysis tools in a wide variety of applications in Biology. Clustering has become especially important in protein structure prediction and virtual high throughput screening methods. In protein structure prediction, clustering is used to structure the conformational space of thousands of protein models. In virtual high throughput screening, databases with millions of drug-like molecules are organized by structural similarity, e.g. common scaffolds. The tree-like dendrogram structure obtained from hierarchical clustering can provide a qualitative overview of the results, which is important for focusing detailed analysis. However, in practice it is difficult to relate specific components of the dendrogram directly back to the objects of which it is comprised and to display all desired information within the two dimensions of the dendrogram. The current work presents a hierarchical agglomerative clustering method termed bcl::Cluster. bcl::Cluster utilizes the Pymol Molecular Graphics System to graphically depict dendrograms in three dimensions. This allows simultaneous display of relevant biological molecules as well as additional information about the clusters and the members comprising them.

  18. Cluster expansion of the wavefunction. Symmetry-adapted-cluster expansion, its variational determination, and extension of open-shell orbital theory

    International Nuclear Information System (INIS)

    Nakatsuji, H.; Hirao, K.

    1978-01-01

    The symmetry-adapted-cluster (SAC) expansion of an exact wavefunction is given. It is constructed from the generators of the symmetry-adapted excited configurations having the symmetry under consideration, and includes their higher-order effect and self-consistency effect. It is different from the conventional cluster expansions in several important points, and is suitable for applications to open-shell systems as well as closed-shell systems. The variational equation for the SAC wavefunction has a form similar to the generalized Brillouin theorem in accordance with the inclusion of the higher-order effect and the self-consistency effect. We have expressed some existing open-shell orbital theories equivalently in the conventional cluster expansion formulas, and on this basis, we have given the pseudo-orbital theory which is an extension of open-shell orbital theory in the SAC expansion formula

  19. Application of clustering methods: Regularized Markov clustering (R-MCL) for analyzing dengue virus similarity

    Science.gov (United States)

    Lestari, D.; Raharjo, D.; Bustamam, A.; Abdillah, B.; Widhianto, W.

    2017-07-01

    Dengue virus consists of 10 different constituent proteins and is classified into 4 major serotypes (DEN 1 - DEN 4). This study was designed to perform clustering of 30 dengue virus protein sequences taken from the Virus Pathogen Database and Analysis Resource (VIPR) using the Regularized Markov Clustering (R-MCL) algorithm and then to analyze the result. Implemented in Python 3.4, the R-MCL algorithm produces 8 clusters, with more than one centroid in several clusters. The number of centroids reflects the density of interactions. Protein interactions that are connected in a tissue form a protein complex that serves as a unit of a specific biological process. The analysis of the results shows that R-MCL clustering groups the dengue virus family based on the similar roles of their constituent proteins, regardless of serotype.
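    The Markov clustering iteration underlying R-MCL alternates expansion (squaring a column-stochastic matrix) with inflation (entry-wise powering and renormalization) until the matrix settles into attractor blocks that define the clusters. The sketch below implements plain MCL on a toy similarity matrix; it omits the regularization step that distinguishes R-MCL, and the matrix and inflation parameter are illustrative assumptions.

```python
import numpy as np

def normalize(M):
    """Make the matrix column-stochastic."""
    return M / M.sum(axis=0, keepdims=True)

def mcl(similarity, inflation=2.0, n_iter=100, tol=1e-9):
    """Basic Markov clustering: alternate expansion (matrix squaring)
    with inflation (entry-wise powering + renormalization)."""
    M = normalize(similarity + np.eye(len(similarity)))   # add self-loops
    for _ in range(n_iter):
        M_new = normalize((M @ M) ** inflation)           # expand, then inflate
        if np.abs(M_new - M).max() < tol:
            M = M_new
            break
        M = M_new
    # In the converged matrix, non-empty rows are attractors; the columns
    # they dominate define the clusters.
    clusters = []
    for row in M:
        members = frozenset(np.flatnonzero(row > 0.01))
        if members and members not in clusters:
            clusters.append(members)
    return clusters

# Toy similarity matrix: two groups {0, 1, 2} and {3, 4} joined by a weak edge.
S = np.array([[0, 1, 1, 0.0, 0],
              [1, 0, 1, 0.0, 0],
              [1, 1, 0, 0.1, 0],
              [0, 0, 0.1, 0, 1],
              [0, 0, 0.0, 1, 0]], dtype=float)
print(mcl(S))   # typically: [frozenset({0, 1, 2}), frozenset({3, 4})]
```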

  20. The Local Maximum Clustering Method and Its Application in Microarray Gene Expression Data Analysis

    Directory of Open Access Journals (Sweden)

    Chen Yidong

    2004-01-01

    Full Text Available An unsupervised data clustering method, called the local maximum clustering (LMC) method, is proposed for identifying clusters in experiment data sets based on research interest. A magnitude property is defined according to research purposes, and data sets are clustered around each local maximum of the magnitude property. By properly defining a magnitude property, this method can overcome many difficulties in microarray data clustering such as reduced projection in similarities, noises, and arbitrary gene distribution. To critically evaluate the performance of this clustering method in comparison with other methods, we designed three model data sets with known cluster distributions and applied the LMC method as well as the hierarchic clustering method, the k-means clustering method, and the self-organized map method to these model data sets. The results show that the LMC method produces the most accurate clustering results. As an example of application, we applied the method to cluster the leukemia samples reported in the microarray study of Golub et al. (1999).
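    One simplified reading of the LMC idea (define a magnitude property for every data point and attach each point to the nearest local maximum of that magnitude within a neighborhood) can be sketched as a mode-seeking assignment. The magnitude used below, a kernel-density-like score, and the neighborhood radius are assumptions; the original method defines the magnitude property according to the research purpose.

```python
import numpy as np

def local_maximum_clustering(X, radius=1.0):
    """Attach every point to a local maximum of a magnitude property
    (here: a Gaussian kernel density) by repeatedly stepping to the
    highest-magnitude neighbour within `radius`."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    magnitude = np.exp(-d2 / (2 * radius**2)).sum(axis=1)   # KDE-like magnitude
    neighbours = d2 <= radius**2

    # parent[i] = neighbour of i with the largest magnitude (may be i itself)
    parent = np.array([np.flatnonzero(nb)[magnitude[nb].argmax()] for nb in neighbours])

    # Follow parents until every point reaches a local maximum (a self-parent).
    labels = parent.copy()
    while not np.array_equal(labels, parent[labels]):
        labels = parent[labels]
    # Relabel local maxima as consecutive cluster ids.
    _, labels = np.unique(labels, return_inverse=True)
    return labels

rng = np.random.default_rng(2)
X = np.vstack([rng.normal([0, 0], 0.4, (60, 2)), rng.normal([4, 4], 0.4, (60, 2))])
print(np.bincount(local_maximum_clustering(X, radius=1.0)))   # typically ~[60, 60]
```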

  1. Nucleon matrix elements using the variational method in lattice QCD

    International Nuclear Information System (INIS)

    Dragos, J.; Kamleh, W.; Leinweber, D.B.; Zanotti, J.M.; Rakow, P.E.L.; Young, R.D.; Adelaide Univ., SA

    2016-06-01

    The extraction of hadron matrix elements in lattice QCD using the standard two- and three-point correlator functions demands careful attention to systematic uncertainties. One of the most commonly studied sources of systematic error is contamination from excited states. We apply the variational method to calculate the axial vector current g_A, the scalar current g_S and the quark momentum fraction ⟨x⟩ of the nucleon and we compare the results to the more commonly used summation and two-exponential fit methods. The results demonstrate that the variational approach offers a more efficient and robust method for the determination of nucleon matrix elements.

  2. Improved determination of hadron matrix elements using the variational method

    International Nuclear Information System (INIS)

    Dragos, J.; Kamleh, W.; Leinweber, D.B.; Zanotti, J.M.; Rakow, P.E.L.; Young, R.D.; Adelaide Univ.

    2015-11-01

    The extraction of hadron form factors in lattice QCD using the standard two- and three-point correlator functions has its limitations. One of the most commonly studied sources of systematic error is excited state contamination, which occurs when correlators are contaminated with results from higher energy excitations. We apply the variational method to calculate the axial vector current g_A and compare the results to the more commonly used summation and two-exponential fit methods. The results demonstrate that the variational approach offers a more efficient and robust method for the determination of nucleon matrix elements.

  3. Use of the Local Variation Methods for Nuclear Design Calculations

    International Nuclear Information System (INIS)

    Zhukov, A.I.

    2006-01-01

    A new method for solving the steady-state equations that describe neutron diffusion is presented. The method is based on a variational principle for the steady-state diffusion equations and a direct search for the minimum of the corresponding functional. Benchmark problem calculations for the power of fuel assemblies show ∼2% relative accuracy

  4. Variation Iteration Method for The Approximate Solution of Nonlinear ...

    African Journals Online (AJOL)

    In this study, we considered the numerical solution of the nonlinear Burgers equation using the Variational Iteration Method (VIM). The method seeks to examine the convergence of solutions of the Burgers equation in terms of the parameters x and t, on which the amount of error depends. Numerical experimentation ...

  5. Some Implicit Methods for Solving Harmonic Variational Inequalities

    Directory of Open Access Journals (Sweden)

    Muhammad Aslam Noor

    2016-08-01

    Full Text Available In this paper, we use the auxiliary principle technique to suggest an implicit method for solving harmonic variational inequalities. It is shown that the convergence of the proposed method requires only pseudo-monotonicity of the operator, which is a weaker condition than monotonicity.

  6. A comparison of heuristic and model-based clustering methods for dietary pattern analysis.

    Science.gov (United States)

    Greve, Benjamin; Pigeot, Iris; Huybrechts, Inge; Pala, Valeria; Börnhorst, Claudia

    2016-02-01

    Cluster analysis is widely applied to identify dietary patterns. A new method based on Gaussian mixture models (GMM) seems to be more flexible compared with the commonly applied k-means and Ward's method. In the present paper, these clustering approaches are compared to find the most appropriate one for clustering dietary data. The clustering methods were applied to simulated data sets with different cluster structures to compare their performance, knowing the true cluster membership of observations. Furthermore, the three methods were applied to FFQ data assessed in 1791 children participating in the IDEFICS (Identification and Prevention of Dietary- and Lifestyle-Induced Health Effects in Children and Infants) Study to explore their performance in practice. The GMM outperformed the other methods in the simulation study in 72% up to 100% of cases, depending on the simulated cluster structure. Comparing the computationally less complex k-means and Ward's methods, the performance of k-means was better in 64-100% of cases. Applied to real data, all methods identified three similar dietary patterns which may be roughly characterized as a 'non-processed' cluster with a high consumption of fruits, vegetables and wholemeal bread, a 'balanced' cluster with only slight preferences for single foods and a 'junk food' cluster. The simulation study suggests that clustering via GMM should be preferred due to its higher flexibility regarding cluster volume, shape and orientation. The k-means seems to be a good alternative, being easier to use while giving similar results when applied to real data.
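
    A brief scikit-learn sketch of the kind of comparison described above (GMM versus k-means versus Ward's method on simulated clusters with known membership). The simulated blobs, the three-cluster setting, and the use of the adjusted Rand index are illustrative assumptions, not the study's protocol:

        import numpy as np
        from sklearn.datasets import make_blobs
        from sklearn.cluster import KMeans, AgglomerativeClustering
        from sklearn.mixture import GaussianMixture
        from sklearn.metrics import adjusted_rand_score

        # Simulated "dietary pattern" data with a known cluster membership.
        X, truth = make_blobs(n_samples=600, centers=3, cluster_std=[1.0, 2.5, 0.5],
                              random_state=0)

        labels = {
            "k-means": KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X),
            "Ward": AgglomerativeClustering(n_clusters=3, linkage="ward").fit_predict(X),
            "GMM": GaussianMixture(n_components=3, covariance_type="full",
                                   random_state=0).fit(X).predict(X),
        }

        # Flexible covariances let the GMM recover clusters of unequal volume and shape.
        for name, lab in labels.items():
            print(f"{name:8s} adjusted Rand index = {adjusted_rand_score(truth, lab):.3f}")

    Varying cluster_std and cluster shapes in the simulation reproduces the qualitative point of the paper: the full-covariance GMM tolerates unequal cluster volumes better than the two heuristic methods.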

  7. STAR-TO-STAR IRON ABUNDANCE VARIATIONS IN RED GIANT BRANCH STARS IN THE GALACTIC GLOBULAR CLUSTER NGC 3201

    International Nuclear Information System (INIS)

    Simmerer, Jennifer; Ivans, Inese I.; Filler, Dan; Francois, Patrick; Charbonnel, Corinne; Monier, Richard; James, Gaël

    2013-01-01

    We present the metallicity as traced by the abundance of iron in the retrograde globular cluster NGC 3201, measured from high-resolution, high signal-to-noise spectra of 24 red giant branch stars. A spectroscopic analysis reveals a spread in [Fe/H] in the cluster stars at least as large as 0.4 dex. Star-to-star metallicity variations are supported both through photometry and through a detailed examination of spectra. We find no correlation between iron abundance and distance from the cluster core, as might be inferred from recent photometric studies. NGC 3201 is the lowest mass halo cluster to date to contain stars with significantly different [Fe/H] values.

  8. Star-to-star Iron Abundance Variations in Red Giant Branch Stars in the Galactic Globular Cluster NGC 3201

    Science.gov (United States)

    Simmerer, Jennifer; Ivans, Inese I.; Filler, Dan; Francois, Patrick; Charbonnel, Corinne; Monier, Richard; James, Gaël

    2013-02-01

    We present the metallicity as traced by the abundance of iron in the retrograde globular cluster NGC 3201, measured from high-resolution, high signal-to-noise spectra of 24 red giant branch stars. A spectroscopic analysis reveals a spread in [Fe/H] in the cluster stars at least as large as 0.4 dex. Star-to-star metallicity variations are supported both through photometry and through a detailed examination of spectra. We find no correlation between iron abundance and distance from the cluster core, as might be inferred from recent photometric studies. NGC 3201 is the lowest mass halo cluster to date to contain stars with significantly different [Fe/H] values.

  9. The Views of Turkish Pre-Service Teachers about Effectiveness of Cluster Method as a Teaching Writing Method

    Science.gov (United States)

    Kitis, Emine; Türkel, Ali

    2017-01-01

    The aim of this study is to find out Turkish pre-service teachers' views on the effectiveness of the cluster method as a method of teaching writing. The cluster method can be defined as a connotative creative writing method. The way the method works is that the person brainstorms on the connotations of a word or a concept in the absence of any kind of…

  10. Short-Term Wind Power Forecasting Based on Clustering Pre-Calculated CFD Method

    Directory of Open Access Journals (Sweden)

    Yimei Wang

    2018-04-01

    Full Text Available To meet the increasing wind power forecasting (WPF) demands of newly built wind farms without historical data, physical WPF methods are widely used. The computational fluid dynamics (CFD) pre-calculated flow fields (CPFF)-based WPF is a promising physical approach, which can balance well the competing demands of computational efficiency and accuracy. To enhance its adaptability for wind farms in complex terrain, a WPF method combining wind turbine clustering with CPFF is first proposed, where the wind turbines in the wind farm are clustered and a forecast is undertaken for each cluster. K-means, hierarchical agglomerative and spectral analysis methods are used to establish the wind turbine clustering models. The Silhouette Coefficient, Calinski-Harabasz index and within-between index are proposed as criteria to evaluate the effectiveness of the established clustering models. Based on different clustering methods and schemes, various clustering databases are built for clustering pre-calculated CFD (CPCC)-based short-term WPF. For the wind farm case studied, clustering evaluation criteria show that hierarchical agglomerative clustering has reasonable results, spectral clustering is better and K-means gives the best performance. The WPF results produced by different clustering databases also prove the effectiveness of the three evaluation criteria in turn. The newly developed CPCC model has a much higher WPF accuracy than the CPFF model without using clustering techniques, on both temporal and spatial scales. The research provides support for both the development and improvement of short-term physical WPF systems.
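
    As a rough illustration of the clustering-evaluation step described above, the hedged scikit-learn sketch below scores k-means, hierarchical agglomerative and spectral clusterings of synthetic turbine coordinates with the silhouette and Calinski-Harabasz criteria. The data, the cluster count, the neighbourhood size and the omission of the within-between index are assumptions for illustration only:

        import numpy as np
        from sklearn.cluster import KMeans, AgglomerativeClustering, SpectralClustering
        from sklearn.metrics import silhouette_score, calinski_harabasz_score

        rng = np.random.default_rng(1)
        # Synthetic turbine positions: three loose groups in a complex-terrain wind farm.
        turbines = np.vstack([rng.normal(c, 0.4, size=(20, 2))
                              for c in ([0, 0], [3, 1], [1, 4])])

        models = {
            "k-means": KMeans(n_clusters=3, n_init=10, random_state=1),
            "hierarchical": AgglomerativeClustering(n_clusters=3),
            "spectral": SpectralClustering(n_clusters=3, random_state=1,
                                           affinity="nearest_neighbors", n_neighbors=10),
        }

        for name, model in models.items():
            labels = model.fit_predict(turbines)
            print(name,
                  "silhouette:", round(silhouette_score(turbines, labels), 3),
                  "Calinski-Harabasz:", round(calinski_harabasz_score(turbines, labels), 3))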

  11. The variational nodal method: history and recent accomplishments

    International Nuclear Information System (INIS)

    Lewis, E.E.

    2004-01-01

    The variational nodal method combines spherical harmonics expansions in angle with hybrid finite element techniques in space to obtain multigroup transport response matrix algorithms applicable to both deep penetration and reactor core physics problems. This survey briefly recounts the method's history and reviews its capabilities. The variational basis for the approach is presented and two methods for obtaining discretized equations in the form of response matrices are detailed. The first is that contained in the widely used VARIANT code, while the second incorporates newly developed integral transport techniques into the variational nodal framework. The two approaches are combined with a finite sub-element formulation to treat heterogeneous nodes. Applications are presented both for a deep penetration problem and for an OECD benchmark consisting of LWR MOX fuel assemblies. Ongoing work is discussed. (Author)

  12. Motion estimation using point cluster method and Kalman filter.

    Science.gov (United States)

    Senesh, M; Wolf, A

    2009-05-01

    The most frequently used method in three-dimensional human gait analysis involves placing markers on the skin of the analyzed segment. This introduces a significant artifact, which strongly influences estimates of bone position and orientation and of joint kinematics. In this study, we tested and evaluated the effect of adding a Kalman filter procedure to the previously reported point cluster technique (PCT) in the estimation of rigid body motion. We demonstrated the procedures by motion analysis of a compound planar pendulum from indirect opto-electronic measurements of markers attached to an elastic appendage that is restrained to slide along the rigid body's long axis. The elastic frequency is close to the pendulum frequency, as in the biomechanical problem, where the soft tissue frequency content is similar to the actual movement of the bones. Comparison of the real pendulum angle to that obtained by several estimation procedures--PCT, Kalman filter followed by PCT, and low pass filter followed by PCT--enables evaluation of the accuracy of the procedures. When comparing the maximal amplitude, no effect was noted from adding the Kalman filter; however, a closer look at the signal revealed that the estimated angle based only on the PCT method was very noisy, with fluctuations, while the estimated angle based on the Kalman filter followed by the PCT was a smooth signal. It was also noted that the instantaneous frequencies obtained from the estimated angle based on the PCT method are more dispersed than those obtained from the estimated angle based on the Kalman filter followed by the PCT method. Addition of a Kalman filter to the PCT method in the estimation procedure of rigid body motion results in a smoother signal that better represents the real motion, with less signal distortion than when using a digital low pass filter. Furthermore, it can be concluded that adding a Kalman filter to the PCT procedure substantially reduces the dispersion of the maximal and minimal

  13. Prioritizing the risk of plant pests by clustering methods; self-organising maps, k-means and hierarchical clustering

    Directory of Open Access Journals (Sweden)

    Susan Worner

    2013-09-01

    Full Text Available For greater preparedness, pest risk assessors are required to prioritise long lists of pest species with potential to establish and cause significant impact in an endangered area. Such prioritization is often qualitative, subjective, and sometimes biased, relying mostly on expert and stakeholder consultation. In recent years, cluster-based analyses have been used to investigate regional pest species assemblages or pest profiles to indicate the risk of new organism establishment. Such an approach is based on the premise that the co-occurrence of well-known global invasive pest species in a region is not random, and that the pest species profile or assemblage integrates complex functional relationships that are difficult to tease apart. In other words, the assemblage can help identify and prioritise species that pose a threat in a target region. A computational intelligence method called a Kohonen self-organizing map (SOM), a type of artificial neural network, was the first clustering method applied to analyse assemblages of invasive pests. The SOM is a well-known dimension reduction and visualization method, especially useful for high-dimensional data that more conventional clustering methods may not analyse suitably. Like all clustering algorithms, the SOM can give details of clusters that identify regions with similar pest assemblages, possible donor and recipient regions. More importantly, however, SOM connection weights that result from the analysis can be used to rank the strength of association of each species within each regional assemblage. Species with high weights that are not already established in the target region are identified as high risk. However, the SOM analysis is only the first step in a process to assess risk, to be used alongside or incorporated within other measures. Here we illustrate the application of SOM analyses in a range of contexts in invasive species risk assessment, and discuss other clustering methods such as k
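
    A compact numpy sketch of the Kohonen SOM training loop underlying the approach described above. The grid size, learning-rate and neighbourhood schedules, and the toy region-by-species presence/absence matrix are illustrative assumptions, not the authors' setup:

        import numpy as np

        def train_som(data, grid=(5, 5), n_iter=2000, lr0=0.5, sigma0=2.0, seed=0):
            """Train a small Kohonen self-organizing map on the rows of `data`.

            Returns weights of shape (grid_x, grid_y, n_features); a large weight for a
            given feature (species) in a node indicates a strong association with the
            regional assemblages mapped onto that node.
            """
            rng = np.random.default_rng(seed)
            gx, gy = grid
            n, d = data.shape
            weights = rng.random((gx, gy, d))
            coords = np.stack(np.meshgrid(np.arange(gx), np.arange(gy), indexing="ij"), axis=-1)
            for t in range(n_iter):
                x = data[rng.integers(n)]
                # Best-matching unit: node whose weight vector is closest to the sample.
                dist = np.linalg.norm(weights - x, axis=-1)
                bmu = np.unravel_index(dist.argmin(), dist.shape)
                # Decaying learning rate and neighbourhood radius.
                frac = t / n_iter
                lr = lr0 * (1.0 - frac)
                sigma = sigma0 * (1.0 - frac) + 0.5
                # Gaussian neighbourhood pulls nearby nodes toward the sample.
                grid_dist2 = ((coords - np.array(bmu)) ** 2).sum(axis=-1)
                h = np.exp(-grid_dist2 / (2 * sigma ** 2))[..., None]
                weights += lr * h * (x - weights)
            return weights

        # Toy presence/absence matrix: regions (rows) x pest species (columns).
        regions = np.random.default_rng(2).integers(0, 2, size=(40, 12)).astype(float)
        w = train_som(regions)
        print(w.shape)   # (5, 5, 12)

    Ranking the per-species weights of the node onto which a target region maps is the step that, in the risk-assessment context, flags high-weight species not yet established there.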

  14. clusters

    Indian Academy of Sciences (India)

    2017-09-27

    Sep 27, 2017 ... lic clusters using density functional theory (DFT)-GGA of the DMOL3 package. ... In the process of geometric optimization, convergence thresholds ...

  15. clusters

    Indian Academy of Sciences (India)

    environmental as well as technical problems during fuel gas utilization. ... adsorption on some alloys of Pd, namely PdAu, PdAg ... carried out on small neutral and charged Au and Cu ... study of Zanti et al. on Pdn (n = 1-9) clusters.

  16. Differences Between Ward's and UPGMA Methods of Cluster Analysis: Implications for School Psychology.

    Science.gov (United States)

    Hale, Robert L.; Dougherty, Donna

    1988-01-01

    Compared the efficacy of two methods of cluster analysis, the unweighted pair-group method using arithmetic averages (UPGMA) and Ward's method, for students grouped on intelligence, achievement, and social adjustment by both clustering methods. Found UPGMA more efficacious based on output, on cophenetic correlation coefficients generated by each…
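
    Since the record compares UPGMA (average linkage) and Ward's method via cophenetic correlation, here is a hedged scipy sketch of that kind of comparison; the simulated student score profiles and the three-cluster cut are placeholders, not the study's data:

        import numpy as np
        from scipy.cluster.hierarchy import linkage, cophenet, fcluster
        from scipy.spatial.distance import pdist

        rng = np.random.default_rng(0)
        # Toy student profiles: intelligence, achievement, social adjustment scores.
        means = np.array([[100, 80, 50], [85, 60, 40], [115, 95, 65]], dtype=float)
        scores = rng.normal(means[:, None, :], 5.0, size=(3, 30, 3)).reshape(-1, 3)

        dists = pdist(scores)                      # pairwise Euclidean distances
        for method in ("average", "ward"):         # "average" linkage == UPGMA
            Z = linkage(dists, method=method)
            ccc, _ = cophenet(Z, dists)            # cophenetic correlation coefficient
            labels = fcluster(Z, t=3, criterion="maxclust")
            print(f"{method:8s} cophenetic correlation = {ccc:.3f}, "
                  f"cluster sizes = {np.bincount(labels)[1:]}")

    A higher cophenetic correlation indicates that the dendrogram distances more faithfully preserve the original pairwise distances, which is the sense in which the abstract judges one linkage method more efficacious than the other.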

  17. Developing cluster strategy of apples dodol SMEs by integration K-means clustering and analytical hierarchy process method

    Science.gov (United States)

    Mustaniroh, S. A.; Effendi, U.; Silalahi, R. L. R.; Sari, T.; Ala, M.

    2018-03-01

    The purposes of this research were to determine the grouping of apples dodol small and medium enterprises (SMEs) in Batu City and to determine an appropriate development strategy for each cluster. The method used for clustering the SMEs was k-means. The Analytical Hierarchy Process (AHP) approach was then applied to determine the development strategy priority for each cluster. The variables used in the grouping include production capacity per month, length of operation, investment value, average sales revenue per month, amount of SME assets, and the number of workers. Several factors were considered in the AHP, including industry cluster, government, as well as related and supporting industries. Data were collected using questionnaires and interviews. SME respondents were selected among apples dodol SMEs in Batu City using purposive sampling. The result showed that two clusters were formed from the five apples dodol SMEs. The 1st cluster of apples dodol SMEs, classified as small enterprises, included SME A, SME C, and SME D. The 2nd cluster of apples dodol SMEs, classified as medium enterprises, consisted of SME B and SME E. The AHP results indicated that the priority development strategy for the 1st cluster of apples dodol SMEs was improving quality and product standardisation, while for the 2nd cluster it was increasing marketing access.

  18. Swarm: robust and fast clustering method for amplicon-based studies

    Science.gov (United States)

    Rognes, Torbjørn; Quince, Christopher; de Vargas, Colomban; Dunthorn, Micah

    2014-01-01

    Popular de novo amplicon clustering methods suffer from two fundamental flaws: arbitrary global clustering thresholds, and input-order dependency induced by centroid selection. Swarm was developed to address these issues by first clustering nearly identical amplicons iteratively using a local threshold, and then by using clusters’ internal structure and amplicon abundances to refine its results. This fast, scalable, and input-order independent approach reduces the influence of clustering parameters and produces robust operational taxonomic units. PMID:25276506

  19. Swarm: robust and fast clustering method for amplicon-based studies

    Directory of Open Access Journals (Sweden)

    Frédéric Mahé

    2014-09-01

    Full Text Available Popular de novo amplicon clustering methods suffer from two fundamental flaws: arbitrary global clustering thresholds, and input-order dependency induced by centroid selection. Swarm was developed to address these issues by first clustering nearly identical amplicons iteratively using a local threshold, and then by using clusters’ internal structure and amplicon abundances to refine its results. This fast, scalable, and input-order independent approach reduces the influence of clustering parameters and produces robust operational taxonomic units.

  20. Application of New Variational Homotopy Perturbation Method For ...

    African Journals Online (AJOL)

    This paper discusses the application of the New Variational Homotopy Perturbation Method (NVHPM) for solving integro-differential equations. The advantage of the new scheme is that it does not require discretization, linearization or any restrictive assumption of any form before it is applied. Several test problems are ...

  1. Investigating the temporal variations of the time-clustering behavior of the Koyna-Warna (India) reservoir-triggered seismicity

    International Nuclear Information System (INIS)

    Telesca, Luciano

    2011-01-01

    Research highlights: → Time-clustering behaviour in seismicity can be detected by applying the Allan Factor. → The reservoir-induced seismicity at Koyna-Warna (India) is time-clustered. → Pre- and co-seismic increases of the time-clustering degree are revealed. - Abstract: The time-clustering behavior of the 1996-2005 seismicity of the Koyna-Warna region (India), a unique site where reservoir-triggered earthquakes have been occurring continuously for about the last 50 years, has been analyzed. The scaling exponent α, estimated by using the Allan Factor method, a powerful tool to investigate clusterization in point processes, shows co-seismic and pre-seismic enhancements associated with the occurrence of the major events.
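
    A hedged numpy sketch of the Allan Factor computation used to quantify time clustering in a point-process (earthquake) catalogue: count events in contiguous windows of length T and form AF(T) = <(N_{k+1} - N_k)^2> / (2<N_k>). The synthetic catalogues and the counting windows below are illustrative choices, not those of the study; the scaling exponent α is normally obtained from the slope of AF(T) on log-log axes.

        import numpy as np

        def allan_factor(event_times, window):
            """Allan Factor AF(T) = <(N_{k+1} - N_k)^2> / (2 <N_k>) for window length T."""
            t = np.sort(np.asarray(event_times))
            edges = np.arange(t[0], t[-1] + window, window)
            counts, _ = np.histogram(t, bins=edges)
            if len(counts) < 2 or counts.mean() == 0:
                return np.nan
            diffs = np.diff(counts)
            return (diffs ** 2).mean() / (2.0 * counts.mean())

        if __name__ == "__main__":
            rng = np.random.default_rng(0)
            # Poisson (unclustered) catalogue: AF stays near 1 at all time scales.
            poisson = np.cumsum(rng.exponential(1.0, 5000))
            # Crude clustered catalogue: bursts of aftershock-like events.
            bursts = np.concatenate([m + rng.exponential(0.05, 30)
                                     for m in np.cumsum(rng.exponential(50.0, 150))])
            for T in (1, 10, 100):
                print(f"T={T:>3}: AF_poisson={allan_factor(poisson, T):.2f}, "
                      f"AF_clustered={allan_factor(bursts, T):.2f}")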

  2. Discrete gradient methods for solving variational image regularisation models

    International Nuclear Information System (INIS)

    Grimm, V; McLachlan, Robert I; McLaren, David I; Quispel, G R W; Schönlieb, C-B

    2017-01-01

    Discrete gradient methods are well-known methods of geometric numerical integration, which preserve the dissipation of gradient systems. In this paper we show that this property of discrete gradient methods can be interesting in the context of variational models for image processing, that is, where the processed image is computed as a minimiser of an energy functional. Numerical schemes for computing minimisers of such energies are desired to inherit the dissipative property of the gradient system associated to the energy and consequently guarantee a monotonic decrease of the energy along iterations, avoiding situations in which more computational work might lead to less optimal solutions. Under appropriate smoothness assumptions on the energy functional we prove that discrete gradient methods guarantee a monotonic decrease of the energy towards stationary states, and we promote their use in image processing by exhibiting experiments with convex and non-convex variational models for image deblurring, denoising, and inpainting. (paper)
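
    For readers unfamiliar with the technique, the property the paper exploits can be summarised in two lines; this is the standard textbook form of a discrete gradient scheme, written here only as a hedged illustration. A discrete gradient of the energy E satisfies

        \overline{\nabla} E(u,v)\cdot(v-u) \;=\; E(v)-E(u), \qquad \overline{\nabla} E(u,u) \;=\; \nabla E(u),

    and the implicit iteration

        u_{k+1} \;=\; u_k \;-\; \tau_k\, \overline{\nabla} E(u_k,\,u_{k+1}), \qquad \tau_k > 0,

    then yields E(u_{k+1}) - E(u_k) = -\tau_k^{-1}\,\|u_{k+1}-u_k\|^{2} \le 0, i.e. the monotonic energy decrease referred to in the abstract.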

  3. Variations in CCL3L gene cluster sequence and non-specific gene copy numbers

    Directory of Open Access Journals (Sweden)

    Edberg Jeffrey C

    2010-03-01

    Full Text Available Abstract Background Copy number variations (CNVs) of the gene CC chemokine ligand 3-like 1 (CCL3L1) have been implicated in HIV-1 susceptibility, but the association has been inconsistent. CCL3L1 shares homology with a cluster of genes localized to chromosome 17q12, namely CCL3, CCL3L2, and CCL3L3. These genes are involved in host defense and inflammatory processes. Several CNV assays have been developed for the CCL3L1 gene. Findings Through pairwise and multiple alignments of these genes, we have shown that the homology between these genes ranges from 50% to 99% in complete gene sequences and from 70% to 100% in the exonic regions, with CCL3L1 and CCL3L3 being identical. Using MEGA 4 and BioEdit, we aligned the sense primers, anti-sense primers, and probes used in several previously described assays against pre-multiple alignments of all four chemokine genes. Each set of probes and primers aligned and matched with overlapping sequences in at least two of the four genes, indicating that previously utilized RT-PCR based CNV assays are not specific for CCL3L1 only. The four available assays measured median copy numbers of 2 and 3-4 in European Americans and African Americans, respectively. The concordance between the assays ranged from 0.44 to 0.83, suggesting discordant individual calls and inconsistencies between the assays and the expected gene coverage from the known sequence. Conclusions This indicates that some of the inconsistencies in the association studies could be due to assays that provide heterogeneous results. Sequence information to determine the CNV of the three genes separately would allow testing whether their association with the pathogenesis of a human disease or phenotype is affected by an individual gene or by a combination of these genes.

  4. Variational iteration method for solving coupled-KdV equations

    International Nuclear Information System (INIS)

    Assas, Laila M.B.

    2008-01-01

    In this paper, He's variational iteration method is applied to solve the non-linear coupled-KdV equations. This method is based on the use of Lagrange multipliers for the identification of the optimal value of a parameter in a functional. This technique provides a sequence of functions which converges to the exact solution of the coupled-KdV equations. This procedure is a powerful tool for solving coupled-KdV equations.
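
    As a hedged reminder of the general form of the method (the coupled-KdV specifics are not reproduced here), for a generic equation Lu + Nu = g the variational iteration method builds the correction functional

        u_{n+1}(x,t) \;=\; u_n(x,t) \;+\; \int_0^{t} \lambda(\tau)\,\Big( L u_n(x,\tau) + N\tilde{u}_n(x,\tau) - g(\tau) \Big)\, d\tau ,

    where L and N are the linear and nonlinear operators, λ(τ) is the Lagrange multiplier identified by making the functional stationary, and \tilde{u}_n denotes a restricted variation (δ\tilde{u}_n = 0). Successive iterates u_n then converge to the exact solution under suitable conditions.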

  5. Molecular photoionization using the complex Kohn variational method

    International Nuclear Information System (INIS)

    Lynch, D.L.; Schneider, B.I.

    1992-01-01

    We have applied the complex Kohn variational method to the study of molecular-photoionization processes. This requires electron-ion scattering calculations enforcing incoming boundary conditions. The sensitivity of these results to the choice of the cutoff function in the Kohn method has been studied and we have demonstrated that a simple matching of the irregular function to a linear combination of regular functions produces accurate scattering phase shifts

  6. Cluster-cell calculation using the method of generalized homogenization

    International Nuclear Information System (INIS)

    Laletin, N.I.; Boyarinov, V.F.

    1988-01-01

    The generalized homogenization method (GHM), used for solving the neutron transport equation, was applied to calculating the neutron distribution in a cluster cell containing a series of cylindrical cells with coaxial cylindrical zones. Single-group calculations of the technological channel of an RBMK reactor cell were performed using GHM. The technological channel was understood to be the reactor channel, comprising the zirconium rod, the water or steam-water mixture, the uranium dioxide fuel element, and the zirconium tube, together with the adjacent graphite layer. Calculations were performed for channels with no internal sources and with unit incoming current at the external boundary, as well as for channels with internal sources and zero current at the external boundary. The PRAKTINETs program was used to calculate the symmetric neutron distributions in the microcell and in channels with homogenized annular zones. The ORAR-TsM program was used to calculate the antisymmetric distribution in the microcell. The accuracy of the calculations was compared for the two channel versions.

  7. Moments of inertia for solids of revolution and variational methods

    International Nuclear Information System (INIS)

    Diaz, Rodolfo A; Herrera, William J; Martinez, R

    2006-01-01

    We present some formulae for the moments of inertia of homogeneous solids of revolution in terms of the functions that generate the solids. The development of these expressions exploits the cylindrical symmetry of these objects and avoids the explicit use of multiple integration, providing an easy and pedagogical approach. The explicit use of the functions that generate the solid gives the possibility of writing the moment of inertia as a functional, which in turn allows us to utilize the calculus of variations to obtain new insight into some properties of this fundamental quantity. In particular, minimization of moments of inertia under certain restrictions is possible by using variational methods
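
    As a hedged example of the type of formula meant here: for a homogeneous solid of density ρ generated by rotating y = f(x), a ≤ x ≤ b, about the x axis, the moment of inertia about the symmetry axis and the mass are functionals of the generating function alone,

        I_x \;=\; \frac{\pi\rho}{2}\int_a^b f(x)^{4}\, dx , \qquad M \;=\; \pi\rho\int_a^b f(x)^{2}\, dx ,

    so that, for instance, minimising I_x at fixed mass M becomes a standard constrained problem in the calculus of variations, which is the kind of application the abstract alludes to.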

  8. Elastic scattering of positronium: Application of the confined variational method

    KAUST Repository

    Zhang, Junyi

    2012-08-01

    We demonstrate for the first time that the phase shift in elastic positronium-atom scattering can be precisely determined by the confined variational method, in spite of the fact that the Hamiltonian includes an unphysical confining potential acting on the center of mass of the positron and one of the atomic electrons. As an example, we study the S-wave elastic scattering for the positronium-hydrogen scattering system, where the existing 4% discrepancy between the Kohn variational calculation and the R-matrix calculation is resolved. © Copyright EPLA, 2012.

  9. Elastic scattering of positronium: Application of the confined variational method

    KAUST Repository

    Zhang, Junyi; Yan, Zong-Chao; Schwingenschlögl, Udo

    2012-01-01

    We demonstrate for the first time that the phase shift in elastic positronium-atom scattering can be precisely determined by the confined variational method, in spite of the fact that the Hamiltonian includes an unphysical confining potential acting on the center of mass of the positron and one of the atomic electrons. As an example, we study the S-wave elastic scattering for the positronium-hydrogen scattering system, where the existing 4% discrepancy between the Kohn variational calculation and the R-matrix calculation is resolved. © Copyright EPLA, 2012.

  10. Analytical Energy Gradients for Excited-State Coupled-Cluster Methods

    Science.gov (United States)

    Wladyslawski, Mark; Nooijen, Marcel

    The equation-of-motion coupled-cluster (EOM-CC) and similarity transformed equation-of-motion coupled-cluster (STEOM-CC) methods have been firmly established as accurate and routinely applicable extensions of single-reference coupled-cluster theory to describe electronically excited states. An overview of these methods is provided, with emphasis on the many-body similarity transform concept that is the key to a rationalization of their accuracy. The main topic of the paper is the derivation of analytical energy gradients for such non-variational electronic structure approaches, with an ultimate focus on obtaining their detailed algebraic working equations. A general theoretical framework using Lagrange's method of undetermined multipliers is presented, and the method is applied to formulate the EOM-CC and STEOM-CC gradients in abstract operator terms, following the previous work in [P.G. Szalay, Int. J. Quantum Chem. 55 (1995) 151] and [S.R. Gwaltney, R.J. Bartlett, M. Nooijen, J. Chem. Phys. 111 (1999) 58]. Moreover, the systematics of the Lagrange multiplier approach is suitable for automation by computer, enabling the derivation of the detailed derivative equations through a standardized and direct procedure. To this end, we have developed the SMART (Symbolic Manipulation and Regrouping of Tensors) package of automated symbolic algebra routines, written in the Mathematica programming language. The SMART toolkit provides the means to expand, differentiate, and simplify equations by manipulation of the detailed algebraic tensor expressions directly. The Lagrangian multiplier formulation establishes a uniform strategy to perform the automated derivation in a standardized manner: A Lagrange multiplier functional is constructed from the explicit algebraic equations that define the energy in the electronic method; the energy functional is then made fully variational with respect to all of its parameters, and the symbolic differentiations directly yield the explicit

  11. Minimizers with discontinuous velocities for the electromagnetic variational method

    International Nuclear Information System (INIS)

    De Luca, Jayme

    2010-01-01

    The electromagnetic two-body problem has neutral differential delay equations of motion that, for generic boundary data, can have solutions with discontinuous derivatives. If one wants to use these neutral differential delay equations with arbitrary boundary data, solutions with discontinuous derivatives must be expected and allowed. Surprisingly, Wheeler-Feynman electrodynamics has a boundary value variational method for which minimizer trajectories with discontinuous derivatives are also expected, as we show here. The variational method defines continuous trajectories with piecewise defined velocities and accelerations, and electromagnetic fields defined by the Euler-Lagrange equations on trajectory points. Here we use the piecewise defined minimizers with the Liénard-Wiechert formulas to define generalized electromagnetic fields almost everywhere (except on sets of points of zero measure where the advanced/retarded velocities and/or accelerations are discontinuous). Along with this generalization we formulate the generalized absorber hypothesis that the far fields vanish asymptotically almost everywhere and show that localized orbits with far fields vanishing almost everywhere must have discontinuous velocities on sewing chains of breaking points. We give the general solution for localized orbits with vanishing far fields by solving a (linear) neutral differential delay equation for these far fields. We discuss the physics of orbits with discontinuous derivatives, stressing the differences from the variational methods of classical mechanics and the existence of a spinorial four-current associated with the generalized variational electrodynamics.

  12. Comparative analysis of clustering methods for gene expression time course data

    Directory of Open Access Journals (Sweden)

    Ivan G. Costa

    2004-01-01

    Full Text Available This work performs a data driven comparative study of clustering methods used in the analysis of gene expression time courses (or time series. Five clustering methods found in the literature of gene expression analysis are compared: agglomerative hierarchical clustering, CLICK, dynamical clustering, k-means and self-organizing maps. In order to evaluate the methods, a k-fold cross-validation procedure adapted to unsupervised methods is applied. The accuracy of the results is assessed by the comparison of the partitions obtained in these experiments with gene annotation, such as protein function and series classification.

  13. Improvement of economic potential estimation methods for enterprise with potential branch clusters use

    Directory of Open Access Journals (Sweden)

    V.Ya. Nusinov

    2017-08-01

    Full Text Available The research determines that the currently existing methods for estimating an enterprise's economic potential are based on the use of additive, multiplicative and rating models. It is determined that the existing methods have a number of shortcomings. For example, not all the methods take into account the branch-specific features of the analysis, or the level of development of the enterprise in comparison with other enterprises. It is suggested that such shortcomings be levelled by taking into account, when estimating the integral level of potential, not only the branch features of the enterprises' activity but also the economic clusterization of such enterprises. Scientific works connected with the use of clusters for the estimation of economic potential are generalized. According to the results of this generalization, it is possible to distinguish 9 scientific approaches in this direction: the use of natural clusterization of enterprises with the purpose of estimating and increasing regional potential; the use of natural clusterization of enterprises with the purpose of estimating and increasing industry potential; the use of artificial clusterization of enterprises with the purpose of estimating and increasing regional potential; the use of artificial clusterization of enterprises with the purpose of estimating and increasing industry potential; the use of artificial clusterization of enterprises with the purpose of estimating clustering potential; the use of artificial clusterization of enterprises with the purpose of estimating the potential of clustering competitiveness; the use of natural (artificial) clusterization for the estimation of clustering efficiency; the use of natural (artificial) clusterization for increasing the level of regional (industry) development; and the use of methods for estimating the economic potential of a region (industry), or its constituents, for the construction of the clusters. It is determined that the use of the clusterization method in

  14. Cluster size statistic and cluster mass statistic: two novel methods for identifying changes in functional connectivity between groups or conditions.

    Science.gov (United States)

    Ing, Alex; Schwarzbauer, Christian

    2014-01-01

    Functional connectivity has become an increasingly important area of research in recent years. At a typical spatial resolution, approximately 300 million connections link each voxel in the brain with every other. This pattern of connectivity is known as the functional connectome. Connectivity is often compared between experimental groups and conditions. Standard methods used to control the type 1 error rate are likely to be insensitive when comparisons are carried out across the whole connectome, due to the huge number of statistical tests involved. To address this problem, two new cluster based methods--the cluster size statistic (CSS) and cluster mass statistic (CMS)--are introduced to control the family wise error rate across all connectivity values. These methods operate within a statistical framework similar to the cluster based methods used in conventional task based fMRI. Both methods are data driven, permutation based and require minimal statistical assumptions. Here, the performance of each procedure is evaluated in a receiver operator characteristic (ROC) analysis, utilising a simulated dataset. The relative sensitivity of each method is also tested on real data: BOLD (blood oxygen level dependent) fMRI scans were carried out on twelve subjects under normal conditions and during the hypercapnic state (induced through the inhalation of 6% CO2 in 21% O2 and 73% N2). Both CSS and CMS detected significant changes in connectivity between normal and hypercapnic states. A family wise error correction carried out at the individual connection level exhibited no significant changes in connectivity.
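
    The cluster size statistic can be sketched in a few lines: threshold the connection-wise test statistics, find connected components of supra-threshold connections, and compare the largest observed component against a permutation null. The code below is a generic, simplified illustration (two-sample t-tests, an arbitrary threshold, cluster size counted in edges), not the authors' implementation:

        import numpy as np
        from scipy.stats import ttest_ind
        from scipy.sparse import csr_matrix
        from scipy.sparse.csgraph import connected_components

        def max_cluster_size(t_matrix, threshold):
            """Largest connected set of supra-threshold connections, measured in edges."""
            t = np.nan_to_num(t_matrix)
            supra = np.abs(np.triu(t, k=1)) > threshold       # upper triangle, no diagonal
            n_comp, labels = connected_components(csr_matrix(supra), directed=False)
            sizes = [supra[np.ix_(labels == c, labels == c)].sum() for c in range(n_comp)]
            return max(sizes) if sizes else 0

        def cluster_size_test(conn_a, conn_b, threshold=2.5, n_perm=500, seed=0):
            """conn_a, conn_b: (subjects, nodes, nodes) connectivity matrices per group."""
            rng = np.random.default_rng(seed)
            t_obs, _ = ttest_ind(conn_a, conn_b, axis=0)
            observed = max_cluster_size(t_obs, threshold)
            pooled = np.concatenate([conn_a, conn_b])
            n_a = len(conn_a)
            null = []
            for _ in range(n_perm):                            # permute group labels
                idx = rng.permutation(len(pooled))
                t_perm, _ = ttest_ind(pooled[idx[:n_a]], pooled[idx[n_a:]], axis=0)
                null.append(max_cluster_size(t_perm, threshold))
            p = (np.sum(np.array(null) >= observed) + 1) / (n_perm + 1)
            return observed, p

    Here conn_a and conn_b would be stacks of per-subject connectivity matrices; a cluster mass variant would sum the supra-threshold statistic values within each component instead of counting connections.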

  15. The variational nodal method: some history and recent activity

    International Nuclear Information System (INIS)

    Lewis, E.E.; Smith, M.A.; Palmiotti, G.

    2005-01-01

    The variational nodal method combines spherical harmonics expansions in angle with hybrid finite element techniques in space to obtain multigroup transport response matrix algorithms applicable to a wide variety of reactor physics problems. This survey briefly recounts the method's history and reviews its capabilities. Two methods for obtaining discretized equations in the form of response matrices are compared. The first is that contained in the widely used VARIANT code, while the second incorporates more recently developed integral transport techniques into the variational nodal framework. The two approaches are combined with a finite sub-element formulation to treat heterogeneous nodes. Results are presented for application to a deep penetration problem and to an OECD benchmark consisting of LWR MOX fuel assemblies. Ongoing work is discussed. (authors)

  16. Variation in sequence and location of the fumonisin mycotoxin biosynthetic gene cluster in Fusarium

    NARCIS (Netherlands)

    Proctor, R.H.; Hove, van F.; Susca, A.; Stea, A.; Busman, M.; Lee, van der T.A.J.; Waalwijk, C.; Moretti, A.

    2010-01-01

    In Fusarium, the ability to produce fumonisins is governed by a 17-gene fumonisin biosynthetic gene (FUM) cluster. Here, we examined the cluster in F. oxysporum strain O-1890 and nine other species selected to represent a wide range of the genetic diversity within the GFSC.

  17. Temperature Dependence of Arn+ Cluster Backscattering from Polymer Surfaces: a New Method to Determine the Surface Glass Transition Temperature.

    Science.gov (United States)

    Poleunis, Claude; Cristaudo, Vanina; Delcorte, Arnaud

    2018-01-01

    In this work, time-of-flight secondary ion mass spectrometry (ToF-SIMS) was used to study the intensity variations of the backscattered Arn+ clusters as a function of temperature for several amorphous polymer surfaces (polyolefins, polystyrene, and polymethyl methacrylate). For all these investigated polymers, our results show a transition of the ratio Ar2+/(Ar2+ + Ar3+) when the temperature is scanned from -120 °C to +125 °C (the exact limits depend on the studied polymer). This transition generally spans over a few tens of degrees and the temperature of the inflection point of each curve is always lower than the bulk glass transition temperature (Tg) reported for the considered polymer. Due to the surface sensitivity of the cluster backscattering process (several nanometers), the presented analysis could provide a new method to specifically evaluate a surface transition temperature of polymers, with the same lateral resolution as the gas cluster beam.

  18. Recursive expectation-maximization clustering: A method for identifying buffering mechanisms composed of phenomic modules

    Science.gov (United States)

    Guo, Jingyu; Tian, Dehua; McKinney, Brett A.; Hartman, John L.

    2010-06-01

    Interactions between genetic and/or environmental factors are ubiquitous, affecting the phenotypes of organisms in complex ways. Knowledge about such interactions is becoming rate-limiting for our understanding of human disease and other biological phenomena. Phenomics refers to the integrative analysis of how all genes contribute to phenotype variation, entailing genome and organism level information. A systems biology view of gene interactions is critical for phenomics. Unfortunately the problem is intractable in humans; however, it can be addressed in simpler genetic model systems. Our research group has focused on the concept of genetic buffering of phenotypic variation, in studies employing the single-cell eukaryotic organism, S. cerevisiae. We have developed a methodology, quantitative high throughput cellular phenotyping (Q-HTCP), for high-resolution measurements of gene-gene and gene-environment interactions on a genome-wide scale. Q-HTCP is being applied to the complete set of S. cerevisiae gene deletion strains, a unique resource for systematically mapping gene interactions. Genetic buffering is the idea that comprehensive and quantitative knowledge about how genes interact with respect to phenotypes will lead to an appreciation of how genes and pathways are functionally connected at a systems level to maintain homeostasis. However, extracting biologically useful information from Q-HTCP data is challenging, due to the multidimensional and nonlinear nature of gene interactions, together with a relative lack of prior biological information. Here we describe a new approach for mining quantitative genetic interaction data called recursive expectation-maximization clustering (REMc). We developed REMc to help discover phenomic modules, defined as sets of genes with similar patterns of interaction across a series of genetic or environmental perturbations. Such modules are reflective of buffering mechanisms, i.e., genes that play a related role in the maintenance

  19. Atomic and electronic structure of clusters from car-Parrinello method

    International Nuclear Information System (INIS)

    Kumar, V.

    1994-06-01

    With the development of ab-initio molecular dynamics method, it has now become possible to study the static and dynamical properties of clusters containing up to a few tens of atoms. Here I present a review of the method within the framework of the density functional theory and pseudopotential approach to represent the electron-ion interaction and discuss some of its applications to clusters. Particular attention is focussed on the structure and bonding properties of clusters as a function of their size. Applications to clusters of alkali metals and Al, non-metal - metal transition in divalent metal clusters, molecular clusters of carbon and Sb are discussed in detail. Some results are also presented on mixed clusters. (author). 121 refs, 24 ifigs

  20. Variationally derived coarse mesh methods using an alternative flux representation

    International Nuclear Information System (INIS)

    Wojtowicz, G.; Holloway, J.P.

    1995-01-01

    Investigation of a previously reported variational technique for the solution of the 1-D, 1-group neutron transport equation in reactor lattices has inspired the development of a finite element formulation of the method. Compared to conventional homogenization methods in which node homogenized cross sections are used, the coefficients describing this system take on greater spatial dependence. However, the methods employ an alternative flux representation which allows the transport equation to be cast into a form whose solution has only a slow spatial variation and, hence, requires relatively few variables to describe. This alternative flux representation and the stationary property of a variational principle define a class of coarse mesh discretizations of transport theory capable of achieving order of magnitude reductions of eigenvalue and pointwise scalar flux errors as compared with diffusion theory while retaining diffusion theory's relatively low cost. Initial results of a 1-D spectral element approach are reviewed and used to motivate the finite element implementation which is more efficient and almost as accurate; one and two group results of this method are described

  1. The potential of clustering methods to define intersection test scenarios: Assessing real-life performance of AEB.

    Science.gov (United States)

    Sander, Ulrich; Lubbe, Nils

    2018-04-01

    Intersection accidents are frequent and harmful. The accident types 'straight crossing path' (SCP), 'left turn across path - oncoming direction' (LTAP/OD), and 'left turn across path - lateral direction' (LTAP/LD) represent around 95% of all intersection accidents and one-third of all police-reported car-to-car accidents in Germany. The European New Car Assessment Programme (Euro NCAP) has announced that intersection scenarios will be included in its rating from 2020; however, how these scenarios are to be tested has not been defined. This study investigates whether clustering methods can be used to identify a small number of test scenarios sufficiently representative of the accident dataset to evaluate Intersection Automated Emergency Braking (AEB). Data from the German In-Depth Accident Study (GIDAS) and the GIDAS-based Pre-Crash Matrix (PCM) from 1999 to 2016, containing 784 SCP and 453 LTAP/OD accidents, were analyzed with principal component methods to identify variables that account for the relevant total variances of the sample. Three different methods for data clustering were applied to each of the accident types: two similarity-based approaches, namely Hierarchical Clustering (HC) and Partitioning Around Medoids (PAM), and the probability-based Latent Class Clustering (LCC). The optimum number of clusters was derived for HC and PAM with the silhouette method. The PAM algorithm was initiated both with randomly selected start medoids and with medoids from HC. For LCC, the Bayesian Information Criterion (BIC) was used to determine the optimal number of clusters. Test scenarios were defined from optimal cluster medoids weighted by their real-life representation in GIDAS. The set of variables for clustering was further varied to investigate the influence of variable type and character. We quantified how accurately each cluster variation represents real-life AEB performance using pre-crash simulations with PCM data and a generic algorithm for AEB intervention. The

  2. A Multidimensional and Multimembership Clustering Method for Social Networks and Its Application in Customer Relationship Management

    Directory of Open Access Journals (Sweden)

    Peixin Zhao

    2013-01-01

    Full Text Available Community detection in social networks plays an important role in cluster analysis. Many traditional techniques for one-dimensional problems have been proven inadequate for high-dimensional or mixed-type datasets due to data sparseness and attribute redundancy. In this paper we propose a graph-based clustering method for multidimensional datasets. This novel method has two distinguishing features: a nonbinary hierarchical tree and multimembership clusters. The nonbinary hierarchical tree clearly highlights meaningful clusters, while the multimembership feature may provide more useful service strategies. Experimental results on customer relationship management confirm the effectiveness of the new method.

  3. Trend analysis using non-stationary time series clustering based on the finite element method

    OpenAIRE

    Gorji Sefidmazgi, M.; Sayemuzzaman, M.; Homaifar, A.; Jha, M. K.; Liess, S.

    2014-01-01

    In order to analyze low-frequency variability of climate, it is useful to model the climatic time series with multiple linear trends and locate the times of significant changes. In this paper, we have used non-stationary time series clustering to find change points in the trends. Clustering in a multi-dimensional non-stationary time series is challenging, since the problem is mathematically ill-posed. Clustering based on the finite element method (FEM) is one of the methods ...

  4. Variation in the fumonisin biosynthetic gene cluster in fumonisin-producing and nonproducing black aspergilli.

    Science.gov (United States)

    Susca, Antonia; Proctor, Robert H; Butchko, Robert A E; Haidukowski, Miriam; Stea, Gaetano; Logrieco, Antonio; Moretti, Antonio

    2014-12-01

    The ability to produce fumonisin mycotoxins varies among members of the black aspergilli. Previously, analyses of selected genes in the fumonisin biosynthetic gene (fum) cluster in black aspergilli from California grapes indicated that fumonisin-nonproducing isolates of Aspergillus welwitschiae lack six fum genes, but nonproducing isolates of Aspergillus niger do not. In the current study, analyses of black aspergilli from grapes from the Mediterranean Basin indicate that the genomic context of the fum cluster is the same in isolates of A. niger and A. welwitschiae regardless of fumonisin-production ability and that full-length clusters occur in producing isolates of both species and nonproducing isolates of A. niger. In contrast, the cluster has undergone an eight-gene deletion in fumonisin-nonproducing isolates of A. welwitschiae. Phylogenetic analyses suggest each species consists of a mixed population of fumonisin-producing and nonproducing individuals, and that existence of both production phenotypes may provide a selective advantage to these species. Differences in gene content of fum cluster homologues and phylogenetic relationships of fum genes suggest that the mutation(s) responsible for the nonproduction phenotype differs, and therefore arose independently, in the two species. Partial fum cluster homologues were also identified in genome sequences of four other black Aspergillus species. Gene content of these partial clusters and phylogenetic relationships of fum sequences indicate that non-random partial deletion of the cluster has occurred multiple times among the species. This in turn suggests that an intact cluster and fumonisin production were once more widespread among black aspergilli. Copyright © 2014 Elsevier Inc. All rights reserved.

  5. THE CONTROL VARIATIONAL METHOD FOR ELASTIC CONTACT PROBLEMS

    Directory of Open Access Journals (Sweden)

    Mircea Sofonea

    2010-07-01

    Full Text Available We consider a multivalued equation of the form Ay + F(y) ∋ f in a real Hilbert space, where A is a linear operator and F represents the (Clarke) subdifferential of some function. We prove existence and uniqueness results for the solution by using the control variational method. The main idea in this method is to minimize the energy functional associated with the nonlinear equation by arguments of optimal control theory. Then we consider a general mathematical model describing the contact between a linearly elastic body and an obstacle which leads to a variational formulation as above, for the displacement field. We apply the abstract existence and uniqueness results to prove the unique weak solvability of the corresponding contact problem. Finally, we present examples of contact and friction laws for which our results work.

  6. The variational method in the atomic structure calcularion

    International Nuclear Information System (INIS)

    Tomimura, A.

    1970-01-01

    The importance and limitations of variational methods in atomic structure calculations are brought into relevance. Comparisons are made with perturbation theory. To illustrate this, the method is applied to the simple atomic systems H-, H+ and H2+, and the results are analysed on the basis of a study of the associated essential eigenvalue spectrum. Hydrogenic functions (where the screening constants are replaced by variational parameters) are combined to construct a wave function with the proper symmetry for each one of the systems. This shows the existence of a bound state for H-, but no conclusions can be made for the others, where it may or may not be necessary to use more flexible wave functions, i.e., with a greater number of terms and parameters. (author) [pt

  7. A variational method in out-of-equilibrium physical systems.

    Science.gov (United States)

    Pinheiro, Mario J

    2013-12-09

    We propose a new variational principle for out-of-equilibrium dynamic systems that is fundamentally based on the method of Lagrange multipliers applied to the total entropy of an ensemble of particles. However, we use the fundamental equation of thermodynamics in terms of differential forms, considering U and S as 0-forms. We obtain a set of two first-order differential equations that reveal the same formal symplectic structure shared by classical mechanics, fluid mechanics and thermodynamics. From this approach, a topological torsion current emerges, built from the components Aj of the vector potential (gravitational and/or electromagnetic) and the components ωk of the angular velocity ω of the accelerated frame. We derive a special form of the Umov-Poynting theorem for rotating gravito-electromagnetic systems. The variational method is then applied to clarify the working mechanism of particular devices.

  8. Interactive K-Means Clustering Method Based on User Behavior for Different Analysis Target in Medicine.

    Science.gov (United States)

    Lei, Yang; Yu, Dai; Bin, Zhang; Yang, Yang

    2017-01-01

    Clustering algorithms, as a basis of data analysis, are widely used in analysis systems. However, given the high dimensionality of the data, a clustering algorithm may overlook the business relations between these dimensions, especially in the medical field. As a result, the clustering result often does not meet the business goals of the users. If the clustering process can incorporate the knowledge of the users, that is, the doctor's knowledge or the analysis intent, the clustering result can be more satisfactory. In this paper, we propose an interactive K-means clustering method to improve the user's satisfaction with the result. The core of this method is to obtain the user's feedback on the clustering result and use it to optimize that result. A particle swarm optimization algorithm is then used to optimize the parameters, especially the weight settings in the clustering algorithm, so that it reflects the user's business preferences as closely as possible. After this parameter optimization and adjustment, the clustering result can be closer to the user's requirements. Finally, we take an example from breast cancer data to test our method. The experiments show the improved performance of our algorithm.

  9. Variational method for magnetic impurities in metals: impurity pairs

    Energy Technology Data Exchange (ETDEWEB)

    Oles, A M [Max-Planck-Institut fuer Festkoerperforschung, Stuttgart (Germany, F.R.); Chao, K A [Linkoeping Univ. (Sweden). Dept. of Physics and Measurement Technology

    1980-01-01

    Applying a variational method to the generalized Wolff model, we have investigated the effect of impurity-impurity interaction on the formation of local moments in the ground state. The direct coupling between the impurities is found to be more important than the interaction between the impurities and the host conduction electrons, as far as the formation of local moments is concerned. Under certain conditions we also observe different valences on different impurities.

  10. Anharmonic effects in the quantum cluster equilibrium method

    Science.gov (United States)

    von Domaros, Michael; Perlt, Eva

    2017-03-01

    The well-established quantum cluster equilibrium (QCE) model provides a statistical thermodynamic framework to apply high-level ab initio calculations of finite cluster structures to macroscopic liquid phases using the partition function. So far, the harmonic approximation has been applied throughout the calculations. In this article, we apply an important correction in the evaluation of the one-particle partition function and account for anharmonicity. Therefore, we implemented an analytical approximation to the Morse partition function and the derivatives of its logarithm with respect to temperature, which are required for the evaluation of thermodynamic quantities. This anharmonic QCE approach has been applied to liquid hydrogen chloride and cluster distributions, and the molar volume, the volumetric thermal expansion coefficient, and the isobaric heat capacity have been calculated. An improved description for all properties is observed if anharmonic effects are considered.
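
    For orientation, the anharmonic correction amounts to replacing each harmonic-oscillator contribution to the one-particle partition function with that of a Morse oscillator. In standard spectroscopic notation (a generic illustration, not the authors' exact working expressions),

        E_n \;=\; \hbar\omega_e\Big(n+\tfrac{1}{2}\Big) \;-\; \hbar\omega_e x_e\Big(n+\tfrac{1}{2}\Big)^{2},
        \qquad
        q_{\mathrm{Morse}}(T) \;=\; \sum_{n=0}^{n_{\max}} e^{-E_n/k_B T},
        \qquad
        n_{\max} \;=\; \Big\lfloor \tfrac{1}{2x_e}-\tfrac{1}{2} \Big\rfloor ,

    and the thermodynamic quantities follow from temperature derivatives of ln q, which is why the article needs the derivatives of the logarithm of the (approximated) Morse partition function.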

  11. A crystalline cluster method for deep impurities in insulators

    International Nuclear Information System (INIS)

    Guimaraes, P.S.

    1983-01-01

    An 'ab initio' self-consistent-field crystalline-cluster approach to the study of deep impurity states in insulators is proposed. It is shown that, in spite of being a cluster calculation, the interaction of the impurity with the crystal environment is fully taken into account. It is also shown that the present representation of the impurity states is, at least, as precise as the crystalline cluster representation of the pure crystal electronic structure. The procedure has been tested by performing the calculation of the electronic structure of the U center in a sodium chloride crystal, and it has been observed that the calculated Γ1 - Γ15 absorption energy is in good agreement with experiment. (Author) [pt

  12. A crystalline cluster method for deep impurities in insulators

    International Nuclear Information System (INIS)

    Guimaraes, P.S.

    1983-01-01

    An 'ab initio' self-consistent-field crystalline-cluster approach to the study of deep impurity states in insulators is proposed. It is shown that, in spite of being a cluster calculation, the interaction of the impurity with the crystal environment is fully taken into account. It is also shown that the present representation of the impurity states is, at least, as precise as the crystalline cluster representation of the pure crystal electronic structure. The procedure has been tested by performing the calculation of the electronic structure of the U center in a sodium chloride crystal, and it has been observed that the calculated Γ1-Γ15 absorption energy is in good agreement with experiment. (author) [pt

  13. The variational method in quantum mechanics: an elementary introduction

    Science.gov (United States)

    Borghi, Riccardo

    2018-05-01

    Variational methods in quantum mechanics are customarily presented as invaluable techniques to find approximate estimates of ground state energies. In the present paper a short catalogue of different celebrated potential distributions (both 1D and 3D), for which an exact and complete (energy and wavefunction) ground state determination can be achieved in an elementary way, is illustrated. No previous knowledge of calculus of variations is required. Rather, in all presented cases the exact energy functional minimization is achieved by using only a couple of simple mathematical tricks: ‘completion of square’ and integration by parts. This makes our approach particularly suitable for undergraduates. Moreover, the key role played by particle localization is emphasized through the entire analysis. This gentle introduction to the variational method could also be potentially attractive for more expert students as a possible elementary route toward a rather advanced topic on quantum mechanics: the factorization method. Such an unexpected connection is outlined in the final part of the paper.

  14. Method for discovering relationships in data by dynamic quantum clustering

    Science.gov (United States)

    Weinstein, Marvin; Horn, David

    2014-10-28

    Data clustering is provided according to a dynamical framework based on quantum mechanical time evolution of states corresponding to data points. To expedite computations, we can approximate the time-dependent Hamiltonian formalism by a truncated calculation within a set of Gaussian wave-functions (coherent states) centered around the original points. This allows for analytic evaluation of the time evolution of all such states, opening up the possibility of exploration of relationships among data-points through observation of varying dynamical-distances among points and convergence of points into clusters. This formalism may be further supplemented by preprocessing, such as dimensional reduction through singular value decomposition and/or feature filtering.

  15. A dynamic lattice searching method with rotation operation for optimization of large clusters

    International Nuclear Information System (INIS)

    Wu Xia; Cai Wensheng; Shao Xueguang

    2009-01-01

    Global optimization of large clusters has been a difficult task, even though much effort has been devoted to it and many efficient methods have been proposed. In our work, a rotation operation (RO) is designed to realize the structural transformation from decahedra to icosahedra for the optimization of large clusters, by rotating the atoms below the central atom through a definite angle around the fivefold axis. Based on the RO, a development of the previous dynamic lattice searching with constructed core (DLSc), named DLSc-RO, is presented. In an investigation of the method for the optimization of Lennard-Jones (LJ) clusters, i.e., LJ500, LJ561, LJ600, LJ665-667, LJ670, LJ685, and LJ923, as well as Morse clusters, silver clusters with the Gupta potential, and aluminum clusters with the NP-B potential, it was found that global minima with both icosahedral and decahedral motifs can be obtained, and the method proves to be efficient and universal.
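
    A minimal sketch of such a rotation operation, assuming the fivefold axis and the "below the centre atom" criterion are supplied by the caller, could look as follows:

```python
# Sketch of the rotation operation (RO): atoms whose projection on the given axis lies
# below the centre atom are rotated by a fixed angle about that (five-fold) axis.
import numpy as np

def rotation_matrix(axis, angle):
    """Rodrigues rotation matrix for a unit axis and an angle in radians."""
    axis = axis / np.linalg.norm(axis)
    kx, ky, kz = axis
    K = np.array([[0, -kz, ky], [kz, 0, -kx], [-ky, kx, 0]])
    return np.eye(3) + np.sin(angle) * K + (1 - np.cos(angle)) * (K @ K)

def rotate_lower_half(coords, center_idx, axis, angle=np.pi / 5):
    """Rotate atoms lying below the centre atom along `axis` (36 deg by default)."""
    coords = np.asarray(coords, dtype=float)
    axis = np.asarray(axis, dtype=float) / np.linalg.norm(axis)
    proj = coords @ axis
    below = proj < proj[center_idx]          # atoms "below" the centre atom
    R = rotation_matrix(axis, angle)
    pivot = coords[center_idx]
    out = coords.copy()
    out[below] = (coords[below] - pivot) @ R.T + pivot
    return out
```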

  16. Creating multithemed ecological regions for macroscale ecology: Testing a flexible, repeatable, and accessible clustering method

    Science.gov (United States)

    Cheruvelil, Kendra Spence; Yuan, Shuai; Webster, Katherine E.; Tan, Pang-Ning; Lapierre, Jean-Francois; Collins, Sarah M.; Fergus, C. Emi; Scott, Caren E.; Norton Henry, Emily; Soranno, Patricia A.; Filstrup, Christopher T.; Wagner, Tyler

    2017-01-01

    Understanding broad-scale ecological patterns and processes often involves accounting for regional-scale heterogeneity. A common way to do so is to include ecological regions in sampling schemes and empirical models. However, most existing ecological regions were developed for specific purposes, using a limited set of geospatial features and irreproducible methods. Our study purpose was to: (1) describe a method that takes advantage of recent computational advances and increased availability of regional and global data sets to create customizable and reproducible ecological regions, (2) make this algorithm available for use and modification by others studying different ecosystems, variables of interest, study extents, and macroscale ecology research questions, and (3) demonstrate the power of this approach for the research question—How well do these regions capture regional-scale variation in lake water quality? To achieve our purpose we: (1) used a spatially constrained spectral clustering algorithm that balances geospatial homogeneity and region contiguity to create ecological regions using multiple terrestrial, climatic, and freshwater geospatial data for 17 northeastern U.S. states (~1,800,000 km2); (2) identified which of the 52 geospatial features were most influential in creating the resulting 100 regions; and (3) tested the ability of these ecological regions to capture regional variation in water nutrients and clarity for ~6,000 lakes. We found that: (1) a combination of terrestrial, climatic, and freshwater geospatial features influenced region creation, suggesting that the oft-ignored freshwater landscape provides novel information on landscape variability not captured by traditionally used climate and terrestrial metrics; and (2) the delineated regions captured macroscale heterogeneity in ecosystem properties not included in region delineation—approximately 40% of the variation in total phosphorus and water clarity among lakes was at the regional

  17. Variational-moment method for computing magnetohydrodynamic equilibria

    International Nuclear Information System (INIS)

    Lao, L.L.

    1983-08-01

    A fast yet accurate method to compute magnetohydrodynamic equilibria is provided by the variational-moment method, which is similar to the classical Rayleigh-Ritz-Galerkin approximation. The equilibrium solution sought is decomposed into a spectral representation. The partial differential equations describing the equilibrium are then recast into their equivalent variational form and systematically reduced to an optimum finite set of coupled ordinary differential equations. An appropriate spectral decomposition can make the series representing the solution converge rapidly and hence substantially reduces the amount of computational time involved. The moment method was developed first to compute fixed-boundary inverse equilibria in axisymmetric toroidal geometry, and was demonstrated to be both efficient and accurate. The method has since been generalized to calculate free-boundary axisymmetric equilibria, to include toroidal plasma rotation and pressure anisotropy, and to treat three-dimensional toroidal geometry. In all these formulations, the flux surfaces are assumed to be smooth and nested so that the solutions can be decomposed in Fourier series in inverse coordinates. These recent developments and the advantages and limitations of the moment method are reviewed. The use of alternate coordinates for decomposition is discussed.

  18. Storm surge model based on variational data assimilation method

    Directory of Open Access Journals (Sweden)

    Shi-li Huang

    2010-06-01

    By combining computation and observation information, the variational data assimilation method has the ability to eliminate errors caused by the uncertainty of parameters in practical forecasting. It was applied to a storm surge model based on unstructured grids with high spatial resolution, with the aim of improving the forecasting accuracy of the storm surge. By controlling the wind stress drag coefficient, the variation-based model was developed and validated through data assimilation tests on an actual storm surge induced by a typhoon. In the data assimilation tests, the model accurately identified the wind stress drag coefficient and obtained results close to the true state. The actual storm surge induced by Typhoon 0515 was then forecast by the developed model, and the results demonstrate its efficiency in practical application.

  19. A convergent overlapping domain decomposition method for total variation minimization

    KAUST Repository

    Fornasier, Massimo

    2010-06-22

    In this paper we are concerned with the analysis of convergent sequential and parallel overlapping domain decomposition methods for the minimization of functionals formed by a discrepancy term with respect to the data and a total variation constraint. To our knowledge, this is the first successful attempt of addressing such a strategy for the nonlinear, nonadditive, and nonsmooth problem of total variation minimization. We provide several numerical experiments, showing the successful application of the algorithm for the restoration of 1D signals and 2D images in interpolation/inpainting problems, respectively, and in a compressed sensing problem, for recovering piecewise constant medical-type images from partial Fourier ensembles. © 2010 Springer-Verlag.

  20. Newton-type methods for optimization and variational problems

    CERN Document Server

    Izmailov, Alexey F

    2014-01-01

    This book presents comprehensive state-of-the-art theoretical analysis of the fundamental Newtonian and Newtonian-related approaches to solving optimization and variational problems. A central focus is the relationship between the basic Newton scheme for a given problem and algorithms that also enjoy fast local convergence. The authors develop general perturbed Newtonian frameworks that preserve fast convergence and consider specific algorithms as particular cases within those frameworks, i.e., as perturbations of the associated basic Newton iterations. This approach yields a set of tools for the unified treatment of various algorithms, including some not of the Newton type per se. Among the new subjects addressed is the class of degenerate problems. In particular, the phenomenon of attraction of Newton iterates to critical Lagrange multipliers and its consequences as well as stabilized Newton methods for variational problems and stabilized sequential quadratic programming for optimization. This volume will b...

  1. A Dimensionality Reduction-Based Multi-Step Clustering Method for Robust Vessel Trajectory Analysis

    Directory of Open Access Journals (Sweden)

    Huanhuan Li

    2017-08-01

    The Shipboard Automatic Identification System (AIS) is crucial for navigation safety and maritime surveillance; data mining and pattern analysis of AIS information have attracted considerable attention in terms of both basic research and practical applications. Clustering of spatio-temporal AIS trajectories can be used to identify abnormal patterns and mine customary route data for transportation safety. Thus, the capacities of navigation safety and maritime traffic monitoring could be enhanced correspondingly. However, trajectory clustering is often sensitive to undesirable outliers and is essentially more complex compared with traditional point clustering. To overcome this limitation, a multi-step trajectory clustering method is proposed in this paper for robust AIS trajectory clustering. In particular, Dynamic Time Warping (DTW), a similarity measurement method, is introduced in the first step to measure the distances between different trajectories. The calculated distances, inversely proportional to the similarities, constitute a distance matrix in the second step. Furthermore, as a widely used dimensionality reduction method, Principal Component Analysis (PCA) is exploited to decompose the obtained distance matrix. In particular, the top k principal components with above 95% accumulative contribution rate are extracted by PCA, and the number of the centers k is chosen. The k centers are found by the improved automatic center selection algorithm. In the last step, the improved center clustering algorithm with k clusters is implemented on the distance matrix to achieve the final AIS trajectory clustering results. In order to improve the accuracy of the proposed multi-step clustering algorithm, an automatic algorithm for choosing the k clusters is developed according to the similarity distance. Numerous experiments on realistic AIS trajectory datasets in the bridge area waterway and Mississippi River have been implemented to compare our

  2. A Dimensionality Reduction-Based Multi-Step Clustering Method for Robust Vessel Trajectory Analysis.

    Science.gov (United States)

    Li, Huanhuan; Liu, Jingxian; Liu, Ryan Wen; Xiong, Naixue; Wu, Kefeng; Kim, Tai-Hoon

    2017-08-04

    The Shipboard Automatic Identification System (AIS) is crucial for navigation safety and maritime surveillance; data mining and pattern analysis of AIS information have attracted considerable attention in terms of both basic research and practical applications. Clustering of spatio-temporal AIS trajectories can be used to identify abnormal patterns and mine customary route data for transportation safety. Thus, the capacities of navigation safety and maritime traffic monitoring could be enhanced correspondingly. However, trajectory clustering is often sensitive to undesirable outliers and is essentially more complex compared with traditional point clustering. To overcome this limitation, a multi-step trajectory clustering method is proposed in this paper for robust AIS trajectory clustering. In particular, the Dynamic Time Warping (DTW), a similarity measurement method, is introduced in the first step to measure the distances between different trajectories. The calculated distances, inversely proportional to the similarities, constitute a distance matrix in the second step. Furthermore, as a widely used dimensionality reduction method, Principal Component Analysis (PCA) is exploited to decompose the obtained distance matrix. In particular, the top k principal components with above 95% accumulative contribution rate are extracted by PCA, and the number of the centers k is chosen. The k centers are found by the improved automatic center selection algorithm. In the last step, the improved center clustering algorithm with k clusters is implemented on the distance matrix to achieve the final AIS trajectory clustering results. In order to improve the accuracy of the proposed multi-step clustering algorithm, an automatic algorithm for choosing the k clusters is developed according to the similarity distance. Numerous experiments on realistic AIS trajectory datasets in the bridge area waterway and Mississippi River have been implemented to compare our proposed method with
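
    A simplified sketch of this pipeline (DTW distances, PCA keeping 95% cumulative variance, then a plain K-means in place of the paper's improved centre-selection algorithm) might be:

```python
# Simplified multi-step trajectory clustering: (1) pairwise DTW distances,
# (2) PCA on the distance matrix at 95% cumulative variance, (3) K-means.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

def dtw_distance(a, b):
    """Basic O(len(a)*len(b)) dynamic time warping between two 2-D trajectories."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def cluster_trajectories(trajectories, k):
    trajectories = [np.asarray(t, dtype=float) for t in trajectories]
    n = len(trajectories)
    dist = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            dist[i, j] = dist[j, i] = dtw_distance(trajectories[i], trajectories[j])
    reduced = PCA(n_components=0.95).fit_transform(dist)   # keep 95% cumulative variance
    return KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(reduced)
```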

  3. A cluster merging method for time series microarray with production values.

    Science.gov (United States)

    Chira, Camelia; Sedano, Javier; Camara, Monica; Prieto, Carlos; Villar, Jose R; Corchado, Emilio

    2014-09-01

    A challenging task in time-course microarray data analysis is to cluster genes meaningfully, combining the information provided by multiple replicates covering the same key time points. This paper proposes a novel cluster merging method to accomplish this goal, obtaining groups of highly correlated genes. The main idea behind the proposed method is to generate a clustering starting from groups created based on individual temporal series (representing different biological replicates measured at the same time points) and merging them by taking into account the frequency with which two genes are assembled together in each clustering. The gene groups at the level of individual time series are generated using several shape-based clustering methods. This study is focused on a real-world time series microarray task with the aim to find co-expressed genes related to the production and growth of a certain bacterium. The shape-based clustering methods used at the level of individual time series rely on identifying similar gene expression patterns over time which, in some models, are further matched to the pattern of production/growth. The proposed cluster merging method is able to produce meaningful gene groups which can be naturally ranked by the level of agreement on the clustering among individual time series. The list of clusters and genes is further sorted based on the information correlation coefficient and new problem-specific relevant measures. Computational experiments and results of the cluster merging method are analyzed from a biological perspective and further compared with the clustering generated based on the mean value of time series and the same shape-based algorithm.
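
    A minimal sketch of the merging idea, with an assumed average-linkage cut instead of the paper's ranking scheme, could be:

```python
# Sketch: count how often two genes are grouped together across the per-replicate
# clusterings, then merge via hierarchical clustering on the co-assembly frequencies.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def merge_clusterings(label_sets, distance_threshold=0.5):
    """label_sets: list of 1-D label arrays, one clustering per biological replicate."""
    label_sets = [np.asarray(l) for l in label_sets]
    n = len(label_sets[0])
    co = np.zeros((n, n))
    for labels in label_sets:
        co += (labels[:, None] == labels[None, :]).astype(float)
    co /= len(label_sets)                       # co-assembly frequency in [0, 1]
    dist = 1.0 - co
    np.fill_diagonal(dist, 0.0)
    Z = linkage(squareform(dist, checks=False), method="average")
    return fcluster(Z, t=distance_threshold, criterion="distance")
```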

  4. Investigation on generalized Variational Nodal Methods for heterogeneous nodes

    International Nuclear Information System (INIS)

    Wang, Yongping; Wu, Hongchun; Li, Yunzhao; Cao, Liangzhi; Shen, Wei

    2017-01-01

    Highlights: • We developed two heterogeneous nodal methods based on the Variational Nodal Method. • Four problems were solved to evaluate the two heterogeneous nodal methods. • The function expansion method is good at treating continuous-changing heterogeneity. • The finite sub-element method is good at treating discontinuous-changing heterogeneity. - Abstract: The Variational Nodal Method (VNM) is generalized for heterogeneous nodes and applied to four kinds of problems including Molten Salt Reactor (MSR) core problem with continuous cross section profile, Pressurized Water Reactor (PWR) control rod cusping effect problem, PWR whole-core pin-by-pin problem, and heterogeneous PWR core problem without fuel-coolant homogenization in each pin cell. Two approaches have been investigated for the treatment of the nodal heterogeneity in this paper. To concentrate on spatial heterogeneity, diffusion approximation was adopted for the angular variable in neutron transport equation. To provide demonstrative numerical results, the codes in this paper were developed in slab geometry. The first method, named as function expansion (FE) method, expands nodal flux by orthogonal polynomials and the nodal cross sections are also expressed as spatial depended functions. The second path, named as finite sub-element (FS) method, takes advantage of the finite-element method by dividing each node into numbers of homogeneous sub-elements and expanding nodal flux into the combination of linear sub-element trial functions. Numerical tests have been carried out to evaluate the ability of the two nodal (coarse-mesh) heterogeneous VNMs by comparing with the fine-mesh homogeneous VNM. It has been demonstrated that both heterogeneous approaches can handle heterogeneous nodes. The FE method is good at continuous-changing heterogeneity as in the MSR core problem, while the FS method is good at discontinuous-changing heterogeneity such as the PWR pin-by-pin problem and heterogeneous PWR core

  5. Consensus of satellite cluster flight using an energy-matching optimal control method

    Science.gov (United States)

    Luo, Jianjun; Zhou, Liang; Zhang, Bo

    2017-11-01

    This paper presents an optimal control method for consensus of satellite cluster flight under a kind of energy matching condition. Firstly, the relation between energy matching and satellite periodically bounded relative motion is analyzed, and the satellite energy matching principle is applied to configure the initial conditions. Then, period-delayed errors are adopted as state variables to establish the period-delayed error dynamics models of a single satellite and the cluster. Next, a novel satellite cluster feedback control protocol with coupling gain is designed, so that the satellite cluster periodically bounded relative motion consensus problem (period-delayed error state consensus problem) is transformed into the stability of a set of matrices with the same low dimension. Based on the consensus region theory in the research of multi-agent system consensus issues, the coupling gain can be obtained to satisfy the requirement of the consensus region and to decouple the satellite cluster information topology and the feedback control gain matrix, which can be determined by the linear quadratic regulator (LQR) optimal method. This method can realize the consensus of satellite cluster period-delayed errors, leading to the consistency of the semi-major axes (SMA) and the energy matching of the satellite cluster. The satellites can then exhibit globally coordinated cluster behavior. Finally, the feasibility and effectiveness of the presented energy-matching optimal consensus for satellite cluster flight are verified through numerical simulations.
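
    The LQR ingredient of such a protocol can be sketched as follows; the double-integrator dynamics, weights, topology and coupling gain are illustrative assumptions, not the paper's models:

```python
# Sketch of an LQR-based consensus feedback: a gain K from the continuous-time
# algebraic Riccati equation, applied with a coupling gain over a communication Laplacian.
import numpy as np
from scipy.linalg import solve_continuous_are

def lqr_gain(A, B, Q, R):
    P = solve_continuous_are(A, B, Q, R)
    return np.linalg.solve(R, B.T @ P)          # K = R^{-1} B^T P

def consensus_control(states, A, B, Q, R, laplacian, coupling_gain):
    """u_i = -c * K * sum_j L_ij x_j (distributed consensus feedback)."""
    K = lqr_gain(A, B, Q, R)
    return -coupling_gain * (laplacian @ states) @ K.T

# Example: double-integrator-like error dynamics for 4 satellites in a chain topology
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
Q, R = np.eye(2), np.array([[1.0]])
L = np.array([[1, -1, 0, 0], [-1, 2, -1, 0], [0, -1, 2, -1], [0, 0, -1, 1]], dtype=float)
x = np.random.default_rng(0).normal(size=(4, 2))   # period-delayed error states
u = consensus_control(x, A, B, Q, R, L, coupling_gain=1.0)
```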

  6. An Extended Affinity Propagation Clustering Method Based on Different Data Density Types

    Directory of Open Access Journals (Sweden)

    XiuLi Zhao

    2015-01-01

    The affinity propagation (AP) algorithm, as a novel clustering method, does not require the users to specify the initial cluster centers in advance; it regards all data points equally as potential exemplars (cluster centers) and forms the clusters purely according to the degree of similarity among the data points. But in many cases there exist areas of different density within the same data set, which means that the data set is not distributed homogeneously. In such situations the AP algorithm cannot group the data points into ideal clusters. In this paper, we propose an extended AP clustering algorithm to deal with such a problem. There are two steps in our method: first, the data set is partitioned into several data density types according to the nearest-neighbor distance of each data point; then the AP clustering method is applied within each data density type to group the data points into clusters. Two experiments are carried out to evaluate the performance of our algorithm: one utilizes an artificial data set and the other uses a real seismic data set. The experimental results show that groups are obtained more accurately by our algorithm than by OPTICS or the AP clustering algorithm itself.
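
    A rough sketch of the two-step scheme, simplified to two density types split at the median nearest-neighbour distance, might be:

```python
# Sketch: split points into density types by nearest-neighbour distance, then run
# affinity propagation within each type. The two-type median split is a simplification.
import numpy as np
from sklearn.cluster import AffinityPropagation
from sklearn.neighbors import NearestNeighbors

def density_partitioned_ap(X, random_state=0):
    X = np.asarray(X, dtype=float)
    nn_dist = NearestNeighbors(n_neighbors=2).fit(X).kneighbors(X)[0][:, 1]
    dense = nn_dist <= np.median(nn_dist)        # two density types: dense vs sparse
    labels = np.empty(len(X), dtype=int)
    offset = 0
    for mask in (dense, ~dense):
        if mask.sum() == 0:
            continue
        ap = AffinityPropagation(random_state=random_state).fit(X[mask])
        labels[mask] = ap.labels_ + offset
        offset = labels[mask].max() + 1
    return labels
```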

  7. Comparison of three methods for the estimation of cross-shock electric potential using Cluster data

    Directory of Open Access Journals (Sweden)

    A. P. Dimmock

    2011-05-01

    Cluster four-point measurements provide a comprehensive dataset for the separation of temporal and spatial variations, which is crucial for the calculation of the cross shock electrostatic potential using electric field measurements. While Cluster is probably the most suited among present and past spacecraft missions to provide such a separation at the terrestrial bow shock, it is far from ideal for a study of the cross shock potential, since only 2 components of the electric field are measured in the spacecraft spin plane. The present paper is devoted to the comparison of 3 different techniques that can be used to estimate the potential with this limitation. The first technique is an estimate that takes into account only the projection of the measured components onto the shock normal. The second uses the ideal MHD condition E·B = 0 to estimate the third electric field component. The last method is based on the structure of the electric field in the Normal Incidence Frame (NIF), for which only the potential component along the shock normal and the motional electric field exist. All 3 approaches are used to estimate the potential for a single crossing of the terrestrial bow shock that took place on 31 March 2001. Surprisingly, all three methods lead to the same order of magnitude for the cross shock potential. It is argued that the third method must lead to more reliable results. The effect of the shock normal inaccuracy is investigated for this particular shock crossing. The resulting electrostatic potential appears too high in comparison with the theoretical results for low Mach number shocks. This shows the variability of the potential, interpreted in the frame of the non-stationary shock model.
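
    The second technique can be sketched directly: the unmeasured spin-axis component is reconstructed from E·B = 0 and the potential follows by integrating the normal electric field through the crossing (the constant shock speed along the normal is an assumed input):

```python
# Sketch of the E.B = 0 approach: reconstruct the third E component, then integrate
# E along the shock normal through the crossing to estimate the cross-shock potential.
import numpy as np

def ez_from_edotb(ex, ey, bx, by, bz):
    """Third E component from E.B = 0 (only meaningful when |Bz| is not too small)."""
    return -(ex * bx + ey * by) / bz

def cross_shock_potential(t, E, B, n_hat, v_shock_normal):
    """t: times [s]; E: (N,3) in mV/m with E[:,2] unknown; B: (N,3) in nT; n_hat: unit normal."""
    E = np.array(E, dtype=float)
    B = np.asarray(B, dtype=float)
    E[:, 2] = ez_from_edotb(E[:, 0], E[:, 1], B[:, 0], B[:, 1], B[:, 2])
    e_n = E @ np.asarray(n_hat, dtype=float)        # field along the normal, mV/m
    integrand = e_n * 1e-3 * v_shock_normal         # (V/m) * (m/s) -> V/s
    t = np.asarray(t, dtype=float)
    # trapezoidal integration of E_n dl = E_n v_n dt, with the usual minus sign
    return -np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(t))
```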

  8. An integral nodal variational method for multigroup criticality calculations

    International Nuclear Information System (INIS)

    Lewis, E.E.; Tsoulfanidis, N.

    2003-01-01

    An integral formulation of the variational nodal method is presented and applied to a series of benchmark criticality problems. The method combines an integral transport treatment of the even-parity flux within the spatial node with an odd-parity spherical harmonics expansion of the Lagrange multipliers at the node interfaces. The response matrices that result from this formulation are compatible with those in the VARIANT code at Argonne National Laboratory. Either homogeneous or heterogeneous nodes may be employed. In general, for calculations requiring higher-order angular approximations, the integral method yields solutions with comparable accuracy while requiring substantially less CPU time and memory than the standard spherical harmonics expansion using the same spatial approximations. (author)

  9. Equivalence of the generalized and complex Kohn variational methods

    Energy Technology Data Exchange (ETDEWEB)

    Cooper, J N; Armour, E A G [School of Mathematical Sciences, University Park, Nottingham NG7 2RD (United Kingdom); Plummer, M, E-mail: pmxjnc@googlemail.co [STFC Daresbury Laboratory, Daresbury, Warrington, Cheshire WA4 4AD (United Kingdom)

    2010-04-30

    For Kohn variational calculations on low energy (e⁺-H₂) elastic scattering, we prove that the phase shift approximation, obtained using the complex Kohn method, is precisely equal to a value which can be obtained immediately via the real-generalized Kohn method. Our treatment is sufficiently general to be applied directly to arbitrary potential scattering or single open channel scattering problems, with exchange if required. In the course of our analysis, we develop a framework formally to describe the anomalous behaviour of our generalized Kohn calculations in the regions of the well-known Schwartz singularities. This framework also explains the mathematical origin of the anomaly-free singularities we reported in a previous article. Moreover, we demonstrate a novelty: that explicit solutions of the Kohn equations are not required in order to calculate optimal phase shift approximations. We relate our rigorous framework to earlier descriptions of the Kohn-type methods.

  10. Equivalence of the generalized and complex Kohn variational methods

    International Nuclear Information System (INIS)

    Cooper, J N; Armour, E A G; Plummer, M

    2010-01-01

    For Kohn variational calculations on low energy (e⁺-H₂) elastic scattering, we prove that the phase shift approximation, obtained using the complex Kohn method, is precisely equal to a value which can be obtained immediately via the real-generalized Kohn method. Our treatment is sufficiently general to be applied directly to arbitrary potential scattering or single open channel scattering problems, with exchange if required. In the course of our analysis, we develop a framework formally to describe the anomalous behaviour of our generalized Kohn calculations in the regions of the well-known Schwartz singularities. This framework also explains the mathematical origin of the anomaly-free singularities we reported in a previous article. Moreover, we demonstrate a novelty: that explicit solutions of the Kohn equations are not required in order to calculate optimal phase shift approximations. We relate our rigorous framework to earlier descriptions of the Kohn-type methods.

  11. Total variation superiorized conjugate gradient method for image reconstruction

    Science.gov (United States)

    Zibetti, Marcelo V. W.; Lin, Chuan; Herman, Gabor T.

    2018-03-01

    The conjugate gradient (CG) method is commonly used for the relatively-rapid solution of least squares problems. In image reconstruction, the problem can be ill-posed and also contaminated by noise; due to this, approaches such as regularization should be utilized. Total variation (TV) is a useful regularization penalty, frequently utilized in image reconstruction for generating images with sharp edges. When a non-quadratic norm is selected for regularization, as is the case for TV, then it is no longer possible to use CG. Non-linear CG is an alternative, but it does not share the efficiency that CG shows with least squares and methods such as fast iterative shrinkage-thresholding algorithms (FISTA) are preferred for problems with TV norm. A different approach to including prior information is superiorization. In this paper it is shown that the conjugate gradient method can be superiorized. Five different CG variants are proposed, including preconditioned CG. The CG methods superiorized by the total variation norm are presented and their performance in image reconstruction is demonstrated. It is illustrated that some of the proposed variants of the superiorized CG method can produce reconstructions of superior quality to those produced by FISTA and in less computational time, due to the speed of the original CG for least squares problems. In the Appendix we examine the behavior of one of the superiorized CG methods (we call it S-CG); one of its input parameters is a positive number ɛ. It is proved that, for any given ɛ that is greater than the half-squared-residual for the least squares solution, S-CG terminates in a finite number of steps with an output for which the half-squared-residual is less than or equal to ɛ. Importantly, it is also the case that the output will have a lower value of TV than what would be provided by unsuperiorized CG for the same value ɛ of the half-squared residual.
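
    A minimal sketch of the superiorization idea, interleaving small diminishing total-variation-reducing steps with blocks of ordinary CG iterations on the normal equations (the CG recursion is restarted after each perturbation for simplicity, unlike the paper's variants), might be:

```python
# Sketch of TV-superiorized CG: ordinary CG blocks on the normal equations A^T A x = A^T b,
# interleaved with small, diminishing steps along a negative smoothed-TV gradient.
import numpy as np

def tv_gradient(x, eps=1e-8):
    """Gradient of a smoothed isotropic TV of a 2-D image (periodic wrap for brevity)."""
    dx = np.diff(x, axis=0, append=x[-1:, :])
    dy = np.diff(x, axis=1, append=x[:, -1:])
    mag = np.sqrt(dx**2 + dy**2 + eps)
    px, py = dx / mag, dy / mag
    div = (px - np.roll(px, 1, axis=0)) + (py - np.roll(py, 1, axis=1))
    return -div

def cg_normal_eq(A, b, x0, n_steps):
    """A few standard CG iterations on A^T A x = A^T b, starting from x0."""
    x = x0.copy()
    r = A.T @ (b - A @ x)
    p = r.copy()
    rs = r @ r
    for _ in range(n_steps):
        Ap = A.T @ (A @ p)
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

def superiorized_cg(A, b, shape, n_outer=20, n_inner=5, beta0=1.0, decay=0.8):
    x = np.zeros(A.shape[1])
    beta = beta0
    for _ in range(n_outer):
        g = tv_gradient(x.reshape(shape)).ravel()     # superiorization: nudge toward lower TV
        norm_g = np.linalg.norm(g)
        if norm_g > 0:
            x = x - beta * g / norm_g
        beta *= decay
        x = cg_normal_eq(A, b, x, n_inner)            # resume CG from the perturbed iterate
    return x.reshape(shape)
```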

  12. A NEW METHOD TO QUANTIFY X-RAY SUBSTRUCTURES IN CLUSTERS OF GALAXIES

    Energy Technology Data Exchange (ETDEWEB)

    Andrade-Santos, Felipe; Lima Neto, Gastao B.; Lagana, Tatiana F. [Departamento de Astronomia, Instituto de Astronomia, Geofisica e Ciencias Atmosfericas, Universidade de Sao Paulo, Geofisica e Ciencias Atmosfericas, Rua do Matao 1226, Cidade Universitaria, 05508-090 Sao Paulo, SP (Brazil)

    2012-02-20

    We present a new method to quantify substructures in clusters of galaxies, based on the analysis of the intensity of structures. This analysis is done in a residual image that is the result of the subtraction of a surface brightness model, obtained by fitting a two-dimensional analytical model (β-model or Sérsic profile) with elliptical symmetry, from the X-ray image. Our method is applied to 34 clusters observed by the Chandra Space Telescope that are in the redshift range z in [0.02, 0.2] and have a signal-to-noise ratio (S/N) greater than 100. We present the calibration of the method and the relations between the substructure level and physical quantities, such as the mass, X-ray luminosity, temperature, and cluster redshift. We use our method to separate the clusters into two sub-samples of high- and low-substructure levels. We conclude, using Monte Carlo simulations, that the method recovers the true amount of substructure very well for clusters with small angular core radii (with respect to the whole image size) and good S/N observations. We find no evidence of correlation between the substructure level and physical properties of the clusters such as gas temperature, X-ray luminosity, and redshift; however, the analysis suggests a trend between the substructure level and cluster mass. The scaling relations for the two sub-samples (high- and low-substructure level clusters) are different (they present an offset, i.e., given a fixed mass or temperature, low-substructure clusters tend to be more X-ray luminous), which is an important result for cosmological tests using the mass-luminosity relation to obtain the cluster mass function, since they rely on the assumption that clusters do not present different scaling relations according to their dynamical state.
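
    A rough sketch of the quantification step, fitting an elliptical 2-D β-model with a generic least-squares routine and reporting a simple residual fraction in place of the paper's calibrated statistic, could be:

```python
# Sketch: fit an elliptical 2-D beta-model to the X-ray image, subtract it, and report
# the summed absolute residual relative to the model as a crude substructure level.
import numpy as np
from scipy.optimize import curve_fit

def beta_model(coords, s0, x0, y0, rc, beta, ell, theta):
    x, y = coords
    ct, st = np.cos(theta), np.sin(theta)
    xr = (x - x0) * ct + (y - y0) * st
    yr = -(x - x0) * st + (y - y0) * ct
    r2 = xr**2 + (yr / (1.0 - ell))**2
    return s0 * (1.0 + r2 / rc**2) ** (0.5 - 3.0 * beta)

def substructure_level(image):
    ny, nx = image.shape
    y, x = np.mgrid[0:ny, 0:nx]
    coords = (x.ravel(), y.ravel())
    p0 = [image.max(), nx / 2, ny / 2, min(nx, ny) / 8, 0.7, 0.1, 0.0]
    popt, _ = curve_fit(beta_model, coords, image.ravel(), p0=p0, maxfev=20000)
    model = beta_model(coords, *popt).reshape(image.shape)
    residual = image - model
    return np.abs(residual).sum() / model.sum()
```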

  13. Leveraging long sequencing reads to investigate R-gene clustering and variation in sugar beet

    Science.gov (United States)

    Host-pathogen interactions are of prime importance to modern agriculture. Plants utilize various types of resistance genes to mitigate pathogen damage. Identification of the specific gene responsible for a specific resistance can be difficult due to duplication and clustering within R-gene families....

  14. Investigation of the cluster formation in lithium niobate crystals by computer modeling method

    Energy Technology Data Exchange (ETDEWEB)

    Voskresenskii, V. M.; Starodub, O. R., E-mail: ol-star@mail.ru; Sidorov, N. V.; Palatnikov, M. N. [Russian Academy of Sciences, Tananaev Institute of Chemistry and Technology of Rare Earth Elements and Mineral Raw Materials, Kola Science Centre (Russian Federation)

    2017-03-15

    The processes occurring upon the formation of energetically equilibrium oxygen-octahedral clusters in the ferroelectric phase of a stoichiometric lithium niobate (LiNbO₃) crystal have been investigated by the computer modeling method within the semiclassical atomistic model. An energetically favorable cluster size (at which a structure similar to that of a congruent crystal is organized) is shown to exist. A stoichiometric cluster cannot exist because of the electroneutrality loss. The most energetically favorable cluster is that with a Li/Nb ratio of about 0.945, a value close to the lithium-to-niobium ratio for a congruent crystal.

  15. Genetic variations and haplotype diversity of the UGT1 gene cluster in the Chinese population.

    Directory of Open Access Journals (Sweden)

    Jing Yang

    Vertebrates require tremendous molecular diversity to defend against numerous small hydrophobic chemicals. UDP-glucuronosyltransferases (UGTs) are a large family of detoxification enzymes that glucuronidate xenobiotics and endobiotics, facilitating their excretion from the body. The UGT1 gene cluster contains a tandem array of variable first exons, each preceded by a specific promoter, and a common set of downstream constant exons, similar to the genomic organization of the protocadherin (Pcdh), immunoglobulin, and T-cell receptor gene clusters. To assist pharmacogenomics studies in Chinese, we sequenced nine first exons, promoter and intronic regions, and five common exons of the UGT1 gene cluster in a population sample of 253 unrelated Chinese individuals. We identified 101 polymorphisms and found 15 novel SNPs. We then computed allele frequencies for each polymorphism and reconstructed their linkage disequilibrium (LD) map. The UGT1 cluster can be divided into five linkage blocks: Block 9 (UGT1A9), Block 9/7/6 (UGT1A9, UGT1A7, and UGT1A6), Block 5 (UGT1A5), Block 4/3 (UGT1A4 and UGT1A3), and Block 3' UTR. Furthermore, we inferred haplotypes and selected their tagSNPs. Finally, comparing our data with those of three other populations of the HapMap project revealed ethnic specificity of the UGT1 genetic diversity in Chinese. These findings have important implications for future molecular genetic studies of the UGT1 gene cluster as well as for personalized medical therapies in Chinese.

  16. Clustering Methods; Part IV of Scientific Report No. ISR-18, Information Storage and Retrieval...

    Science.gov (United States)

    Cornell Univ., Ithaca, NY. Dept. of Computer Science.

    Two papers are included as Part Four of this report on Salton's Magical Automatic Retriever of Texts (SMART) project. The first paper, "A Controlled Single Pass Classification Algorithm with Application to Multilevel Clustering" by D. B. Johnson and J. M. Laferente, presents a single-pass clustering method which compares favorably…

  17. An incremental DPMM-based method for trajectory clustering, modeling, and retrieval.

    Science.gov (United States)

    Hu, Weiming; Li, Xi; Tian, Guodong; Maybank, Stephen; Zhang, Zhongfei

    2013-05-01

    Trajectory analysis is the basis for many applications, such as indexing of motion events in videos, activity recognition, and surveillance. In this paper, the Dirichlet process mixture model (DPMM) is applied to trajectory clustering, modeling, and retrieval. We propose an incremental version of a DPMM-based clustering algorithm and apply it to cluster trajectories. An appropriate number of trajectory clusters is determined automatically. When trajectories belonging to new clusters arrive, the new clusters can be identified online and added to the model without any retraining using the previous data. A time-sensitive Dirichlet process mixture model (tDPMM) is applied to each trajectory cluster for learning the trajectory pattern which represents the time-series characteristics of the trajectories in the cluster. Then, a parameterized index is constructed for each cluster. A novel likelihood estimation algorithm for the tDPMM is proposed, and a trajectory-based video retrieval model is developed. The tDPMM-based probabilistic matching method and the DPMM-based model growing method are combined to make the retrieval model scalable and adaptable. Experimental comparisons with state-of-the-art algorithms demonstrate the effectiveness of our algorithm.
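
    A simplified, non-incremental sketch using a truncated Dirichlet-process Gaussian mixture to pick the number of trajectory clusters automatically (trajectories are assumed to be resampled to a fixed length; the time-sensitive DPMM and online updates of the paper are not reproduced) might be:

```python
# Sketch: truncated Dirichlet-process Gaussian mixture over fixed-length trajectory
# features, letting the model decide how many clusters carry appreciable weight.
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

def resample_trajectory(traj, n_points=20):
    """Linear re-sampling of a (T, 2) trajectory to a fixed-length feature vector."""
    traj = np.asarray(traj, dtype=float)
    s = np.linspace(0.0, 1.0, len(traj))
    s_new = np.linspace(0.0, 1.0, n_points)
    return np.concatenate([np.interp(s_new, s, traj[:, d]) for d in range(traj.shape[1])])

def dp_cluster(trajectories, max_components=20, seed=0):
    X = np.vstack([resample_trajectory(t) for t in trajectories])
    dpgmm = BayesianGaussianMixture(
        n_components=max_components,
        weight_concentration_prior_type="dirichlet_process",
        covariance_type="diag",
        random_state=seed,
    )
    labels = dpgmm.fit_predict(X)
    return labels, dpgmm
```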

  18. Novel crystal timing calibration method based on total variation

    Science.gov (United States)

    Yu, Xingjian; Isobe, Takashi; Watanabe, Mitsuo; Liu, Huafeng

    2016-11-01

    A novel crystal timing calibration method based on total variation (TV), abbreviated as ‘TV merge’, has been developed for a high-resolution positron emission tomography (PET) system. The proposed method was developed for a system with a large number of crystals; it can provide timing calibration at the crystal level. In the proposed method, the timing calibration process was formulated as a linear problem. To robustly optimize the timing resolution, a TV constraint was added to the linear equation. Moreover, to solve the computer memory problem associated with the calculation of the timing calibration factors for systems with a large number of crystals, the merge component was used for obtaining the crystal level timing calibration values. Compared with other conventional methods, the data measured from a standard cylindrical phantom filled with a radioisotope solution were sufficient for performing a high-precision crystal-level timing calibration. In this paper, both simulation and experimental studies were performed to demonstrate the effectiveness and robustness of the TV merge method. We compare the timing resolutions of a ²²Na point source, which was located in the field of view (FOV) of the brain PET system, with various calibration techniques. After implementing the TV merge method, the timing resolution improved from 3.34 ns at full width at half maximum (FWHM) to 2.31 ns FWHM.

  19. PARTIAL TRAINING METHOD FOR HEURISTIC ALGORITHM OF POSSIBLE CLUSTERIZATION UNDER UNKNOWN NUMBER OF CLASSES

    Directory of Open Access Journals (Sweden)

    D. A. Viattchenin

    2009-01-01

    A method for constructing a subset of labeled objects which is used in a heuristic algorithm of possible clusterization with partial training is proposed in the paper. The method is based on data preprocessing by the heuristic algorithm of possible clusterization using a transitive closure of a fuzzy tolerance. Method efficiency is demonstrated by way of an illustrative example.
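
    The preprocessing step named above, the max-min transitive closure of a fuzzy tolerance relation, can be sketched as follows:

```python
# Sketch: max-min transitive closure of a fuzzy tolerance (reflexive, symmetric) matrix,
# computed by repeated max-min composition until the relation stops changing.
import numpy as np

def transitive_closure(R, max_iter=100):
    T = np.array(R, dtype=float)
    for _ in range(max_iter):
        # (T o T)[i, j] = max_k min(T[i, k], T[k, j])
        composed = np.max(np.minimum(T[:, :, None], T[None, :, :]), axis=1)
        T_new = np.maximum(T, composed)
        if np.allclose(T_new, T):
            break
        T = T_new
    return T
```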

  20. A variational Bayesian method to inverse problems with impulsive noise

    KAUST Repository

    Jin, Bangti

    2012-01-01

    We propose a novel numerical method for solving inverse problems subject to impulsive noise, which possibly contains a large number of outliers. The approach is of Bayesian type, and it exploits a heavy-tailed t distribution for data noise to achieve robustness with respect to outliers. A hierarchical model with all hyper-parameters automatically determined from the given data is described. An algorithm of variational type is developed by minimizing the Kullback-Leibler divergence between the true posterior distribution and a separable approximation. The numerical method is illustrated on several one- and two-dimensional linear and nonlinear inverse problems arising from heat conduction, including estimating boundary temperature, heat flux and heat transfer coefficient. The results show its robustness to outliers and the fast and steady convergence of the algorithm. © 2011 Elsevier Inc.

  1. A two-stage method for microcalcification cluster segmentation in mammography by deformable models

    International Nuclear Information System (INIS)

    Arikidis, N.; Kazantzi, A.; Skiadopoulos, S.; Karahaliou, A.; Costaridou, L.; Vassiou, K.

    2015-01-01

    Purpose: Segmentation of microcalcification (MC) clusters in x-ray mammography is a difficult task for radiologists. Accurate segmentation is prerequisite for quantitative image analysis of MC clusters and subsequent feature extraction and classification in computer-aided diagnosis schemes. Methods: In this study, a two-stage semiautomated segmentation method of MC clusters is investigated. The first stage is targeted to accurate and time efficient segmentation of the majority of the particles of a MC cluster, by means of a level set method. The second stage is targeted to shape refinement of selected individual MCs, by means of an active contour model. Both methods are applied in the framework of a rich scale-space representation, provided by the wavelet transform at integer scales. Segmentation reliability of the proposed method in terms of inter- and intraobserver agreement was evaluated in a case sample of 80 MC clusters originating from the digital database for screening mammography, corresponding to 4 morphology types (punctate: 22, fine linear branching: 16, pleomorphic: 18, and amorphous: 24) of MC clusters, assessing radiologists' segmentations quantitatively by two distance metrics (Hausdorff distance, HDIST_cluster; average of minimum distance, AMINDIST_cluster) and the area overlap measure (AOM_cluster). The effect of the proposed segmentation method on MC cluster characterization accuracy was evaluated in a case sample of 162 pleomorphic MC clusters (72 malignant and 90 benign). Ten MC cluster features, targeted to capture morphologic properties of individual MCs in a cluster (area, major length, perimeter, compactness, and spread), were extracted and a correlation-based feature selection method yielded a feature subset to feed in a support vector machine classifier. Classification performance of the MC cluster features was estimated by means of the area under receiver operating characteristic curve (A_z ± standard error) utilizing tenfold cross

  2. Hamiltonian lattice field theory: Computer calculations using variational methods

    International Nuclear Information System (INIS)

    Zako, R.L.

    1991-01-01

    I develop a variational method for systematic numerical computation of physical quantities -- bound state energies and scattering amplitudes -- in quantum field theory. An infinite-volume, continuum theory is approximated by a theory on a finite spatial lattice, which is amenable to numerical computation. I present an algorithm for computing approximate energy eigenvalues and eigenstates in the lattice theory and for bounding the resulting errors. I also show how to select basis states and choose variational parameters in order to minimize errors. The algorithm is based on the Rayleigh-Ritz principle and Kato's generalizations of Temple's formula. The algorithm could be adapted to systems such as atoms and molecules. I show how to compute Green's functions from energy eigenvalues and eigenstates in the lattice theory, and relate these to physical (renormalized) coupling constants, bound state energies and Green's functions. Thus one can compute approximate physical quantities in a lattice theory that approximates a quantum field theory with specified physical coupling constants. I discuss the errors in both approximations. In principle, the errors can be made arbitrarily small by increasing the size of the lattice, decreasing the lattice spacing and computing sufficiently long. Unfortunately, I do not understand the infinite-volume and continuum limits well enough to quantify errors due to the lattice approximation. Thus the method is currently incomplete. I apply the method to real scalar field theories using a Fock basis of free particle states. All needed quantities can be calculated efficiently with this basis. The generalization to more complicated theories is straightforward. I describe a computer implementation of the method and present numerical results for simple quantum mechanical systems

  3. Hamiltonian lattice field theory: Computer calculations using variational methods

    International Nuclear Information System (INIS)

    Zako, R.L.

    1991-01-01

    A variational method is developed for systematic numerical computation of physical quantities-bound state energies and scattering amplitudes-in quantum field theory. An infinite-volume, continuum theory is approximated by a theory on a finite spatial lattice, which is amenable to numerical computation. An algorithm is presented for computing approximate energy eigenvalues and eigenstates in the lattice theory and for bounding the resulting errors. It is shown how to select basis states and choose variational parameters in order to minimize errors. The algorithm is based on the Rayleigh-Ritz principle and Kato's generalizations of Temple's formula. The algorithm could be adapted to systems such as atoms and molecules. It is shown how to compute Green's functions from energy eigenvalues and eigenstates in the lattice theory, and relate these to physical (renormalized) coupling constants, bound state energies and Green's functions. Thus one can compute approximate physical quantities in a lattice theory that approximates a quantum field theory with specified physical coupling constants. The author discusses the errors in both approximations. In principle, the errors can be made arbitrarily small by increasing the size of the lattice, decreasing the lattice spacing and computing sufficiently long. Unfortunately, the author does not understand the infinite-volume and continuum limits well enough to quantify errors due to the lattice approximation. Thus the method is currently incomplete. The method is applied to real scalar field theories using a Fock basis of free particle states. All needed quantities can be calculated efficiently with this basis. The generalization to more complicated theories is straightforward. The author describes a computer implementation of the method and present numerical results for simple quantum mechanical systems
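
    The Rayleigh-Ritz/Temple machinery mentioned in these records can be illustrated on a small matrix Hamiltonian; the discretized harmonic oscillator and the Ritz estimate of the first excited level used below are illustrative assumptions rather than the rigorous bounds discussed in the abstract:

```python
# Sketch: Rayleigh-Ritz upper bound and a Temple-type estimate for the ground-state
# energy of a small matrix Hamiltonian (finite-difference 1-D harmonic oscillator).
import numpy as np

def discretized_oscillator(n=200, L=10.0):
    """Finite-difference H = -(1/2) d^2/dx^2 + x^2/2 on [-L/2, L/2]."""
    dx = L / (n - 1)
    x = np.linspace(-L / 2, L / 2, n)
    kinetic = (np.diag(np.full(n, 1.0))
               - 0.5 * np.diag(np.ones(n - 1), 1)
               - 0.5 * np.diag(np.ones(n - 1), -1)) / dx**2
    return kinetic + np.diag(0.5 * x**2), x

def ritz_and_temple(H, basis):
    """Ritz upper bound and Temple estimate from a small trial basis (columns)."""
    Q, _ = np.linalg.qr(basis)                  # orthonormalize the trial basis
    h = Q.T @ H @ Q
    evals, evecs = np.linalg.eigh(h)
    v = Q @ evecs[:, 0]                         # lowest Ritz vector
    lam = v @ H @ v                             # Rayleigh quotient (upper bound on E0)
    var = v @ H @ (H @ v) - lam**2              # energy variance <H^2> - <H>^2
    e1_est = evals[1]                           # crude stand-in for a bound on E1
    return lam, lam - var / (e1_est - lam)

H, x = discretized_oscillator()
basis = np.column_stack([np.exp(-0.4 * x**2),
                         x * np.exp(-0.4 * x**2),
                         x**2 * np.exp(-0.4 * x**2)])
upper, lower = ritz_and_temple(H, basis)
print(upper, lower)   # the discrete ground-state energy (about 0.5) typically lies in between
```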

  4. The swift UVOT stars survey. I. Methods and test clusters

    Energy Technology Data Exchange (ETDEWEB)

    Siegel, Michael H.; Porterfield, Blair L.; Linevsky, Jacquelyn S.; Bond, Howard E.; Hoversten, Erik A.; Berrier, Joshua L.; Gronwall, Caryl A. [Department of Astronomy and Astrophysics, The Pennsylvania State University, 525 Davey Laboratory, University Park, PA 16802 (United States); Holland, Stephen T. [Space Telescope Science Institute, 3700 San Martin Drive, Baltimore, MD 21218 (United States); Breeveld, Alice A. [Mullard Space Science Laboratory, University College London, Holmbury St. Mary, Dorking, Surrey RH5 6NT (United Kingdom); Brown, Peter J., E-mail: siegel@astro.psu.edu, E-mail: blp14@psu.edu, E-mail: heb11@psu.edu, E-mail: caryl@astro.psu.edu, E-mail: sholland@stsci.edu, E-mail: aab@mssl.ucl.ac.uk, E-mail: grbpeter@yahoo.com [George P. and Cynthia Woods Mitchell Institute for Fundamental Physics and Astronomy, Texas A. and M. University, Department of Physics and Astronomy, 4242 TAMU, College Station, TX 77843 (United States)

    2014-12-01

    We describe the motivations and background of a large survey of nearby stellar populations using the Ultraviolet Optical Telescope (UVOT) on board the Swift Gamma-Ray Burst Mission. UVOT, with its wide field, near-UV sensitivity, and 2.″3 spatial resolution, is uniquely suited to studying nearby stellar populations and providing insight into the near-UV properties of hot stars and the contribution of those stars to the integrated light of more distant stellar populations. We review the state of UV stellar photometry, outline the survey, and address problems specific to wide- and crowded-field UVOT photometry. We present color–magnitude diagrams of the nearby open clusters M67, NGC 188, and NGC 2539, and the globular cluster M79. We demonstrate that UVOT can easily discern the young- and intermediate-age main sequences, blue stragglers, and hot white dwarfs, producing results consistent with previous studies. We also find that it characterizes the blue horizontal branch of M79 and easily identifies a known post-asymptotic giant branch star.

  5. The swift UVOT stars survey. I. Methods and test clusters

    International Nuclear Information System (INIS)

    Siegel, Michael H.; Porterfield, Blair L.; Linevsky, Jacquelyn S.; Bond, Howard E.; Hoversten, Erik A.; Berrier, Joshua L.; Gronwall, Caryl A.; Holland, Stephen T.; Breeveld, Alice A.; Brown, Peter J.

    2014-01-01

    We describe the motivations and background of a large survey of nearby stellar populations using the Ultraviolet Optical Telescope (UVOT) on board the Swift Gamma-Ray Burst Mission. UVOT, with its wide field, near-UV sensitivity, and 2.″3 spatial resolution, is uniquely suited to studying nearby stellar populations and providing insight into the near-UV properties of hot stars and the contribution of those stars to the integrated light of more distant stellar populations. We review the state of UV stellar photometry, outline the survey, and address problems specific to wide- and crowded-field UVOT photometry. We present color–magnitude diagrams of the nearby open clusters M67, NGC 188, and NGC 2539, and the globular cluster M79. We demonstrate that UVOT can easily discern the young- and intermediate-age main sequences, blue stragglers, and hot white dwarfs, producing results consistent with previous studies. We also find that it characterizes the blue horizontal branch of M79 and easily identifies a known post-asymptotic giant branch star.

  6. Comprehensive assessment of sequence variation within the copy number variable defensin cluster on 8p23 by target enriched in-depth 454 sequencing

    Directory of Open Access Journals (Sweden)

    Zhang Xinmin

    2011-05-01

    Background: In highly copy number variable (CNV) regions such as the human defensin gene locus, comprehensive assessment of sequence variations is challenging. PCR approaches are practically restricted to tiny fractions, and next-generation sequencing (NGS) approaches of whole individual genomes, e.g. by the 1000 Genomes Project, are confined by an affordable sequence depth. Combining target enrichment with NGS may represent a feasible approach. Results: As a proof of principle, we enriched a ~850 kb section comprising the CNV defensin gene cluster DEFB, the invariable DEFA part and 11 control regions from two genomes by sequence capture and sequenced it by 454 technology. 6,651 differences to the human reference genome were found. Comparison to HapMap genotypes revealed sensitivities and specificities in the range of 94% to 99% for the identification of variations. Using error probabilities for rigorous filtering revealed 2,886 unique single nucleotide variations (SNVs) including 358 putative novel ones. DEFB CN determinations by haplotype ratios were in agreement with alternative methods. Conclusion: Although currently labor intensive and costly, target enriched NGS provides a powerful tool for the comprehensive assessment of SNVs in highly polymorphic CNV regions of individual genomes. Furthermore, it reveals considerable amounts of putative novel variations and simultaneously allows CN estimation.

  7. Fast optimization of binary clusters using a novel dynamic lattice searching method

    International Nuclear Information System (INIS)

    Wu, Xia; Cheng, Wen

    2014-01-01

    Global optimization of binary clusters has been a difficult task despite much effort and many efficient methods. To handle the two types of elements in binary clusters (i.e., the homotop problem), two classes of virtual dynamic lattices are constructed and a modified dynamic lattice searching (DLS) method, i.e., the binary DLS (BDLS) method, is developed. However, it was found that the BDLS can only be utilized for the optimization of small binary clusters, because the homotop problem is hard to solve without an atomic exchange operation. Therefore, the iterated local search (ILS) method is adopted to solve the homotop problem, and an efficient method based on BDLS and ILS, named BDLS-ILS, is presented for global optimization of binary clusters. In order to assess the efficiency of the proposed method, binary Lennard-Jones clusters with up to 100 atoms are investigated. Results show that the method is efficient. Furthermore, the BDLS-ILS method is also adopted to study the geometrical structures of (AuPd)79 clusters with DFT-fitted parameters of the Gupta potential.
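
    The ILS ingredient for the homotop problem can be sketched as atom-type exchange moves followed by local relaxation of a binary Lennard-Jones cluster; the potential parameters and acceptance rule below are illustrative assumptions, not the paper's settings:

```python
# Sketch: accept-if-better iterated local search over unlike-atom exchange moves,
# each followed by a local relaxation of a binary Lennard-Jones cluster.
import numpy as np
from scipy.optimize import minimize

def blj_energy(x, types, sigma=(1.0, 1.05, 1.1), eps=(1.0, 1.0, 1.0)):
    """Binary LJ energy; pair parameters indexed by types[i] + types[j] (0=AA, 1=AB, 2=BB)."""
    types = np.asarray(types)
    sigma, eps = np.asarray(sigma), np.asarray(eps)
    pos = x.reshape(-1, 3)
    e = 0.0
    for i in range(len(pos) - 1):
        d = np.linalg.norm(pos[i + 1:] - pos[i], axis=1)
        k = types[i] + types[i + 1:]
        sr6 = (sigma[k] / d) ** 6
        e += np.sum(4.0 * eps[k] * (sr6**2 - sr6))
    return e

def relax(pos, types):
    """Local relaxation of the coordinates at fixed atom types."""
    res = minimize(blj_energy, np.asarray(pos, dtype=float).ravel(),
                   args=(types,), method="L-BFGS-B")
    return res.x.reshape(-1, 3), res.fun

def iterated_exchange_search(pos, types, n_iter=50, seed=0):
    rng = np.random.default_rng(seed)
    types = np.asarray(types)
    best_pos, best_e = relax(pos, types)
    best_types = types.copy()
    for _ in range(n_iter):
        t = best_types.copy()
        a = rng.choice(np.where(t == 0)[0])
        b = rng.choice(np.where(t == 1)[0])
        t[a], t[b] = t[b], t[a]                 # exchange two atoms of different type
        new_pos, e = relax(best_pos, t)
        if e < best_e:
            best_pos, best_types, best_e = new_pos, t, e
    return best_pos, best_types, best_e
```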

  8. Heuristic methods using grasp, path relinking and variable neighborhood search for the clustered traveling salesman problem

    Directory of Open Access Journals (Sweden)

    Mário Mestria

    2013-08-01

    The Clustered Traveling Salesman Problem (CTSP) is a generalization of the Traveling Salesman Problem (TSP) in which the set of vertices is partitioned into disjoint clusters and the objective is to find a minimum-cost Hamiltonian cycle such that the vertices of each cluster are visited contiguously. The CTSP is NP-hard and, in this context, we propose heuristic methods for the CTSP using GRASP, Path Relinking and Variable Neighborhood Descent (VND). The heuristic methods were tested using Euclidean instances with up to 2000 vertices and clusters of 4 to 150 vertices. Computational tests were performed to compare the performance of the heuristic methods with an exact algorithm using the parallel CPLEX software. The computational results showed that the hybrid heuristic method using VND outperforms the other heuristic methods.

  9. Clustering and training set selection methods for improving the accuracy of quantitative laser induced breakdown spectroscopy

    International Nuclear Information System (INIS)

    Anderson, Ryan B.; Bell, James F.; Wiens, Roger C.; Morris, Richard V.; Clegg, Samuel M.

    2012-01-01

    We investigated five clustering and training set selection methods to improve the accuracy of quantitative chemical analysis of geologic samples by laser induced breakdown spectroscopy (LIBS) using partial least squares (PLS) regression. The LIBS spectra were previously acquired for 195 rock slabs and 31 pressed powder geostandards under 7 Torr CO 2 at a stand-off distance of 7 m at 17 mJ per pulse to simulate the operational conditions of the ChemCam LIBS instrument on the Mars Science Laboratory Curiosity rover. The clustering and training set selection methods, which do not require prior knowledge of the chemical composition of the test-set samples, are based on grouping similar spectra and selecting appropriate training spectra for the partial least squares (PLS2) model. These methods were: (1) hierarchical clustering of the full set of training spectra and selection of a subset for use in training; (2) k-means clustering of all spectra and generation of PLS2 models based on the training samples within each cluster; (3) iterative use of PLS2 to predict sample composition and k-means clustering of the predicted compositions to subdivide the groups of spectra; (4) soft independent modeling of class analogy (SIMCA) classification of spectra, and generation of PLS2 models based on the training samples within each class; (5) use of Bayesian information criteria (BIC) to determine an optimal number of clusters and generation of PLS2 models based on the training samples within each cluster. The iterative method and the k-means method using 5 clusters showed the best performance, improving the absolute quadrature root mean squared error (RMSE) by ∼ 3 wt.%. The statistical significance of these improvements was ∼ 85%. Our results show that although clustering methods can modestly improve results, a large and diverse training set is the most reliable way to improve the accuracy of quantitative LIBS. In particular, additional sulfate standards and specifically
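
    Method (2) from the list above can be sketched with off-the-shelf tools: k-means on the spectra, then an independent PLS model per cluster; the cluster count and component numbers are illustrative, not the paper's tuned values:

```python
# Sketch of k-means clustering of spectra followed by one PLS regression model per
# cluster; test spectra are routed to the model of their nearest cluster.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.cross_decomposition import PLSRegression

def clustered_pls_fit(train_spectra, train_compositions, n_clusters=5, n_components=10):
    """train_spectra: (n, n_channels); train_compositions: 2-D (n, n_oxides)."""
    train_spectra = np.asarray(train_spectra, dtype=float)
    train_compositions = np.asarray(train_compositions, dtype=float)
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(train_spectra)
    models = {}
    for c in range(n_clusters):
        mask = km.labels_ == c
        n_comp = min(n_components, max(1, mask.sum() - 1))
        models[c] = PLSRegression(n_components=n_comp).fit(
            train_spectra[mask], train_compositions[mask])
    return km, models

def clustered_pls_predict(km, models, test_spectra):
    test_spectra = np.asarray(test_spectra, dtype=float)
    clusters = km.predict(test_spectra)
    n_targets = next(iter(models.values())).y_loadings_.shape[0]
    preds = np.zeros((len(test_spectra), n_targets))
    for c, model in models.items():
        mask = clusters == c
        if mask.any():
            preds[mask] = model.predict(test_spectra[mask])
    return preds
```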

  10. Efficient nonparametric and asymptotic Bayesian model selection methods for attributed graph clustering

    KAUST Repository

    Xu, Zhiqiang

    2017-02-16

    Attributed graph clustering, also known as community detection on attributed graphs, has attracted much interest recently due to the ubiquity of attributed graphs in real life. Many existing algorithms have been proposed for this problem, which are either distance based or model based. However, model selection in attributed graph clustering has not been well addressed; that is, most existing algorithms assume the cluster number to be known a priori. In this paper, we propose two efficient approaches for attributed graph clustering with automatic model selection. The first approach is a popular Bayesian nonparametric method, while the second approach is an asymptotic method based on a recently proposed model selection criterion, the factorized information criterion. Experimental results on both synthetic and real datasets demonstrate that our approaches for attributed graph clustering with automatic model selection significantly outperform the state-of-the-art algorithm.

  11. Efficient nonparametric and asymptotic Bayesian model selection methods for attributed graph clustering

    KAUST Repository

    Xu, Zhiqiang; Cheng, James; Xiao, Xiaokui; Fujimaki, Ryohei; Muraoka, Yusuke

    2017-01-01

    Attributed graph clustering, also known as community detection on attributed graphs, has attracted much interest recently due to the ubiquity of attributed graphs in real life. Many existing algorithms have been proposed for this problem, which are either distance based or model based. However, model selection in attributed graph clustering has not been well addressed; that is, most existing algorithms assume the cluster number to be known a priori. In this paper, we propose two efficient approaches for attributed graph clustering with automatic model selection. The first approach is a popular Bayesian nonparametric method, while the second approach is an asymptotic method based on a recently proposed model selection criterion, the factorized information criterion. Experimental results on both synthetic and real datasets demonstrate that our approaches for attributed graph clustering with automatic model selection significantly outperform the state-of-the-art algorithm.

  12. Clustering Scientific Publications Based on Citation Relations: A Systematic Comparison of Different Methods.

    Science.gov (United States)

    Šubelj, Lovro; van Eck, Nees Jan; Waltman, Ludo

    2016-01-01

    Clustering methods are applied regularly in the bibliometric literature to identify research areas or scientific fields. These methods are for instance used to group publications into clusters based on their relations in a citation network. In the network science literature, many clustering methods, often referred to as graph partitioning or community detection techniques, have been developed. Focusing on the problem of clustering the publications in a citation network, we present a systematic comparison of the performance of a large number of these clustering methods. Using a number of different citation networks, some of them relatively small and others very large, we extensively study the statistical properties of the results provided by different methods. In addition, we also carry out an expert-based assessment of the results produced by different methods. The expert-based assessment focuses on publications in the field of scientometrics. Our findings seem to indicate that there is a trade-off between different properties that may be considered desirable for a good clustering of publications. Overall, map equation methods appear to perform best in our analysis, suggesting that these methods deserve more attention from the bibliometric community.
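
    Readers who want to experiment with clustering a citation network can start from a standard community detection routine, as in the snippet below, which uses a modularity-based method from networkx on a made-up toy graph. Note that this is only a convenient baseline, not the map equation approach that performs best in the comparison.

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Toy directed citation edges (citing -> cited); purely illustrative.
edges = [("p1", "p2"), ("p1", "p3"), ("p2", "p3"), ("p4", "p5"),
         ("p5", "p6"), ("p4", "p6"), ("p3", "p6")]

# Community detection is commonly run on the undirected version of the citation graph.
G = nx.Graph()
G.add_edges_from(edges)

communities = greedy_modularity_communities(G)
for i, community in enumerate(communities):
    print(f"cluster {i}: {sorted(community)}")
```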

  13. Clustering Scientific Publications Based on Citation Relations: A Systematic Comparison of Different Methods

    Science.gov (United States)

    Šubelj, Lovro; van Eck, Nees Jan; Waltman, Ludo

    2016-01-01

    Clustering methods are applied regularly in the bibliometric literature to identify research areas or scientific fields. These methods are for instance used to group publications into clusters based on their relations in a citation network. In the network science literature, many clustering methods, often referred to as graph partitioning or community detection techniques, have been developed. Focusing on the problem of clustering the publications in a citation network, we present a systematic comparison of the performance of a large number of these clustering methods. Using a number of different citation networks, some of them relatively small and others very large, we extensively study the statistical properties of the results provided by different methods. In addition, we also carry out an expert-based assessment of the results produced by different methods. The expert-based assessment focuses on publications in the field of scientometrics. Our findings seem to indicate that there is a trade-off between different properties that may be considered desirable for a good clustering of publications. Overall, map equation methods appear to perform best in our analysis, suggesting that these methods deserve more attention from the bibliometric community. PMID:27124610

  14. Cluster cosmological analysis with X-ray instrumental observables: introduction and testing of AsPIX method

    International Nuclear Information System (INIS)

    Valotti, Andrea

    2016-01-01

    Cosmology is one of the fundamental pillars of astrophysics, and as such it contains many unsolved puzzles. To investigate some of those puzzles, we analyze X-ray surveys of galaxy clusters. These surveys are possible thanks to the bremsstrahlung emission of the intra-cluster medium. The simultaneous fit of cluster counts as a function of mass and distance provides an independent measure of cosmological parameters such as Ωm, σ8, and the dark energy equation of state w0. A novel approach to cosmological analysis using galaxy cluster data, called top-down, was developed in N. Clerc et al. (2012). This top-down approach is based purely on instrumental observables that are considered in a two-dimensional X-ray color-magnitude diagram. The method self-consistently includes selection effects and scaling relationships. It also provides a means of bypassing the computation of individual cluster masses. My work presents an extension of the top-down method by introducing the apparent size of the cluster, creating a three-dimensional X-ray cluster diagram. The size of a cluster is sensitive to both the cluster mass and its angular diameter, so it must also be included in the assessment of selection effects. The performance of this new method is investigated using a Fisher analysis. In parallel, I have studied the effects of the intrinsic scatter in the cluster size scaling relation on the sample selection as well as on the obtained cosmological parameters. To validate the method, I estimate uncertainties of cosmological parameters with an MCMC method and an Amoeba minimization routine, using two simulated XMM surveys that have an increasing level of complexity. The first simulated survey is a set of toy catalogues of 100 and 10000 deg², whereas the second is a 1000 deg² catalogue that was generated using an Aardvark semi-analytical N-body simulation. This comparison corroborates the conclusions of the Fisher analysis. In conclusion, I find that a cluster diagram that accounts

  15. Understanding the cluster randomised crossover design: a graphical illustration of the components of variation and a sample size tutorial.

    Science.gov (United States)

    Arnup, Sarah J; McKenzie, Joanne E; Hemming, Karla; Pilcher, David; Forbes, Andrew B

    2017-08-15

    In a cluster randomised crossover (CRXO) design, a sequence of interventions is assigned to a group, or 'cluster', of individuals. Each cluster receives each intervention in a separate period of time, forming 'cluster-periods'. Sample size calculations for CRXO trials need to account for both the cluster randomisation and crossover aspects of the design. Formulae are available for the two-period, two-intervention, cross-sectional CRXO design; however, implementation of these formulae is known to be suboptimal. The aims of this tutorial are to illustrate the intuition behind the design and to provide guidance on performing sample size calculations. Graphical illustrations are used to describe the effect of the cluster randomisation and crossover aspects of the design on the correlation between individual responses in a CRXO trial. Sample size calculations for binary and continuous outcomes are illustrated using parameters estimated from the Australia and New Zealand Intensive Care Society - Adult Patient Database (ANZICS-APD) for patient mortality and length(s) of stay (LOS). The similarity between individual responses in a CRXO trial can be understood in terms of three components of variation: variation in cluster mean response; variation in the cluster-period mean response; and variation between individual responses within a cluster-period; or equivalently in terms of the correlation between individual responses in the same cluster-period (within-cluster within-period correlation, WPC), and between individual responses in the same cluster, but in different periods (within-cluster between-period correlation, BPC). The BPC lies between zero and the WPC. When the WPC and BPC are equal the precision gained by the crossover aspect of the CRXO design equals the precision lost by cluster randomisation. When the BPC is zero there is no advantage in a CRXO over a parallel-group cluster randomised trial. Sample size calculations illustrate that small changes in the specification of
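
    To make the role of the WPC and BPC concrete, the helper below evaluates a commonly quoted design effect for the two-period cross-sectional CRXO design, DE = 1 + (m - 1)·WPC - m·BPC, and uses it to inflate an individually randomised sample size. This is a simplified illustration consistent with the components of variation described above, with assumed parameter values; it is not a substitute for the full calculations in the tutorial.

```python
import math
from scipy.stats import norm

def crxo_design_effect(m, wpc, bpc):
    """Design effect for a two-period cross-sectional cluster randomised crossover
    design with m subjects per cluster-period.
    wpc: within-cluster within-period correlation
    bpc: within-cluster between-period correlation (0 <= bpc <= wpc)."""
    if not (0 <= bpc <= wpc < 1):
        raise ValueError("require 0 <= BPC <= WPC < 1")
    return 1 + (m - 1) * wpc - m * bpc

def individually_randomised_n(delta, sd, alpha=0.05, power=0.8):
    """Per-arm sample size for a two-sample comparison of means (normal approximation)."""
    z_a = norm.ppf(1 - alpha / 2)
    z_b = norm.ppf(power)
    return 2 * (sd * (z_a + z_b) / delta) ** 2

if __name__ == "__main__":
    m, wpc, bpc = 30, 0.05, 0.02          # assumed cluster-period size and correlations
    de = crxo_design_effect(m, wpc, bpc)
    n_per_arm = individually_randomised_n(delta=0.5, sd=2.0)
    total = math.ceil(2 * n_per_arm * de) # inflate the total individually randomised size
    clusters = math.ceil(total / (2 * m)) # each cluster contributes two cluster-periods
    print(f"design effect = {de:.3f}, total measurements ~ {total}, clusters ~ {clusters}")
```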

  16. A Total Variation-Based Reconstruction Method for Dynamic MRI

    Directory of Open Access Journals (Sweden)

    Germana Landi

    2008-01-01

    Full Text Available In recent years, total variation (TV) regularization has become a popular and powerful tool for image restoration and enhancement. In this work, we apply TV minimization to improve the quality of dynamic magnetic resonance images. Dynamic magnetic resonance imaging is an increasingly popular clinical technique used to monitor spatio-temporal changes in tissue structure. Fast data acquisition is necessary in order to capture the dynamic process. Most commonly, the requirement of high temporal resolution is fulfilled by sacrificing spatial resolution. Therefore, the numerical methods have to address the issue of image reconstruction from limited Fourier data. One of the most successful techniques for dynamic imaging applications is the reduced-encoding imaging by generalized-series reconstruction method of Liang and Lauterbur. However, even if this method utilizes a priori data for optimal image reconstruction, the produced dynamic images are degraded by truncation artifacts, most notably Gibbs ringing, due to the low spatial resolution of the data. We use a TV regularization strategy in order to reduce these truncation artifacts in the dynamic images. The resulting TV minimization problem is solved by the fixed point iteration method of Vogel and Oman. The results of test problems with simulated and real data are presented to illustrate the effectiveness of the proposed approach in reducing the truncation artifacts of the reconstructed images.
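
    The record describes the authors' own fixed-point TV solver; as a quick illustration of the same underlying idea (TV regularization suppressing oscillatory artifacts while preserving edges), the snippet below applies the off-the-shelf Chambolle TV denoiser from scikit-image to a synthetic image. The test image and weight value are arbitrary choices.

```python
import numpy as np
from skimage.restoration import denoise_tv_chambolle

# Synthetic piecewise-constant "image" with ringing-like oscillatory noise.
rng = np.random.default_rng(0)
img = np.zeros((128, 128))
img[32:96, 32:96] = 1.0
noisy = img + 0.15 * np.sin(np.arange(128) * 0.8)[None, :] + 0.05 * rng.normal(size=img.shape)

# Total-variation regularization: a larger weight smooths the oscillations more strongly
# while the edges of the square are largely preserved.
denoised = denoise_tv_chambolle(noisy, weight=0.15)

print("std of a flat region, noisy vs denoised:",
      noisy[40:56, 40:56].std().round(3), denoised[40:56, 40:56].std().round(3))
```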

  17. A clustering based method to evaluate soil corrosivity for pipeline external integrity management

    International Nuclear Information System (INIS)

    Yajima, Ayako; Wang, Hui; Liang, Robert Y.; Castaneda, Homero

    2015-01-01

    One important category of transportation infrastructure is underground pipelines. Corrosion of these buried pipeline systems may cause pipeline failures with the attendant hazards of property loss and fatalities. Therefore, developing the capability to estimate the soil corrosivity is important for designing and preserving materials and for risk assessment. The deterioration rate of metal is highly influenced by the physicochemical characteristics of a material and the environment of its surroundings. In this study, the field data obtained from the southeast region of Mexico were examined using various data mining techniques to determine the usefulness of these techniques for clustering soil corrosivity level. Specifically, the soil was classified into different corrosivity level clusters by k-means and Gaussian mixture model (GMM). In terms of physical space, GMM shows better separability; therefore, the distributions of the material loss of the buried petroleum pipeline walls were estimated via the empirical density within GMM clusters. The soil corrosivity levels of the clusters were determined based on the medians of metal loss. The proposed clustering method was demonstrated to be capable of classifying the soil into different levels of corrosivity severity. - Highlights: • The clustering approach is applied to the data extracted from a real-life pipeline system. • Soil properties in the right-of-way are analyzed via clustering techniques to assess corrosivity. • GMM is selected as the preferred method for detecting the hidden pattern of in-situ data. • A Kruskal–Wallis (K–W) test is performed to check for significant differences in corrosivity level between clusters
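
    A minimal sketch of the clustering step described here: fit a Gaussian mixture model to soil physicochemical variables, assign each observation to a component, and rank the clusters by the median metal loss of their members. The column names and number of components below are assumptions, not the values used in the study.

```python
import pandas as pd
from sklearn.mixture import GaussianMixture
from sklearn.preprocessing import StandardScaler

def rank_corrosivity_clusters(df, feature_cols, loss_col="metal_loss", n_components=3):
    """Cluster soil samples with a GMM and order the clusters by median metal loss
    (a higher median loss is read as a more corrosive cluster)."""
    X = StandardScaler().fit_transform(df[feature_cols])
    gmm = GaussianMixture(n_components=n_components, covariance_type="full", random_state=0)
    df = df.assign(cluster=gmm.fit_predict(X))
    medians = df.groupby("cluster")[loss_col].median().sort_values()
    # Map clusters to corrosivity levels 1 (mildest) .. n (most severe).
    level = {c: i + 1 for i, c in enumerate(medians.index)}
    return df.assign(corrosivity_level=df["cluster"].map(level)), medians

# Example usage with a made-up dataframe (column names are hypothetical):
# df = pd.DataFrame({"resistivity": ..., "pH": ..., "moisture": ..., "metal_loss": ...})
# labelled, medians = rank_corrosivity_clusters(df, ["resistivity", "pH", "moisture"])
```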

  18. Relation between financial market structure and the real economy: comparison between clustering methods.

    Science.gov (United States)

    Musmeci, Nicoló; Aste, Tomaso; Di Matteo, T

    2015-01-01

    We quantify the amount of information filtered by different hierarchical clustering methods on correlations between stock returns, comparing the clustering structure with the underlying industrial activity classification. We apply, for the first time to financial data, a novel hierarchical clustering approach, the Directed Bubble Hierarchical Tree, and we compare it with other methods including the Linkage and k-medoids. By taking the industrial sector classification of stocks as a benchmark partition, we evaluate how the different methods retrieve this classification. The results show that the Directed Bubble Hierarchical Tree can outperform other methods, being able to retrieve more information with fewer clusters. Moreover, we show that the economic information is hidden at different levels of the hierarchical structures depending on the clustering method. The dynamical analysis on a rolling window also reveals that the different methods show different degrees of sensitivity to events affecting financial markets, like crises. These results can be of interest for all the applications of clustering methods to portfolio optimization and risk hedging [corrected].
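
    A hedged sketch of the benchmark comparison described here: build a correlation-based distance between stock return series, apply an agglomerative (Linkage-type) clustering, and score the recovered partition against the industrial sector labels with the adjusted Rand index. The DBHT algorithm itself is not reproduced, and the number of clusters is a placeholder.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform
from sklearn.metrics import adjusted_rand_score

def cluster_vs_sectors(returns, sector_labels, n_clusters=10, method="average"):
    """returns: (T, N) array of stock returns; sector_labels: length-N benchmark partition."""
    corr = np.corrcoef(returns, rowvar=False)        # N x N correlation matrix
    dist = np.sqrt(2.0 * (1.0 - corr))               # standard correlation distance
    np.fill_diagonal(dist, 0.0)
    Z = linkage(squareform(dist, checks=False), method=method)
    labels = fcluster(Z, t=n_clusters, criterion="maxclust")
    # A higher adjusted Rand index means the clustering retrieves more of the sector structure.
    return adjusted_rand_score(sector_labels, labels)
```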

  19. Relation between financial market structure and the real economy: comparison between clustering methods.

    Directory of Open Access Journals (Sweden)

    Nicoló Musmeci

    Full Text Available We quantify the amount of information filtered by different hierarchical clustering methods on correlations between stock returns, comparing the clustering structure with the underlying industrial activity classification. We apply, for the first time to financial data, a novel hierarchical clustering approach, the Directed Bubble Hierarchical Tree, and we compare it with other methods including the Linkage and k-medoids. By taking the industrial sector classification of stocks as a benchmark partition, we evaluate how the different methods retrieve this classification. The results show that the Directed Bubble Hierarchical Tree can outperform other methods, being able to retrieve more information with fewer clusters. Moreover, we show that the economic information is hidden at different levels of the hierarchical structures depending on the clustering method. The dynamical analysis on a rolling window also reveals that the different methods show different degrees of sensitivity to events affecting financial markets, like crises. These results can be of interest for all the applications of clustering methods to portfolio optimization and risk hedging [corrected].

  20. Variational principles for Ginzburg-Landau equation by He's semi-inverse method

    International Nuclear Information System (INIS)

    Liu, W.Y.; Yu, Y.J.; Chen, L.D.

    2007-01-01

    Via the semi-inverse method of establishing variational principles proposed by He, a generalized variational principle is established for Ginzburg-Landau equation. The present theory provides a quite straightforward tool to the search for various variational principles for physical problems. This paper aims at providing a more complete theoretical basis for applications using finite element and other direct variational methods

  1. Space-angle approximations in the variational nodal method

    International Nuclear Information System (INIS)

    Lewis, E. E.; Palmiotti, G.; Taiwo, T.

    1999-01-01

    The variational nodal method is formulated such that the angular and spatial approximations may be examined separately. Spherical harmonic, simplified spherical harmonic, and discrete ordinate approximations are coupled to the primal hybrid finite element treatment of the spatial variables. Within this framework, two classes of spatial trial functions are presented: (1) orthogonal polynomials for the treatment of homogeneous nodes and (2) bilinear finite subelement trial functions for the treatment of fuel assembly sized nodes in which fuel-pin cell cross sections are represented explicitly. Polynomial and subelement trial functions are applied to benchmark water-reactor problems containing MOX fuel using spherical harmonic and simplified spherical harmonic approximations. The resulting accuracy and computing costs are compared

  2. Subspace Correction Methods for Total Variation and $\\ell_1$-Minimization

    KAUST Repository

    Fornasier, Massimo

    2009-01-01

    This paper is concerned with the numerical minimization of energy functionals in Hilbert spaces involving convex constraints coinciding with a seminorm for a subspace. The optimization is realized by alternating minimizations of the functional on a sequence of orthogonal subspaces. On each subspace an iterative proximity-map algorithm is implemented via oblique thresholding, which is the main new tool introduced in this work. We provide convergence conditions for the algorithm in order to compute minimizers of the target energy. Analogous results are derived for a parallel variant of the algorithm. Applications are presented in domain decomposition methods for degenerate elliptic PDEs arising in total variation minimization and in accelerated sparse recovery algorithms based on ℓ1-minimization. We include numerical examples which show efficient solutions to classical problems in signal and image processing. © 2009 Society for Industrial and Applied Mathematics.

  3. Variational methods for high-order multiphoton processes

    International Nuclear Information System (INIS)

    Gao, B.; Pan, C.; Liu, C.; Starace, A.F.

    1990-01-01

    Methods for applying the variationally stable procedure for Nth-order perturbative transition matrix elements of Gao and Starace [Phys. Rev. Lett. 61, 404 (1988); Phys. Rev. A 39, 4550 (1989)] to multiphoton processes involving systems other than atomic H are presented. Three specific cases are discussed: one-electron ions or atoms in which the electron-ion interaction is described by a central potential; two-electron ions or atoms in which the electronic states are described by the adiabatic hyperspherical representation; and closed-shell ions or atoms in which the electronic states are described by the multiconfiguration Hartree-Fock representation. Applications are made to the dynamic polarizability of He and the two-photon ionization cross section of Ar

  4. Benchmark Applications of Variations of Multireference Equation of Motion Coupled-Cluster Theory

    Czech Academy of Sciences Publication Activity Database

    Huntington, L. M.; Demel, Ondřej; Nooijen, M.

    2016-01-01

    Vol. 12, No. 1 (2016), pp. 114-132 ISSN 1549-9618 R&D Projects: GA ČR GJ15-00058Y Institutional support: RVO:61388955 Keywords: MR-EOM * Benchmark applications * variations Subject RIV: CF - Physical; Theoretical Chemistry Impact factor: 5.245, year: 2016

  5. Trend analysis using non-stationary time series clustering based on the finite element method

    Science.gov (United States)

    Gorji Sefidmazgi, M.; Sayemuzzaman, M.; Homaifar, A.; Jha, M. K.; Liess, S.

    2014-05-01

    In order to analyze low-frequency variability of climate, it is useful to model the climatic time series with multiple linear trends and locate the times of significant changes. In this paper, we have used non-stationary time series clustering to find change points in the trends. Clustering in a multi-dimensional non-stationary time series is challenging, since the problem is mathematically ill-posed. Clustering based on the finite element method (FEM) is one of the methods that can analyze multidimensional time series. One important attribute of this method is that it is not dependent on any statistical assumption and does not need local stationarity in the time series. In this paper, it is shown how the FEM-clustering method can be used to locate change points in the trend of temperature time series from in situ observations. This method is applied to the temperature time series of North Carolina (NC) and the results represent region-specific climate variability despite higher frequency harmonics in climatic time series. Next, we investigated the relationship between the climatic indices and the clusters/trends detected based on this clustering method. It appears that the natural variability of climate change in NC during 1950-2009 can be explained mostly by AMO and solar activity.

  6. Phenotypic clustering: a novel method for microglial morphology analysis.

    Science.gov (United States)

    Verdonk, Franck; Roux, Pascal; Flamant, Patricia; Fiette, Laurence; Bozza, Fernando A; Simard, Sébastien; Lemaire, Marc; Plaud, Benoit; Shorte, Spencer L; Sharshar, Tarek; Chrétien, Fabrice; Danckaert, Anne

    2016-06-17

    Microglial cells are tissue-resident macrophages of the central nervous system. They are extremely dynamic, sensitive to their microenvironment and present a characteristic complex and heterogeneous morphology and distribution within the brain tissue. Many experimental clues highlight a strong link between their morphology and their function in response to aggression. However, due to their complex "dendritic-like" aspect, which characterizes the major pool of murine microglial cells, and their dense network, precise and powerful morphological studies are not easy to perform, which complicates correlation with molecular or clinical parameters. Using the knock-in mouse model CX3CR1(GFP/+), we developed a 3D automated confocal tissue imaging system coupled with morphological modelling of many thousands of microglial cells, revealing precise and quantitative assessment of major cell features: cell density, cell body area, cytoplasm area and number of primary, secondary and tertiary processes. We determined two morphological criteria, the complexity index (CI) and the covered environment area (CEA), allowing an innovative approach consisting of (i) an accurate and objective study of morphological changes in healthy or pathological conditions, (ii) an in situ mapping of the microglial distribution in different neuroanatomical regions and (iii) a study of the clustering of numerous cells, allowing us to discriminate different sub-populations. Our results on more than 20,000 cells per condition confirm at baseline a regional heterogeneity of the microglial distribution and phenotype that persists after induction of neuroinflammation by systemic injection of lipopolysaccharide (LPS). Using clustering analysis, we highlight that, at resting state, microglial cells are distributed in four microglial sub-populations defined by their CI and CEA with a regional pattern and a specific behaviour after challenge. Our results counteract the classical view of a homogeneous regional resting

  7. Correction for dispersion and Coulombic interactions in molecular clusters with density functional derived methods: Application to polycyclic aromatic hydrocarbon clusters

    Science.gov (United States)

    Rapacioli, Mathias; Spiegelman, Fernand; Talbi, Dahbia; Mineva, Tzonka; Goursot, Annick; Heine, Thomas; Seifert, Gotthard

    2009-06-01

    The density functional based tight binding (DFTB) method is a semiempirical method derived from density functional theory (DFT). It therefore inherits the problems of DFT in treating van der Waals clusters. A major error comes from dispersion forces, which are poorly described by commonly used DFT functionals but can be accounted for by an a posteriori treatment, DFT-D. This correction is used for DFTB. The self-consistent charge (SCC) DFTB is built on Mulliken charges, which are known to give a poor representation of the Coulombic intermolecular potential. We propose to calculate this potential using the class IV/charge model 3 definition of atomic charges. The self-consistent calculation of these charges is introduced in the SCC procedure and the corresponding nuclear forces are derived. The benzene dimer is then studied as a benchmark system with this corrected DFTB (c-DFTB-D) method and also, for comparison, with DFT-D. Both methods give similar results and are in agreement with reference calculations (CCSD(T) and symmetry-adapted perturbation theory). As a first application, the pyrene dimer is studied with the c-DFTB-D and DFT-D methods. For coronene clusters, only the c-DFTB-D approach is used, which finds the sandwich configurations to be more stable than the T-shaped ones.
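
    The a posteriori dispersion term mentioned here has the generic pairwise form E_disp = -s6 Σ_i<j f_damp(R_ij) C6_ij / R_ij^6 with a short-range damping function. The sketch below implements that functional form with placeholder C6 coefficients and van der Waals radii; the actual parameter sets used in DFT-D and c-DFTB-D differ.

```python
import numpy as np

def dft_d_dispersion(coords, c6, r_vdw, s6=1.0, d=20.0):
    """Empirical pairwise dispersion correction (Grimme-type functional form).
    coords: (N, 3) positions in Angstrom
    c6:     (N,) per-atom C6 coefficients (combined here by a geometric mean)
    r_vdw:  (N,) van der Waals radii used in the damping function
    All parameter values passed in are illustrative placeholders."""
    e = 0.0
    n = len(coords)
    for i in range(n):
        for j in range(i + 1, n):
            r = np.linalg.norm(coords[i] - coords[j])
            c6_ij = np.sqrt(c6[i] * c6[j])
            r0 = r_vdw[i] + r_vdw[j]
            # Damping switches the -C6/R^6 term off at short range,
            # where the functional already describes the interaction.
            f_damp = 1.0 / (1.0 + np.exp(-d * (r / r0 - 1.0)))
            e -= s6 * f_damp * c6_ij / r ** 6
    return e
```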

  8. Novel Clustering Method Based on K-Medoids and Mobility Metric

    Directory of Open Access Journals (Sweden)

    Y. Hamzaoui

    2018-06-01

    Full Text Available The structure and constraints of MANETs negatively influence QoS performance; moreover, the main routing protocols proposed generally operate with flat routing. Hence, this structure gives poor QoS results when the network becomes larger and denser. To solve this problem we use one of the most popular approaches, clustering. The present paper falls within the framework of research to improve QoS in MANETs. In this paper we propose a new clustering algorithm based on a new mobility metric and K-medoids to distribute the nodes into several clusters. Intuitively, our algorithm can give good results in terms of cluster stability, and can also extend the lifetime of the cluster head.

  9. A simple and fast method to determine the parameters for fuzzy c-means cluster analysis

    DEFF Research Database (Denmark)

    Schwämmle, Veit; Jensen, Ole Nørregaard

    2010-01-01

    MOTIVATION: Fuzzy c-means clustering is widely used to identify cluster structures in high-dimensional datasets, such as those obtained in DNA microarray and quantitative proteomics experiments. One of its main limitations is the lack of a computationally fast method to set optimal values of algorithm parameters. Wrong parameter values may either lead to the inclusion of purely random fluctuations in the results or ignore potentially important data. The optimal solution has parameter values for which the clustering does not yield any results for a purely random dataset but which detects cluster formation with maximum resolution on the edge of randomness. RESULTS: Estimation of the optimal parameter values is achieved by evaluation of the results of the clustering procedure applied to randomized datasets. In this case, the optimal value of the fuzzifier follows common rules that depend only...
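
    For reference, a compact fuzzy c-means implementation is given below to show where the two quantities discussed here, the fuzzifier m and the number of clusters c, enter the algorithm. The initialisation and convergence tolerance are arbitrary, and this is not the authors' parameter-estimation procedure.

```python
import numpy as np

def fuzzy_c_means(X, c, m=2.0, max_iter=300, tol=1e-6, seed=0):
    """Basic fuzzy c-means. X: (n_samples, n_features); returns (centers, memberships)."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    U = rng.random((n, c))
    U /= U.sum(axis=1, keepdims=True)         # membership rows sum to 1
    for _ in range(max_iter):
        Um = U ** m                           # fuzzified memberships
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        dist = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        dist = np.fmax(dist, 1e-12)           # avoid division by zero
        # Standard update: u_ik = 1 / sum_j (d_ik / d_ij)^(2 / (m - 1))
        U_new = 1.0 / ((dist[:, :, None] / dist[:, None, :]) ** (2.0 / (m - 1))).sum(axis=2)
        if np.abs(U_new - U).max() < tol:
            U = U_new
            break
        U = U_new
    return centers, U
```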

  10. A semi-supervised method to detect seismic random noise with fuzzy GK clustering

    International Nuclear Information System (INIS)

    Hashemi, Hosein; Javaherian, Abdolrahim; Babuska, Robert

    2008-01-01

    We present a new method to detect random noise in seismic data using fuzzy Gustafson–Kessel (GK) clustering. First, using an adaptive distance norm, a matrix is constructed from the observed seismic amplitudes. The next step is to find centres of ellipsoidal clusters and construct a partition matrix which determines the soft decision boundaries between seismic events and random noise. The GK algorithm updates the cluster centres in order to iteratively minimize the cluster variance. Multiplication of the fuzzy membership function with values of each sample yields new sections; we name them 'clustered sections'. The seismic amplitude values of the clustered sections are chosen so as to decrease the level of noise in the original noisy seismic input. In pre-stack data, it is essential to study the clustered sections in the f–k domain; finding the quantitative index for weighting the post-stack data needs a similar approach. Using the knowledge of a human specialist together with the fuzzy unsupervised clustering, the method is a semi-supervised random noise detection. The efficiency of this method is investigated on synthetic and real seismic data for both pre- and post-stack data. The results show a significant improvement of the input noisy sections without harming the important amplitude and phase information of the original data. The procedure for finding the final weights of each clustered section should be carefully done in order to keep almost all the evident seismic amplitudes in the output section. The method interactively uses the knowledge of the seismic specialist in detecting the noise

  11. Kinetic methods for measuring the temperature of clusters and nanoparticles in molecular beams

    International Nuclear Information System (INIS)

    Makarov, Grigorii N

    2011-01-01

    The temperature (internal energy) of clusters and nanoparticles is an important physical parameter which affects many of their properties and the character of processes they are involved in. At the same time, determining the temperature of free clusters and nanoparticles in molecular beams is a rather complicated problem because the temperature of small particles depends on their size. In this paper, recently developed kinetic methods for measuring the temperature of clusters and nanoparticles in molecular beams are reviewed. The definition of temperature in the present context is given, and how the temperature affects the properties of and the processes involving the particles is discussed. The temperature behavior of clusters and nanoparticles near a phase transition point is analyzed. Early methods for measuring the temperature of large clusters are briefly described. It is shown that, compared to other methods, new kinetic methods are more universal and applicable for determining the temperature of clusters and nanoparticles of practically any size and composition. The future development and applications of these methods are outlined. (reviews of topical problems)

  12. Comparison of Bayesian clustering and edge detection methods for inferring boundaries in landscape genetics

    Science.gov (United States)

    Safner, T.; Miller, M.P.; McRae, B.H.; Fortin, M.-J.; Manel, S.

    2011-01-01

    Recently, techniques available for identifying clusters of individuals or boundaries between clusters using genetic data from natural populations have expanded rapidly. Consequently, there is a need to evaluate these different techniques. We used spatially-explicit simulation models to compare three spatial Bayesian clustering programs and two edge detection methods. Spatially-structured populations were simulated where a continuous population was subdivided by barriers. We evaluated the ability of each method to correctly identify boundary locations while varying: (i) time after divergence, (ii) strength of isolation by distance, (iii) level of genetic diversity, and (iv) amount of gene flow across barriers. To further evaluate the methods' effectiveness in detecting genetic clusters in natural populations, we used previously published data on North American pumas and a European shrub. Our results show that with simulated and empirical data, the Bayesian spatial clustering algorithms outperformed direct edge detection methods. All methods incorrectly detected boundaries in the presence of strong patterns of isolation by distance. Based on this finding, we support the application of Bayesian spatial clustering algorithms for boundary detection in empirical datasets, with necessary tests for the influence of isolation by distance. © 2011 by the authors; licensee MDPI, Basel, Switzerland.

  13. Perturbation theory corrections to the two-particle reduced density matrix variational method.

    Science.gov (United States)

    Juhasz, Tamas; Mazziotti, David A

    2004-07-15

    In the variational 2-particle-reduced-density-matrix (2-RDM) method, the ground-state energy is minimized with respect to the 2-particle reduced density matrix, constrained by N-representability conditions. Consider the N-electron Hamiltonian H(λ) as a function of the parameter λ, where we recover the Fock Hamiltonian at λ = 0 and we recover the fully correlated Hamiltonian at λ = 1. We explore using the accuracy of perturbation theory at small λ to correct the 2-RDM variational energies at λ = 1, where the Hamiltonian represents correlated atoms and molecules. A key assumption in the correction is that the 2-RDM method will capture a fairly constant percentage of the correlation energy for λ in (0,1] because the nonperturbative 2-RDM approach depends more significantly upon the nature rather than the strength of the two-body Hamiltonian interaction. For a variety of molecules we observe that this correction improves the 2-RDM energies in the equilibrium bonding region, while the 2-RDM energies at stretched or nearly dissociated geometries, already highly accurate, are not significantly changed. At equilibrium geometries the corrected 2-RDM energies are similar in accuracy to those from coupled-cluster singles and doubles (CCSD), but at nonequilibrium geometries the 2-RDM energies are often dramatically more accurate as shown in the bond stretching and dissociation data for water and nitrogen. © 2004 American Institute of Physics.

  14. Comment on “Variational Iteration Method for Fractional Calculus Using He’s Polynomials”

    Directory of Open Access Journals (Sweden)

    Ji-Huan He

    2012-01-01

    boundary value problems. This note concludes that the method is a modified variational iteration method using He’s polynomials. A standard variational iteration algorithm for fractional differential equations is suggested.

  15. Spectrographical method for determining temperature variations of cosmic rays

    International Nuclear Information System (INIS)

    Dorman, L.I.; Krest'yannikov, Yu.Ya.; AN SSSR, Irkutsk. Sibirskij Inst. Zemnogo Magnetizma Ionosfery i Rasprostraneniya Radiovoln)

    1977-01-01

    A spectrographic method for determining [σJ^μ/J^μ]_T temperature variations in cosmic rays is proposed. The value of [σJ^μ/J^μ]_T is determined from three equations for neutron supermonitors and the equation for the muon component of cosmic rays. It is assumed that all the observation data include corrections for the barometric effect. No temperature effect is observed in the neutron component. To improve the reliability and accuracy of the results obtained the surface area of the existing devices and the number of spectrographic equations should be increased as compared with that of the unknown values. The value of [σJ^μ/J^μ]_T for time instants when the aerological probing was carried out, was determined from the data of observations of cosmic rays with the aid of a spectrographic complex of devices of Sib IZMIR. The r.m.s. dispersion of the difference is about 0.2%, which agrees with the expected dispersion. The agreement obtained can be regarded as an independent proof of the correctness of the theory of meteorological effects of cosmic rays. With the existing detection accuracy the spectrographic method can be used for determining the hourly values of temperature corrections for the muon component

  16. Variational methods in electron-atom scattering theory

    CERN Document Server

    Nesbet, Robert K

    1980-01-01

    The investigation of scattering phenomena is a major theme of modern physics. A scattered particle provides a dynamical probe of the target system. The practical problem of interest here is the scattering of a low-energy electron by an N-electron atom. It has been difficult in this area of study to achieve theoretical results that are even qualitatively correct, yet quantitative accuracy is often needed as an adjunct to experiment. The present book describes a quantitative theoretical method, or class of methods, that has been applied effectively to this problem. Quantum mechanical theory relevant to the scattering of an electron by an N-electron atom, which may gain or lose energy in the process, is summarized in Chapter 1. The variational theory itself is presented in Chapter 2, both as currently used and in forms that may facilitate future applications. The theory of multichannel resonance and threshold effects, which provide a rich structure to observed electron-atom scattering data, is presented in Cha...

  17. Variations in Decision-Making Profiles by Age and Gender: A Cluster-Analytic Approach

    Science.gov (United States)

    Delaney, Rebecca; Strough, JoNell; Parker, Andrew M.; de Bruin, Wandi Bruine

    2015-01-01

    Using cluster-analysis, we investigated whether rational, intuitive, spontaneous, dependent, and avoidant styles of decision making (Scott & Bruce, 1995) combined to form distinct decision-making profiles that differed by age and gender. Self-report survey data were collected from 1,075 members of RAND’s American Life Panel (56.2% female, 18–93 years, Mage = 53.49). Three decision-making profiles were identified: affective/experiential, independent/self-controlled, and an interpersonally-oriented dependent profile. Older people were less likely to be in the affective/experiential profile and more likely to be in the independent/self-controlled profile. Women were less likely to be in the affective/experiential profile and more likely to be in the interpersonally-oriented dependent profile. Interpersonally-oriented profiles are discussed as an overlooked but important dimension of how people make important decisions. PMID:26005238

  18. Variations in Decision-Making Profiles by Age and Gender: A Cluster-Analytic Approach.

    Science.gov (United States)

    Delaney, Rebecca; Strough, JoNell; Parker, Andrew M; de Bruin, Wandi Bruine

    2015-10-01

    Using cluster-analysis, we investigated whether rational, intuitive, spontaneous, dependent, and avoidant styles of decision making (Scott & Bruce, 1995) combined to form distinct decision-making profiles that differed by age and gender. Self-report survey data were collected from 1,075 members of RAND's American Life Panel (56.2% female, 18-93 years, M age = 53.49). Three decision-making profiles were identified: affective/experiential, independent/self-controlled, and an interpersonally-oriented dependent profile. Older people were less likely to be in the affective/experiential profile and more likely to be in the independent/self-controlled profile. Women were less likely to be in the affective/experiential profile and more likely to be in the interpersonally-oriented dependent profile. Interpersonally-oriented profiles are discussed as an overlooked but important dimension of how people make important decisions.

  19. Engineering practice variation through provider agreement: a cluster-randomized feasibility trial.

    Science.gov (United States)

    McCarren, Madeline; Twedt, Elaine L; Mansuri, Faizmohamed M; Nelson, Philip R; Peek, Brian T

    2014-01-01

    Minimal-risk randomized trials that can be embedded in practice could facilitate learning health-care systems. A cluster-randomized design was proposed to compare treatment strategies by assigning clusters (eg, providers) to "favor" a particular drug, with providers retaining autonomy for specific patients. Patient informed consent might be waived, broadening inclusion. However, it is not known if providers will adhere to the assignment or whether institutional review boards will waive consent. We evaluated the feasibility of this trial design. Agreeable providers were randomized to "favor" either hydrochlorothiazide or chlorthalidone when starting patients on thiazide-type therapy for hypertension. The assignment applied when the provider had already decided to start a thiazide, and providers could deviate from the strategy as needed. Prescriptions were aggregated to produce a provider strategy-adherence rate. All four institutional review boards waived documentation of patient consent. Providers (n=18) followed their assigned strategy for most of their new thiazide prescriptions (n=138 patients). In the "favor hydrochlorothiazide" group, there was 99% adherence to that strategy. In the "favor chlorthalidone" group, chlorthalidone comprised 77% of new thiazide starts, up from 1% in the pre-study period. When the assigned strategy was followed, dosing in the recommended range was 48% for hydrochlorothiazide (25-50 mg/day) and 100% for chlorthalidone (12.5-25.0 mg/day). Providers were motivated to participate by a desire to contribute to a comparative effectiveness study. A study promotional mug, provider information letter, and interactions with the site investigator were identified as most helpful in reminding providers of their study drug strategy. Providers prescribed according to an assigned drug-choice strategy most of the time for the purpose of a comparative effectiveness study. This simple design could facilitate research participation and behavior change

  20. Application of hierarchical clustering method to classify of space-time rainfall patterns

    Science.gov (United States)

    Yu, Hwa-Lung; Chang, Tu-Je

    2010-05-01

    Understanding the local precipitation patterns is essential to water resources management and flood mitigation. The precipitation patterns can vary in space and time depending upon factors from different spatial scales, such as local topographic changes and macroscopic atmospheric circulation. The spatiotemporal variation of precipitation in Taiwan is significant due to its complex terrain and its location in the western Pacific subtropical area, at the boundary between the Pacific Ocean and the Asian continent, with complex interactions among the climatic processes. This study characterizes local-scale precipitation patterns by classifying the historical space-time precipitation records. We applied the hierarchical ascending clustering method to analyze the precipitation records from 1960 to 2008 at the six rainfall stations located in the Lan-yang catchment in the northeast of the island. Our results identify four primary space-time precipitation types which may result from distinct driving forces associated with changes of atmospheric variables and topography at different space-time scales. This study also presents an important application of statistical downscaling to combine large-scale upper-air circulation with local space-time precipitation patterns.

  1. An improved K-means clustering method for cDNA microarray image segmentation.

    Science.gov (United States)

    Wang, T N; Li, T J; Shao, G F; Wu, S X

    2015-07-14

    Microarray technology is a powerful tool for human genetic research and other biomedical applications. Numerous improvements to the standard K-means algorithm have been carried out to complete the image segmentation step. However, most of the previous studies classify the image into two clusters. In this paper, we propose a novel K-means algorithm, which first classifies the image into three clusters; one of the three clusters is then taken as the background region and the other two clusters as the foreground region. The proposed method was evaluated on six different data sets. The analyses of accuracy, efficiency, expression values, special gene spots, and noise images demonstrate the effectiveness of our method in improving the segmentation quality.
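
    A minimal sketch of the three-cluster idea: run k-means with k = 3 on the pixel intensities of a spot region, treat the cluster with the lowest mean intensity as background, and merge the other two clusters into the foreground. This shows only the general scheme, not the authors' refined algorithm.

```python
import numpy as np
from sklearn.cluster import KMeans

def segment_spot(image):
    """image: 2-D array of pixel intensities for one microarray spot region.
    Returns a boolean foreground mask."""
    pixels = image.reshape(-1, 1).astype(float)
    km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(pixels)
    # The darkest cluster (lowest centre) is treated as background,
    # the remaining two clusters as foreground (signal).
    background_label = int(np.argmin(km.cluster_centers_.ravel()))
    foreground = (km.labels_ != background_label).reshape(image.shape)
    return foreground
```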

  2. Gauge-invariant variational methods for Hamiltonian lattice gauge theories

    International Nuclear Information System (INIS)

    Horn, D.; Weinstein, M.

    1982-01-01

    This paper develops variational methods for calculating the ground-state and excited-state spectrum of Hamiltonian lattice gauge theories defined in the A0 = 0 gauge. The scheme introduced in this paper has the advantage of allowing one to convert more familiar tools such as mean-field, Hartree-Fock, and real-space renormalization-group approximations, which are by their very nature gauge-noninvariant methods, into fully gauge-invariant techniques. We show that these methods apply in the same way to both Abelian and non-Abelian theories, and that they are at least powerful enough to describe correctly the physics of periodic quantum electrodynamics (PQED) in (2+1) and (3+1) space-time dimensions. This paper formulates the problem for both Abelian and non-Abelian theories and shows how to reduce the Rayleigh-Ritz problem to that of computing the partition function of a classical spin system. We discuss the evaluation of the effective spin problem which one derives from PQED and then discuss ways of carrying out the evaluation of the partition function for the system equivalent to a non-Abelian theory. The explicit form of the effective partition function for the non-Abelian theory is derived, but because the evaluation of this function is considerably more complicated than the one derived in the Abelian theory, no explicit evaluation of this function is presented. However, by comparing the gauge-projected Hartree-Fock wave function for PQED with that of the pure SU(2) gauge theory, we are able to show that extremely interesting differences emerge between these theories even at this simple level. We close with a discussion of fermions and a discussion of how one can extend these ideas to allow the computation of the glueball and hadron spectrum

  3. Local Fractional Laplace Variational Iteration Method for Solving Linear Partial Differential Equations with Local Fractional Derivative

    Directory of Open Access Journals (Sweden)

    Ai-Min Yang

    2014-01-01

    Full Text Available The local fractional Laplace variational iteration method was applied to solve linear local fractional partial differential equations. The local fractional Laplace variational iteration method couples the local fractional variational iteration method with the Laplace transform. The nondifferentiable approximate solutions are obtained and their graphs are also shown.

  4. Application Of WIMS Code To Calculation Kartini Reactor Parameters By Pin-Cell And Cluster Method

    International Nuclear Information System (INIS)

    Sumarsono, Bambang; Tjiptono, T.W.

    1996-01-01

    An analysis of UZrH fuel element parameters in the Kartini Reactor using the WIMS code has been carried out. The analysis is done by the pin-cell and cluster methods. The pin-cell calculation is performed as a function of percent burn-up with an 8-group, 3-region model, and the cluster calculation with an 8-group, 12-region model. The analysis and calculation yield k∞ = 1.3687 by the pin-cell method and k∞ = 1.3162 by the cluster method, a deviation of 3.83%. In the pin-cell analysis as a function of percent burn-up, at burn-up greater than 59.50% the multiplication factor is less than one (k∞ < 1), meaning that the fuel element reactivity is negative

  5. The Semianalytical Solutions for Stiff Systems of Ordinary Differential Equations by Using Variational Iteration Method and Modified Variational Iteration Method with Comparison to Exact Solutions

    Directory of Open Access Journals (Sweden)

    Mehmet Tarik Atay

    2013-01-01

    Full Text Available The Variational Iteration Method (VIM) and Modified Variational Iteration Method (MVIM) are used to find solutions of systems of stiff ordinary differential equations for both linear and nonlinear problems. Some examples are given to illustrate the accuracy and effectiveness of these methods. We compare our results with exact results. In some studies related to stiff ordinary differential equations, problems were solved by the Adomian Decomposition Method, VIM and the Homotopy Perturbation Method. Comparisons with exact solutions reveal that the Variational Iteration Method (VIM) and the Modified Variational Iteration Method (MVIM) are easier to implement. In fact, these methods are promising methods for various systems of linear and nonlinear stiff ordinary differential equations. Furthermore, VIM, or in some cases MVIM, gives exact solutions in linear cases and very satisfactory solutions when compared to exact solutions for nonlinear cases, depending on the stiffness ratio of the stiff system to be solved.
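
    As a concrete toy illustration of the variational iteration idea (on a non-stiff problem), the sympy snippet below iterates the correction functional y_{n+1}(t) = y_n(t) + ∫_0^t λ(s)[y_n'(s) + y_n(s)] ds with λ = -1 for y' + y = 0, y(0) = 1, reproducing the Taylor expansion of exp(-t). Stiff systems and the MVIM variant discussed in the record need more care than this sketch shows.

```python
import sympy as sp

t, s = sp.symbols("t s")

def vim_iterations(n_iter=4):
    """Variational iteration for y' + y = 0, y(0) = 1, with Lagrange multiplier -1."""
    y = sp.Integer(1)                   # y_0(t) = 1 satisfies the initial condition
    for _ in range(n_iter):
        residual = sp.diff(y, t) + y    # y_n'(t) + y_n(t)
        correction = sp.integrate(-residual.subs(t, s), (s, 0, t))
        y = sp.expand(y + correction)
    return y

approx = vim_iterations(4)
print(approx)                           # 1 - t + t**2/2 - t**3/6 + t**4/24
print(sp.series(sp.exp(-t), t, 0, 5))   # matches the exact solution's expansion
```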

  6. The use of different clustering methods in the evaluation of genetic diversity in upland cotton

    Directory of Open Access Journals (Sweden)

    Laíse Ferreira de Araújo

    Full Text Available The continuous development and evaluation of new genotypes through crop breeding is essential in order to obtain new cultivars. The objective of this work was to evaluate the genetic divergence between cultivars of upland cotton (Gossypium hirsutum L.) using the agronomic and technological characteristics of the fibre, in order to select superior parent plants. The experiment was set up during 2010 at the Federal University of Ceará in Fortaleza, Ceará, Brazil. Eleven cultivars of upland cotton were used in an experimental design of randomised blocks with three replications. In order to evaluate the genetic diversity among cultivars, the generalised Mahalanobis distance matrix was calculated, with cluster analysis then being applied, employing various methods: single linkage, Ward, complete linkage, median, average linkage within a cluster and average linkage between clusters. Genetic variability exists among the evaluated genotypes. The most consistent clustering method was that employing average linkage between clusters. Among the characteristics assessed, mean boll weight presented the highest contribution to genetic diversity, followed by elongation at rupture. Employing the method of average linkage between clusters, the cultivars with greater genetic divergence were BRS Acacia and LD Frego; those of greater similarity were BRS Itaúba and BRS Araripe.

  7. A semantics-based method for clustering of Chinese web search results

    Science.gov (United States)

    Zhang, Hui; Wang, Deqing; Wang, Li; Bi, Zhuming; Chen, Yong

    2014-01-01

    Information explosion is a critical challenge to the development of modern information systems. In particular, when the application of an information system is over the Internet, the amount of information over the web has been increasing exponentially and rapidly. Search engines, such as Google and Baidu, are essential tools for people to find information from the Internet. Valuable information, however, is still likely submerged in the ocean of search results from those tools. By clustering the results into different groups based on subjects automatically, a search engine with the clustering feature allows users to select the most relevant results quickly. In this paper, we propose an online semantics-based method to cluster Chinese web search results. First, we employ the generalised suffix tree to extract the longest common substrings (LCSs) from search snippets. Second, we use the HowNet to calculate the similarities of the words derived from the LCSs, and extract the most representative features by constructing the vocabulary chain. Third, we construct a vector of text features and calculate snippets' semantic similarities. Finally, we improve the Chameleon algorithm to cluster snippets. Extensive experimental results have shown that the proposed algorithm outperforms the suffix tree clustering method and other traditional clustering methods.

  8. Free vibration of finite cylindrical shells by the variational method

    International Nuclear Information System (INIS)

    Campen, D.H. van; Huetink, J.

    1975-01-01

    The calculation of the free vibrations of circular cylindrical shells of finite length has been of interest to engineers for a long time. The motive for the present calculations originates from a particular type of construction at the inlet of a sodium heated superheater with helix heating bundle for SNR-Kalkar. The variational analysis is based on a modified energy functional for cylindrical shells, proposed by Koiter and resulting in Morley's equilibrium equations. As usual, the displacement amplitude is assumed to be distributed harmonically in the circumferential direction of the shell. Following the method of Gontkevich, the dependence between the displacements of the shell middle surface and the axial shell co-ordinate is expressed approximately by a set of eigenfunctions of a free vibrating beam satisfying the desired boundary conditions. Substitution of this displacement expression into the virtual work equation for the complete shell leads to a characteristic equation determining the natural frequencies. The calculations are carried out for a clamped-clamped and a clamped-free cylinder. A comparison is given between the above numerical results and experimental and theoretical results from the literature. In addition, the influence of surrounding fluid mass on the above frequencies is analysed for a clamped-clamped shell. The solution for the velocity potential used in this case differs from the solutions used in the literature until now in that not only travelling waves in the axial direction are considered. (Auth.)

  9. Variational methods applied to problems of diffusion and reaction

    CERN Document Server

    Strieder, William

    1973-01-01

    This monograph is an account of some problems involving diffusion or diffusion with simultaneous reaction that can be illuminated by the use of variational principles. It was written during a period that included sabbatical leaves of one of us (W.S.) at the University of Minnesota and the other (R.A.) at the University of Cambridge, and we are grateful to the Petroleum Research Fund for helping to support the former and the Guggenheim Foundation for making possible the latter. We would also like to thank Stephen Prager for getting us together in the first place and for showing how interesting and useful these methods can be. We have also benefitted from correspondence with Dr. A. M. Arthurs of the University of York and from the counsel of Dr. B. D. Coleman, the general editor of this series. Table of Contents: Chapter 1. Introduction and Preliminaries; 1.1. General Survey; 1.2. Phenomenological Descriptions of Diffusion and Reaction; 1.3. Correlation Functions for Random Suspensions; 1.4. Mean Free ...

  10. Variational method for infinite nuclear matter with noncentral forces

    International Nuclear Information System (INIS)

    Takano, M.; Yamada, M.

    1998-01-01

    Approximate energy expressions are proposed for infinite zero-temperature nuclear matter by taking into account noncentral forces. They are explicitly expressed as functionals of spin- (isospin-) dependent radial distribution functions, tensor distribution functions and spin-orbit distribution functions, and can be used conveniently in the variational method. A notable feature of these expressions is that they automatically guarantee the necessary conditions on the spin-isospin-dependent structure functions. The Euler-Lagrange equations are derived from these energy expressions and numerically solved for neutron matter and symmetric nuclear matter. The results show that the noncentral forces lower the total energies too much and give saturation densities that are too high. Since the main reason for these undesirable results seems to be the long tails of the noncentral distribution functions, an effective theory is proposed by introducing a density-dependent damping function into the noncentral potentials to suppress the long tails of the noncentral distribution functions. By adjusting the value of a parameter included in the damping function, we can reproduce the saturation point (both the energy and density) of symmetric nuclear matter with the Hamada-Johnston potential. (Copyright (1998) World Scientific Publishing Co. Pte. Ltd)

  11. Automated assessment and tracking of human body thermal variations using unsupervised clustering.

    Science.gov (United States)

    Yousefi, Bardia; Fleuret, Julien; Zhang, Hai; Maldague, Xavier P V; Watt, Raymond; Klein, Matthieu

    2016-12-01

    The presented approach addresses a review of the overheating that occurs during radiological examinations, such as magnetic resonance imaging, and a series of thermal experiments to determine a thermally suitable fabric material that should be used for radiological gowns. Moreover, an automatic system for detecting and tracking thermal fluctuations is presented. It applies hue-saturation-value (HSV)-based kernelled k-means clustering, which initializes and controls the points that lie on the region-of-interest (ROI) boundary. Afterward, a particle filter tracks the targeted ROI during the video sequence independently of previous locations of overheating spots. The proposed approach was tested during experiments and under conditions very similar to those used during real radiology exams. Six subjects voluntarily participated in these experiments. To simulate the hot spots occurring during radiology, a controllable heat source was placed near the subject's body. The results indicate promising accuracy for the proposed approach to track hot spots. Some approximations were used regarding the transmittance of the atmosphere, and the emissivity of the fabric could be neglected because the proposed approach is independent of these parameters. The approach can track the heating spots continuously and correctly, even for moving subjects, and provides considerable robustness against motion artifacts, which occur during most medical radiology procedures.
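
    The record above combines colour-space clustering with temporal tracking. As a rough illustration of the detection stage only, the sketch below clusters the pixels of a pseudo-coloured thermal frame in HSV space with plain k-means (not the kernelled variant used by the authors, and without the particle-filter tracking) and keeps the cluster whose centroid has the highest value channel as a candidate hot-spot mask; the synthetic frame, the number of clusters and the "hottest cluster" heuristic are all assumptions for illustration.

```python
# Sketch: HSV k-means segmentation of a pseudo-coloured thermal frame.
# Plain k-means stands in for the kernelled k-means of the paper; the
# "highest V channel = hot spot" rule is an illustrative assumption.
import numpy as np
from matplotlib.colors import rgb_to_hsv
from sklearn.cluster import KMeans

def hot_spot_mask(frame_rgb, n_clusters=3, random_state=0):
    """frame_rgb: (H, W, 3) float array in [0, 1]. Returns a boolean mask."""
    hsv = rgb_to_hsv(frame_rgb)                  # convert to HSV
    pixels = hsv.reshape(-1, 3)                  # one row per pixel
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=random_state)
    labels = km.fit_predict(pixels)
    hot = np.argmax(km.cluster_centers_[:, 2])   # cluster with highest V
    return (labels == hot).reshape(frame_rgb.shape[:2])

# Example on a synthetic frame with one warm patch
rng = np.random.default_rng(0)
frame = rng.random((64, 64, 3)) * 0.2            # dim background
frame[20:30, 20:30] = [1.0, 0.3, 0.1]            # bright "hot" patch
mask = hot_spot_mask(frame)
print("hot-spot pixels:", mask.sum())
```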

  12. A Spatial Shape Constrained Clustering Method for Mammographic Mass Segmentation

    Directory of Open Access Journals (Sweden)

    Jian-Yong Lou

    2015-01-01

    error of 7.18% for well-defined masses (or 8.06% for ill-defined masses) was obtained by using DACF on the MiniMIAS database, with 5.86% (or 5.55%) and 6.14% (or 5.27%) improvements as compared to the standard DA and fuzzy c-means methods.

  13. Adaptive cluster sampling: An efficient method for assessing inconspicuous species

    Science.gov (United States)

    Andrea M. Silletti; Joan Walker

    2003-01-01

    Restorationists typically evaluate the success of a project by estimating the population sizes of species that have been planted or seeded. Because a total census is rarely feasible, they must rely on sampling methods for population estimates. However, traditional random sampling designs may be inefficient for species that, for one reason or another, are challenging to...

  14. Robustness of serial clustering of extratropical cyclones to the choice of tracking method

    Directory of Open Access Journals (Sweden)

    Joaquim G. Pinto

    2016-07-01

    Full Text Available Cyclone clusters are a frequent synoptic feature in the Euro-Atlantic area. Recent studies have shown that serial clustering of cyclones generally occurs on both flanks and downstream regions of the North Atlantic storm track, while cyclones tend to occur more regularly on the western side of the North Atlantic basin near Newfoundland. This study explores the sensitivity of serial clustering to the choice of cyclone tracking method using cyclone track data from 15 methods derived from ERA-Interim data (1979–2010). Clustering is estimated by the dispersion (ratio of variance to mean) of winter [December–February (DJF)] cyclone passages near each grid point over the Euro-Atlantic area. The mean number of cyclone counts and their variance are compared between methods, revealing considerable differences, particularly for the latter. Results show that all different tracking methods qualitatively capture similar large-scale spatial patterns of underdispersion and overdispersion over the study region. The quantitative differences can primarily be attributed to the differences in the variance of cyclone counts between the methods. Nevertheless, overdispersion is statistically significant for almost all methods over parts of the eastern North Atlantic and Western Europe, and is therefore considered a robust feature. The influence of the North Atlantic Oscillation (NAO) on cyclone clustering displays a similar pattern for all tracking methods, with one maximum near Iceland and another between the Azores and Iberia. The differences in variance between methods are not related to different sensitivities to the NAO, which can account for over 50% of the clustering in some regions. We conclude that the general features of underdispersion and overdispersion of extratropical cyclones over the North Atlantic and Western Europe are robust to the choice of tracking method. The same is true for the influence of the NAO on cyclone dispersion.
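
    The clustering measure used in this record is simply the variance-to-mean ratio (dispersion) of seasonal cyclone counts at each grid point, with values above 1 indicating overdispersion (clustering). A minimal sketch of that statistic, on hypothetical per-winter counts rather than actual track data, might look like this:

```python
# Sketch: dispersion (variance-to-mean ratio) of winter cyclone counts.
# counts[i, j] = number of cyclone passages near grid point j in winter i.
# Values > 1 suggest overdispersion (serial clustering), < 1 underdispersion.
import numpy as np

rng = np.random.default_rng(0)
n_winters, n_points = 32, 5                     # hypothetical dimensions
counts = rng.poisson(lam=4.0, size=(n_winters, n_points))

mean = counts.mean(axis=0)
var = counts.var(axis=0, ddof=1)
dispersion = var / mean
print("dispersion per grid point:", np.round(dispersion, 2))
```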

  15. An effective trust-based recommendation method using a novel graph clustering algorithm

    Science.gov (United States)

    Moradi, Parham; Ahmadian, Sajad; Akhlaghian, Fardin

    2015-10-01

    Recommender systems are programs that aim to provide personalized recommendations to users for specific items (e.g. music, books) in online sharing communities or on e-commerce sites. Collaborative filtering methods are important and widely accepted types of recommender systems that generate recommendations based on the ratings of like-minded users. On the other hand, these systems confront several inherent issues such as data sparsity and cold start problems, caused by there being far fewer ratings than unknowns that need to be predicted. Incorporating trust information into collaborative filtering systems is an attractive approach to resolve these problems. In this paper, we present a model-based collaborative filtering method by applying a novel graph clustering algorithm and also considering trust statements. In the proposed method, the problem space is first represented as a graph, and then a sparsest-subgraph-finding algorithm is applied to the graph to find the initial cluster centers. Then, the proposed graph clustering algorithm is performed to obtain the appropriate users/items clusters. Finally, the identified clusters are used as a set of neighbors to recommend unseen items to the current active user. Experimental results based on three real-world datasets demonstrate that the proposed method outperforms several state-of-the-art recommender system methods.

  16. MHCcluster, a method for functional clustering of MHC molecules

    DEFF Research Database (Denmark)

    Thomsen, Martin Christen Frølund; Lundegaard, Claus; Buus, Søren

    2013-01-01

    The identification of peptides binding to major histocompatibility complexes (MHC) is a critical step in the understanding of T cell immune responses. The human MHC genomic region (HLA) is extremely polymorphic comprising several thousand alleles, many encoding a distinct molecule. The potentially...... binding specificity. The method has a flexible web interface that allows the user to include any MHC of interest in the analysis. The output consists of a static heat map and graphical tree-based visualizations of the functional relationship between MHC variants and a dynamic TreeViewer interface where...

  17. Systematic approach to critical phenomena by the extended variational method and coherent-anomaly method

    International Nuclear Information System (INIS)

    Kawashima, N.; Katori, M.; Tsallis, C.; Suzuki, M.

    1989-01-01

    A general procedure to study critical phenomena of magnetic systems is discussed. It consists of systematic series of Landau-like approximations (Extended Variational Method) and the coherent-anomaly method (CAM). As for susceptibility, the present method is equivalent to the power-series CAM theory. On the other hand, the EVM gives a set of new approximants for other physical quantities. Applications to d-dimensional Ising ferromagnets are also described. The critical points and exponents are estimated with high accuracy. (author) [pt

  18. Discrete variational derivative method a structure-preserving numerical method for partial differential equations

    CERN Document Server

    Furihata, Daisuke

    2010-01-01

    Nonlinear Partial Differential Equations (PDEs) have become increasingly important in the description of physical phenomena. Unlike Ordinary Differential Equations, PDEs can be used to effectively model multidimensional systems. The methods put forward in Discrete Variational Derivative Method concentrate on a new class of "structure-preserving numerical equations" which improves the qualitative behaviour of the PDE solutions and allows for stable computing. The authors have also taken care to present their methods in an accessible manner, which means that the book will be useful to engineer

  19. Pseudo-potential method for taking into account the Pauli principle in cluster systems

    International Nuclear Information System (INIS)

    Krasnopol'skii, V.M.; Kukulin, V.I.

    1975-01-01

    In order to take account of the Pauli principle in cluster systems (such as 3α, α + α + n) a convenient method of renormalization of the cluster-cluster deep attractive potentials with forbidden states is suggested. The renormalization consists of adding projectors upon the occupied states with an infinite coupling constant to the initial deep potential which means that we pass to pseudo-potentials. The pseudo-potential approach in projecting upon the noneigenstates is shown to be equivalent to the orthogonality condition model of Saito et al. The orthogonality of the many-particle wave function to the forbidden states of each two-cluster sub-system is clearly demonstrated

  20. A New Soft Computing Method for K-Harmonic Means Clustering.

    Science.gov (United States)

    Yeh, Wei-Chang; Jiang, Yunzhi; Chen, Yee-Fen; Chen, Zhe

    2016-01-01

    The K-harmonic means clustering algorithm (KHM) is a new clustering method used to group data such that the sum of the harmonic averages of the distances between each entity and all cluster centroids is minimized. Because it is less sensitive to initialization than K-means (KM), many researchers have recently been attracted to studying KHM. In this study, the proposed iSSO-KHM is based on an improved simplified swarm optimization (iSSO) and integrates a variable neighborhood search (VNS) for KHM clustering. As evidence of the utility of the proposed iSSO-KHM, we present extensive computational results on eight benchmark problems. From the computational results, the comparison appears to support the superiority of the proposed iSSO-KHM over previously developed algorithms for all experiments in the literature.
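
    For readers unfamiliar with the objective being optimised here, the K-harmonic means performance function replaces the minimum distance used by k-means with a harmonic average of the distances to all centroids. The sketch below evaluates that objective and runs the standard KHM centre update (not the iSSO-KHM hybrid of the paper); the synthetic data, the choice p = 3.5 and the update formula follow the common textbook formulation and should be treated as assumptions.

```python
# Sketch: K-harmonic means (KHM) objective and the usual centre update.
# Objective: sum_i  K / sum_k (1 / ||x_i - c_k||^p), typically with p ~ 3.5.
import numpy as np

def khm_objective(X, C, p=3.5, eps=1e-8):
    d = np.linalg.norm(X[:, None, :] - C[None, :, :], axis=2) + eps  # (n, K)
    return np.sum(C.shape[0] / np.sum(d ** (-p), axis=1))

def khm_update(X, C, p=3.5, eps=1e-8):
    d = np.linalg.norm(X[:, None, :] - C[None, :, :], axis=2) + eps
    num = d ** (-p - 2)                          # d_ik^(-p-2)
    denom = np.sum(d ** (-p), axis=1) ** 2       # (sum_j d_ij^-p)^2
    w = num / denom[:, None]                     # per-point, per-centre weights
    return (w.T @ X) / w.sum(axis=0)[:, None]    # weighted means -> new centres

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(m, 0.3, size=(50, 2)) for m in (0.0, 3.0, 6.0)])
C = X[rng.choice(len(X), 3, replace=False)]      # random initial centres
for _ in range(30):
    C = khm_update(X, C)
print("objective:", round(khm_objective(X, C), 2))
print("centres:\n", np.round(C, 2))
```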

  1. Grey Wolf Optimizer Based on Powell Local Optimization Method for Clustering Analysis

    Directory of Open Access Journals (Sweden)

    Sen Zhang

    2015-01-01

    Full Text Available One heuristic evolutionary algorithm recently proposed is the grey wolf optimizer (GWO), inspired by the leadership hierarchy and hunting mechanism of grey wolves in nature. This paper presents an extended GWO algorithm based on the Powell local optimization method, which we call PGWO. The PGWO algorithm significantly improves the original GWO in solving complex optimization problems. Clustering is a popular data analysis and data mining technique. Hence, PGWO could be applied in solving clustering problems. In this study, first the PGWO algorithm is tested on seven benchmark functions. Second, the PGWO algorithm is used for data clustering on nine data sets. Compared to other state-of-the-art evolutionary algorithms, the results on the benchmark functions and data clustering demonstrate the superior performance of the PGWO algorithm.

  2. Developing a Clustering-Based Empirical Bayes Analysis Method for Hotspot Identification

    Directory of Open Access Journals (Sweden)

    Yajie Zou

    2017-01-01

    Full Text Available Hotspot identification (HSID) is a critical part of network-wide safety evaluations. Typical methods for ranking sites are often rooted in using the Empirical Bayes (EB) method to estimate safety from both observed crash records and predicted crash frequency based on similar sites. The performance of the EB method is highly related to the selection of a reference group of sites (i.e., roadway segments or intersections) similar to the target site, from which safety performance functions (SPFs) used to predict crash frequency will be developed. As crash data often contain underlying heterogeneity that, in essence, can make them appear to be generated from distinct subpopulations, methods are needed to select similar sites in a principled manner. To overcome this possible heterogeneity problem, EB-based HSID methods that use common clustering methodologies (e.g., mixture models, K-means, and hierarchical clustering) to select “similar” sites for building SPFs are developed. Performance of the clustering-based EB methods is then compared using real crash data. Here, HSID results, when computed on Texas undivided rural highway crash data, suggest that all three clustering-based EB analysis methods are preferred over the conventional statistical methods. Thus, properly classifying the road segments for heterogeneous crash data can further improve HSID accuracy.
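
    The empirical Bayes step referred to above blends the SPF prediction with the observed crash count using a weight driven by the negative-binomial overdispersion parameter. The following sketch shows that blending for a single site; the SPF prediction, observed count and dispersion parameter are made-up numbers, and the weight formula is the commonly used Hauer form rather than anything specific to the clustering-based variant proposed in the paper.

```python
# Sketch: empirical Bayes (EB) safety estimate for one site.
# mu  : crash frequency predicted by the SPF for the study period
# y   : observed crash count at the site
# phi : inverse-dispersion parameter of the negative-binomial SPF
# Weight w = 1 / (1 + mu / phi); EB = w * mu + (1 - w) * y.
def eb_estimate(mu: float, y: float, phi: float) -> float:
    w = 1.0 / (1.0 + mu / phi)
    return w * mu + (1.0 - w) * y

# Hypothetical site: SPF predicts 3.2 crashes, 7 were observed, phi = 2.5
print(round(eb_estimate(mu=3.2, y=7, phi=2.5), 2))   # shrinks 7 toward 3.2
```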

  3. Cluster Analysis of the Newcastle Electronic Corpus of Tyneside English: A Comparison of Methods

    NARCIS (Netherlands)

    Moisl, Hermann; Jones, Valerie M.

    2005-01-01

    This article examines the feasibility of an empirical approach to sociolinguistic analysis of the Newcastle Electronic Corpus of Tyneside English using exploratory multivariate methods. It addresses a known problem with one class of such methods, hierarchical cluster analysis—that different

  5. System and Method for Outlier Detection via Estimating Clusters

    Science.gov (United States)

    Iverson, David J. (Inventor)

    2016-01-01

    An efficient method and system for real-time or offline analysis of multivariate sensor data for use in anomaly detection, fault detection, and system health monitoring is provided. Models automatically derived from training data, typically nominal system data acquired from sensors in normally operating conditions or from detailed simulations, are used to identify unusual, out of family data samples (outliers) that indicate possible system failure or degradation. Outliers are determined through analyzing a degree of deviation of current system behavior from the models formed from the nominal system data. The deviation of current system behavior is presented as an easy to interpret numerical score along with a measure of the relative contribution of each system parameter to any off-nominal deviation. The techniques described herein may also be used to "clean" the training data.
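
    The invention summarised above scores new data by its deviation from clusters learned on nominal training data. A generic, much-simplified analogue is sketched below: fit k-means to nominal sensor vectors and report the distance to the nearest cluster centre as a deviation score, flagging scores above a training-derived threshold. The threshold rule, cluster count and data are illustrative assumptions, not the patented algorithm.

```python
# Sketch: cluster-based deviation scoring for anomaly detection (generic
# analogue of "distance from nominal clusters", not the patented method).
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)
nominal = rng.normal(0.0, 1.0, size=(500, 4))      # nominal sensor vectors

km = KMeans(n_clusters=5, n_init=10, random_state=0).fit(nominal)
train_scores = km.transform(nominal).min(axis=1)   # distance to nearest centre
threshold = np.percentile(train_scores, 99)        # assumed alarm threshold

new_sample = np.array([[4.0, -3.5, 2.0, 0.0]])     # clearly off-nominal vector
score = km.transform(new_sample).min(axis=1)[0]
print(f"score={score:.2f}, threshold={threshold:.2f}, outlier={score > threshold}")
```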

  6. A method of detecting spatial clustering of disease

    International Nuclear Information System (INIS)

    Openshaw, S.; Wilkie, D.; Binks, K.; Wakeford, R.; Gerrard, M.H.; Croasdale, M.R.

    1989-01-01

    A statistical technique has been developed to identify extreme groupings of a disease and is being applied to childhood cancers, initially to acute lymphoblastic leukaemia incidence in the Northern and North-Western Regions of England. The method covers the area with a square grid, the size of which is varied over a wide range and whose origin is moved in small increments in two directions. The population at risk within any square is estimated using the 1971 and 1981 censuses. The significance of an excess of disease is determined by random simulation. In addition, tests to detect a general departure from a background Poisson process are carried out. Available results will be presented at the conference. (author)
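
    The scanning idea in this record (counts in squares laid over the region, significance judged by random simulation against a Poisson background) can be illustrated with a much-reduced sketch: a fixed set of squares, observed case counts versus populations at risk, and a Monte Carlo p-value for the most extreme square. The grid, data and the choice of test statistic are all assumptions for illustration, not the published procedure with varying square sizes and shifted origins.

```python
# Sketch: grid-based scan for spatial disease clustering with Monte Carlo
# significance (a simplified analogue of the method described above).
import numpy as np

rng = np.random.default_rng(3)
n_squares = 100
population = rng.integers(500, 5000, size=n_squares)     # persons at risk
rate = 1e-3                                               # background incidence
expected = population * rate
observed = rng.poisson(expected)
observed[17] += 8                                         # planted excess

stat = np.max(observed - expected)                        # most extreme square
sims = rng.poisson(expected, size=(999, n_squares))       # Poisson background
sim_stat = np.max(sims - expected, axis=1)
p_value = (1 + np.sum(sim_stat >= stat)) / (999 + 1)
print("max excess:", round(stat, 1), " Monte Carlo p-value:", p_value)
```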

  7. A method for determining the radius of an open cluster from stellar proper motions

    Science.gov (United States)

    Sánchez, Néstor; Alfaro, Emilio J.; López-Martínez, Fátima

    2018-04-01

    We propose a method for calculating the radius of an open cluster in an objective way from an astrometric catalogue containing, at least, positions and proper motions. It uses the minimum spanning tree in the proper motion space to discriminate cluster stars from field stars and it quantifies the strength of the cluster-field separation by means of a statistical parameter defined for the first time in this paper. This is done for a range of different sampling radii from where the cluster radius is obtained as the size at which the best cluster-field separation is achieved. The novelty of this strategy is that the cluster radius is obtained independently of how its stars are spatially distributed. We test the reliability and robustness of the method with both simulated and real data from a well-studied open cluster (NGC 188), and apply it to UCAC4 data for five other open clusters with different catalogued radius values. NGC 188, NGC 1647, NGC 6603, and Ruprecht 155 yielded unambiguous radius values of 15.2 ± 1.8, 29.4 ± 3.4, 4.2 ± 1.7, and 7.0 ± 0.3 arcmin, respectively. ASCC 19 and Collinder 471 showed more than one possible solution, but it is not possible to know whether this is due to the involved uncertainties or due to the presence of complex patterns in their proper motion distributions, something that could be inherent to the physical object or due to the way in which the catalogue was sampled.
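
    The key ingredient of this method is the minimum spanning tree built in proper-motion space for the stars inside a trial sampling radius. A minimal sketch of that single step is given below using SciPy; the catalogue columns, the loop over sampling radii and the statistical separation parameter defined in the paper are omitted or assumed.

```python
# Sketch: minimum spanning tree (MST) of stars in proper-motion space,
# the building block of the cluster-radius method summarised above.
import numpy as np
from scipy.spatial import distance_matrix
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import minimum_spanning_tree

rng = np.random.default_rng(4)
pm_cluster = rng.normal([5.0, -3.0], 0.3, size=(60, 2))   # tight cluster stars
pm_field = rng.normal([0.0, 0.0], 5.0, size=(200, 2))     # dispersed field stars
pm = np.vstack([pm_cluster, pm_field])                    # (mu_alpha*, mu_delta)

dist = distance_matrix(pm, pm)                            # pairwise distances
mst_dense = minimum_spanning_tree(csr_matrix(dist)).toarray()
edges = mst_dense[mst_dense > 0]
print("MST edges:", edges.size, " median edge length:", round(np.median(edges), 2))
# Short edges concentrate among cluster members, long edges among field stars,
# which is what a cluster-field separation statistic can exploit.
```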

  8. Evaluation of hierarchical agglomerative cluster analysis methods for discrimination of primary biological aerosol

    Directory of Open Access Journals (Sweden)

    I. Crawford

    2015-11-01

    Full Text Available In this paper we present improved methods for discriminating and quantifying primary biological aerosol particles (PBAPs) by applying hierarchical agglomerative cluster analysis to multi-parameter ultraviolet-light-induced fluorescence (UV-LIF) spectrometer data. The methods employed in this study can be applied to data sets in excess of 1 × 10⁶ points on a desktop computer, allowing for each fluorescent particle in a data set to be explicitly clustered. This reduces the potential for misattribution found in subsampling and comparative attribution methods used in previous approaches, improving our capacity to discriminate and quantify PBAP meta-classes. We evaluate the performance of several hierarchical agglomerative cluster analysis linkages and data normalisation methods using laboratory samples of known particle types and an ambient data set. Fluorescent and non-fluorescent polystyrene latex spheres were sampled with a Wideband Integrated Bioaerosol Spectrometer (WIBS-4), where the optical size, asymmetry factor and fluorescent measurements were used as inputs to the analysis package. It was found that the Ward linkage with z-score or range normalisation performed best, correctly attributing 98 and 98.1 % of the data points respectively. The best-performing methods were applied to the BEACHON-RoMBAS (Bio–hydro–atmosphere interactions of Energy, Aerosols, Carbon, H2O, Organics and Nitrogen–Rocky Mountain Biogenic Aerosol Study) ambient data set, where it was found that the z-score and range normalisation methods yield similar results, with each method producing clusters representative of fungal spores and bacterial aerosol, consistent with previous results. The z-score result was compared to clusters generated with previous approaches (WIBS AnalysiS Program, WASP), where we observe that the subsampling and comparative attribution method employed by WASP results in the overestimation of the fungal spore concentration by a factor of 1.5 and the
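
    The best-performing combination reported above (Ward linkage on z-score-normalised size, asymmetry and fluorescence measurements) is easy to reproduce in outline with SciPy. The sketch below uses random stand-in data with assumed column meanings; only the normalisation and linkage choices come from the record.

```python
# Sketch: hierarchical agglomerative clustering with Ward linkage on
# z-score-normalised particle measurements (stand-in data).
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.stats import zscore

rng = np.random.default_rng(5)
# assumed columns: optical size, asymmetry factor, fluorescence channels FL1-FL3
a = rng.normal([2.0, 10.0, 50.0, 5.0, 5.0], 1.0, size=(300, 5))
b = rng.normal([4.0, 30.0, 5.0, 60.0, 20.0], 1.0, size=(300, 5))
data = np.vstack([a, b])

z = zscore(data, axis=0)                    # z-score normalisation per column
Z = linkage(z, method="ward")               # Ward linkage
labels = fcluster(Z, t=2, criterion="maxclust")
print("cluster sizes:", np.bincount(labels)[1:])
```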

  9. Variational methods and effective actions in string models

    International Nuclear Information System (INIS)

    Dereli, T.; Tucker, R.W.

    1987-01-01

    Effective actions motivated by zero-order and first-order actions are examined. Particular attention is devoted to a variational procedure that is consistent with the structure equations involving the Lorentz connection. Attention is drawn to subtleties that can arise in varying higher-order actions and an efficient procedure developed to handle these cases using the calculus of forms. The effect of constrained variations on the field equations is discussed. (author)

  10. Analysis of cost data in a cluster-randomized, controlled trial: comparison of methods

    DEFF Research Database (Denmark)

    Sokolowski, Ineta; Ørnbøl, Eva; Rosendal, Marianne

    studies have used non-valid analysis of skewed data. We propose two different methods to compare mean cost in two groups. Firstly, we use a non-parametric bootstrap method where the re-sampling takes place on two levels in order to take into account the cluster effect. Secondly, we proceed with a log-transformation of the cost data and apply the normal theory on these data. Again we try to account for the cluster effect. The performance of these two methods is investigated in a simulation study. The advantages and disadvantages of the different approaches are discussed.......  We consider health care data from a cluster-randomized intervention study in primary care to test whether the average health care costs among study patients differ between the two groups. The problems of analysing cost data are that most data are severely skewed. Median instead of mean...
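
    The first of the two approaches mentioned in this record, a bootstrap that resamples on two levels (clusters first, then patients within the sampled clusters) before comparing mean costs between trial arms, can be sketched as follows. The data layout, number of resamples and the percentile interval are illustrative assumptions.

```python
# Sketch: two-level (cluster, then individual) bootstrap for the difference
# in mean cost between two trial arms with clustered (practice-level) data.
import numpy as np

rng = np.random.default_rng(6)
# cost[arm][cluster] = array of individual patient costs (skewed, log-normal)
cost = {arm: [rng.lognormal(6.0 + 0.1 * arm, 1.0, size=rng.integers(20, 60))
              for _ in range(15)] for arm in (0, 1)}

def boot_mean(clusters, rng):
    picked = rng.integers(0, len(clusters), size=len(clusters))   # level 1
    patients = [rng.choice(clusters[c], size=len(clusters[c]))    # level 2
                for c in picked]
    return np.mean(np.concatenate(patients))

diffs = np.array([boot_mean(cost[1], rng) - boot_mean(cost[0], rng)
                  for _ in range(2000)])
lo, hi = np.percentile(diffs, [2.5, 97.5])
print(f"mean cost difference 95% CI: ({lo:.0f}, {hi:.0f})")
```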

  11. Identification of rural landscape classes through a GIS clustering method

    Directory of Open Access Journals (Sweden)

    Irene Diti

    2013-09-01

    Full Text Available The paper presents a methodology aimed at supporting the rural planning process. The analysis of the state of the art of local and regional policies focused on rural and suburban areas, and the study of the scientific literature in the field of spatial analysis methodologies, have allowed the definition of the basic concept of the research. The proposed method, developed in a GIS, is based on spatial metrics selected and defined to cover various agricultural, environmental, and socio-economic components. The specific goal of the proposed methodology is to identify homogeneous extra-urban areas through their objective characterization at different scales. Once areas with intermediate urban-rural characters have been identified, the analysis is then focused on the more detailed definition of periurban agricultural areas. The synthesis of the results of the analysis of the various landscape components is achieved through an original interpretative key which aims to quantify the potential impacts of rural areas on the urban system. This paper presents the general framework of the methodology and some of the main results of its first implementation through an Italian case study.

  12. Symptom Clusters in Advanced Cancer Patients: An Empirical Comparison of Statistical Methods and the Impact on Quality of Life.

    Science.gov (United States)

    Dong, Skye T; Costa, Daniel S J; Butow, Phyllis N; Lovell, Melanie R; Agar, Meera; Velikova, Galina; Teckle, Paulos; Tong, Allison; Tebbutt, Niall C; Clarke, Stephen J; van der Hoek, Kim; King, Madeleine T; Fayers, Peter M

    2016-01-01

    Symptom clusters in advanced cancer can influence patient outcomes. There is large heterogeneity in the methods used to identify symptom clusters. To investigate the consistency of symptom cluster composition in advanced cancer patients using different statistical methodologies for all patients across five primary cancer sites, and to examine which clusters predict functional status, a global assessment of health and global quality of life. Principal component analysis and exploratory factor analysis (with different rotation and factor selection methods) and hierarchical cluster analysis (with different linkage and similarity measures) were used on a data set of 1562 advanced cancer patients who completed the European Organization for the Research and Treatment of Cancer Quality of Life Questionnaire-Core 30. Four clusters consistently formed for many of the methods and cancer sites: tense-worry-irritable-depressed (emotional cluster), fatigue-pain, nausea-vomiting, and concentration-memory (cognitive cluster). The emotional cluster was a stronger predictor of overall quality of life than the other clusters. Fatigue-pain was a stronger predictor of overall health than the other clusters. The cognitive cluster and fatigue-pain predicted physical functioning, role functioning, and social functioning. The four identified symptom clusters were consistent across statistical methods and cancer types, although there were some noteworthy differences. Statistical derivation of symptom clusters is in need of greater methodological guidance. A psychosocial pathway in the management of symptom clusters may improve quality of life. Biological mechanisms underpinning symptom clusters need to be delineated by future research. A framework for evidence-based screening, assessment, treatment, and follow-up of symptom clusters in advanced cancer is essential. Copyright © 2016 American Academy of Hospice and Palliative Medicine. Published by Elsevier Inc. All rights reserved.

  13. Clustering and training set selection methods for improving the accuracy of quantitative laser induced breakdown spectroscopy

    Energy Technology Data Exchange (ETDEWEB)

    Anderson, Ryan B., E-mail: randerson@astro.cornell.edu [Cornell University Department of Astronomy, 406 Space Sciences Building, Ithaca, NY 14853 (United States); Bell, James F., E-mail: Jim.Bell@asu.edu [Arizona State University School of Earth and Space Exploration, Bldg.: INTDS-A, Room: 115B, Box 871404, Tempe, AZ 85287 (United States); Wiens, Roger C., E-mail: rwiens@lanl.gov [Los Alamos National Laboratory, P.O. Box 1663 MS J565, Los Alamos, NM 87545 (United States); Morris, Richard V., E-mail: richard.v.morris@nasa.gov [NASA Johnson Space Center, 2101 NASA Parkway, Houston, TX 77058 (United States); Clegg, Samuel M., E-mail: sclegg@lanl.gov [Los Alamos National Laboratory, P.O. Box 1663 MS J565, Los Alamos, NM 87545 (United States)

    2012-04-15

    We investigated five clustering and training set selection methods to improve the accuracy of quantitative chemical analysis of geologic samples by laser induced breakdown spectroscopy (LIBS) using partial least squares (PLS) regression. The LIBS spectra were previously acquired for 195 rock slabs and 31 pressed powder geostandards under 7 Torr CO₂ at a stand-off distance of 7 m at 17 mJ per pulse to simulate the operational conditions of the ChemCam LIBS instrument on the Mars Science Laboratory Curiosity rover. The clustering and training set selection methods, which do not require prior knowledge of the chemical composition of the test-set samples, are based on grouping similar spectra and selecting appropriate training spectra for the partial least squares (PLS2) model. These methods were: (1) hierarchical clustering of the full set of training spectra and selection of a subset for use in training; (2) k-means clustering of all spectra and generation of PLS2 models based on the training samples within each cluster; (3) iterative use of PLS2 to predict sample composition and k-means clustering of the predicted compositions to subdivide the groups of spectra; (4) soft independent modeling of class analogy (SIMCA) classification of spectra, and generation of PLS2 models based on the training samples within each class; (5) use of Bayesian information criteria (BIC) to determine an optimal number of clusters and generation of PLS2 models based on the training samples within each cluster. The iterative method and the k-means method using 5 clusters showed the best performance, improving the absolute quadrature root mean squared error (RMSE) by approximately 3 wt.%. The statistical significance of these improvements was approximately 85%. Our results show that although clustering methods can modestly improve results, a large and diverse training set is the most reliable way to improve the accuracy of quantitative LIBS. In particular, additional sulfate standards and
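
    Method (2) in the list above (k-means on the spectra, then a separate PLS2 model trained within each cluster) can be outlined with scikit-learn as below. The spectra, number of clusters and number of PLS components are placeholders; only the cluster-then-regress structure reflects the record.

```python
# Sketch: k-means clustering of spectra followed by a PLS regression model
# per cluster (structure of method 2 above, with stand-in data).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(7)
X_train = rng.random((120, 200))          # training spectra (120 x 200 channels)
Y_train = rng.random((120, 9))            # 9 major-oxide compositions (wt.%)
X_test = rng.random((10, 200))            # unknown spectra

km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X_train)
models = {c: PLSRegression(n_components=5).fit(X_train[km.labels_ == c],
                                               Y_train[km.labels_ == c])
          for c in range(3)}

test_clusters = km.predict(X_test)        # route each unknown to a cluster
Y_pred = np.vstack([models[c].predict(x[None, :]) for x, c
                    in zip(X_test, test_clusters)])
print(Y_pred.shape)                       # (10, 9) predicted compositions
```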

  14. A review on cluster estimation methods and their application to neural spike data

    Science.gov (United States)

    Zhang, James; Nguyen, Thanh; Cogill, Steven; Bhatti, Asim; Luo, Lingkun; Yang, Samuel; Nahavandi, Saeid

    2018-06-01

    The extracellular action potentials recorded on an electrode result from the collective simultaneous electrophysiological activity of an unknown number of neurons. Identifying and assigning these action potentials to their firing neurons—‘spike sorting’—is an indispensable step in studying the function and the response of an individual or ensemble of neurons to certain stimuli. Given the task of neural spike sorting, the determination of the number of clusters (neurons) is arguably the most difficult and challenging issue, due to the existence of background noise and the overlap and interactions among neurons in neighbouring regions. It is not surprising that some researchers still rely on visual inspection by experts to estimate the number of clusters in neural spike sorting. Manual inspection, however, is not suitable to processing the vast, ever-growing amount of neural data. To address this pressing need, in this paper, thirty-three clustering validity indices have been comprehensively reviewed and implemented to determine the number of clusters in neural datasets. To gauge the suitability of the indices to neural spike data, and inform the selection process, we then calculated the indices by applying k-means clustering to twenty widely used synthetic neural datasets and one empirical dataset, and compared the performance of these indices against pre-existing ground truth labels. The results showed that the top five validity indices work consistently well across variations in noise level, both for the synthetic datasets and the real dataset. Using these top performing indices provides strong support for the determination of the number of neural clusters, which is essential in the spike sorting process.
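
    As a small illustration of what such validity indices do, the sketch below scores k-means solutions for a range of candidate cluster counts with three indices available in scikit-learn (silhouette, Calinski-Harabasz, Davies-Bouldin). These three are common examples, not necessarily the top performers identified by the review, and the synthetic spike features are assumptions.

```python
# Sketch: scoring candidate numbers of clusters with three common validity
# indices (illustrative; the review above compares thirty-three of them).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import (silhouette_score, calinski_harabasz_score,
                             davies_bouldin_score)

rng = np.random.default_rng(8)
# stand-in spike features: three "neurons" in a 2-D feature space (e.g. PCA)
X = np.vstack([rng.normal(m, 0.4, size=(150, 2)) for m in ((0, 0), (3, 0), (0, 3))])

for k in range(2, 7):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    print(k,
          round(silhouette_score(X, labels), 3),          # higher is better
          round(calinski_harabasz_score(X, labels), 1),   # higher is better
          round(davies_bouldin_score(X, labels), 3))      # lower is better
```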

  16. Electronic states in clusters of H forms of zeolites with variation of the Si/Al ratio

    International Nuclear Information System (INIS)

    Gun'ko, V.M.

    1987-01-01

    Fragments of H forms of zeolites of the faujasite type including up to 12 silicon- and aluminum-oxygen tetrahedrons and having different Si/Al ratios have been calculated in the cluster approximation by the MINDO/3 and CNDO/2 methods. The dependence of the integral and orbital densities of electronic states in the clusters on the aluminum content has been investigated. It has been shown that the profiles of the s- and p-orbital density of states of Al remain practically unchanged as the Si/Al ratio is lowered and that the maxima of the orbital density of states of Si broaden, and new maxima appear at the bottom and top of the valence band. When the acidity of the structural OH groups is lowered, the maxima of the orbital density of states of the H atoms are displaced appreciably only in the deep valence band, while in the upper valence band the positions of the peaks of the s-orbital density of states of the H atoms remain constant. Satisfactory agreement of the calculated orbital densities of states of Si, Al, and O with the corresponding x-ray photoelectron spectra has been obtained. In the deep valence band the data from the MINDO/3 method are better than those from the CNDO/2 method and reproduce the positions of the maxima in the x-ray photoelectron spectra

  17. DLTAP: A Network-efficient Scheduling Method for Distributed Deep Learning Workload in Containerized Cluster Environment

    Directory of Open Access Journals (Sweden)

    Qiao Wei

    2017-01-01

    Full Text Available Deep neural networks (DNNs) have recently yielded strong results on a range of applications. Training these DNNs using a cluster of commodity machines is a promising approach since training is time consuming and compute-intensive. Furthermore, putting DNN tasks into containers of clusters would enable broader and easier deployment of DNN-based algorithms. Toward this end, this paper addresses the problem of scheduling DNN tasks in the containerized cluster environment. Efficiently scheduling data-parallel computation jobs like DNN over containerized clusters is critical for job performance, system throughput, and resource utilization. It becomes even more challenging with complex workloads. We propose a scheduling method called Deep Learning Task Allocation Priority (DLTAP), which performs scheduling decisions in a distributed manner; each scheduling decision takes the aggregation degree of parameter server tasks and worker tasks into account, in particular to reduce cross-node network transmission traffic and, correspondingly, decrease the DNN training time. We evaluate the DLTAP scheduling method using a state-of-the-art distributed DNN training framework on 3 benchmarks. The results show that the proposed method reduces cross-node network traffic by 12% on average and decreases the DNN training time, even on a cluster of low-end servers.

  18. AN EFFICIENT INITIALIZATION METHOD FOR K-MEANS CLUSTERING OF HYPERSPECTRAL DATA

    Directory of Open Access Journals (Sweden)

    A. Alizade Naeini

    2014-10-01

    Full Text Available K-means is definitely the most frequently used partitional clustering algorithm in the remote sensing community. Unfortunately, due to its gradient descent nature, this algorithm is highly sensitive to the initial placement of cluster centers. This problem deteriorates for high-dimensional data such as hyperspectral remotely sensed imagery. To tackle this problem, in this paper, the spectral signatures of the endmembers in the image scene are extracted and used as the initial positions of the cluster centers. For this purpose, in the first step, a Neyman–Pearson detection-theory-based eigen-thresholding method (i.e., the HFC method) has been employed to estimate the number of endmembers in the image. Afterwards, the spectral signatures of the endmembers are obtained using the Minimum Volume Enclosing Simplex (MVES) algorithm. Eventually, these spectral signatures are used to initialize the k-means clustering algorithm. The proposed method is implemented on a hyperspectral dataset acquired by the ROSIS sensor with 103 spectral bands over the Pavia University campus, Italy. For comparative evaluation, two other commonly used initialization methods (i.e., the Bradley & Fayyad (BF) and Random methods) are implemented and compared. The confusion matrix, overall accuracy and Kappa coefficient are employed to assess the methods’ performance. The evaluations demonstrate that the proposed solution outperforms the other initialization methods and can be applied for unsupervised classification of hyperspectral imagery for landcover mapping.
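
    The final step of the proposed pipeline, seeding k-means with the extracted endmember spectra instead of random centres, looks roughly like the sketch below in scikit-learn. The pixel and endmember matrices here are random stand-ins; in the paper they would come from the image cube and the HFC/MVES steps.

```python
# Sketch: k-means on hyperspectral pixels initialised with endmember
# signatures (stand-in arrays; HFC/MVES extraction not shown).
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(9)
n_pixels, n_bands, n_endmembers = 5000, 103, 9
pixels = rng.random((n_pixels, n_bands))          # unfolded image cube
endmembers = rng.random((n_endmembers, n_bands))  # would come from MVES

km = KMeans(n_clusters=n_endmembers, init=endmembers, n_init=1)
labels = km.fit_predict(pixels)                   # land-cover cluster map
print(np.bincount(labels))
```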

  19. Multishell method: Exact treatment of a cluster in an effective medium

    International Nuclear Information System (INIS)

    Gonis, A.; Garland, J.W.

    1977-01-01

    A method is presented for the exact determination of the Green's function of a cluster embedded in a given effective medium. This method, the multishell method, is applicable even to systems with off-diagonal disorder, extended-range hopping, multiple bands, and/or hybridization, and is computationally practicable for any system described by a tight-binding or interpolation-scheme Hamiltonian. It allows one to examine the effects of local environment on the densities of states and site spectral weight functions of disordered systems. For any given analytic effective medium characterized by a non-negative density of states the method yields analytic cluster Green's functions and non-negative site spectral weight functions. Previous methods used for the calculation of the Green's function of a cluster embedded in a given effective medium have not been exact. The results of numerical calculations for model systems show that even the best of these previous methods can lead to substantial errors, at least for small clusters in two- and three-dimensional lattices. These results also show that fluctuations in local environment have large effects on site spectral weight functions, even in cases in which the single-site coherent-potential approximation yields an accurate overall density of states

  20. Open-Source Sequence Clustering Methods Improve the State Of the Art.

    Science.gov (United States)

    Kopylova, Evguenia; Navas-Molina, Jose A; Mercier, Céline; Xu, Zhenjiang Zech; Mahé, Frédéric; He, Yan; Zhou, Hong-Wei; Rognes, Torbjørn; Caporaso, J Gregory; Knight, Rob

    2016-01-01

    Sequence clustering is a common early step in amplicon-based microbial community analysis, when raw sequencing reads are clustered into operational taxonomic units (OTUs) to reduce the run time of subsequent analysis steps. Here, we evaluated the performance of recently released state-of-the-art open-source clustering software products, namely, OTUCLUST, Swarm, SUMACLUST, and SortMeRNA, against current principal options (UCLUST and USEARCH) in QIIME, hierarchical clustering methods in mothur, and USEARCH's most recent clustering algorithm, UPARSE. All the latest open-source tools showed promising results, reporting up to 60% fewer spurious OTUs than UCLUST, indicating that the underlying clustering algorithm can vastly reduce the number of these derived OTUs. Furthermore, we observed that stringent quality filtering, such as is done in UPARSE, can cause a significant underestimation of species abundance and diversity, leading to incorrect biological results. Swarm, SUMACLUST, and SortMeRNA have been included in the QIIME 1.9.0 release. IMPORTANCE Massive collections of next-generation sequencing data call for fast, accurate, and easily accessible bioinformatics algorithms to perform sequence clustering. A comprehensive benchmark is presented, including open-source tools and the popular USEARCH suite. Simulated, mock, and environmental communities were used to analyze sensitivity, selectivity, species diversity (alpha and beta), and taxonomic composition. The results demonstrate that recent clustering algorithms can significantly improve accuracy and preserve estimated diversity without the application of aggressive filtering. Moreover, these tools are all open source, apply multiple levels of multithreading, and scale to the demands of modern next-generation sequencing data, which is essential for the analysis of massive multidisciplinary studies such as the Earth Microbiome Project (EMP) (J. A. Gilbert, J. K. Jansson, and R. Knight, BMC Biol 12:69, 2014, http

  1. Form gene clustering method about pan-ethnic-group products based on emotional semantic

    Science.gov (United States)

    Chen, Dengkai; Ding, Jingjing; Gao, Minzhuo; Ma, Danping; Liu, Donghui

    2016-09-01

    The use of pan-ethnic-group product form knowledge primarily depends on a designer's subjective experience without user participation. The majority of studies primarily focus on detecting the perceptual demands of consumers from the target product category. A pan-ethnic-group product form gene clustering method based on emotional semantics is constructed. Consumers' perceptual images of the pan-ethnic-group products are obtained by means of product form gene extraction and coding and computer-aided product form clustering technology. A case of form gene clustering for typical pan-ethnic-group products is investigated, which indicates that the method is feasible. This paper opens up a new direction for the future development of product form design, which improves the agility of the product design process in the era of Industry 4.0.

  2. Communication: Time-dependent optimized coupled-cluster method for multielectron dynamics

    Science.gov (United States)

    Sato, Takeshi; Pathak, Himadri; Orimo, Yuki; Ishikawa, Kenichi L.

    2018-02-01

    Time-dependent coupled-cluster method with time-varying orbital functions, called the time-dependent optimized coupled-cluster (TD-OCC) method, is formulated for multielectron dynamics in an intense laser field. We have successfully derived the equations of motion for CC amplitudes and orthonormal orbital functions based on the real action functional, and implemented the method including double excitations (TD-OCCD) and double and triple excitations (TD-OCCDT) within the optimized active orbitals. The present method is size extensive and gauge invariant, a polynomial cost-scaling alternative to the time-dependent multiconfiguration self-consistent-field method. The first application of the TD-OCC method to intense-laser-driven correlated electron dynamics in the Ar atom is reported.

  3. Unsupervised Learning —A Novel Clustering Method for Rolling Bearing Faults Identification

    Science.gov (United States)

    Kai, Li; Bo, Luo; Tao, Ma; Xuefeng, Yang; Guangming, Wang

    2017-12-01

    To promptly process massive fault data and automatically provide accurate diagnosis results, numerous studies have been conducted on intelligent fault diagnosis of rolling bearings. In these studies, supervised learning methods such as artificial neural networks, support vector machines, and decision trees are commonly used. These methods can detect rolling bearing failures effectively, but achieving better detection results often requires a large number of training samples. Based on the above, a novel clustering method is proposed in this paper. This novel method is able to find the correct number of clusters automatically. The effectiveness of the proposed method is validated using datasets from rolling element bearings. The diagnosis results show that the proposed method can accurately detect the fault types of small samples. Meanwhile, the diagnosis results also achieve relatively high accuracy even for massive samples.

  4. Clustering of attitudes towards obesity: a mixed methods study of Australian parents and children.

    Science.gov (United States)

    Olds, Tim; Thomas, Samantha; Lewis, Sophie; Petkov, John

    2013-10-12

    Current population-based anti-obesity campaigns often target individuals based on either weight or socio-demographic characteristics, and give a 'mass' message about personal responsibility. There is a recognition that attempts to influence attitudes and opinions may be more effective if they resonate with the beliefs that different groups have about the causes of, and solutions for, obesity. Limited research has explored how attitudinal factors may inform the development of both upstream and downstream social marketing initiatives. Computer-assisted face-to-face interviews were conducted with 159 parents and 184 of their children (aged 9-18 years old) in two Australian states. A mixed methods approach was used to assess attitudes towards obesity, and elucidate why different groups held various attitudes towards obesity. Participants were quantitatively assessed on eight dimensions relating to the severity and extent, causes and responsibility, possible remedies, and messaging strategies. Cluster analysis was used to determine attitudinal clusters. Participants were also able to qualify each answer. Qualitative responses were analysed both within and across attitudinal clusters using a constant comparative method. Three clusters were identified. Concerned Internalisers (27% of the sample) judged that obesity was a serious health problem, that Australia had among the highest levels of obesity in the world and that prevalence was rapidly increasing. They situated the causes and remedies for the obesity crisis in individual choices. Concerned Externalisers (38% of the sample) held similar views about the severity and extent of the obesity crisis. However, they saw responsibility and remedies as a societal rather than an individual issue. The final cluster, the Moderates, which contained significantly more children and males, believed that obesity was not such an important public health issue, and judged the extent of obesity to be less extreme than the other clusters

  5. Unstructured characteristic method embedded with variational nodal method using domain decomposition techniques

    Energy Technology Data Exchange (ETDEWEB)

    Girardi, E.; Ruggieri, J.M. [CEA Cadarache (DER/SPRC/LEPH), 13 - Saint-Paul-lez-Durance (France). Dept. d' Etudes des Reacteurs; Santandrea, S. [CEA Saclay, Dept. Modelisation de Systemes et Structures DM2S/SERMA/LENR, 91 - Gif sur Yvette (France)

    2005-07-01

    This paper describes a recently-developed extension of our 'Multi-methods, multi-domains' (MM-MD) method for the solution of the multigroup transport equation. Based on a domain decomposition technique, our approach allows us to treat the one-group equation by cooperatively employing several numerical methods together. In this work, we describe the coupling of the Method of Characteristics (integro-differential equation, unstructured meshes) with the Variational Nodal Method (even-parity equation, Cartesian meshes). Then, the coupling method is applied to the benchmark model of the Phebus experimental facility (CEA Cadarache). Our domain decomposition method gives us the capability to employ a very fine mesh in describing a particular fuel bundle with an appropriate numerical method (MOC), while using a much larger mesh size in the rest of the core, in conjunction with a coarse-mesh method (VNM). This application shows the benefits of our MM-MD approach, in terms of accuracy and computing time: the domain decomposition method allows us to reduce the CPU time, while preserving a good accuracy of the neutronic indicators: reactivity, core-to-bundle power coupling coefficient and flux error. (authors)

  7. Cluster analysis of European Y-chromosomal STR haplotypes using the discrete Laplace method

    DEFF Research Database (Denmark)

    Andersen, Mikkel Meyer; Eriksen, Poul Svante; Morling, Niels

    2014-01-01

    The European Y-chromosomal short tandem repeat (STR) haplotype distribution has previously been analysed in various ways. Here, we introduce a new way of analysing population substructure using a new method based on clustering within the discrete Laplace exponential family that models the probability distribution of the Y-STR haplotypes. Creating a consistent statistical model of the haplotypes enables us to perform a wide range of analyses. Previously, haplotype frequency estimation using the discrete Laplace method has been validated. In this paper we investigate how the discrete Laplace method can be used for cluster analysis to further validate the discrete Laplace method. A very important practical fact is that the calculations can be performed on a normal computer. We identified two sub-clusters of the Eastern and Western European Y-STR haplotypes similar to results of previous...

  8. Detecting and extracting clusters in atom probe data: A simple, automated method using Voronoi cells

    International Nuclear Information System (INIS)

    Felfer, P.; Ceguerra, A.V.; Ringer, S.P.; Cairney, J.M.

    2015-01-01

    The analysis of the formation of clusters in solid solutions is one of the most common uses of atom probe tomography. Here, we present a method where we use the Voronoi tessellation of the solute atoms and its geometric dual, the Delaunay triangulation to test for spatial/chemical randomness of the solid solution as well as extracting the clusters themselves. We show how the parameters necessary for cluster extraction can be determined automatically, i.e. without user interaction, making it an ideal tool for the screening of datasets and the pre-filtering of structures for other spatial analysis techniques. Since the Voronoi volumes are closely related to atomic concentrations, the parameters resulting from this analysis can also be used for other concentration based methods such as iso-surfaces. - Highlights: • Cluster analysis of atom probe data can be significantly simplified by using the Voronoi cell volumes of the atomic distribution. • Concentration fields are defined on a single atomic basis using Voronoi cells. • All parameters for the analysis are determined by optimizing the separation probability of bulk atoms vs clustered atoms
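
    The core quantity in this approach, the Voronoi cell volume of each solute atom, can be computed with SciPy as sketched below; small cell volumes correspond to locally high solute concentration. The handling of unbounded boundary cells and the percentile cut-off used to flag clustered atoms are simplifications assumed here, not the authors' automated parameter selection.

```python
# Sketch: Voronoi cell volumes of solute positions as a local concentration
# proxy (boundary cells with unbounded regions are skipped).
import numpy as np
from scipy.spatial import Voronoi, ConvexHull

rng = np.random.default_rng(10)
solutes = rng.random((400, 3)) * 20.0                       # stand-in coordinates (nm)
solutes[:40] = 10.0 + rng.normal(0.0, 0.4, size=(40, 3))    # one dense cluster

vor = Voronoi(solutes)
volumes = np.full(len(solutes), np.nan)
for i, region_index in enumerate(vor.point_region):
    region = vor.regions[region_index]
    if -1 in region or len(region) == 0:          # unbounded cell at the hull
        continue
    volumes[i] = ConvexHull(vor.vertices[region]).volume

finite = volumes[~np.isnan(volumes)]
threshold = np.percentile(finite, 15)             # assumed cut-off
print("atoms flagged as clustered:", int(np.sum(finite < threshold)))
```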

  9. Analysis of spin and gauge models with variational methods

    International Nuclear Information System (INIS)

    Dagotto, E.; Masperi, L.; Moreo, A.; Della Selva, A.; Fiore, R.

    1985-01-01

    Since independent-site (link) or independent-link (plaquette) variational states enhance the order or the disorder, respectively, in the treatment of spin (gauge) models, we prove that mixed states are able to improve the critical coupling while giving the qualitatively correct behavior of the relevant parameters

  10. Perturbative vs. variational methods in the study of carbon nanotubes

    DEFF Research Database (Denmark)

    Cornean, Horia; Pedersen, Thomas Garm; Ricaud, Benjamin

    2007-01-01

    Recent two-photon photo-luminescence experiments give accurate data for the ground and first excited excitonic energies at different nanotube radii. In this paper we compare the analytic approximations proved in [CDR], with a standard variational approach. We show an excellent agreement at suffic...

  11. Variational method for inverting the Kohn-Sham procedure

    International Nuclear Information System (INIS)

    Kadantsev, Eugene S.; Stott, M.J.

    2004-01-01

    A procedure based on a variational principle is developed for determining the local Kohn-Sham (KS) potential corresponding to a given ground-state electron density. This procedure is applied to calculate the exchange-correlation part of the effective Kohn-Sham (KS) potential for the neon atom and the methane molecule

  12. Clustering self-organizing maps (SOM) method for human papillomavirus (HPV) DNA as the main cause of cervical cancer disease

    Science.gov (United States)

    Bustamam, A.; Aldila, D.; Fatimah, Arimbi, M. D.

    2017-07-01

    One of the most widely used clustering methods, since it has the advantage of robustness, is the Self-Organizing Maps (SOM) method. This paper discusses the application of the SOM method to Human Papillomavirus (HPV) DNA, which is the main cause of cervical cancer, the most dangerous cancer in developing countries. We use 18 types of HPV DNA, based on the newest complete genomes. By using the open-source program R, the clustering process can separate the 18 types of HPV into two different clusters. There are two types of HPV in the first cluster, while the 16 others are in the second cluster. The 18 HPV types are then analysed based on the malignancy of the virus (how difficult it is to cure). The two HPV types in the first cluster can be classified as tame HPV, while the 16 others in the second cluster are classified as vicious HPV.
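
    The clustering in this record was done in R, but the same SOM workflow can be sketched in Python with the MiniSom package: train a small map on genome-derived feature vectors (here random stand-ins for whatever encoding the authors used) and read off which map nodes the sequences fall on. The feature construction and map size are assumptions.

```python
# Sketch: SOM clustering of sequence feature vectors with MiniSom
# (the paper's analysis was done in R; the features here are stand-ins).
import numpy as np
from minisom import MiniSom

rng = np.random.default_rng(11)
# stand-in "HPV genome" features, e.g. normalised k-mer frequency vectors
features = rng.random((18, 64))
features[:2] += 0.5                      # make two genomes stand apart

som = MiniSom(4, 4, input_len=64, sigma=1.0, learning_rate=0.5, random_seed=0)
som.random_weights_init(features)
som.train_random(features, 1000)

winners = [som.winner(f) for f in features]   # best-matching unit per genome
print(winners)                                # nearby nodes ~ same cluster
```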

  13. A comparison of three clustering methods for finding subgroups in MRI, SMS or clinical data

    DEFF Research Database (Denmark)

    Kent, Peter; Jensen, Rikke K; Kongsted, Alice

    2014-01-01

    ). There is a scarcity of head-to-head comparisons that can inform the choice of which clustering method might be suitable for particular clinical datasets and research questions. Therefore, the aim of this study was to perform a head-to-head comparison of three commonly available methods (SPSS TwoStep CA, Latent Gold...... LCA and SNOB LCA). METHODS: The performance of these three methods was compared: (i) quantitatively using the number of subgroups detected, the classification probability of individuals into subgroups, the reproducibility of results, and (ii) qualitatively using subjective judgments about each program...... classify individuals into those subgroups. CONCLUSIONS: Our subjective judgement was that Latent Gold offered the best balance of sensitivity to subgroups, ease of use and presentation of results with these datasets but we recognise that different clustering methods may suit other types of data...

  14. A method to determine the number of nanoparticles in a cluster using conventional optical microscopes

    International Nuclear Information System (INIS)

    Kang, Hyeonggon; Attota, Ravikiran; Tondare, Vipin; Vladár, András E.; Kavuri, Premsagar

    2015-01-01

    We present a method that uses conventional optical microscopes to determine the number of nanoparticles in a cluster, which is typically not possible using traditional image-based optical methods due to the diffraction limit. The method, called through-focus scanning optical microscopy (TSOM), uses a series of optical images taken at varying focus levels to achieve this. The optical images cannot directly resolve the individual nanoparticles, but contain information related to the number of particles. The TSOM method makes use of this information to determine the number of nanoparticles in a cluster. Initial good agreement between the simulations and the measurements is also presented. The TSOM method can be applied to fluorescent and non-fluorescent as well as metallic and non-metallic nano-scale materials, including soft materials, making it attractive for tag-less, high-speed, optical analysis of nanoparticles down to 45 nm diameter

  15. Methods for simultaneously identifying coherent local clusters with smooth global patterns in gene expression profiles

    Directory of Open Access Journals (Sweden)

    Lee Yun-Shien

    2008-03-01

    Full Text Available Abstract Background The hierarchical clustering tree (HCT) with a dendrogram [1] and the singular value decomposition (SVD) with a dimension-reduced representative map [2] are popular methods for two-way sorting of the gene-by-array matrix map employed in gene expression profiling. While HCT dendrograms tend to optimize local coherent clustering patterns, SVD leading eigenvectors usually identify better global grouping and transitional structures. Results This study proposes a flipping mechanism for a conventional agglomerative HCT using a rank-two ellipse (R2E) seriation, an improved SVD algorithm for sorting purposes proposed by Chen [3], as an external reference. While HCTs always produce permutations with good local behaviour, the rank-two ellipse seriation gives the best global grouping patterns and smooth transitional trends. The resulting algorithm automatically integrates the desirable properties of each method so that users have access to a clustering and visualization environment for gene expression profiles that preserves coherent local clusters and identifies global grouping trends. Conclusion We demonstrate, through four examples, that the proposed method not only possesses better numerical and statistical properties, it also provides more meaningful biomedical insights than other sorting algorithms. We suggest that sorted proximity matrices for genes and arrays, in addition to the gene-by-array expression matrix, can greatly aid in the search for comprehensive understanding of gene expression structures. Software for the proposed methods can be obtained at http://gap.stat.sinica.edu.tw/Software/GAP.
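
    The rank-two ellipse seriation itself is not available in standard Python libraries. As a hedged stand-in that illustrates the general idea of flipping dendrogram branches against an external ordering criterion, the sketch below applies SciPy's optimal leaf ordering to placeholder expression data; this is a standard alternative reordering, not the authors' R2E reference.

    ```python
    # Hedged illustration: flip dendrogram branches so that adjacent leaves are as
    # similar as possible. SciPy's optimal_leaf_ordering is used as a stand-in for
    # the paper's rank-two ellipse (R2E) external reference; the data are random.
    import numpy as np
    from scipy.cluster.hierarchy import linkage, optimal_leaf_ordering, leaves_list
    from scipy.spatial.distance import pdist

    rng = np.random.default_rng(1)
    expr = rng.normal(size=(50, 12))          # placeholder gene-by-array matrix

    d = pdist(expr, metric='correlation')     # gene-gene dissimilarities
    Z = linkage(d, method='average')          # agglomerative HCT
    Z_ordered = optimal_leaf_ordering(Z, d)   # branch flipping for a smoother ordering

    print(leaves_list(Z_ordered))             # gene order for sorting the heat map rows
    ```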

  16. Statistical method on nonrandom clustering with application to somatic mutations in cancer

    Directory of Open Access Journals (Sweden)

    Rejto Paul A

    2010-01-01

    Full Text Available Abstract Background Human cancer is caused by the accumulation of tumor-specific mutations in oncogenes and tumor suppressors that confer a selective growth advantage to cells. As a consequence of genomic instability and high levels of proliferation, many passenger mutations that do not contribute to the cancer phenotype arise alongside mutations that drive oncogenesis. While several approaches have been developed to separate driver mutations from passengers, few approaches can specifically identify activating driver mutations in oncogenes, which are more amenable to pharmacological intervention. Results We propose a new statistical method for detecting activating mutations in cancer by identifying nonrandom clusters of amino acid mutations in protein sequences. A probability model is derived using order statistics, assuming that the location of amino acid mutations on a protein follows a uniform distribution. Our statistical measure is the difference between pair-wise order statistics, which is equivalent to the size of an amino acid mutation cluster, and the probabilities are derived from exact and approximate distributions of this measure. Using data in the Catalog of Somatic Mutations in Cancer (COSMIC) database, we have demonstrated that our method detects well-known clusters of activating mutations in KRAS, BRAF, PI3K, and β-catenin. The method can also identify new cancer targets as well as gain-of-function mutations in tumor suppressors. Conclusions Our proposed method is useful for discovering activating driver mutations in cancer by identifying nonrandom clusters of somatic amino acid mutations in protein sequences.
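
    The sketch below is not the paper's analytic probability model; it is a hedged Monte Carlo version of the same idea. Under the null hypothesis mutated positions are uniform along the protein, the statistic is the smallest difference between pair-wise order statistics (the tightest window containing k+1 mutations), and its null distribution is simulated rather than derived exactly. The positions, protein length and cluster size k are illustrative.

    ```python
    # Monte Carlo sketch of nonrandom-cluster detection: the test statistic is the
    # smallest span covering k+1 ordered mutation positions, compared against its
    # distribution under a uniform (passenger-only) null. Toy data only.
    import numpy as np

    def min_window(positions, k):
        """Smallest span covering k+1 consecutive ordered mutation positions."""
        s = np.sort(positions)
        return np.min(s[k:] - s[:-k])

    rng = np.random.default_rng(0)
    protein_length = 600
    observed = np.array([12, 13, 13, 14, 210, 340, 341, 342, 400])   # toy positions
    k = 3                                                            # cluster of 4 mutations

    obs_stat = min_window(observed, k)
    null = [min_window(rng.integers(1, protein_length + 1, size=observed.size), k)
            for _ in range(10_000)]
    p_value = np.mean(np.array(null) <= obs_stat)
    print(obs_stat, p_value)
    ```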

  17. Internet of Things-Based Arduino Intelligent Monitoring and Cluster Analysis of Seasonal Variation in Physicochemical Parameters of Jungnangcheon, an Urban Stream

    Directory of Open Access Journals (Sweden)

    Byungwan Jo

    2017-03-01

    Full Text Available In the present case study, the use of an advanced, efficient and low-cost technique for monitoring an urban stream was reported. Physicochemical parameters (PcPs) of Jungnangcheon stream (Seoul, South Korea) were assessed using an Internet of Things (IoT) platform. Temperature, dissolved oxygen (DO), and pH parameters were monitored for the three summer months and the first fall month at a fixed location. Analysis was performed using clustering techniques (CTs), such as K-means clustering, agglomerative hierarchical clustering (AHC), and density-based spatial clustering of applications with noise (DBSCAN). An IoT-based Arduino sensor module (ASM) network with a 99.99% efficient communication platform was developed to allow collection of stream data with user-friendly software and hardware and facilitated data analysis by interested individuals using their smartphones. Clustering was used to formulate relationships among physicochemical parameters. K-means clustering was used to identify natural clusters using the silhouette coefficient based on cluster compactness and looseness. AHC grouped all data into two clusters as well as temperature, DO and pH into four, eight, and four clusters, respectively. DBSCAN analysis was also performed to evaluate yearly variations in physicochemical parameters. Noise points (NOISE) of temperature in 2016 were border points (ƥ), whereas in 2014 and 2015 they remained core points (ɋ), indicating a trend toward increasing stream temperature. We found the stream parameters were within the permissible limits set by the Water Quality Standards for River Water, South Korea.
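
    A minimal sketch of the clustering step, assuming random placeholder readings rather than the Jungnangcheon data: the number of K-means clusters is chosen with the silhouette coefficient and DBSCAN flags noise points. The parameter values (the range of k, eps, min_samples) are assumptions.

    ```python
    # Sketch: silhouette-guided K-means plus DBSCAN noise detection on standardised
    # temperature / DO / pH readings. All values below are random placeholders.
    import numpy as np
    from sklearn.cluster import KMeans, DBSCAN
    from sklearn.metrics import silhouette_score
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)
    X = np.column_stack([
        20 + 5 * rng.random(300),    # temperature (deg C)
        6 + 2 * rng.random(300),     # dissolved oxygen (mg/L)
        7 + rng.random(300),         # pH
    ])
    Xs = StandardScaler().fit_transform(X)

    # choose the number of natural clusters via the silhouette coefficient
    for k in range(2, 6):
        labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(Xs)
        print(k, round(silhouette_score(Xs, labels), 3))

    # density-based clustering: label -1 marks noise points
    db = DBSCAN(eps=0.5, min_samples=10).fit(Xs)
    print("noise points:", int(np.sum(db.labels_ == -1)))
    ```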

  18. Colour based fire detection method with temporal intensity variation filtration

    Science.gov (United States)

    Trambitckii, K.; Anding, K.; Musalimov, V.; Linß, G.

    2015-02-01

    The development of video and computing technologies and of computer vision makes automatic fire detection from video information possible. Within this project, different algorithms were implemented to find a more efficient way of detecting fire. This article describes a colour-based fire detection algorithm. Colour information alone, however, is not sufficient to detect fire reliably, mainly because the scene may contain many objects whose colour is similar to that of fire. A temporal intensity variation of pixels, averaged over a series of several frames, is used to separate such objects from fire. The algorithm works robustly and was realised as a computer program using the OpenCV library.
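
    A minimal sketch of the two ingredients described above, a fire-like colour mask combined with a temporal intensity variation filter, under assumed settings: the HSV colour range, the variance threshold, the eight-frame window and the input file name are illustrative, not the authors' tuned values.

    ```python
    # Sketch: colour-based fire candidates filtered by temporal intensity variation.
    # The HSV range, variance threshold, window length and file name are assumptions.
    import cv2
    import numpy as np
    from collections import deque

    cap = cv2.VideoCapture("input.avi")          # hypothetical input video
    history = deque(maxlen=8)                    # sliding window of grey frames

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        # rough fire-like colour range (red/orange/yellow hues, bright and saturated)
        colour_mask = cv2.inRange(hsv, (0, 120, 150), (35, 255, 255))

        grey = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float32)
        history.append(grey)
        if len(history) == history.maxlen:
            # flickering pixels: strong intensity variation over the last frames
            variation = np.var(np.stack(history), axis=0)
            flicker_mask = (variation > 50).astype(np.uint8) * 255
            fire_mask = cv2.bitwise_and(colour_mask, flicker_mask)
            cv2.imshow("fire candidates", fire_mask)
            if cv2.waitKey(1) & 0xFF == 27:      # Esc to quit
                break

    cap.release()
    cv2.destroyAllWindows()
    ```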

  19. Colour based fire detection method with temporal intensity variation filtration

    International Nuclear Information System (INIS)

    Trambitckii, K; Musalimov, V; Anding, K; Linß, G

    2015-01-01

    The development of video and computing technologies and of computer vision makes automatic fire detection from video information possible. Within this project, different algorithms were implemented to find a more efficient way of detecting fire. This article describes a colour-based fire detection algorithm. Colour information alone, however, is not sufficient to detect fire reliably, mainly because the scene may contain many objects whose colour is similar to that of fire. A temporal intensity variation of pixels, averaged over a series of several frames, is used to separate such objects from fire. The algorithm works robustly and was realised as a computer program using the OpenCV library.

  20. Some new mathematical methods for variational objective analysis

    Science.gov (United States)

    Wahba, Grace; Johnson, Donald R.

    1994-01-01

    Numerous results were obtained relevant to remote sensing, variational objective analysis, and data assimilation. A list of publications relevant in whole or in part is attached. The principal investigator gave many invited lectures, disseminating the results to the meteorological community as well as the statistical community. A list of invited lectures at meetings is attached, as well as a list of departmental colloquia at various universities and institutes.

  1. Annotated Computer Output for Illustrative Examples of Clustering Using the Mixture Method and Two Comparable Methods from SAS.

    Science.gov (United States)

    1987-06-26

    Mathematical Sciences Institute report AD-A184 687: annotated computer output ... introduction to the use of mixture models in clustering. Cornell University Biometrics Unit Technical Report BU-920-M and Mathematical Sciences Institute ... mixture method and two comparable methods from SAS. Cornell University Biometrics Unit Technical Report BU-921-M and Mathematical Sciences Institute.

  2. Cluster-cluster clustering

    International Nuclear Information System (INIS)

    Barnes, J.; Dekel, A.; Efstathiou, G.; Frenk, C.S.; Yale Univ., New Haven, CT; California Univ., Santa Barbara; Cambridge Univ., England; Sussex Univ., Brighton, England

    1985-01-01

    The cluster correlation function ξ_c(r) is compared with the particle correlation function ξ(r) in cosmological N-body simulations with a wide range of initial conditions. The experiments include scale-free initial conditions, pancake models with a coherence length in the initial density field, and hybrid models. Three N-body techniques and two cluster-finding algorithms are used. In scale-free models with white noise initial conditions, ξ_c and ξ are essentially identical. In scale-free models with more power on large scales, it is found that the amplitude of ξ_c increases with cluster richness; in this case the clusters give a biased estimate of the particle correlations. In the pancake and hybrid models (with n = 0 or 1), ξ_c is steeper than ξ, but the cluster correlation length exceeds that of the points by less than a factor of 2, independent of cluster richness. Thus the high amplitude of ξ_c found in studies of rich clusters of galaxies is inconsistent with white noise and pancake models and may indicate a primordial fluctuation spectrum with substantial power on large scales. 30 references

  3. Research of the Space Clustering Method for the Airport Noise Data Minings

    Directory of Open Access Journals (Sweden)

    Jiwen Xie

    2014-03-01

    Full Text Available Mining the distribution pattern and evolution of airport noise from airport noise data and the geographic information of the monitoring points is of great significance for the scientific and rational governance of the airport noise pollution problem. However, most traditional clustering methods are based either on the closeness of spatial location or on the similarity of non-spatial features, which splits the duality of spatial elements, so that the clustering result has difficulty satisfying both the closeness of spatial location and the similarity of non-spatial features. This paper therefore proposes a spatial clustering algorithm based on a dual distance. The algorithm uses as its similarity measure a distance function in which spatial features and non-spatial features are combined. The experimental results show that the proposed algorithm can discover the noise distribution pattern around the airport effectively.
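
    A hedged sketch of the dual-distance idea: the spatial distance between monitoring points and the non-spatial (noise level) distance are normalised and combined with a weight before being passed to a standard agglomerative clustering routine. The weight, the data and the use of average linkage are assumptions, not the paper's algorithm.

    ```python
    # Sketch of a dual-distance similarity measure: weighted combination of spatial
    # distance (coordinates) and non-spatial distance (noise level), clustered
    # hierarchically. The weight w and the data are placeholders.
    import numpy as np
    from scipy.spatial.distance import pdist
    from scipy.cluster.hierarchy import linkage, fcluster

    rng = np.random.default_rng(0)
    coords = rng.random((40, 2)) * 10_000      # monitoring point locations (m)
    noise = 60 + 20 * rng.random((40, 1))      # measured noise levels (dB)

    d_space = pdist(coords)
    d_attr = pdist(noise)
    d_space /= d_space.max()                   # normalise both distances to [0, 1]
    d_attr /= d_attr.max()

    w = 0.5                                    # relative weight of spatial closeness
    d_dual = w * d_space + (1 - w) * d_attr

    Z = linkage(d_dual, method='average')
    labels = fcluster(Z, t=4, criterion='maxclust')
    print(labels)
    ```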

  4. Clustered iterative stochastic ensemble method for multi-modal calibration of subsurface flow models

    KAUST Repository

    Elsheikh, Ahmed H.

    2013-05-01

    A novel multi-modal parameter estimation algorithm is introduced. Parameter estimation is an ill-posed inverse problem that might admit many different solutions. This is attributed to the limited amount of measured data used to constrain the inverse problem. The proposed multi-modal model calibration algorithm uses an iterative stochastic ensemble method (ISEM) for parameter estimation. ISEM employs an ensemble of directional derivatives within a Gauss-Newton iteration for nonlinear parameter estimation. ISEM is augmented with a clustering step based on k-means algorithm to form sub-ensembles. These sub-ensembles are used to explore different parts of the search space. Clusters are updated at regular intervals of the algorithm to allow merging of close clusters approaching the same local minima. Numerical testing demonstrates the potential of the proposed algorithm in dealing with multi-modal nonlinear parameter estimation for subsurface flow models. © 2013 Elsevier B.V.

  5. Implementation of K-Means Clustering Method for Electronic Learning Model

    Science.gov (United States)

    Latipa Sari, Herlina; Suranti Mrs., Dewi; Natalia Zulita, Leni

    2017-12-01

    The teaching and learning process at SMK Negeri 2 Bengkulu Tengah has applied an e-learning system for teachers and students. The e-learning was based on the classification of normative, productive, and adaptive subjects. SMK Negeri 2 Bengkulu Tengah consisted of 394 students and 60 teachers with 16 subjects. The e-learning database records were used in this research to observe students' activity patterns in attending class. The K-Means algorithm was used to classify students' learning activities in the e-learning system, yielding clusters of student activity and of the improvement of student ability. The implementation of the K-Means clustering method for the electronic learning model at SMK Negeri 2 Bengkulu Tengah was conducted by observing 10 student activities, namely participation of students in the classroom, submit assignment, view assignment, add discussion, view discussion, add comment, download course materials, view article, view test, and submit test. In the e-learning model, the test was conducted on 10 students and yielded 2 clusters of membership data (C1 and C2). Cluster 1, with a membership percentage of 70%, consisted of 6 members, namely 1112438 Anggi Julian, 1112439 Anis Maulita, 1112441 Ardi Febriansyah, 1112452 Berlian Sinurat, 1112460 Dewi Anugrah Anwar and 1112467 Eka Tri Oktavia Sari. Cluster 2, with a membership percentage of 30%, consisted of 4 members, namely 1112463 Dosita Afriyani, 1112471 Erda Novita, 1112474 Eskardi and 1112477 Fachrur Rozi.
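
    A minimal sketch, assuming a random placeholder activity matrix rather than the school's e-learning records, of how ten activity counts per student can be grouped into two clusters with K-Means and reported as membership percentages.

    ```python
    # Sketch of the clustering step: each student is a vector of 10 e-learning
    # activity counts and K-Means splits the students into two clusters.
    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)
    activities = ["participation", "submit assignment", "view assignment",
                  "add discussion", "view discussion", "add comment",
                  "download materials", "view article", "view test", "submit test"]
    X = rng.integers(0, 20, size=(10, len(activities)))   # 10 students x 10 activities

    km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
    for cluster in (0, 1):
        members = np.where(km.labels_ == cluster)[0]
        share = 100 * len(members) / X.shape[0]
        print(f"Cluster {cluster + 1}: {share:.0f}% membership, students {members.tolist()}")
    ```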

  6. Study of methods to increase cluster/dislocation loop densities in electrodes

    Science.gov (United States)

    Yang, Xiaoling; Miley, George H.

    2009-03-01

    Recent research has developed a technique for imbedding ultra-high density deuterium "clusters" (50 to 100 atoms per cluster) in various metals such as Palladium (Pd), Beryllium (Be) and Lithium (Li). It was found that the thermally dehydrogenated PdHx retained the clusters and exhibited up to 12 percent lower resistance compared to the virgin Pd samples [A. G. Lipson, et al., Phys. Solid State 39 (1997) 1891]. SQUID measurements showed that in Pd these condensed matter clusters approach metallic conditions, exhibiting superconducting properties [A. Lipson, et al., Phys. Rev. B 72, 212507 (2005); A. G. Lipson, et al., Phys. Lett. A 339 (2005) 414-423]. If the fabrication methods under study are successful, a large packing fraction of nuclear reactive clusters can be developed in the electrodes by electrolyte or high pressure gas loading. This will provide a much higher low-energy-nuclear-reaction (LENR) rate than achieved with earlier electrodes [Castano, C.H., et al., Proc. ICCF-9, Beijing, China, 19-24 May 2002].

  7. Coordinate-Based Clustering Method for Indoor Fingerprinting Localization in Dense Cluttered Environments

    Directory of Open Access Journals (Sweden)

    Wen Liu

    2016-12-01

    Full Text Available Indoor positioning technologies have boomed recently because of the growing commercial interest in indoor location-based services (ILBS). Due to the absence of satellite signals of the Global Navigation Satellite System (GNSS), various technologies have been proposed for indoor applications. Among them, Wi-Fi fingerprinting has been attracting much interest from researchers because of its pervasive deployment, flexibility and robustness to dense cluttered indoor environments. One challenge, however, is the deployment of Access Points (AP), which has a significant influence on the system positioning accuracy. This paper concentrates on WLAN-based fingerprinting indoor location by analyzing the AP deployment influence, and studying the advantages of coordinate-based clustering compared to traditional RSS-based clustering. A coordinate-based clustering method for indoor fingerprinting location, named Smallest-Enclosing-Circle-based (SEC), is then proposed, aiming at reducing the positioning error caused by the AP deployment and improving robustness to dense cluttered environments. All measurements are conducted in indoor public areas, such as the National Center For the Performing Arts (Test-bed 1) and the XiDan Joy City (Floors 1 and 2, Test-bed 2), and the results show that the SEC clustering algorithm can improve system positioning accuracy by about 32.7% for Test-bed 1, 71.7% for Test-bed 2 Floor 1 and 73.7% for Test-bed 2 Floor 2 compared with traditional RSS-based clustering algorithms such as K-means.

  8. Coordinate-Based Clustering Method for Indoor Fingerprinting Localization in Dense Cluttered Environments.

    Science.gov (United States)

    Liu, Wen; Fu, Xiao; Deng, Zhongliang

    2016-12-02

    Indoor positioning technologies have boomed recently because of the growing commercial interest in indoor location-based services (ILBS). Due to the absence of satellite signals of the Global Navigation Satellite System (GNSS), various technologies have been proposed for indoor applications. Among them, Wi-Fi fingerprinting has been attracting much interest from researchers because of its pervasive deployment, flexibility and robustness to dense cluttered indoor environments. One challenge, however, is the deployment of Access Points (AP), which has a significant influence on the system positioning accuracy. This paper concentrates on WLAN-based fingerprinting indoor location by analyzing the AP deployment influence, and studying the advantages of coordinate-based clustering compared to traditional RSS-based clustering. A coordinate-based clustering method for indoor fingerprinting location, named Smallest-Enclosing-Circle-based (SEC), is then proposed, aiming at reducing the positioning error caused by the AP deployment and improving robustness to dense cluttered environments. All measurements are conducted in indoor public areas, such as the National Center For the Performing Arts (as Test-bed 1) and the XiDan Joy City (Floors 1 and 2, as Test-bed 2), and the results show that the SEC clustering algorithm can improve system positioning accuracy by about 32.7% for Test-bed 1, 71.7% for Test-bed 2 Floor 1 and 73.7% for Test-bed 2 Floor 2 compared with traditional RSS-based clustering algorithms such as K-means.
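
    As a hedged geometric illustration, the sketch below computes an approximate enclosing circle of a set of access-point coordinates using Ritter's heuristic; the exact smallest-enclosing-circle construction used by the SEC method and its role in the fingerprinting pipeline are not reproduced, and the AP positions are placeholders.

    ```python
    # Approximate enclosing circle of AP coordinates (Ritter's heuristic), standing
    # in for an exact smallest-enclosing-circle computation. Coordinates are toy data.
    import numpy as np

    def approx_enclosing_circle(points):
        """Return (centre, radius) of an approximate bounding circle."""
        p = points[0]
        q = points[np.argmax(np.linalg.norm(points - p, axis=1))]   # far from p
        r = points[np.argmax(np.linalg.norm(points - q, axis=1))]   # far from q
        centre = (q + r) / 2.0
        radius = np.linalg.norm(q - r) / 2.0
        for pt in points:                       # grow the circle to cover outliers
            d = np.linalg.norm(pt - centre)
            if d > radius:
                radius = (radius + d) / 2.0
                centre += (d - radius) / d * (pt - centre)
        return centre, radius

    rng = np.random.default_rng(0)
    ap_coords = rng.random((12, 2)) * 50.0      # assumed AP positions (m)
    c, rad = approx_enclosing_circle(ap_coords)
    print(c, rad)
    ```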

  9. IP2P K-means: an efficient method for data clustering on sensor networks

    Directory of Open Access Journals (Sweden)

    Peyman Mirhadi

    2013-03-01

    Full Text Available Many wireless sensor network applications require data gathering as the most important part of their operations. There are increasing demands for innovative methods to improve energy efficiency and to prolong the network lifetime. Clustering is considered an efficient topology control method in wireless sensor networks, which can increase network scalability and lifetime. This paper presents a method, IP2P K-means (Improved P2P K-means), which uses efficient leveling in the clustering approach, reduces false labeling and restricts the necessary communication among the various sensors, which obviously saves more energy. The proposed method is examined in Network Simulator Ver. 2 (NS2), and the preliminary results show that the algorithm works effectively and relatively more precisely.

  10. Method for Determining Appropriate Clustering Criteria of Location-Sensing Data

    Directory of Open Access Journals (Sweden)

    Youngmin Lee

    2016-08-01

    Full Text Available Large quantities of location-sensing data are generated from location-based social network services. These data are provided as point properties with location coordinates acquired from a global positioning system or Wi-Fi signal. To show the point data on multi-scale map services, the data should be represented by clusters following a grid-based clustering method, in which an appropriate grid size should be determined. Currently, there are no criteria for determining the proper grid size, and the modifiable areal unit problem has been formulated for the purpose of addressing this issue. The method proposed in this paper applies a hexagonal grid to geotagged Twitter point data, considering the grid size in terms of both quantity and quality to minimize the limitations associated with the modifiable areal unit problem. Quantitatively, we reduced the original Twitter point data by an appropriate amount using Töpfer’s radical law. Qualitatively, we maintained the original distribution characteristics using Moran’s I. Finally, we determined the appropriate sizes of clusters from zoom levels 9–13 by analyzing the distribution of data on the graphs. Based on the visualized clustering results, we confirm that the original distribution pattern is effectively maintained using the proposed method.
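
    A small sketch of the quantitative step: Töpfer's radical law, n_f = n_a * sqrt(M_a / M_f), gives the number of point features to retain when moving to a coarser scale, where n_a is the source feature count and M_a, M_f are the source and derived scale denominators. The scale denominators assigned to zoom levels 9-13 and the source point count are assumptions for illustration.

    ```python
    # Töpfer's radical law applied per zoom level. The scale denominators per zoom
    # level and the source point count are illustrative assumptions.
    import math

    n_source = 120_000                       # geotagged points at the source scale
    M_source = 72_000                        # assumed source scale denominator
    zoom_scales = {9: 1_155_000, 10: 577_000, 11: 289_000, 12: 144_000, 13: 72_000}

    for zoom, M_derived in sorted(zoom_scales.items()):
        n_keep = round(n_source * math.sqrt(M_source / M_derived))
        print(f"zoom {zoom}: keep about {n_keep} points")
    ```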

  11. Smoothed Particle Inference: A Kilo-Parametric Method for X-ray Galaxy Cluster Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Peterson, John R.; Marshall, P.J.; /KIPAC, Menlo Park; Andersson, K.; /Stockholm U. /SLAC

    2005-08-05

    We propose an ambitious new method that models the intracluster medium in clusters of galaxies as a set of X-ray emitting smoothed particles of plasma. Each smoothed particle is described by a handful of parameters including temperature, location, size, and elemental abundances. Hundreds to thousands of these particles are used to construct a model cluster of galaxies, with the appropriate complexity estimated from the data quality. This model is then compared iteratively with X-ray data in the form of adaptively binned photon lists via a two-sample likelihood statistic and iterated via Markov Chain Monte Carlo. The complex cluster model is propagated through the X-ray instrument response using direct-sampling Monte Carlo methods. Using this approach the method can reproduce many of the features observed in the X-ray emission in a less assumption-dependent way than traditional analyses, and it allows for a more detailed characterization of the density, temperature, and metal abundance structure of clusters. Multi-instrument X-ray analyses and simultaneous X-ray, Sunyaev-Zeldovich (SZ), and lensing analyses are a straightforward extension of this methodology. Significant challenges still exist in understanding the degeneracy in these models and the statistical noise induced by the complexity of the models.

  12. Relativistic rise measurement by cluster counting method in time expansion chamber

    International Nuclear Information System (INIS)

    Rehak, P.; Walenta, A.H.

    1979-10-01

    A new approach to the measurement of the ionization energy loss for charged-particle identification in the region of the relativistic rise was tested experimentally. The method consists of determining, in a special drift chamber (TEC), the number of clusters of the primary ionization. The method gives almost the full relativistic rise and a narrower Landau distribution. The consequences for a practical detector are discussed.

  13. Stepwise threshold clustering: a new method for genotyping MHC loci using next-generation sequencing technology.

    Directory of Open Access Journals (Sweden)

    William E Stutz

    Full Text Available Genes of the vertebrate major histocompatibility complex (MHC) are of great interest to biologists because of their important role in immunity and disease, and their extremely high levels of genetic diversity. Next generation sequencing (NGS) technologies are quickly becoming the method of choice for high-throughput genotyping of multi-locus templates like MHC in non-model organisms. Previous approaches to genotyping MHC genes using NGS technologies suffer from two problems: (1) a "gray zone" where low frequency alleles and high frequency artifacts can be difficult to disentangle and (2) a similar sequence problem, where very similar alleles can be difficult to distinguish as two distinct alleles. Here we present a new method for genotyping MHC loci--Stepwise Threshold Clustering (STC)--that addresses these problems by taking full advantage of the increase in sequence data provided by NGS technologies. Unlike previous approaches for genotyping MHC with NGS data that attempt to classify individual sequences as alleles or artifacts, STC uses a quasi-Dirichlet clustering algorithm to cluster similar sequences at increasing levels of sequence similarity. By applying frequency- and similarity-based criteria to clusters rather than individual sequences, STC is able to successfully identify clusters of sequences that correspond to individual or similar alleles present in the genomes of individual samples. Furthermore, STC does not require duplicate runs of all samples, increasing the number of samples that can be genotyped in a given project. We show how the STC method works using a single sample library. We then apply STC to 295 threespine stickleback (Gasterosteus aculeatus) samples from four populations and show that neighboring populations differ significantly in MHC allele pools. We show that STC is a reliable, accurate, efficient, and flexible method for genotyping MHC that will be of use to biologists interested in a variety of downstream applications.

  14. Variational methods for crystalline microstructure analysis and computation

    CERN Document Server

    Dolzmann, Georg

    2003-01-01

    Phase transformations in solids typically lead to surprising mechanical behaviour with far reaching technological applications. The mathematical modeling of these transformations in the late 80s initiated a new field of research in applied mathematics, often referred to as mathematical materials science, with deep connections to the calculus of variations and the theory of partial differential equations. This volume gives a brief introduction to the essential physical background, in particular for shape memory alloys and a special class of polymers (nematic elastomers). Then the underlying mathematical concepts are presented with a strong emphasis on the importance of quasiconvex hulls of sets for experiments, analytical approaches, and numerical simulations.

  15. Quantum Monte Carlo diagonalization method as a variational calculation

    International Nuclear Information System (INIS)

    Mizusaki, Takahiro; Otsuka, Takaharu; Honma, Michio.

    1997-01-01

    A stochastic method for performing large-scale shell model calculations is presented, which utilizes the auxiliary field Monte Carlo technique and diagonalization method. This method overcomes the limitation of the conventional shell model diagonalization and can extremely widen the feasibility of shell model calculations with realistic interactions for spectroscopic study of nuclear structure. (author)

  16. Perfect Form: Variational Principles, Methods, and Applications in Elementary Physics

    International Nuclear Information System (INIS)

    Isenberg, C

    1997-01-01

    This short book is concerned with the physical applications of variational principles of the calculus. It is intended for undergraduate students who have taken some introductory lectures on the subject and have been exposed to Lagrangian and Hamiltonian mechanics. Throughout the book the author emphasizes the historical background to the subject and provides numerous problems, mainly from the fields of mechanics and optics. Some of these problems are provided with an answer, while others, regretfully, are not. It would have been an added help to the undergraduate reader if complete solutions could have been provided in an appendix. The introductory chapter is concerned with Fermat's Principle and image formation. This is followed by the derivation of the Euler - Lagrange equation. The third chapter returns to the subject of optical paths without making the link with a mechanical variational principle - that comes later. Chapters on the subjects of minimum potential energy, least action and Hamilton's principle follow. This volume provides an 'easy read' for a student keen to learn more about the subject. It is well illustrated and will make a useful addition to all undergraduate physics libraries. (book review)

  17. a Three-Step Spatial-Temporal Clustering Method for Human Activity Pattern Analysis

    Science.gov (United States)

    Huang, W.; Li, S.; Xu, S.

    2016-06-01

    How people move in cities and what they do in various locations at different times form human activity patterns. Human activity patterns play a key role in urban planning, traffic forecasting, public health and safety, emergency response, friend recommendation, and so on. Therefore, scholars from different fields, such as social science, geography, transportation, physics and computer science, have made great efforts in modelling and analysing human activity patterns or human mobility patterns. One of the essential tasks in such studies is to find the locations or places where individuals stay to perform some kind of activities before further activity pattern analysis. In the era of Big Data, the emergence of social media along with wearable devices enables human activity data to be collected more easily and efficiently. Furthermore, the dimension of the accessible human activity data has been extended from two or three (space or space-time) to four dimensions (space, time and semantics). More specifically, not only the location and time where people stay and spend time are collected, but also what people "say" in a location at a time can be obtained. The characteristics of these datasets shed new light on the analysis of human mobility, where some new methodologies should be accordingly developed to handle them. Traditional methods such as neural networks, statistics and clustering have been applied to study human activity patterns using geosocial media data. Among them, clustering methods have been widely used to analyse spatiotemporal patterns. However, to our best knowledge, few clustering algorithms are specifically developed for handling datasets that contain spatial, temporal and semantic aspects all together. In this work, we propose a three-step human activity clustering method based on space, time and semantics to fill this gap. One-year Twitter data, posted in Toronto, Canada, is used to test the clustering-based method. The results show that the

  18. A THREE-STEP SPATIAL-TEMPORAL-SEMANTIC CLUSTERING METHOD FOR HUMAN ACTIVITY PATTERN ANALYSIS

    Directory of Open Access Journals (Sweden)

    W. Huang

    2016-06-01

    Full Text Available How people move in cities and what they do in various locations at different times form human activity patterns. Human activity patterns play a key role in urban planning, traffic forecasting, public health and safety, emergency response, friend recommendation, and so on. Therefore, scholars from different fields, such as social science, geography, transportation, physics and computer science, have made great efforts in modelling and analysing human activity patterns or human mobility patterns. One of the essential tasks in such studies is to find the locations or places where individuals stay to perform some kind of activities before further activity pattern analysis. In the era of Big Data, the emergence of social media along with wearable devices enables human activity data to be collected more easily and efficiently. Furthermore, the dimension of the accessible human activity data has been extended from two or three (space or space-time) to four dimensions (space, time and semantics). More specifically, not only the location and time where people stay and spend time are collected, but also what people “say” in a location at a time can be obtained. The characteristics of these datasets shed new light on the analysis of human mobility, where some new methodologies should be accordingly developed to handle them. Traditional methods such as neural networks, statistics and clustering have been applied to study human activity patterns using geosocial media data. Among them, clustering methods have been widely used to analyse spatiotemporal patterns. However, to our best knowledge, few clustering algorithms are specifically developed for handling datasets that contain spatial, temporal and semantic aspects all together. In this work, we propose a three-step human activity clustering method based on space, time and semantics to fill this gap. One-year Twitter data, posted in Toronto, Canada, is used to test the clustering-based method. The

  19. Interpretation of biological and mechanical variations between the Lowry versus Bradford method for protein quantification

    OpenAIRE

    Tzong-Shi Lu; Szu-Yu Yiao; Kenneth Lim; Roderick V. Jensen; Li-Li Hsiao

    2010-01-01

    Background: The identification of differences in protein expression resulting from methodical variations is an essential component of the interpretation of true, biologically significant results. Aims: We used the Lowry and Bradford methods, the two most commonly used methods for protein quantification, to assess whether differential protein expressions are a result of true biological or methodical variations. Material & Methods: Differential protein expression patterns were assessed by western bl...

  20. Iterative method of the parameter variation for solution of nonlinear functional equations

    International Nuclear Information System (INIS)

    Davidenko, D.F.

    1975-01-01

    The iteration method of parameter variation is used for solving nonlinear functional equations in Banach spaces. The authors consider some methods for numerical integration of ordinary first-order differential equations and construct the relevant iteration methods of parameter variation, both one- and multifactor. They also discuss problems of mathematical substantiation of the method, study the conditions and rate of convergence, estimate the error. The paper considers the application of the method to specific functional equations

  1. An adjoint sensitivity-based data assimilation method and its comparison with existing variational methods

    Directory of Open Access Journals (Sweden)

    Yonghan Choi

    2014-01-01

    Full Text Available An adjoint sensitivity-based data assimilation (ASDA) method is proposed and applied to a heavy rainfall case over the Korean Peninsula. The heavy rainfall case, which occurred on 26 July 2006, caused torrential rainfall over the central part of the Korean Peninsula. The mesoscale convective system (MCS) related to the heavy rainfall was classified as training line/adjoining stratiform (TL/AS) type for the earlier period, and back building (BB) type for the later period. In the ASDA method, an adjoint model is run backwards with forecast-error gradient as input, and the adjoint sensitivity of the forecast error to the initial condition is scaled by an optimal scaling factor. The optimal scaling factor is determined by minimising the observational cost function of the four-dimensional variational (4D-Var) method, and the scaled sensitivity is added to the original first guess. Finally, the observations at the analysis time are assimilated using a 3D-Var method with the improved first guess. The simulated rainfall distribution is shifted northeastward compared to the observations when no radar data are assimilated or when radar data are assimilated using the 3D-Var method. The rainfall forecasts are improved when radar data are assimilated using the 4D-Var or ASDA method. Simulated atmospheric fields such as horizontal winds, temperature, and water vapour mixing ratio are also improved via the 4D-Var or ASDA method. Due to the improvement in the analysis, subsequent forecasts appropriately simulate the observed features of the TL/AS- and BB-type MCSs and the corresponding heavy rainfall. The computational cost associated with the ASDA method is significantly lower than that of the 4D-Var method.
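
    A hedged numerical sketch of the scaling step described above: for a linear observation operator the scaling factor that minimises a quadratic observational cost has a closed form, and the scaled sensitivity is then added to the first guess. The toy dimensions, the linearity assumption and the random operators are illustrative and do not correspond to the actual forecast model configuration.

    ```python
    # Sketch: choose alpha minimising J(alpha) = 0.5*(H(xb+alpha*s)-y)^T R^-1 (...)
    # for a linear H, then form the improved first guess xb + alpha*s. Toy data only.
    import numpy as np

    rng = np.random.default_rng(0)
    n, m = 50, 10
    xb = rng.normal(size=n)                    # first guess (background)
    s = rng.normal(size=n)                     # adjoint sensitivity of forecast error
    H = rng.normal(size=(m, n)) / np.sqrt(n)   # linear observation operator (assumed)
    R_inv = np.eye(m) / 0.25                   # inverse observation-error covariance
    y = H @ xb + 0.5 * rng.normal(size=m)      # observations

    d = y - H @ xb                             # innovation
    Hs = H @ s
    alpha = (Hs @ R_inv @ d) / (Hs @ R_inv @ Hs)   # minimiser of J(alpha)
    x_improved = xb + alpha * s                # improved first guess for 3D-Var
    print(alpha)
    ```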

  2. Pre-crash scenarios at road junctions: A clustering method for car crash data.

    Science.gov (United States)

    Nitsche, Philippe; Thomas, Pete; Stuetz, Rainer; Welsh, Ruth

    2017-10-01

    Given the recent advancements in autonomous driving functions, one of the main challenges is safe and efficient operation in complex traffic situations such as road junctions. There is a need for comprehensive testing, either in virtual simulation environments or on real-world test tracks. This paper presents a novel data analysis method, including the preparation, analysis and visualization of car crash data, to identify the critical pre-crash scenarios at T- and four-legged junctions as a basis for testing the safety of automated driving systems. The presented method employs k-medoids to cluster historical junction crash data into distinct partitions and then applies the association rules algorithm to each cluster to specify the driving scenarios in more detail. The dataset used consists of 1056 junction crashes in the UK, which were exported from the in-depth "On-the-Spot" database. The study resulted in thirteen crash clusters for T-junctions, and six crash clusters for crossroads. Association rules revealed common crash characteristics, which were the basis for the scenario descriptions. The results support existing findings on road junction accidents and provide benchmark situations for safety performance tests in order to reduce the possible number of parameter combinations. Copyright © 2017 Elsevier Ltd. All rights reserved.

  3. Puzzle of magnetic moments of Ni clusters revisited using quantum Monte Carlo method.

    Science.gov (United States)

    Lee, Hung-Wen; Chang, Chun-Ming; Hsing, Cheng-Rong

    2017-02-28

    The puzzle of the magnetic moments of small nickel clusters arises from the discrepancy between values predicted using density functional theory (DFT) and experimental measurements. Traditional DFT approaches underestimate the magnetic moments of nickel clusters. Two fundamental problems are associated with this puzzle, namely, calculating the exchange-correlation interaction accurately and determining the global minimum structures of the clusters. Theoretically, the two problems can be solved using quantum Monte Carlo (QMC) calculations and the ab initio random structure searching (AIRSS) method correspondingly. Therefore, we combined the fixed-moment AIRSS and QMC methods to investigate the magnetic properties of Ni_n (n = 5-9) clusters. The spin moments of the diffusion Monte Carlo (DMC) ground states are higher than those of the Perdew-Burke-Ernzerhof ground states and, in the case of Ni_8-9, two new ground-state structures have been discovered using the DMC calculations. The predicted results are closer to the experimental findings, unlike the results predicted in previous standard DFT studies.

  4. An Energy-Efficient Cluster-Based Vehicle Detection on Road Network Using Intention Numeration Method

    Directory of Open Access Journals (Sweden)

    Deepa Devasenapathy

    2015-01-01

    Full Text Available The traffic in the road network is progressively increasing to a great extent. Good knowledge of network traffic can minimize congestion using information pertaining to the road network obtained with the aid of communal callers, pavement detectors, and so on. Using these methods, low-featured information is generated with respect to the user in the road network. Although the existing schemes obtain urban traffic information, they fail to calculate the energy drain rate of nodes and to locate an equilibrium between the overhead and the quality of the routing protocol, which renders a great challenge. Thus, an energy-efficient cluster-based vehicle detection in road networks using the intention numeration method (CVDRN-IN) is developed. Initially, sensor nodes that detect a vehicle are grouped into separate clusters. Further, we approximate the strength of the node drain rate for a cluster using a polynomial regression function. In addition, the total node energy is estimated by taking the integral over the area. Finally, enhanced data aggregation is performed to reduce the amount of data transmission using a digital signature tree. The experimental performance is evaluated with the Dodgers loop sensor data set from the UCI repository, and the performance evaluation outperforms existing work on energy consumption, clustering efficiency, and node drain rate.
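
    A minimal sketch of two numerical steps mentioned above, with assumed samples and units: a polynomial regression is fitted to node drain-rate measurements and its antiderivative is evaluated to estimate the energy drained over a window (the paper takes the integral over the area; a time window is used here purely for illustration).

    ```python
    # Sketch: polynomial regression of the node drain rate and integration of the
    # fitted polynomial. Samples, degree and units are illustrative assumptions.
    import numpy as np

    rng = np.random.default_rng(0)
    t = np.linspace(0, 100, 21)                                  # time (s)
    drain_rate = 0.8 + 0.01 * t + 0.05 * rng.normal(size=t.size) # sampled rate (mW)

    coeffs = np.polyfit(t, drain_rate, deg=2)                    # polynomial regression
    rate_poly = np.poly1d(coeffs)

    cumulative = rate_poly.integ()                               # antiderivative
    energy = cumulative(t[-1]) - cumulative(t[0])                # drained energy (mJ)
    print(coeffs, energy)
    ```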

  5. An energy-efficient cluster-based vehicle detection on road network using intention numeration method.

    Science.gov (United States)

    Devasenapathy, Deepa; Kannan, Kathiravan

    2015-01-01

    The traffic in the road network is progressively increasing to a great extent. Good knowledge of network traffic can minimize congestion using information pertaining to the road network obtained with the aid of communal callers, pavement detectors, and so on. Using these methods, low-featured information is generated with respect to the user in the road network. Although the existing schemes obtain urban traffic information, they fail to calculate the energy drain rate of nodes and to locate an equilibrium between the overhead and the quality of the routing protocol, which renders a great challenge. Thus, an energy-efficient cluster-based vehicle detection in road networks using the intention numeration method (CVDRN-IN) is developed. Initially, sensor nodes that detect a vehicle are grouped into separate clusters. Further, we approximate the strength of the node drain rate for a cluster using a polynomial regression function. In addition, the total node energy is estimated by taking the integral over the area. Finally, enhanced data aggregation is performed to reduce the amount of data transmission using a digital signature tree. The experimental performance is evaluated with the Dodgers loop sensor data set from the UCI repository, and the performance evaluation outperforms existing work on energy consumption, clustering efficiency, and node drain rate.

  6. Research on the method of information system risk state estimation based on clustering particle filter

    Science.gov (United States)

    Cui, Jia; Hong, Bei; Jiang, Xuepeng; Chen, Qinghua

    2017-05-01

    With the purpose of reinforcing the correlation analysis of risk assessment threat factors, a dynamic assessment method for safety risks based on particle filtering is proposed, which takes threat analysis as its core. Based on risk assessment standards, the method selects threat indicators, applies a particle filtering algorithm to calculate the influencing weight of the threat indicators, and determines information system risk levels by combining this with state estimation theory. In order to improve the computational efficiency of the particle filtering algorithm, the k-means clustering algorithm is introduced into the particle filter. By clustering all particles, the centroid is used as the representative in the computation, so as to reduce the computational load. Empirical experience indicates that the method can reasonably embody the relations of mutual dependence and influence among risk elements. Under the circumstance of limited information, it provides a scientific basis for formulating a risk management control strategy.
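
    A hedged sketch of the acceleration idea: the particles are grouped with k-means and the expensive likelihood is evaluated only at the cluster centroids, whose values the member particles inherit when their weights are updated. The particle states, the placeholder likelihood and the number of clusters are toy assumptions.

    ```python
    # Sketch: k-means reduces 5000 particle likelihood evaluations to 20 centroid
    # evaluations; member particles reuse their centroid's result. Toy model only.
    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)
    particles = rng.normal(size=(5000, 3))             # particle states (toy)
    weights = np.full(len(particles), 1 / len(particles))

    k = 20
    km = KMeans(n_clusters=k, n_init=5, random_state=0).fit(particles)

    def likelihood(states):
        """Placeholder observation likelihood, evaluated only on the k centroids."""
        return np.exp(-0.5 * np.sum(states ** 2, axis=1))

    centroid_like = likelihood(km.cluster_centers_)    # k evaluations instead of 5000
    new_weights = weights * centroid_like[km.labels_]  # members inherit centroid value
    new_weights /= new_weights.sum()
    print(new_weights[:5])
    ```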

  7. Research on the method of information system risk state estimation based on clustering particle filter

    Directory of Open Access Journals (Sweden)

    Cui Jia

    2017-05-01

    Full Text Available With the purpose of reinforcing the correlation analysis of risk assessment threat factors, a dynamic assessment method for safety risks based on particle filtering is proposed, which takes threat analysis as its core. Based on risk assessment standards, the method selects threat indicators, applies a particle filtering algorithm to calculate the influencing weight of the threat indicators, and determines information system risk levels by combining this with state estimation theory. In order to improve the computational efficiency of the particle filtering algorithm, the k-means clustering algorithm is introduced into the particle filter. By clustering all particles, the centroid is used as the representative in the computation, so as to reduce the computational load. Empirical experience indicates that the method can reasonably embody the relations of mutual dependence and influence among risk elements. Under the circumstance of limited information, it provides a scientific basis for formulating a risk management control strategy.

  8. Water Quality Evaluation of the Yellow River Basin Based on Gray Clustering Method

    Science.gov (United States)

    Fu, X. Q.; Zou, Z. H.

    2018-03-01

    The water quality of 12 monitoring sections in the Yellow River Basin is evaluated comprehensively by the grey clustering method, based on the water quality monitoring data published by the Ministry of Environmental Protection of China in May 2016 and on the environmental quality standard for surface water. The results reflect the water quality of the Yellow River Basin objectively. Furthermore, the evaluation results are basically the same as those obtained with the fuzzy comprehensive evaluation method. The results also show that the overall water quality of the Yellow River Basin is good, which coincides with the actual situation of the basin. Overall, the grey clustering method for water quality evaluation is reasonable and feasible, and it is also convenient to calculate.
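
    A hedged sketch of grey clustering with triangular whitenisation weight functions, assuming illustrative grade centres, indicator weights and sample values rather than the official surface-water standard: for one monitoring section, the clustering coefficient of each grade is the weighted sum of whitenisation values over the indicators, and the section is assigned to the grade with the largest coefficient.

    ```python
    # Grey clustering sketch with triangular whitenisation weight functions.
    # Grade centres, spreads, indicator weights and the sample are assumptions.
    import numpy as np

    def triangular(x, centre, width):
        """Triangular whitenisation weight function centred on a grade value."""
        return max(0.0, 1.0 - abs(x - centre) / width)

    # rows: indicators (COD, NH3-N, DO); columns: grades I-V (illustrative centres)
    grade_centres = np.array([
        [2.0, 4.0, 6.0, 10.0, 15.0],
        [0.15, 0.5, 1.0, 1.5, 2.0],
        [7.5, 6.0, 5.0, 3.0, 2.0],
    ])
    widths = np.array([2.0, 0.5, 1.5])          # assumed spread per indicator
    weights = np.array([0.4, 0.3, 0.3])         # assumed indicator weights

    sample = np.array([5.2, 0.6, 6.1])          # one monitoring section (toy values)

    sigma = [sum(w * triangular(x, grade_centres[j, k], widths[j])
                 for j, (x, w) in enumerate(zip(sample, weights)))
             for k in range(grade_centres.shape[1])]
    print(sigma, "assigned grade:", int(np.argmax(sigma)) + 1)
    ```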

  9. Clustering Multiple Sclerosis Subgroups with Multifractal Methods and Self-Organizing Map Algorithm

    Science.gov (United States)

    Karaca, Yeliz; Cattani, Carlo

    Magnetic resonance imaging (MRI) is the most sensitive method to detect chronic nervous system diseases such as multiple sclerosis (MS). In this paper, multifractal methods based on Brownian motion Hölder regularity functions (polynomial, periodic (sine) and exponential) for 2D images were applied to MR brain images, aiming to easily identify distressed regions in MS patients. Using these regions, we propose an MS classification based on the multifractal method using the Self-Organizing Map (SOM) algorithm. We thus obtained a cluster analysis by identifying pixels from distressed regions in MR images through multifractal methods and by diagnosing subgroups of MS patients through artificial neural networks.

  10. Patterns of variation at Ustilago maydis virulence clusters 2A and 19A largely reflect the demographic history of its populations.

    Directory of Open Access Journals (Sweden)

    Ronny Kellner

    Full Text Available The maintenance of an intimate interaction between plant-biotrophic fungi and their hosts over evolutionary times involves strong selection and adaptive evolution of virulence-related genes. The highly specialised maize pathogen Ustilago maydis is credited with a high evolutionary capability to overcome host resistances due to its high rates of sexual recombination, large population sizes and long-distance dispersal. Unlike most studied fungus-plant interactions, the U. maydis - Zea mays pathosystem lacks a typical gene-for-gene interaction. The fungus deploys a large set of secreted virulence factors that are mostly organised in gene clusters. Their contribution to virulence has been experimentally demonstrated, but their genetic diversity within U. maydis remains poorly understood. Here, we report on the intraspecific diversity of 34 potential virulence factor genes of U. maydis. We analysed their sequence polymorphisms in 17 isolates of U. maydis from Europe, North and Latin America. We focused on gene cluster 2A, associated with virulence attenuation, cluster 19A, which is crucial for virulence, and the cluster-independent effector gene pep1. Although higher than that of four house-keeping genes, the overall level of intraspecific genetic variation of virulence clusters 2A and 19A and of pep1 is remarkably low and commensurate with the levels of 14 studied non-virulence genes. In addition, each gene is present in all studied isolates and synteny in cluster 2A is conserved. Furthermore, 7 out of 34 virulence genes contain either no polymorphisms or only synonymous substitutions among all isolates. However, the genetic variation of clusters 2A and 19A each resolves the large-scale population structure of U. maydis, indicating subpopulations with decreased gene flow. Hence, the genetic diversity of these virulence-related genes largely reflects the demographic history of U. maydis populations.

  11. Threshold selection for classification of MR brain images by clustering method

    Energy Technology Data Exchange (ETDEWEB)

    Moldovanu, Simona [Faculty of Sciences and Environment, Department of Chemistry, Physics and Environment, Dunărea de Jos University of Galaţi, 47 Domnească St., 800008, Romania, Phone: +40 236 460 780 (Romania); Dumitru Moţoc High School, 15 Milcov St., 800509, Galaţi (Romania); Obreja, Cristian; Moraru, Luminita, E-mail: luminita.moraru@ugal.ro [Faculty of Sciences and Environment, Department of Chemistry, Physics and Environment, Dunărea de Jos University of Galaţi, 47 Domnească St., 800008, Romania, Phone: +40 236 460 780 (Romania)

    2015-12-07

    Given a grey-intensity image, our method detects the optimal threshold for a suitable binarization of MR brain images. In MR brain image processing, the grey levels of pixels belonging to the object are not substantially different from the grey levels belonging to the background. Threshold optimization is an effective tool to separate objects from the background and, further, in classification applications. This paper gives a detailed investigation of the selection of thresholds. Our method does not use the well-known method for binarization. Instead, we perform a simple threshold optimization which, in turn, allows the best classification of the analyzed images into healthy and multiple sclerosis classes. The dissimilarity (or the distance between classes) has been established using the clustering method based on dendrograms. We tested our method using two classes of images, consisting of 20 T2-weighted and 20 proton density (PD)-weighted scans from two healthy subjects and from two patients with multiple sclerosis. For each image and for each threshold, the number of white pixels (or the area of white objects in the binary image) has been determined. These pixel numbers represent the objects in the clustering operation. The following optimum threshold values were obtained: T = 80 for PD images and T = 30 for T2w images. Each threshold clearly separates the clusters belonging to the studied groups, healthy subjects and patients with multiple sclerosis.
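
    A minimal sketch of the pipeline, using random placeholder images instead of MR scans: each image is binarised at a candidate threshold, the white-pixel count serves as the clustering feature, and a dendrogram built from those counts shows how the groups separate. The threshold value is taken from the abstract's PD case; everything else is assumed.

    ```python
    # Sketch: threshold each image, count white pixels, and build a dendrogram from
    # the counts. Images are random placeholders, not MR scans.
    import numpy as np
    from scipy.cluster.hierarchy import linkage, dendrogram

    rng = np.random.default_rng(0)
    images = rng.integers(0, 256, size=(8, 128, 128))   # 8 grey-level "images"

    threshold = 80                                       # candidate threshold (PD case)
    white_counts = (images > threshold).sum(axis=(1, 2)).astype(float)

    Z = linkage(white_counts.reshape(-1, 1), method='average')
    tree = dendrogram(Z, no_plot=True)                   # cluster structure, no plotting
    print(white_counts, tree['leaves'])
    ```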

  12. Self-consistent field variational cellular method as applied to the band structure calculation of sodium

    International Nuclear Information System (INIS)

    Lino, A.T.; Takahashi, E.K.; Leite, J.R.; Ferraz, A.C.

    1988-01-01

    The band structure of metallic sodium is calculated, using for the first time the self-consistent field variational cellular method. In order to implement the self-consistency in the variational cellular theory, the crystal electronic charge density was calculated within the muffin-tin approximation. The comparison between our results and those derived from other calculations leads to the conclusion that the proposed self-consistent version of the variational cellular method is fast and accurate. (author) [pt

  13. Laplace transform overcoming principle drawbacks in application of the variational iteration method to fractional heat equations

    Directory of Open Access Journals (Sweden)

    Wu Guo-Cheng

    2012-01-01

    Full Text Available This note presents a Laplace transform approach in the determination of the Lagrange multiplier when the variational iteration method is applied to time fractional heat diffusion equation. The presented approach is more straightforward and allows some simplification in application of the variational iteration method to fractional differential equations, thus improving the convergence of the successive iterations.

  14. The Serratia gene cluster encoding biosynthesis of the red antibiotic, prodigiosin, shows species- and strain-dependent genome context variation

    DEFF Research Database (Denmark)

    Harris, Abigail K P; Williamson, Neil R; Slater, Holly

    2004-01-01

    The prodigiosin biosynthesis gene cluster (pig cluster) from two strains of Serratia (S. marcescens ATCC 274 and Serratia sp. ATCC 39006) has been cloned, sequenced and expressed in heterologous hosts. Sequence analysis of the respective pig clusters revealed 14 ORFs in S. marcescens ATCC 274...... and 15 ORFs in Serratia sp. ATCC 39006. In each Serratia species, predicted gene products showed similarity to polyketide synthases (PKSs), non-ribosomal peptide synthases (NRPSs) and the Red proteins of Streptomyces coelicolor A3(2). Comparisons between the two Serratia pig clusters and the red cluster...... from Str. coelicolor A3(2) revealed some important differences. A modified scheme for the biosynthesis of prodigiosin, based on the pathway recently suggested for the synthesis of undecylprodigiosin, is proposed. The distribution of the pig cluster within several Serratia sp. isolates is demonstrated...

  15. Clustering in Ethiopia

    African Journals Online (AJOL)

    Background: The importance of local variations in patterns of health and disease is increasingly recognised, but, particularly in the case of tropical infections, available methods and resources for characterising disease clusters in time and space are limited. Whilst the Global Positioning System (GPS) allows accurate and ...

  16. Cluster detection methods applied to the Upper Cape Cod cancer data

    Directory of Open Access Journals (Sweden)

    Ozonoff David

    2005-09-01

    Full Text Available Abstract Background A variety of statistical methods have been suggested to assess the degree and/or the location of spatial clustering of disease cases. However, there is relatively little in the literature devoted to comparison and critique of different methods. Most of the available comparative studies rely on simulated data rather than real data sets. Methods We have chosen three methods currently used for examining spatial disease patterns: the M-statistic of Bonetti and Pagano; the Generalized Additive Model (GAM) method as applied by Webster; and Kulldorff's spatial scan statistic. We apply these statistics to analyze breast cancer data from the Upper Cape Cancer Incidence Study using three different latency assumptions. Results The three different latency assumptions produced three different spatial patterns of cases and controls. For 20 year latency, all three methods generally concur. However, for 15 year latency and no latency assumptions, the methods produce different results when testing for global clustering. Conclusion The comparative analyses of real data sets by different statistical methods provide insight into directions for further research. We suggest a research program designed around examining real data sets to guide focused investigation of relevant features using simulated data, for the purpose of understanding how to interpret statistical methods applied to epidemiological data with a spatial component.

  17. Applying Clustering Methods in Drawing Maps of Science: Case Study of the Map For Urban Management Science

    Directory of Open Access Journals (Sweden)

    Mohammad Abuei Ardakan

    2010-04-01

    Full Text Available The present paper offers a basic introduction to data clustering and demonstrates the application of clustering methods in drawing maps of science. All approaches towards the classification and clustering of information are briefly discussed. Their application to the process of visualization of conceptual information and the drawing of science maps is illustrated by reviewing similar research in this field. By implementing an agglomerative hierarchical clustering algorithm based on the complete-link method, the map for urban management science as an emerging, interdisciplinary scientific field is analyzed and reviewed.
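
    A brief sketch of complete-link agglomerative clustering as it might be applied to a document collection for a science map; the document-term matrix, the cosine dissimilarity and the requested number of clusters are assumptions, not the paper's data or software.

    ```python
    # Complete-link agglomerative clustering of documents for a map-of-science-style
    # grouping. The document-term matrix is a random placeholder.
    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster
    from scipy.spatial.distance import pdist

    rng = np.random.default_rng(0)
    doc_term = rng.poisson(1.0, size=(30, 200)).astype(float)   # 30 papers x 200 terms

    d = pdist(doc_term, metric='cosine')        # dissimilarity between papers
    Z = linkage(d, method='complete')           # complete-link agglomeration
    clusters = fcluster(Z, t=5, criterion='maxclust')
    print(clusters)
    ```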

  18. Variational, projection methods and Pade approximants in scattering theory

    International Nuclear Information System (INIS)

    Turchetti, G.

    1980-12-01

    Several aspects of scattering theory are discussed in a perturbative scheme, in which the Pade approximant method plays an important role. Soliton solutions are also discussed within the same scheme. (L.C.) [pt

  19. Fourth-order perturbative extension of the single-double excitation coupled-cluster method

    International Nuclear Information System (INIS)

    Derevianko, Andrei; Emmons, Erik D.

    2002-01-01

    Fourth-order many-body corrections to matrix elements for atoms with one valence electron are derived. The obtained diagrams are classified using coupled-cluster-inspired separation into contributions from n-particle excitations from the lowest-order wave function. The complete set of fourth-order diagrams involves only connected single, double, and triple excitations and disconnected quadruple excitations. Approximately half of the fourth-order diagrams are not accounted for by the popular coupled-cluster method truncated at single and double excitations (CCSD). Explicit formulas are tabulated for the entire set of fourth-order diagrams missed by the CCSD method and its linearized version, i.e., contributions from connected triple and disconnected quadruple excitations. A partial summation scheme of the derived fourth-order contributions to all orders of perturbation theory is proposed

  20. Cluster models of light nuclei and the method of hyperspherical harmonics: Successes and challenges

    International Nuclear Information System (INIS)

    Danilin, B. V.; Shul'gina, N. B.; Ershov, S. N.; Vaagen, J. S.

    2009-01-01

    The hyperspherical-harmonics method for investigating the lightest nuclei having a three-cluster structure is discussed together with recent experiments. Properties of bound states and methods to explore the three-body continuum are presented. The challenges created by large neutron excess and halo phenomena are highlighted. Astrophysical aspects of the ⁷Li + n → ⁸Li + γ reaction and the solar-boron-neutrinos problem are analyzed. The three-cluster structure of highly excited states in ⁸Be is shown to be responsible for extreme isospin mixing. Progress in studies of ⁶He- and ¹¹Li-induced inclusive and exclusive nuclear reactions is demonstrated, providing information on the nature of continuum structures of Borromean nuclei.

  1. Application of Different Extraction Methods for Investigation of Nonmetallic Inclusions and Clusters in Steels and Alloys

    Directory of Open Access Journals (Sweden)

    Diana Janis

    2014-01-01

    Full Text Available The characterization of nonmetallic inclusions is of importance for the production of clean steel in order to improve the mechanical properties. In this respect, a three-dimensional (3D investigation is considered to be useful for an accurate evaluation of size, number, morphology of inclusions, and elementary distribution in each inclusion particle. In this study, the application of various extraction methods (chemical extraction/etching by acid or halogen-alcohol solutions, electrolysis, sputtering with glow discharge, and so on for 3D estimation of nonmetallic Al2O3 inclusions and clusters in high-alloyed steels was examined and discussed using an Fe-10 mass% Ni alloy and an 18/8 stainless steel deoxidized with Al. Advantages and limitations of different extraction methods for 3D investigations of inclusions and clusters were discussed in comparison to conventional two-dimensional (2D observations on a polished cross section of metal samples.

  2. The IMACS Cluster Building Survey. I. Description of the Survey and Analysis Methods

    Science.gov (United States)

    Oemler Jr., Augustus; Dressler, Alan; Gladders, Michael G.; Rigby, Jane R.; Bai, Lei; Kelson, Daniel; Villanueva, Edward; Fritz, Jacopo; Rieke, George; Poggianti, Bianca M.

    2013-01-01

    The IMACS Cluster Building Survey uses the wide field spectroscopic capabilities of the IMACS spectrograph on the 6.5 m Baade Telescope to survey the large-scale environment surrounding rich intermediate-redshift clusters of galaxies. The goal is to understand the processes which may be transforming star-forming field galaxies into quiescent cluster members as groups and individual galaxies fall into the cluster from the surrounding supercluster. This first paper describes the survey: the data taking and reduction methods. We provide new calibrations of star formation rates (SFRs) derived from optical and infrared spectroscopy and photometry. We demonstrate that there is a tight relation between the observed SFR per unit B luminosity, and the ratio of the extinctions of the stellar continuum and the optical emission lines. With this, we can obtain accurate extinction-corrected colors of galaxies. Using these colors as well as other spectral measures, we determine new criteria for the existence of ongoing and recent starbursts in galaxies.

  3. THE IMACS CLUSTER BUILDING SURVEY. I. DESCRIPTION OF THE SURVEY AND ANALYSIS METHODS

    Energy Technology Data Exchange (ETDEWEB)

    Oemler, Augustus Jr.; Dressler, Alan; Kelson, Daniel; Villanueva, Edward [Observatories of the Carnegie Institution for Science, 813 Santa Barbara St., Pasadena, CA 91101-1292 (United States); Gladders, Michael G. [Department of Astronomy and Astrophysics, University of Chicago, Chicago, IL 60637 (United States); Rigby, Jane R. [Observational Cosmology Lab, NASA Goddard Space Flight Center, Greenbelt, MD 20771 (United States); Bai Lei [Department of Astronomy and Astrophysics, University of Toronto, 50 St. George Street, Toronto, ON M5S 3H4 (Canada); Fritz, Jacopo [Sterrenkundig Observatorium, Universiteit Gent, Krijgslaan 281 S9, B-9000 Gent (Belgium); Rieke, George [Steward Observatory, University of Arizona, Tucson, AZ 8572 (United States); Poggianti, Bianca M.; Vulcani, Benedetta, E-mail: oemler@obs.carnegiescience.edu [INAF-Osservatorio Astronomico di Padova, Vicolo dell' Osservatorio 5, I-35122 Padova (Italy)

    2013-06-10

    The IMACS Cluster Building Survey uses the wide field spectroscopic capabilities of the IMACS spectrograph on the 6.5 m Baade Telescope to survey the large-scale environment surrounding rich intermediate-redshift clusters of galaxies. The goal is to understand the processes which may be transforming star-forming field galaxies into quiescent cluster members as groups and individual galaxies fall into the cluster from the surrounding supercluster. This first paper describes the survey: the data taking and reduction methods. We provide new calibrations of star formation rates (SFRs) derived from optical and infrared spectroscopy and photometry. We demonstrate that there is a tight relation between the observed SFR per unit B luminosity, and the ratio of the extinctions of the stellar continuum and the optical emission lines. With this, we can obtain accurate extinction-corrected colors of galaxies. Using these colors as well as other spectral measures, we determine new criteria for the existence of ongoing and recent starbursts in galaxies.

  4. Comparison of cluster-based and source-attribution methods for estimating transmission risk using large HIV sequence databases.

    Science.gov (United States)

    Le Vu, Stéphane; Ratmann, Oliver; Delpech, Valerie; Brown, Alison E; Gill, O Noel; Tostevin, Anna; Fraser, Christophe; Volz, Erik M

    2018-06-01

    Phylogenetic clustering of HIV sequences from a random sample of patients can reveal epidemiological transmission patterns, but interpretation is hampered by limited theoretical support, and the statistical properties of clustering analysis remain poorly understood. Alternatively, source attribution methods allow fitting of HIV transmission models and thereby quantify aspects of disease transmission. A simulation study was conducted to assess error rates of clustering methods for detecting transmission risk factors. We modeled HIV epidemics among men having sex with men and generated phylogenies comparable to those that can be obtained from HIV surveillance data in the UK. Clustering and source attribution approaches were applied to evaluate their ability to identify patient attributes as transmission risk factors. We find that commonly used methods show a misleading association between cluster size or odds of clustering and covariates that are correlated with time since infection, regardless of their influence on transmission. Clustering methods usually have higher error rates and lower sensitivity than the source attribution method for identifying transmission risk factors, but neither method provides robust estimates of transmission risk ratios. The source attribution method can alleviate drawbacks of phylogenetic clustering, but formal population genetic modeling may be required to estimate quantitative transmission risk factors. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.

  5. Mathematical methods in physics distributions, Hilbert space operators, variational methods, and applications in quantum physics

    CERN Document Server

    Blanchard, Philippe

    2015-01-01

    The second edition of this textbook presents the basic mathematical knowledge and skills that are needed for courses on modern theoretical physics, such as those on quantum mechanics, classical and quantum field theory, and related areas.  The authors stress that learning mathematical physics is not a passive process and include numerous detailed proofs, examples, and over 200 exercises, as well as hints linking mathematical concepts and results to the relevant physical concepts and theories.  All of the material from the first edition has been updated, and five new chapters have been added on such topics as distributions, Hilbert space operators, and variational methods.   The text is divided into three main parts. Part I is a brief introduction to distribution theory, in which elements from the theories of ultradistributions and hyperfunctions are considered in addition to some deeper results for Schwartz distributions, thus providing a comprehensive introduction to the theory of generalized functions. P...

  6. Thermodynamics of non-ideal QGP using Mayers cluster expansion method

    International Nuclear Information System (INIS)

    Prasanth, J.P; Simji, P.; Bannur, Vishnu M.

    2013-01-01

    The quark-gluon plasma (QGP) is the state in which individual hadrons dissolve into a system of free (or almost free) quarks and gluons in a strongly compressed system at high temperature. The present paper aims to calculate the critical temperature at which a non-ideal three-quark plasma condenses into droplets of three quarks (i.e., into a liquid of baryons) using Mayer's cluster expansion method

  7. GLOBAL CLASSIFICATION OF DERMATITIS DISEASE WITH K-MEANS CLUSTERING IMAGE SEGMENTATION METHODS

    OpenAIRE

    Prafulla N. Aerkewar & Dr. G. H. Agrawal

    2018-01-01

    The objective of this paper is to present a global technique for the classification of different dermatitis disease lesions using k-Means clustering image segmentation. The word global is used in the sense that all dermatitis diseases presenting skin lesions on the body are classified into four categories using k-means image segmentation and the nntool of Matlab. Through the image segmentation technique and nntool one can analyze and study the segmentation properties of skin lesions occurring in...
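
    The following minimal sketch shows only the k-means colour-segmentation step described above, using scikit-learn on a synthetic image; the number of clusters, the synthetic data and the selected segment are assumptions, and the neural-network (nntool) classification stage is not reproduced.

```python
# Hedged sketch of k-means colour segmentation of a skin image.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
image = rng.random((64, 64, 3))          # stand-in for an RGB photograph
pixels = image.reshape(-1, 3)            # one row per pixel

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(pixels)
segmented = kmeans.labels_.reshape(image.shape[:2])

# Each label now marks a region (e.g. background, normal skin, lesion);
# region masks can be extracted as boolean arrays for further analysis.
lesion_mask = segmented == segmented[32, 32]
print("pixels in the selected segment:", int(lesion_mask.sum()))
```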

  8. A hybrid method based on a new clustering technique and multilayer perceptron neural networks for hourly solar radiation forecasting

    International Nuclear Information System (INIS)

    Azimi, R.; Ghayekhloo, M.; Ghofrani, M.

    2016-01-01

    Highlights: • A novel clustering approach is proposed based on the data transformation approach. • A novel cluster selection method based on correlation analysis is presented. • The proposed hybrid clustering approach leads to deep learning for MLPNN. • A hybrid forecasting method is developed to predict solar radiations. • The evaluation results show superior performance of the proposed forecasting model. - Abstract: Accurate forecasting of renewable energy sources plays a key role in their integration into the grid. This paper proposes a hybrid solar irradiance forecasting framework using a Transformation based K-means algorithm, named TB K-means, to increase the forecast accuracy. The proposed clustering method is a combination of a new initialization technique, K-means algorithm and a new gradual data transformation approach. Unlike the other K-means based clustering methods which are not capable of providing a fixed and definitive answer due to the selection of different cluster centroids for each run, the proposed clustering provides constant results for different runs of the algorithm. The proposed clustering is combined with a time-series analysis, a novel cluster selection algorithm and a multilayer perceptron neural network (MLPNN) to develop the hybrid solar radiation forecasting method for different time horizons (1 h ahead, 2 h ahead, …, 48 h ahead). The performance of the proposed TB K-means clustering is evaluated using several different datasets and compared with different variants of K-means algorithm. Solar datasets with different solar radiation characteristics are also used to determine the accuracy and processing speed of the developed forecasting method with the proposed TB K-means and other clustering techniques. The results of direct comparison with other well-established forecasting models demonstrate the superior performance of the proposed hybrid forecasting method. Furthermore, a comparative analysis with the benchmark solar
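
    A hedged sketch of the cluster-then-forecast idea follows: plain k-means stands in for the paper's TB K-means, an MLP is trained per cluster, and the synthetic irradiance profiles, the cluster count and the one-hour-ahead target are assumptions made purely for illustration.

```python
# Rough sketch: cluster historical solar profiles, then train one MLP per cluster.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
days, horizon = 200, 24
profiles = np.clip(
    np.sin(np.linspace(0, np.pi, horizon)) * rng.uniform(0.3, 1.0, (days, 1))
    + 0.05 * rng.standard_normal((days, horizon)), 0, None)

X, y = profiles[:, :-1], profiles[:, -1]          # predict the last hour from the rest
clusters = KMeans(n_clusters=3, n_init=10, random_state=1).fit(X)

models = {}
for c in range(3):
    idx = clusters.labels_ == c
    models[c] = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000,
                             random_state=1).fit(X[idx], y[idx])

new_day = X[:1]                                   # pretend this is an unseen day
c = clusters.predict(new_day)[0]                  # pick the matching cluster model
print("forecast for next hour:", models[c].predict(new_day)[0])
```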

  9. Dynamic Fuzzy Clustering Method for Decision Support in Electricity Markets Negotiation

    Directory of Open Access Journals (Sweden)

    Ricardo FAIA

    2016-10-01

    Full Text Available Artificial Intelligence (AI) methods contribute to the construction of systems where there is a need to automate tasks. They are typically used for problems that have a large response time, or when a mathematical method cannot be used to solve the problem. However, the application of AI brings added complexity to the development of such applications. AI has been frequently applied in the power systems field, namely in Electricity Markets (EM). In this area, AI applications are essentially used to forecast / estimate the prices of electricity or to search for the best opportunity to sell the product. This paper proposes a clustering methodology that is combined with fuzzy logic in order to perform the estimation of EM prices. The proposed method is based on the application of a clustering methodology that groups historic energy contracts according to the similarity of their prices. The optimal number of groups is automatically calculated taking into account the preference for the balance between the estimation error and the number of groups. The centroids of each cluster are used to define a dynamic fuzzy variable that approximates the tendency of the contracts' history. The resulting fuzzy variable allows estimating expected prices for contracts instantaneously and approximating missing values in the historic contracts.

  10. Using hierarchical clustering methods to classify motor activities of COPD patients from wearable sensor data

    Directory of Open Access Journals (Sweden)

    Reilly John J

    2005-06-01

    Full Text Available Abstract Background Advances in miniature sensor technology have led to the development of wearable systems that allow one to monitor motor activities in the field. A variety of classifiers have been proposed in the past, but little has been done toward developing systematic approaches to assess the feasibility of discriminating the motor tasks of interest and to guide the choice of the classifier architecture. Methods A technique is introduced to address this problem according to a hierarchical framework and its use is demonstrated for the application of detecting motor activities in patients with chronic obstructive pulmonary disease (COPD undergoing pulmonary rehabilitation. Accelerometers were used to collect data for 10 different classes of activity. Features were extracted to capture essential properties of the data set and reduce the dimensionality of the problem at hand. Cluster measures were utilized to find natural groupings in the data set and then construct a hierarchy of the relationships between clusters to guide the process of merging clusters that are too similar to distinguish reliably. It provides a means to assess whether the benefits of merging for performance of a classifier outweigh the loss of resolution incurred through merging. Results Analysis of the COPD data set demonstrated that motor tasks related to ambulation can be reliably discriminated from tasks performed in a seated position with the legs in motion or stationary using two features derived from one accelerometer. Classifying motor tasks within the category of activities related to ambulation requires more advanced techniques. While in certain cases all the tasks could be accurately classified, in others merging clusters associated with different motor tasks was necessary. When merging clusters, it was found that the proposed method could lead to more than 12% improvement in classifier accuracy while retaining resolution of 4 tasks. Conclusion Hierarchical
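
    The sketch below illustrates, under stated assumptions, the idea of hierarchically clustering accelerometer-derived features and inspecting how tasks merge as the number of clusters is reduced; the simulated features, the average-linkage choice and the four notional tasks are not taken from the study.

```python
# Illustrative sketch of hierarchical clustering of accelerometer features.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(2)
# Two features (e.g. signal mean and variance) per window, four notional tasks,
# two of which (the "ambulation" tasks) are deliberately similar.
task_centres = np.array([[0.1, 0.02], [0.9, 0.30], [0.85, 0.28], [0.4, 0.10]])
features = np.vstack([c + 0.03 * rng.standard_normal((25, 2)) for c in task_centres])

Z = linkage(features, method="average")           # build the cluster hierarchy
for k in (4, 3, 2):
    labels = fcluster(Z, t=k, criterion="maxclust")
    print(f"{k} clusters -> sizes {np.bincount(labels)[1:]}")
# Tasks whose windows always fall in the same cluster are candidates for
# merging before a classifier is trained, trading resolution for accuracy.
```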

  11. A variational Bayesian method to inverse problems with impulsive noise

    KAUST Repository

    Jin, Bangti

    2012-01-01

    We propose a novel numerical method for solving inverse problems subject to impulsive noises which possibly contain a large number of outliers. The approach is of Bayesian type, and it exploits a heavy-tailed t distribution for data noise to achieve

  12. Comparison between the Variational Iteration Method and the Homotopy Perturbation Method for the Sturm-Liouville Differential Equation

    Directory of Open Access Journals (Sweden)

    R. Darzi

    2010-01-01

    Full Text Available We applied the variational iteration method and the homotopy perturbation method to solve Sturm-Liouville eigenvalue and boundary value problems. The main advantage of these methods is the flexibility to give approximate and exact solutions to both linear and nonlinear problems without linearization or discretization. The results show that both methods are simple and effective.

  13. Comparison between the Variational Iteration Method and the Homotopy Perturbation Method for the Sturm-Liouville Differential Equation

    OpenAIRE

    Darzi R; Neamaty A

    2010-01-01

    We applied the variational iteration method and the homotopy perturbation method to solve Sturm-Liouville eigenvalue and boundary value problems. The main advantage of these methods is the flexibility to give approximate and exact solutions to both linear and nonlinear problems without linearization or discretization. The results show that both methods are simple and effective.

  14. A study of several CAD methods for classification of clustered microcalcifications

    Science.gov (United States)

    Wei, Liyang; Yang, Yongyi; Nishikawa, Robert M.; Jiang, Yulei

    2005-04-01

    In this paper we investigate several state-of-the-art machine-learning methods for automated classification of clustered microcalcifications (MCs), aimed to assisting radiologists for more accurate diagnosis of breast cancer in a computer-aided diagnosis (CADx) scheme. The methods we consider include: support vector machine (SVM), kernel Fisher discriminant (KFD), and committee machines (ensemble averaging and AdaBoost), most of which have been developed recently in statistical learning theory. We formulate differentiation of malignant from benign MCs as a supervised learning problem, and apply these learning methods to develop the classification algorithms. As input, these methods use image features automatically extracted from clustered MCs. We test these methods using a database of 697 clinical mammograms from 386 cases, which include a wide spectrum of difficult-to-classify cases. We use receiver operating characteristic (ROC) analysis to evaluate and compare the classification performance by the different methods. In addition, we also investigate how to combine information from multiple-view mammograms of the same case so that the best decision can be made by a classifier. In our experiments, the kernel-based methods (i.e., SVM, KFD) yield the best performance, significantly outperforming a well-established CADx approach based on neural network learning.

  15. Environmental data processing by clustering methods for energy forecast and planning

    Energy Technology Data Exchange (ETDEWEB)

    Di Piazza, Annalisa [Dipartimento di Ingegneria Idraulica e Applicazioni Ambientali (DIIAA), viale delle Scienze, Universita degli Studi di Palermo, 90128 Palermo (Italy); Di Piazza, Maria Carmela; Ragusa, Antonella; Vitale, Gianpaolo [Consiglio Nazionale delle Ricerche Istituto di Studi sui Sistemi Intelligenti per l' Automazione (ISSIA - CNR), sezione di Palermo, Via Dante, 12, 90141 Palermo (Italy)

    2011-03-15

    This paper presents a statistical approach based on the k-means clustering technique to manage environmental sampled data in order to evaluate and forecast the energy deliverable by different renewable sources at a given site. In particular, wind speed and solar irradiance sampled data are studied in association with the energy capability of a wind generator and a photovoltaic (PV) plant, respectively. The proposed method allows the sub-sets of useful data, describing the energy capability of a site, to be extracted from a set of experimental observations belonging to the considered site. The data collection is performed in Sicily, in the south of Italy, as a case study. As far as wind generation is concerned, a suitable generator, matching the wind profile of the studied sites, has been selected for the evaluation of the producible energy. With respect to photovoltaic generation, the irradiance data have been taken from the acquisition system of an actual installation. It is demonstrated, in both cases, that the use of the k-means clustering method allows data that do not contribute to the produced energy to be grouped into a cluster; moreover, it simplifies the problem of energy assessment, since it makes it possible to obtain the desired information on energy capability from a reduced number of experimental samples. In the studied cases, the proposed method permitted a reduction of 50% of the data with a maximum discrepancy of 10% in energy estimation compared to the classical statistical approach. Therefore, the adopted k-means clustering technique represents a useful tool for appropriate and less demanding energy forecasting and planning in distributed generation systems. (author)
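
    As a minimal sketch of the data-reduction idea, the code below clusters synthetic wind-speed samples with k-means and discards the cluster that falls below an assumed 3 m/s turbine cut-in speed; the Weibull-distributed samples and the choice of three clusters are illustrative assumptions.

```python
# Sketch: group wind-speed samples and drop the non-productive cluster.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)
wind_speed = rng.weibull(2.0, size=5000) * 6.0         # synthetic samples, m/s

km = KMeans(n_clusters=3, n_init=10, random_state=3).fit(wind_speed.reshape(-1, 1))
centres = km.cluster_centers_.ravel()

cut_in = 3.0                                            # m/s, assumed turbine cut-in
productive = np.isin(km.labels_, np.where(centres >= cut_in)[0])
print(f"kept {productive.mean():.0%} of samples for the energy estimate")
```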

  16. VARIATIONS OF THE ENERGY METHOD FOR STUDYING CONSTRUCTION STABILITY

    Directory of Open Access Journals (Sweden)

    A. M. Dibirgadzhiev

    2017-01-01

    Full Text Available Objectives. The aim of the work is to find the most rational form of expression of the potential energy of a nonlinear system, with the subsequent use of algebraic means and geometric images of catastrophe theory for studying the behaviour of a construction under load. Various forms of stability criteria for the equilibrium states of constructions are investigated. Some aspects of using various forms of expression of the system's total energy are considered, oriented towards the subsequent use of catastrophe theory methods for solving the nonlinear problems of construction calculation associated with discontinuous phenomena. Methods. According to the form of the potential energy expression, the mathematical description of the problem being solved is linked to a specific catastrophe of a universal character from the list of catastrophes. After this, the behaviour of the system can be predicted on the basis of the fundamental propositions formulated in catastrophe theory, without integrating the corresponding system of high-order nonlinear partial differential equations to which the solution of such problems is reduced. Results. The result is presented in the form of uniform geometric images containing all the necessary qualitative and quantitative information about the deformation of whole classes of constructions under load, for a wide range of values of the external (control) and internal (behavioural) parameters. Conclusion. Methods based on catastrophe theory are an effective mathematical tool for solving nonlinear boundary-value problems with parameters associated with discontinuous phenomena, which are poorly analysable by conventional methods. However, they have not yet received due attention from researchers, especially in the field of stability calculations, which remains a complex, relevant and attractive problem within structural mechanics. To solve a concrete nonlinear boundary value problem for calculating

  17. Iterative and variational homogenization methods for filled elastomers

    Science.gov (United States)

    Goudarzi, Taha

    Elastomeric composites have increasingly proved invaluable in commercial technological applications due to their unique mechanical properties, especially their ability to undergo large reversible deformation in response to a variety of stimuli (e.g., mechanical forces, electric and magnetic fields, changes in temperature). Modern advances in organic materials science have revealed that elastomeric composites hold also tremendous potential to enable new high-end technologies, especially as the next generation of sensors and actuators featured by their low cost together with their biocompatibility, and processability into arbitrary shapes. This potential calls for an in-depth investigation of the macroscopic mechanical/physical behavior of elastomeric composites directly in terms of their microscopic behavior with the objective of creating the knowledge base needed to guide their bottom-up design. The purpose of this thesis is to generate a mathematical framework to describe, explain, and predict the macroscopic nonlinear elastic behavior of filled elastomers, arguably the most prominent class of elastomeric composites, directly in terms of the behavior of their constituents --- i.e., the elastomeric matrix and the filler particles --- and their microstructure --- i.e., the content, size, shape, and spatial distribution of the filler particles. This will be accomplished via a combination of novel iterative and variational homogenization techniques capable of accounting for interphasial phenomena and finite deformations. Exact and approximate analytical solutions for the fundamental nonlinear elastic response of dilute suspensions of rigid spherical particles (either firmly bonded or bonded through finite size interphases) in Gaussian rubber are first generated. These results are in turn utilized to construct approximate solutions for the nonlinear elastic response of non-Gaussian elastomers filled with a random distribution of rigid particles (again, either firmly

  18. Measurement of time series variation of thermal diffusivity of magnetic fluid under magnetic field by forced Rayleigh scattering method

    Energy Technology Data Exchange (ETDEWEB)

    Motozawa, Masaaki, E-mail: motozawa.masaaki@shizuoka.ac.jp [Shizuoka University, 3-5-1 Johoku, Naka-ku, Hamamatsu-shi, Shizuoka 432-8561 (Japan); Muraoka, Takashi [Shizuoka University, 3-5-1 Johoku, Naka-ku, Hamamatsu-shi, Shizuoka 432-8561 (Japan); Motosuke, Masahiro, E-mail: mot@rs.tus.ac.jp [Tokyo University of Science, 6-3-1 Niijuku, Katsushika-ku, Tokyo 125-8585 (Japan); Fukuta, Mitsuhiro, E-mail: fukuta.mitsuhiro@shizuoka.ac.jp [Shizuoka University, 3-5-1 Johoku, Naka-ku, Hamamatsu-shi, Shizuoka 432-8561 (Japan)

    2017-04-15

    It can be expected that the thermal diffusivity of a magnetic fluid varies with time after a magnetic field is applied, because of the growth of the inner structure of the magnetic fluid, such as chain-like clusters. In this study, the time series variation of the thermal diffusivity of a magnetic fluid caused by applying a magnetic field was investigated experimentally. For the measurement of the time series variation of thermal diffusivity, we applied the forced Rayleigh scattering method (FRSM), which has high temporal and high spatial resolution. We set up an optical system for the FRSM and measured the thermal diffusivity. A magnetic field of 70 mT was applied to the magnetic fluid parallel and perpendicular to the heat flux direction. The FRSM was successfully applied to measurement of the time series variation of the thermal diffusivity of the magnetic fluid from the moment the magnetic field was applied. The results show that a characteristic feature in the time series variation of the thermal diffusivity of the magnetic fluid was obtained when the magnetic field was applied parallel to the heat flux direction. In contrast, when the magnetic field was applied perpendicular to the heat flux, the thermal diffusivity of the magnetic fluid hardly changed during the measurement. - Highlights: • Thermal diffusivity was measured by the forced Rayleigh scattering method (FRSM). • FRSM has high temporal and high spatial resolution. • We applied the FRSM to a magnetic fluid (MF). • The time series variation of the thermal diffusivity of the MF was successfully measured by the FRSM. • Anisotropic thermal diffusivity of the magnetic fluid was also successfully confirmed.

  19. Van der Waals potentials between metal clusters and helium atoms obtained with density functional theory and linear response methods

    International Nuclear Information System (INIS)

    Liebrecht, M.

    2014-01-01

    The importance of van der Waals interactions in many diverse research fields such as, e.g., polymer science, nano-materials, structural biology, surface science and condensed matter physics has created a high demand for efficient and accurate methods that can describe van der Waals interactions from first principles. These methods should be able to deal with large and complex systems to predict functions and properties of materials that are technologically and biologically relevant. Van der Waals interactions arise due to quantum mechanical correlation effects, and finding appropriate models and numerical techniques to describe this type of interaction is still an ongoing challenge in electronic structure and condensed matter theory. This thesis introduces a new variational approach to obtain intermolecular interaction potentials between clusters and helium atoms by means of density functional theory and linear response methods. It scales almost linearly with the number of electrons and can therefore be applied to much larger systems than standard quantum chemistry techniques. The main focus of this work is the development of an ab-initio method to account for London dispersion forces, which are purely attractive and dominate the interaction of non-polar atoms and molecules at large distances. (author) [de

  20. Variational method for objective analysis of scalar variable and its ...

    Indian Academy of Sciences (India)

    e-mail: sinha@tropmet.res.in. In this study real-time data have been used to compare the standard and triangle methods ...

  1. Parity among interpretation methods of MLEE patterns and disparity among clustering methods in epidemiological typing of Candida albicans.

    Science.gov (United States)

    Boriollo, Marcelo Fabiano Gomes; Rosa, Edvaldo Antonio Ribeiro; Gonçalves, Reginaldo Bruno; Höfling, José Francisco

    2006-03-01

    The typing of C. albicans by MLEE (multilocus enzyme electrophoresis) is dependent on the interpretation of enzyme electrophoretic patterns, and the study of the epidemiological relationships of these yeasts can be conducted by cluster analysis. Therefore, the aims of the present study were to first determine the discriminatory power of genetic interpretation (deduction of the allelic composition of diploid organisms) and numerical interpretation (mere determination of the presence and absence of bands) of MLEE patterns, and then to determine the concordance (Pearson product-moment correlation coefficient) and similarity (Jaccard similarity coefficient) of the groups of strains generated by three cluster analysis models, and the discriminatory power of such models as well [model A: genetic interpretation, genetic distance matrix of Nei (d(ij)) and UPGMA dendrogram; model B: genetic interpretation, Dice similarity matrix (S(D1)) and UPGMA dendrogram; model C: numerical interpretation, Dice similarity matrix (S(D2)) and UPGMA dendrogram]. MLEE was found to be a powerful and reliable tool for the typing of C. albicans due to its high discriminatory power (>0.9). Discriminatory power indicated that numerical interpretation is a method capable of discriminating a greater number of strains (47 versus 43 subtypes), but also pointed to model B as a method capable of providing a greater number of groups, suggesting its use for the typing of C. albicans by MLEE and cluster analysis. Very good agreement was only observed between the elements of the matrices S(D1) and S(D2), but a large majority of the groups generated in the three UPGMA dendrograms showed similarity S(J) between 4.8% and 75%, suggesting disparities in the conclusions obtained by the cluster assays.
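
    A small sketch of a model-C-style analysis is given below, assuming a made-up band presence/absence matrix: Dice dissimilarities are computed between strains and a UPGMA (average-linkage) dendrogram is cut into groups. It is meant only to show how the numerical interpretation of MLEE patterns feeds the cluster analysis, not to reproduce the study.

```python
# Sketch: Dice similarity of band patterns + UPGMA clustering of strains.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.cluster.hierarchy import linkage, fcluster

# Rows = strains, columns = presence (1) / absence (0) of enzyme bands (toy data).
bands = np.array([
    [1, 0, 1, 1, 0, 1],
    [1, 0, 1, 1, 0, 0],
    [0, 1, 0, 1, 1, 0],
    [0, 1, 0, 1, 1, 1],
], dtype=bool)

dice_distance = pdist(bands, metric="dice")     # 1 - Dice similarity
Z = linkage(dice_distance, method="average")    # UPGMA dendrogram
print(fcluster(Z, t=2, criterion="maxclust"))   # group membership of each strain
```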

  2. Comparison of variations detection between whole-genome amplification methods used in single-cell resequencing

    DEFF Research Database (Denmark)

    Hou, Yong; Wu, Kui; Shi, Xulian

    2015-01-01

    methods, focusing particularly on variations detection. Low-coverage whole-genome sequencing revealed that DOP-PCR had the highest duplication ratio, but an even read distribution and the best reproducibility and accuracy for detection of copy-number variations (CNVs). However, MDA had significantly...... performance using SCRS amplified by different WGA methods. It will guide researchers to determine which WGA method is best suited to individual experimental needs at single-cell level....

  3. A quasiparticle-based multi-reference coupled-cluster method.

    Science.gov (United States)

    Rolik, Zoltán; Kállay, Mihály

    2014-10-07

    The purpose of this paper is to introduce a quasiparticle-based multi-reference coupled-cluster (MRCC) approach. The quasiparticles are introduced via a unitary transformation which allows us to represent a complete active space reference function and other elements of an orthonormal multi-reference (MR) basis in a determinant-like form. The quasiparticle creation and annihilation operators satisfy the fermion anti-commutation relations. On the basis of these quasiparticles, a generalization of the normal-ordered operator products for the MR case can be introduced as an alternative to the approach of Mukherjee and Kutzelnigg [Recent Prog. Many-Body Theor. 4, 127 (1995); Mukherjee and Kutzelnigg, J. Chem. Phys. 107, 432 (1997)]. Based on the new normal ordering any quasiparticle-based theory can be formulated using the well-known diagram techniques. Beyond the general quasiparticle framework we also present a possible realization of the unitary transformation. The suggested transformation has an exponential form where the parameters, holding exclusively active indices, are defined in a form similar to the wave operator of the unitary coupled-cluster approach. The definition of our quasiparticle-based MRCC approach strictly follows the form of the single-reference coupled-cluster method and retains several of its beneficial properties. Test results for small systems are presented using a pilot implementation of the new approach and compared to those obtained by other MR methods.

  4. Clustering method for counting passengers getting in a bus with single camera

    Science.gov (United States)

    Yang, Tao; Zhang, Yanning; Shao, Dapei; Li, Ying

    2010-03-01

    Automatic counting of passengers is very important for both business and security applications. We present a single-camera-based vision system that is able to count passengers in a highly crowded situation at the entrance of a traffic bus. The unique characteristics of the proposed system include the following. First, a novel feature-point-tracking and online-clustering-based passenger counting framework, which performs much better than background-modeling- and foreground-blob-tracking-based methods. Second, a simple and highly accurate clustering algorithm is developed that projects the high-dimensional feature point trajectories into a 2-D feature space by their appearance and disappearance times and counts the number of people through online clustering. Finally, all test video sequences in the experiment are captured from a real traffic bus in Shanghai, China. The results show that the system can process two 320×240 video sequences at a frame rate of 25 fps simultaneously, and can count passengers reliably in various difficult scenarios with complex interaction and occlusion among people. The method achieves high accuracy rates of up to 96.5%.
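
    The sketch below illustrates the core idea of grouping tracked feature points by their appearance and disappearance times; DBSCAN is used here as an offline stand-in for the paper's online clustering, and the synthetic trajectories and the eps and min_samples values are assumptions.

```python
# Sketch: count passengers by clustering (appearance, disappearance) time pairs.
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(4)
# Three passengers, each carrying ~20 tracked feature points.
entries = np.repeat([10.0, 42.0, 75.0], 20) + rng.normal(0, 1.0, 60)
exits = entries + 30.0 + rng.normal(0, 1.5, 60)
points = np.column_stack([entries, exits])          # 2-D (appear, disappear) space

labels = DBSCAN(eps=5.0, min_samples=5).fit_predict(points)
n_passengers = len(set(labels) - {-1})              # -1 marks unclustered noise
print("estimated passenger count:", n_passengers)
```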

  5. A New Cluster Analysis-Marker-Controlled Watershed Method for Separating Particles of Granular Soils.

    Science.gov (United States)

    Alam, Md Ferdous; Haque, Asadul

    2017-10-18

    An accurate determination of the particle-level fabric of granular soils from tomography data requires a maximum correct separation of particles. The popular marker-controlled watershed method is widely used to separate particles. However, the watershed method alone is not capable of producing the maximum separation of particles when the soil has been subjected to boundary stresses leading to crushing of particles. In this paper, a new separation method, named the Monash Particle Separation Method (MPSM), is introduced. The new method automatically determines the optimal contrast coefficient, based on a cluster evaluation framework, to produce the most accurate separation outcomes. Finally, the particles which could not be separated with the optimal contrast coefficient were separated by integrating cuboid markers, generated from clustering by Gaussian mixture models, into the routine watershed method. The MPSM was validated on a uniformly graded sand volume subjected to one-dimensional compression loading up to 32 MPa. It was demonstrated that the MPSM is capable of producing the best possible separation of particles required for fabric analysis.
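
    The following sketch shows only the routine marker-controlled watershed step that the MPSM builds on, applied to a synthetic image of two touching particles; the contrast-coefficient optimisation and the GMM-derived cuboid markers of the MPSM itself are not reproduced, and the peak-detection parameters are assumptions.

```python
# Baseline marker-controlled watershed separation of two touching "particles".
import numpy as np
from scipy import ndimage as ndi
from skimage.segmentation import watershed
from skimage.feature import peak_local_max

# Two overlapping discs as a binary image.
yy, xx = np.mgrid[0:80, 0:80]
binary = ((xx - 30) ** 2 + (yy - 40) ** 2 < 15 ** 2) | \
         ((xx - 50) ** 2 + (yy - 40) ** 2 < 15 ** 2)

distance = ndi.distance_transform_edt(binary)
peak_idx = peak_local_max(distance, labels=binary.astype(int), min_distance=10)
markers = np.zeros_like(distance, dtype=int)
markers[tuple(peak_idx.T)] = np.arange(1, len(peak_idx) + 1)  # one marker per peak

labels = watershed(-distance, markers, mask=binary)
print("separated particles:", labels.max())
```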

  6. application of single-linkage clustering method in the analysis of ...

    African Journals Online (AJOL)

    ANALYSIS OF GROWTH RATE OF GROSS DOMESTIC PRODUCT (GDP) AT ... The end result of the algorithm is a tree of clusters called a dendrogram, which shows how the clusters are ..... Number of cluster sum from observations of ...

  7. Variational configuration interaction methods and comparison with perturbation theory

    International Nuclear Information System (INIS)

    Pople, J.A.; Seeger, R.; Krishnan, R.

    1977-01-01

    A configuration interaction (CI) procedure which includes all single and double substitutions from an unrestricted Hartree-Fock single determinant is described. This has the feature that Moller-Plesset perturbation results to second and third order are obtained in the first CI iterative cycle. The procedure also avoids the necessity of a full two-electron integral transformation. A simple expression for correcting the final CI energy for lack of size consistency is proposed. Finally, calculations on a series of small molecules are presented to compare these CI methods with perturbation theory

  8. A novel intrusion detection method based on OCSVM and K-means recursive clustering

    Directory of Open Access Journals (Sweden)

    Leandros A. Maglaras

    2015-01-01

    Full Text Available In this paper we present an intrusion detection module capable of detecting malicious network traffic in a SCADA (Supervisory Control and Data Acquisition) system, based on the combination of a One-Class Support Vector Machine (OCSVM) with RBF kernel and recursive k-means clustering. Important parameters of the OCSVM, such as the Gaussian width σ and the parameter ν, affect the performance of the classifier. Tuning of these parameters is of great importance in order to avoid false positives and overfitting. The combination of OCSVM with recursive k-means clustering leads the proposed intrusion detection module to distinguish real alarms from possible attacks regardless of the values of the parameters σ and ν, making it ideal for real-time intrusion detection mechanisms for SCADA systems. Extensive simulations have been conducted with datasets extracted from small and medium sized HTB SCADA testbeds, in order to compare the accuracy, false alarm rate and execution time against the baseline OCSVM method.
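
    A hedged sketch of the two-stage idea follows: an RBF one-class SVM flags unusual traffic, and k-means then groups the flagged records so that a clustered burst (a likely attack) can be told apart from scattered false alarms. The synthetic features, ν = 0.05 and the two-cluster choice are assumptions; in scikit-learn the Gaussian width is controlled through gamma rather than σ.

```python
# Sketch: OCSVM outlier flagging followed by k-means grouping of the outliers.
import numpy as np
from sklearn.svm import OneClassSVM
from sklearn.cluster import KMeans

rng = np.random.default_rng(5)
normal = rng.normal(0, 1, (500, 3))                 # normal SCADA traffic features
attack = rng.normal(4, 0.3, (40, 3))                # a tight burst of attack traffic
traffic = np.vstack([normal, attack])

ocsvm = OneClassSVM(kernel="rbf", nu=0.05, gamma="scale").fit(normal)
flagged = traffic[ocsvm.predict(traffic) == -1]     # -1 marks outliers

if len(flagged) >= 2:
    km = KMeans(n_clusters=2, n_init=10, random_state=5).fit(flagged)
    sizes = np.bincount(km.labels_)
    print("outlier cluster sizes:", sizes)          # the large, tight group is the alarm
```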

  9. New Target for an Old Method: Hubble Measures Globular Cluster Parallax

    Science.gov (United States)

    Hensley, Kerry

    2018-05-01

    Measuring precise distances to faraway objects has long been a challenge in astrophysics. Now, one of the earliest techniques used to measure the distance to astrophysical objects has been applied to a metal-poor globular cluster for the first time. A Classic Technique: An artist's impression of the European Space Agency's Gaia spacecraft. Gaia is on track to map the positions and motions of a billion stars. [ESA] Distances to nearby stars are often measured using the parallax technique: tracing the tiny apparent motion of a target star against the background of more distant stars as Earth orbits the Sun. This technique has come a long way since it was first used in the 1800s to measure the distance to stars a few tens of light-years away; with the advent of space observatories like Hipparcos and Gaia, parallax can now be used to map the positions of stars out to thousands of light-years. Precise distance measurements aren't only important for setting the scale of the universe, however; they can also help us better understand stellar evolution over the course of cosmic history. Stellar evolution models are often anchored to a reference star cluster, the properties of which must be known precisely. These precise properties can be readily determined for young, nearby open clusters using parallax measurements. But stellar evolution models that anchor on the more distant, ancient, metal-poor globular clusters have been hampered by the less precise indirect methods used to measure distances to these faraway clusters, until now. Top: An image of NGC 6397 overlaid with the area scanned by Hubble (dashed green) and the footprint of the camera (solid green). The blue ellipse represents the parallax motion of a star in the cluster, exaggerated by a factor of ten thousand. Bottom: An example scan from this field. [Adapted from Brown et al. 2018] New Measurement to an Old Cluster: Thomas Brown (Space Telescope Science Institute) and collaborators used the Hubble Space Telescope to determine the

  10. Total error components - isolation of laboratory variation from method performance

    International Nuclear Information System (INIS)

    Bottrell, D.; Bleyler, R.; Fisk, J.; Hiatt, M.

    1992-01-01

    The consideration of total error across sampling and analytical components of environmental measurements is relatively recent. The U.S. Environmental Protection Agency (EPA), through the Contract Laboratory Program (CLP), provides complete analyses and documented reports on approximately 70,000 samples per year. The quality assurance (QA) functions of the CLP procedures provide an ideal data base-CLP Automated Results Data Base (CARD)-to evaluate program performance relative to quality control (QC) criteria and to evaluate the analysis of blind samples. Repetitive analyses of blind samples within each participating laboratory provide a mechanism to separate laboratory and method performance. Isolation of error sources is necessary to identify effective options to establish performance expectations, and to improve procedures. In addition, optimized method performance is necessary to identify significant effects that result from the selection among alternative procedures in the data collection process (e.g., sampling device, storage container, mode of sample transit, etc.). This information is necessary to evaluate data quality; to understand overall quality; and to provide appropriate, cost-effective information required to support a specific decision

  11. Global survey of star clusters in the Milky Way. VI. Age distribution and cluster formation history

    Science.gov (United States)

    Piskunov, A. E.; Just, A.; Kharchenko, N. V.; Berczik, P.; Scholz, R.-D.; Reffert, S.; Yen, S. X.

    2018-06-01

    Context. The all-sky Milky Way Star Clusters (MWSC) survey provides uniform and precise ages, along with other relevant parameters, for a wide variety of clusters in the extended solar neighbourhood. Aims: In this study we aim to construct the cluster age distribution, investigate its spatial variations, and discuss constraints on cluster formation scenarios of the Galactic disk during the last 5 Gyrs. Methods: Due to the spatial extent of the MWSC, we have considered spatial variations of the age distribution along galactocentric radius RG, and along Z-axis. For the analysis of the age distribution we used 2242 clusters, which all lie within roughly 2.5 kpc of the Sun. To connect the observed age distribution to the cluster formation history we built an analytical model based on simple assumptions on the cluster initial mass function and on the cluster mass-lifetime relation, fit it to the observations, and determined the parameters of the cluster formation law. Results: Comparison with the literature shows that earlier results strongly underestimated the number of evolved clusters with ages t ≳ 100 Myr. Recent studies based on all-sky catalogues agree better with our data, but still lack the oldest clusters with ages t ≳ 1 Gyr. We do not observe a strong variation in the age distribution along RG, though we find an enhanced fraction of older clusters (t > 1 Gyr) in the inner disk. In contrast, the distribution strongly varies along Z. The high altitude distribution practically does not contain clusters with t < 1 Gyr. With simple assumptions on the cluster formation history, the cluster initial mass function and the cluster lifetime we can reproduce the observations. The cluster formation rate and the cluster lifetime are strongly degenerate, which does not allow us to disentangle different formation scenarios. In all cases the cluster formation rate is strongly declining with time, and the cluster initial mass function is very shallow at the high mass end.

  12. Magnetic field-induced cluster formation and variation of magneto-optical signals in zinc-substituted ferrofluids

    Energy Technology Data Exchange (ETDEWEB)

    Nair, S.S. [Department of Physics, Cochin University of Science and Technology, Cochin 682 022 (India)]. E-mail: swapna@cusat.ac.in; Rajesh, S. [Department of Physics, Cochin University of Science and Technology, Cochin 682 022 (India); Abraham, V.S. [School of Engineering and Sciences, International University of Bremen, 28759 (Germany); Anantharaman, M.R. [Department of Physics, Cochin University of Science and Technology, Cochin 682 022 (India)]. E-mail: mraiyer@yahoo.com; Nampoori, V.P.N. [International School of Photonics, Cochin University of Science and Technology, Cochin-22 (India)

    2006-10-15

    Fine magnetic particles (size ≈ 100 Å) belonging to the series ZnₓFe₁₋ₓFe₂O₄ were synthesized by cold co-precipitation methods and their structural properties were evaluated using X-ray diffraction. Magnetization studies have been carried out using vibrating sample magnetometry (VSM), showing near-zero loss loop characteristics. Ferrofluids were then prepared employing these fine magnetic powders using oleic acid as surfactant and kerosene as carrier liquid, by modifying the usually reported synthesis technique in order to induce anisotropy and enhance the magneto-optical signals. Liquid thin films of these fluids were prepared and field-induced laser transmission through these films was studied. The transmitted light intensity decreases at the centre with applied magnetic field in a linear fashion when subjected to low magnetic fields and saturates at higher fields. This is in accordance with the saturation in cluster formation. The pattern exhibited by these films in the presence of different magnetic fields was observed with the help of a CCD camera and was recorded photographically.

  13. Consumers' Kansei Needs Clustering Method for Product Emotional Design Based on Numerical Design Structure Matrix and Genetic Algorithms.

    Science.gov (United States)

    Yang, Yan-Pu; Chen, Deng-Kai; Gu, Rong; Gu, Yu-Feng; Yu, Sui-Huai

    2016-01-01

    Consumers' Kansei needs reflect their perception about a product and always consist of a large number of adjectives. Reducing the dimension complexity of these needs to extract primary words not only enables the target product to be explicitly positioned, but also provides a convenient design basis for designers engaging in design work. Accordingly, this study employs a numerical design structure matrix (NDSM) by parameterizing a conventional DSM and integrating genetic algorithms to find optimum Kansei clusters. A four-point scale method is applied to assign link weights of every two Kansei adjectives as values of cells when constructing an NDSM. Genetic algorithms are used to cluster the Kansei NDSM and find optimum clusters. Furthermore, the process of the proposed method is presented. The details of the proposed approach are illustrated using an example of electronic scooter for Kansei needs clustering. The case study reveals that the proposed method is promising for clustering Kansei needs adjectives in product emotional design.

  14. Large-scale atomic calculations using variational methods

    Energy Technology Data Exchange (ETDEWEB)

    Joensson, Per

    1995-01-01

    Atomic properties, such as radiative lifetimes, hyperfine structures and isotope shift, have been studied both theoretically and experimentally. Computer programs which calculate these properties from multiconfiguration Hartree-Fock (MCHF) and configuration interaction (CI) wave functions have been developed and tested. To study relativistic effects, a program which calculates hyperfine structures from multiconfiguration Dirac-Fock (MCDF) wave functions has also been written. A new method of dealing with radial non-orthogonalities in transition matrix elements has been investigated. This method allows two separate orbital sets to be used for the initial and final states, respectively. It is shown that, once the usual orthogonality restrictions have been overcome, systematic MCHF calculations are able to predict oscillator strengths in light atoms with high accuracy. In connection with recent high-power laser experiments, time-dependent calculations of the atomic response to intense laser fields have been performed. Using the frozen-core approximation, where the atom is modeled as an active electron moving in the average field of the core electrons and the nucleus, the active electron has been propagated in time under the influence of the laser field. Radiative lifetimes and hyperfine structures of excited states in sodium and silver have been experimentally determined using time-resolved laser spectroscopy. By recording the fluorescence light decay following laser excitation in the vacuum ultraviolet spectral region, the radiative lifetimes and hyperfine structures of the 7p ²P states in silver have been measured. The delayed-coincidence technique has been used to make very accurate measurements of the radiative lifetimes and hyperfine structures of the lowest ²P states in sodium and silver. 77 refs, 2 figs, 14 tabs.

  15. Large-scale atomic calculations using variational methods

    International Nuclear Information System (INIS)

    Joensson, Per.

    1995-01-01

    Atomic properties, such as radiative lifetimes, hyperfine structures and isotope shift, have been studied both theoretically and experimentally. Computer programs which calculate these properties from multiconfiguration Hartree-Fock (MCHF) and configuration interaction (CI) wave functions have been developed and tested. To study relativistic effects, a program which calculates hyperfine structures from multiconfiguration Dirac-Fock (MCDF) wave functions has also been written. A new method of dealing with radial non-orthogonalities in transition matrix elements has been investigated. This method allows two separate orbital sets to be used for the initial and final states, respectively. It is shown that, once the usual orthogonality restrictions have been overcome, systematic MCHF calculations are able to predict oscillator strengths in light atoms with high accuracy. In connection with recent high-power laser experiments, time-dependent calculations of the atomic response to intense laser fields have been performed. Using the frozen-core approximation, where the atom is modeled as an active electron moving in the average field of the core electrons and the nucleus, the active electron has been propagated in time under the influence of the laser field. Radiative lifetimes and hyperfine structures of excited states in sodium and silver have been experimentally determined using time-resolved laser spectroscopy. By recording the fluorescence light decay following laser excitation in the vacuum ultraviolet spectral region, the radiative lifetimes and hyperfine structures of the 7p ²P states in silver have been measured. The delayed-coincidence technique has been used to make very accurate measurements of the radiative lifetimes and hyperfine structures of the lowest ²P states in sodium and silver. 77 refs, 2 figs, 14 tabs

  16. Multiple-Features-Based Semisupervised Clustering DDoS Detection Method

    Directory of Open Access Journals (Sweden)

    Yonghao Gu

    2017-01-01

    Full Text Available The DDoS attack stream converging at the victim host from different agent hosts becomes very large, which can lead to system halt or network congestion. Therefore, it is necessary to propose an effective method to detect DDoS attack behavior in a massive data stream. In order to address the problem that large numbers of labeled data are not available for supervised learning, and the relatively low detection accuracy and convergence speed of the unsupervised k-means algorithm, this paper presents a semisupervised clustering detection method using multiple features. In this detection method, we first select three features according to the characteristics of DDoS attacks to form the detection feature vector. Then, the Multiple-Features-Based Constrained-K-Means (MF-CKM) algorithm is proposed based on semisupervised clustering. Finally, using the MIT Laboratory Scenario (DDoS) 1.0 data set, we verify that the proposed method can improve the convergence speed and accuracy of the algorithm under the condition of using a small amount of labeled data.
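
    The sketch below uses simple seeded k-means as a simplified stand-in for MF-CKM: a small labelled sample fixes the initial centroids (one per class) and ordinary k-means then refines them on unlabelled traffic. The three-dimensional synthetic features and the seed sizes are assumptions, not the paper's detection features.

```python
# Simplified seeded (semisupervised) k-means sketch for attack/normal separation.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(6)
normal = rng.normal([1, 1, 1], 0.3, (1000, 3))
ddos = rng.normal([3, 0.2, 4], 0.3, (200, 3))
unlabeled = np.vstack([normal, ddos])

# A handful of labelled flows seeds the two centroids.
seed_normal = rng.normal([1, 1, 1], 0.3, (10, 3)).mean(axis=0)
seed_ddos = rng.normal([3, 0.2, 4], 0.3, (10, 3)).mean(axis=0)
init = np.vstack([seed_normal, seed_ddos])

km = KMeans(n_clusters=2, init=init, n_init=1).fit(unlabeled)
print("flows assigned to the attack cluster:", int((km.labels_ == 1).sum()))
```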

  17. New method for reconstruction of star spatial distribution in globular clusters and its application to flare stars in Pleiades

    International Nuclear Information System (INIS)

    Kosarev, E.L.

    1980-01-01

    A new method to reconstruct the spatial star distribution in globular clusters is presented. The method gives both an estimate of the unknown spatial distribution and the probable reconstruction error. This error has a statistical origin and depends only on the number of stars in a cluster. The method is applied to reconstruct the spatial density of 441 flare stars in the Pleiades. The spatial density has a maximum in the centre of the cluster of about 1.6-2.5 pc⁻³ and, with increasing distance from the centre, falls smoothly to zero, approximately following a Gaussian law with a scale parameter of 3.5 pc

  18. K-Line Patterns’ Predictive Power Analysis Using the Methods of Similarity Match and Clustering

    Directory of Open Access Journals (Sweden)

    Lv Tao

    2017-01-01

    Full Text Available Stock price prediction based on K-line patterns is the essence of candlestick technical analysis. However, there is some dispute in academia on whether K-line patterns have predictive power. To help resolve the debate, this paper uses the data mining methods of pattern recognition, pattern clustering, and pattern knowledge mining to research the predictive power of K-line patterns. A similarity match model and a nearest-neighbor clustering algorithm are proposed for solving the problems of similarity matching and clustering of K-line series, respectively. The experiment includes testing the predictive power of the Three Inside Up pattern and the Three Inside Down pattern with a testing dataset of the K-line series data of Shanghai 180 index component stocks over the latest 10 years. Experimental results show that (1) the predictive power of a pattern varies a great deal for different shapes and (2) each of the existing K-line patterns requires further classification based on the shape feature for improving the prediction performance.

  19. Interpretation of biological and mechanical variations between the Lowry versus Bradford method for protein quantification.

    Science.gov (United States)

    Lu, Tzong-Shi; Yiao, Szu-Yu; Lim, Kenneth; Jensen, Roderick V; Hsiao, Li-Li

    2010-07-01

The identification of differences in protein expression resulting from methodical variations is an essential component of the interpretation of true, biologically significant results. We used the Lowry and Bradford methods, the two most commonly used methods for protein quantification, to assess whether differential protein expressions are a result of true biological or methodical variations. MATERIALS & METHODS: Differential protein expression patterns were assessed by western blot following protein quantification by the Lowry and Bradford methods. We observed significant variations in protein concentrations following assessment with the Lowry versus Bradford methods, using identical samples. Greater variations in protein concentration readings were observed over time and in samples with higher concentrations with the Bradford method. Identical samples quantified using both methods yielded significantly different expression patterns on western blot. We show for the first time that the methodical variations observed in these protein assay techniques can potentially translate into differential protein expression patterns that can be falsely taken to be biologically significant. Our study therefore highlights the pivotal need to carefully consider the methodical approach to protein quantification in techniques that report quantitative differences.

  20. Electricity Consumption Clustering Using Smart Meter Data

    Directory of Open Access Journals (Sweden)

    Alexander Tureczek

    2018-04-01

Full Text Available Electricity smart meter consumption data is enabling utilities to analyze consumption information at unprecedented granularity. Much focus has been directed towards consumption clustering for diversifying tariffs, and cluster analyses have been performed with modern clustering methods. However, the clusters developed exhibit a large variation, with resulting shadow clusters making it impossible to truly identify the individual clusters. Using clearly defined dwelling types, this paper presents methods to improve clustering by harvesting inherent structure from the smart meter data. The paper clusters domestic electricity consumption using smart meter data from the Danish city of Esbjerg. Methods from time series analysis and wavelets are applied to enable the K-Means clustering method to account for autocorrelation in the data and thereby improve the clustering performance. The results show the importance of data knowledge: we identify sub-clusters of consumption within the dwelling types and enable K-Means to produce satisfactory clustering by accounting for a temporal component. Furthermore, our study shows that careful preprocessing of the data to account for intrinsic structure enables better clustering performance by the K-Means method.
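
    The exact wavelet preprocessing used in the study is not reproduced here; as a sketch of the general idea of giving K-Means a temporal component, each load profile can be replaced by its autocorrelation values at the first few lags before clustering. The feature choice, lag count, and toy data are assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

def acf_features(profile, max_lag=24):
    """Autocorrelation of a consumption profile at lags 1..max_lag."""
    x = profile - profile.mean()
    denom = np.dot(x, x)
    return np.array([np.dot(x[:-k], x[k:]) / denom for k in range(1, max_lag + 1)])

# toy data: 200 households, hourly readings for one week (168 hours)
rng = np.random.default_rng(2)
hours = np.arange(168)
loads = np.vstack([np.sin(2 * np.pi * hours / 24 + rng.uniform(0, np.pi))
                   + rng.normal(0, 0.3, 168) for _ in range(200)])

features = np.vstack([acf_features(p) for p in loads])
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(features)
```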

  1. A Novel Method to Predict Genomic Islands Based on Mean Shift Clustering Algorithm.

    Directory of Open Access Journals (Sweden)

    Daniel M de Brito

Full Text Available Genomic Islands (GIs) are regions of bacterial genomes that are acquired from other organisms by the phenomenon of horizontal transfer. These regions are often responsible for many important acquired adaptations of the bacteria, with great impact on their evolution and behavior. Nevertheless, these adaptations are usually associated with pathogenicity, antibiotic resistance, degradation and metabolism. Identification of such regions is of medical and industrial interest. For this reason, different approaches for genomic island prediction have been proposed. However, none of them is capable of predicting precisely the complete repertoire of GIs in a genome. The difficulties arise due to the changes in performance of different algorithms in the face of the variety of nucleotide distributions in different species. In this paper, we present a novel method to predict GIs that is built upon the mean shift clustering algorithm. It does not require any information regarding the number of clusters, and the bandwidth parameter is automatically calculated based on a heuristic approach. The method was implemented in a new user-friendly tool named MSGIP--Mean Shift Genomic Island Predictor. Genomes of bacteria with GIs discussed in other papers were used to evaluate the proposed method. The application of this tool revealed the same GIs predicted by other methods and also novel islands that had not been predicted before. A detailed investigation of the different features related to typical GI elements inserted in these new regions confirmed its effectiveness. Stand-alone and user-friendly versions of this new methodology are available at http://msgip.integrativebioinformatics.me.
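
    MSGIP's own bandwidth heuristic is not reproduced here, but the mean shift step can be illustrated with scikit-learn: windowed genomic-signature features (GC content in this toy sketch) are clustered with a data-driven bandwidth, and windows falling into atypical clusters would be the candidate island regions. The window size, signature, and synthetic genome below are assumptions.

```python
import numpy as np
from sklearn.cluster import MeanShift, estimate_bandwidth

def gc_windows(seq, size=5000, step=1000):
    """GC content of sliding windows along a genome sequence (a toy signature)."""
    vals = []
    for start in range(0, len(seq) - size + 1, step):
        w = seq[start:start + size]
        vals.append((w.count("G") + w.count("C")) / size)
    return np.array(vals).reshape(-1, 1)

rng = np.random.default_rng(3)
# synthetic genome: host background plus an inserted low-GC segment
genome = "".join(rng.choice(list("ACGT"), p=[0.2, 0.3, 0.3, 0.2], size=200_000))
island = "".join(rng.choice(list("ACGT"), p=[0.35, 0.15, 0.15, 0.35], size=20_000))
seq = genome[:100_000] + island + genome[100_000:]

X = gc_windows(seq)
bandwidth = estimate_bandwidth(X, quantile=0.2)   # data-driven bandwidth estimate
labels = MeanShift(bandwidth=bandwidth).fit_predict(X)
```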

  2. A new method to cluster genomes based on cumulative Fourier power spectrum.

    Science.gov (United States)

    Dong, Rui; Zhu, Ziyue; Yin, Changchuan; He, Rong L; Yau, Stephen S-T

    2018-06-20

Analyzing phylogenetic relationships using mathematical methods has always been of importance in bioinformatics. Quantitative research may interpret the raw biological data in a precise way. Multiple Sequence Alignment (MSA) is used frequently to analyze biological evolution, but it is very time-consuming. When the scale of the data is large, alignment methods cannot finish the calculation in reasonable time. Therefore, we present a new method using moments of the cumulative Fourier power spectrum for clustering DNA sequences. Each sequence is translated into a vector in Euclidean space. Distances between the vectors can reflect the relationships between sequences. The mapping between the spectra and moment vectors is one-to-one, which means that no information is lost from the power spectra during the calculation. We cluster and classify several datasets, including Influenza A, primate, and human rhinovirus (HRV) datasets, to build up the phylogenetic trees. Results show that the newly proposed cumulative Fourier power spectrum is much faster and more accurate than MSA and another alignment-free method known as k-mer. The research provides new insights into the study of phylogeny, evolution, and efficient DNA comparison algorithms for large genomes. The computer programs of the cumulative Fourier power spectrum are available at GitHub (https://github.com/YaulabTsinghua/cumulative-Fourier-power-spectrum). Copyright © 2018. Published by Elsevier B.V.
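
    A simplified reading of the approach can be sketched as follows: each sequence is mapped to per-base indicator vectors, the cumulative Fourier power spectrum is computed, and its first few moments form the Euclidean feature vector used for distance-based comparison. The number of moments and the normalization below are assumptions, not the authors' published formulas.

```python
import numpy as np

def spectrum_moments(seq, n_moments=3):
    """Moments of the cumulative Fourier power spectrum, one block per base."""
    feats = []
    for base in "ACGT":
        indicator = np.array([1.0 if b == base else 0.0 for b in seq])
        power = np.abs(np.fft.fft(indicator)) ** 2
        cum = np.cumsum(power[1:])              # drop the DC component
        if cum[-1] == 0.0:                      # base absent from the sequence
            feats.extend([0.0] * n_moments)
            continue
        cum /= cum[-1]                          # normalized cumulative spectrum
        k = np.arange(1, len(cum) + 1) / len(cum)
        feats.extend(np.mean(cum * k ** j) for j in range(1, n_moments + 1))
    return np.array(feats)

seqs = {"s1": "ACGT" * 200, "s2": "AACG" * 200, "s3": "ACGT" * 199 + "ACGA"}
vecs = {name: spectrum_moments(s) for name, s in seqs.items()}
for a in sorted(seqs):
    for b in sorted(seqs):
        if a < b:                               # pairwise Euclidean distances
            print(a, b, round(float(np.linalg.norm(vecs[a] - vecs[b])), 6))
```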

  3. Theoretical study of the F2 molecule using the variational cellular method

    International Nuclear Information System (INIS)

    Lima, M.A.P.; Leite, J.R.; Fazzio, A.

    1981-02-01

Variational Cellular Method calculations for F₂ have been carried out at several internuclear distances. The ground and excited state potential curves are presented. The overall agreement between the VCM results and ab initio calculations is fairly good. (Author) [pt

  4. Complementary variational principle method applied to thermal conductivities of a plasma in a uniform magnetic field

    Energy Technology Data Exchange (ETDEWEB)

    Sehgal, A K; Gupta, S C [Punjabi Univ., Patiala (India). Dept. of Physics

    1982-12-14

The complementary variational principles method (CVP) is applied to the thermal conductivities of a plasma in a uniform magnetic field. The results of the computations show that the CVP-derived results are very useful.

  5. [A cloud detection algorithm for MODIS images combining Kmeans clustering and multi-spectral threshold method].

    Science.gov (United States)

    Wang, Wei; Song, Wei-Guo; Liu, Shi-Xing; Zhang, Yong-Ming; Zheng, Hong-Yang; Tian, Wei

    2011-04-01

An improved cloud detection method combining K-means clustering and the multi-spectral threshold approach is described. On the basis of landmark spectrum analysis, MODIS data are first categorized into two major classes by the K-means method. The first class includes clouds, smoke and snow, and the second class includes vegetation, water and land. Then a multi-spectral threshold detection is applied to eliminate interference such as smoke and snow from the first class. The method was tested with MODIS data at different times under different underlying surface conditions. Visual inspection of the algorithm's performance showed that it can effectively detect small areas of cloud pixels and exclude the interference of the underlying surface, which provides a good foundation for a subsequent fire detection approach.
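
    A minimal sketch of the two-stage idea, assuming illustrative band combinations and threshold values rather than the calibrated MODIS tests of the paper: K-means first splits pixels into a bright and a dark class, and spectral thresholds then remove non-cloud members of the bright class.

```python
import numpy as np
from sklearn.cluster import KMeans

def detect_cloud(ref_visible, bt_11um, ref_cirrus):
    """Two-stage cloud mask: coarse K-means split, then multi-spectral thresholds.
    Band choices and threshold values here are illustrative, not MODIS-calibrated."""
    h, w = ref_visible.shape
    feats = np.column_stack([ref_visible.ravel(), bt_11um.ravel()])
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(feats)
    # the cluster with higher visible reflectance holds the cloud/snow/smoke candidates
    bright = labels == np.argmax([feats[labels == k, 0].mean() for k in (0, 1)])
    # keep only bright-class pixels that also pass illustrative spectral thresholds
    cloud = bright & (ref_cirrus.ravel() > 0.04) & (bt_11um.ravel() < 285.0)
    return cloud.reshape(h, w)

rng = np.random.default_rng(4)
vis = rng.uniform(0.05, 0.9, (64, 64))       # visible reflectance
bt = rng.uniform(260.0, 300.0, (64, 64))     # 11 um brightness temperature (K)
cirrus = rng.uniform(0.0, 0.1, (64, 64))     # cirrus-band reflectance
mask = detect_cloud(vis, bt, cirrus)
```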

  6. Data Clustering on Breast Cancer Data Using Firefly Algorithm with Golden Ratio Method

    Directory of Open Access Journals (Sweden)

    DEMIR, M.

    2015-05-01

Full Text Available Heuristic methods are problem solving methods. In general, they obtain near-optimal solutions and are not concerned with proving optimality. Heuristic methods do not guarantee the optimal result; however, they do obtain near-optimal solutions in reasonable time. In this paper, an application was performed using the firefly algorithm, one of the heuristic methods. The golden ratio was applied to different steps and parameters of the firefly algorithm to develop a new algorithm, called the Firefly Algorithm with Golden Ratio (FAGR). It was shown that the golden ratio made the firefly algorithm superior to the firefly algorithm without it. To this end, the developed algorithm was applied to the WBCD database (breast cancer database) to cluster data obtained from breast cancer patients. The highest success rate obtained among all executions is 96% and the highest average success rate over all executions is 94.5%.

  7. An image segmentation method based on fuzzy C-means clustering and Cuckoo search algorithm

    Science.gov (United States)

    Wang, Mingwei; Wan, Youchuan; Gao, Xianjun; Ye, Zhiwei; Chen, Maolin

    2018-04-01

Image segmentation is a significant step in image analysis and machine vision. Many approaches have been presented on this topic; among them, fuzzy C-means (FCM) clustering is one of the most widely used methods because of its high efficiency and its ability to handle the ambiguity of images. However, the success of FCM cannot be guaranteed because it is easily trapped in local optimal solutions. Cuckoo search (CS) is a novel evolutionary algorithm, which has been tested on some optimization problems and proved to be highly efficient. Therefore, a new segmentation technique blending FCM with the CS algorithm is put forward in the paper. Further, the proposed method has been evaluated on several images and compared with other existing FCM techniques, such as genetic algorithm (GA) based FCM and particle swarm optimization (PSO) based FCM, in terms of fitness value. Experimental results indicate that the proposed method is robust and adaptive and exhibits better performance than the other methods considered in the paper.
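
    The cuckoo-search hybridization is not reproduced here; the sketch below implements only the standard FCM updates (memberships and centroids alternated until convergence), which is the component the paper builds on, applied to a toy intensity-based segmentation.

```python
import numpy as np

def fuzzy_c_means(X, c=3, m=2.0, n_iter=100, tol=1e-5, seed=0):
    """Standard fuzzy C-means: alternate membership and centroid updates."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)           # random fuzzy memberships
    for _ in range(n_iter):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        # u_ik = 1 / sum_j (d_ik / d_ij)^(2/(m-1))
        U_new = 1.0 / np.sum((d[:, :, None] / d[:, None, :]) ** (2 / (m - 1)), axis=2)
        if np.abs(U_new - U).max() < tol:
            U = U_new
            break
        U = U_new
    return U, centers

# segment a toy grayscale "image" by pixel intensity
rng = np.random.default_rng(5)
img = np.concatenate([rng.normal(mu, 8, 2000) for mu in (40, 120, 200)])
U, centers = fuzzy_c_means(img.reshape(-1, 1), c=3)
labels = U.argmax(axis=1)
```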

  8. Variational Homotopy Perturbation Method for Solving Higher Dimensional Initial Boundary Value Problems

    Directory of Open Access Journals (Sweden)

    Muhammad Aslam Noor

    2008-01-01

Full Text Available We suggest and analyze a technique combining the variational iteration method and the homotopy perturbation method. This method is called the variational homotopy perturbation method (VHPM). We use this method for solving higher dimensional initial boundary value problems with variable coefficients. The developed algorithm is quite efficient and is practically well suited for use in these problems. The proposed scheme finds the solution without any discretization, transformation, or restrictive assumptions and avoids round-off errors. Several examples are given to check the reliability and efficiency of the proposed technique.

  9. Onto-clust--a methodology for combining clustering analysis and ontological methods for identifying groups of comorbidities for developmental disorders.

    Science.gov (United States)

    Peleg, Mor; Asbeh, Nuaman; Kuflik, Tsvi; Schertz, Mitchell

    2009-02-01

Children with developmental disorders usually exhibit multiple developmental problems (comorbidities). Hence, diagnosis needs to revolve around developmental disorder groups. Our objective is to systematically identify developmental disorder groups and represent them in an ontology. We developed a methodology that combines two methods: (1) a literature-based ontology that we created, which represents developmental disorders and potential developmental disorder groups, and (2) clustering for detecting comorbid developmental disorders in patient data. The ontology is used to interpret and improve clustering results, and the clustering results are used to validate the ontology and suggest directions for its development. We evaluated our methodology by applying it to data from 1175 patients of a child development clinic. We demonstrated that the ontology improves clustering results, bringing them closer to an expert-generated gold standard. We have shown that our methodology successfully combines an ontology with a clustering method to support systematic identification and representation of developmental disorder groups.

  10. Don't spin the pen: two alternative methods for second-stage sampling in urban cluster surveys

    Directory of Open Access Journals (Sweden)

    Rose Angela MC

    2007-06-01

Full Text Available Abstract In two-stage cluster surveys, the traditional method used in second-stage sampling (in which the first household in a cluster is selected) is time-consuming and may result in biased estimates of the indicator of interest. Firstly, a random direction from the center of the cluster is selected, usually by spinning a pen. The houses along that direction are then counted out to the boundary of the cluster, and one is then selected at random to be the first household surveyed. This process favors households towards the center of the cluster, but it could easily be improved. During a recent meningitis vaccination coverage survey in Maradi, Niger, we compared this method of first household selection to two alternatives in urban zones: 1) using a superimposed grid on the map of the cluster area and randomly selecting an intersection; and 2) drawing the perimeter of the cluster area using a Global Positioning System (GPS) and randomly selecting one point within the perimeter. Although we only compared a limited number of clusters using each method, we found the sampling grid method to be the fastest and easiest for field survey teams, although it does require a map of the area. Selecting a random GPS point was also found to be a good method, once adequate training can be provided. Spinning the pen and counting households to the boundary was the most complicated and time-consuming. The two methods tested here represent simpler, quicker and potentially more robust alternatives to spinning the pen for cluster surveys in urban areas. However, in rural areas, these alternatives would favor initial household selection from lower density (or even potentially empty) areas. Bearing in mind these limitations, as well as available resources and feasibility, investigators should choose the most appropriate method for their particular survey context.
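
    A minimal sketch of the second alternative (a random GPS point inside the cluster perimeter), assuming a hypothetical perimeter polygon and simple rejection sampling; the actual survey used dedicated GPS units rather than this code.

```python
import numpy as np
from matplotlib.path import Path

def random_points_in_polygon(vertices, n, seed=0):
    """Draw n uniform random points inside a cluster perimeter (lon/lat polygon)
    by rejection sampling within its bounding box."""
    poly = Path(vertices)
    (xmin, ymin), (xmax, ymax) = np.min(vertices, axis=0), np.max(vertices, axis=0)
    rng = np.random.default_rng(seed)
    points = []
    while len(points) < n:
        p = rng.uniform([xmin, ymin], [xmax, ymax])
        if poly.contains_point(p):
            points.append(p)
    return np.array(points)

# hypothetical cluster perimeter walked with a GPS (lon, lat pairs)
perimeter = np.array([[7.10, 13.50], [7.13, 13.50], [7.14, 13.53], [7.11, 13.54]])
start_point = random_points_in_polygon(perimeter, n=1)
print(start_point)   # starting location for first-household selection
```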

  11. Partial differential equations with variable exponents variational methods and qualitative analysis

    CERN Document Server

    Radulescu, Vicentiu D

    2015-01-01

Partial Differential Equations with Variable Exponents: Variational Methods and Qualitative Analysis provides researchers and graduate students with a thorough introduction to the theory of nonlinear partial differential equations (PDEs) with a variable exponent, particularly those of elliptic type. The book presents the most important variational methods for elliptic PDEs described by nonhomogeneous differential operators and containing one or more power-type nonlinearities with a variable exponent. The authors give a systematic treatment of the basic mathematical theory and constructive methods.

  12. Quantum mechanical algebraic variational methods for inelastic and reactive molecular collisions

    Science.gov (United States)

    Schwenke, David W.; Haug, Kenneth; Zhao, Meishan; Truhlar, Donald G.; Sun, Yan

    1988-01-01

    The quantum mechanical problem of reactive or nonreactive scattering of atoms and molecules is formulated in terms of square-integrable basis sets with variational expressions for the reactance matrix. Several formulations involving expansions of the wave function (the Schwinger variational principle) or amplitude density (a generalization of the Newton variational principle), single-channel or multichannel distortion potentials, and primitive or contracted basis functions are presented and tested. The test results, for inelastic and reactive atom-diatom collisions, suggest that the methods may be useful for a variety of collision calculations and may allow the accurate quantal treatment of systems for which other available methods would be prohibitively expensive.

  13. Variational methods for problems from plasticity theory and for generalized Newtonian fluids

    CERN Document Server

    Fuchs, Martin

    2000-01-01

    Variational methods are applied to prove the existence of weak solutions for boundary value problems from the deformation theory of plasticity as well as for the slow, steady state flow of generalized Newtonian fluids including the Bingham and Prandtl-Eyring model. For perfect plasticity the role of the stress tensor is emphasized by studying the dual variational problem in appropriate function spaces. The main results describe the analytic properties of weak solutions, e.g. differentiability of velocity fields and continuity of stresses. The monograph addresses researchers and graduate students interested in applications of variational and PDE methods in the mechanics of solids and fluids.

  14. A comparison of methods for the analysis of binomial clustered outcomes in behavioral research.

    Science.gov (United States)

    Ferrari, Alberto; Comelli, Mario

    2016-12-01

In behavioral research, data consisting of a per-subject proportion of "successes" and "failures" over a finite number of trials often arise. These clustered binary data are usually non-normally distributed, which can distort inference if the usual general linear model is applied and the sample size is small. A number of more advanced methods are available, but they are often technically challenging, and a comparative assessment of their performance in behavioral setups has not been performed. We studied the performance of some methods applicable to the analysis of proportions, namely linear regression, Poisson regression, beta-binomial regression and Generalized Linear Mixed Models (GLMMs). We report on a simulation study evaluating the power and Type I error rate of these models in hypothetical scenarios met by behavioral researchers; in addition, we describe results from the application of these methods to data from real experiments. Our results show that, while GLMMs are powerful instruments for the analysis of clustered binary outcomes, beta-binomial regression can outperform them in a range of scenarios. Linear regression gave results consistent with the nominal level of significance, but was overall less powerful. Poisson regression, instead, mostly led to anticonservative inference. GLMMs and beta-binomial regression are generally more powerful than linear regression; yet linear regression is robust to model misspecification in some conditions, whereas Poisson regression suffers heavily from violations of the assumptions when used to model proportion data. We conclude by providing directions for behavioral scientists dealing with clustered binary data and small sample sizes. Copyright © 2016 Elsevier B.V. All rights reserved.

  15. Analysis of Diffusion Problems using Homotopy Perturbation and Variational Iteration Methods

    DEFF Research Database (Denmark)

    Barari, Amin; Poor, A. Tahmasebi; Jorjani, A.

    2010-01-01

In this paper, the variational iteration method and the homotopy perturbation method are applied to different forms of the diffusion equation. The diffusion equations have found wide applications in heat transfer problems, the theory of consolidation and many other problems in engineering. The methods proposed

  16. Application of He's variational iteration method to the fifth-order boundary value problems

    International Nuclear Information System (INIS)

    Shen, S

    2008-01-01

The variational iteration method is introduced to solve fifth-order boundary value problems. This method provides an efficient approach to solving this type of problem without discretization and without the computation of the Adomian polynomials. Numerical results demonstrate that the method is a promising and powerful tool for solving fifth-order boundary value problems
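
    For reference, the core of the variational iteration method is the correction functional below, written for a general equation L u + N u = g(t); the Lagrange multiplier λ is identified via variational theory and ũ_n denotes a restricted variation. This is the standard textbook form, not a reproduction of the paper's specific fifth-order construction.

```latex
% Correction functional of the variational iteration method for L u + N u = g(t):
% \lambda is the general Lagrange multiplier, \tilde{u}_n a restricted variation.
\[
  u_{n+1}(t) \;=\; u_n(t) \;+\; \int_{0}^{t} \lambda(s)\,
  \bigl[\, L\,u_n(s) + N\,\tilde{u}_n(s) - g(s) \,\bigr]\,\mathrm{d}s ,
  \qquad n = 0,1,2,\dots
\]
```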

  17. Dancoff factors with partial absorption in cluster geometry by the direct method

    International Nuclear Information System (INIS)

    Rodrigues, Leticia Jenisch; Leite, Sergio de Queiroz Bogado; Vilhena, Marco Tullio de; Bodmann, Bardo Ernest Josef

    2007-01-01

Accurate analysis of resonance absorption in heterogeneous systems is essential in problems like criticality, breeding ratio and fuel depletion calculations. In compact arrays of fuel rods, resonance absorption is strongly affected by the Dancoff factor, defined in this study as the probability that a neutron emitted from the surface of a fuel element enters another fuel element without any collision in the moderator or cladding. In the original WIMS code, Black Dancoff factors were computed in cluster geometry by the collision probability method, for each one of the symmetrically distinct fuel pin positions in the cell. Recent improvements to the code include a new routine (PIJM) that was created to incorporate a more efficient scheme for computing the collision matrices. In that routine, each system region is considered individually, minimizing convergence problems and reducing the number of neutron track lines required in the in-plane integrations of the Bickley functions for any given accuracy. In the present work, PIJM is extended to compute Grey Dancoff factors for two-dimensional cylindrical cells in cluster geometry. The effectiveness of the method is assessed by comparing Grey Dancoff factors as calculated by PIJM with those available in the literature from the Monte Carlo method, for the irregular geometry of the Canadian CANDU37 assembly. Dancoff factors at five symmetrically distinct fuel pin positions are found to be in very good agreement with the literature results (author)

  18. A robust automatic leukocyte recognition method based on island-clustering texture

    Directory of Open Access Journals (Sweden)

    Xiaoshun Li

    2016-01-01

Full Text Available A leukocyte recognition method for human peripheral blood smears based on island-clustering texture (ICT) is proposed. By analyzing the features of the five typical classes of leukocyte images, a new ICT model is established. Firstly, some feature points are extracted from a gray leukocyte image by mean-shift clustering to serve as the centers of islands. Secondly, region growing is employed to create the regions of the islands, in which the seeds are just these feature points. The distribution of these islands describes a new texture. Finally, a distinguishing parameter vector of these islands is created as the ICT features, which are then combined with the geometric features of the leukocyte. With this representation, the five typical classes of leukocytes can be recognized successfully at a correct recognition rate of more than 92.3% on a total sample of 1310 leukocytes. Experimental results show the feasibility of the proposed method. Further analysis reveals that the method is robust and the results can provide important information for disease diagnosis.

  19. A Spectrum Sensing Method Based on Signal Feature and Clustering Algorithm in Cognitive Wireless Multimedia Sensor Networks

    Directory of Open Access Journals (Sweden)

    Yongwei Zhang

    2017-01-01

Full Text Available To address the difficulty of determining the threshold in spectrum sensing technologies based on random matrix theory, a spectrum sensing method based on a clustering algorithm and signal features is proposed for Cognitive Wireless Multimedia Sensor Networks. First, the wireless communication signal features are obtained from the covariance matrix of the sampled signal. Then, the clustering algorithm is used to classify and test the signal features. Different signal features and clustering algorithms are compared in this paper. The experimental results show that the proposed method has better sensing performance.

  20. Integration of Qualitative and Quantitative Methods: Building and Interpreting Clusters from Grounded Theory and Discourse Analysis

    Directory of Open Access Journals (Sweden)

    Aldo Merlino

    2007-01-01

Full Text Available Qualitative methods present a wide spectrum of application possibilities as well as opportunities for combining qualitative and quantitative methods. In the social sciences, fruitful theoretical discussions and a great deal of empirical research have taken place. This article introduces an empirical investigation which demonstrates the logic of combining methodologies as well as the collection and interpretation, both sequential and simultaneous, of qualitative and quantitative data. Specifically, the investigation process will be described, beginning with a grounded theory methodology and its combination with the techniques of structural semiotics discourse analysis to generate—in a first phase—an instrument for quantitative measurement and to understand—in a second phase—clusters obtained by quantitative analysis. This work illustrates how qualitative methods allow for the comprehension of the discursive and behavioral elements under study, and how they function as support, making sense of and giving meaning to quantitative data. URN: urn:nbn:de:0114-fqs0701219

  1. A comparison of three clustering methods for finding subgroups in MRI, SMS or clinical data: SPSS TwoStep Cluster analysis, Latent Gold and SNOB.

    Science.gov (United States)

    Kent, Peter; Jensen, Rikke K; Kongsted, Alice

    2014-10-02

There are various methodological approaches to identifying clinically important subgroups and one method is to identify clusters of characteristics that differentiate people in cross-sectional and/or longitudinal data using Cluster Analysis (CA) or Latent Class Analysis (LCA). There is a scarcity of head-to-head comparisons that can inform the choice of which clustering method might be suitable for particular clinical datasets and research questions. Therefore, the aim of this study was to perform a head-to-head comparison of three commonly available methods (SPSS TwoStep CA, Latent Gold LCA and SNOB LCA). The performance of these three methods was compared: (i) quantitatively using the number of subgroups detected, the classification probability of individuals into subgroups, the reproducibility of results, and (ii) qualitatively using subjective judgments about each program's ease of use and interpretability of the presentation of results. We analysed five real datasets of varying complexity in a secondary analysis of data from other research projects. Three datasets contained only MRI findings (n = 2,060 to 20,810 vertebral disc levels), one dataset contained only pain intensity data collected for 52 weeks by text (SMS) messaging (n = 1,121 people), and the last dataset contained a range of clinical variables measured in low back pain patients (n = 543 people). Four artificial datasets (n = 1,000 each) containing subgroups of varying complexity were also analysed testing the ability of these clustering methods to detect subgroups and correctly classify individuals when subgroup membership was known. The results from the real clinical datasets indicated that the number of subgroups detected varied, the certainty of classifying individuals into those subgroups varied, the findings had perfect reproducibility, some programs were easier to use and the interpretability of the presentation of their findings also varied. The results from the artificial datasets

  2. Pattern Classification of Tropical Cyclone Tracks over the Western North Pacific using a Fuzzy Clustering Method

    Science.gov (United States)

    Kim, H.; Ho, C.; Kim, J.

    2008-12-01

This study presents a pattern classification of tropical cyclone (TC) tracks over the western North Pacific (WNP) basin during the typhoon season (June through October) for 1965-2006 (42 years in total) using a fuzzy clustering method. After applying the fuzzy c-means clustering algorithm to the TC trajectories, each interpolated into 20 segments of equal length, we divided the whole set of tracks into 7 patterns. The optimal number of fuzzy clusters is determined by several validity measures. The classified TC track patterns show quite different features in their recurving latitudes, genesis locations, and geographical pathways: TCs mainly forming in the east-northern part of the WNP and striking Korea and Japan (C1); mainly forming in the west-southern part of the WNP, traveling a long pathway, and partly striking Japan (C2); mainly striking Taiwan and East China (C3); traveling near the east coast of Japan (C4); traveling the distant ocean east of Japan (C5); moving straight toward South China and Vietnam (C6); and forming in the South China Sea (C7). The atmospheric environments related to each cluster are physically consistent with the corresponding TC track patterns. The straight track pattern is closely linked to a developed anticyclonic circulation to the north of the TC. This implies that the ridge acts as a steering flow forcing TCs to move to the northwest with a more west-oriented track. By contrast, the recurving patterns occur commonly under the influence of strong anomalous westerlies over the TC pathway, but there also exist characteristic anomalous circulations over the mid-latitudes for each pattern. Some clusters are closely related to well-known large-scale phenomena. C1 and C2 are highly related to the ENSO phase: TCs in C1 (C2) are more active during La Niña (El Niño). TC activity in C3 is associated with the WNP summer monsoon. TCs in C4 are more (less) vigorous during the easterly (westerly) phase of the stratospheric quasi-biennial oscillation
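
    The preprocessing step mentioned above, interpolating each track to a fixed number of points so that tracks of different lengths become comparable feature vectors, can be sketched as follows; the example tracks are hypothetical, and the resulting matrix would then be fed to a fuzzy c-means routine.

```python
import numpy as np

def resample_track(lons, lats, n_points=20):
    """Interpolate a TC track to n_points samples equally spaced along the track,
    so tracks of different lengths become fixed-length feature vectors."""
    lons, lats = np.asarray(lons, float), np.asarray(lats, float)
    steps = np.hypot(np.diff(lons), np.diff(lats))
    s = np.concatenate([[0.0], np.cumsum(steps)])      # arc length along the track
    s_new = np.linspace(0.0, s[-1], n_points)
    return np.concatenate([np.interp(s_new, s, lons), np.interp(s_new, s, lats)])

# two hypothetical western North Pacific tracks (lon, lat in degrees)
track_a = resample_track([135, 134, 132, 129, 127], [15, 18, 22, 27, 33])
track_b = resample_track([150, 145, 140, 133], [10, 13, 15, 16])
X = np.vstack([track_a, track_b])   # rows would feed a fuzzy c-means clustering step
```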

  3. Solution of Nonlinear Partial Differential Equations by New Laplace Variational Iteration Method

    Directory of Open Access Journals (Sweden)

    Eman M. A. Hilal

    2014-01-01

    Full Text Available The aim of this study is to give a good strategy for solving some linear and nonlinear partial differential equations in engineering and physics fields, by combining Laplace transform and the modified variational iteration method. This method is based on the variational iteration method, Laplace transforms, and convolution integral, introducing an alternative Laplace correction functional and expressing the integral as a convolution. Some examples in physical engineering are provided to illustrate the simplicity and reliability of this method. The solutions of these examples are contingent only on the initial conditions.

  4. Statistical method for determining ages of globular clusters by fitting isochrones

    International Nuclear Information System (INIS)

    Flannery, B.P.; Johnson, B.C.

    1982-01-01

We describe a statistical procedure to compare models of stellar evolution and atmospheres with color-magnitude diagrams of globular clusters. The isochrone depends on five parameters: m-M, age, [Fe/H], Y, and α, but in practice we can only determine m-M and age for an assumed composition. The technique allows us to determine parameters of the model and their uncertainty, and to assess goodness of fit. We test the method, and evaluate the effect of assumptions on an extensive set of Monte Carlo simulations. We apply the method to extensive observations of NGC 6752 and M5, and to smaller data sets for the clusters M3, M5, M15, and M92. We determine age and m-M for two assumed values of helium Y = (0.2, 0.3), and three values of metallicity with a spread in [Fe/H] of ±0.3 dex. These result in a spread in age of 5-8 Gyr (1 Gyr = 10⁹ yr), and a spread in m-M of 0.5 mag. The mean age is generally younger by 2-3 Gyr than previous estimates. The likely uncertainty associated with an individual fit can be as small as 0.4 Gyr. Most importantly, we find that two uncalibratable sources of systematic error make the results suspect. These are the uncertainty in the stellar temperatures induced by the choice of mixing length, and known errors in stellar atmospheres. These effects could reduce age estimates by an additional 5 Gyr. We conclude that observations do not preclude ages as young as 10 Gyr for globular clusters

  5. Variational methods in the kinetic modeling of nuclear reactors: Recent advances

    International Nuclear Information System (INIS)

    Dulla, S.; Picca, P.; Ravetto, P.

    2009-01-01

The variational approach can be very useful in the study of approximate methods, giving a sound mathematical background to numerical algorithms and computational techniques. The variational approach has been applied to nuclear reactor kinetic equations to obtain a formulation of standard methods such as point kinetics and quasi-statics. More recently, the multipoint method has also been proposed for the efficient simulation of space-energy transients in nuclear reactors and in source-driven subcritical systems. The method is now founded on a variational basis that allows a consistent definition of integral parameters. The mathematical structure of multipoint and modal methods is also investigated, evidencing merits and shortcomings of both techniques. Some numerical results for simple systems are presented and the errors with respect to reference calculations are reported and discussed. (authors)

  6. A variationally coupled FE-BE method for elasticity and fracture mechanics

    Science.gov (United States)

    Lu, Y. Y.; Belytschko, T.; Liu, W. K.

    1991-01-01

    A new method for coupling finite element and boundary element subdomains in elasticity and fracture mechanics problems is described. The essential feature of this new method is that a single variational statement is obtained for the entire domain, and in this process the terms associated with tractions on the interfaces between the subdomains are eliminated. This provides the additional advantage that the ambiguities associated with the matching of discontinuous tractions are circumvented. The method leads to a direct procedure for obtaining the discrete equations for the coupled problem without any intermediate steps. In order to evaluate this method and compare it with previous methods, a patch test for coupled procedures has been devised. Evaluation of this variationally coupled method and other methods, such as stiffness coupling and constraint traction matching coupling, shows that this method is substantially superior. Solutions for a series of fracture mechanics problems are also reported to illustrate the effectiveness of this method.

  7. Comparison Of Keyword Based Clustering Of Web Documents By Using Openstack 4j And By Traditional Method

    Directory of Open Access Journals (Sweden)

    Shiza Anand

    2015-08-01

Full Text Available The number of hypertext documents on the World Wide Web is increasing continuously. Clustering methods are therefore required to bind documents into cluster repositories according to the similarity between the documents. Various clustering methods exist, such as hierarchical, K-means, fuzzy-logic-based and centroid-based methods. These keyword-based clustering methods take a large amount of time for creating containers and putting documents into their respective containers. The traditional methods use the file handling techniques of different programming languages for creating repositories and transferring web documents into these containers. In contrast, the openstack4j SDK is a newer technique for creating containers and moving web documents into these containers according to similarity in much less time than the traditional methods. Another benefit of this technique is that the SDK understands and reads all types of files, such as jpg, html, pdf and doc. This paper compares the time required for clustering documents using openstack4j and using traditional methods, and suggests that search engines adopt this technique for clustering so that they can return results to user queries in less time.

  8. Lexical preferences in Dutch verbal cluster ordering

    NARCIS (Netherlands)

    Bloem, J.; Bellamy, K.; Karvovskaya, E.; Kohlberger, M.; Saad, G.

    2016-01-01

    This study discusses lexical preferences as a factor affecting the word order variation in Dutch verbal clusters. There are two grammatical word orders for Dutch two-verb clusters, with no clear meaning difference. Using the method of collostructional analysis, I find significant associations

  9. Using spectral element method to solve variational inequalities with applications in finance

    International Nuclear Information System (INIS)

    Moradipour, M.; Yousefi, S.A.

    2015-01-01

Under the Black–Scholes model, the value of an American option solves a time dependent variational inequality problem (VIP). In this paper, we first discretize the variational inequality of the American option in the temporal direction by applying Rannacher time stepping and obtain a sequence of elliptic variational inequalities. Second, we discretize the spatial domain of the variational inequalities by using spectral element methods with high order Lagrangian polynomials introduced on Gauss–Legendre–Lobatto points. Also, by computing the integrals with the Gauss–Legendre–Lobatto quadrature rule, we derive a sequence of linear complementarity problems (LCPs) having a positive definite sparse coefficient matrix. To find the unique solutions of the LCPs, we use the projected successive over-relaxation (PSOR) algorithm. Furthermore, we present some existence and uniqueness theorems for the variational inequalities and LCPs. Finally, the theoretical results are verified on relevant numerical examples.
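
    The PSOR step can be sketched as follows for a generic LCP w = Mz + q, z ≥ 0, w ≥ 0, zᵀw = 0 with positive definite M; the relaxation parameter and the small test matrix below are assumptions, not the spectral element matrices of the paper.

```python
import numpy as np

def psor_lcp(M, q, omega=1.2, n_iter=500, tol=1e-10):
    """Projected SOR for the LCP: find z >= 0 with Mz + q >= 0 and z.(Mz + q) = 0."""
    z = np.zeros_like(q)
    for _ in range(n_iter):
        z_old = z.copy()
        for i in range(len(q)):
            r = q[i] + M[i] @ z            # residual of row i at the current iterate
            z[i] = max(0.0, z[i] - omega * r / M[i, i])
        if np.linalg.norm(z - z_old, np.inf) < tol:
            break
    return z

# small test: a tridiagonal positive definite M, similar in spirit to the matrices
# that arise from a discretized American option variational inequality
n = 8
M = 2.0 * np.eye(n) - np.diag(np.ones(n - 1), 1) - np.diag(np.ones(n - 1), -1)
q = -np.ones(n)
z = psor_lcp(M, q)
print("z >= 0:", bool(np.all(z >= 0)))
print("complementarity residual:", np.max(np.abs(np.minimum(z, M @ z + q))))
```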

  10. Co-variations and clustering of chronic disease behavioral risk factors in China: China Chronic Disease and Risk Factor Surveillance, 2007.

    Directory of Open Access Journals (Sweden)

    Yichong Li

Full Text Available BACKGROUND: Chronic diseases have become the leading causes of mortality in China and the related behavioral risk factors (BRFs) have changed dramatically in past decades. We aimed to examine the prevalence, co-variations, clustering and independent correlates of five BRFs at the national level. METHODOLOGY/PRINCIPAL FINDINGS: We used data from the 2007 China Chronic Disease and Risk Factor Surveillance, in which multistage cluster sampling was adopted to collect a nationally representative sample of 49,247 Chinese aged 15 to 69 years. We estimated the prevalence and clustering (mean number of BRFs) of five BRFs: tobacco use, excessive alcohol drinking, insufficient intake of vegetables and fruit, physical inactivity, and overweight or obesity. We fitted binary logistic regression models to examine the co-variations among the five BRFs with adjustment for demographic and socioeconomic factors, chronic conditions and the other BRFs. An ordinal logistic regression was constructed to investigate the independent associations between each covariate and the clustering of BRFs within individuals. Overall, 57.0% of the Chinese population had at least two BRFs and the mean number of BRFs was 1.80 (95% confidence interval: 1.78-1.83). Eight of the ten pairs of bivariate associations between the five BRFs were found to be statistically significant. Chinese with older age, male sex, rural residence, lower education level and lower yearly household income had an increased likelihood of having more BRFs. CONCLUSIONS/SIGNIFICANCE: Current BRFs place the majority of Chinese aged 15 to 69 years at risk for the future development of chronic disease, which calls for urgent public health programs to reduce these risk factors. The prominent correlations between BRFs imply that a combined package of interventions targeting multiple BRFs might be appropriate. These interventions should target the older population, men, and rural residents, especially those with lower SES.

  11. Stability of maximum-likelihood-based clustering methods: exploring the backbone of classifications

    International Nuclear Information System (INIS)

    Mungan, Muhittin; Ramasco, José J

    2010-01-01

Components of complex systems are often classified according to the way they interact with each other. In graph theory such groups are known as clusters or communities. Many different techniques have been recently proposed to detect them, some of which involve inference methods using either Bayesian or maximum likelihood approaches. In this paper, we study a statistical model designed for detecting clusters based on connection similarity. The basic assumption of the model is that the graph was generated by a certain grouping of the nodes and an expectation maximization algorithm is employed to infer that grouping. We show that the method admits further development to yield a stability analysis of the groupings that quantifies the extent to which each node influences its neighbors' group membership. Our approach naturally allows for the identification of the key elements responsible for the grouping and their resilience to changes in the network. Given the generality of the assumptions underlying the statistical model, such nodes are likely to play special roles in the original system. We illustrate this point by analyzing several empirical networks for which further information about the properties of the nodes is available. The search and identification of stabilizing nodes thus constitutes a novel technique to characterize the relevance of nodes in complex networks

  12. A method for improved clustering and classification of microscopy images using quantitative co-localization coefficients

    LENUS (Irish Health Repository)

    Singan, Vasanth R

    2012-06-08

Abstract. Background: The localization of proteins to specific subcellular structures in eukaryotic cells provides important information with respect to their function. Fluorescence microscopy approaches to determine localization distribution have proved to be an essential tool in the characterization of unknown proteins, and are now particularly pertinent as a result of the wide availability of fluorescently-tagged constructs and antibodies. However, there are currently very few image analysis options able to effectively discriminate proteins with apparently similar distributions in cells, despite this information being important for protein characterization. Findings: We have developed a novel method for combining two existing image analysis approaches, which results in highly efficient and accurate discrimination of proteins with seemingly similar distributions. We have combined image texture-based analysis with quantitative co-localization coefficients, a method that has traditionally only been used to study the spatial overlap between two populations of molecules. Here we describe and present a novel application for quantitative co-localization, as applied to the study of Rab family small GTP binding proteins localizing to the endomembrane system of cultured cells. Conclusions: We show how quantitative co-localization can be used alongside texture feature analysis, resulting in improved clustering of microscopy images. The use of co-localization as an additional clustering parameter is non-biased and highly applicable to high-throughput image data sets.

  13. Measuring Group Synchrony: A Cluster-Phase Method for Analyzing Multivariate Movement Time-Series

    Directory of Open Access Journals (Sweden)

    Michael eRichardson

    2012-10-01

Full Text Available A new method for assessing group synchrony is introduced as being potentially useful for objectively determining the degree of group cohesiveness or entitativity. The cluster-phase method of Frank and Richardson (2010) was used to analyze movement data from the rocking chair movements of six-member groups who rocked their chairs while seated in a circle facing the center. In some trials group members had no information about others’ movements (their eyes were shut), or they had their eyes open and gazed at a marker in the center of the group. As predicted, the group-level synchrony measure was able to distinguish between situations where synchrony would have been possible and situations where it would be impossible. Moreover, other aspects of the analysis illustrated how the cluster-phase measures can be used to determine the type of patterning of group synchrony and, when integrated with multi-level modeling, can be used to examine individual-level and dyad-level differences in synchrony as well.
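
    The exact Frank and Richardson procedure is not reproduced here, but its core ingredients can be sketched: each member's instantaneous phase is extracted with the Hilbert transform and group synchrony is summarized by a Kuramoto-style order parameter (1 = perfect synchrony). The toy rocking signals below are assumptions.

```python
import numpy as np
from scipy.signal import hilbert

def group_synchrony(movements):
    """movements: (n_members, n_samples) array of movement time series.
    Returns the time-averaged Kuramoto order parameter (0 = none, 1 = perfect)."""
    centered = movements - movements.mean(axis=1, keepdims=True)
    phases = np.angle(hilbert(centered, axis=1))       # instantaneous phases
    order = np.abs(np.exp(1j * phases).mean(axis=0))   # group coherence at each sample
    return float(order.mean())

t = np.linspace(0, 30, 3000)
rng = np.random.default_rng(6)
synced = np.array([np.sin(2 * np.pi * 0.5 * t + rng.normal(0, 0.2)) for _ in range(6)])
solo = np.array([np.sin(2 * np.pi * rng.uniform(0.3, 0.8) * t + rng.uniform(0, 2 * np.pi))
                 for _ in range(6)])
print(group_synchrony(synced), group_synchrony(solo))
```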

  14. The multi-scattering-Xα method for analysis of the electronic structure of atomic clusters

    International Nuclear Information System (INIS)

    Bahurmuz, A.A.; Woo, C.H.

    1984-12-01

A computer program, MSXALPHA, has been developed to carry out a quantum-mechanical analysis of the electronic structure of molecules and atomic clusters using the Multi-Scattering-Xα (MSXα) method. The MSXALPHA program is based on a code obtained from the University of Alberta; several improvements and new features were incorporated to increase generality and efficiency. The major ones are: (1) minimization of core memory usage, (2) reduction of execution time, (3) introduction of a dynamic core allocation scheme for a large number of arrays, (4) incorporation of an atomic program to generate numerical orbitals used to construct the initial molecular potential, and (5) inclusion of a routine to evaluate total energy. This report is divided into three parts. The first discusses the theory of the MSXα method. The second gives a detailed description of the program, MSXALPHA. The third discusses the results of calculations carried out for the methane molecule (CH₄) and a four-atom zirconium cluster (Zr₄)

  15. A numerical study of spin-dependent organization of alkali-metal atomic clusters using density-functional method

    International Nuclear Information System (INIS)

    Liu Xuan; Ito, Haruhiko; Torikai, Eiko

    2012-01-01

We calculate the different geometric isomers of spin clusters composed of a small number of alkali-metal atoms using the UB3LYP density-functional method. The electron density distribution of clusters changes according to the value of total spin. Steric structures as well as planar structures arise when the number of atoms increases. The lowest spin state is the most stable and Liₙ, Naₙ, Kₙ, Rbₙ, and Csₙ with n = 2–8 can be formed in higher spin states. In the highest spin state, the preparation of clusters depends on the kind and the number of constituent atoms. The interaction energy between alkali-metal atoms and rare-gas atoms is smaller than the binding energy of spin clusters. Consequently, it is possible to self-organize the alkali-metal-atom clusters on a non-wetting substrate coated with rare-gas atoms.

  16. Validity studies among hierarchical methods of cluster analysis using cophenetic correlation coefficient

    Energy Technology Data Exchange (ETDEWEB)

    Carvalho, Priscilla R.; Munita, Casimiro S.; Lapolli, André L., E-mail: prii.ramos@gmail.com, E-mail: camunita@ipen.br, E-mail: alapolli@ipen.br [Instituto de Pesquisas Energéticas e Nucleares (IPEN/CNEN-SP), São Paulo, SP (Brazil)

    2017-07-01

The literature presents many methods for partitioning a data base, and it is difficult to choose which is the most suitable, since the various combinations of methods based on different measures of dissimilarity can lead to different patterns of grouping and false interpretations. Nevertheless, little effort has been expended in evaluating these methods empirically using an archaeological data base. Thus, the objective of this work is to make a comparative study of the different cluster analysis methods and identify which is the most appropriate. For this, the study was carried out using a data base of the Archaeometric Studies Group from IPEN-CNEN/SP, in which 45 samples of ceramic fragments from three archaeological sites were analyzed by instrumental neutron activation analysis (INAA) to determine the mass fractions of 13 elements (As, Ce, Cr, Eu, Fe, Hf, La, Na, Nd, Sc, Sm, Th, U). The methods used for this study were: single linkage, complete linkage, average linkage, centroid and Ward. The validation was done using the cophenetic correlation coefficient, and by comparing these values the average linkage method obtained the best results. A script for the statistical program R with some functions was created to obtain the cophenetic correlation. By means of these values it was possible to choose the most appropriate method to be used on the data base. (author)
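
    The validation step can be reproduced in outline with SciPy (the study itself used an R script): build each hierarchical linkage from the same distance matrix and compare cophenetic correlation coefficients. The element-concentration matrix below is a synthetic stand-in for the 45 × 13 INAA data.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, cophenet
from scipy.spatial.distance import pdist

rng = np.random.default_rng(7)
# stand-in for the 45 x 13 matrix of element mass fractions (e.g., log-transformed)
X = np.vstack([rng.normal(loc, 0.1, (15, 13)) for loc in (0.0, 0.5, 1.0)])

d = pdist(X, metric="euclidean")
for method in ("single", "complete", "average", "centroid", "ward"):
    Z = linkage(d, method=method)
    c, _ = cophenet(Z, d)          # cophenetic correlation coefficient
    print(f"{method:9s} {c:.3f}")
```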

  18. A Novel Double Cluster and Principal Component Analysis-Based Optimization Method for the Orbit Design of Earth Observation Satellites

    Directory of Open Access Journals (Sweden)

    Yunfeng Dong

    2017-01-01

Full Text Available The weighted sum and genetic algorithm-based hybrid method (WSGA-based HM), which has been applied to multiobjective orbit optimizations, is negatively influenced by human factors through the artificial choice of the weight coefficients in the weighted sum method and by the slow convergence of GA. To address these two problems, a cluster and principal component analysis-based optimization method (CPC-based OM) is proposed, in which many candidate orbits are gradually randomly generated until the optimal orbit is obtained using a data mining method, that is, cluster analysis based on principal components. Then, a second cluster analysis of the orbital elements is introduced into CPC-based OM to improve the convergence, developing a novel double cluster and principal component analysis-based optimization method (DCPC-based OM). In DCPC-based OM, the cluster analysis based on principal components has the advantage of reducing the human influences, and the cluster analysis based on the six orbital elements can reduce the search space to effectively accelerate convergence. The test results from a multiobjective numerical benchmark function and the orbit design results of an Earth observation satellite show that DCPC-based OM converges more efficiently than WSGA-based HM, and that DCPC-based OM, to some degree, reduces the influence of the human factors present in WSGA-based HM.

  19. A New Approximation Method for Solving Variational Inequalities and Fixed Points of Nonexpansive Mappings

    Directory of Open Access Journals (Sweden)

    Klin-eam Chakkrid

    2009-01-01

Full Text Available Abstract A new approximation method for solving variational inequalities and finding fixed points of nonexpansive mappings is introduced and studied. We prove a strong convergence theorem for the new iterative scheme to a common element of the set of fixed points of a nonexpansive mapping and the set of solutions of the variational inequality for an inverse-strongly monotone mapping, which solves some variational inequalities. Moreover, we apply our main result to obtain strong convergence to a common fixed point of a nonexpansive mapping and a strictly pseudocontractive mapping in a Hilbert space.

  20. A Clustering K-Anonymity Privacy-Preserving Method for Wearable IoT Devices

    Directory of Open Access Journals (Sweden)

    Fang Liu

    2018-01-01

Full Text Available Wearable technology is one of the greatest applications of the Internet of Things. The popularity of wearable devices has led to a massive scale of personal (user-specific) data. Generally, data holders (manufacturers of wearable devices) are willing to share these data with others to get benefits. However, significant privacy concerns would arise when sharing the data with a third party in an improper manner. In this paper, we first propose a specific threat model for the data sharing process of wearable devices’ data. Then we propose a K-anonymity method based on clustering to preserve the privacy of wearable IoT devices’ data and guarantee the usability of the collected data. Experiment results demonstrate the effectiveness of the proposed method.
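
    The paper's specific clustering algorithm is not reproduced here; as a generic illustration of k-anonymity, records can be grouped into clusters of at least k similar individuals and the quasi-identifiers generalized to the cluster range, so that each record is indistinguishable from at least k - 1 others. Column names and data below are hypothetical.

```python
import numpy as np
import pandas as pd

def k_anonymize(df, quasi_cols, k=5):
    """Microaggregation-style k-anonymity: sort by the quasi-identifiers, form
    groups of at least k records, and replace values with the group's range."""
    out = df.sort_values(quasi_cols).reset_index(drop=True)
    out[quasi_cols] = out[quasi_cols].astype(object)   # ranges become strings
    n = len(out)
    bounds = list(range(0, n - n % k, k)) + [n]        # last group absorbs remainder
    for start, stop in zip(bounds[:-1], bounds[1:]):
        for col in quasi_cols:
            group = out.loc[start:stop - 1, col]
            out.loc[start:stop - 1, col] = f"{group.min()}-{group.max()}"
    return out

# hypothetical wearable-device records: age and daily step count as quasi-identifiers
rng = np.random.default_rng(8)
df = pd.DataFrame({"age": rng.integers(18, 70, 50),
                   "steps": rng.integers(2000, 15000, 50),
                   "heart_rate": rng.integers(55, 100, 50)})
print(k_anonymize(df, ["age", "steps"], k=5).head())
```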

  1. Comparison and combination of "direct" and fragment based local correlation methods: Cluster in molecules and domain based local pair natural orbital perturbation and coupled cluster theories

    Science.gov (United States)

    Guo, Yang; Becker, Ute; Neese, Frank

    2018-03-01

    Local correlation theories have been developed in two main flavors: (1) "direct" local correlation methods apply local approximation to the canonical equations and (2) fragment based methods reconstruct the correlation energy from a series of smaller calculations on subsystems. The present work serves two purposes. First, we investigate the relative efficiencies of the two approaches using the domain-based local pair natural orbital (DLPNO) approach as the "direct" method and the cluster in molecule (CIM) approach as the fragment based approach. Both approaches are applied in conjunction with second-order many-body perturbation theory (MP2) as well as coupled-cluster theory with single-, double- and perturbative triple excitations [CCSD(T)]. Second, we have investigated the possible merits of combining the two approaches by performing CIM calculations with DLPNO methods serving as the method of choice for performing the subsystem calculations. Our cluster-in-molecule approach is closely related to but slightly deviates from approaches in the literature since we have avoided real space cutoffs. Moreover, the neglected distant pair correlations in the previous CIM approach are considered approximately. Six very large molecules (503-2380 atoms) were studied. At both MP2 and CCSD(T) levels of theory, the CIM and DLPNO methods show similar efficiency. However, DLPNO methods are more accurate for 3-dimensional systems. While we have found only little incentive for the combination of CIM with DLPNO-MP2, the situation is different for CIM-DLPNO-CCSD(T). This combination is attractive because (1) the better parallelization opportunities offered by CIM; (2) the methodology is less memory intensive than the genuine DLPNO-CCSD(T) method and, hence, allows for large calculations on more modest hardware; and (3) the methodology is applicable and efficient in the frequently met cases, where the largest subsystem calculation is too large for the canonical CCSD(T) method.

  2. The use of Adomian decomposition method for solving problems in calculus of variations

    Directory of Open Access Journals (Sweden)

    Mehdi Dehghan

    2006-01-01

    Full Text Available In this paper, a numerical method is presented for finding the solution of some variational problems. The main objective is to find the solution of an ordinary differential equation which arises from the variational problem. This work is done using the Adomian decomposition method, which is a powerful tool for solving a large class of problems. In this approach, the solution is found in the form of a convergent power series with easily computed components. To show the efficiency of the method, numerical results are presented.
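
    As a concrete illustration of the decomposition idea (not of the specific variational problems treated in the paper), the sketch below builds the Adomian-style series solution of the simple initial-value problem y' = y, y(0) = 1, whose components satisfy y_0 = 1 and y_{n+1}(x) = integral from 0 to x of y_n(t) dt; the partial sums reproduce the Taylor series of e^x. The example problem is ours.

        # Minimal Adomian-style decomposition sketch for the linear IVP y' = y, y(0) = 1.
        # Components obey y_0 = 1 and y_{n+1}(x) = integral_0^x y_n(t) dt, so the
        # partial sum converges to exp(x).  Illustrative example only.
        import sympy as sp

        x, t = sp.symbols("x t")
        y_n = sp.Integer(1)            # y_0 from the initial condition y(0) = 1
        partial_sum = y_n
        for _ in range(6):             # add six more components y_1 ... y_6
            y_n = sp.integrate(y_n.subs(x, t), (t, 0, x))
            partial_sum = partial_sum + y_n

        print(sp.expand(partial_sum))          # 1 + x + x**2/2 + ... + x**6/720
        print(float(partial_sum.subs(x, 1)))   # ~2.718, approaching e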

  3. METHODS FOR CLUSTERING TIME SERIES DATA ACQUIRED FROM MOBILE HEALTH APPS.

    Science.gov (United States)

    Tignor, Nicole; Wang, Pei; Genes, Nicholas; Rogers, Linda; Hershman, Steven G; Scott, Erick R; Zweig, Micol; Yvonne Chan, Yu-Feng; Schadt, Eric E

    2017-01-01

    In our recent Asthma Mobile Health Study (AMHS), thousands of asthma patients across the country contributed medical data through the iPhone Asthma Health App on a daily basis for an extended period of time. The collected data included daily self-reported asthma symptoms, symptom triggers, and real time geographic location information. The AMHS is just one of many studies occurring in the context of now many thousands of mobile health apps aimed at improving wellness and better managing chronic disease conditions, leveraging the passive and active collection of data from mobile, handheld smart devices. The ability to identify patient groups or patterns of symptoms that might predict adverse outcomes such as asthma exacerbations or hospitalizations from these types of large, prospectively collected data sets, would be of significant general interest. However, conventional clustering methods cannot be applied to these types of longitudinally collected data, especially survey data actively collected from app users, given heterogeneous patterns of missing values due to: 1) varying survey response rates among different users, 2) varying survey response rates over time of each user, and 3) non-overlapping periods of enrollment among different users. To handle such complicated missing data structure, we proposed a probability imputation model to infer missing data. We also employed a consensus clustering strategy in tandem with the multiple imputation procedure. Through simulation studies under a range of scenarios reflecting real data conditions, we identified favorable performance of the proposed method over other strategies that impute the missing value through low-rank matrix completion. When applying the proposed new method to study asthma triggers and symptoms collected as part of the AMHS, we identified several patient groups with distinct phenotype patterns. Further validation of the methods described in this paper might be used to identify clinically important
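
    The published pipeline is more elaborate (it builds a probability model for the missingness), but the general pattern can be sketched roughly as follows: impute the missing entries several times, cluster each imputed copy, and combine the runs through a consensus (co-clustering) matrix. The simple mean-plus-noise imputation and the synthetic survey matrix below are stand-ins used only to make the skeleton runnable; they are not the probability imputation model of the paper.

        # Rough sketch of multiple imputation + consensus clustering for survey
        # matrices with missing values.  The mean/noise imputation below is a
        # stand-in for the probability imputation model described in the abstract.
        import numpy as np
        from sklearn.cluster import KMeans

        rng = np.random.default_rng(0)
        X = rng.normal(size=(60, 8))
        X[:30, :4] += 2.0                                  # two latent groups
        mask = rng.random(X.shape) < 0.2                   # 20% missing entries
        X[mask] = np.nan

        n, n_imputations, k = X.shape[0], 10, 2
        consensus = np.zeros((n, n))
        col_mean = np.nanmean(X, axis=0)
        col_std = np.nanstd(X, axis=0)

        for _ in range(n_imputations):
            Xi = X.copy()
            miss = np.isnan(Xi)
            # draw each missing value from a normal centred on the column mean
            Xi[miss] = (col_mean + col_std * rng.standard_normal(X.shape))[miss]
            labels = KMeans(n_clusters=k, n_init=10).fit_predict(Xi)
            consensus += (labels[:, None] == labels[None, :])

        consensus /= n_imputations                         # co-clustering frequencies
        final = KMeans(n_clusters=k, n_init=10).fit_predict(consensus)
        print(final)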

  4. Study of the Cl2 molecule by the variational cellular method

    International Nuclear Information System (INIS)

    Rosato, A.; Lima, M.A.P.

    1984-01-01

    A self-consistent calculation based on the Variational Cellular Method is performed on the Cl2 molecule. The results obtained for the ground-state potential curve and the first excited state, the dissociation energy, the molecular orbital energies and other related parameters are compared with other calculation methods and with available data, and the agreement is satisfactory. (Author) [pt

  5. A variation method in the optimization problem of the minority game model

    International Nuclear Information System (INIS)

    Blazhyijevs'kij, L.; Yanyishevs'kij, V.

    2009-01-01

    This article contains the results of applying a variation method in the investigation of the optimization problem in the minority game model. The suggested approach is shown to give relevant results about the phase transition in the model. Other methods pertinent to the problem have also been assessed.

  6. A study on linear and nonlinear Schrodinger equations by the variational iteration method

    International Nuclear Information System (INIS)

    Wazwaz, Abdul-Majid

    2008-01-01

    In this work, we introduce a framework to obtain exact solutions to linear and nonlinear Schrodinger equations. He's variational iteration method (VIM) is used for the analytic treatment of these equations. Numerical examples are tested to show the pertinent features of this method.
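
    For readers unfamiliar with the correction functional at the heart of the VIM, the sketch below applies it to the toy linear problem u'(t) = i u(t), u(0) = 1 (a zero-dimensional stand-in for a linear Schrodinger-type equation), with the simple Lagrange multiplier lambda = -1; the iterates build up the series of e^{it}. This toy example is ours, not one of the equations treated in the paper.

        # Variational iteration sketch for u'(t) = i*u(t), u(0) = 1, using the
        # correction functional u_{n+1}(t) = u_n(t) - integral_0^t (u_n'(s) - i*u_n(s)) ds
        # (Lagrange multiplier lambda = -1).  The iterates build up exp(i*t).
        import sympy as sp

        t, s = sp.symbols("t s")
        u = sp.Integer(1)                              # u_0(t) = 1 from u(0) = 1
        for _ in range(5):
            u_s = u.subs(t, s)
            residual = sp.diff(u_s, s) - sp.I * u_s    # u_n'(s) - i*u_n(s)
            u = sp.expand(u - sp.integrate(residual, (s, 0, t)))

        print(u)                                        # partial sum of exp(i*t)
        exact = sp.series(sp.exp(sp.I * t), t, 0, 6).removeO()
        print(sp.simplify(sp.expand(u - exact)))        # 0: iterates match the series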

  7. Modified variational iteration method for an El Niño Southern Oscillation delayed oscillator

    International Nuclear Information System (INIS)

    Cao Xiao-Qun; Song Jun-Qiang; Zhu Xiao-Qian; Zhang Li-Lun; Zhang Wei-Min; Zhao Jun

    2012-01-01

    This paper studies a delayed air-sea coupled oscillator describing the physical mechanism of the El Niño Southern Oscillation. The approximate expansions of the delayed differential equation's solution are obtained successfully by the modified variational iteration method. The numerical results illustrate the effectiveness and correctness of the method by comparison with the exact solution of the reduced model. (general)

  8. Introduction to the Special Issue on Advancing Methods for Analyzing Dialect Variation.

    Science.gov (United States)

    Clopper, Cynthia G

    2017-07-01

    Documenting and analyzing dialect variation is traditionally the domain of dialectology and sociolinguistics. However, modern approaches to acoustic analysis of dialect variation have their roots in Peterson and Barney's [(1952). J. Acoust. Soc. Am. 24, 175-184] foundational work on the acoustic analysis of vowels that was published in the Journal of the Acoustical Society of America (JASA) over 6 decades ago. Although Peterson and Barney (1952) were not primarily concerned with dialect variation, their methods laid the groundwork for the acoustic methods that are still used by scholars today to analyze vowel variation within and across languages. In more recent decades, a number of methodological advances in the study of vowel variation have been published in JASA, including work on acoustic vowel overlap and vowel normalization. The goal of this special issue was to honor that tradition by bringing together a set of papers describing the application of emerging acoustic, articulatory, and computational methods to the analysis of dialect variation in vowels and beyond.

  9. The cosmological analysis of X-ray cluster surveys - I. A new method for interpreting number counts

    Science.gov (United States)

    Clerc, N.; Pierre, M.; Pacaud, F.; Sadibekova, T.

    2012-07-01

    We present a new method aimed at simplifying the cosmological analysis of X-ray cluster surveys. It is based on purely instrumental observable quantities considered in a two-dimensional X-ray colour-magnitude diagram (hardness ratio versus count rate). The basic principle is that, even in rather shallow surveys, substantial information on cluster redshift and temperature is present in the raw X-ray data and can be statistically extracted; in parallel, such diagrams can be readily predicted from an ab initio cosmological modelling. We illustrate the methodology for the case of a 100 deg^2 XMM survey having a sensitivity of ~10^-14 erg s^-1 cm^-2 and fit, at the same time, the survey selection function, the cluster evolutionary scaling relations and the cosmology; our sole assumption - driven by the limited size of the sample considered in the case study - is that the local cluster scaling relations are known. We devote special attention to the realistic modelling of the count-rate measurement uncertainties and evaluate the potential of the method via a Fisher analysis. In the absence of individual cluster redshifts, the count rate and hardness ratio (CR-HR) method appears to be much more efficient than the traditional approach based on cluster counts (i.e. dn/dz, requiring redshifts). In the case where redshifts are available, our method performs similarly to the traditional mass function (dn/dM/dz) for the purely cosmological parameters, but better constrains the parameters defining the cluster scaling relations and their evolution. A further practical advantage of the CR-HR method is its simplicity: this fully top-down approach totally bypasses the tedious steps of deriving cluster masses from X-ray temperature measurements.

  10. Why so GLUMM? Detecting depression clusters through graphing lifestyle-environs using machine-learning methods (GLUMM).

    Science.gov (United States)

    Dipnall, J F; Pasco, J A; Berk, M; Williams, L J; Dodd, S; Jacka, F N; Meyer, D

    2017-01-01

    Key lifestyle-environ risk factors are operative for depression, but it is unclear how these risk factors cluster. Machine-learning (ML) algorithms exist that learn, extract, identify and map underlying patterns to identify groupings of depressed individuals without constraints. The aim of this research was to use a large epidemiological study to identify and characterise depression clusters through "Graphing lifestyle-environs using machine-learning methods" (GLUMM). Two ML algorithms were implemented: unsupervised self-organised mapping (SOM) to create GLUMM clusters and a supervised boosted regression algorithm to describe the clusters. Ninety-six "lifestyle-environ" variables were used from the National Health and Nutrition Examination Survey (2009-2010). Multivariate logistic regression validated the clusters and controlled for possible sociodemographic confounders. The SOM identified two GLUMM cluster solutions. These solutions contained one dominant depressed cluster (GLUMM5-1, GLUMM7-1). Equal proportions of members in each cluster rated as highly depressed (17%). Alcohol consumption and demographics validated the clusters. Boosted regression identified GLUMM5-1 as more informative than GLUMM7-1. Members were more likely to: have problems sleeping; eat unhealthily; have spent ≤2 years in their home; live in an old home; perceive themselves as underweight; be exposed to work fumes; have experienced sex at ≤14 years; and not perform moderate recreational activities. A positive relationship was found between GLUMM5-1 and depression (OR: 7.50), with significant interactions for those married/living with a partner (P=0.001). Using ML-based GLUMM to form ordered depressive clusters from multitudinous lifestyle-environ variables enabled a deeper exploration of the heterogeneous data, uncovering a better understanding of the relationships between the complex mental health factors. Copyright © 2016 Elsevier Masson SAS. All rights reserved.
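
    The sketch below is a stripped-down self-organising map in plain NumPy, shown only to illustrate how rows of a (standardised) survey matrix can be mapped onto a small grid of prototype units whose best-matching unit serves as a cluster label. It is not the GLUMM code; the grid size, learning rate and neighbourhood schedule are arbitrary choices, and the random data stand in for the 96 lifestyle-environ variables.

        # Minimal self-organising map (SOM) sketch in NumPy: each survey row is assigned
        # to its best-matching prototype unit on a small grid, which acts as its cluster.
        # Grid size, learning rate and neighbourhood schedule are arbitrary, not GLUMM's.
        import numpy as np

        rng = np.random.default_rng(1)
        X = rng.normal(size=(200, 10))          # stand-in for standardised survey variables

        grid = np.array([(i, j) for i in range(4) for j in range(4)])   # 4x4 map
        W = rng.normal(size=(len(grid), X.shape[1]))                    # prototype vectors

        n_steps = 3000
        for step in range(n_steps):
            x = X[rng.integers(len(X))]
            bmu = np.argmin(((W - x) ** 2).sum(axis=1))                 # best-matching unit
            frac = step / n_steps
            lr = 0.5 * (1 - frac)                                       # decaying learning rate
            sigma = 2.0 * (1 - frac) + 0.5                              # shrinking neighbourhood
            dist2 = ((grid - grid[bmu]) ** 2).sum(axis=1)
            h = np.exp(-dist2 / (2 * sigma ** 2))                       # neighbourhood weights
            W += lr * h[:, None] * (x - W)

        labels = np.array([np.argmin(((W - x) ** 2).sum(axis=1)) for x in X])
        print(np.bincount(labels, minlength=len(grid)))                 # cluster occupancy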

  11. New successive variational method of tensor-optimized antisymmetrized molecular dynamics for nuclear many-body systems

    Science.gov (United States)

    Myo, Takayuki; Toki, Hiroshi; Ikeda, Kiyomi; Horiuchi, Hisashi; Suhara, Tadahiro

    2017-07-01

    We recently proposed a new variational theory of “tensor-optimized antisymmetrized molecular dynamics” (TOAMD), which treats the strong interaction explicitly for finite nuclei [T. Myo et al., Prog. Theor. Exp. Phys. 2015, 073D02 (2015)]. In TOAMD, the correlation functions for the tensor force and the short-range repulsion and their multiple products are successively operated on the AMD state. The correlated Hamiltonian is expanded into many-body operators by using the cluster expansion, and all the resulting operators are taken into account in the calculation without any truncation. We show detailed results of TOAMD with the nucleon-nucleon interaction AV8' for s-shell nuclei. The binding energy and the Hamiltonian components converge successively to the exact values of the few-body calculations. We also apply TOAMD to the Malfliet-Tjon central potential having a strong short-range repulsion. TOAMD can treat the short-range correlation and provides accurate energies of s-shell nuclei, reproducing the results of few-body calculations. It turns out that the numerical accuracy of TOAMD with double products of the correlation functions is beyond the variational Monte Carlo method with Jastrow's product-type correlation functions.

  12. Hadron formation in a non-ideal quark gluon plasma using Mayer's method of cluster expansion

    International Nuclear Information System (INIS)

    Prasanth, J.P.; Bannur, Vishnu M.

    2015-01-01

    This work investigates the applicability of Mayer's cluster expansion method for deriving the equation of state (EoS) of the quark-antiquark plasma. The dissociation of heavier hadrons in the QGP is studied. The possibility of the existence of quarkonium after deconfinement, at temperatures above the critical temperature (T > T_c), is investigated. The EoS has been studied by calculating the second and third cluster integrals. The results are compared and discussed with available works. (author)

  13. Own-wage labor supply elasticities: variation across time and estimation methods

    Directory of Open Access Journals (Sweden)

    Olivier Bargain

    2016-10-01

    Full Text Available Abstract There is a huge variation in the size of labor supply elasticities in the literature, which hampers policy analysis. While recent studies show that preference heterogeneity across countries explains little of this variation, we focus on two other important features: observation period and estimation method. We start with a thorough survey of existing evidence for both Western Europe and the USA, over a long period and from different empirical approaches. Then, our meta-analysis attempts to disentangle the role of time changes and estimation methods. We highlight the key role of time changes, documenting the incredible fall in labor supply elasticities since the 1980s not only for the USA but also in the EU. In contrast, we find no compelling evidence that the choice of estimation method explains variation in elasticity estimates. From our analysis, we derive important guidelines for policy simulations.

  14. Size variation and collapse of emphysema holes at inspiration and expiration CT scan: evaluation with modified length scale method and image co-registration.

    Science.gov (United States)

    Oh, Sang Young; Lee, Minho; Seo, Joon Beom; Kim, Namkug; Lee, Sang Min; Lee, Jae Seung; Oh, Yeon Mok

    2017-01-01

    A novel approach to size-based emphysema clustering has been developed, and the size variation and collapse of holes in emphysema clusters are evaluated at inspiratory and expiratory computed tomography (CT). Thirty patients were evaluated visually with the size-based emphysema clustering technique, and a total of 72 patients were evaluated for analysis of emphysema hole collapse in this study. A new approach for the size differentiation of emphysema holes was developed using the length scale, Gaussian low-pass filtering, and an iteration approach. Then, the volumetric CT results of the emphysema patients were analyzed using the new method, and deformable registration was carried out between inspiratory and expiratory CT. Blind visual evaluations of EI by two readers had significant correlations with the classification using the size-based emphysema clustering method (r-values of reader 1: 0.186, 0.890, 0.915, and 0.941; reader 2: 0.540, 0.667, 0.919, and 0.942). The results of collapse of emphysema holes using deformable registration were compared with the pulmonary function test (PFT) parameters using Pearson's correlation test. The mean extents of the low-attenuation area (LAA) and of the size-differentiated emphysema holes were compared with the PFT parameters; analysis of the collapse of emphysema holes may be useful for understanding the dynamic collapse of emphysema and its functional relation.

  15. A parametric method for assessing diversification-rate variation in phylogenetic trees.

    Science.gov (United States)

    Shah, Premal; Fitzpatrick, Benjamin M; Fordyce, James A

    2013-02-01

    Phylogenetic hypotheses are frequently used to examine variation in rates of diversification across the history of a group. Patterns of diversification-rate variation can be used to infer underlying ecological and evolutionary processes responsible for patterns of cladogenesis. Most existing methods examine rate variation through time. Methods for examining differences in diversification among groups are more limited. Here, we present a new method, parametric rate comparison (PRC), that explicitly compares diversification rates among lineages in a tree using a variety of standard statistical distributions. PRC can identify subclades of the tree where diversification rates are at variance with the remainder of the tree. A randomization test can be used to evaluate how often such variance would appear by chance alone. The method also allows for comparison of diversification rate among a priori defined groups. Further, the application of the PRC method is not restricted to monophyletic groups. We examined the performance of PRC using simulated data, which showed that PRC has acceptable false-positive rates and statistical power to detect rate variation. We apply the PRC method to the well-studied radiation of North American Plethodon salamanders, and support the inference that the large-bodied Plethodon glutinosus clade has a higher historical rate of diversification compared to other Plethodon salamanders. © 2012 The Author(s). Evolution© 2012 The Society for the Study of Evolution.

  16. A constrained Hartree-Fock-Bogoliubov equation derived from the double variational method

    International Nuclear Information System (INIS)

    Onishi, Naoki; Horibata, Takatoshi.

    1980-01-01

    The double variational method is applied to the intrinsic state of the generalized BCS wave function. A constrained Hartree-Fock-Bogoliubov equation is derived explicitly in the form of an eigenvalue equation. A method of obtaining approximate overlap and energy overlap integrals is proposed. This will help development of numerical calculations of the angular momentum projection method, especially for general intrinsic wave functions without any symmetry restrictions. (author)

  17. Noniterative Multireference Coupled Cluster Methods on Heterogeneous CPU-GPU Systems

    Energy Technology Data Exchange (ETDEWEB)

    Bhaskaran-Nair, Kiran; Ma, Wenjing; Krishnamoorthy, Sriram; Villa, Oreste; van Dam, Hubertus JJ; Apra, Edoardo; Kowalski, Karol

    2013-04-09

    A novel parallel algorithm for non-iterative multireference coupled cluster (MRCC) theories, which merges recently introduced reference-level parallelism (RLP) [K. Bhaskaran-Nair, J.Brabec, E. Aprà, H.J.J. van Dam, J. Pittner, K. Kowalski, J. Chem. Phys. 137, 094112 (2012)] with the possibility of accelerating numerical calculations using graphics processing unit (GPU) is presented. We discuss the performance of this algorithm on the example of the MRCCSD(T) method (iterative singles and doubles and perturbative triples), where the corrections due to triples are added to the diagonal elements of the MRCCSD (iterative singles and doubles) effective Hamiltonian matrix. The performance of the combined RLP/GPU algorithm is illustrated on the example of the Brillouin-Wigner (BW) and Mukherjee (Mk) state-specific MRCCSD(T) formulations.

  18. Using the clustered circular layout as an informative method for visualizing protein-protein interaction networks.

    Science.gov (United States)

    Fung, David C Y; Wilkins, Marc R; Hart, David; Hong, Seok-Hee

    2010-07-01

    The force-directed layout is commonly used in computer-generated visualizations of protein-protein interaction networks. While it is good for providing a visual outline of the protein complexes and their interactions, it has two limitations when used as a visual analysis method. The first is poor reproducibility. Repeated running of the algorithm does not necessarily generate the same layout, therefore, demanding cognitive readaptation on the investigator's part. The second limitation is that it does not explicitly display complementary biological information, e.g. Gene Ontology, other than the protein names or gene symbols. Here, we present an alternative layout called the clustered circular layout. Using the human DNA replication protein-protein interaction network as a case study, we compared the two network layouts for their merits and limitations in supporting visual analysis.
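
    To make the geometric idea concrete, the sketch below places each cluster on its own small circle and spaces the clusters around a larger circle. It reproduces only the general shape of a clustered circular layout, not the Gene Ontology annotation or the protein-complex analysis discussed in the paper; the random partition graph and the radii are illustrative choices.

        # Sketch of a "clustered circular" node placement: clusters are spaced around
        # a large circle, and the members of each cluster sit on a small circle of
        # their own.  Only the geometric idea is reproduced here.
        import math
        import networkx as nx

        def clustered_circular_layout(clusters, R=5.0, r=1.0):
            """clusters: list of lists of node names -> dict node -> (x, y)."""
            pos = {}
            for ci, nodes in enumerate(clusters):
                phi = 2 * math.pi * ci / len(clusters)           # cluster centre angle
                cx, cy = R * math.cos(phi), R * math.sin(phi)
                for ni, node in enumerate(nodes):
                    theta = 2 * math.pi * ni / max(len(nodes), 1)
                    pos[node] = (cx + r * math.cos(theta), cy + r * math.sin(theta))
            return pos

        G = nx.random_partition_graph([6, 5, 7], p_in=0.6, p_out=0.02, seed=2)
        clusters = [sorted(block) for block in G.graph["partition"]]
        pos = clustered_circular_layout(clusters)
        print({n: tuple(round(c, 2) for c in p) for n, p in pos.items()})
        # nx.draw(G, pos)  # uncomment (with matplotlib installed) to see the layout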

  19. Equation-of-motion coupled cluster method for high spin double electron attachment calculations

    Energy Technology Data Exchange (ETDEWEB)

    Musiał, Monika, E-mail: musial@ich.us.edu.pl; Lupa, Łukasz; Kucharski, Stanisław A. [Institute of Chemistry, University of Silesia, Szkolna 9, 40-006 Katowice (Poland)

    2014-03-21

    The new formulation of the equation-of-motion (EOM) coupled cluster (CC) approach applicable to the calculations of the double electron attachment (DEA) states for the high spin components is proposed. The new EOM equations are derived for the high spin triplet and quintet states. In both cases the new equations are easier to solve but the substantial simplification is observed in the case of quintets. Out of 21 diagrammatic terms contributing to the standard DEA-EOM-CCSDT equations for the R_2 and R_3 amplitudes only four terms survive contributing to the R_3 part. The implemented method has been applied to the calculations of the excited states (singlets, triplets, and quintets) energies of the carbon and silicon atoms and potential energy curves for selected states of the Na_2 (triplets) and B_2 (quintets) molecules.

  20. Novel strategy to implement active-space coupled-cluster methods

    Science.gov (United States)

    Rolik, Zoltán; Kállay, Mihály

    2018-03-01

    A new approach is presented for the efficient implementation of coupled-cluster (CC) methods including higher excitations based on a molecular orbital space partitioned into active and inactive orbitals. In the new framework, the string representation of amplitudes and intermediates is used as long as it is beneficial, but the contractions are evaluated as matrix products. Using a new diagrammatic technique, the CC equations are represented in a compact form due to the string notations we introduced. As an application of these ideas, a new automated implementation of the single-reference-based multi-reference CC equations is presented for arbitrary excitation levels. The new program can be considered as an improvement over the previous implementations in many respects; e.g., diagram contributions are evaluated by efficient vectorized subroutines. Timings for test calculations for various complete active-space problems are presented. As an application of the new code, the weak interactions in the Be dimer were studied.

  1. Novel Signal Noise Reduction Method through Cluster Analysis, Applied to Photoplethysmography.

    Science.gov (United States)

    Waugh, William; Allen, John; Wightman, James; Sims, Andrew J; Beale, Thomas A W

    2018-01-01

    Physiological signals can often become contaminated by noise from a variety of origins. In this paper, an algorithm is described for the reduction of sporadic noise from a continuous periodic signal. The design can be used where a sample of a periodic signal is required, for example, when an average pulse is needed for pulse wave analysis and characterization. The algorithm is based on cluster analysis for selecting similar repetitions or pulses from a periodic signal. This method selects individual pulses without noise, returns a clean pulse signal, and terminates when a sufficiently clean and representative signal is received. The algorithm is designed to be sufficiently compact to be implemented on a microcontroller embedded within a medical device. It has been validated through the removal of noise from an exemplar photoplethysmography (PPG) signal, showing increasing benefit as the noise contamination of the signal increases. The algorithm design is generalised to be applicable to a wide range of physiological (physical) signals.
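
    As a rough illustration of the selection step (not the embedded implementation described in the paper), the sketch below splits a noisy periodic signal into fixed-length cycles, clusters the cycles hierarchically, and averages only the largest, presumably clean, cluster. A real PPG pipeline would first detect beats and resample them to a common length; the synthetic pulse shape, noise levels and distance threshold are assumptions.

        # Rough sketch of cluster-based selection of clean pulses from a noisy
        # periodic signal.  Cycles here have a fixed known length; a real PPG
        # pipeline would first detect beats and resample them to a common length.
        import numpy as np
        from scipy.cluster.hierarchy import fcluster, linkage

        rng = np.random.default_rng(3)
        period, n_cycles = 100, 40
        t = np.linspace(0, 2 * np.pi, period, endpoint=False)
        template = np.exp(-((t - 1.6) ** 2))                 # idealised pulse shape

        cycles = np.tile(template, (n_cycles, 1)) + 0.05 * rng.normal(size=(n_cycles, period))
        noisy = rng.choice(n_cycles, size=6, replace=False)  # corrupt a few cycles
        cycles[noisy] += rng.normal(scale=0.8, size=(len(noisy), period))

        # hierarchical clustering of the cycles; keep the biggest cluster as "clean"
        Z = linkage(cycles, method="average", metric="euclidean")
        labels = fcluster(Z, t=3.0, criterion="distance")
        best = np.argmax(np.bincount(labels)[1:]) + 1        # cluster ids start at 1
        clean_pulse = cycles[labels == best].mean(axis=0)

        print("cycles kept:", int((labels == best).sum()), "of", n_cycles)
        print("max deviation from template:", float(np.abs(clean_pulse - template).max()))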

  2. Laplace transform homotopy perturbation method for the approximation of variational problems.

    Science.gov (United States)

    Filobello-Nino, U; Vazquez-Leal, H; Rashidi, M M; Sedighi, H M; Perez-Sesma, A; Sandoval-Hernandez, M; Sarmiento-Reyes, A; Contreras-Hernandez, A D; Pereyra-Diaz, D; Hoyos-Reyes, C; Jimenez-Fernandez, V M; Huerta-Chua, J; Castro-Gonzalez, F; Laguna-Camacho, J R

    2016-01-01

    This article proposes the application of the Laplace Transform-Homotopy Perturbation Method and some of its modifications in order to find analytical approximate solutions for the linear and nonlinear differential equations which arise from some variational problems. As a case study we solve four ordinary differential equations, and we show that the proposed solutions have good accuracy; in one case we even obtain an exact solution. In the sequel, we see that the square residual error for the approximate solutions belongs to the interval [0.001918936920, 0.06334882582], which confirms the accuracy of the proposed methods, taking into account the complexity and difficulty of variational problems.

  3. A New Method to Constrain Supernova Fractions Using X-ray Observations of Clusters of Galaxies

    Science.gov (United States)

    Bulbul, Esra; Smith, Randall K.; Loewenstein, Michael

    2012-01-01

    Supernova (SN) explosions enrich the intracluster medium (ICM) both by creating and dispersing metals. We introduce a method to measure the number of SNe and the relative contribution of Type Ia supernovae (SNe Ia) and core-collapse supernovae (SNe cc) by directly fitting X-ray spectral observations. The method has been implemented as an XSPEC model called snapec. snapec utilizes a single-temperature thermal plasma code (apec) to model the spectral emission based on metal abundances calculated using the latest SN yields from SN Ia and SN cc explosion models. This approach provides a self-consistent single set of uncertainties on the total number of SN explosions and relative fraction of SN types in the ICM over the cluster lifetime by directly allowing these parameters to be determined by SN yields provided by simulations. We apply our approach to XMM-Newton European Photon Imaging Camera (EPIC), Reflection Grating Spectrometer (RGS), and 200 ks simulated Astro-H observations of a cooling flow cluster, A3112. We find that various sets of SN yields present in the literature produce an acceptable fit to the EPIC and RGS spectra of A3112. We infer that 30.3% ± 5.4% to 37.1% ± 7.1% of the total SN explosions are SNe Ia, and the total number of SN explosions required to create the observed metals is in the range of (1.06 ± 0.34) x 10^9 to (1.28 ± 0.43) x 10^9, from snapec fits to RGS spectra. These values may be compared to the enrichment expected based on well-established empirically measured SN rates per star formed. The proportions of SNe Ia and SNe cc inferred to have enriched the ICM in the inner 52 kiloparsecs of A3112 are consistent with these specific rates, if one applies a correction for the metals locked up in stars. At the same time, the inferred level of SN enrichment corresponds to a star-to-gas mass ratio that is several times greater than the 10% estimated globally for clusters in the A3112 mass range.

  4. Enlargement of induced variations by combined method of chronic irradiations with callus culture in sugarcane

    International Nuclear Information System (INIS)

    Nagatomi, Shigeki

    1993-01-01

    The present study was conducted to elucidate the effects of gamma-ray irradiation and callus culture upon the induced variation of the regenerants. The populations regenerated from young leaf tissue of chronically irradiated plants grown in a gamma field, receiving total doses of 300 and 100 Gy, showed rather wider variation in quantitative characters than plants from the non-irradiated populations. This variation extended in both negative and positive directions. Analysis of variance also revealed that the variation and broad-sense heritability of most agronomic characters increased significantly among the subclones as the irradiation dose rose. Principal component analysis also indicated that the subclones from the irradiated population were more variable than those from the non-irradiated one. Such variation with higher heritability could be transmitted to the following generations by clonal propagation and utilized as a genetic source in mutation breeding. The combined method of chronic irradiation followed by tissue culture is evaluated as an effective means of widening the mutation spectrum and increasing the mutation frequency in regenerated plants. In addition, this method is valid for improving any crop species which can regenerate plants through callus culture. (author)

  5. An Improved Variational Method for Hyperspectral Image Pansharpening with the Constraint of Spectral Difference Minimization

    Science.gov (United States)

    Huang, Z.; Chen, Q.; Shen, Y.; Chen, Q.; Liu, X.

    2017-09-01

    Variational pansharpening can enhance the spatial resolution of a hyperspectral (HS) image using a high-resolution panchromatic (PAN) image. However, this technology may lead to spectral distortion that obviously affects the accuracy of data analysis. In this article, we propose an improved variational method for HS image pansharpening with the constraint of spectral difference minimization. We extend the energy function of the classic variational pansharpening method by adding a new spectral fidelity term. This fidelity term is designed following the definition of the spectral angle mapper, which means that, for every pixel, the spectral difference value of any two bands in the HS image is in equal proportion to that of the two corresponding bands in the pansharpened image. The gradient descent method is adopted to find the optimal solution of the modified energy function, and the pansharpened image can be reconstructed. Experimental results demonstrate that the constraint of spectral difference minimization is able to preserve the original spectral information well in HS images and reduce the spectral distortion effectively. Compared to the original variational method, our method performs better in both visual and quantitative evaluation, and achieves a good trade-off between spatial and spectral information.

  6. Variational Iteration Method for Fifth-Order Boundary Value Problems Using He's Polynomials

    Directory of Open Access Journals (Sweden)

    Muhammad Aslam Noor

    2008-01-01

    Full Text Available We apply the variational iteration method using He's polynomials (VIMHP) for solving fifth-order boundary value problems. The proposed method is an elegant combination of the variational iteration and the homotopy perturbation methods and is mainly due to Ghorbani (2007). The suggested algorithm is quite efficient and is practically well suited for use in these problems. The proposed iterative scheme finds the solution without any discretization, linearization, or restrictive assumptions. Several examples are given to verify the reliability and efficiency of the method. The fact that the proposed technique solves nonlinear problems without using Adomian's polynomials can be considered as a clear advantage of this algorithm over the decomposition method.

  7. Damage evolution analysis of coal samples under cyclic loading based on single-link cluster method

    Science.gov (United States)

    Zhang, Zhibo; Wang, Enyuan; Li, Nan; Li, Xuelong; Wang, Xiaoran; Li, Zhonghui

    2018-05-01

    In this paper, the acoustic emission (AE) response of coal samples under cyclic loading is measured. The results show that there is a good positive correlation between the AE parameters and stress. The AE signal of coal samples under cyclic loading exhibits an obvious Kaiser effect. The single-link cluster (SLC) method is applied to analyze the spatial evolution characteristics of AE events and the damage evolution process of coal samples. It is found that the subset scale of the SLC structure becomes smaller and smaller as the number of loading cycles increases, and there is a negative linear relationship between the subset scale and the degree of damage. The spatial correlation length ξ of the SLC structure is calculated. The results show that ξ fluctuates around a certain value from the second to the fifth loading cycle, but clearly increases in the sixth loading cycle. Based on the criterion of microcrack density, the coal sample failure process is a transformation from small-scale damage to large-scale damage, which is the reason for the changes in the spatial correlation length. Through this systematic analysis, the SLC method is shown to be an effective method for researching the damage evolution process of coal samples under cyclic loading, and it will provide important reference values for studying coal bursts.
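
    The paper's clustering code is not given, but the basic single-link construction on located AE events can be sketched as follows: events are linked whenever their separation falls below a cut-off, and the surviving subsets give the subset scales discussed above. The synthetic 3-D hypocentres, the cut-off value and the correlation-length proxy are placeholders, not the authors' parameters.

        # Sketch of the single-link cluster (SLC) construction on located acoustic
        # emission events: events are linked whenever their separation is below a
        # cut-off, and the surviving subsets give the cluster (subset) scales.
        import numpy as np
        from scipy.cluster.hierarchy import fcluster, linkage
        from scipy.spatial.distance import pdist

        rng = np.random.default_rng(4)
        events = np.vstack([
            rng.normal(loc=(0, 0, 0), scale=0.5, size=(40, 3)),    # damage zone A
            rng.normal(loc=(4, 1, 2), scale=0.5, size=(25, 3)),    # damage zone B
            rng.uniform(-2, 6, size=(10, 3)),                      # scattered events
        ])

        Z = linkage(events, method="single")                       # single-link tree
        labels = fcluster(Z, t=1.0, criterion="distance")          # 1.0 = link cut-off
        sizes = np.bincount(labels)[1:]
        print("number of SLC subsets:", labels.max())
        print("subset scales (event counts):", sorted(sizes, reverse=True)[:5])

        # a simple proxy for the spatial correlation length: mean pairwise distance
        # inside the largest subset
        biggest = labels == (np.argmax(sizes) + 1)
        print("mean intra-subset distance:", float(pdist(events[biggest]).mean()))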

  8. DLTAP: A Network-efficient Scheduling Method for Distributed Deep Learning Workload in Containerized Cluster Environment

    OpenAIRE

    Qiao Wei; Li Ying; Wu Zhong-Hai

    2017-01-01

    Deep neural networks (DNNs) have recently yielded strong results on a range of applications. Training these DNNs using a cluster of commodity machines is a promising approach since training is time consuming and compute-intensive. Furthermore, putting DNN tasks into containers of clusters would enable broader and easier deployment of DNN-based algorithms. Toward this end, this paper addresses the problem of scheduling DNN tasks in the containerized cluster environment. Efficiently scheduling ...

  9. Variational Multi-Scale method with spectral approximation of the sub-scales.

    KAUST Repository

    Dia, Ben Mansour; Chá con-Rebollo, Tomas

    2015-01-01

    A variational multi-scale method where the sub-grid scales are computed by spectral approximations is presented. It is based upon an extension of the spectral theorem to not necessarily self-adjoint elliptic operators that have an associated base

  10. An error estimate for Tremolieres method for the discretization of parabolic variational inequalities

    International Nuclear Information System (INIS)

    Uko, L.U.

    1990-02-01

    We study a scheme for the time-discretization of parabolic variational inequalities that is often easier to use than the classical method of Rothe. We show that if the data are compatible in a certain sense, then this scheme is of order ≥1/2. (author). 10 refs

  11. Reactive power control methods for improved reliability of wind power inverters under wind speed variations

    DEFF Research Database (Denmark)

    Ma, Ke; Liserre, Marco; Blaabjerg, Frede

    2012-01-01

    method to relieve the thermal cycling of power switching devices under severe wind speed variations, by circulating reactive power among the parallel power converters in a WTS or among the WTS's in a wind park. The amount of reactive power is adjusted to limit the junction temperature fluctuation...

  12. Variation in Measurements of Transtibial Stump Model Volume A Comparison of Five Methods

    NARCIS (Netherlands)

    Bolt, A.; de Boer-Wilzing, V. G.; Geertzen, J. H. B.; Emmelot, C. H.; Baars, E. C. T.; Dijkstra, P. U.

    Objective: To determine the right moment for fitting the first prosthesis, it is necessary to know when the volume of the stump has stabilized. The aim of this study is to analyze variation in measurements of transtibial stump model volumes using the water immersion method, the Design TT system, the

  13. Systematic Convergence in Applying Variational Method to Double-Well Potential

    Science.gov (United States)

    Mei, Wai-Ning

    2016-01-01

    In this work, we demonstrate the application of the variational method by computing the ground- and first-excited state energies of a double-well potential. We start with the proper choice of the trial wave functions using optimized parameters, and notice that accurate expectation values in excellent agreement with the numerical results can be…

  14. Variational and penalization methods for studying connecting orbits of Hamiltonian systems

    Directory of Open Access Journals (Sweden)

    Chao-Nien Chen

    2000-08-01

    Full Text Available In this article, we consider a class of second order Hamiltonian systems that possess infinite or finite number of equilibria. Variational arguments will be used to study the existence of connecting orbits joining pairs of equilibria. Applying penalization methods, we obtain various patterns for multibump homoclinics and heteroclinics of Hamiltonian systems.

  15. Interactively Applying the Variational Method to the Dihydrogen Molecule: Exploring Bonding and Antibonding

    Science.gov (United States)

    Cruzeiro, Vinícius Wilian D.; Roitberg, Adrian; Polfer, Nicolas C.

    2016-01-01

    In this work we are going to present how an interactive platform can be used as a powerful tool to allow students to better explore a foundational problem in quantum chemistry: the application of the variational method to the dihydrogen molecule using simple Gaussian trial functions. The theoretical approach for the hydrogen atom is quite…

  16. Variational formulation and projectional methods for the second order transport equation

    International Nuclear Information System (INIS)

    Borysiewicz, M.; Stankiewicz, R.

    1979-01-01

    Herein the variational problem for a second-order boundary value problem for the neutron transport equation is formulated. The projectional methods solving the problem are examined. The approach is compared with that based on the original untransformed form of the neutron transport equation

  17. Antidepressant prescribing in five European countries: application of common methods to assess the variation in prevalence.

    NARCIS (Netherlands)

    Abbing-Karahagopian, V.; Huerta, C.; Souverein, P.C.; Abajo, F. de; Leufkens, H.G.M.; Slattery, J.; Alvarez, Y.; Montserrat, M.; Gill, M.; Hesse, U.; Requena, G.; Vries, F. de; Rottenkolber, M.; Schmiedl, S.; Reynolds, R.; Schlinger, R.; Groot, M. de; Klungel, O.H.; Staa, T.P. van; Dijk, L. van; Egberts, A.C.G.; Gardarsdottir, H.; Bruin, M.L. de

    2013-01-01

    Background: Drug utilization studies have applied different methods on various data types to describe medication use which may hamper comparisons across populations. Objectives: The aim of this study was to describe the variation in the prevalence of antidepressant prescribing, applying standard

  18. Evaluation of methods to determine the spectral variations of aerosol optical thickness

    Digital Repository Service at National Institute of Oceanography (India)

    Suresh, T.; Talaulikar, M.; Rodrigues, A.; Desa, E.; Chauhan, P.

    The methods used to derive the spectral variations of aerosol optical thickness (AOT) are evaluated. For our analysis we have used the AOT measured using a hand-held sun photometer at the coastal station on the west coast of India, Dona-Paula, Goa...

  19. Solving Ratio-Dependent Predatorprey System with Constant Effort Harvesting Using Variational Iteration Method

    DEFF Research Database (Denmark)

    Ghotbi, Abdoul R; Barari, Amin

    2009-01-01

    Due to the wide range of interest in the use of bio-economic models to gain insight into the scientific management of renewable resources like fisheries and forestry, the variational iteration method (VIM) is employed to approximate the solution of the ratio-dependent predator-prey system with constant effort...

  20. A novel variational method for deriving Lagrangian and Hamiltonian models of inductor-capacitor circuits

    NARCIS (Netherlands)

    Moreau, L.; Aeyels, D.

    2004-01-01

    We study the dynamical equations of nonlinear inductor-capacitor circuits. We present a novel Lagrangian description of the dynamics and provide a variational interpretation, which is based on the maximum principle of optimal control theory. This gives rise to an alternative method for deriving the

  1. Total variation regularization for seismic waveform inversion using an adaptive primal dual hybrid gradient method

    Science.gov (United States)

    Yong, Peng; Liao, Wenyuan; Huang, Jianping; Li, Zhenchuan

    2018-04-01

    Full waveform inversion is an effective tool for recovering the properties of the Earth from seismograms. However, it suffers from local minima caused mainly by the limited accuracy of the starting model and the lack of a low-frequency component in the seismic data. Because of the high velocity contrast between salt and sediment, the relation between the waveform and the velocity perturbation is strongly nonlinear. Therefore, salt inversion can easily get trapped in local minima. Since the velocity of salt is nearly constant, we can make the most of this characteristic with total variation regularization to mitigate the local minima. In this paper, we develop an adaptive primal dual hybrid gradient method to implement total variation regularization by projecting the solution onto a total-variation-norm constrained convex set, through which the total variation norm constraint is satisfied at every model iteration. The smooth background velocities are first inverted and the perturbations are gradually obtained by successively relaxing the total variation norm constraints. A numerical experiment on the projection of the BP model onto the intersection of the total variation norm and box constraints demonstrates the accuracy and efficiency of our adaptive primal dual hybrid gradient method. A workflow is designed to recover complex salt structures in the BP 2004 model and the 2D SEG/EAGE salt model, starting from a linear gradient model without using low-frequency data below 3 Hz. The salt inversion processes demonstrate that wavefield reconstruction inversion with a total variation norm and box constraints is able to overcome local minima and invert the complex salt velocity layer by layer.
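
    In symbols, the constrained formulation described above can be written roughly as follows, where m is the velocity model, E(m) the (wavefield-reconstruction) data-misfit functional, tau the total variation budget and P_C the projection onto the feasible set computed with the adaptive primal dual hybrid gradient method; the exact functional used by the authors may differ in detail.

        \min_{m}\; E(m)
        \quad \text{subject to} \quad
        \|m\|_{TV} \le \tau, \qquad m_{\min} \le m \le m_{\max},

        m^{k+1} \;=\; P_{C}\!\left( m^{k} - \alpha_k \nabla E(m^{k}) \right),
        \qquad C \;=\; \{\, m : \|m\|_{TV} \le \tau \,\} \cap \{\, m : m_{\min} \le m \le m_{\max} \,\}.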

  2. An accurate method for quantifying and analyzing copy number variation in porcine KIT by an oligonucleotide ligation assay

    Directory of Open Access Journals (Sweden)

    Cho In-Cheol

    2007-11-01

    Full Text Available Abstract Background Aside from single nucleotide polymorphisms, copy number variations (CNVs) are the most important factors in susceptibility to genetic disorders because they affect the expression levels of genes. In previous studies, pyrosequencing, mini-sequencing, real-time PCR, invader assays and other techniques have been used to detect CNVs. However, the higher the copy number in a genome, the more difficult it is to resolve the copies, so a more accurate method for measuring CNVs and assigning genotypes is needed. Results PCR followed by a quantitative oligonucleotide ligation assay (qOLA) was developed for quantifying CNVs. The accuracy and precision of the assay were evaluated for porcine KIT, which was selected as a model locus. Overall, the root mean squares of bias and standard deviation of qOLA were 2.09 and 0.45, respectively. These values are less than half of those in the published pyrosequencing assay for analyzing CNV in porcine KIT. Using a combined method of qOLA and another pyrosequencing assay for quantitative analysis of KIT copies with spliced forms, we confirmed the segregation of KIT alleles in 145 F1 animals with pedigree information and verified the correct assignment of genotypes. In a diagnostic test on 100 randomly sampled commercial pigs, there was perfect agreement between the genotypes obtained by grouping observations on a scatter plot and by clustering using the nearest centroid sorting method implemented in PROC FASTCLUS of the SAS package. In a test on 159 Large White pigs, there were only two discrepancies between genotypes assigned by the two clustering methods (98.7% agreement), confirming that the quantitative ligation assay established here makes genotyping possible through the accurate measurement of high KIT copy numbers (>4 per diploid genome). Moreover, the assay is sensitive enough for use on DNA from hair follicles, indicating that DNA from various sources could be used. Conclusion We have established a high
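
    The genotype call in the diagnostic test amounts to one-dimensional nearest-centroid clustering of the measured copy-number values; a bare-bones version of that step might look like the sketch below. The ratio values, group means and the number of genotype groups are made up for illustration and do not come from the paper.

        # Bare-bones nearest-centroid (k-means style) genotype grouping of measured
        # copy-number values, mimicking the clustering step of the diagnostic test.
        # The values and the number of genotype groups are invented for illustration.
        import numpy as np
        from sklearn.cluster import KMeans

        rng = np.random.default_rng(6)
        ratios = np.concatenate([rng.normal(2.0, 0.12, 40),   # hypothetical low-copy group
                                 rng.normal(2.5, 0.12, 35),   # intermediate group
                                 rng.normal(3.0, 0.12, 25)])  # high-copy group

        km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(ratios.reshape(-1, 1))
        order = np.argsort(km.cluster_centers_.ravel())       # order groups by copy number
        rank = np.empty_like(order)
        rank[order] = np.arange(len(order))
        genotype_group = rank[km.labels_]                     # 0 = lowest-copy-number group
        print("group sizes:", np.bincount(genotype_group))
        print("group centroids:", np.round(np.sort(km.cluster_centers_.ravel()), 2))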

  3. Adaptive variational mode decomposition method for signal processing based on mode characteristic

    Science.gov (United States)

    Lian, Jijian; Liu, Zhuo; Wang, Haijun; Dong, Xiaofeng

    2018-07-01

    Variational mode decomposition is a completely non-recursive decomposition model in which all the modes are extracted concurrently. However, the model requires a preset mode number, which limits the adaptability of the method, since a large deviation in the preset mode number will cause modes to be discarded or mixed. Hence, a method called Adaptive Variational Mode Decomposition (AVMD) is proposed to automatically determine the mode number based on the characteristics of the intrinsic mode functions. The method was used to analyze simulated signals and measured signals from a hydropower plant. Comparisons have also been conducted to evaluate the performance against VMD, EMD and EWT. The results indicate that the proposed method has strong adaptability and is robust to noise. It can determine the mode number appropriately, without modulation, even when the signal frequencies are relatively close.

  4. Variational Multi-Scale method with spectral approximation of the sub-scales.

    KAUST Repository

    Dia, Ben Mansour

    2015-01-07

    A variational multi-scale method where the sub-grid scales are computed by spectral approximations is presented. It is based upon an extension of the spectral theorem to not necessarily self-adjoint elliptic operators that have an associated basis of eigenfunctions which are orthonormal in weighted L2 spaces. We propose a feasible VMS-spectral method by truncation of this spectral expansion to a finite number of modes.

  5. Study of rare-gas dimer ions by the variational cellular method

    International Nuclear Information System (INIS)

    Wentzcovitch, R.M.M.

    1982-01-01

    The Variational Cellular Method has been used to study ionized molecules in their ground and excited states, with the aim of testing the validity of the method in these cases. The ions studied are Ne2+ and Ar2+, the latter being the system with the largest number of electrons treated by the VCM so far. The electronic transitions in these systems are important mechanisms of efficiency decay for the noble gas halide lasers ('excimer lasers'). (Author) [pt

  6. Dancoff factors with partial neutrons absorption in cluster geometry by the direct method

    International Nuclear Information System (INIS)

    Rodrigues, Leticia Jenisch

    2007-01-01

    Accurate analysis of resonance absorption in heterogeneous systems is essential in problems like criticality, breeding ratio and fuel depletion calculations. In compact arrays of fuel rods, resonance absorption is strongly affected by the Dancoff factor, defined in this study as the probability that a neutron emitted from the surface of a fuel element enters another fuel element without any collision in the moderator or cladding. In fact, in the most practical cases of irregular cells, it is observed that inaccuracies in computing both Grey and Black Dancoff factors, i.e. for partially and perfectly absorbing fuel rods, can lead to considerable errors in the calculated values of such integral quantities. For this reason, much effort has been made in the past decades to further improve the models for calculating Dancoff factors, a task that has been accomplished in connection with the development of faster computers. In the WIMS code, Black Dancoff factors based on the above mentioned collision probability definition are computed in cluster geometry, for each one of the symmetrically distinct fuel pin positions in the cell. Sets of equally-spaced parallel lines are drawn in subroutine PIJ, at a number of discrete equally-incremented azimuthal angles, covering the whole system and forming a mesh over which the in-plane integrations of the Bickley functions are carried out by the simple trapezoidal rule, leading to the first-flight collision matrices. Although fast, the method in PIJ is inefficient, since the constructed mesh does not depend on the system details, so that regions of small relative volume are crossed by relatively few lines, which affects the convergence of the calculated probabilities. A new routine (PIJM) was then created to incorporate a more efficient integration scheme, considering each system region individually, minimizing convergence problems and reducing the number of neutron track lines required in the in-plane integrations for any given

  7. A Method of Flow-Shop Re-Scheduling Dealing with Variation of Productive Capacity

    Directory of Open Access Journals (Sweden)

    Kenzo KURIHARA

    2005-02-01

    Full Text Available We can produce optimum scheduling results using the various methods proposed by many researchers. However, it is very difficult to process the work on time without delaying the schedule. There are two major causes that disturb the planned optimum schedules: (1) the variation of productive capacity, and (2) the variation of the product quantities themselves. In this paper, we deal with the former variation, i.e. productive capacity, in flow-shop work. When production machines in a flow shop go out of order, we cannot continue production and have to stop the production line. By contrast, we can continue to operate the shops even if some workers are absent. Of course, in this case, the productive capacity becomes lower, because workers need to move from one machine to another to overcome the shortage of workers, and some shops cannot be operated because of the worker shortage. We developed a new re-scheduling method based on the Branch-and-Bound method. We propose an equation for calculating the lower bound for our Branch-and-Bound method in a practical time. Evaluation experiments were conducted using data from real flow-shop work. We compared our results with those of another simple scheduling method, and we confirmed that the total production time of our result is shorter than that of the other method by 4%.

  8. Identifying an unknown function in a parabolic equation with overspecified data via He's variational iteration method

    International Nuclear Information System (INIS)

    Dehghan, Mehdi; Tatari, Mehdi

    2008-01-01

    In this research, He's variational iteration technique is used for computing an unknown time-dependent parameter in an inverse quasilinear parabolic partial differential equation. Parabolic partial differential equations with overspecified data play a crucial role in applied mathematics and physics, as they appear in various engineering models. He's variational iteration method, an analytical procedure for finding solutions of differential equations, is based on the use of Lagrange multipliers for the identification of an optimal value of a parameter in a functional. To show the efficiency of the new approach, several test problems are presented for one-, two- and three-dimensional cases.

  9. Uniqueness theorems for variational problems by the method of transformation groups

    CERN Document Server

    Reichel, Wolfgang

    2004-01-01

    A classical problem in the calculus of variations is the investigation of critical points of functionals {\\cal L} on normed spaces V. The present work addresses the question: Under what conditions on the functional {\\cal L} and the underlying space V does {\\cal L} have at most one critical point? A sufficient condition for uniqueness is given: the presence of a "variational sub-symmetry", i.e., a one-parameter group G of transformations of V, which strictly reduces the values of {\\cal L}. The "method of transformation groups" is applied to second-order elliptic boundary value problems on Riemannian manifolds. Further applications include problems of geometric analysis and elasticity.

  10. Numerical realization of the variational method for generating self-trapped beams.

    Science.gov (United States)

    Duque, Erick I; Lopez-Aguayo, Servando; Malomed, Boris A

    2018-03-19

    We introduce a numerical variational method based on the Rayleigh-Ritz optimization principle for predicting two-dimensional self-trapped beams in nonlinear media. This technique overcomes the limitation of the traditional variational approximation in performing analytical Lagrangian integration and differentiation. Approximate soliton solutions of a generalized nonlinear Schrödinger equation are obtained, demonstrating robustness of the beams of various types (fundamental, vortices, multipoles, azimuthons) in the course of their propagation. The algorithm offers possibilities to produce more sophisticated soliton profiles in general nonlinear models.

  11. Numerical realization of the variational method for generating self-trapped beams

    Science.gov (United States)

    Duque, Erick I.; Lopez-Aguayo, Servando; Malomed, Boris A.

    2018-03-01

    We introduce a numerical variational method based on the Rayleigh-Ritz optimization principle for predicting two-dimensional self-trapped beams in nonlinear media. This technique overcomes the limitation of the traditional variational approximation in performing analytical Lagrangian integration and differentiation. Approximate soliton solutions of a generalized nonlinear Schrödinger equation are obtained, demonstrating robustness of the beams of various types (fundamental, vortices, multipoles, azimuthons) in the course of their propagation. The algorithm offers possibilities to produce more sophisticated soliton profiles in general nonlinear models.
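
    A pared-down version of the idea, applied to a 2-D cubic-quintic nonlinear Schrodinger model with a single Gaussian trial beam of fixed power, can be sketched as follows: the Hamiltonian is evaluated on a grid and minimised over the beam width numerically instead of through analytic Lagrangian integrals. The model, power value and one-parameter trial profile are our illustrative choices, not the generalized NLS settings or the multi-parameter profiles (vortices, multipoles, azimuthons) used in the paper.

        # Numerical Rayleigh-Ritz sketch: for a 2-D cubic-quintic nonlinear Schroedinger
        # model, evaluate the Hamiltonian of a Gaussian trial beam of fixed power on a
        # grid and minimise it over the beam width.  Illustrative model and parameters.
        import numpy as np
        from scipy.optimize import minimize_scalar

        L, N, P = 40.0, 256, 20.0                      # box half-size, grid points, beam power
        x = np.linspace(-L, L, N)
        dx = x[1] - x[0]
        X, Y = np.meshgrid(x, x)
        R2 = X ** 2 + Y ** 2

        def hamiltonian(width):
            amp2 = P / (np.pi * width ** 2)            # |A|^2 fixed by the power constraint
            u = np.sqrt(amp2) * np.exp(-R2 / (2 * width ** 2))
            ux, uy = np.gradient(u, dx, dx)
            kinetic = np.sum(ux ** 2 + uy ** 2) * dx * dx
            quartic = -0.5 * np.sum(u ** 4) * dx * dx        # focusing cubic nonlinearity
            sextic = (1.0 / 3.0) * np.sum(u ** 6) * dx * dx  # defocusing quintic nonlinearity
            return kinetic + quartic + sextic

        res = minimize_scalar(hamiltonian, bounds=(0.5, 10.0), method="bounded")
        print("optimal Gaussian width:", round(res.x, 3))
        print("Hamiltonian at the optimum:", round(res.fun, 4))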

  12. An efficient implementation of parallel molecular dynamics method on SMP cluster architecture

    International Nuclear Information System (INIS)

    Suzuki, Masaaki; Okuda, Hiroshi; Yagawa, Genki

    2003-01-01

    The authors have applied the MPI/OpenMP hybrid parallel programming model to parallelize a molecular dynamics (MD) method on a symmetric multiprocessor (SMP) cluster architecture. On that architecture, it can be expected that the hybrid parallel programming model, which uses a message passing library such as MPI for inter-SMP-node communication and loop directives such as OpenMP for intra-SMP-node parallelization, is the most effective one. In this study, the parallel performance of the hybrid style has been compared with that of the conventional flat parallel programming style, which uses only MPI, both in the case where the fast multipole method (FMM) is employed for computing long-distance interactions and in the case where it is not. The computer environment used here is the Hitachi SR8000/MPP installed at the University of Tokyo. The results of the calculations are as follows. Without FMM, the parallel efficiency using 16 SMP nodes (128 PEs) is 90% with the hybrid style and 75% with the flat-MPI style for an MD simulation with 33,402 atoms. With FMM, the parallel efficiency using 16 SMP nodes (128 PEs) is 60% with the hybrid style and 48% with the flat-MPI style for an MD simulation with 117,649 atoms. (author)

  13. Comparison of Three Plot Selection Methods for Estimating Change in Temporally Variable, Spatially Clustered Populations.

    Energy Technology Data Exchange (ETDEWEB)

    Thompson, William L. [Bonneville Power Administration, Portland, OR (US). Environment, Fish and Wildlife

    2001-07-01

    Monitoring population numbers is important for assessing trends and meeting various legislative mandates. However, sampling across time introduces a temporal aspect to survey design in addition to the spatial one. For instance, a sample that is initially representative may lose this attribute if there is a shift in numbers and/or spatial distribution in the underlying population that is not reflected in later sampled plots. Plot selection methods that account for this temporal variability will produce the best trend estimates. Consequently, I used simulation to compare bias and relative precision of estimates of population change among stratified and unstratified sampling designs based on permanent, temporary, and partial replacement plots under varying levels of spatial clustering, density, and temporal shifting of populations. Permanent plots produced more precise estimates of change than temporary plots across all factors. Further, permanent plots performed better than partial replacement plots except for high density (5 and 10 individuals per plot) and 25% - 50% shifts in the population. Stratified designs always produced less precise estimates of population change for all three plot selection methods, and often produced biased change estimates and greatly inflated variance estimates under sampling with partial replacement. Hence, stratification that remains fixed across time should be avoided when monitoring populations that are likely to exhibit large changes in numbers and/or spatial distribution during the study period. Key words: bias; change estimation; monitoring; permanent plots; relative precision; sampling with partial replacement; temporary plots.

  14. Clustering analysis

    International Nuclear Information System (INIS)

    Romli

    1997-01-01

    Cluster analysis is the name for a group of multivariate techniques whose principal purpose is to group similar entities on the basis of the characteristics they possess. Several algorithms can be used for this analysis, and this paper therefore discusses them: similarity measures, and hierarchical clustering, which includes the single-linkage, complete-linkage, and average-linkage methods. Non-hierarchical clustering, popularly known as the k-means method, is also discussed. Finally, the paper describes the advantages and disadvantages of each method.
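
    A minimal sketch of the methods listed above on toy data follows, using SciPy for single, complete, and average linkage and scikit-learn for k-means; the data and the choice of three clusters are arbitrary illustrations.

      # Hedged sketch: hierarchical linkage methods and k-means on synthetic 2-D data.
      import numpy as np
      from scipy.cluster.hierarchy import linkage, fcluster
      from sklearn.cluster import KMeans

      rng = np.random.default_rng(0)
      X = np.vstack([rng.normal(c, 0.4, (30, 2)) for c in ((0, 0), (3, 0), (0, 3))])

      for method in ("single", "complete", "average"):
          Z = linkage(X, method=method)                     # hierarchical agglomeration
          labels = fcluster(Z, t=3, criterion="maxclust")   # cut the tree into 3 clusters
          print(method, "cluster sizes:", np.bincount(labels)[1:])

      km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)   # non-hierarchical
      print("k-means cluster sizes:", np.bincount(km.labels_))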

  15. Digital Image Stabilization Method Based on Variational Mode Decomposition and Relative Entropy

    Directory of Open Access Journals (Sweden)

    Duo Hao

    2017-11-01

    Full Text Available Cameras mounted on vehicles frequently suffer from image shake due to the vehicles’ motions. To remove jitter motions and preserve intentional motions, a hybrid digital image stabilization method is proposed that uses variational mode decomposition (VMD) and relative entropy (RE). In this paper, the global motion vector (GMV) is initially decomposed into several narrow-banded modes by VMD. REs, which quantify the difference between the probability distributions of two modes, are then calculated to identify the intentional and jitter motion modes. Finally, the summation of the jitter motion modes constitutes the jitter motion, whereas subtracting this sum from the GMV yields the intentional motion. The proposed stabilization method is compared with several known methods, namely the median filter (MF), Kalman filter (KF), wavelet decomposition (WD) method, empirical mode decomposition (EMD)-based method, and enhanced EMD-based method, to evaluate stabilization performance. Experimental results show that the proposed method outperforms the other stabilization methods.
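
    The sketch below illustrates only the relative-entropy step: given mode signals (in practice produced by a VMD routine; here synthesized so the example is self-contained), each mode's amplitude distribution is compared with that of the lowest-frequency mode, and modes with large relative entropy are treated as jitter. The synthetic signals and the decision threshold are assumptions, not the paper's settings.

      # Hedged sketch of the RE-based separation of intentional and jitter motion modes.
      import numpy as np
      from scipy.stats import entropy

      rng = np.random.default_rng(0)
      t = np.linspace(0.0, 10.0, 2000)
      modes = [
          5.0 * np.sin(0.3 * np.pi * t),      # slow mode: intentional camera pan
          0.8 * np.sin(9.0 * np.pi * t),      # fast oscillation: vehicle jitter
          0.3 * rng.normal(size=t.size),      # broadband noise: jitter
      ]
      gmv = sum(modes)                         # observed global motion vector

      def hist_pdf(x, bins):
          p, _ = np.histogram(x, bins=bins, density=True)
          return p + 1e-12                     # avoid zero bins in the KL divergence

      bins = np.linspace(gmv.min(), gmv.max(), 64)
      ref = hist_pdf(modes[0], bins)           # lowest-frequency mode as reference
      re = [entropy(hist_pdf(m, bins), ref) for m in modes]   # relative entropy vs reference

      JITTER_RE = 1.0                          # assumed decision threshold
      jitter = sum(m for m, d in zip(modes, re) if d > JITTER_RE)
      stabilised = gmv - jitter                # estimate of the intentional motion
      print("relative entropies:", np.round(re, 3))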

  16. Non-Hierarchical Clustering as a method to analyse an open-ended ...

    African Journals Online (AJOL)

    Apple

    Keywords: algebraic thinking; cluster analysis; mathematics education; quantitative analysis. In the analysis, C1, C2 and C3 represent the three centroids of the three clusters formed.

  17. Clustered iterative stochastic ensemble method for multi-modal calibration of subsurface flow models

    KAUST Repository

    Elsheikh, Ahmed H.; Wheeler, Mary Fanett; Hoteit, Ibrahim

    2013-01-01

    estimation. ISEM is augmented with a clustering step based on the k-means algorithm to form sub-ensembles. These sub-ensembles are used to explore different parts of the search space. Clusters are updated at regular intervals of the algorithm to allow merging
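
    The sketch below illustrates the clustering idea only: an ensemble of parameter vectors is split into k-means sub-ensembles that are evolved separately and re-clustered at fixed intervals. The perturb-and-select update and the toy misfit function stand in for the actual ISEM update and the subsurface flow model, and are not taken from the cited work.

      # Hedged sketch: k-means sub-ensembles explored separately and re-clustered periodically.
      import numpy as np
      from sklearn.cluster import KMeans

      rng = np.random.default_rng(0)

      def misfit(x):                             # toy multi-modal objective (two optima)
          return np.minimum(((x - 2.0)**2).sum(axis=1), ((x + 2.0)**2).sum(axis=1))

      ens = rng.normal(0.0, 3.0, (60, 5))        # ensemble of 60 parameter vectors
      K, RECLUSTER_EVERY = 3, 5

      for it in range(30):
          if it % RECLUSTER_EVERY == 0:          # periodic re-clustering into sub-ensembles
              labels = KMeans(n_clusters=K, n_init=10, random_state=it).fit_predict(ens)
          for k in range(K):
              idx = np.where(labels == k)[0]
              if idx.size == 0:
                  continue
              sub = ens[idx]
              trial = sub + rng.normal(0.0, 0.3, sub.shape)   # stochastic perturbation
              better = misfit(trial) < misfit(sub)            # greedy accept/reject
              sub[better] = trial[better]
              ens[idx] = sub

      print("best misfit per sub-ensemble:",
            [round(misfit(ens[labels == k]).min(), 4) for k in range(K)])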

  18. Comparison of clustering methods for tracking features in RGB-D images

    CSIR Research Space (South Africa)

    Pancham, Ardhisha

    2016-10-01

    Full Text Available difficult to track individually over an image sequence. Clustering techniques have been recommended and used to cluster image features to improve tracking results. New and affordable RGB-D cameras provide both color and depth information. This paper...
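
    As a rough illustration of clustering features that combine pixel position with depth, the sketch below groups synthetic (u, v, depth) keypoints with DBSCAN after standardization; the keypoints, scaling, and DBSCAN parameters are assumptions, and a real pipeline would take the coordinates from a feature detector applied to the RGB-D frames.

      # Hedged sketch: density-based clustering of image features augmented with depth.
      import numpy as np
      from sklearn.cluster import DBSCAN
      from sklearn.preprocessing import StandardScaler

      rng = np.random.default_rng(0)
      # synthetic keypoints from two objects at different depths plus background clutter
      obj1 = np.column_stack([rng.normal(200, 15, 40), rng.normal(150, 15, 40), rng.normal(1.0, 0.05, 40)])
      obj2 = np.column_stack([rng.normal(420, 15, 40), rng.normal(260, 15, 40), rng.normal(2.5, 0.05, 40)])
      noise = np.column_stack([rng.uniform(0, 640, 20), rng.uniform(0, 480, 20), rng.uniform(0.5, 4.0, 20)])
      feats = np.vstack([obj1, obj2, noise])          # columns: u (px), v (px), depth (m)

      X = StandardScaler().fit_transform(feats)       # put pixels and metres on one scale
      labels = DBSCAN(eps=0.4, min_samples=5).fit_predict(X)
      for lab in sorted(set(labels)):
          tag = "noise" if lab == -1 else f"cluster {lab}"
          print(tag, ":", int(np.sum(labels == lab)), "features")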

  19. Facile fabrication of controllable zinc oxide nanorod clusters on polyacrylonitrile nanofibers via repeatedly alternating immersion method

    Energy Technology Data Exchange (ETDEWEB)

    Zhou, Ying; Li, Xia; Yu, Hou-Yong, E-mail: phdyu@zstu.edu.cn [Zhejiang Sci-Tech University, The Key Laboratory of Advanced Textile Materials and Manufacturing Technology of Ministry of Education, College of Materials and Textiles (China); Hu, Guo-Liang; Yao, Ju-Ming, E-mail: yaoj@zstu.edu.cn [Zhejiang Sci-Tech University, National Engineering Lab for Textile Fiber Materials and Processing Technology (China)

    2016-12-15

    Polyacrylonitrile/zinc oxide (PAN/ZnO) composite nanofiber membranes with different ZnO morphologies were fabricated by repeatedly alternating hot–cold immersion and single alternating hot–cold immersion methods. The influence of the PAN/ZnCl{sub 2} ratio and of the different immersion methods on the morphology, microstructure, and properties of the nanofiber membranes was investigated using field-emission scanning electron microscopy (FE-SEM), Fourier-transform infrared spectroscopy (FT-IR), X-ray diffraction (XRD) analysis, thermogravimetric analysis (TGA), and ultraviolet–visible (UV–Vis) spectroscopy. A possible mechanism for the different morphologies of the PAN/ZnO nanofiber membranes obtained with different PAN/ZnCl{sub 2} ratios and immersion processes is presented. Well-dispersed ZnO nanorod clusters with a hexagonal wurtzite structure and the smallest average diameter of 115 nm were successfully anchored onto the PAN nanofiber surface for the R-7/1 membrane. Compared to S-5/1, prepared by the single alternating hot–cold immersion method, the PAN/ZnO nanofiber membranes fabricated by the repeatedly alternating hot–cold immersion method (especially R-7/1) showed improved thermal stability and high photocatalytic activity for methylene blue (MB). For R-7/1, the decomposition temperature at 5% weight loss (T{sub 5%}) increased by 43 °C, from 282 to 325 °C, relative to S-5/1; meanwhile, R-7/1 showed a higher photocatalytic degradation ratio of approximately 100% after UV light irradiation for 8 h, compared with 65% for S-5/1 even after irradiation for 14 h. Moreover, the degradation efficiency of R-7/1 remained above 94% after 3 cycles, indicating good reuse stability.

  20. Statistical Significance for Hierarchical Clustering

    Science.gov (United States)

    Kimes, Patrick K.; Liu, Yufeng; Hayes, D. Neil; Marron, J. S.

    2017-01-01

    Summary Cluster analysis has proved to be an invaluable tool for the exploratory and unsupervised analysis of high dimensional datasets. Among methods for clustering, hierarchical approaches have enjoyed substantial popularity in genomics and other fields for their ability to simultaneously uncover multiple layers of clustering structure. A critical and challenging question in cluster analysis is whether the identified clusters represent important underlying structure or are artifacts of natural sampling variation. Few approaches have been proposed for addressing this problem in the context of hierarchical clustering, for which the problem is further complicated by the natural tree structure of the partition, and the multiplicity of tests required to parse the layers of nested clusters. In this paper, we propose a Monte Carlo based approach for testing statistical significance in hierarchical clustering which addresses these issues. The approach is implemented as a sequential testing procedure guaranteeing control of the family-wise error rate. Theoretical justification is provided for our approach, and its power to detect true clustering structure is illustrated through several simulation studies and applications to two cancer gene expression datasets. PMID:28099990
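
    The sketch below illustrates the Monte Carlo principle behind such tests at a single split rather than the paper's full sequential tree procedure: the strength of a 2-means partition of the data is compared with the same statistic computed on Gaussian data simulated under a no-cluster null. The statistic, null model, and number of simulations are common simple choices, not the authors' exact ones.

      # Hedged sketch: Monte Carlo significance of a two-cluster split against a Gaussian null.
      import numpy as np
      from sklearn.cluster import KMeans

      rng = np.random.default_rng(0)

      def cluster_index(X):
          """Within-cluster SS of a 2-means partition over total SS (smaller = stronger split)."""
          km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
          return km.inertia_ / ((X - X.mean(axis=0))**2).sum()

      # toy data: two groups separated along the first coordinate
      X = np.vstack([rng.normal(0, 1, (40, 10)), rng.normal([2.5] + [0]*9, 1, (40, 10))])

      obs = cluster_index(X)
      mu, cov = X.mean(axis=0), np.cov(X, rowvar=False)      # fit the no-cluster Gaussian null
      null = [cluster_index(rng.multivariate_normal(mu, cov, X.shape[0])) for _ in range(200)]
      pval = (1 + np.sum(np.array(null) <= obs)) / (1 + len(null))
      print(f"observed cluster index {obs:.3f}, Monte Carlo p-value {pval:.3f}")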