WorldWideScience

Sample records for cluster sampling design

  1. Choosing a Cluster Sampling Design for Lot Quality Assurance Sampling Surveys.

    Directory of Open Access Journals (Sweden)

    Lauren Hund

    Full Text Available Lot quality assurance sampling (LQAS) surveys are commonly used for monitoring and evaluation in resource-limited settings. Recently several methods have been proposed to combine LQAS with cluster sampling for more timely and cost-effective data collection. For some of these methods, the standard binomial model can be used for constructing decision rules as the clustering can be ignored. For other designs, considered here, clustering is accommodated in the design phase. In this paper, we compare these latter cluster LQAS methodologies and provide recommendations for choosing a cluster LQAS design. We compare technical differences in the three methods and determine situations in which the choice of method results in a substantively different design. We consider two different aspects of the methods: the distributional assumptions and the clustering parameterization. Further, we provide software tools for implementing each method and clarify misconceptions about these designs in the literature. We illustrate the differences in these methods using vaccination and nutrition cluster LQAS surveys as example designs. The cluster methods are not sensitive to the distributional assumptions but can result in substantially different designs (sample sizes) depending on the clustering parameterization. However, none of the clustering parameterizations used in the existing methods appears to be consistent with the observed data, and, consequently, choice between the cluster LQAS methods is not straightforward. Further research should attempt to characterize clustering patterns in specific applications and provide suggestions for best-practice cluster LQAS designs on a setting-specific basis.

  2. Choosing a Cluster Sampling Design for Lot Quality Assurance Sampling Surveys.

    Science.gov (United States)

    Hund, Lauren; Bedrick, Edward J; Pagano, Marcello

    2015-01-01

    Lot quality assurance sampling (LQAS) surveys are commonly used for monitoring and evaluation in resource-limited settings. Recently several methods have been proposed to combine LQAS with cluster sampling for more timely and cost-effective data collection. For some of these methods, the standard binomial model can be used for constructing decision rules as the clustering can be ignored. For other designs, considered here, clustering is accommodated in the design phase. In this paper, we compare these latter cluster LQAS methodologies and provide recommendations for choosing a cluster LQAS design. We compare technical differences in the three methods and determine situations in which the choice of method results in a substantively different design. We consider two different aspects of the methods: the distributional assumptions and the clustering parameterization. Further, we provide software tools for implementing each method and clarify misconceptions about these designs in the literature. We illustrate the differences in these methods using vaccination and nutrition cluster LQAS surveys as example designs. The cluster methods are not sensitive to the distributional assumptions but can result in substantially different designs (sample sizes) depending on the clustering parameterization. However, none of the clustering parameterizations used in the existing methods appears to be consistent with the observed data, and, consequently, choice between the cluster LQAS methods is not straightforward. Further research should attempt to characterize clustering patterns in specific applications and provide suggestions for best-practice cluster LQAS designs on a setting-specific basis.
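    For context, the "standard binomial model" for constructing decision rules mentioned above can be sketched as a search over sample sizes and thresholds; the prevalence thresholds and risk limits below are illustrative choices, not values from the paper.

```python
# Minimal sketch of a classical (non-clustered) LQAS design search using the
# standard binomial model. All parameter values are illustrative.
from scipy.stats import binom

def lqas_design(p_low, p_high, alpha=0.10, beta=0.10, max_n=500):
    """Smallest n and decision rule d such that
    P(X <= d | p_high) <= alpha  (acceptable lot classified as poor) and
    P(X >  d | p_low)  <= beta   (poor lot classified as acceptable)."""
    for n in range(1, max_n + 1):
        for d in range(n):
            if (binom.cdf(d, n, p_high) <= alpha
                    and 1 - binom.cdf(d, n, p_low) <= beta):
                return n, d
    raise ValueError("no design found within max_n")

# e.g. classify coverage as poor (50%) versus acceptable (80%)
n, d = lqas_design(p_low=0.50, p_high=0.80)
```

    With cluster sampling, these binomial risks understate the true misclassification risks, which is what motivates the cluster LQAS designs compared in the paper.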

  3. Choosing a Cluster Sampling Design for Lot Quality Assurance Sampling Surveys

    OpenAIRE

    Hund, Lauren; Bedrick, Edward J.; Pagano, Marcello

    2015-01-01

    Lot quality assurance sampling (LQAS) surveys are commonly used for monitoring and evaluation in resource-limited settings. Recently several methods have been proposed to combine LQAS with cluster sampling for more timely and cost-effective data collection. For some of these methods, the standard binomial model can be used for constructing decision rules as the clustering can be ignored. For other designs, considered here, clustering is accommodated in the design phase. In this paper, we comp...

  4. Extending cluster lot quality assurance sampling designs for surveillance programs.

    Science.gov (United States)

    Hund, Lauren; Pagano, Marcello

    2014-07-20

    Lot quality assurance sampling (LQAS) has a long history of applications in industrial quality control. LQAS is frequently used for rapid surveillance in global health settings, with areas classified as poor or acceptable performance on the basis of the binary classification of an indicator. Historically, LQAS surveys have relied on simple random samples from the population; however, implementing two-stage cluster designs for surveillance sampling is often more cost-effective than simple random sampling. By applying survey sampling results to the binary classification procedure, we develop a simple and flexible nonparametric procedure to incorporate clustering effects into the LQAS sample design to appropriately inflate the sample size, accommodating finite numbers of clusters in the population when relevant. We use this framework to then discuss principled selection of survey design parameters in longitudinal surveillance programs. We apply this framework to design surveys to detect rises in malnutrition prevalence in nutrition surveillance programs in Kenya and South Sudan, accounting for clustering within villages. By combining historical information with data from previous surveys, we design surveys to detect spikes in the childhood malnutrition rate. Copyright © 2014 John Wiley & Sons, Ltd.
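    The sample-size inflation the abstract refers to can be illustrated with the textbook design effect for two-stage cluster samples; this is a simplified sketch, not the authors' nonparametric procedure, and the numbers are invented.

```python
# Illustrative sketch (not the authors' nonparametric procedure): inflate a
# simple-random-sample LQAS size by the usual survey design effect
# DEFF = 1 + (m - 1) * icc, where m is the per-cluster take and icc the
# intracluster correlation coefficient.
import math

def inflate_for_clustering(n_srs, cluster_take, icc):
    deff = 1 + (cluster_take - 1) * icc
    n_cluster = math.ceil(n_srs * deff)      # inflated total sample size
    n_clusters = math.ceil(n_cluster / cluster_take)
    return n_cluster, n_clusters

# e.g. an SRS design of n = 92 with 10 interviews per cluster and ICC = 0.1
n_total, k = inflate_for_clustering(92, 10, 0.10)   # deff = 1.9
```

    A finite number of clusters in the population, which the paper accommodates, would further modify this calculation.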

  5. Extending cluster Lot Quality Assurance Sampling designs for surveillance programs

    OpenAIRE

    Hund, Lauren; Pagano, Marcello

    2014-01-01

    Lot quality assurance sampling (LQAS) has a long history of applications in industrial quality control. LQAS is frequently used for rapid surveillance in global health settings, with areas classified as poor or acceptable performance based on the binary classification of an indicator. Historically, LQAS surveys have relied on simple random samples from the population; however, implementing two-stage cluster designs for surveillance sampling is often more cost-effective than ...

  6. Understanding the cluster randomised crossover design: a graphical illustration of the components of variation and a sample size tutorial.

    Science.gov (United States)

    Arnup, Sarah J; McKenzie, Joanne E; Hemming, Karla; Pilcher, David; Forbes, Andrew B

    2017-08-15

    In a cluster randomised crossover (CRXO) design, a sequence of interventions is assigned to a group, or 'cluster', of individuals. Each cluster receives each intervention in a separate period of time, forming 'cluster-periods'. Sample size calculations for CRXO trials need to account for both the cluster randomisation and crossover aspects of the design. Formulae are available for the two-period, two-intervention, cross-sectional CRXO design; however, implementation of these formulae is known to be suboptimal. The aims of this tutorial are to illustrate the intuition behind the design and to provide guidance on performing sample size calculations. Graphical illustrations are used to describe the effect of the cluster randomisation and crossover aspects of the design on the correlation between individual responses in a CRXO trial. Sample size calculations for binary and continuous outcomes are illustrated using parameters estimated from the Australia and New Zealand Intensive Care Society - Adult Patient Database (ANZICS-APD) for patient mortality and length of stay (LOS). The similarity between individual responses in a CRXO trial can be understood in terms of three components of variation: variation in cluster mean response; variation in the cluster-period mean response; and variation between individual responses within a cluster-period; or equivalently in terms of the correlation between individual responses in the same cluster-period (within-cluster within-period correlation, WPC), and between individual responses in the same cluster but in different periods (within-cluster between-period correlation, BPC). The BPC lies between zero and the WPC. When the WPC and BPC are equal, the precision gained by the crossover aspect of the CRXO design equals the precision lost by cluster randomisation. When the BPC is zero there is no advantage in a CRXO over a parallel-group cluster randomised trial.
Sample size calculations illustrate that small changes in the specification of
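    The WPC/BPC decomposition above feeds directly into a design effect. The sketch below uses the standard two-period cross-sectional CRXO design effect, DE = 1 + (m - 1)·WPC - m·BPC (after Giraudeau et al.); it may differ in detail from the tutorial's own formulae, and all numbers are illustrative.

```python
# Hedged sketch of a CRXO sample size calculation: the usual two-sample size
# for an individually randomised trial, multiplied by the two-period
# cross-sectional CRXO design effect DE = 1 + (m - 1) * wpc - m * bpc.
import math
from scipy.stats import norm

def crxo_n_per_arm(delta, sd, m, wpc, bpc, alpha=0.05, power=0.80):
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    n_ind = 2 * (z * sd / delta) ** 2        # per arm, individual RCT
    de = 1 + (m - 1) * wpc - m * bpc         # CRXO design effect
    return math.ceil(n_ind * de)

# m individuals per cluster-period; when bpc == wpc the design effect
# reduces to 1 - wpc, essentially offsetting the clustering penalty
n = crxo_n_per_arm(delta=0.25, sd=1.0, m=20, wpc=0.05, bpc=0.02)
```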

  7. Relative efficiency and sample size for cluster randomized trials with variable cluster sizes.

    Science.gov (United States)

    You, Zhiying; Williams, O Dale; Aban, Inmaculada; Kabagambe, Edmond Kato; Tiwari, Hemant K; Cutter, Gary

    2011-02-01

    The statistical power of cluster randomized trials depends on two sample size components, the number of clusters per group and the numbers of individuals within clusters (cluster size). Variable cluster sizes are common and this variation alone may have significant impact on study power. Previous approaches have taken this into account by either adjusting total sample size using a designated design effect or adjusting the number of clusters according to an assessment of the relative efficiency of unequal versus equal cluster sizes. This article defines a relative efficiency of unequal versus equal cluster sizes using noncentrality parameters, investigates properties of this measure, and proposes an approach for adjusting the required sample size accordingly. We focus on comparing two groups with normally distributed outcomes using the t-test, and use the noncentrality parameter to define the relative efficiency of unequal versus equal cluster sizes and show that statistical power depends only on this parameter for a given number of clusters. We calculate the sample size required for an unequal cluster sizes trial to have the same power as one with equal cluster sizes. Relative efficiency based on the noncentrality parameter is straightforward to calculate and easy to interpret. It connects the required mean cluster size directly to the required sample size with equal cluster sizes. Consequently, our approach first determines the sample size requirements with equal cluster sizes for a pre-specified study power and then calculates the required mean cluster size while keeping the number of clusters unchanged. Our approach allows adjustment in mean cluster size alone or simultaneous adjustment in mean cluster size and number of clusters, and is a flexible alternative, and a useful complement, to existing methods. Comparison indicated that we have defined a relative efficiency that is greater than the relative efficiency in the literature under some conditions.
Our measure
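    One common way to make this notion concrete is to compare information sums of the form m_i / (1 + (m_i - 1)·ICC) across designs; the sketch below mirrors the flavour of the noncentrality-parameter argument but is not necessarily the paper's exact definition, and the cluster sizes are invented.

```python
# Hedged sketch: relative efficiency of unequal versus equal cluster sizes as
# the ratio of information sums sum_i m_i / (1 + (m_i - 1) * icc), holding
# the number of clusters and the total sample size fixed.
def info(sizes, icc):
    return sum(m / (1 + (m - 1) * icc) for m in sizes)

def relative_efficiency(sizes, icc):
    k = len(sizes)
    mbar = sum(sizes) / k                    # mean cluster size
    return info(sizes, icc) / (k * mbar / (1 + (mbar - 1) * icc))

re = relative_efficiency([10, 20, 30, 40, 100], icc=0.05)   # below 1
```

    Equal cluster sizes give a ratio of exactly 1; the more variable the sizes, the further the ratio drops, which is what drives the required mean-cluster-size adjustment described above.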

  8. Cluster designs to assess the prevalence of acute malnutrition by lot quality assurance sampling: a validation study by computer simulation.

    Science.gov (United States)

    Olives, Casey; Pagano, Marcello; Deitchler, Megan; Hedt, Bethany L; Egge, Kari; Valadez, Joseph J

    2009-04-01

    Traditional lot quality assurance sampling (LQAS) methods require simple random sampling to guarantee valid results. However, cluster sampling has been proposed to reduce the number of random starting points. This study uses simulations to examine the classification error of two such designs, a 67x3 (67 clusters of three observations) and a 33x6 (33 clusters of six observations) sampling scheme to assess the prevalence of global acute malnutrition (GAM). Further, we explore the use of a 67x3 sequential sampling scheme for LQAS classification of GAM prevalence. Results indicate that, for independent clusters with moderate intracluster correlation for the GAM outcome, the three sampling designs maintain approximate validity for LQAS analysis. Sequential sampling can substantially reduce the average sample size that is required for data collection. The presence of intercluster correlation can impact dramatically the classification error that is associated with LQAS analysis.

  9. Microsoft Hyper-V cluster design

    CERN Document Server

    Siron, Eric

    2013-01-01

    This book is written in a friendly and practical style with numerous tutorials centred on common as well as atypical Hyper-V cluster designs. It also features a sample cluster design throughout to help you learn how to design a Hyper-V cluster in a real-world scenario. Microsoft Hyper-V Cluster Design is perfect for the systems administrator who has a good understanding of Windows Server in an Active Directory domain and is ready to expand into a highly available virtualized environment. It only expects that you will be familiar with basic hypervisor terminology.

  10. The effect of clustering on lot quality assurance sampling: a probabilistic model to calculate sample sizes for quality assessments.

    Science.gov (United States)

    Hedt-Gauthier, Bethany L; Mitsunaga, Tisha; Hund, Lauren; Olives, Casey; Pagano, Marcello

    2013-10-26

    Traditional Lot Quality Assurance Sampling (LQAS) designs assume observations are collected using simple random sampling. Alternatively, randomly sampling clusters of observations and then individuals within clusters reduces costs but decreases the precision of the classifications. In this paper, we develop a general framework for designing the cluster(C)-LQAS system and illustrate the method with the design of data quality assessments for the community health worker program in Rwanda. To determine sample size and decision rules for C-LQAS, we use the beta-binomial distribution to account for inflated risk of errors introduced by sampling clusters at the first stage. We present general theory and code for sample size calculations. The C-LQAS sample sizes provided in this paper constrain misclassification risks below user-specified limits. Multiple C-LQAS systems meet the specified risk requirements, but numerous considerations, including per-cluster versus per-individual sampling costs, help identify optimal systems for distinct applications. We show the utility of C-LQAS for data quality assessments, but the method generalizes to numerous applications. This paper provides the necessary technical detail and supplemental code to support the design of C-LQAS for specific programs.
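    The beta-binomial risk check at the core of this approach can be sketched as follows; this is a simplified whole-sample version (the paper's formulation operates on sampled clusters), and the design parameters are illustrative, not the Rwanda assessment's.

```python
# Sketch of the core C-LQAS calculation: under a beta-binomial model with
# pairwise intracluster correlation rho, evaluate the two misclassification
# risks for a candidate design (n observations, decision rule d).
from scipy.stats import betabinom

def ab(p, rho):
    # beta parameters giving mean p and pairwise correlation rho,
    # since corr = 1 / (a + b + 1) and a + b = (1 - rho) / rho
    return p * (1 - rho) / rho, (1 - p) * (1 - rho) / rho

def risks(n, d, p_low, p_high, rho):
    a_hi, b_hi = ab(p_high, rho)
    a_lo, b_lo = ab(p_low, rho)
    alpha = betabinom.cdf(d, n, a_hi, b_hi)       # good lot classified bad
    beta = 1 - betabinom.cdf(d, n, a_lo, b_lo)    # bad lot classified good
    return alpha, beta

alpha, beta = risks(n=60, d=39, p_low=0.50, p_high=0.80, rho=0.05)
```

    Increasing rho fattens both tails, so a design that meets its risk limits under simple random sampling can fail them under clustering; this is the "inflated risk of errors" the abstract describes.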

  11. Systematic Sampling and Cluster Sampling of Packet Delays

    OpenAIRE

    Lindh, Thomas

    2006-01-01

    Based on experiences of a traffic flow performance meter, this paper suggests and evaluates cluster sampling and systematic sampling as methods to estimate average packet delays. Systematic sampling facilitates, for example, time analysis, frequency analysis and jitter measurements. Cluster sampling with repeated trains of periodically spaced sampling units separated by random starting periods, and systematic sampling, are evaluated with respect to accuracy and precision. Packet delay traces have been ...

  12. Stochastic coupled cluster theory: Efficient sampling of the coupled cluster expansion

    Science.gov (United States)

    Scott, Charles J. C.; Thom, Alex J. W.

    2017-09-01

    We consider the sampling of the coupled cluster expansion within stochastic coupled cluster theory. Observing the limitations of previous approaches due to the inherently non-linear behavior of a coupled cluster wavefunction representation, we propose new approaches based on an intuitive, well-defined condition for sampling weights and on sampling the expansion in cluster operators of different excitation levels. We term these modifications even and truncated selections, respectively. Utilising both approaches demonstrates dramatically improved calculation stability as well as reduced computational and memory costs. These modifications are particularly effective at higher truncation levels owing to the large number of terms within the cluster expansion that can be neglected, as demonstrated by the reduction of the number of terms to be sampled when truncating at triple excitations by 77% and hextuple excitations by 98%.

  13. Precision of systematic and random sampling in clustered populations: habitat patches and aggregating organisms.

    Science.gov (United States)

    McGarvey, Richard; Burch, Paul; Matthews, Janet M

    2016-01-01

    Natural populations of plants and animals spatially cluster because (1) suitable habitat is patchy, and (2) within suitable habitat, individuals aggregate further into clusters of higher density. We compare the precision of random and systematic field sampling survey designs under these two processes of species clustering. Second, we evaluate the performance of 13 estimators for the variance of the sample mean from a systematic survey. Replicated simulated surveys, as counts from 100 transects, allocated either randomly or systematically within the study region, were used to estimate population density in six spatial point populations including habitat patches and Matérn circular clustered aggregations of organisms, together and in combination. The standard one-start aligned systematic survey design, a uniform 10 × 10 grid of transects, was much more precise. Variances of the 10 000 replicated systematic survey mean densities were one-third to one-fifth of those from randomly allocated transects, implying transect sample sizes giving equivalent precision by random survey would need to be three to five times larger. Organisms being restricted to patches of habitat was alone sufficient to yield this precision advantage for the systematic design. But this improved precision for systematic sampling in clustered populations is underestimated by standard variance estimators used to compute confidence intervals. True variance for the survey sample mean was computed from the variance of 10 000 simulated survey mean estimates. Testing 10 published and three newly proposed variance estimators, the two that corrected for inter-transect correlation (ν₈ and ν_W) were the most accurate and also the most precise in clustered populations. These greatly outperformed the two "post-stratification" variance estimators (ν₂ and ν₃) that are now more commonly applied in systematic surveys. Similar variance estimator performance rankings were found with
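    The headline comparison, systematic transects beating random allocation in a patchy population, is easy to reproduce in miniature; the one-dimensional population and survey below are invented stand-ins for the paper's simulated populations.

```python
# Small simulation in the spirit of the comparison above: sample 10 of 100
# cells from a patchy population either by simple random sampling or by
# one-start aligned systematic sampling, and compare the variance of the
# resulting mean-density estimates. Population and numbers are illustrative.
import numpy as np

rng = np.random.default_rng(0)
x = np.arange(100)
# patchy population: high density inside habitat patches, near zero outside
density = np.where((x % 35) < 8, 50.0, 1.0) + rng.poisson(2.0, 100)

# all 10 possible aligned systematic samples (start offset 0..9, step 10)
sys_means = np.array([density[s::10].mean() for s in range(10)])

# 10,000 replicated simple random samples of the same size
srs_means = np.array([rng.choice(density, 10, replace=False).mean()
                      for _ in range(10_000)])

var_sys = sys_means.var()   # variance over the possible systematic samples
var_srs = srs_means.var()   # variance over replicated random samples
```

    The systematic grid spreads transects across patches in every realisation, so its estimator variance is a fraction of the random design's, mirroring the paper's three-to-five-fold precision advantage.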

  14. Sample design effects in landscape genetics

    Science.gov (United States)

    Oyler-McCance, Sara J.; Fedy, Bradley C.; Landguth, Erin L.

    2012-01-01

    An important research gap in landscape genetics is the impact of different field sampling designs on the ability to detect the effects of landscape pattern on gene flow. We evaluated how five different sampling regimes (random, linear, systematic, cluster, and single study site) affected the probability of correctly identifying the generating landscape process of population structure. Sampling regimes were chosen to represent a suite of designs common in field studies. We used genetic data generated from a spatially-explicit, individual-based program and simulated gene flow in a continuous population across a landscape with gradual spatial changes in resistance to movement. Additionally, we evaluated the sampling regimes using realistic and obtainable numbers of loci (10 and 20), number of alleles per locus (5 and 10), number of individuals sampled (10-300), and generational time after the landscape was introduced (20 and 400). For a simulated continuously distributed species, we found that random, linear, and systematic sampling regimes performed well with high sample sizes (>200), levels of polymorphism (10 alleles per locus), and number of molecular markers (20). The cluster and single study site sampling regimes were not able to correctly identify the generating process under any conditions and thus, are not advisable strategies for scenarios similar to our simulations. Our research emphasizes the importance of sampling data at ecologically appropriate spatial and temporal scales and suggests careful consideration for sampling near landscape components that are likely to most influence the genetic structure of the species. In addition, simulating sampling designs a priori could help guide field data collection efforts.

  15. Re-estimating sample size in cluster randomized trials with active recruitment within clusters

    NARCIS (Netherlands)

    van Schie, Sander; Moerbeek, Mirjam

    2014-01-01

    Often only a limited number of clusters can be obtained in cluster randomised trials, although many potential participants can be recruited within each cluster. Thus, active recruitment is feasible within the clusters. To obtain an efficient sample size in a cluster randomised trial, the cluster

  16. Group sequential designs for stepped-wedge cluster randomised trials.

    Science.gov (United States)

    Grayling, Michael J; Wason, James Ms; Mander, Adrian P

    2017-10-01

    The stepped-wedge cluster randomised trial design has received substantial attention in recent years. Although various extensions to the original design have been proposed, no guidance is available on the design of stepped-wedge cluster randomised trials with interim analyses. In an individually randomised trial setting, group sequential methods can provide notable efficiency gains and ethical benefits. We address this by discussing how established group sequential methodology can be adapted for stepped-wedge designs. Utilising the error spending approach to group sequential trial design, we detail the assumptions required for the determination of stepped-wedge cluster randomised trials with interim analyses. We consider early stopping for efficacy, futility, or efficacy and futility. We describe first how this can be done for any specified linear mixed model for data analysis. We then focus on one particular commonly utilised model and, using a recently completed stepped-wedge cluster randomised trial, compare the performance of several designs with interim analyses to the classical stepped-wedge design. Finally, the performance of a quantile substitution procedure for dealing with the case of unknown variance is explored. We demonstrate that the incorporation of early stopping in stepped-wedge cluster randomised trial designs could reduce the expected sample size under the null and alternative hypotheses by up to 31% and 22%, respectively, with no cost to the trial's type-I and type-II error rates. The use of restricted error maximum likelihood estimation was found to be more important than quantile substitution for controlling the type-I error rate. The addition of interim analyses into stepped-wedge cluster randomised trials could help guard against time-consuming trials conducted on poor performing treatments and also help expedite the implementation of efficacious treatments. 
In future, trialists should consider incorporating early stopping of some kind into

  17. Creel survey sampling designs for estimating effort in short-duration Chinook salmon fisheries

    Science.gov (United States)

    McCormick, Joshua L.; Quist, Michael C.; Schill, Daniel J.

    2013-01-01

    Chinook Salmon Oncorhynchus tshawytscha sport fisheries in the Columbia River basin are commonly monitored using roving creel survey designs and require precise, unbiased catch estimates. The objective of this study was to examine the relative bias and precision of total catch estimates using various sampling designs to estimate angling effort under the assumption that mean catch rate was known. We obtained information on angling populations based on direct visual observations of portions of Chinook Salmon fisheries in three Idaho river systems over a 23-d period. Based on the angling population, Monte Carlo simulations were used to evaluate the properties of effort and catch estimates for each sampling design. All sampling designs evaluated were relatively unbiased. Systematic random sampling (SYS) resulted in the most precise estimates. The SYS and simple random sampling designs had mean square error (MSE) estimates that were generally half of those observed with cluster sampling designs. The SYS design was more efficient (i.e., higher accuracy per unit cost) than a two-cluster design. Increasing the number of clusters available for sampling within a day decreased the MSE of estimates of daily angling effort, but the MSE of total catch estimates was variable depending on the fishery. The results of our simulations provide guidelines on the relative influence of sample sizes and sampling designs on parameters of interest in short-duration Chinook Salmon fisheries.

  18. Clustering of samples and elements based on multi-variable chemical data

    International Nuclear Information System (INIS)

    Op de Beeck, J.

    1984-01-01

    Clustering and classification are defined in the context of multivariable chemical analysis data. Classical multi-variate techniques, commonly used to interpret such data, are shown to be based on probabilistic and geometrical principles which are not justified for analytical data, since in that case one assumes or expects a system of more or less systematically related objects (samples) as defined by measurements on more or less systematically interdependent variables (elements). For the specific analytical problem of a data set concerning a large number of trace elements determined in a large number of samples, a deterministic cluster analysis can be used to develop the underlying classification structure. Three main steps can be distinguished: diagnostic evaluation and preprocessing of the raw input data; computation of a symmetric matrix with pairwise standardized dissimilarity values between all possible pairs of samples and/or elements; and an ultrametric clustering strategy to produce the final classification as a dendrogram. The software packages designed to perform these tasks are discussed and final results are given. Conclusions are formulated concerning the dangers of using multivariate, clustering and classification software packages as a black box.
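    The three steps described (preprocessing, a standardized pairwise dissimilarity matrix, and ultrametric clustering to a dendrogram) map directly onto standard tooling; a minimal sketch with synthetic trace-element data (all values invented):

```python
# Preprocess -> pairwise dissimilarities -> ultrametric clustering, on
# synthetic "trace element" measurements for two groups of samples.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

rng = np.random.default_rng(1)
# 8 samples x 5 "elements": two groups with distinct element profiles
samples = np.vstack([rng.normal(0.0, 1.0, (4, 5)),
                     rng.normal(5.0, 1.0, (4, 5))])

z = (samples - samples.mean(axis=0)) / samples.std(axis=0)  # standardize
d = pdist(z, metric="euclidean")       # pairwise dissimilarity matrix
tree = linkage(d, method="average")    # ultrametric clustering (UPGMA)
labels = fcluster(tree, t=2, criterion="maxclust")
```

    The `tree` array encodes the dendrogram; cutting it at two clusters recovers the two synthetic groups.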

  19. Spectral embedded clustering: a framework for in-sample and out-of-sample spectral clustering.

    Science.gov (United States)

    Nie, Feiping; Zeng, Zinan; Tsang, Ivor W; Xu, Dong; Zhang, Changshui

    2011-11-01

    Spectral clustering (SC) methods have been successfully applied to many real-world applications. The success of these SC methods is largely based on the manifold assumption, namely, that two nearby data points in the high-density region of a low-dimensional data manifold have the same cluster label. However, such an assumption might not always hold on high-dimensional data. When the data do not exhibit a clear low-dimensional manifold structure (e.g., high-dimensional and sparse data), the clustering performance of SC will be degraded and become even worse than K -means clustering. In this paper, motivated by the observation that the true cluster assignment matrix for high-dimensional data can be always embedded in a linear space spanned by the data, we propose the spectral embedded clustering (SEC) framework, in which a linearity regularization is explicitly added into the objective function of SC methods. More importantly, the proposed SEC framework can naturally deal with out-of-sample data. We also present a new Laplacian matrix constructed from a local regression of each pattern and incorporate it into our SEC framework to capture both local and global discriminative information for clustering. Comprehensive experiments on eight real-world high-dimensional datasets demonstrate the effectiveness and advantages of our SEC framework over existing SC methods and K-means-based clustering methods. Our SEC framework significantly outperforms SC using the Nyström algorithm on unseen data.
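    The manifold assumption discussed above is easy to see with plain spectral clustering (this is the SC baseline, not the SEC framework itself): on data with a clear low-dimensional manifold structure, SC recovers clusters that K-means misses.

```python
# Baseline illustration of the manifold assumption: spectral clustering with
# a nearest-neighbour affinity separates the two moons; K-means does not.
import numpy as np
from sklearn.cluster import SpectralClustering, KMeans
from sklearn.datasets import make_moons
from sklearn.metrics import adjusted_rand_score

X, y = make_moons(n_samples=300, noise=0.05, random_state=0)

sc = SpectralClustering(n_clusters=2, affinity="nearest_neighbors",
                        n_neighbors=10, random_state=0).fit(X)
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

ari_sc = adjusted_rand_score(y, sc.labels_)   # near 1: correct recovery
ari_km = adjusted_rand_score(y, km.labels_)   # much lower
```

    On high-dimensional data without such manifold structure the ordering can reverse, which is the failure mode the SEC framework is designed to address.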

  20. Performance of small cluster surveys and the clustered LQAS design to estimate local-level vaccination coverage in Mali.

    Science.gov (United States)

    Minetti, Andrea; Riera-Montes, Margarita; Nackers, Fabienne; Roederer, Thomas; Koudika, Marie Hortense; Sekkenes, Johanne; Taconet, Aurore; Fermon, Florence; Touré, Albouhary; Grais, Rebecca F; Checchi, Francesco

    2012-10-12

    Estimation of vaccination coverage at the local level is essential to identify communities that may require additional support. Cluster surveys can be used in resource-poor settings, when population figures are inaccurate. To be feasible, cluster samples need to be small, without losing robustness of results. The clustered LQAS (CLQAS) approach has been proposed as an alternative, as smaller sample sizes are required. We explored (i) the efficiency of cluster surveys of decreasing sample size through bootstrapping analysis and (ii) the performance of CLQAS under three alternative sampling plans to classify local VC, using data from a survey carried out in Mali after mass vaccination against meningococcal meningitis group A. VC estimates provided by a 10 × 15 cluster survey design were reasonably robust. We used them to classify health areas in three categories and guide mop-up activities: i) health areas not requiring supplemental activities; ii) health areas requiring additional vaccination; iii) health areas requiring further evaluation. As sample size decreased (from 10 × 15 to 10 × 3), standard error of VC and ICC estimates were increasingly unstable. Results of CLQAS simulations were not accurate for most health areas, with an overall risk of misclassification greater than 0.25 in one health area out of three. It was greater than 0.50 in one health area out of two under two of the three sampling plans. Small sample cluster surveys (10 × 15) are acceptably robust for classification of VC at local level. We do not recommend the CLQAS method as currently formulated for evaluating vaccination programmes.

  1. Cluster Dynamics: Laying the Foundation for Tailoring the Design of Cluster ASSE

    Science.gov (United States)

    2016-02-25

    AFRL-AFOSR-VA-TR-2016-0081: Cluster Dynamics: Laying the Foundation for Tailoring the Design of Cluster Assembled Nanoscale Materials. Albert Castleman, Pennsylvania State University, 15-10-2015. ... clusters as the building blocks of new materials with tailored properties that are beneficial to the AFOSR. Our continuing program is composed of two

  2. Changing cluster composition in cluster randomised controlled trials: design and analysis considerations

    Science.gov (United States)

    2014-01-01

    Background There are many methodological challenges in the conduct and analysis of cluster randomised controlled trials, but one that has received little attention is that of post-randomisation changes to cluster composition. To illustrate this, we focus on the issue of cluster merging, considering the impact on the design, analysis and interpretation of trial outcomes. Methods We explored the effects of merging clusters on study power using standard methods of power calculation. We assessed the potential impacts on study findings of both homogeneous cluster merges (involving clusters randomised to the same arm of a trial) and heterogeneous merges (involving clusters randomised to different arms of a trial) by simulation. To determine the impact on bias and precision of treatment effect estimates, we applied standard methods of analysis to different populations under analysis. Results Cluster merging produced a systematic reduction in study power. This effect depended on the number of merges and was most pronounced when variability in cluster size was at its greatest. Simulations demonstrate that the impact on analysis was minimal when cluster merges were homogeneous, with impact on study power being balanced by a change in observed intracluster correlation coefficient (ICC). We found a decrease in study power when cluster merges were heterogeneous, and the estimate of treatment effect was attenuated. Conclusions Examples of cluster merges found in previously published reports of cluster randomised trials were typically homogeneous rather than heterogeneous. Simulations demonstrated that trial findings in such cases would be unbiased. However, simulations also showed that any heterogeneous cluster merges would introduce bias that would be hard to quantify, as well as having negative impacts on the precision of estimates obtained. Further methodological development is warranted to better determine how to analyse such trials appropriately. Interim recommendations
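    The power loss from merging can be illustrated with effective sample sizes under the usual design effect 1 + (m - 1)·ICC; a hedged sketch with invented numbers (homogeneous merges, as in the paper's simulations):

```python
# Merging clusters keeps total n fixed but enlarges clusters, inflating the
# design effect and shrinking the effective sample size. Numbers invented.
def effective_n(cluster_sizes, icc):
    return sum(m / (1 + (m - 1) * icc) for m in cluster_sizes)

before = [20] * 10            # 10 clusters of 20 individuals
after = [40] * 3 + [20] * 4   # three pairwise (homogeneous) merges

eff_before = effective_n(before, icc=0.05)
eff_after = effective_n(after, icc=0.05)
```

    The drop in effective sample size grows with the number of merges and with cluster-size variability, consistent with the systematic power reduction reported above; heterogeneous merges additionally bias the treatment effect, which this sketch does not capture.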

  3. Performance of small cluster surveys and the clustered LQAS design to estimate local-level vaccination coverage in Mali

    Directory of Open Access Journals (Sweden)

    Minetti Andrea

    2012-10-01

    Full Text Available Abstract Background Estimation of vaccination coverage at the local level is essential to identify communities that may require additional support. Cluster surveys can be used in resource-poor settings, when population figures are inaccurate. To be feasible, cluster samples need to be small, without losing robustness of results. The clustered LQAS (CLQAS) approach has been proposed as an alternative, as smaller sample sizes are required. Methods We explored (i) the efficiency of cluster surveys of decreasing sample size through bootstrapping analysis and (ii) the performance of CLQAS under three alternative sampling plans to classify local VC, using data from a survey carried out in Mali after mass vaccination against meningococcal meningitis group A. Results VC estimates provided by a 10 × 15 cluster survey design were reasonably robust. We used them to classify health areas in three categories and guide mop-up activities: (i) health areas not requiring supplemental activities; (ii) health areas requiring additional vaccination; (iii) health areas requiring further evaluation. As sample size decreased (from 10 × 15 to 10 × 3), standard errors of VC and ICC estimates became increasingly unstable. Results of CLQAS simulations were not accurate for most health areas, with an overall risk of misclassification greater than 0.25 in one health area out of three. It was greater than 0.50 in one health area out of two under two of the three sampling plans. Conclusions Small sample cluster surveys (10 × 15) are acceptably robust for classification of VC at the local level. We do not recommend the CLQAS method as currently formulated for evaluating vaccination programmes.

  4. Precision, time, and cost: a comparison of three sampling designs in an emergency setting

    Science.gov (United States)

    Deitchler, Megan; Deconinck, Hedwig; Bergeron, Gilles

    2008-01-01

    The conventional method to collect data on the health, nutrition, and food security status of a population affected by an emergency is a 30 × 30 cluster survey. This sampling method can be time and resource intensive and, accordingly, may not be the most appropriate one when data are needed rapidly for decision making. In this study, we compare the precision, time and cost of the 30 × 30 cluster survey with two alternative sampling designs: a 33 × 6 cluster design (33 clusters, 6 observations per cluster) and a 67 × 3 cluster design (67 clusters, 3 observations per cluster). Data for each sampling design were collected concurrently in West Darfur, Sudan in September-October 2005 in an emergency setting. Results of the study show the 30 × 30 design to provide more precise results (i.e. narrower 95% confidence intervals) than the 33 × 6 and 67 × 3 designs for most child-level indicators. Exceptions are indicators of immunization and vitamin A capsule supplementation coverage, which show a high intra-cluster correlation. Although the 33 × 6 and 67 × 3 designs provide wider confidence intervals than the 30 × 30 design for child anthropometric indicators, the 33 × 6 and 67 × 3 designs provide the opportunity to conduct an LQAS hypothesis test to detect whether or not a critical threshold of global acute malnutrition prevalence has been exceeded, whereas the 30 × 30 design does not. For the household-level indicators tested in this study, the 67 × 3 design provides the most precise results. However, our results show that neither the 33 × 6 nor the 67 × 3 design is appropriate for assessing indicators of mortality. In this field application, data collection for the 33 × 6 and 67 × 3 designs required substantially less time and cost than that required for the 30 × 30 design. The findings of this study suggest the 33 × 6 and 67 × 3 designs can provide useful time- and resource-saving alternatives to the 30 × 30 method of data collection in emergency settings.
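The precision differences between the three designs can be approximated with the equal-cluster-size design effect DEFF = 1 + (m − 1)·ICC. The sketch below uses an assumed prevalence and ICC (illustrative values, not the Darfur estimates) to show why, for a given ICC, designs with larger clusters pay a larger clustering penalty:

```python
import math

def cluster_se(p, clusters, per_cluster, icc):
    """Standard error of an estimated proportion under cluster sampling,
    using the equal-cluster-size design effect DEFF = 1 + (m - 1) * icc."""
    n = clusters * per_cluster
    deff = 1 + (per_cluster - 1) * icc
    return math.sqrt(deff * p * (1 - p) / n)

p, icc = 0.20, 0.05  # assumed prevalence and intra-cluster correlation
for c, m in [(30, 30), (33, 6), (67, 3)]:
    half_width = 1.96 * cluster_se(p, c, m, icc)
    print(f"{c} x {m}: n = {c * m}, 95% CI half-width = {half_width:.3f}")
```

Under this simple model the 30 × 30 design still wins on precision because its much larger total n outweighs its larger design effect; the field results in the abstract (e.g. household-level indicators favouring 67 × 3) reflect indicator-specific ICCs that this one-number sketch does not capture.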

  5. Precision, time, and cost: a comparison of three sampling designs in an emergency setting

    Directory of Open Access Journals (Sweden)

    Deconinck Hedwig

    2008-05-01

    Full Text Available Abstract The conventional method to collect data on the health, nutrition, and food security status of a population affected by an emergency is a 30 × 30 cluster survey. This sampling method can be time and resource intensive and, accordingly, may not be the most appropriate one when data are needed rapidly for decision making. In this study, we compare the precision, time and cost of the 30 × 30 cluster survey with two alternative sampling designs: a 33 × 6 cluster design (33 clusters, 6 observations per cluster) and a 67 × 3 cluster design (67 clusters, 3 observations per cluster). Data for each sampling design were collected concurrently in West Darfur, Sudan in September-October 2005 in an emergency setting. Results of the study show the 30 × 30 design to provide more precise results (i.e. narrower 95% confidence intervals) than the 33 × 6 and 67 × 3 designs for most child-level indicators. Exceptions are indicators of immunization and vitamin A capsule supplementation coverage, which show a high intra-cluster correlation. Although the 33 × 6 and 67 × 3 designs provide wider confidence intervals than the 30 × 30 design for child anthropometric indicators, the 33 × 6 and 67 × 3 designs provide the opportunity to conduct an LQAS hypothesis test to detect whether or not a critical threshold of global acute malnutrition prevalence has been exceeded, whereas the 30 × 30 design does not. For the household-level indicators tested in this study, the 67 × 3 design provides the most precise results. However, our results show that neither the 33 × 6 nor the 67 × 3 design is appropriate for assessing indicators of mortality. In this field application, data collection for the 33 × 6 and 67 × 3 designs required substantially less time and cost than that required for the 30 × 30 design. The findings of this study suggest the 33 × 6 and 67 × 3 designs can provide useful time- and resource-saving alternatives to the 30 × 30 method of data collection in emergency settings.

  6. Evaluation of primary immunization coverage of infants under universal immunization programme in an urban area of Bangalore city using cluster sampling and lot quality assurance sampling techniques

    Directory of Open Access Journals (Sweden)

    Punith K

    2008-01-01

    Full Text Available Research Question: Is the LQAS technique better than the cluster sampling technique in terms of resources to evaluate the immunization coverage in an urban area? Objective: To assess and compare lot quality assurance sampling against cluster sampling in the evaluation of primary immunization coverage. Study Design: Population-based cross-sectional study. Study Setting: Areas under Mathikere Urban Health Center. Study Subjects: Children aged 12 months to 23 months. Sample Size: 220 in cluster sampling, 76 in lot quality assurance sampling. Statistical Analysis: Percentages and proportions, chi-square test. Results: (1) Using cluster sampling, the percentages of completely immunized, partially immunized and unimmunized children were 84.09%, 14.09% and 1.82%, respectively. With lot quality assurance sampling, they were 92.11%, 6.58% and 1.31%, respectively. (2) Immunization coverage levels as evaluated by the cluster sampling technique were not statistically different from the coverage values obtained by the lot quality assurance sampling technique. Considering the time and resources required, lot quality assurance sampling was found to be a better technique for evaluating primary immunization coverage in an urban area.

  7. Sample size calculations for cluster randomised crossover trials in Australian and New Zealand intensive care research.

    Science.gov (United States)

    Arnup, Sarah J; McKenzie, Joanne E; Pilcher, David; Bellomo, Rinaldo; Forbes, Andrew B

    2018-06-01

    The cluster randomised crossover (CRXO) design provides an opportunity to conduct randomised controlled trials to evaluate low risk interventions in the intensive care setting. Our aim is to provide a tutorial on how to perform a sample size calculation for a CRXO trial, focusing on the meaning of the elements required for the calculations, with application to intensive care trials. We use all-cause in-hospital mortality from the Australian and New Zealand Intensive Care Society Adult Patient Database clinical registry to illustrate the sample size calculations. We show sample size calculations for a two-intervention, two-period (12 months per period), cross-sectional CRXO trial. We provide the formulae, and examples of their use, to determine the number of intensive care units required to detect a risk ratio (RR) with a designated level of power between two interventions for trials in which the elements required for sample size calculations remain constant across all ICUs (unstratified design); and in which there are distinct groups (strata) of ICUs that differ importantly in the elements required for sample size calculations (stratified design). The CRXO design markedly reduces the sample size requirement compared with the parallel-group, cluster randomised design for the example cases. The stratified design further reduces the sample size requirement compared with the unstratified design. The CRXO design enables the evaluation of routinely used interventions that can bring about small, but important, improvements in patient care in the intensive care setting.
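As a rough sketch of the kind of calculation such a tutorial covers (not the authors' exact formulae), the number of clusters for a two-period cross-sectional CRXO trial can be approximated with a design effect of the form DEFF = 1 + (m − 1)·ρ − m·ρ_b, where ρ is the within-period intra-cluster correlation and ρ_b the between-period within-cluster correlation. All numerical inputs below are illustrative assumptions:

```python
import math

def crxo_clusters(p0, p1, m, icc, rho_b, z_alpha=1.96, z_beta=0.84):
    """Approximate number of clusters (e.g. ICUs) for a two-period,
    cross-sectional cluster randomised crossover trial, binary outcome.

    Uses DEFF = 1 + (m - 1)*icc - m*rho_b, with icc the within-period
    intra-cluster correlation and rho_b (<= icc) the between-period
    within-cluster correlation. Defaults: two-sided alpha 0.05, power 0.80."""
    n_arm = (z_alpha + z_beta) ** 2 * (p0 * (1 - p0) + p1 * (1 - p1)) / (p1 - p0) ** 2
    deff = 1 + (m - 1) * icc - m * rho_b
    # each cluster contributes m subjects to each of the two periods,
    # so each intervention arm accrues m subjects per cluster
    return math.ceil(n_arm * deff / m)

# illustrative values only: 10% vs 8.5% in-hospital mortality,
# 300 admissions per ICU-period, icc = 0.03, between-period correlation 0.025
k_crxo = crxo_clusters(0.10, 0.085, 300, 0.03, 0.025)
k_parallel = crxo_clusters(0.10, 0.085, 300, 0.03, 0.0)  # rho_b = 0: no crossover benefit
# the crossover term (- m * rho_b) is what drives the large sample size saving
```

Setting ρ_b to zero recovers the parallel cluster-trial design effect, which reproduces the abstract's point that the CRXO design markedly reduces the required number of ICUs.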

  8. Electronic structure and properties of designer clusters and cluster-assemblies

    International Nuclear Information System (INIS)

    Khanna, S.N.; Jena, P.

    1995-01-01

    Using self-consistent calculations based on density functional theory, we demonstrate that electronic shell filling and close atomic packing criteria can be used to design ultra-stable clusters. Interaction of these clusters with each other and with gas atoms is found to be weak confirming their chemical inertness. A crystal composed of these inert clusters is expected to have electronic properties that are markedly different from crystals where atoms are the building blocks. The recent observation of ferromagnetism in potassium clusters assembled in zeolite cages is discussed. (orig.)

  9. Occurrence of Radio Minihalos in a Mass-limited Sample of Galaxy Clusters

    Energy Technology Data Exchange (ETDEWEB)

    Giacintucci, Simona; Clarke, Tracy E. [Naval Research Laboratory, 4555 Overlook Avenue SW, Code 7213, Washington, DC 20375 (United States); Markevitch, Maxim [NASA/Goddard Space Flight Center, Greenbelt, MD 20771 (United States); Cassano, Rossella; Venturi, Tiziana; Brunetti, Gianfranco, E-mail: simona.giacintucci@nrl.navy.mil [INAF—Istituto di Radioastronomia, via Gobetti 101, I-40129 Bologna (Italy)

    2017-06-01

    We investigate the occurrence of radio minihalos—diffuse radio sources of unknown origin observed in the cores of some galaxy clusters—in a statistical sample of 58 clusters drawn from the Planck Sunyaev–Zel’dovich cluster catalog using a mass cut (M_500 > 6 × 10^14 M_⊙). We supplement our statistical sample with a similarly sized nonstatistical sample mostly consisting of clusters in the ACCEPT X-ray catalog with suitable X-ray and radio data, which includes lower-mass clusters. Where necessary (for nine clusters), we reanalyzed the Very Large Array archival radio data to determine whether a minihalo is present. Our total sample includes all 28 currently known and recently discovered radio minihalos, including six candidates. We classify clusters as cool-core or non-cool-core according to the value of the specific entropy floor in the cluster center, rederived or newly derived from the Chandra X-ray density and temperature profiles where necessary (for 27 clusters). Contrary to the common wisdom that minihalos are rare, we find that almost all cool cores—at least 12 out of 15 (80%)—in our complete sample of massive clusters exhibit minihalos. The supplementary sample shows that the occurrence of minihalos may be lower in lower-mass cool-core clusters. No minihalos are found in non-cool cores or “warm cores.” These findings will help test theories of the origin of minihalos and provide information on the physical processes and energetics of the cluster cores.

  10. Stratified sampling design based on data mining.

    Science.gov (United States)

    Kim, Yeonkook J; Oh, Yoonhwan; Park, Sunghoon; Cho, Sungzoon; Park, Hayoung

    2013-09-01

    To explore classification rules based on data mining methodologies which are to be used in defining strata in stratified sampling of healthcare providers with improved sampling efficiency. We performed k-means clustering to group providers with similar characteristics, then, constructed decision trees on cluster labels to generate stratification rules. We assessed the variance explained by the stratification proposed in this study and by conventional stratification to evaluate the performance of the sampling design. We constructed a study database from health insurance claims data and providers' profile data made available to this study by the Health Insurance Review and Assessment Service of South Korea, and population data from Statistics Korea. From our database, we used the data for single specialty clinics or hospitals in two specialties, general surgery and ophthalmology, for the year 2011 in this study. Data mining resulted in five strata in general surgery with two stratification variables, the number of inpatients per specialist and population density of provider location, and five strata in ophthalmology with two stratification variables, the number of inpatients per specialist and number of beds. The percentages of variance in annual changes in the productivity of specialists explained by the stratification in general surgery and ophthalmology were 22% and 8%, respectively, whereas conventional stratification by the type of provider location and number of beds explained 2% and 0.2% of variance, respectively. This study demonstrated that data mining methods can be used in designing efficient stratified sampling with variables readily available to the insurer and government; it offers an alternative to the existing stratification method that is widely used in healthcare provider surveys in South Korea.
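A minimal sketch of the two-step idea (k-means to group providers, then strata evaluated by the share of outcome variance they explain) is given below. It omits the decision-tree rule-extraction step, and all provider profiles and productivity values are toy data, not the Korean claims data:

```python
import statistics

def kmeans(points, centroids, iters=20):
    """Minimal k-means, deterministic given the initial centroids."""
    for _ in range(iters):
        groups = [[] for _ in centroids]
        for p in points:
            j = min(range(len(centroids)),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centroids[c])))
            groups[j].append(p)
        centroids = [tuple(statistics.mean(dim) for dim in zip(*g)) if g else c
                     for g, c in zip(groups, centroids)]
    return [min(range(len(centroids)),
                key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centroids[c])))
            for p in points]

def variance_explained(outcome, labels):
    """Share of outcome variance explained by the strata (1 - within/total)."""
    total = statistics.pvariance(outcome)
    groups = {}
    for y, s in zip(outcome, labels):
        groups.setdefault(s, []).append(y)
    within = sum(len(g) * statistics.pvariance(g) for g in groups.values()) / len(outcome)
    return 1 - within / total

# toy provider profiles: (inpatients per specialist, number of beds)
providers = [(5, 10), (6, 12), (5, 11), (40, 80), (42, 85), (38, 90)]
productivity_change = [1.0, 1.1, 0.9, 3.0, 3.2, 2.9]
labels = kmeans(providers, centroids=[(0, 0), (50, 100)])
r2 = variance_explained(productivity_change, labels)
# strata that are homogeneous in the outcome explain most of its variance
```

In the paper's pipeline, a decision tree fitted on the cluster labels would then convert these groupings into explicit stratification rules on the two profile variables.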

  11. The Hubble Space Telescope Medium Deep Survey Cluster Sample: Methodology and Data

    Science.gov (United States)

    Ostrander, E. J.; Nichol, R. C.; Ratnatunga, K. U.; Griffiths, R. E.

    1998-12-01

    We present a new, objectively selected, sample of galaxy overdensities detected in the Hubble Space Telescope Medium Deep Survey (MDS). These clusters/groups were found using an automated procedure that involved searching for statistically significant galaxy overdensities. The contrast of the clusters against the field galaxy population is increased when morphological data are used to search around bulge-dominated galaxies. In total, we present 92 overdensities above a probability threshold of 99.5%. We show, via extensive Monte Carlo simulations, that at least 60% of these overdensities are likely to be real clusters and groups and not random line-of-sight superpositions of galaxies. For each overdensity in the MDS cluster sample, we provide a richness and the average of the bulge-to-total ratio of galaxies within each system. This MDS cluster sample potentially contains some of the most distant clusters/groups ever detected, with about 25% of the overdensities having estimated redshifts z > ~0.9. We have made this sample publicly available to facilitate spectroscopic confirmation of these clusters and help more detailed studies of cluster and galaxy evolution. We also report the serendipitous discovery of a new cluster close on the sky to the rich optical cluster Cl 0016+16 at z = 0.546. This new overdensity, HST 001831+16208, may be coincident with both an X-ray source and a radio source. HST 001831+16208 is the third cluster/group discovered near to Cl 0016+16 and appears to strengthen the claims of Connolly et al. of superclustering at high redshift.

  12. Spatially explicit population estimates for black bears based on cluster sampling

    Science.gov (United States)

    Humm, J.; McCown, J. Walter; Scheick, B.K.; Clark, Joseph D.

    2017-01-01

    We estimated abundance and density of the 5 major black bear (Ursus americanus) subpopulations (i.e., Eglin, Apalachicola, Osceola, Ocala-St. Johns, Big Cypress) in Florida, USA with spatially explicit capture-mark-recapture (SCR) by extracting DNA from hair samples collected at barbed-wire hair sampling sites. We employed a clustered sampling configuration, with sampling sites arranged in 3 × 3 clusters (sites spaced 2 km apart within each cluster) and cluster centers spaced 16 km apart (center to center). We surveyed all 5 subpopulations encompassing 38,960 km2 during 2014 and 2015. Several landscape variables, most associated with forest cover, helped refine density estimates for the 5 subpopulations we sampled. Detection probabilities were affected by site-specific behavioral responses coupled with individual capture heterogeneity associated with sex. Model-averaged bear population estimates ranged from 120 (95% CI = 59–276) bears or a mean 0.025 bears/km2 (95% CI = 0.011–0.44) for the Eglin subpopulation to 1,198 bears (95% CI = 949–1,537) or 0.127 bears/km2 (95% CI = 0.101–0.163) for the Ocala-St. Johns subpopulation. The total population estimate for our 5 study areas was 3,916 bears (95% CI = 2,914–5,451). The clustered sampling method coupled with information on land cover was efficient and allowed us to estimate abundance across extensive areas that would not have been possible otherwise. Clustered sampling combined with spatially explicit capture-recapture methods has the potential to provide rigorous population estimates for a wide array of species that are extensive and heterogeneous in their distribution.

  13. ATCA observations of the MACS-Planck Radio Halo Cluster Project. II. Radio observations of an intermediate redshift cluster sample

    Science.gov (United States)

    Martinez Aviles, G.; Johnston-Hollitt, M.; Ferrari, C.; Venturi, T.; Democles, J.; Dallacasa, D.; Cassano, R.; Brunetti, G.; Giacintucci, S.; Pratt, G. W.; Arnaud, M.; Aghanim, N.; Brown, S.; Douspis, M.; Hurier, J.; Intema, H. T.; Langer, M.; Macario, G.; Pointecouteau, E.

    2018-04-01

    Aim. A fraction of galaxy clusters host diffuse radio sources whose origins are investigated through multi-wavelength studies of cluster samples. We investigate the presence of diffuse radio emission in a sample of seven galaxy clusters in the largely unexplored intermediate redshift range (0.3 < z). Full tables are available at the CDS via http://cdsarc.u-strasbg.fr (http://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/611/A94

  14. Cluster lot quality assurance sampling: effect of increasing the number of clusters on classification precision and operational feasibility.

    Science.gov (United States)

    Okayasu, Hiromasa; Brown, Alexandra E; Nzioki, Michael M; Gasasira, Alex N; Takane, Marina; Mkanda, Pascal; Wassilak, Steven G F; Sutter, Roland W

    2014-11-01

    To assess the quality of supplementary immunization activities (SIAs), the Global Polio Eradication Initiative (GPEI) has used cluster lot quality assurance sampling (C-LQAS) methods since 2009. However, since the inception of C-LQAS, questions have been raised about the optimal balance between operational feasibility and precision of classification of lots to identify areas with low SIA quality that require corrective programmatic action. To determine if an increased precision in classification would result in differential programmatic decision making, we conducted a pilot evaluation in 4 local government areas (LGAs) in Nigeria with an expanded LQAS sample size of 16 clusters (instead of the standard 6 clusters) of 10 subjects each. The results showed greater heterogeneity between clusters than the assumed standard deviation of 10%, ranging from 12% to 23%. Comparing the distribution of 4-outcome classifications obtained from all possible combinations of 6-cluster subsamples to the observed classification of the 16-cluster sample, we obtained an exact match in classification in 56% to 85% of instances. We concluded that the 6-cluster C-LQAS provides acceptable classification precision for programmatic action. Considering the greater resources required to implement an expanded C-LQAS, the improvement in precision was deemed insufficient to warrant the effort. Published by Oxford University Press on behalf of the Infectious Diseases Society of America 2014. This work is written by (a) US Government employee(s) and is in the public domain in the US.

  15. Hot Zone Identification: Analyzing Effects of Data Sampling on Spam Clustering

    Directory of Open Access Journals (Sweden)

    Rasib Khan

    2014-03-01

    Full Text Available Email is the most common and comparatively the most efficient means of exchanging information in today's world. However, given the widespread use of emails in all sectors, they have been the target of spammers since the beginning. Filtering spam emails has now led to critical actions such as forensic activities based on mining spam email. The data mine for spam emails at the University of Alabama at Birmingham is considered to be one of the most prominent resources for mining and identifying spam sources. It is a widely researched repository used by researchers from different global organizations. The usual process of mining the spam data involves going through every email in the data mine and clustering them based on their different attributes. However, given the size of the data mine, it takes an exceptionally long time to execute the clustering mechanism each time. In this paper, we have illustrated sampling as an efficient tool for data reduction, while preserving the information within the clusters, which would thus allow the spam forensic experts to quickly and effectively identify the ‘hot zone’ from the spam campaigns. We have provided detailed comparative analysis of the quality of the clusters after sampling, the overall distribution of clusters on the spam data, and timing measurements for our sampling approach. Additionally, we present different strategies which allowed us to optimize the sampling process using data-preprocessing and using the database engine's computational resources, and thus improving the performance of the clustering process.

  16. Hydration of Atmospheric Molecular Clusters: Systematic Configurational Sampling.

    Science.gov (United States)

    Kildgaard, Jens; Mikkelsen, Kurt V; Bilde, Merete; Elm, Jonas

    2018-05-09

    We present a new systematic configurational sampling algorithm for investigating the potential energy surface of hydrated atmospheric molecular clusters. The algorithm is based on creating a Fibonacci sphere around each atom in the cluster and adding water molecules to each point in 9 different orientations. To allow the sampling of water molecules onto existing hydrogen bonds, the cluster is displaced along the hydrogen bond and a water molecule is placed in between in three different orientations. Generated redundant structures are eliminated based on minimizing the root mean square distance (RMSD) of different conformers. Initially, the clusters are sampled using the semiempirical PM6 method and subsequently using density functional theory (M06-2X and ωB97X-D) with the 6-31++G(d,p) basis set. Applying the developed algorithm, we study the hydration of sulfuric acid with up to 15 water molecules. We find that the addition of the first four water molecules "saturates" the sulfuric acid molecule and is more thermodynamically favourable than the addition of water molecules 5-15. Using the large generated set of conformers, we assess the performance of approximate methods (ωB97X-D, M06-2X, PW91 and PW6B95-D3) in calculating the binding energies and assigning the global minimum conformation compared to high-level CCSD(T)-F12a/VDZ-F12 reference calculations. The tested DFT functionals systematically overestimate the binding energies compared to coupled cluster calculations, and we find that this deficiency can be corrected by a simple scaling factor.
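A common way to generate Fibonacci-sphere points of the kind on which candidate water positions are placed is the golden-angle spiral. The sketch below shows only this point-generation step; the paper's orientation sampling and RMSD-based pruning are not reproduced, and the centre/radius values are arbitrary:

```python
import math

def fibonacci_sphere(n, center=(0.0, 0.0, 0.0), radius=1.0):
    """Distribute n points roughly uniformly on a sphere using the
    golden-angle (Fibonacci) spiral construction."""
    golden_angle = math.pi * (3 - math.sqrt(5))
    points = []
    for i in range(n):
        z = 1 - 2 * (i + 0.5) / n          # heights evenly spaced in (-1, 1)
        r = math.sqrt(1 - z * z)           # radius of the circle at height z
        theta = golden_angle * i           # successive points rotated by ~137.5 deg
        points.append((center[0] + radius * r * math.cos(theta),
                       center[1] + radius * r * math.sin(theta),
                       center[2] + radius * z))
    return points

# e.g. 32 candidate attachment points on a 2.5 Å sphere around an atom at the origin
pts = fibonacci_sphere(32, center=(0.0, 0.0, 0.0), radius=2.5)
```

Every generated point lies exactly on the requested sphere, and the golden-angle spacing avoids the clustering at the poles that a regular latitude-longitude grid would produce.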

  17. A clustering algorithm for sample data based on environmental pollution characteristics

    Science.gov (United States)

    Chen, Mei; Wang, Pengfei; Chen, Qiang; Wu, Jiadong; Chen, Xiaoyun

    2015-04-01

    Environmental pollution has become an issue of serious international concern in recent years. Among the receptor-oriented pollution models, CMB, PMF, UNMIX, and PCA are widely used as source apportionment models. To improve the accuracy of source apportionment and classify the sample data for these models, this study proposes an easy-to-use, high-dimensional EPC algorithm that not only organizes all of the sample data into different groups according to the similarities in pollution characteristics such as pollution sources and concentrations but also simultaneously detects outliers. The main clustering process consists of selecting the first unlabelled point as the cluster centre, then assigning each data point in the sample dataset to its most similar cluster centre according to both the user-defined threshold and the value of similarity function in each iteration, and finally modifying the clusters using a method similar to k-Means. The validity and accuracy of the algorithm are tested using both real and synthetic datasets, which makes the EPC algorithm practical and effective for appropriately classifying sample data for source apportionment models and helpful for better understanding and interpreting the sources of pollution.
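A minimal sketch of a leader-style threshold clustering of the kind described (the first unlabelled point becomes a cluster centre, points are assigned by a user-defined threshold, and clusters are then refined k-means-style) is shown below. The EPC algorithm's actual similarity function and outlier handling are not reproduced, and the sample vectors are toy pollutant concentrations:

```python
import math

def epc_like_cluster(samples, threshold):
    """Leader-style clustering with one k-means-style refinement pass.
    `samples` are numeric tuples; `threshold` is a Euclidean distance cutoff."""
    labels = [None] * len(samples)
    centres = []
    for i, s in enumerate(samples):
        if labels[i] is not None:
            continue
        centres.append(s)                  # first unlabelled point becomes a centre
        c = len(centres) - 1
        for j in range(i, len(samples)):
            if labels[j] is None and math.dist(samples[j], s) <= threshold:
                labels[j] = c
    # refinement: recompute each centre as its members' mean, then reassign
    for c in range(len(centres)):
        members = [samples[j] for j in range(len(samples)) if labels[j] == c]
        centres[c] = tuple(sum(dim) / len(dim) for dim in zip(*members))
    labels = [min(range(len(centres)), key=lambda c: math.dist(s, centres[c]))
              for s in samples]
    return labels, centres

# toy two-pollutant concentration vectors from two emission regimes
samples = [(1.0, 1.0), (1.2, 0.9), (5.0, 5.0), (5.1, 4.9), (0.9, 1.1)]
labels, centres = epc_like_cluster(samples, threshold=1.0)
```

The threshold controls how readily new cluster centres are spawned; points far from every centre would end up in small clusters of their own, which is roughly how threshold-based schemes flag outliers.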

  18. Clustered lot quality assurance sampling to assess immunisation coverage: increasing rapidity and maintaining precision.

    Science.gov (United States)

    Pezzoli, Lorenzo; Andrews, Nick; Ronveaux, Olivier

    2010-05-01

    Vaccination programmes targeting disease elimination aim to achieve very high coverage levels (e.g. 95%). We calculated the precision of different clustered lot quality assurance sampling (LQAS) designs in computer-simulated surveys to provide local health officers in the field with preset LQAS plans to simply and rapidly assess programmes with high coverage targets. We calculated the sample size (N), decision value (d) and misclassification errors (alpha and beta) of several LQAS plans by running 10 000 simulations. We kept the upper coverage threshold (UT) at 90% or 95% and decreased the lower threshold (LT) progressively by 5%. We measured the proportion of simulations with more than d unvaccinated individuals when the coverage was LT% (pLT) to calculate alpha (1-pLT). We divided N into clusters (between 5 and 10) and recalculated the errors hypothesising that the coverage would vary in the clusters according to a binomial distribution with preset standard deviations of 0.05 and 0.1 from the mean lot coverage. We selected the plans fulfilling these criteria (alpha and beta within preset limits) and recommend LQAS plans dividing the lot into five clusters with N = 50 (5 × 10) and d = 4 to evaluate programmes with a 95% coverage target, and d = 7 to evaluate programmes with a 90% target. These plans will considerably increase the feasibility and the rapidity of conducting the LQAS in the field.
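The operating characteristics of an unclustered LQAS plan like the N = 50, d = 4 plan mentioned here can be checked directly from the binomial distribution rather than by simulation. The sketch below uses conventional definitions of the two errors, which may be labelled differently from the abstract, and the 75% lower threshold is chosen purely for illustration:

```python
from math import comb

def prob_at_most(d, n, p):
    """P(X <= d) for X ~ Binomial(n, p), computed exactly."""
    return sum(comb(n, k) * p ** k * (1 - p) ** (n - k) for k in range(d + 1))

def lqas_errors(n, d, upper, lower):
    """A lot 'passes' if at most d unvaccinated children are found among n.
    alpha: risk of failing a lot whose true coverage equals `upper`;
    beta:  risk of passing a lot whose true coverage equals `lower`."""
    alpha = 1 - prob_at_most(d, n, 1 - upper)
    beta = prob_at_most(d, n, 1 - lower)
    return alpha, beta

# plan from the abstract: N = 50, d = 4, 95% coverage target;
# the 75% lower threshold is an illustrative choice, not the paper's
alpha, beta = lqas_errors(50, 4, upper=0.95, lower=0.75)
```

Widening the gap between the upper and lower thresholds drives both errors down for a fixed plan, which is why the paper could trade off the lower threshold against sample size when constructing its preset plans.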

  19. Flocculent and grand design spiral arm structure in cluster galaxies

    International Nuclear Information System (INIS)

    Elmegreen, D.M.

    1982-01-01

    A total of 829 spiral galaxies in 22 clusters having redshifts between z = 0.02 and 0.06 were classified according to the appearance of their spiral arm structures. The fraction of galaxies that have a grand design spiral structure was found to be higher among barred galaxies than among non-barred galaxies (at z = 0.02, 95 per cent of strongly barred galaxies have a grand design, compared with 67 per cent of non-barred or weakly barred galaxies). Cluster galaxies and distant non-cluster galaxies have the same fraction of grand design galaxies when resolution effects are considered. The grand design fraction among cluster galaxies is also similar to the fraction observed among nearby galaxies in binary systems and in groups. (author)

  20. GENERALISED MODEL BASED CONFIDENCE INTERVALS IN TWO STAGE CLUSTER SAMPLING

    Directory of Open Access Journals (Sweden)

    Christopher Ouma Onyango

    2010-09-01

    Full Text Available Chambers and Dorfman (2002) constructed bootstrap confidence intervals in model based estimation for finite population totals assuming that auxiliary values are available throughout a target population and that the auxiliary values are independent. They also assumed that the cluster sizes are known throughout the target population. We now extend to two-stage sampling in which the cluster sizes are known only for the sampled clusters, and we therefore predict the unobserved part of the population total. Jan and Elinor (2008) have done similar work, but unlike them, we use a general model, in which the auxiliary values are not necessarily independent. We demonstrate that the asymptotic properties of our proposed estimator and its coverage rates are better than those constructed under the model assisted local polynomial regression model.

  1. Comparing cluster-level dynamic treatment regimens using sequential, multiple assignment, randomized trials: Regression estimation and sample size considerations.

    Science.gov (United States)

    NeCamp, Timothy; Kilbourne, Amy; Almirall, Daniel

    2017-08-01

    Cluster-level dynamic treatment regimens can be used to guide sequential treatment decision-making at the cluster level in order to improve outcomes at the individual or patient-level. In a cluster-level dynamic treatment regimen, the treatment is potentially adapted and re-adapted over time based on changes in the cluster that could be impacted by prior intervention, including aggregate measures of the individuals or patients that compose it. Cluster-randomized sequential multiple assignment randomized trials can be used to answer multiple open questions preventing scientists from developing high-quality cluster-level dynamic treatment regimens. In a cluster-randomized sequential multiple assignment randomized trial, sequential randomizations occur at the cluster level and outcomes are observed at the individual level. This manuscript makes two contributions to the design and analysis of cluster-randomized sequential multiple assignment randomized trials. First, a weighted least squares regression approach is proposed for comparing the mean of a patient-level outcome between the cluster-level dynamic treatment regimens embedded in a sequential multiple assignment randomized trial. The regression approach facilitates the use of baseline covariates which is often critical in the analysis of cluster-level trials. Second, sample size calculators are derived for two common cluster-randomized sequential multiple assignment randomized trial designs for use when the primary aim is a between-dynamic treatment regimen comparison of the mean of a continuous patient-level outcome. The methods are motivated by the Adaptive Implementation of Effective Programs Trial which is, to our knowledge, the first-ever cluster-randomized sequential multiple assignment randomized trial in psychiatry.

  2. Systematic adaptive cluster sampling for the assessment of rare tree species in Nepal

    NARCIS (Netherlands)

    Acharya, B.; Bhattarai, G.; Gier, de A.; Stein, A.

    2000-01-01

    Sampling to assess rare tree species poses methodological problems, because such species may cluster and many plots with no such trees are to be expected. We used systematic adaptive cluster sampling (SACS) to sample three rare tree species in a forest area of about 40 ha in Nepal. We checked its applicability.

  3. Dairy cluster design for Myanmar

    NARCIS (Netherlands)

    Zijlstra, J.; Lee, van der J.

    2015-01-01

    At the request of the Dutch and Myanmar governments, a project team consisting of researchers from Wageningen University & Research centre and experts from dairy processor Royal FrieslandCampina, feed company Royal De Heus and AgriWorks consultancy has developed a design for a dairy cluster in

  4. Clustering for high-dimension, low-sample size data using distance vectors

    OpenAIRE

    Terada, Yoshikazu

    2013-01-01

    In high-dimension, low-sample size (HDLSS) data, it is not always true that closeness of two objects reflects a hidden cluster structure. We point out the important fact that it is not the closeness, but the "values" of distance that contain information of the cluster structure in high-dimensional space. Based on this fact, we propose an efficient and simple clustering approach, called distance vector clustering, for HDLSS data. Under the assumptions given in the work of Hall et al. (2005), w...
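The core observation above — that in HDLSS data the informative signal lies in the "values" of pairwise distances rather than in raw closeness — can be illustrated with a small sketch. This is our own toy construction, not the authors' algorithm: represent each object by its vector of distances to all objects in the sample, and compare those distance profiles.

```python
import numpy as np

def distance_vector_features(X: np.ndarray) -> np.ndarray:
    """Row i holds object i's Euclidean distances to every object."""
    diff = X[:, None, :] - X[None, :, :]
    return np.sqrt((diff ** 2).sum(axis=-1))

# Toy HDLSS-style data: 6 objects in 100 dimensions, two mean-shifted groups.
rng = np.random.default_rng(0)
X = rng.normal(size=(6, 100))
X[3:] += 2.0  # second group shifted in every dimension

D = distance_vector_features(X)
# Distance profiles of objects from the same group agree much more
# closely than profiles of objects from different groups, so clustering
# the rows of D recovers the hidden structure.
within = np.linalg.norm(D[0] - D[1])
between = np.linalg.norm(D[0] - D[3])
print(within < between)
```

Any standard clustering method applied to the rows of `D` then separates the two groups, which is the gist of the distance-vector approach.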

  5. A HIGH FIDELITY SAMPLE OF COLD FRONT CLUSTERS FROM THE CHANDRA ARCHIVE

    International Nuclear Information System (INIS)

    Owers, Matt S.; Nulsen, Paul E. J.; Markevitch, Maxim; Couch, Warrick J.

    2009-01-01

    This paper presents a sample of 'cold front' clusters selected from the Chandra archive. The clusters are selected based purely on the existence of surface brightness edges in their Chandra images, which are modeled as density jumps. A combination of the derived density and temperature jumps across the fronts is used to select nine robust examples of cold front clusters: 1ES 0657-558, Abell 1201, Abell 1758N, MS1455.0+2232, Abell 2069, Abell 2142, Abell 2163, RXJ1720.1+2638, and Abell 3667. This sample is the subject of an ongoing study aimed at relating cold fronts to cluster merger activity, and understanding how the merging environment affects the cluster constituents. Here, temperature maps are presented along with the Chandra X-ray images. A dichotomy is found in the sample in that there exists a subsample of cold front clusters which are clearly mergers based on their X-ray morphologies, and a second subsample of clusters which harbor cold fronts, but have surprisingly relaxed X-ray morphologies, and minimal evidence for merger activity at other wavelengths. For this second subsample, the existence of a cold front provides the sole evidence for merger activity at X-ray wavelengths. We discuss how cold fronts can provide additional information which may be used to constrain merger histories, and also the possibility of using cold fronts to distinguish major and minor mergers.

  6. Sample size adjustments for varying cluster sizes in cluster randomized trials with binary outcomes analyzed with second-order PQL mixed logistic regression.

    Science.gov (United States)

    Candel, Math J J M; Van Breukelen, Gerard J P

    2010-06-30

    Adjustments of sample size formulas are given for varying cluster sizes in cluster randomized trials with a binary outcome when testing the treatment effect with mixed effects logistic regression using second-order penalized quasi-likelihood estimation (PQL). Starting from first-order marginal quasi-likelihood (MQL) estimation of the treatment effect, the asymptotic relative efficiency of unequal versus equal cluster sizes is derived. A Monte Carlo simulation study shows this asymptotic relative efficiency to be rather accurate for realistic sample sizes, when employing second-order PQL. An approximate, simpler formula is presented to estimate the efficiency loss due to varying cluster sizes when planning a trial. In many cases sampling 14 per cent more clusters is sufficient to repair the efficiency loss due to varying cluster sizes. Since current closed-form formulas for sample size calculation are based on first-order MQL, planning a trial also requires a conversion factor to obtain the variance of the second-order PQL estimator. In a second Monte Carlo study, this conversion factor turned out to be 1.25 at most. (c) 2010 John Wiley & Sons, Ltd.
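The abstract's approximate formula is not reproduced in full here, but a closely related closed-form approximation for the efficiency loss due to varying cluster sizes was published by the same research group (van Breukelen, Candel & Berger, 2007). A sketch, with parameter names of our choosing:

```python
def relative_efficiency(mean_size: float, cv: float, icc: float) -> float:
    """Approximate relative efficiency of unequal vs. equal cluster sizes:
    RE ~= 1 - CV^2 * lam * (1 - lam), with lam = m*ICC / (m*ICC + 1 - ICC),
    where m is the mean cluster size and CV the coefficient of variation
    of cluster sizes (van Breukelen, Candel & Berger, 2007)."""
    lam = mean_size * icc / (mean_size * icc + 1.0 - icc)
    return 1.0 - cv ** 2 * lam * (1.0 - lam)

def extra_clusters_fraction(mean_size: float, cv: float, icc: float) -> float:
    """Fraction of additional clusters needed to offset the efficiency loss."""
    return 1.0 / relative_efficiency(mean_size, cv, icc) - 1.0

# Since lam*(1-lam) <= 1/4, the efficiency loss is at most CV^2/4: a CV of
# 0.7 costs at most ~14% extra clusters, consistent with the abstract.
print(extra_clusters_fraction(20, 0.7, 0.05))
```

For binary outcomes analyzed with second-order PQL, the abstract notes that a conversion factor (at most 1.25 in their simulations) is additionally needed on the first-order MQL variance; that factor is not modeled here.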

  7. HICOSMO - cosmology with a complete sample of galaxy clusters - I. Data analysis, sample selection and luminosity-mass scaling relation

    Science.gov (United States)

    Schellenberger, G.; Reiprich, T. H.

    2017-08-01

    The X-ray regime, where the most massive visible component of galaxy clusters, the intracluster medium, is visible, offers directly measured quantities, like the luminosity, and derived quantities, like the total mass, to characterize these objects. The aim of this project is to analyse a complete sample of galaxy clusters in detail and constrain cosmological parameters, like the matter density, Ωm, or the amplitude of initial density fluctuations, σ8. The purely X-ray flux-limited sample (HIFLUGCS) consists of the 64 X-ray brightest galaxy clusters, which are excellent targets to study the systematic effects that can bias results. We analysed in total 196 Chandra observations of the 64 HIFLUGCS clusters, with a total exposure time of 7.7 Ms. Here, we present our data analysis procedure (including an automated substructure detection and an energy band optimization for surface brightness profile analysis) that gives individually determined, robust total mass estimates. These masses are tested against dynamical and Planck Sunyaev-Zeldovich (SZ) derived masses of the same clusters, with good overall agreement found with the dynamical masses. The Planck SZ masses seem to show a mass-dependent bias relative to our hydrostatic masses; possible biases in this mass-mass comparison are discussed, including the Planck selection function. Furthermore, we show the results for the (0.1-2.4) keV luminosity versus mass scaling relation. The overall slope of the sample (1.34) is in agreement with expectations and values from the literature. Splitting the sample into galaxy groups and clusters reveals, even after a selection bias correction, that galaxy groups exhibit a significantly steeper slope (1.88) compared to clusters (1.06).

  8. Phylogenetic Inference of HIV Transmission Clusters

    Directory of Open Access Journals (Sweden)

    Vlad Novitsky

    2017-10-01

    Better understanding the structure and dynamics of HIV transmission networks is essential for designing the most efficient interventions to prevent new HIV transmissions, and ultimately for gaining control of the HIV epidemic. The inference of phylogenetic relationships and the interpretation of results rely on the definition of the HIV transmission cluster. The definition of the HIV cluster is complex and dependent on multiple factors, including the design of sampling, accuracy of sequencing, precision of sequence alignment, evolutionary models, the phylogenetic method of inference, and specified thresholds for cluster support. While the majority of studies focus on clusters, non-clustered cases could also be highly informative. A new dimension in the analysis of the global and local HIV epidemics is the concept of phylogenetically distinct HIV sub-epidemics. The identification of active HIV sub-epidemics reveals spreading viral lineages and may help in the design of targeted interventions. HIV clustering can also be affected by sampling density. Obtaining a proper sampling density may increase statistical power and reduce sampling bias, so sampling density should be taken into account in study design and in interpretation of phylogenetic results. Finally, recent advances in long-range genotyping may enable more accurate inference of HIV transmission networks. If performed in real time, this could both inform public-health strategies and be clinically relevant (e.g., for drug-resistance testing).

  9. HICOSMO - X-ray analysis of a complete sample of galaxy clusters

    Science.gov (United States)

    Schellenberger, G.; Reiprich, T.

    2017-10-01

    Galaxy clusters are known to be the largest virialized objects in the Universe. Based on the theory of structure formation, one can use them as cosmological probes, since they originate from collapsed overdensities in the early Universe and witness its history. The X-ray regime provides the unique possibility to measure in detail the most massive visible component, the intracluster medium. Using Chandra observations of a local sample of 64 bright clusters (HIFLUGCS), we provide total (hydrostatic) and gas mass estimates for each cluster individually. Making use of the completeness of the sample, we quantify two interesting cosmological parameters by a Bayesian cosmological likelihood analysis. We find Ω_{M}=0.3±0.01 and σ_{8}=0.79±0.03 (statistical uncertainties) using our default analysis strategy, which combines a mass function analysis with the gas mass fraction results. The main sources of bias that we discuss and correct for here are (1) the influence of galaxy groups (higher incompleteness in parent samples and a differing behavior of the L_{x} - M relation), (2) the hydrostatic mass bias (as determined by recent hydrodynamical simulations), (3) the extrapolation of the total mass (comparing various methods), (4) the theoretical halo mass function and (5) other cosmological (non-negligible neutrino mass) and instrumental (calibration) effects.

  10. Optimising cluster survey design for planning schistosomiasis preventive chemotherapy.

    Directory of Open Access Journals (Sweden)

    Sarah C L Knowles

    2017-05-01

    The cornerstone of current schistosomiasis control programmes is delivery of praziquantel to at-risk populations. Such preventive chemotherapy requires accurate information on the geographic distribution of infection, yet the performance of alternative survey designs for estimating prevalence and converting this into treatment decisions has not been thoroughly evaluated. We used baseline schistosomiasis mapping surveys from three countries (Malawi, Côte d'Ivoire and Liberia) to generate spatially realistic gold-standard datasets, against which we tested alternative two-stage cluster survey designs. We assessed how sampling different numbers of schools per district (2-20) and children per school (10-50) influences the accuracy of prevalence estimates and treatment class assignment, and we compared survey cost-efficiency using data from Malawi. Due to the focal nature of schistosomiasis, up to 53% of simulated surveys involving 2-5 schools per district failed to detect schistosomiasis in low-endemicity areas (1-10% prevalence). Increasing the number of schools surveyed per district improved treatment class assignment far more than increasing the number of children sampled per school. For Malawi, surveys of 15 schools per district and 20-30 children per school reliably detected endemic schistosomiasis and maximised cost-efficiency. In sensitivity analyses where treatment costs and the country considered were varied, optimal survey size was remarkably consistent, with cost-efficiency maximised at 15-20 schools per district. Among two-stage cluster surveys for schistosomiasis, our simulations indicated that surveying 15-20 schools per district and 20-30 children per school optimised cost-efficiency and minimised the risk of under-treatment, with surveys involving more schools becoming increasingly cost-efficient as treatment costs rose.
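The central failure mode described above — small surveys missing focal infection entirely — is easy to reproduce in a toy two-stage simulation. All numbers below are hypothetical, chosen only to mimic a low-endemicity focal district, and are not the paper's data:

```python
import random

def simulate_survey(school_prevs, n_schools, n_children, rng):
    """One two-stage cluster survey: sample schools without replacement,
    then test n_children randomly drawn children in each sampled school."""
    schools = rng.sample(school_prevs, n_schools)
    positives = sum(sum(rng.random() < p for _ in range(n_children))
                    for p in schools)
    return positives / (n_schools * n_children)

rng = random.Random(42)
# Hypothetical focal district: 20 schools, infection concentrated in 4 of
# them (true district prevalence 4%, i.e. a low-endemicity setting).
school_prevs = [0.0] * 16 + [0.2] * 4

# How often does a small survey (3 schools x 30 children) record zero cases?
miss_rate = sum(simulate_survey(school_prevs, 3, 30, rng) == 0
                for _ in range(2000)) / 2000
print(miss_rate)
```

With infection confined to 4 of 20 schools, a 3-school survey misses all infected schools roughly half the time, which is why the paper finds adding schools helps far more than adding children per school.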

  11. Sensitivity Sampling Over Dynamic Geometric Data Streams with Applications to $k$-Clustering

    OpenAIRE

    Song, Zhao; Yang, Lin F.; Zhong, Peilin

    2018-01-01

    Sensitivity based sampling is crucial for constructing nearly-optimal coresets for $k$-means / median clustering. In this paper, we provide a novel data structure that enables sensitivity sampling over a dynamic data stream, where points from a high dimensional discrete Euclidean space can be either inserted or deleted. Based on this data structure, we provide a one-pass coreset construction for $k$-means and M-estimator clustering using space $\widetilde{O}(k\mathrm{poly}(d))$ over $d$-dimen...
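As a rough illustration of the underlying idea (not the paper's data structure or streaming algorithm), sensitivity sampling picks points with probability proportional to their share of the clustering cost and reweights them so that the weighted sample cost is an unbiased estimate of the full cost. A static, non-streaming sketch with our own function names:

```python
import numpy as np

def sensitivity_sample(X, centers, m, rng):
    """Importance-sample m points with probability proportional to a crude
    sensitivity proxy (each point's share of the k-means cost w.r.t. the
    given centers), mixed with uniform so every probability is positive;
    returns the sampled points and weights that make the weighted sample
    cost an unbiased estimator of the full cost."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1).min(1)
    p = 0.5 * d2 / d2.sum() + 0.5 / len(X)
    idx = rng.choice(len(X), size=m, replace=True, p=p)
    weights = 1.0 / (m * p[idx])
    return X[idx], weights

rng = np.random.default_rng(3)
X = rng.normal(size=(5000, 2))
centers = np.zeros((1, 2))

S, w = sensitivity_sample(X, centers, 500, rng)
full_cost = (X ** 2).sum()
coreset_cost = (w * (S ** 2).sum(1)).sum()
print(abs(coreset_cost - full_cost) / full_cost)  # small relative error
```

The real coreset constructions bound this proxy uniformly over all candidate center sets; the sketch only demonstrates the unbiased reweighting step for one fixed set of centers.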

  12. Consumers' Kansei Needs Clustering Method for Product Emotional Design Based on Numerical Design Structure Matrix and Genetic Algorithms.

    Science.gov (United States)

    Yang, Yan-Pu; Chen, Deng-Kai; Gu, Rong; Gu, Yu-Feng; Yu, Sui-Huai

    2016-01-01

    Consumers' Kansei needs reflect their perception of a product and typically consist of a large number of adjectives. Reducing the dimensional complexity of these needs to extract primary words not only enables the target product to be explicitly positioned, but also provides a convenient design basis for designers engaging in design work. Accordingly, this study employs a numerical design structure matrix (NDSM) by parameterizing a conventional DSM and integrating genetic algorithms to find optimum Kansei clusters. A four-point scale method is applied to assign link weights between every pair of Kansei adjectives as cell values when constructing an NDSM. Genetic algorithms are used to cluster the Kansei NDSM and find optimum clusters. Furthermore, the process of the proposed method is presented. The details of the proposed approach are illustrated using the example of an electronic scooter for Kansei needs clustering. The case study reveals that the proposed method is promising for clustering Kansei needs adjectives in product emotional design.

  13. AGN Clustering in the BAT Sample

    Science.gov (United States)

    Powell, Meredith; Cappelluti, Nico; Urry, Meg; Koss, Michael; BASS Team

    2018-01-01

    We characterize the environments of local growing supermassive black holes by measuring the clustering of AGN in the Swift-BAT Spectroscopic Survey (BASS). Using 548 AGN in the redshift range 0.01 < z < 0.1, cross-correlated with 2MASS galaxies, we constrain the halo occupation distribution (HOD) of the full sample with unprecedented sensitivity, as well as in bins of obscuration with matched luminosity distributions. In doing so, we find that AGN tend to reside in galaxy groups, agreeing with previous studies of AGN throughout a large range of luminosity and redshift. We also find evidence that obscured AGN tend to reside in denser environments than unobscured AGN.

  14. Clustering Methods with Qualitative Data: a Mixed-Methods Approach for Prevention Research with Small Samples.

    Science.gov (United States)

    Henry, David; Dymnicki, Allison B; Mohatt, Nathaniel; Allen, James; Kelly, James G

    2015-10-01

    Qualitative methods potentially add depth to prevention research but can produce large amounts of complex data even with small samples. Studies conducted with culturally distinct samples often produce voluminous qualitative data but may lack sufficient sample sizes for sophisticated quantitative analysis. Currently lacking in mixed-methods research are methods allowing for more fully integrating qualitative and quantitative analysis techniques. Cluster analysis can be applied to coded qualitative data to clarify the findings of prevention studies by aiding efforts to reveal such things as the motives of participants for their actions and the reasons behind counterintuitive findings. By clustering groups of participants with similar profiles of codes in a quantitative analysis, cluster analysis can serve as a key component in mixed-methods research. This article reports two studies. In the first study, we conduct simulations to test the accuracy of cluster assignment using three different clustering methods with binary data as produced when coding qualitative interviews. Results indicated that hierarchical clustering, K-means clustering, and latent class analysis produced similar levels of accuracy with binary data and that the accuracy of these methods did not decrease with samples as small as 50. Whereas the first study explores the feasibility of using common clustering methods with binary data, the second study provides a "real-world" example using data from a qualitative study of community leadership connected with a drug abuse prevention project. We discuss the implications of this approach for conducting prevention research, especially with small samples and culturally distinct communities.
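The quantitative step described above — hierarchical clustering of binary code profiles — can be sketched in a few lines. The dataset below is invented for illustration (two hypothetical participant groups endorsing different code sets, with 10% coding noise); the distance metric and linkage are common choices for binary data, not necessarily those used in the studies:

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import pdist

rng = np.random.default_rng(7)

# 50 hypothetical participants coded on 12 binary themes: one group tends
# to endorse the first six codes, the other the last six; 10% code noise.
base_a = np.array([1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0])
profiles = np.vstack([np.tile(base_a, (25, 1)), np.tile(1 - base_a, (25, 1))])
flips = rng.random((50, 12)) < 0.1
profiles = np.abs(profiles - flips).astype(int)

# Jaccard distance suits binary presence/absence codes; complete linkage
# keeps clusters compact.
Z = linkage(pdist(profiles, metric="jaccard"), method="complete")
labels = fcluster(Z, t=2, criterion="maxclust")
print(labels)
```

Cutting the dendrogram at two clusters recovers the two endorsement profiles, illustrating why the simulations found good cluster-assignment accuracy even with only 50 binary-coded cases.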

  15. Clustering Methods with Qualitative Data: A Mixed Methods Approach for Prevention Research with Small Samples

    Science.gov (United States)

    Henry, David; Dymnicki, Allison B.; Mohatt, Nathaniel; Allen, James; Kelly, James G.

    2016-01-01

    Qualitative methods potentially add depth to prevention research, but can produce large amounts of complex data even with small samples. Studies conducted with culturally distinct samples often produce voluminous qualitative data, but may lack sufficient sample sizes for sophisticated quantitative analysis. Currently lacking in mixed methods research are methods allowing for more fully integrating qualitative and quantitative analysis techniques. Cluster analysis can be applied to coded qualitative data to clarify the findings of prevention studies by aiding efforts to reveal such things as the motives of participants for their actions and the reasons behind counterintuitive findings. By clustering groups of participants with similar profiles of codes in a quantitative analysis, cluster analysis can serve as a key component in mixed methods research. This article reports two studies. In the first study, we conduct simulations to test the accuracy of cluster assignment using three different clustering methods with binary data as produced when coding qualitative interviews. Results indicated that hierarchical clustering, K-Means clustering, and latent class analysis produced similar levels of accuracy with binary data, and that the accuracy of these methods did not decrease with samples as small as 50. Whereas the first study explores the feasibility of using common clustering methods with binary data, the second study provides a “real-world” example using data from a qualitative study of community leadership connected with a drug abuse prevention project. We discuss the implications of this approach for conducting prevention research, especially with small samples and culturally distinct communities. PMID:25946969

  16. Evaluation of single and two-stage adaptive sampling designs for estimation of density and abundance of freshwater mussels in a large river

    Science.gov (United States)

    Smith, D.R.; Rogala, J.T.; Gray, B.R.; Zigler, S.J.; Newton, T.J.

    2011-01-01

    Reliable estimates of abundance are needed to assess consequences of proposed habitat restoration and enhancement projects on freshwater mussels in the Upper Mississippi River (UMR). Although there is general guidance on sampling techniques for population assessment of freshwater mussels, the actual performance of sampling designs can depend critically on the population density and spatial distribution at the project site. To evaluate various sampling designs, we simulated sampling of populations, which varied in density and degree of spatial clustering. Because of logistics and costs of large river sampling and spatial clustering of freshwater mussels, we focused on adaptive and non-adaptive versions of single and two-stage sampling. The candidate designs performed similarly in terms of precision (CV) and probability of species detection for fixed sample size. Both CV and species detection were determined largely by density, spatial distribution and sample size. However, designs did differ in the rate that occupied quadrats were encountered. Occupied units had a higher probability of selection using adaptive designs than conventional designs. We used two measures of cost: sample size (i.e. number of quadrats) and distance travelled between the quadrats. Adaptive and two-stage designs tended to reduce distance between sampling units, and thus performed better when distance travelled was considered. Based on the comparisons, we provide general recommendations on the sampling designs for the freshwater mussels in the UMR, and presumably other large rivers.

  17. Evaluation of primary immunization coverage of infants under universal immunization programme in an urban area of bangalore city using cluster sampling and lot quality assurance sampling techniques.

    Science.gov (United States)

    K, Punith; K, Lalitha; G, Suman; Bs, Pradeep; Kumar K, Jayanth

    2008-07-01

    Research question: Is the LQAS technique better than the cluster sampling technique, in terms of resources required, for evaluating immunization coverage in an urban area? Objective: To assess and compare lot quality assurance sampling against cluster sampling in the evaluation of primary immunization coverage. Study design: Population-based cross-sectional study. Setting: Areas under the Mathikere Urban Health Center. Participants: Children aged 12 months to 23 months. Sample size: 220 children for cluster sampling, 76 for lot quality assurance sampling. Statistical analysis: Percentages, proportions and the chi-square test. Results: (1) Using cluster sampling, the percentages of completely immunized, partially immunized and unimmunized children were 84.09%, 14.09% and 1.82%, respectively; with lot quality assurance sampling, they were 92.11%, 6.58% and 1.31%, respectively. (2) Immunization coverage levels as evaluated by the cluster sampling technique were not statistically different from those obtained by the lot quality assurance sampling technique. Conclusion: Considering the time and resources required, lot quality assurance sampling was found to be the better technique for evaluating primary immunization coverage in an urban area.

  18. A Clustering-Based Automatic Transfer Function Design for Volume Visualization

    Directory of Open Access Journals (Sweden)

    Tianjin Zhang

    2016-01-01

    The two-dimensional transfer functions (TFs) designed based on the intensity-gradient magnitude (IGM) histogram are effective tools for the visualization and exploration of 3D volume data. However, traditional design methods usually depend on multiple rounds of trial-and-error. We propose a novel method for the automatic generation of transfer functions by performing the affinity propagation (AP) clustering algorithm on the IGM histogram. Compared with previous clustering algorithms that were employed in volume visualization, the AP clustering algorithm has a much faster convergence speed and can achieve more accurate clustering results. In order to obtain meaningful clustering results, we introduce two similarity measurements: IGM similarity and spatial similarity. These two similarity measurements can effectively bring the voxels of the same tissue together and differentiate the voxels of different tissues so that the generated TFs can assign different optical properties to different tissues. Before performing the clustering algorithm on the IGM histogram, we propose to remove noisy voxels based on the spatial information of voxels. Our method does not require users to input the number of clusters, and the classification and visualization process is automatic and efficient. Experiments on various datasets demonstrate the effectiveness of the proposed method.

  19. Cluster chemical ionization for improved confidence level in sample identification by gas chromatography/mass spectrometry.

    Science.gov (United States)

    Fialkov, Alexander B; Amirav, Aviv

    2003-01-01

    Upon the supersonic expansion of helium mixed with vapor from an organic solvent (e.g. methanol), various clusters of the solvent with the sample molecules can be formed. As a result of 70 eV electron ionization of these clusters, cluster chemical ionization (cluster CI) mass spectra are obtained. These spectra are characterized by the combination of EI mass spectra of vibrationally cold molecules in the supersonic molecular beam (cold EI) with a CI-like appearance of abundant protonated molecules, together with satellite peaks corresponding to protonated or non-protonated clusters of sample compounds with 1-3 solvent molecules. Like CI, cluster CI preferentially occurs for polar compounds with high proton affinity. However, in contrast to conventional CI, for non-polar compounds or those with reduced proton affinity the cluster CI mass spectrum converges to that of cold EI. The appearance of a protonated molecule and its solvent cluster peaks, plus the lack of protonation and cluster satellites for prominent EI fragments, enable the unambiguous identification of the molecular ion. In turn, the insertion of the proper molecular ion into the NIST library search of the cold EI mass spectra eliminates those candidates with incorrect molecular mass and thus significantly increases the confidence level in sample identification. Furthermore, molecular mass identification is of prime importance for the analysis of unknown compounds that are absent from the library. Examples are given with emphasis on the cluster CI analysis of carbamate pesticides, high explosives and unknown samples, to demonstrate the usefulness of Supersonic GC/MS (GC/MS with supersonic molecular beam) in the analysis of these thermally labile compounds. Cluster CI is shown to be a practical ionization method, due to its ease-of-use and fast instrumental conversion between EI and cluster CI, which involves the opening of only one valve located at the make-up gas path. The ease-of-use of cluster CI is analogous

  20. Cluster-sample surveys and lot quality assurance sampling to evaluate yellow fever immunisation coverage following a national campaign, Bolivia, 2007.

    Science.gov (United States)

    Pezzoli, Lorenzo; Pineda, Silvia; Halkyer, Percy; Crespo, Gladys; Andrews, Nick; Ronveaux, Olivier

    2009-03-01

    To estimate the yellow fever (YF) vaccine coverage for the endemic and non-endemic areas of Bolivia and to determine whether selected districts had acceptable levels of coverage (>70%). We conducted two surveys of 600 individuals (25 x 12 clusters) to estimate coverage in the endemic and non-endemic areas. We assessed 11 districts using lot quality assurance sampling (LQAS). The lot (district) sample was 35 individuals, with a decision value of six (alpha error 6% if true coverage is 70%; beta error 6% if true coverage is 90%). To increase feasibility, we divided the lots into five clusters of seven individuals; to investigate the effect of clustering, we calculated alpha and beta by conducting simulations where each cluster's true coverage was sampled from a normal distribution with a mean of 70% or 90% and standard deviations of 5% or 10%. Estimated coverage was 84.3% (95% CI: 78.9-89.7) in endemic areas, 86.8% (82.5-91.0) in non-endemic areas and 86.0% (82.8-89.1) nationally. LQAS showed that four lots had unacceptable coverage levels. In six lots, results were inconsistent with the estimated administrative coverage. The simulations suggested that the effect of clustering the lots is unlikely to have significantly increased the risk of making incorrect accept/reject decisions. Estimated YF coverage was high. Discrepancies between administrative coverage and LQAS results may be due to incorrect population data. Even allowing for clustering in LQAS, the statistical errors would remain low. Catch-up campaigns are recommended in districts with unacceptable coverage.
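The clustering simulation described above is straightforward to reproduce in outline. The numbers follow the abstract (lots of 35 split into five clusters of seven, decision value six, cluster coverages drawn from a normal distribution); the exact acceptance rule below is our reading of the design, not quoted from the paper:

```python
import random

def lot_accept_rate(mean_cov, sd_cov, rng, lots=10000, d=6):
    """Simulate clustered LQAS lots of 5 clusters x 7 children. Each
    cluster's true coverage is drawn from Normal(mean_cov, sd_cov),
    truncated to [0, 1]; the lot is judged acceptable when at most
    d of the 35 sampled children are unvaccinated."""
    accepted = 0
    for _ in range(lots):
        unvaccinated = 0
        for _ in range(5):
            p = min(1.0, max(0.0, rng.gauss(mean_cov, sd_cov)))
            unvaccinated += sum(rng.random() >= p for _ in range(7))
        accepted += unvaccinated <= d
    return accepted / lots

rng = random.Random(1)
alpha = lot_accept_rate(0.70, 0.10, rng)       # accepting a 70%-coverage lot
beta = 1 - lot_accept_rate(0.90, 0.10, rng)    # rejecting a 90%-coverage lot
print(alpha, beta)  # both error rates stay modest despite the clustering
```

Comparing these rates against the non-clustered binomial case (sd_cov = 0) shows how little the within-lot clustering moves the accept/reject error rates, which is the abstract's conclusion.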

  1. Design Optimization of Multi-Cluster Embedded Systems for Real-Time Applications

    DEFF Research Database (Denmark)

    Pop, Paul; Eles, Petru; Peng, Zebo

    2004-01-01

    We present an approach to design optimization of multi-cluster embedded systems consisting of time-triggered and event-triggered clusters, interconnected via gateways. In this paper, we address design problems which are characteristic of multi-cluster systems: partitioning of the system functionality into time-triggered and event-triggered domains, process mapping, and the optimization of parameters corresponding to the communication protocol. We present several heuristics for solving these problems. Our heuristics are able to find schedulable implementations under limited resources, achieving an efficient utilization of the system. The developed algorithms are evaluated using extensive experiments and a real-life example.

  2. Design Optimization of Multi-Cluster Embedded Systems for Real-Time Applications

    DEFF Research Database (Denmark)

    Pop, Paul; Eles, Petru; Peng, Zebo

    2006-01-01

    We present an approach to design optimization of multi-cluster embedded systems consisting of time-triggered and event-triggered clusters, interconnected via gateways. In this paper, we address design problems which are characteristic of multi-cluster systems: partitioning of the system functionality into time-triggered and event-triggered domains, process mapping, and the optimization of parameters corresponding to the communication protocol. We present several heuristics for solving these problems. Our heuristics are able to find schedulable implementations under limited resources, achieving an efficient utilization of the system. The developed algorithms are evaluated using extensive experiments and a real-life example.

  3. Planck/SDSS Cluster Mass and Gas Scaling Relations for a Volume-Complete redMaPPer Sample

    Science.gov (United States)

    Jimeno, Pablo; Diego, Jose M.; Broadhurst, Tom; De Martino, I.; Lazkoz, Ruth

    2018-04-01

    Using Planck satellite data, we construct Sunyaev-Zel'dovich (SZ) gas pressure profiles for a large, volume-complete sample of optically selected clusters. We have defined a sample of over 8,000 redMaPPer clusters from the Sloan Digital Sky Survey (SDSS), within a volume-complete redshift region beginning at z = 0.100. We find a trend towards larger break radius with increasing cluster mass. Our SZ-based masses fall ~16% below the mass-richness relations from weak lensing, in a similar fashion to the "hydrostatic bias" associated with X-ray derived masses. Finally, we derive a tight Y500-M500 relation over a wide range of cluster mass, with a power-law slope equal to 1.70 ± 0.07, which agrees well with the independent slope obtained by the Planck team with an SZ-selected cluster sample, but extends to lower masses with higher precision.

  4. The ellipticities of a sample of globular clusters in M31

    International Nuclear Information System (INIS)

    Lupton, R.H.

    1989-01-01

    Images for a sample of 18 globular clusters in M31 have been obtained. The mean ellipticity on the sky is 0.08 ± 0.02 in the range 7-14 pc (2-4 arcsec) and 0.12 ± 0.01 in the range 14-21 pc (4-6 arcsec), with corresponding true ellipticities of 0.12 and 0.18. The difference between the inner and outer parts is significant at the 99 percent level. The flattening of the inner parts is statistically indistinguishable from that of the Galactic globular clusters, while the outer parts are flatter than the Galactic clusters at the 99.8 percent confidence level. There is a significant anticorrelation of ellipticity with line strength; such a correlation may in retrospect also be seen in the Galactic globular cluster system. For the M31 data, this anticorrelation is stronger in the inner parts of the galaxy. 30 refs

  5. Space density and clustering properties of a new sample of emission-line galaxies

    International Nuclear Information System (INIS)

    Wasilewski, A.J.

    1982-01-01

    A moderate-dispersion objective-prism survey for low-redshift emission-line galaxies has been carried out in an 825 sq. deg. region of sky with the Burrell Schmidt telescope of Case Western Reserve University. A 4° prism (300 Å/mm at Hβ) was used with the IIIa-J emulsion to show that a new sample of emission-line galaxies is available even in areas already searched with the excess UV-continuum technique. The new emission-line galaxies occur quite commonly in systems with peculiar morphology, indicating gravitational interaction with a close companion or other disturbance. About 10 to 15% of the sample are Seyfert galaxies. It is suggested that tidal interactions involving matter infall play a significant role in the generation of an emission-line spectrum. The space density of the new galaxies is found to be similar to that of the Markarian galaxies. Like the Markarian sample, the galaxies in the present survey represent about 10% of all galaxies in the absolute magnitude range M_p = -16 to -22. The observations also indicate that current estimates of dwarf galaxy space densities may be too low. The clustering properties of the new galaxies have been investigated using two approaches: cluster contour maps and the spatial correlation function. These tests suggest that there is weak clustering, and possibly superclustering, within the sample itself, and that the galaxies considered here are about as common in clusters of ordinary galaxies as in the field.

  6. BioCluster: Tool for Identification and Clustering of Enterobacteriaceae Based on Biochemical Data

    Directory of Open Access Journals (Sweden)

    Ahmed Abdullah

    2015-06-01

    Full Text Available Presumptive identification of different Enterobacteriaceae species is routinely achieved based on biochemical properties. Traditional practice includes manual comparison of each biochemical property of the unknown sample with known reference samples and inference of its identity based on the maximum similarity pattern with the known samples. This process is labor-intensive, time-consuming, error-prone, and subjective. Therefore, automation of sorting and similarity calculation would be advantageous. Here we present a MATLAB-based graphical user interface (GUI) tool named BioCluster. This tool was designed for automated clustering and identification of Enterobacteriaceae based on biochemical test results. In this tool, we used two types of algorithms, i.e., traditional hierarchical clustering (HC) and the Improved Hierarchical Clustering (IHC), a modified algorithm that was developed specifically for the clustering and identification of Enterobacteriaceae species. IHC takes into account the variability in the results of 1–47 biochemical tests within the Enterobacteriaceae family. This tool also provides different options to optimize the clustering in a user-friendly way. Using computer-generated synthetic data and some real data, we have demonstrated that BioCluster has high accuracy in clustering and identifying enterobacterial species based on biochemical test data. This tool can be freely downloaded at http://microbialgen.du.ac.bd/biocluster/.
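
    The traditional HC step described above can be sketched with standard tools. The following is an illustrative SciPy example, not the BioCluster tool itself: the isolates and binary test profiles are invented, and Jaccard distance with average linkage stands in for the record's HC variant.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import pdist

# Hypothetical profiles: rows = isolates, columns = biochemical tests
# (1 = positive reaction, 0 = negative).
profiles = np.array([
    [1, 0, 1, 1, 0, 1],  # isolate A
    [1, 0, 1, 1, 0, 0],  # isolate B, close to A
    [0, 1, 0, 0, 1, 1],  # isolate C
    [0, 1, 0, 0, 1, 0],  # isolate D, close to C
])

# Jaccard distance suits presence/absence data; average linkage builds
# the hierarchical tree, which is then cut into two clusters.
tree = linkage(pdist(profiles, metric="jaccard"), method="average")
labels = fcluster(tree, t=2, criterion="maxclust")
print(labels)  # A,B fall in one cluster; C,D in the other
```

    An IHC-style algorithm would additionally account for the variability of individual tests within the family, which this sketch omits.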

  7. Comparing the performance of cluster random sampling and integrated threshold mapping for targeting trachoma control, using computer simulation.

    Directory of Open Access Journals (Sweden)

    Jennifer L Smith

    Full Text Available Implementation of trachoma control strategies requires reliable district-level estimates of trachomatous inflammation-follicular (TF), generally collected using the recommended gold-standard cluster randomized surveys (CRS). Integrated Threshold Mapping (ITM) has been proposed as an integrated and cost-effective means of rapidly surveying trachoma in order to classify districts according to treatment thresholds. ITM differs from CRS in a number of important ways, including the use of a school-based sampling platform for children aged 1-9 and a different age distribution of participants. This study uses computerised sampling simulations to compare the performance of these survey designs and evaluate the impact of varying key parameters. Realistic pseudo gold standard data for 100 districts were generated that maintained the relative risk of disease between important sub-groups and incorporated empirical estimates of disease clustering at the household, village and district level. To simulate the different sampling approaches, 20 clusters were selected from each district, with individuals sampled according to the protocol for ITM and CRS. Results showed that ITM generally under-estimated the true prevalence of TF over a range of epidemiological settings and introduced more district misclassification according to treatment thresholds than did CRS. However, the extent of underestimation and resulting misclassification was found to be dependent on three main factors: (i) the district prevalence of TF; (ii) the relative risk of TF between enrolled and non-enrolled children within clusters; and (iii) the enrollment rate in schools. Although in some contexts the two methodologies may be equivalent, ITM can introduce a bias-dependent shift as prevalence of TF increases, resulting in a greater risk of misclassification around treatment thresholds.
    In addition to strengthening the evidence base around choice of trachoma survey methodologies, this study illustrates
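
    The core of the simulation approach described above, a 20-cluster sample drawn from a district whose villages vary around the district TF prevalence, can be sketched as follows. All parameters are illustrative stand-ins, not the paper's empirically calibrated values.

```python
import random

random.seed(1)

TRUE_PREV = 0.15            # assumed district-level TF prevalence
N_VILLAGES = 100
N_CLUSTERS = 20             # clusters sampled per district, as in the study
CHILDREN_PER_CLUSTER = 50   # invented within-cluster sample size

# Village-level prevalences scattered around the district mean,
# mimicking clustering of disease at the village level.
villages = [max(0.0, random.gauss(TRUE_PREV, 0.05)) for _ in range(N_VILLAGES)]

# Sample clusters, then children within each sampled cluster.
cases = total = 0
for p in random.sample(villages, N_CLUSTERS):
    for _ in range(CHILDREN_PER_CLUSTER):
        cases += random.random() < p
        total += 1

print(f"true = {TRUE_PREV:.3f}, estimated = {cases / total:.3f}")
```

    A school-based (ITM-style) variant would restrict the within-cluster draw to enrolled children, which is where the bias described in the abstract enters.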

  8. Planck early results. VIII. The all-sky early Sunyaev-Zeldovich cluster sample

    DEFF Research Database (Denmark)

    Poutanen, T.; Natoli, P.; Polenta, G.

    2011-01-01

    We present the first all-sky sample of galaxy clusters detected blindly by the Planck satellite through the Sunyaev-Zeldovich (SZ) effect from its six highest frequencies. This early SZ (ESZ) sample comprises 189 candidates with high signal-to-noise ratios ranging from 6 to 29. Its ...

  9. Effect of study design and setting on tuberculosis clustering estimates using Mycobacterial Interspersed Repetitive Units-Variable Number Tandem Repeats (MIRU-VNTR): a systematic review.

    Science.gov (United States)

    Mears, Jessica; Abubakar, Ibrahim; Cohen, Theodore; McHugh, Timothy D; Sonnenberg, Pam

    2015-01-21

    To systematically review the evidence for the impact of study design and setting on the interpretation of tuberculosis (TB) transmission using clustering derived from Mycobacterial Interspersed Repetitive Units-Variable Number Tandem Repeats (MIRU-VNTR) strain typing. MEDLINE, EMBASE, CINAHL, Web of Science and Scopus were searched for articles published before 21st October 2014. Studies in humans that reported the proportion of clustering of TB isolates by MIRU-VNTR were included in the analysis. Univariable meta-regression analyses were conducted to assess the influence of study design and setting on the proportion of clustering. The search identified 27 eligible articles reporting clustering between 0% and 63%. The number of MIRU-VNTR loci typed, requiring consent to type patient isolates (as a proxy for sampling fraction), the TB incidence and the maximum cluster size explained 14%, 14%, 27% and 48% of between-study variation, respectively, and had a significant association with the proportion of clustering. Although MIRU-VNTR typing is being adopted worldwide, there is a paucity of data on how study design and setting may influence estimates of clustering. We have highlighted study design variables for consideration in the design and interpretation of future studies. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.

  10. Estimation after classification using lot quality assurance sampling: corrections for curtailed sampling with application to evaluating polio vaccination campaigns.

    Science.gov (United States)

    Olives, Casey; Valadez, Joseph J; Pagano, Marcello

    2014-03-01

    To assess the bias incurred when curtailment of Lot Quality Assurance Sampling (LQAS) is ignored, to present unbiased estimators, to consider the impact of cluster sampling by simulation and to apply our method to published polio immunization data from Nigeria. We present estimators of coverage when using two kinds of curtailed LQAS strategies: semicurtailed and curtailed. We study the proposed estimators with independent and clustered data using three field-tested LQAS designs for assessing polio vaccination coverage, with samples of size 60 and decision rules of 9, 21 and 33, and compare them to biased maximum likelihood estimators. Lastly, we present estimates of polio vaccination coverage from previously published data in 20 local government authorities (LGAs) from five Nigerian states. Simulations illustrate substantial bias if one ignores the curtailed sampling design. The proposed estimators show no bias. Clustering does not affect the bias of these estimators. Across simulations, standard errors show signs of inflation as clustering increases. Neither sampling strategy nor LQAS design influences estimates of polio vaccination coverage in the 20 Nigerian LGAs. When coverage is low, semicurtailed LQAS strategies considerably reduce the sample size required to make a decision. Curtailed LQAS designs further reduce the sample size when coverage is high. The results presented dispel the misconception that curtailed LQAS data are unsuitable for estimation. These findings augment the utility of LQAS as a tool for monitoring vaccination efforts by demonstrating that unbiased estimation using curtailed designs is not only possible but that these designs also reduce the sample size. © 2014 John Wiley & Sons Ltd.
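
    The curtailment idea can be sketched for one of the field-tested designs mentioned above (n = 60, decision rule d = 9). The stopping rules below are assumptions about how curtailment forces a decision, not the authors' exact protocol, and no bias correction is attempted here.

```python
import random

random.seed(7)

def curtailed_lqas(p, n=60, d=9):
    """Sample one subject at a time with success probability p; stop as
    soon as the final classification can no longer change.
    Returns (decision, number of draws used)."""
    successes = failures = 0
    for k in range(1, n + 1):
        if random.random() < p:
            successes += 1
        else:
            failures += 1
        if successes > d:
            return "pass", k   # no remaining draw can pull successes back below d
        if failures >= n - d:
            return "fail", k   # even all-successes afterwards leaves successes <= d
    # unreachable: one of the two conditions always triggers by k = n

# Average number of draws when coverage is high: far fewer than n = 60.
mean_k = sum(curtailed_lqas(0.9)[1] for _ in range(2000)) / 2000
print(round(mean_k, 1))
```

    The abstract's point that naive estimation from such data is biased can be seen by averaging successes / k over runs: early-stopping trajectories are overweighted, which is what the proposed unbiased estimators correct.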

  11. Medical Image Retrieval Based On the Parallelization of the Cluster Sampling Algorithm

    OpenAIRE

    Ali, Hesham Arafat; Attiya, Salah; El-henawy, Ibrahim

    2017-01-01

    In this paper we develop parallel cluster sampling algorithms and show that a multi-chain version is embarrassingly parallel and can be used efficiently for medical image retrieval among other applications.

  12. Identification of Clusters of Foot Pain Location in a Community Sample.

    Science.gov (United States)

    Gill, Tiffany K; Menz, Hylton B; Landorf, Karl B; Arnold, John B; Taylor, Anne W; Hill, Catherine L

    2017-12-01

    To identify foot pain clusters according to pain location in a community-based sample of the general population. This study analyzed data from the North West Adelaide Health Study. Data were obtained between 2004 and 2006, using computer-assisted telephone interviewing, clinical assessment, and a self-completed questionnaire. The location of foot pain was assessed using a diagram during the clinical assessment. Hierarchical cluster analysis was undertaken to identify foot pain location clusters, which were then compared in relation to demographics, comorbidities, and podiatry services utilization. There were 558 participants with foot pain (mean age 54.4 years, 57.5% female). Five clusters were identified: 1 with predominantly arch and ball pain (26.8%), 1 with rearfoot pain (20.9%), 1 with heel pain (13.3%), and 2 with predominantly forefoot, toe, and nail pain (28.3% and 10.7%). Each cluster was distinct in age, sex, and comorbidity profile. Of the two clusters with predominantly forefoot, toe, and nail pain, one had a higher proportion of men and of participants who were obese, had diabetes mellitus, and used podiatry services (30%), while the other comprised a higher proportion of women who were overweight and reported less use of podiatry services (17.5%). Five clusters of foot pain according to pain location were identified, all with distinct age, sex, and comorbidity profiles. These findings may assist in the identification of individuals at risk for developing foot pain and in the development of targeted preventive strategies and treatments. © 2017, American College of Rheumatology.

  13. The Study on Mental Health at Work: Design and sampling.

    Science.gov (United States)

    Rose, Uwe; Schiel, Stefan; Schröder, Helmut; Kleudgen, Martin; Tophoven, Silke; Rauch, Angela; Freude, Gabriele; Müller, Grit

    2017-08-01

    The Study on Mental Health at Work (S-MGA) generates the first nationwide representative survey enabling the exploration of the relationship between working conditions, mental health and functioning. This paper describes the study design, sampling procedures and data collection, and presents a summary of the sample characteristics. S-MGA is a representative study of German employees aged 31-60 years subject to social security contributions. The sample was drawn from the employment register based on a two-stage cluster sampling procedure. Firstly, 206 municipalities were randomly selected from a pool of 12,227 municipalities in Germany. Secondly, 13,590 addresses were drawn from the selected municipalities for the purpose of conducting 4500 face-to-face interviews. The questionnaire covers psychosocial working and employment conditions, measures of mental health, work ability and functioning. Data from personal interviews were combined with employment histories from register data. Descriptive statistics of socio-demographic characteristics and logistic regression analyses were used for comparing the population, gross sample and respondents. In total, 4511 face-to-face interviews were conducted. A test for sampling bias revealed that individuals in older cohorts participated more often, while individuals with an unknown educational level, residing in major cities or with a non-German ethnic background were slightly underrepresented. There is no indication of major deviations in characteristics between the basic population and the sample of respondents. Hence, S-MGA provides representative data for research on work and health, designed as a cohort study with plans to rerun the survey 5 years after the first assessment.
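
    The two-stage cluster sampling procedure described above can be illustrated with a scaled-down toy version. The municipality and address counts below are invented for the sketch; the real study drew 206 of 12,227 municipalities and then 13,590 addresses.

```python
import random

random.seed(42)

# Toy sampling frame: municipalities (primary sampling units), each
# holding a register of addresses.
municipalities = {f"muni_{i}": [f"addr_{i}_{j}" for j in range(200)]
                  for i in range(1000)}

# Stage 1: simple random sample of municipalities.
psus = random.sample(sorted(municipalities), 20)

# Stage 2: sample addresses only within the selected municipalities.
sample = [addr
          for m in psus
          for addr in random.sample(municipalities[m], 65)]

print(len(sample))  # 20 clusters x 65 addresses = 1300
```

    Because respondents within a municipality resemble each other, analyses of such a sample need clustering-aware variance estimates, which is why the design is described in this level of detail.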

  14. The Study on Mental Health at Work: Design and sampling

    Science.gov (United States)

    Rose, Uwe; Schiel, Stefan; Schröder, Helmut; Kleudgen, Martin; Tophoven, Silke; Rauch, Angela; Freude, Gabriele; Müller, Grit

    2017-01-01

    Aims: The Study on Mental Health at Work (S-MGA) generates the first nationwide representative survey enabling the exploration of the relationship between working conditions, mental health and functioning. This paper describes the study design, sampling procedures and data collection, and presents a summary of the sample characteristics. Methods: S-MGA is a representative study of German employees aged 31–60 years subject to social security contributions. The sample was drawn from the employment register based on a two-stage cluster sampling procedure. Firstly, 206 municipalities were randomly selected from a pool of 12,227 municipalities in Germany. Secondly, 13,590 addresses were drawn from the selected municipalities for the purpose of conducting 4500 face-to-face interviews. The questionnaire covers psychosocial working and employment conditions, measures of mental health, work ability and functioning. Data from personal interviews were combined with employment histories from register data. Descriptive statistics of socio-demographic characteristics and logistic regression analyses were used for comparing the population, gross sample and respondents. Results: In total, 4511 face-to-face interviews were conducted. A test for sampling bias revealed that individuals in older cohorts participated more often, while individuals with an unknown educational level, residing in major cities or with a non-German ethnic background were slightly underrepresented. Conclusions: There is no indication of major deviations in characteristics between the basic population and the sample of respondents. Hence, S-MGA provides representative data for research on work and health, designed as a cohort study with plans to rerun the survey 5 years after the first assessment. PMID:28673202

  15. Planetary Sample Caching System Design Options

    Science.gov (United States)

    Collins, Curtis; Younse, Paulo; Backes, Paul

    2009-01-01

    Potential Mars Sample Return missions would aspire to collect small core and regolith samples using a rover with a sample acquisition tool and sample caching system. Samples would need to be stored in individual sealed tubes in a canister that could be transferred to a Mars ascent vehicle and returned to Earth. A sample handling, encapsulation and containerization system (SHEC) has been developed as part of an integrated system for acquiring and storing core samples for application to future potential MSR and other potential sample return missions. Requirements and design options for the SHEC system were studied and a recommended design concept developed. Two families of solutions were explored: 1) transfer of a raw sample from the tool to the SHEC subsystem and 2) transfer of a tube containing the sample to the SHEC subsystem. The recommended design utilizes sample tool bit change out as the mechanism for transferring tubes to, and samples in tubes from, the tool. The SHEC subsystem design, called the Bit Changeout Caching (BiCC) design, is intended for operations on a MER-class rover.

  16. X-Ray Temperatures, Luminosities, and Masses from XMM-Newton Follow-up of the First Shear-selected Galaxy Cluster Sample

    Energy Technology Data Exchange (ETDEWEB)

    Deshpande, Amruta J.; Hughes, John P. [Department of Physics and Astronomy, Rutgers the State University of New Jersey, 136 Frelinghuysen Road, Piscataway, NJ 08854 (United States); Wittman, David, E-mail: amrejd@physics.rutgers.edu, E-mail: jph@physics.rutgers.edu, E-mail: dwittman@physics.ucdavis.edu [Department of Physics, University of California, Davis, One Shields Avenue, Davis, CA 95616 (United States)

    2017-04-20

    We continue the study of the first sample of shear-selected clusters from the initial 8.6 square degrees of the Deep Lens Survey (DLS); a sample with well-defined selection criteria corresponding to the highest ranked shear peaks in the survey area. We aim to characterize the weak lensing selection by examining the sample's X-ray properties. There are multiple X-ray clusters associated with nearly all the shear peaks: 14 X-ray clusters corresponding to seven DLS shear peaks. An additional three X-ray clusters cannot be definitively associated with shear peaks, mainly due to large positional offsets between the X-ray centroid and the shear peak. Here we report on the XMM-Newton properties of the 17 X-ray clusters. The X-ray clusters display a wide range of luminosities and temperatures; the L_X − T_X relation we determine for the shear-associated X-ray clusters is consistent with X-ray cluster samples selected without regard to dynamical state, while it is inconsistent with self-similarity. For a subset of the sample, we measure X-ray masses using temperature as a proxy, and compare to weak lensing masses determined by the DLS team. The resulting mass comparison is consistent with equality. The X-ray and weak lensing masses show considerable intrinsic scatter (∼48%), which is consistent with X-ray selected samples when their X-ray and weak lensing masses are independently determined.

  17. Concept design and cluster control of advanced space connectable intelligent microsatellite

    Science.gov (United States)

    Wang, Xiaohui; Li, Shuang; She, Yuchen

    2017-12-01

    In this note, a new type of advanced space connectable intelligent microsatellite is presented to extend the range of potential applications of microsatellites and improve the efficiency of cooperation. First, the overall concept of the microsatellite cluster is described, which is characterized by autonomously connecting with each other and being able to realize relative rotation through the external interfaces. Second, the multi-satellite autonomous assembly algorithm and the control algorithm of the cluster motion are developed to make the cluster system combine into a variety of configurations in order to achieve different types of functionality. Finally, the design of the satellite cluster system is proposed, and the possible applications are discussed.

  18. The use of hierarchical clustering for the design of optimized monitoring networks

    Science.gov (United States)

    Soares, Joana; Makar, Paul Andrew; Aklilu, Yayne; Akingunola, Ayodeji

    2018-05-01

    Associativity analysis is a powerful tool to deal with large-scale datasets by clustering the data on the basis of (dis)similarity and can be used to assess the efficacy and design of air quality monitoring networks. We describe here our use of Kolmogorov-Zurbenko filtering and hierarchical clustering of NO2 and SO2 passive and continuous monitoring data to analyse and optimize air quality networks for these species in the province of Alberta, Canada. The methodology applied in this study assesses dissimilarity between monitoring station time series based on two metrics: 1 - R, R being the Pearson correlation coefficient, and the Euclidean distance; we find that both should be used in evaluating monitoring site similarity. We have combined the analytic power of hierarchical clustering with the spatial information provided by deterministic air quality model results, using the gridded time series of model output as potential station locations, as a proxy for assessing monitoring network design and for network optimization. We demonstrate that clustering results depend on the air contaminant analysed, reflecting the difference in the respective emission sources of SO2 and NO2 in the region under study. Our work shows that much of the signal identifying the sources of NO2 and SO2 emissions resides in shorter timescales (hourly to daily) due to short-term variation of concentrations and that longer-term averages in data collection may lose the information needed to identify local sources. However, the methodology identifies stations mainly influenced by seasonality, if larger timescales (weekly to monthly) are considered. We have performed the first dissimilarity analysis based on gridded air quality model output and have shown that the methodology is capable of generating maps of subregions within which a single station will represent the entire subregion, to a given level of dissimilarity. We have also shown that our approach is capable of identifying different
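
    The two dissimilarity metrics named above are easy to contrast on synthetic station time series. In this sketch (hypothetical data, no Kolmogorov-Zurbenko filtering), 1 − R treats a level-shifted copy of a signal as nearly identical while the Euclidean distance flags the offset, which is why the authors recommend using both.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(200)
base = np.sin(2 * np.pi * t / 24)  # shared diurnal signal

station_a = base + 0.1 * rng.normal(size=t.size)
station_b = base + 0.1 * rng.normal(size=t.size)  # similar series
station_c = base + 5.0                            # same shape, offset level

def one_minus_r(x, y):
    """1 - Pearson correlation: dissimilarity of temporal pattern."""
    return 1.0 - np.corrcoef(x, y)[0, 1]

def euclidean(x, y):
    """Euclidean distance: sensitive to absolute concentration levels."""
    return float(np.linalg.norm(x - y))

# a vs c: tiny 1 - R (perfectly correlated shapes) but large Euclidean
# distance (constant 5-unit offset) -- the two metrics disagree by design.
print(one_minus_r(station_a, station_c), euclidean(station_a, station_c))
```

    Either matrix of pairwise dissimilarities can then be fed to a hierarchical clustering routine to group stations, as in the network-optimization analysis above.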

  19. The clustering evolution of distant red galaxies in the GOODS-MUSIC sample

    Science.gov (United States)

    Grazian, A.; Fontana, A.; Moscardini, L.; Salimbeni, S.; Menci, N.; Giallongo, E.; de Santis, C.; Gallozzi, S.; Nonino, M.; Cristiani, S.; Vanzella, E.

    2006-07-01

    Aims. We study the clustering properties of Distant Red Galaxies (DRGs) to test whether they are the progenitors of local massive galaxies. Methods. We use the GOODS-MUSIC sample, a catalog of ~3000 Ks-selected galaxies based on VLT and HST observations of the GOODS-South field with extended multi-wavelength coverage (from 0.3 to 8 μm) and accurate estimates of the photometric redshifts, to select 179 DRGs with J-Ks ≥ 1.3 in an area of 135 sq. arcmin. Results. We first show that the J-Ks ≥ 1.3 criterion selects a rather heterogeneous sample of galaxies, going from the targeted high-redshift luminous evolved systems, to a significant fraction of lower redshift (1mass, like groups or small galaxy clusters. Low-z DRGs, on the other hand, will likely evolve into slightly less massive field galaxies.

  20. An imbalance in cluster sizes does not lead to notable loss of power in cross-sectional, stepped-wedge cluster randomised trials with a continuous outcome.

    Science.gov (United States)

    Kristunas, Caroline A; Smith, Karen L; Gray, Laura J

    2017-03-07

    The current methodology for sample size calculations for stepped-wedge cluster randomised trials (SW-CRTs) is based on the assumption of equal cluster sizes. However, as is often the case in cluster randomised trials (CRTs), the clusters in SW-CRTs are likely to vary in size, which in other designs of CRT leads to a reduction in power. The effect of an imbalance in cluster size on the power of SW-CRTs has not previously been reported, nor what an appropriate adjustment to the sample size calculation should be to allow for any imbalance. We aimed to assess the impact of an imbalance in cluster size on the power of a cross-sectional SW-CRT and recommend a method for calculating the sample size of a SW-CRT when there is an imbalance in cluster size. The effect of varying degrees of imbalance in cluster size on the power of SW-CRTs was investigated using simulations. The sample size was calculated using both the standard method and two proposed adjusted design effects (DEs), based on those suggested for CRTs with unequal cluster sizes. The data were analysed using generalised estimating equations with an exchangeable correlation matrix and robust standard errors. An imbalance in cluster size was not found to have a notable effect on the power of SW-CRTs. The two proposed adjusted DEs resulted in trials that were generally considerably over-powered. We recommend that the standard method of sample size calculation for SW-CRTs be used, provided that the assumptions of the method hold. However, it would be beneficial to investigate, through simulation, what effect the maximum likely amount of inequality in cluster sizes would be on the power of the trial and whether any inflation of the sample size would be required.
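
    The kind of adjusted design effect (DE) evaluated above can be sketched as follows. The unequal-cluster-size formula below is the standard adjustment from parallel CRT methodology, using the coefficient of variation of cluster size; the paper's exact proposed DEs may differ.

```python
def de_equal(m, icc):
    """Standard design effect for equal cluster sizes m,
    given the intracluster correlation coefficient (icc)."""
    return 1 + (m - 1) * icc

def de_unequal(m_bar, cv, icc):
    """Adjusted design effect for unequal cluster sizes: m_bar is the
    mean cluster size, cv the coefficient of variation of cluster size."""
    return 1 + ((cv**2 + 1) * m_bar - 1) * icc

icc, m_bar = 0.05, 30
print(de_equal(m_bar, icc))         # 2.45
print(de_unequal(m_bar, 0.4, icc))  # 2.69 -- inflated for imbalance
```

    The simulation finding above, that such adjustments leave SW-CRTs considerably over-powered, is the reason the authors recommend the standard DE plus a simulation check of the likely imbalance.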

  1. The Atacama Cosmology Telescope: Physical Properties and Purity of a Galaxy Cluster Sample Selected Via the Sunyaev-Zel'Dovich Effect

    Science.gov (United States)

    Menanteau, Felipe; Gonzalez, Jorge; Juin, Jean-Baptiste; Marriage, Tobias; Reese, Erik D.; Acquaviva, Viviana; Aguirre, Paula; Appel, John Willam; Baker, Andrew J.; Barrientos, L. Felipe; et al.

    2010-01-01

    We present optical and X-ray properties for the first confirmed galaxy cluster sample selected by the Sunyaev-Zel'dovich Effect from 148 GHz maps over 455 square degrees of sky made with the Atacama Cosmology Telescope. These maps, coupled with multi-band imaging on 4-meter-class optical telescopes, have yielded a sample of 23 galaxy clusters with redshifts between 0.118 and 1.066. Of these 23 clusters, 10 are newly discovered. The selection of this sample is approximately mass limited and essentially independent of redshift. We provide optical positions, images, redshifts and X-ray fluxes and luminosities for the full sample, and X-ray temperatures of an important subset. The mass limit of the full sample is around 8.0 × 10^14 M_⊙, with a number distribution that peaks around a redshift of 0.4. For the 10 highest significance SZE-selected cluster candidates, all of which are optically confirmed, the mass threshold is 1 × 10^15 M_⊙ and the redshift range is 0.167 to 1.066. Archival observations from Chandra, XMM-Newton and ROSAT provide X-ray luminosities and temperatures that are broadly consistent with this mass threshold. Our optical follow-up procedure also allowed us to assess the purity of the ACT cluster sample. Eighty (one hundred) percent of the 148 GHz candidates with signal-to-noise ratios greater than 5.1 (5.7) are confirmed as massive clusters. The reported sample represents one of the largest SZE-selected samples of massive clusters over all redshifts within a cosmologically significant survey volume, which will enable cosmological studies as well as future studies on the evolution, morphology, and stellar populations in the most massive clusters in the Universe.

  2. VizieR Online Data Catalog: ETGs sample for the Coma cluster (Riguccini+, 2015)

    Science.gov (United States)

    Riguccini, L.; Temi, P.; Amblard, A.; Fanelli, M.; Brighenti, F.

    2017-10-01

    For the Coma Cluster, we utilize the work of Mahajan et al. (2010, J/MNRAS/404/1745) to build our ETG sample. Mahajan et al. (2010, J/MNRAS/404/1745) used a combination of MIPS 24 μm observations and SDSS photometry and spectra to investigate the star formation history of galaxies in the Coma supercluster. All of their galaxies from the SDSS data in the Coma supercluster region are brighter than r~17.77, the completeness limit of the SDSS spectroscopic galaxy catalog. Their 24 μm fluxes are obtained from archival data covering 2x2 deg2 for Coma Cluster. Our final sample of 124 sources is composed of 49 ellipticals and 75 lenticulars. (1 data file).

  3. Hierarchical Bayesian modelling of gene expression time series across irregularly sampled replicates and clusters.

    Science.gov (United States)

    Hensman, James; Lawrence, Neil D; Rattray, Magnus

    2013-08-20

    Time course data from microarrays and high-throughput sequencing experiments require simple, computationally efficient and powerful statistical models to extract meaningful biological signal, and for tasks such as data fusion and clustering. Existing methodologies fail to capture either the temporal or replicated nature of the experiments, and often impose constraints on the data collection process, such as regularly spaced samples, or similar sampling schema across replications. We propose hierarchical Gaussian processes as a general model of gene expression time-series, with application to a variety of problems. In particular, we illustrate the method's capacity for missing data imputation, data fusion and clustering. The method can impute data which is missing both systematically and at random: in a hold-out test on real data, performance is significantly better than commonly used imputation methods. The method's ability to model inter- and intra-cluster variance leads to more biologically meaningful clusters. The approach removes the necessity for evenly spaced samples, an advantage illustrated on a developmental Drosophila dataset with irregular replications. The hierarchical Gaussian process model provides an excellent statistical basis for several gene-expression time-series tasks. It has only a few additional parameters over a regular GP, has negligible additional complexity, is easily implemented and can be integrated into several existing algorithms. Our experiments were implemented in Python, and are available from the authors' website: http://staffwww.dcs.shef.ac.uk/people/J.Hensman/.
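
    The hierarchical model itself is not reproduced here, but the single-GP regression it extends, predicting a series at unobserved times from irregular, noisy samples (the basis of the imputation use case above), fits in a few lines of NumPy. All data and hyperparameters are illustrative.

```python
import numpy as np

def rbf(x1, x2, length=2.0, var=1.0):
    """Squared-exponential covariance between two sets of time points."""
    d = x1[:, None] - x2[None, :]
    return var * np.exp(-0.5 * (d / length) ** 2)

rng = np.random.default_rng(3)
t_obs = np.array([0.0, 1.0, 2.5, 6.0, 9.0])  # irregularly spaced samples
y_obs = np.sin(t_obs) + 0.05 * rng.normal(size=t_obs.size)

t_new = np.linspace(0.0, 9.0, 50)
K = rbf(t_obs, t_obs) + 0.05**2 * np.eye(t_obs.size)  # add noise variance
k_star = rbf(t_new, t_obs)

# GP posterior mean at the new times: k* (K + sigma^2 I)^{-1} y.
mean = k_star @ np.linalg.solve(K, y_obs)
print(mean.shape)  # (50,)
```

    The hierarchical version in the record couples such GPs across replicates and cluster members, sharing a mean function between them, with only a few extra parameters.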

  4. Computational Design of Clusters for Catalysis

    Science.gov (United States)

    Jimenez-Izal, Elisa; Alexandrova, Anastassia N.

    2018-04-01

    When small clusters are studied in chemical physics or physical chemistry, one perhaps thinks of the fundamental aspects of cluster electronic structure, or precision spectroscopy in ultracold molecular beams. However, small clusters are also of interest in catalysis, where the cold ground state or an isolated cluster may not even be the right starting point. Instead, the big question is: What happens to cluster-based catalysts under real conditions of catalysis, such as high temperature and coverage with reagents? Myriads of metastable cluster states become accessible, the entire system is dynamic, and catalysis may be driven by rare sites present only under those conditions. Activity, selectivity, and stability are highly dependent on size, composition, shape, support, and environment. To probe and master cluster catalysis, sophisticated tools are being developed for precision synthesis, operando measurements, and multiscale modeling. This review intends to tell the messy story of clusters in catalysis.

  5. On the errors on Omega(0): Monte Carlo simulations of the EMSS cluster sample

    DEFF Research Database (Denmark)

    Oukbir, J.; Arnaud, M.

    2001-01-01

    We perform Monte Carlo simulations of synthetic EMSS cluster samples, to quantify the systematic errors and the statistical uncertainties on the estimate of Ω_0 derived from fits to the cluster number density evolution and to the X-ray temperature distribution up to z=0.83. We identify the scatter around the relation between cluster X-ray luminosity and temperature to be a source of systematic error, of the order of Δ_syst Ω_0 = 0.09, if not properly taken into account in the modelling. After correcting for this bias, our best Ω_0 is 0.66. The uncertainties on the shape

  6. ClusterSignificance: A bioconductor package facilitating statistical analysis of class cluster separations in dimensionality reduced data

    DEFF Research Database (Denmark)

    Serviss, Jason T.; Gådin, Jesper R.; Eriksson, Per

    2017-01-01

    , e.g. genes in a specific pathway, alone can separate samples into these established classes. Despite this, the evaluation of class separations is often subjective and performed via visualization. Here we present the ClusterSignificance package; a set of tools designed to assess the statistical significance of class separations downstream of dimensionality reduction algorithms. In addition, we demonstrate the design and utility of the ClusterSignificance package and utilize it to determine the importance of long non-coding RNA expression in the identity of multiple hematological malignancies.

  7. CA II TRIPLET SPECTROSCOPY OF SMALL MAGELLANIC CLOUD RED GIANTS. III. ABUNDANCES AND VELOCITIES FOR A SAMPLE OF 14 CLUSTERS

    Energy Technology Data Exchange (ETDEWEB)

    Parisi, M. C.; Clariá, J. J.; Marcionni, N. [Observatorio Astronómico, Universidad Nacional de Córdoba, Laprida 854, Córdoba, CP 5000 (Argentina); Geisler, D.; Villanova, S. [Departamento de Astronomía, Universidad de Concepción Casilla 160-C, Concepción (Chile); Sarajedini, A. [Department of Astronomy, University of Florida P.O. Box 112055, Gainesville, FL 32611 (United States); Grocholski, A. J., E-mail: celeste@oac.uncor.edu, E-mail: claria@oac.uncor.edu, E-mail: nmarcionni@oac.uncor.edu, E-mail: dgeisler@astro-udec.cl, E-mail: svillanova@astro-udec.cl, E-mail: ata@astro.ufl.edu, E-mail: grocholski@phys.lsu.edu [Department of Physics and Astronomy, Louisiana State University 202 Nicholson Hall, Tower Drive, Baton Rouge, LA 70803-4001 (United States)

    2015-05-15

    We obtained spectra of red giants in 15 Small Magellanic Cloud (SMC) clusters in the region of the Ca II lines with FORS2 on the Very Large Telescope. We determined the mean metallicity and radial velocity with mean errors of 0.05 dex and 2.6 km s⁻¹, respectively, from a mean of 6.5 members per cluster. One cluster (B113) was too young for a reliable metallicity determination and was excluded from the sample. We combined the sample studied here with 15 clusters previously studied by us using the same technique, and with 7 clusters whose metallicities determined by other authors are on a scale similar to ours. This compilation of 36 clusters is the largest SMC cluster sample currently available with accurate and homogeneously determined metallicities. We found a high probability that the metallicity distribution is bimodal, with potential peaks at −1.1 and −0.8 dex. Our data show no strong evidence of a metallicity gradient in the SMC clusters, somewhat at odds with recent evidence from Ca II triplet spectra of a large sample of field stars. This may be revealing possible differences in the chemical history of clusters and field stars. Our clusters show a significant dispersion of metallicities, whatever age is considered, which could be reflecting the lack of a unique age–metallicity relation in this galaxy. None of the chemical evolution models currently available in the literature satisfactorily represents the global chemical enrichment processes of SMC clusters.

  8. Edge Principal Components and Squash Clustering: Using the Special Structure of Phylogenetic Placement Data for Sample Comparison

    Science.gov (United States)

    Matsen IV, Frederick A.; Evans, Steven N.

    2013-01-01

    Principal components analysis (PCA) and hierarchical clustering are two of the most heavily used techniques for analyzing the differences between nucleic acid sequence samples taken from a given environment. They have led to many insights regarding the structure of microbial communities. We have developed two new complementary methods that leverage how this microbial community data sits on a phylogenetic tree. Edge principal components analysis enables the detection of important differences between samples that contain closely related taxa. Each principal component axis is a collection of signed weights on the edges of the phylogenetic tree, and these weights are easily visualized by a suitable thickening and coloring of the edges. Squash clustering outputs a (rooted) clustering tree in which each internal node corresponds to an appropriate “average” of the original samples at the leaves below the node. Moreover, the length of an edge is a suitably defined distance between the averaged samples associated with the two incident nodes, rather than the less interpretable average of distances produced by UPGMA, the most widely used hierarchical clustering method in this context. We present these methods and illustrate their use with data from the human microbiome. PMID:23505415

  9. Clustered lot quality assurance sampling: a tool to monitor immunization coverage rapidly during a national yellow fever and polio vaccination campaign in Cameroon, May 2009.

    Science.gov (United States)

    Pezzoli, L; Tchio, R; Dzossa, A D; Ndjomo, S; Takeu, A; Anya, B; Ticha, J; Ronveaux, O; Lewis, R F

    2012-01-01

    We used the clustered lot quality assurance sampling (clustered-LQAS) technique to identify districts with low immunization coverage and guide mop-up actions during the last 4 days of a combined oral polio vaccine (OPV) and yellow fever (YF) vaccination campaign conducted in Cameroon in May 2009. We monitored 17 pre-selected districts at risk for low coverage. We designed LQAS plans to reject districts with YF vaccination coverage below the target threshold. Clustered-LQAS proved to be useful in guiding the campaign vaccination strategy before the completion of the operations.
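
The binomial decision rules that underpin standard LQAS classification can be sketched as follows. This is a generic illustration with invented sample size and coverage thresholds, not the plans used in the Cameroon campaign:

```python
from math import comb

def binom_cdf(k, n, p):
    """P(X <= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

def lqas_rule(n, p_upper, p_lower, alpha=0.05, beta=0.10):
    """Smallest decision value d such that classifying a district as
    'low coverage' when <= d sampled people are vaccinated keeps both
    error risks within bounds:
      alpha: risk of flagging a district whose true coverage is p_upper;
      beta:  risk of passing a district whose true coverage is p_lower."""
    for d in range(n + 1):
        a = binom_cdf(d, n, p_upper)      # flag a genuinely well-covered district
        b = 1 - binom_cdf(d, n, p_lower)  # pass a genuinely poorly covered district
        if a <= alpha and b <= beta:
            return d, a, b
    return None  # no rule satisfies both risks at this sample size

# invented example: sample 50 people, target coverage 90%, unacceptable 70%
print(lqas_rule(50, 0.90, 0.70))
```

The clustered variants discussed in the paper modify these risk calculations to account for within-cluster correlation.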

  10. Fuzzy C-Means Clustering Model Data Mining For Recognizing Stock Data Sampling Pattern

    Directory of Open Access Journals (Sweden)

    Sylvia Jane Annatje Sumarauw

    2007-06-01

    Full Text Available Abstract Capital market has been beneficial to companies and investors. For investors, the capital market provides two economic advantages, namely dividend and capital gain, and a non-economic one, namely a voting share in the Shareholders General Meeting. But it can also penalize the share owners. In order to protect themselves from this risk, investors should predict the prospects of their companies. As a consequence of trading an abstract commodity, share quality is determined by the validity of the company profile information. Any information on stock value fluctuation from the Jakarta Stock Exchange can be a useful consideration and a good measurement for data analysis. In the context of protecting shareholders from risk, this research focuses on stock data sample categories, or stock data sample patterns, using the Fuzzy c-Means Clustering Model, which provides useful information for investors. The research analyses stock data such as Individual Index, Volume and Amount for the Property and Real Estate Emitter Group at the Jakarta Stock Exchange from January 1 till December 31, 2004. The mining process follows the Cross Industry Standard Process model for Data Mining (CRISP-DM, in the form of a circle with these steps: Business Understanding, Data Understanding, Data Preparation, Modelling, Evaluation and Deployment. In the modelling step, the Fuzzy c-Means Clustering Model is applied. Data Mining with the Fuzzy c-Means Clustering Model can analyze stock data in a big database with many complex variables, especially for finding the data sample pattern, and then build a Fuzzy Inference System that maps inputs to outputs based on Fuzzy Logic by recognizing the pattern. Keywords: Data Mining, Fuzzy c-Means Clustering Model, Pattern Recognition
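
As a generic illustration of the clustering technique named above, here is a textbook one-dimensional fuzzy c-means sketch (not the paper's implementation; the data values are invented):

```python
import random

def fuzzy_c_means(xs, c=2, m=2.0, iters=50, seed=0):
    """Minimal 1-D fuzzy c-means: alternate membership and center updates.
    m > 1 is the fuzzifier; returns final centers and memberships."""
    rng = random.Random(seed)
    centers = rng.sample(xs, c)  # distinct starting centers
    n = len(xs)
    u = []
    for _ in range(iters):
        # membership update: u_ij = 1 / sum_l (d_ij / d_il)^(2/(m-1))
        u = []
        for x in xs:
            d = [abs(x - ck) or 1e-12 for ck in centers]  # avoid division by zero
            u.append([1.0 / sum((d[j] / d[l]) ** (2.0 / (m - 1.0)) for l in range(c))
                      for j in range(c)])
        # center update: weighted mean with weights u_ij^m
        centers = [sum(u[i][j] ** m * xs[i] for i in range(n)) /
                   sum(u[i][j] ** m for i in range(n))
                   for j in range(c)]
    return centers, u

# invented toy data: two well-separated groups of values
centers, memberships = fuzzy_c_means([1.0, 1.1, 0.9, 10.0, 10.2, 9.8], c=2)
```

Unlike hard k-means, each sample retains a graded membership in every cluster, which is what makes the method useful for recognizing overlapping patterns in noisy stock data.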

  11. The X-ray luminosity-temperature relation of a complete sample of low-mass galaxy clusters

    DEFF Research Database (Denmark)

    Zou, S.; Maughan, B. J.; Giles, P. A.

    2016-01-01

    …found for massive clusters to a steeper slope for the lower mass sample studied here. Thanks to our rigorous treatment of selection biases, these measurements provide a robust reference against which to compare predictions of models of the impact of feedback on the X-ray properties of galaxy groups. … (T), taking selection biases fully into account. The logarithmic slope of the bolometric L-T relation was found to be 3.29 ± 0.33, consistent with values typically found for samples of more massive clusters. In combination with other recent studies of the L-T relation, we show…

  12. Management system of ELHEP cluster machine for FEL photonics design

    Science.gov (United States)

    Zysik, Jacek; Poźniak, Krzysztof; Romaniuk, Ryszard

    2006-10-01

    A multipurpose, distributed, MatLab-calculation-oriented cluster machine was assembled in the PERG/ELHEP laboratory at ISE/WUT. It is intended mainly for advanced photonics and FPGA/DSP-based systems design for the Free Electron Laser, and will also be used for student projects for the superconducting accelerator and FEL. Here we present one specific side of the cluster design. For intense, distributed daily work with the cluster, it is important to have a good interface and practical access to all machine resources. A complex management system was implemented in the PERG laboratory. It helps all registered users to work with all necessary applications, communicate with other logged-in people, check the news, and gather all necessary information about what is going on in the system and how it is utilized. The system is also very practical for administration: it helps to track who is using the resources and for how long, and it provides different privileges for different applications, among other features. The system is released as freeware, using open source code, and can be modified by system operators or super-users who are interested in nonstandard system configurations.

  13. The Gemini/HST Galaxy Cluster Project: Redshift 0.2–1.0 Cluster Sample, X-Ray Data, and Optical Photometry Catalog

    Science.gov (United States)

    Jørgensen, Inger; Chiboucas, Kristin; Hibon, Pascale; Nielsen, Louise D.; Takamiya, Marianne

    2018-04-01

    The Gemini/HST Galaxy Cluster Project (GCP) covers 14 z = 0.2–1.0 clusters with X-ray luminosity of L_500 ≥ 10^44 erg s⁻¹ in the 0.1–2.4 keV band. In this paper, we provide homogeneously calibrated X-ray luminosities, masses, and radii, and we present the complete catalog of the ground-based photometry for the GCP clusters. The clusters were observed with either Gemini North or South in three or four of the optical passbands g′, r′, i′, and z′. The photometric catalog includes consistently calibrated total magnitudes, colors, and geometrical parameters. The photometry reaches ≈25 mag in the passband closest to the rest-frame B band. We summarize comparisons of our photometry with data from the Sloan Digital Sky Survey. We describe the sample selection for our spectroscopic observations, and establish the calibrations to obtain rest-frame magnitudes and colors. Finally, we derive the color–magnitude relations for the clusters, and briefly discuss these in the context of evolution with redshift. Consistent with our results based on spectroscopic data, the color–magnitude relations support passive evolution of the red sequence galaxies. The absence of change in the slope with redshift constrains the allowable age variation along the red sequence to <0.05 dex between the brightest cluster galaxies and those four magnitudes fainter. This paper serves as the main reference for the GCP cluster and galaxy selection, X-ray data, and ground-based photometry.

  14. THE ATACAMA COSMOLOGY TELESCOPE: DYNAMICAL MASSES AND SCALING RELATIONS FOR A SAMPLE OF MASSIVE SUNYAEV-ZEL'DOVICH EFFECT SELECTED GALAXY CLUSTERS ,

    International Nuclear Information System (INIS)

    Sifón, Cristóbal; Barrientos, L. Felipe; González, Jorge; Infante, Leopoldo; Dünner, Rolando; Menanteau, Felipe; Hughes, John P.; Baker, Andrew J.; Hasselfield, Matthew; Marriage, Tobias A.; Crichton, Devin; Gralla, Megan B.; Addison, Graeme E.; Dunkley, Joanna; Battaglia, Nick; Bond, J. Richard; Hajian, Amir; Das, Sudeep; Devlin, Mark J.; Hilton, Matt

    2013-01-01

    We present the first dynamical mass estimates and scaling relations for a sample of Sunyaev-Zel'dovich effect (SZE) selected galaxy clusters. The sample consists of 16 massive clusters detected with the Atacama Cosmology Telescope (ACT) over a 455 deg² area of the southern sky. Deep multi-object spectroscopic observations were taken to secure intermediate-resolution (R ∼ 700-800) spectra and redshifts for ≈60 member galaxies on average per cluster. The dynamical masses M_200c of the clusters have been calculated using simulation-based scaling relations between velocity dispersion and mass. The sample has a median redshift z = 0.50 and a median mass M_200c ≈ 12×10^14 h_70⁻¹ M_⊙ with a lower limit M_200c ≈ 6×10^14 h_70⁻¹ M_⊙, consistent with the expectations for the ACT southern sky survey. These masses are compared to the ACT SZE properties of the sample, specifically, the matched-filter central SZE amplitude ỹ_0, the central Compton parameter y_0, and the integrated Compton signal Y_200c, which we use to derive SZE-mass scaling relations. All SZE estimators correlate with dynamical mass with low intrinsic scatter (≲20%), in agreement with numerical simulations. We explore the effects of various systematic effects on these scaling relations, including the correlation between observables and the influence of dynamically disturbed clusters. Using the three-dimensional information available, we divide the sample into relaxed and disturbed clusters and find that ∼50% of the clusters are disturbed. There are hints that disturbed systems might bias the scaling relations, but given the current sample sizes, these differences are not significant; further studies including more clusters are required to assess the impact of these clusters on the scaling relations.

  15. The optimal design of stepped wedge trials with equal allocation to sequences and a comparison to other trial designs.

    Science.gov (United States)

    Thompson, Jennifer A; Fielding, Katherine; Hargreaves, James; Copas, Andrew

    2017-12-01

    Background/Aims We sought to optimise the design of stepped wedge trials with an equal allocation of clusters to sequences and explored sample size comparisons with alternative trial designs. Methods We developed a new expression for the design effect for a stepped wedge trial, assuming that observations are equally correlated within clusters and an equal number of observations in each period between sequences switching to the intervention. We minimised the design effect with respect to (1) the fraction of observations before the first and after the final sequence switches (the periods with all clusters in the control or intervention condition, respectively) and (2) the number of sequences. We compared the design effect of this optimised stepped wedge trial to the design effects of a parallel cluster-randomised trial, a cluster-randomised trial with baseline observations, and a hybrid trial design (a mixture of cluster-randomised trial and stepped wedge trial) with the same total cluster size for all designs. Results We found that a stepped wedge trial with an equal allocation to sequences is optimised by obtaining all observations after the first sequence switches and before the final sequence switches to the intervention; this means that the first sequence remains in the control condition and the last sequence remains in the intervention condition for the duration of the trial. With this design, the optimal number of sequences is [Formula: see text], where [Formula: see text] is the cluster-mean correlation, [Formula: see text] is the intracluster correlation coefficient, and m is the total cluster size. The optimal number of sequences is small when the intracluster correlation coefficient and cluster size are small and large when the intracluster correlation coefficient or cluster size is large. A cluster-randomised trial remains more efficient than the optimised stepped wedge trial when the intracluster correlation coefficient or cluster size is small. …

  16. How can design be a platform for the development of a regional cluster in the Region of Southern Denmark

    DEFF Research Database (Denmark)

    Jensen, Susanne; Christensen, Poul Rind

    2013-01-01

    Analyses of key factors for the emergence of a cluster and the formation of a design cluster in the region of Southern Denmark.

  17. Don't spin the pen: two alternative methods for second-stage sampling in urban cluster surveys

    Directory of Open Access Journals (Sweden)

    Rose Angela MC

    2007-06-01

    Full Text Available Abstract In two-stage cluster surveys, the traditional method used in second-stage sampling (in which the first household in a cluster is selected) is time-consuming and may result in biased estimates of the indicator of interest. Firstly, a random direction from the center of the cluster is selected, usually by spinning a pen. The houses along that direction are then counted out to the boundary of the cluster, and one is then selected at random to be the first household surveyed. This process favors households towards the center of the cluster, but it could easily be improved. During a recent meningitis vaccination coverage survey in Maradi, Niger, we compared this method of first household selection to two alternatives in urban zones: (1) using a superimposed grid on the map of the cluster area and randomly selecting an intersection; and (2) drawing the perimeter of the cluster area using a Global Positioning System (GPS) and randomly selecting one point within the perimeter. Although we only compared a limited number of clusters using each method, we found the sampling grid method to be the fastest and easiest for field survey teams, although it does require a map of the area. Selecting a random GPS point was also found to be a good method, once adequate training can be provided. Spinning the pen and counting households to the boundary was the most complicated and time-consuming. The two methods tested here represent simpler, quicker and potentially more robust alternatives to spinning the pen for cluster surveys in urban areas. However, in rural areas, these alternatives would favor initial household selection from lower density (or even potentially empty) areas. Bearing in mind these limitations, as well as available resources and feasibility, investigators should choose the most appropriate method for their particular survey context.
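
The second alternative, drawing a random GPS point inside the recorded perimeter, amounts to rejection sampling against a polygon. A minimal sketch with generic geometry code (the ray-casting test and the polygon coordinates are illustrative, not the authors' software):

```python
import random

def point_in_polygon(pt, poly):
    """Ray-casting test: count crossings of a horizontal ray from pt."""
    x, y = pt
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        if (y1 > y) != (y2 > y):  # edge straddles the ray's y level
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def random_point(poly, rng=random):
    """Uniform point inside poly via rejection sampling in its bounding box."""
    xs = [p[0] for p in poly]
    ys = [p[1] for p in poly]
    while True:
        pt = (rng.uniform(min(xs), max(xs)), rng.uniform(min(ys), max(ys)))
        if point_in_polygon(pt, poly):
            return pt
```

In the field, the drawn point would be converted back to GPS coordinates and the household nearest to it taken as the starting household.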

  18. Cluster Sampling Bias in Government-Sponsored Evaluations: A Correlational Study of Employment and Welfare Pilots in England.

    Science.gov (United States)

    Vaganay, Arnaud

    2016-01-01

    For pilot or experimental employment programme results to apply beyond their test bed, researchers must select 'clusters' (i.e. the job centres delivering the new intervention) that are reasonably representative of the whole territory. More specifically, this requirement must account for conditions that could artificially inflate the effect of a programme, such as the fluidity of the local labour market or the performance of the local job centre. Failure to achieve representativeness results in Cluster Sampling Bias (CSB). This paper makes three contributions to the literature. Theoretically, it approaches the notion of CSB as a human behaviour. It offers a comprehensive theory, whereby researchers with limited resources and conflicting priorities tend to oversample 'effect-enhancing' clusters when piloting a new intervention. Methodologically, it advocates for a 'narrow and deep' scope, as opposed to the 'wide and shallow' scope, which has prevailed so far. The PILOT-2 dataset was developed to test this idea. Empirically, it provides evidence on the prevalence of CSB. In conditions similar to the PILOT-2 case study, investigators (1) do not sample clusters with a view to maximise generalisability; (2) do not oversample 'effect-enhancing' clusters; (3) consistently oversample some clusters, including those with higher-than-average client caseloads; and (4) report their sampling decisions in an inconsistent and generally poor manner. In conclusion, although CSB is prevalent, it is still unclear whether it is intentional and meant to mislead stakeholders about the expected effect of the intervention or due to higher-level constraints or other considerations.

  19. ELEMENTAL ABUNDANCE RATIOS IN STARS OF THE OUTER GALACTIC DISK. IV. A NEW SAMPLE OF OPEN CLUSTERS

    International Nuclear Information System (INIS)

    Yong, David; Carney, Bruce W.; Friel, Eileen D.

    2012-01-01

    We present radial velocities and chemical abundances for nine stars in the old, distant open clusters Be18, Be21, Be22, Be32, and PWM4. For Be18 and PWM4, these are the first chemical abundance measurements. Combining our data with literature results produces a compilation of some 68 chemical abundance measurements in 49 unique clusters. For this combined sample, we study the chemical abundances of open clusters as a function of distance, age, and metallicity. We confirm that the metallicity gradient in the outer disk is flatter than the gradient in the vicinity of the solar neighborhood. We also confirm that the open clusters in the outer disk are metal-poor with enhancements in the ratios [α/Fe] and perhaps [Eu/Fe]. All elements show negligible or small trends between [X/Fe] and distance, but for some elements, there is a hint that the local (R_GC < 13 kpc) and outer (R_GC > 13 kpc) samples may have different trends with distance. There is no evidence for significant abundance trends versus age. We measure the linear relation between [X/Fe] and metallicity, [Fe/H], and find that the scatter about the mean trend is comparable to the measurement uncertainties. Comparison with solar neighborhood field giants shows that the open clusters share similar abundance ratios [X/Fe] at a given metallicity. While the flattening of the metallicity gradient and enhanced [α/Fe] ratios in the outer disk suggest a chemical enrichment history different from that of the solar neighborhood, we echo the sentiments expressed by Friel et al. that definitive conclusions await homogeneous analyses of larger samples of stars in larger numbers of clusters. Arguably, our understanding of the evolution of the outer disk from open clusters is currently limited by systematic abundance differences between various studies.

  20. A two-stage cluster sampling method using gridded population data, a GIS, and Google Earth™ imagery in a population-based mortality survey in Iraq

    Directory of Open Access Journals (Sweden)

    Galway LP

    2012-04-01

    Full Text Available Abstract Background Mortality estimates can measure and monitor the impacts of conflict on a population, guide humanitarian efforts, and help to better understand the public health impacts of conflict. Vital statistics registration and surveillance systems are rarely functional in conflict settings, posing the challenge of estimating mortality using retrospective population-based surveys. Results We present a two-stage cluster sampling method for application in population-based mortality surveys. The sampling method utilizes gridded population data and a geographic information system (GIS) to select clusters in the first sampling stage, and Google Earth™ imagery and sampling grids to select households in the second sampling stage. The sampling method is implemented in a household mortality study in Iraq in 2011. Factors affecting feasibility and methodological quality are described. Conclusion Sampling is a challenge in retrospective population-based mortality studies, and alternatives that improve on the conventional approaches are needed. The sampling strategy presented here was designed to generate a representative sample of the Iraqi population while reducing the potential for bias and considering the context-specific challenges of the study setting. This sampling strategy, or variations on it, are adaptable and should be considered and tested in other conflict settings.
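
The first sampling stage, selecting grid cells with probability proportional to their gridded population, can be sketched as a standard PPS-with-replacement draw (the cell identifiers and counts below are invented for illustration):

```python
import random
from bisect import bisect_left
from itertools import accumulate

def pps_sample(cells, k, rng=random):
    """Draw k clusters with probability proportional to size, with replacement.
    cells: list of (cell_id, population); populations must be positive."""
    cum = list(accumulate(pop for _, pop in cells))  # cumulative population totals
    total = cum[-1]
    draws = []
    for _ in range(k):
        u = rng.uniform(0, total)            # uniform position along the total count
        draws.append(cells[bisect_left(cum, u)][0])  # cell whose interval contains u
    return draws

# invented example: cell "B" holds 90% of the population
picks = pps_sample([("A", 100), ("B", 900)], k=1000, rng=random.Random(42))
```

In the survey itself, the second stage then falls back on imagery-derived sampling grids within each selected cell to pick households.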

  1. Multi-level flow-based Markov clustering for design structure matrices

    NARCIS (Netherlands)

    Wilschut, T.; Etman, P.L.F.; Rooda, J.E.; Adan, I.J.B.F.

    2016-01-01

    For decomposition and integration of systems one requires extensive knowledge of system structure. A Design Structure Matrix (DSM) can provide a simple, compact and visual representation of dependencies between system elements. By permuting the rows and columns of a DSM using a clustering algorithm, …

  2. An optimal design of cluster spacing intervals for staged fracturing in horizontal shale gas wells based on the optimal SRVs

    Directory of Open Access Journals (Sweden)

    Lan Ren

    2017-09-01

    Full Text Available When horizontal well staged cluster fracturing is applied in shale gas reservoirs, the cluster spacing is essential to fracturing performance. If the cluster spacing is too small, the stimulated areas between major fractures will overlap and the efficiency of fracturing stimulation will decrease. If the cluster spacing is too large, the area between major fractures cannot be stimulated completely and the reservoir recovery extent will be adversely impacted. At present, cluster spacing design is mainly based on the static model with the potential reservoir stimulation area as the target, and there is no cluster spacing design method that follows the actual fracturing process and targets the dynamic stimulated reservoir volume (SRV). In this paper, a dynamic SRV calculation model for cluster fracture propagation was established by analyzing the coupling mechanisms among fracture propagation, fracturing fluid loss and stress. Then, the cluster spacing was optimized to reach the target of the optimal SRVs. This model was applied for validation on site in the Jiaoshiba shale gasfield in the Fuling area of the Sichuan Basin. The key geological engineering parameters influencing the optimal cluster spacing intervals were analyzed. The reference charts for the optimal cluster spacing design were prepared based on the geological characteristics of the south and north blocks of the Jiaoshiba shale gasfield. It is concluded that the optimal cluster spacing design method proposed in this paper is of great significance in overcoming the blindness of current cluster perforation design and guiding the optimal design of volume fracturing in shale gas reservoirs. Keywords: Shale gas, Horizontal well, Staged fracturing, Cluster spacing, Reservoir, Stimulated reservoir volume (SRV), Mathematical model, Optimal method, Sichuan Basin, Jiaoshiba shale gasfield

  3. Stepped-wedge cluster randomised controlled trials: a generic framework including parallel and multiple-level designs.

    Science.gov (United States)

    Hemming, Karla; Lilford, Richard; Girling, Alan J

    2015-01-30

    Stepped-wedge cluster randomised trials (SW-CRTs) are being used with increasing frequency in health service evaluation. Conventionally, these studies are cross-sectional in design with equally spaced steps, with an equal number of clusters randomised at each step and data collected at each and every step. Here we introduce several variations on this design and consider implications for power. One modification we consider is the incomplete cross-sectional SW-CRT, where the number of clusters varies at each step or where at some steps, for example, implementation or transition periods, data are not collected. We show that the parallel CRT with staggered but balanced randomisation can be considered a special case of the incomplete SW-CRT. As too can the parallel CRT with baseline measures. And we extend these designs to allow for multiple layers of clustering, for example, wards within a hospital. Building on results for complete designs, power and detectable difference are derived using a Wald test and obtaining the variance-covariance matrix of the treatment effect assuming a generalised linear mixed model. These variations are illustrated by several real examples. We recommend that whilst the impact of transition periods on power is likely to be small, where they are a feature of the design they should be incorporated. We also show examples in which the power of a SW-CRT increases as the intra-cluster correlation (ICC) increases and demonstrate that the impact of the ICC is likely to be smaller in a SW-CRT compared with a parallel CRT, especially where there are multiple levels of clustering. Finally, through this unified framework, the efficiency of the SW-CRT and the parallel CRT can be compared. © 2014 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.
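
Once the variance of the treatment effect is obtained from the mixed-model variance-covariance matrix, power for the Wald test reduces to a normal-approximation formula. A generic sketch, not tied to the paper's derivation (delta and its standard error are assumed given):

```python
from statistics import NormalDist

def power_wald(delta, se, alpha=0.05):
    """Approximate power of a two-sided Wald test for an effect of size
    delta whose estimator has standard error se (normal approximation)."""
    nd = NormalDist()
    z = nd.inv_cdf(1 - alpha / 2)   # two-sided critical value
    ratio = abs(delta) / se
    # probability the test statistic falls in either rejection region
    return (1 - nd.cdf(z - ratio)) + nd.cdf(-z - ratio)
```

With delta/se ≈ 2.8 this gives roughly 80% power, the conventional target; in a SW-CRT the standard error is what absorbs the intra-cluster correlation and the stepped randomisation schedule.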

  4. ClusterCAD: a computational platform for type I modular polyketide synthase design

    DEFF Research Database (Denmark)

    Eng, Clara H.; Backman, Tyler W. H.; Bailey, Constance B.

    2018-01-01

    …barrier to the design of active variants, and identifying strategies to reliably construct functional PKS chimeras remains an active area of research. In this work, we formalize a paradigm for the design of PKS chimeras and introduce ClusterCAD as a computational platform to streamline and simplify…

  5. Objective sampling design in a highly heterogeneous landscape - characterizing environmental determinants of malaria vector distribution in French Guiana, in the Amazonian region.

    Science.gov (United States)

    Roux, Emmanuel; Gaborit, Pascal; Romaña, Christine A; Girod, Romain; Dessay, Nadine; Dusfour, Isabelle

    2013-12-01

    Sampling design is a key issue when establishing species inventories and characterizing habitats within highly heterogeneous landscapes. Sampling efforts in such environments may be constrained, and many field studies rely only on subjective and/or qualitative approaches to design the collection strategy. The region of Cacao, in French Guiana, provides an excellent study site to understand the presence and abundance of Anopheles mosquitoes, their species dynamics and the transmission risk of malaria across various environments. We propose an objective methodology to define a stratified sampling design. Following thorough environmental characterization, a factorial analysis of mixed groups allows the data to be reduced and non-collinear principal components to be identified while balancing the influences of the different environmental factors. These components defined new variables which could then be used in a robust k-means clustering procedure. We thereby identified five clusters that corresponded to our sampling strata and selected sampling sites in each stratum. We validated our method by comparing the species overlap of entomological collections from the selected sites with the environmental similarities of the same sites. The Morisita index was significantly correlated (Pearson linear correlation) with environmental similarity based on (i) the balanced environmental variable groups considered jointly (p = 0.001) and (ii) land cover/use alone, supporting our sampling approach. Land cover/use maps (based on high spatial resolution satellite images) were shown to be particularly useful when studying the presence, density and diversity of Anopheles mosquitoes at local scales and in very heterogeneous landscapes.
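
The species-overlap comparison above relies on the Morisita index; in its commonly used Morisita-Horn form it can be computed directly from two abundance vectors. A minimal sketch with invented abundances:

```python
def morisita_horn(x, y):
    """Morisita-Horn overlap between two abundance vectors over the same
    species list: 1.0 for identical composition, 0.0 for fully disjoint."""
    X, Y = sum(x), sum(y)
    da = sum(v * v for v in x) / (X * X)  # Simpson-type concentration of sample x
    db = sum(v * v for v in y) / (Y * Y)
    return 2 * sum(a * b for a, b in zip(x, y)) / ((da + db) * X * Y)

# invented counts for three species at two sites
print(morisita_horn([10, 20, 30], [12, 18, 31]))
```

Correlating such pairwise overlap values with an environmental similarity measure is what the validation step above does.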

  6. A simple sample size formula for analysis of covariance in cluster randomized trials.

    NARCIS (Netherlands)

    Teerenstra, S.; Eldridge, S.; Graff, M.J.; Hoop, E. de; Borm, G.F.

    2012-01-01

    For cluster randomized trials with a continuous outcome, the sample size is often calculated as if an analysis of the outcomes at the end of the treatment period (follow-up scores) would be performed. However, often a baseline measurement of the outcome is available or feasible to obtain. …
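
For context, the follow-up-scores calculation that the abstract refers to uses the classical design effect (the textbook inflation factor, not the baseline-adjusted formula derived in the paper):

```python
from math import ceil

def deff(m, icc):
    """Classical design effect for a cluster randomized trial:
    m = cluster size, icc = intracluster correlation coefficient."""
    return 1 + (m - 1) * icc

def clusters_per_arm(n_individual, m, icc):
    """Clusters needed per arm after inflating an individually
    randomized sample size requirement of n_individual."""
    return ceil(n_individual * deff(m, icc) / m)

# invented example: 200 per arm if individually randomized,
# clusters of 20, ICC of 0.05
print(clusters_per_arm(200, 20, 0.05))  # → 20
```

The paper's contribution is a simple correction to this kind of calculation that exploits the baseline measurement when one is available.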

  7. An improved initialization center k-means clustering algorithm based on distance and density

    Science.gov (United States)

    Duan, Yanling; Liu, Qun; Xia, Shuyin

    2018-04-01

    Aiming at the problem that the random initial cluster centers of the k-means algorithm make the clustering results sensitive to outlier data samples and unstable across repeated runs, a center initialization method based on larger distance and higher density is proposed. The reciprocal of the weighted average distance is used to represent sample density, and the data samples with larger distance and higher density are selected as the initial cluster centers to optimize the clustering results. A clustering evaluation method based on distance and density is then designed to verify the feasibility and practicality of the algorithm; experimental results on UCI data sets show that the algorithm has good stability and practicality.
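
A minimal sketch of the initialization idea described in this abstract, under the assumption that density is the reciprocal of a point's mean distance to all other points and that each subsequent center maximizes density times distance to the nearest already-chosen center (the paper's exact weighting may differ):

```python
import math

def init_centers(points, k):
    """Density- and distance-based seeding for k-means:
    first center = densest point; later centers greedily maximize
    density(i) * distance(i, nearest chosen center)."""
    n = len(points)
    density = []
    for i, p in enumerate(points):
        mean_d = sum(math.dist(p, q) for j, q in enumerate(points) if j != i) / (n - 1)
        density.append(1.0 / mean_d if mean_d else float("inf"))
    chosen = [max(range(n), key=lambda i: density[i])]
    while len(chosen) < k:
        def score(i):
            d_near = min(math.dist(points[i], points[c]) for c in chosen)
            return density[i] * d_near
        chosen.append(max((i for i in range(n) if i not in chosen), key=score))
    return [points[i] for i in chosen]

# invented toy data: two tight groups, so the two seeds should land in different groups
seeds = init_centers([(0, 0), (0.1, 0), (0, 0.1), (10, 10), (10.1, 10), (10, 10.1)], k=2)
```

Because the density term down-weights isolated points, outliers are unlikely to be picked as seeds, which is the instability this initialization is meant to avoid.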

  8. Ca II TRIPLET SPECTROSCOPY OF SMALL MAGELLANIC CLOUD RED GIANTS. I. ABUNDANCES AND VELOCITIES FOR A SAMPLE OF CLUSTERS

    International Nuclear Information System (INIS)

    Parisi, M. C.; Claria, J. J.; Grocholski, A. J.; Geisler, D.; Sarajedini, A.

    2009-01-01

    We have obtained near-infrared spectra covering the Ca II triplet lines for a large number of stars associated with 16 Small Magellanic Cloud (SMC) clusters using the VLT + FORS2. These data compose the largest available sample of SMC clusters with spectroscopically derived abundances and velocities. Our clusters span a wide range of ages and provide good areal coverage of the galaxy. Cluster members are selected using a combination of their positions relative to the cluster center as well as their location in the color-magnitude diagram, abundances, and radial velocities (RVs). We determine mean cluster velocities to typically 2.7 km s⁻¹ and metallicities to 0.05 dex (random errors), from an average of 6.4 members per cluster. By combining our clusters with previously published results, we compile a sample of 25 clusters on a homogeneous metallicity scale and with relatively small metallicity errors, and thereby investigate the metallicity distribution, metallicity gradient, and age-metallicity relation (AMR) of the SMC cluster system. For all 25 clusters in our expanded sample, the mean metallicity [Fe/H] = -0.96 with σ = 0.19. The metallicity distribution may possibly be bimodal, with peaks at ∼-0.9 dex and -1.15 dex. Similar to the Large Magellanic Cloud (LMC), the SMC cluster system gives no indication of a radial metallicity gradient. However, intermediate age SMC clusters are both significantly more metal-poor and have a larger metallicity spread than their LMC counterparts. Our AMR shows evidence for three phases: a very early (>11 Gyr) phase in which the metallicity reached ∼-1.2 dex, a long intermediate phase from ∼10 to 3 Gyr in which the metallicity only slightly increased, and a final phase from 3 to 1 Gyr ago in which the rate of enrichment was substantially faster. We find good overall agreement with the model of Pagel and Tautvaisiene, which assumes a burst of star formation at 4 Gyr. Finally, we find that the mean RV of the cluster system…

  9. Design of the South East Asian Nutrition Survey (SEANUTS): a four-country multistage cluster design study.

    Science.gov (United States)

    Schaafsma, Anne; Deurenberg, Paul; Calame, Wim; van den Heuvel, Ellen G H M; van Beusekom, Christien; Hautvast, Jo; Sandjaja; Bee Koon, Poh; Rojroongwasinkul, Nipa; Le Nguyen, Bao Khanh; Parikh, Panam; Khouw, Ilse

    2013-09-01

    Nutrition is a well-known factor in the growth, health and development of children. It is also acknowledged that worldwide many people have dietary imbalances resulting in over- or undernutrition. In 2009, the multinational food company FrieslandCampina initiated the South East Asian Nutrition Survey (SEANUTS), a combination of surveys carried out in Indonesia, Malaysia, Thailand and Vietnam, to get a better insight into these imbalances. The present study describes the general study design and methodology, as well as some problems and pitfalls encountered. In each of these countries, participants in the age range of 0·5-12 years were recruited according to a multistage cluster randomised or stratified random sampling methodology. Field teams took care of recruitment and data collection. For the health status of children, growth and body composition, physical activity, bone density, and development and cognition were measured. For nutrition, food intake and food habits were assessed by questionnaires, whereas in subpopulations blood and urine samples were collected to measure the biochemical status parameters of Fe, vitamins A and D, and DHA. In Thailand, the researchers additionally studied the lipid profile in blood, whereas in Indonesia iodine excretion in urine was analysed. Biochemical data were analysed in certified laboratories. Study protocols and methodology were aligned where practically possible. In December 2011, data collection was finalised. In total, 16,744 children participated in the present study. Information that will be very relevant for formulating nutritional health policies, as well as for designing innovative food and nutrition research and development programmes, has become available.
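
The multistage cluster design described above (select clusters, then children within each cluster) can be sketched as a two-stage draw. Village names and sizes below are hypothetical; a real survey such as SEANUTS would additionally use stratification, probability-proportional-to-size selection and sampling weights:

```python
import random

def two_stage_sample(clusters, n_clusters, n_per_cluster, seed=0):
    """Stage 1: simple random sample of clusters.
    Stage 2: simple random sample of members within each chosen cluster."""
    rng = random.Random(seed)
    chosen = rng.sample(sorted(clusters), n_clusters)
    return {c: rng.sample(clusters[c], min(n_per_cluster, len(clusters[c])))
            for c in chosen}

# Hypothetical sampling frame: village -> list of eligible children.
frame = {f"village_{i}": [f"child_{i}_{j}" for j in range(30)] for i in range(12)}
sample = two_stage_sample(frame, n_clusters=4, n_per_cluster=10)
```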

  10. RosettaAntibodyDesign (RAbD): A general framework for computational antibody design

    Science.gov (United States)

    Adolf-Bryfogle, Jared; Kalyuzhniy, Oleks; Kubitz, Michael; Hu, Xiaozhen; Adachi, Yumiko; Schief, William R.

    2018-01-01

    A structural-bioinformatics-based computational methodology and framework have been developed for the design of antibodies to targets of interest. RosettaAntibodyDesign (RAbD) samples the diverse sequence, structure, and binding space of an antibody to an antigen in highly customizable protocols for the design of antibodies in a broad range of applications. The program samples antibody sequences and structures by grafting structures from a widely accepted set of the canonical clusters of CDRs (North et al., J. Mol. Biol., 406:228–256, 2011). It then performs sequence design according to amino acid sequence profiles of each cluster, and samples CDR backbones using a flexible-backbone design protocol incorporating cluster-based CDR constraints. Starting from an existing experimental or computationally modeled antigen-antibody structure, RAbD can be used to redesign a single CDR or multiple CDRs with loops of different length, conformation, and sequence. We rigorously benchmarked RAbD on a set of 60 diverse antibody–antigen complexes, using two design strategies—optimizing total Rosetta energy and optimizing interface energy alone. We utilized two novel metrics for measuring success in computational protein design. The design risk ratio (DRR) is equal to the frequency of recovery of native CDR lengths and clusters divided by the frequency of sampling of those features during the Monte Carlo design procedure. Ratios greater than 1.0 indicate that the design process is picking out the native more frequently than expected from their sampled rate. We achieved DRRs for the non-H3 CDRs of between 2.4 and 4.0. The antigen risk ratio (ARR) is the ratio of frequencies of the native amino acid types, CDR lengths, and clusters in the output decoys for simulations performed in the presence and absence of the antigen. For CDRs, we achieved cluster ARRs as high as 2.5 for L1 and 1.5 for H2. For sequence design simulations without CDR grafting, the overall recovery for the

  11. RosettaAntibodyDesign (RAbD): A general framework for computational antibody design.

    Science.gov (United States)

    Adolf-Bryfogle, Jared; Kalyuzhniy, Oleks; Kubitz, Michael; Weitzner, Brian D; Hu, Xiaozhen; Adachi, Yumiko; Schief, William R; Dunbrack, Roland L

    2018-04-01

    A structural-bioinformatics-based computational methodology and framework have been developed for the design of antibodies to targets of interest. RosettaAntibodyDesign (RAbD) samples the diverse sequence, structure, and binding space of an antibody to an antigen in highly customizable protocols for the design of antibodies in a broad range of applications. The program samples antibody sequences and structures by grafting structures from a widely accepted set of the canonical clusters of CDRs (North et al., J. Mol. Biol., 406:228-256, 2011). It then performs sequence design according to amino acid sequence profiles of each cluster, and samples CDR backbones using a flexible-backbone design protocol incorporating cluster-based CDR constraints. Starting from an existing experimental or computationally modeled antigen-antibody structure, RAbD can be used to redesign a single CDR or multiple CDRs with loops of different length, conformation, and sequence. We rigorously benchmarked RAbD on a set of 60 diverse antibody-antigen complexes, using two design strategies-optimizing total Rosetta energy and optimizing interface energy alone. We utilized two novel metrics for measuring success in computational protein design. The design risk ratio (DRR) is equal to the frequency of recovery of native CDR lengths and clusters divided by the frequency of sampling of those features during the Monte Carlo design procedure. Ratios greater than 1.0 indicate that the design process is picking out the native more frequently than expected from their sampled rate. We achieved DRRs for the non-H3 CDRs of between 2.4 and 4.0. The antigen risk ratio (ARR) is the ratio of frequencies of the native amino acid types, CDR lengths, and clusters in the output decoys for simulations performed in the presence and absence of the antigen. For CDRs, we achieved cluster ARRs as high as 2.5 for L1 and 1.5 for H2. For sequence design simulations without CDR grafting, the overall recovery for the native

  12. A sampling device for counting insect egg clusters and measuring vertical distribution of vegetation

    Science.gov (United States)

    Robert L. Talerico; Robert W., Jr. Wilson

    1978-01-01

    The use of a vertical sampling pole that delineates known volumes and position is illustrated and demonstrated for counting egg clusters of N. sertifer. The pole can also be used to estimate vertical and horizontal coverage, distribution or damage of vegetation or foliage.

  13. On the Analysis of Case-Control Studies in Cluster-correlated Data Settings.

    Science.gov (United States)

    Haneuse, Sebastien; Rivera-Rodriguez, Claudia

    2018-01-01

    In resource-limited settings, long-term evaluation of national antiretroviral treatment (ART) programs often relies on aggregated data, the analysis of which may be subject to ecological bias. As researchers and policy makers consider evaluating individual-level outcomes such as treatment adherence or mortality, the well-known case-control design is appealing in that it provides efficiency gains over random sampling. In the context that motivates this article, valid estimation and inference requires acknowledging any clustering, although, to our knowledge, no statistical methods have been published for the analysis of case-control data for which the underlying population exhibits clustering. Furthermore, in the specific context of an ongoing collaboration in Malawi, rather than performing case-control sampling across all clinics, case-control sampling within clinics has been suggested as a more practical strategy. To our knowledge, although similar outcome-dependent sampling schemes have been described in the literature, a case-control design specific to correlated data settings is new. In this article, we describe this design, discuss balanced versus unbalanced sampling techniques, and provide a general approach to analyzing case-control studies in cluster-correlated settings based on inverse probability-weighted generalized estimating equations. Inference is based on a robust sandwich estimator with correlation parameters estimated to ensure appropriate accounting of the outcome-dependent sampling scheme. We conduct comprehensive simulations, based in part on real data on a sample of N = 78,155 program registrants in Malawi between 2005 and 2007, to evaluate small-sample operating characteristics and potential trade-offs associated with standard case-control sampling or when case-control sampling is performed within clusters.
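
The inverse-probability weights underlying the proposed weighted GEE analysis are the reciprocals of the within-clinic inclusion probabilities. A minimal sketch with hypothetical clinic counts (cases fully sampled, controls subsampled within each clinic):

```python
def ipw_weights(clinic_counts, sampled_counts):
    """Inverse-probability weights for case-control sampling within clusters.
    clinic_counts[c]  = (n_cases, n_controls) in clinic c's full registry;
    sampled_counts[c] = (cases_sampled, controls_sampled) in the study."""
    weights = {}
    for c, (n_cases, n_controls) in clinic_counts.items():
        s_cases, s_controls = sampled_counts[c]
        weights[c] = {
            "case": n_cases / s_cases,          # 1 / (s_cases / n_cases)
            "control": n_controls / s_controls,
        }
    return weights

# Hypothetical clinic: all 20 cases kept, 40 of 400 controls sampled.
w = ipw_weights({"clinic_A": (20, 400)}, {"clinic_A": (20, 40)})
```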

  14. Grouped fuzzy SVM with EM-based partition of sample space for clustered microcalcification detection.

    Science.gov (United States)

    Wang, Huiya; Feng, Jun; Wang, Hongyu

    2017-07-20

Detection of clustered microcalcifications (MCs) in mammograms plays an essential role in computer-aided diagnosis of early-stage breast cancer. To tackle problems associated with the diversity of data structures of MC lesions and the variability of normal breast tissues, multi-pattern sample space learning is required. In this paper, a novel grouped fuzzy Support Vector Machine (SVM) algorithm with sample space partition based on Expectation-Maximization (EM) (called G-FSVM) is proposed for clustered MC detection. The diversified pattern of training data is partitioned into several groups by the EM algorithm, and a series of fuzzy SVMs are then integrated for classification, each trained on one group of samples from the MC lesions and normal breast tissues. From the DDSM database, a total of 1,064 suspicious regions were selected from 239 mammograms; the resulting Accuracy, True Positive Rate (TPR), False Positive Rate (FPR) and EVL = TPR*(1-FPR) are 0.82, 0.78, 0.14 and 0.72, respectively. The proposed method incorporates the merits of fuzzy SVM and multi-pattern sample space learning, decomposing the MC detection problem into a series of simple two-class classifications. Experimental results on synthetic data and the DDSM database demonstrate that our integrated classification framework reduces the false positive rate significantly while maintaining the true positive rate.
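
The reported rates derive from a standard confusion matrix; a sketch with hypothetical counts, taking EVL to mean TPR*(1-FPR) as the abstract's formula suggests:

```python
def detection_metrics(tp, fp, tn, fn):
    """Standard confusion-matrix rates for a two-class detector."""
    tpr = tp / (tp + fn)                   # sensitivity
    fpr = fp / (fp + tn)                   # fall-out
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    evl = tpr * (1 - fpr)                  # combined index, as read from the abstract
    return {"accuracy": accuracy, "TPR": tpr, "FPR": fpr, "EVL": evl}

# Hypothetical counts for a microcalcification detector.
m = detection_metrics(tp=78, fp=14, tn=86, fn=22)
```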

  15. Evaluation of immunization coverage by lot quality assurance sampling compared with 30-cluster sampling in a primary health centre in India.

    OpenAIRE

    Singh, J.; Jain, D. C.; Sharma, R. S.; Verghese, T.

    1996-01-01

    The immunization coverage of infants, children and women residing in a primary health centre (PHC) area in Rajasthan was evaluated both by lot quality assurance sampling (LQAS) and by the 30-cluster sampling method recommended by WHO's Expanded Programme on Immunization (EPI). The LQAS survey was used to classify 27 mutually exclusive subunits of the population, defined as residents in health subcentre areas, on the basis of acceptable or unacceptable levels of immunization coverage among inf...

  16. The use of the barbell cluster ANOVA design for the assessment of Environmental Pollution (1987): a case study, Wigierski National Park, NE Poland

    Energy Technology Data Exchange (ETDEWEB)

    Migaszewski, Zdzislaw M. [Pedagogical University, Institute of Chemistry, Geochemistry and the Environment Div., ul. Checinska 5, 25-020 Kielce (Poland)]. E-mail: zmig@pu.kielce.pl; Galuszka, Agnieszka [Pedagogical University, Institute of Chemistry, Geochemistry and the Environment Div., ul. Checinska 5, 25-020 Kielce (Poland); Paslaski, Piotr [Central Chemical Laboratory of the Polish Geological Institute, ul. Rakowiecka 4, 00-975 Warsaw (Poland)

    2005-01-01

    This report presents an assessment of chemical variability in natural ecosystems of Wigierski National Park (NE Poland) derived from the calculation of geochemical baselines using a barbell cluster ANOVA design. This method enabled us to obtain statistically valid information with a minimum number of samples collected. Results of summary statistics are presented for elemental concentrations in the soil horizons-O (Ol + Ofh), -A and -B, 1- and 2-year old Pinus sylvestris L. (Scots pine) needles, pine bark and Hypogymnia physodes (L.) Nyl. (lichen) thalli, as well as pH and TOC. The scope of this study also encompassed S and C stable isotope determinations and SEM examinations on Scots pine needles. The variability for S and trace metals in soils and plant bioindicators is primarily governed by parent material lithology and to a lesser extent by anthropogenic factors. This fact enabled us to study concentrations that are close to regional background levels. - The barbell cluster ANOVA design allowed the number of samples collected to be reduced to a minimum.

  17. Dependence of the clustering properties of galaxies on stellar velocity dispersion in the Main galaxy sample of SDSS DR10

    Science.gov (United States)

    Deng, Xin-Fa; Song, Jun; Chen, Yi-Qing; Jiang, Peng; Ding, Ying-Ping

    2014-08-01

Using two volume-limited Main galaxy samples of the Sloan Digital Sky Survey Data Release 10 (SDSS DR10), we investigate the dependence of the clustering properties of galaxies on stellar velocity dispersion by cluster analysis. We find that in the luminous volume-limited Main galaxy sample, except at r=1.2, richer and larger systems form more easily in the large stellar velocity dispersion subsample, while in the faint volume-limited Main galaxy sample, at r≥0.9, the opposite trend is observed. From statistical analyses of the multiplicity functions, we conclude that in both volume-limited Main galaxy samples small stellar velocity dispersion galaxies preferentially form isolated galaxies, close pairs and small groups, while large stellar velocity dispersion galaxies preferentially inhabit dense groups and clusters. However, we note a difference between the two samples: in the faint volume-limited Main galaxy sample, at r≥0.9, the small stellar velocity dispersion subsample has a higher proportion of galaxies in superclusters (n≥200) than the large stellar velocity dispersion subsample.

  18. Experimental and Sampling Design for the INL-2 Sample Collection Operational Test

    Energy Technology Data Exchange (ETDEWEB)

    Piepel, Gregory F.; Amidan, Brett G.; Matzke, Brett D.

    2009-02-16

This report describes the experimental and sampling design developed to assess sampling approaches and methods for detecting contamination in a building and clearing the building for use after decontamination. An Idaho National Laboratory (INL) building will be contaminated with BG (Bacillus globigii, renamed Bacillus atrophaeus), a simulant for Bacillus anthracis (BA). The contamination, sampling, decontamination, and re-sampling will occur per the experimental and sampling design. This INL-2 Sample Collection Operational Test is being planned by the Validated Sampling Plan Working Group (VSPWG). The primary objectives are: 1) Evaluate judgmental and probabilistic sampling for characterization as well as probabilistic and combined (judgment and probabilistic) sampling approaches for clearance, 2) Conduct these evaluations for gradient contamination (from low or moderate down to absent or undetectable) for different initial concentrations of the contaminant, 3) Explore judgment composite sampling approaches to reduce sample numbers, 4) Collect baseline data to serve as an indication of the actual levels of contamination in the tests. A combined judgmental and random (CJR) approach uses Bayesian methodology to combine judgmental and probabilistic samples to make clearance statements of the form "X% confidence that at least Y% of an area does not contain detectable contamination" (X%/Y% clearance statements). The INL-2 experimental design has five test events, which 1) vary the floor of the INL building on which the contaminant will be released, 2) provide for varying the amount of contaminant released to obtain desired concentration gradients, and 3) investigate overt as well as covert release of contaminants. Desirable contaminant gradients would have moderate to low concentrations of contaminant in rooms near the release point, with concentrations down to zero in other rooms. Such gradients would provide a range of contamination levels to challenge the sampling
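
For purely probabilistic sampling with all-negative results, the X%/Y% clearance statement reduces to a one-line binomial calculation; the actual CJR method is Bayesian and more elaborate, so this sketch only illustrates the form of the statement:

```python
import math

def clearance_sample_size(confidence, coverage):
    """Smallest n such that: if less than `coverage` of the area were clean,
    the chance of n random samples all testing negative is < 1 - confidence."""
    return math.ceil(math.log(1 - confidence) / math.log(coverage))

def achieved_confidence(n, coverage):
    """Confidence in 'at least `coverage` clean' after n negative samples."""
    return 1 - coverage ** n

n = clearance_sample_size(0.95, 0.99)   # a 95%/99% clearance statement
```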

  19. Unequal cluster sizes in stepped-wedge cluster randomised trials: a systematic review.

    Science.gov (United States)

    Kristunas, Caroline; Morris, Tom; Gray, Laura

    2017-11-15

To investigate the extent to which cluster sizes vary in stepped-wedge cluster randomised trials (SW-CRTs) and whether any variability is accounted for during the sample size calculation and analysis of these trials. Eligible settings were not restricted to healthcare; eligible trials were any SW-CRTs published up to March 2016. The primary outcome is the variability in cluster sizes, measured by the coefficient of variation (CV) in cluster size. Secondary outcomes include the difference between the cluster sizes assumed during the sample size calculation and those observed during the trial, any reported variability in cluster sizes, and whether the methods of sample size calculation and analysis accounted for any variability in cluster sizes. Of the 101 included SW-CRTs, 48% mentioned that the included clusters were known to vary in size, yet only 13% of these accounted for this during the calculation of the sample size. However, 69% of the trials did use a method of analysis appropriate for when clusters vary in size. Full trial reports were available for 53 trials. The CV was calculated for 23 of these: the median CV was 0.41 (IQR: 0.22-0.52). Actual cluster sizes could be compared with those assumed during the sample size calculation for 14 (26%) of the trial reports; the cluster sizes were between 29% and 480% of those assumed. Cluster sizes often vary in SW-CRTs, and reporting of SW-CRTs remains suboptimal. The effect of unequal cluster sizes on the statistical power of SW-CRTs needs further exploration, and methods appropriate to studies with unequal cluster sizes need to be employed.
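
The coefficient of variation used as the primary outcome is simply the standard deviation of cluster sizes divided by their mean; a sketch with hypothetical cluster sizes:

```python
import statistics

def coefficient_of_variation(cluster_sizes):
    """CV = standard deviation of cluster sizes / mean cluster size."""
    return statistics.stdev(cluster_sizes) / statistics.fmean(cluster_sizes)

# Hypothetical SW-CRT with 6 clusters of unequal size.
cv = coefficient_of_variation([40, 55, 90, 120, 200, 310])
```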

  20. THE ATACAMA COSMOLOGY TELESCOPE: DYNAMICAL MASSES AND SCALING RELATIONS FOR A SAMPLE OF MASSIVE SUNYAEV-ZEL'DOVICH EFFECT SELECTED GALAXY CLUSTERS {sup ,}

    Energy Technology Data Exchange (ETDEWEB)

    Sifon, Cristobal; Barrientos, L. Felipe; Gonzalez, Jorge; Infante, Leopoldo; Duenner, Rolando [Departamento de Astronomia y Astrofisica, Facultad de Fisica, Pontificia Universidad Catolica de Chile, Casilla 306, Santiago 22 (Chile); Menanteau, Felipe; Hughes, John P.; Baker, Andrew J. [Department of Physics and Astronomy, Rutgers University, 136 Frelinghuysen Road, Piscataway, NJ 08854 (United States); Hasselfield, Matthew [Department of Physics and Astronomy, University of British Columbia, Vancouver, BC V6T 1Z4 (Canada); Marriage, Tobias A.; Crichton, Devin; Gralla, Megan B. [Department of Physics and Astronomy, The Johns Hopkins University, Baltimore, MD 21218-2686 (United States); Addison, Graeme E.; Dunkley, Joanna [Sub-department of Astrophysics, University of Oxford, Denys Wilkinson Building, Keble Road, Oxford OX1 3RH (United Kingdom); Battaglia, Nick; Bond, J. Richard; Hajian, Amir [Canadian Institute for Theoretical Astrophysics, University of Toronto, Toronto, ON M5S 3H8 (Canada); Das, Sudeep [Berkeley Center for Cosmological Physics, LBL and Department of Physics, University of California, Berkeley, CA 94720 (United States); Devlin, Mark J. [Department of Physics and Astronomy, University of Pennsylvania, 209 South 33rd Street, Philadelphia, PA 19104 (United States); Hilton, Matt [School of Physics and Astronomy, University of Nottingham, University Park, Nottingham, NG7 2RD (United Kingdom); and others

    2013-07-20

We present the first dynamical mass estimates and scaling relations for a sample of Sunyaev-Zel'dovich effect (SZE) selected galaxy clusters. The sample consists of 16 massive clusters detected with the Atacama Cosmology Telescope (ACT) over a 455 deg² area of the southern sky. Deep multi-object spectroscopic observations were taken to secure intermediate-resolution (R ≈ 700-800) spectra and redshifts for ≈60 member galaxies on average per cluster. The dynamical masses M_200c of the clusters have been calculated using simulation-based scaling relations between velocity dispersion and mass. The sample has a median redshift z = 0.50 and a median mass M_200c ≈ 12 × 10^14 h_70^-1 M_sun with a lower limit M_200c ≈ 6 × 10^14 h_70^-1 M_sun, consistent with the expectations for the ACT southern sky survey. These masses are compared to the ACT SZE properties of the sample, specifically, the match-filtered central SZE amplitude ỹ_0, the central Compton parameter y_0, and the integrated Compton signal Y_200c, which we use to derive SZE-mass scaling relations. All SZE estimators correlate with dynamical mass with low intrinsic scatter (≲20%), in agreement with numerical simulations. We explore the effects of various systematic effects on these scaling relations, including the correlation between observables and the influence of dynamically disturbed clusters. Using the three-dimensional information available, we divide the sample into relaxed and disturbed clusters and find that ≈50% of the clusters are disturbed. There are hints that disturbed systems might bias the scaling relations, but given the current sample sizes, these differences are not significant; further studies including more clusters are required to assess the impact of these clusters on the scaling relations.
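
The dynamical masses come from a simulation-calibrated relation between velocity dispersion and M_200c. The coefficients below follow the commonly used Evrard et al. (2008)-style calibration and are an assumption for illustration, not values taken from this paper:

```python
def dynamical_mass(sigma_km_s, hz=1.0, sigma_15=1083.0, alpha=0.336):
    """Invert an assumed simulation-calibrated sigma-M relation,
    sigma = sigma_15 * (h(z) * M200 / 1e15 Msun)**alpha,
    returning M200 in units of 1e14 Msun. Calibration values
    (sigma_15, alpha) are illustrative assumptions."""
    m200_1e15 = (sigma_km_s / sigma_15) ** (1.0 / alpha) / hz
    return 10.0 * m200_1e15   # convert 1e15 Msun -> 1e14 Msun

m = dynamical_mass(1100.0, hz=1.3)  # e.g. sigma ~ 1100 km/s at z ~ 0.5
```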

  1. Task shifting of frontline community health workers for cardiovascular risk reduction: design and rationale of a cluster randomised controlled trial (DISHA study) in India.

    Science.gov (United States)

    Jeemon, Panniyammakal; Narayanan, Gitanjali; Kondal, Dimple; Kahol, Kashvi; Bharadwaj, Ashok; Purty, Anil; Negi, Prakash; Ladhani, Sulaiman; Sanghvi, Jyoti; Singh, Kuldeep; Kapoor, Deksha; Sobti, Nidhi; Lall, Dorothy; Manimunda, Sathyaprakash; Dwivedi, Supriya; Toteja, Gurudyal; Prabhakaran, Dorairaj

    2016-03-15

    Effective task-shifting interventions targeted at reducing the global cardiovascular disease (CVD) epidemic in low and middle-income countries (LMICs) are urgently needed. DISHA is a cluster randomised controlled trial conducted across 10 sites (5 in phase 1 and 5 in phase 2) in India in 120 clusters. At each site, 12 clusters were randomly selected from a district. A cluster is defined as a small village with 250-300 households and well defined geographical boundaries. They were then randomly allocated to intervention and control clusters in a 1:1 allocation sequence. If any of the intervention and control clusters were workers (mainly Anganwadi workers and ASHA workers) and a post intervention survey in a representative sample. The study staff had no information on intervention allocation until the completion of the baseline survey. In order to ensure comparability of data across sites, the DISHA study follows a common protocol and manual of operation with standardized measurement techniques. Our study is the largest community based cluster randomised trial in low and middle-income country settings designed to test the effectiveness of 'task shifting' interventions involving frontline health workers for cardiovascular risk reduction. CTRI/2013/10/004049 . Registered 7 October 2013.

  2. UV TO FAR-IR CATALOG OF A GALAXY SAMPLE IN NEARBY CLUSTERS: SPECTRAL ENERGY DISTRIBUTIONS AND ENVIRONMENTAL TRENDS

    Energy Technology Data Exchange (ETDEWEB)

    Hernandez-Fernandez, Jonathan D.; Iglesias-Paramo, J.; Vilchez, J. M., E-mail: jonatan@iaa.es [Instituto de Astrofisica de Andalucia, Glorieta de la Astronomia s/n, 18008 Granada (Spain)

    2012-03-01

In this paper, we present a sample of cluster galaxies assembled to study the environmental influence on star formation activity. These galaxies inhabit clusters with a rich variety of characteristics and have been observed by the SDSS-DR6 down to M_B ≈ -18, and by the Galaxy Evolution Explorer AIS over sky regions corresponding to several megaparsecs. We assign the broadband and emission-line fluxes from ultraviolet to far-infrared to each galaxy, building an accurate spectral energy distribution for spectral fitting analysis. The clusters follow the general X-ray luminosity versus velocity dispersion trend L_X ∝ σ_c^4.4. The analysis of the distributions of galaxy density, counting up to the 5th nearest neighbor (Σ_5), shows: (1) the virial regions and the cluster outskirts share a common range in the high-density part of the distribution, which can be attributed to the presence of massive galaxy structures in the surroundings of virial regions; (2) the virial regions of massive clusters (σ_c > 550 km s⁻¹) present a Σ_5 distribution statistically distinguishable (≈96%) from the corresponding distribution of low-mass clusters (σ_c < 550 km s⁻¹). Both massive and low-mass clusters follow a similar density-radius trend, but the low-mass clusters avoid the high-density extreme. We illustrate, with ABELL 1185, the environmental trends of galaxy populations. Maps of sky-projected galaxy density show how low-luminosity star-forming galaxies are distributed along more spread-out structures than their giant counterparts, whereas low-luminosity passive galaxies avoid the low-density environment. Giant passive and star-forming galaxies share rather similar sky regions, with passive galaxies exhibiting more concentrated distributions.
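
The Σ_5 density estimator counting up to the 5th nearest neighbour is conventionally Σ_n = n / (π d_n²); a sketch under that assumption, with hypothetical projected distances:

```python
import math

def sigma_n(distances_mpc, n=5):
    """Projected galaxy density out to the nth nearest neighbour:
    Sigma_n = n / (pi * d_n**2), in galaxies per Mpc^2."""
    d_n = sorted(distances_mpc)[n - 1]  # distance to the nth neighbour
    return n / (math.pi * d_n ** 2)

# Hypothetical projected distances (Mpc) from one galaxy to its neighbours.
density = sigma_n([0.8, 0.3, 1.1, 0.5, 0.9, 1.4, 0.2])
```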

  3. Architectural design for a topological cluster state quantum computer

    International Nuclear Information System (INIS)

    Devitt, Simon J; Munro, William J; Nemoto, Kae; Fowler, Austin G; Stephens, Ashley M; Greentree, Andrew D; Hollenberg, Lloyd C L

    2009-01-01

    The development of a large scale quantum computer is a highly sought after goal of fundamental research and consequently a highly non-trivial problem. Scalability in quantum information processing is not just a problem of qubit manufacturing and control but it crucially depends on the ability to adapt advanced techniques in quantum information theory, such as error correction, to the experimental restrictions of assembling qubit arrays into the millions. In this paper, we introduce a feasible architectural design for large scale quantum computation in optical systems. We combine the recent developments in topological cluster state computation with the photonic module, a simple chip-based device that can be used as a fundamental building block for a large-scale computer. The integration of the topological cluster model with this comparatively simple operational element addresses many significant issues in scalable computing and leads to a promising modular architecture with complete integration of active error correction, exhibiting high fault-tolerant thresholds.

  4. 30 CFR 71.208 - Bimonthly sampling; designated work positions.

    Science.gov (United States)

    2010-07-01

    ... 30 Mineral Resources 1 2010-07-01 2010-07-01 false Bimonthly sampling; designated work positions... UNDERGROUND COAL MINES Sampling Procedures § 71.208 Bimonthly sampling; designated work positions. (a) Each... standard when quartz is present), respirable dust sampling of designated work positions shall begin on the...

  5. Cluster as a Service for Disaster Recovery in Intercloud Systems: Design and Modeling

    OpenAIRE

    Mohammad Ali Khoshkholghi

    2014-01-01

    Nowadays, all modern IT technologies aim to create dynamic and flexible environments. For this reason, InterCloud has been designed to provide a vast and flexible virtualized environment in which many clouds can interact with one another in a dynamic way. Disaster recovery is one of the main applications of InterCloud which can be supported by Cluster as a Service. However, the previous studies addressed disaster recovery and Cluster as a Service separately. In addition, system backup and dis...

  6. A field test of three LQAS designs to assess the prevalence of acute malnutrition.

    Science.gov (United States)

    Deitchler, Megan; Valadez, Joseph J; Egge, Kari; Fernandez, Soledad; Hennigan, Mary

    2007-08-01

The conventional method for assessing the prevalence of Global Acute Malnutrition (GAM) in emergency settings is the 30 x 30 cluster-survey. This study describes alternative approaches: three Lot Quality Assurance Sampling (LQAS) designs to assess GAM. The LQAS designs were field-tested and their results compared with those from a 30 x 30 cluster-survey. Computer simulations confirmed that small clusters instead of a simple random sample could be used for LQAS assessments of GAM. Three LQAS designs were developed (33 x 6, 67 x 3, Sequential design) to assess GAM thresholds of 10, 15 and 20%. The designs were field-tested simultaneously with a 30 x 30 cluster-survey in Siraro, Ethiopia during June 2003. Using a nested study design, anthropometric, morbidity and vaccination data were collected on all children 6-59 months in sampled households. Hypothesis tests about GAM thresholds were conducted for each LQAS design. Point estimates were obtained for the 30 x 30 cluster-survey and the 33 x 6 and 67 x 3 LQAS designs. Hypothesis tests classified GAM as < or = 10% for the 67 x 3 and Sequential designs. Point estimates for the 33 x 6 and 67 x 3 designs were similar to those of the 30 x 30 cluster-survey for GAM (6.7%, CI = 3.2-10.2%; 8.2%, CI = 4.3-12.1%; 7.4%, CI = 4.8-9.9%) and all other indicators. The CIs for the LQAS designs were only slightly wider than the CIs for the 30 x 30 cluster-survey; yet the LQAS designs required substantially less time to administer. The LQAS designs provide statistically appropriate alternatives to the more time-consuming 30 x 30 cluster-survey. However, additional field-testing is needed using independent samples rather than a nested study design.
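
An LQAS rule of the kind compared here classifies an area from the number of positives (e.g. malnourished children) among n sampled, with misclassification risks given by binomial tails. A sketch; the decision value d below is hypothetical, not one of the field-tested rules:

```python
from math import comb

def binom_cdf(k, n, p):
    """P(X <= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

def lqas_risks(n, d, p_low, p_high):
    """Rule: classify prevalence as high if more than d of n are positive.
    alpha: risk of classifying 'high' when true prevalence is p_low;
    beta:  risk of classifying 'low'  when true prevalence is p_high."""
    alpha = 1 - binom_cdf(d, n, p_low)
    beta = binom_cdf(d, n, p_high)
    return alpha, beta

# Hypothetical 10%-vs-20% GAM rule with n = 198 (e.g. 33 x 6) and d = 29.
alpha, beta = lqas_risks(198, 29, 0.10, 0.20)
```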

  7. HUBBLE SPACE TELESCOPE PROPER MOTION (HSTPROMO) CATALOGS OF GALACTIC GLOBULAR CLUSTERS. I. SAMPLE SELECTION, DATA REDUCTION, AND NGC 7078 RESULTS

    Energy Technology Data Exchange (ETDEWEB)

    Bellini, A.; Anderson, J.; Van der Marel, R. P.; Watkins, L. L. [Space Telescope Science Institute, 3700 San Martin Drive, Baltimore, MD 21218 (United States); King, I. R. [Department of Astronomy, University of Washington, Box 351580, Seattle, WA 98195 (United States); Bianchini, P. [Max Planck Institute for Astronomy, Königstuhl 17, D-69117 Heidelberg (Germany); Chanamé, J. [Instituto de Astrofísica, Pontificia Universidad Católica de Chile, Av. Vicuña Mackenna 4860, Macul 782-0436, Santiago (Chile); Chandar, R. [Department of Physics and Astronomy, The University of Toledo, 2801 West Bancroft Street, Toledo, OH 43606 (United States); Cool, A. M. [Department of Physics and Astronomy, San Francisco State University, 1600 Holloway Avenue, San Francisco, CA 94132 (United States); Ferraro, F. R.; Massari, D. [Dipartimento di Fisica e Astronomia, Università di Bologna, via Ranzani 1, I-40127 Bologna (Italy); Ford, H., E-mail: bellini@stsci.edu [Department of Physics and Astronomy, The Johns Hopkins University, 3400 North Charles Street, Baltimore, MD 21218 (United States)

    2014-12-20

    We present the first study of high-precision internal proper motions (PMs) in a large sample of globular clusters, based on Hubble Space Telescope (HST) data obtained over the past decade with the ACS/WFC, ACS/HRC, and WFC3/UVIS instruments. We determine PMs for over 1.3 million stars in the central regions of 22 clusters, with a median number of ∼60,000 stars per cluster. These PMs have the potential to significantly advance our understanding of the internal kinematics of globular clusters by extending past line-of-sight (LOS) velocity measurements to two- or three-dimensional velocities, lower stellar masses, and larger sample sizes. We describe the reduction pipeline that we developed to derive homogeneous PMs from the very heterogeneous archival data. We demonstrate the quality of the measurements through extensive Monte Carlo simulations. We also discuss the PM errors introduced by various systematic effects and the techniques that we have developed to correct or remove them to the extent possible. We provide in electronic form the catalog for NGC 7078 (M 15), which consists of 77,837 stars in the central 2.4 arcmin. We validate the catalog by comparison with existing PM measurements and LOS velocities and use it to study the dependence of the velocity dispersion on radius, stellar magnitude (or mass) along the main sequence, and direction in the plane of the sky (radial or tangential). Subsequent papers in this series will explore a range of applications in globular-cluster science and will also present the PM catalogs for the other sample clusters.

  8. THE DYNAMICAL STATE OF BRIGHTEST CLUSTER GALAXIES AND THE FORMATION OF CLUSTERS

    International Nuclear Information System (INIS)

    Coziol, R.; Andernach, H.; Caretta, C. A.; Alamo-MartInez, K. A.; Tago, E.

    2009-01-01

    A large sample of Abell clusters of galaxies, selected for the likely presence of a dominant galaxy, is used to study the dynamical properties of the brightest cluster members (BCMs). From visual inspection of Digitized Sky Survey images combined with redshift information we identify 1426 candidate BCMs located in 1221 different redshift components associated with 1169 different Abell clusters. This is the largest sample published so far of such galaxies. From our own morphological classification we find that ∼92% of the BCMs in our sample are early-type galaxies and 48% are of cD type. We confirm what was previously observed based on much smaller samples, namely, that a large fraction of BCMs have significant peculiar velocities. From a subsample of 452 clusters having at least 10 measured radial velocities, we estimate a median BCM peculiar velocity of 32% of their host clusters' radial velocity dispersion. This suggests that most BCMs are not at rest in the potential well of their clusters. This phenomenon is common to galaxy clusters in our sample, and not a special trait of clusters hosting cD galaxies. We show that the peculiar velocity of the BCM is independent of cluster richness and only slightly dependent on the Bautz-Morgan type. We also find a weak trend for the peculiar velocity to rise with the cluster velocity dispersion. The strongest dependence is with the morphological type of the BCM: cD galaxies tend to have lower relative peculiar velocities than elliptical galaxies. This result points to a connection between the formation of the BCMs and that of their clusters. Our data are qualitatively consistent with the merging-groups scenario, where BCMs in clusters formed first in smaller subsystems comparable to compact groups of galaxies. In this scenario, clusters would have formed recently from the mergers of many such groups and would still be in a dynamically unrelaxed state.

  9. Further observations on comparison of immunization coverage by lot quality assurance sampling and 30 cluster sampling.

    Science.gov (United States)

    Singh, J; Jain, D C; Sharma, R S; Verghese, T

    1996-06-01

    Lot Quality Assurance Sampling (LQAS) and standard EPI methodology (30 cluster sampling) were used to evaluate immunization coverage in a Primary Health Center (PHC) where coverage levels were reported to be more than 85%. Of 27 sub-centers (lots) evaluated by LQAS, only 2 were accepted for child coverage, whereas none was accepted for tetanus toxoid (TT) coverage in mothers. LQAS data were combined to obtain an estimate of coverage in the entire population; 41% (95% CI 36-46) of infants were immunized appropriately for their ages, while 42% (95% CI 37-47) of their mothers had received a second/booster dose of TT. TT coverage in 149 contemporary mothers sampled in the EPI survey was also 42% (95% CI 31-52). Although results from the two sampling methods were consistent with each other, a big gap was evident between reported coverage (in children as well as mothers) and survey results. LQAS was found to be operationally feasible, but it cost 40% more and required 2.5 times more time than the EPI survey. LQAS, therefore, is not a good substitute for current EPI methodology to evaluate immunization coverage in a large administrative area. However, LQAS has potential as a method for monitoring health programs on a routine basis in small population sub-units, especially in areas with high and heterogeneously distributed immunization coverage.

  10. Accurate recapture identification for genetic mark–recapture studies with error-tolerant likelihood-based match calling and sample clustering

    Science.gov (United States)

    Sethi, Suresh; Linden, Daniel; Wenburg, John; Lewis, Cara; Lemons, Patrick R.; Fuller, Angela K.; Hare, Matthew P.

    2016-01-01

    Error-tolerant likelihood-based match calling presents a promising technique to accurately identify recapture events in genetic mark–recapture studies by combining probabilities of latent genotypes and probabilities of observed genotypes, which may contain genotyping errors. Combined with clustering algorithms to group samples into sets of recaptures based upon pairwise match calls, these tools can be used to reconstruct accurate capture histories for mark–recapture modelling. Here, we assess the performance of a recently introduced error-tolerant likelihood-based match-calling model and sample clustering algorithm for genetic mark–recapture studies. We assessed both biallelic (i.e. single nucleotide polymorphisms; SNP) and multiallelic (i.e. microsatellite; MSAT) markers using a combination of simulation analyses and case study data on Pacific walrus (Odobenus rosmarus divergens) and fishers (Pekania pennanti). A novel two-stage clustering approach is demonstrated for genetic mark–recapture applications. First, repeat captures within a sampling occasion are identified. Subsequently, recaptures across sampling occasions are identified. The likelihood-based matching protocol performed well in simulation trials, demonstrating utility for use in a wide range of genetic mark–recapture studies. Moderately sized SNP (64+) and MSAT (10–15) panels produced accurate match calls for recaptures and accurate non-match calls for samples from closely related individuals in the face of low to moderate genotyping error. Furthermore, matching performance remained stable or increased as the number of genetic markers increased, genotyping error notwithstanding.

  11. Evaluation of immunization coverage by lot quality assurance sampling compared with 30-cluster sampling in a primary health centre in India.

    Science.gov (United States)

    Singh, J; Jain, D C; Sharma, R S; Verghese, T

    1996-01-01

    The immunization coverage of infants, children and women residing in a primary health centre (PHC) area in Rajasthan was evaluated both by lot quality assurance sampling (LQAS) and by the 30-cluster sampling method recommended by WHO's Expanded Programme on Immunization (EPI). The LQAS survey was used to classify 27 mutually exclusive subunits of the population, defined as residents in health subcentre areas, on the basis of acceptable or unacceptable levels of immunization coverage among infants and their mothers. The LQAS results from the 27 subcentres were also combined to obtain an overall estimate of coverage for the entire population of the primary health centre, and these results were compared with the EPI cluster survey results. The LQAS survey did not identify any subcentre with a level of immunization among infants high enough to be classified as acceptable; only three subcentres were classified as having acceptable levels of tetanus toxoid (TT) coverage among women. The estimated overall coverage in the PHC population from the combined LQAS results showed that a quarter of the infants were immunized appropriately for their ages and that 46% of their mothers had been adequately immunized with TT. Although the age groups and the periods of time during which the children were immunized differed for the LQAS and EPI survey populations, the characteristics of the mothers were largely similar. About 57% (95% CI, 46-67) of them were found to be fully immunized with TT by 30-cluster sampling, compared with 46% (95% CI, 41-51) by stratified random sampling. The difference was not statistically significant. The field work to collect LQAS data took about three times longer, and cost 60% more than the EPI survey. The apparently homogeneous and low level of immunization coverage in the 27 subcentres makes this an impractical situation in which to apply LQAS, and the results obtained were therefore not particularly useful. However, if LQAS had been applied by local
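
    Combining lot results into an overall coverage estimate, as both LQAS abstracts describe, amounts to computing a stratified proportion. A minimal sketch under the simplifying assumption of equally weighted lots (a real survey would weight subcentres by population); the counts below are hypothetical:

```python
from math import sqrt

def combine_lots(lots, z=1.96):
    """Combine per-lot (covered, sampled) counts into an overall coverage
    estimate with a normal-approximation CI, treating the L lots as
    equally weighted strata (an assumption; weight by population in practice)."""
    L = len(lots)
    props = [c / n for c, n in lots]
    p_hat = sum(props) / L
    # Stratified variance: sum of per-lot binomial variances, scaled by 1/L^2
    var = sum(p * (1 - p) / n for p, (c, n) in zip(props, lots)) / L**2
    half = z * sqrt(var)
    return p_hat, max(0.0, p_hat - half), min(1.0, p_hat + half)

# Hypothetical example: 27 lots of 19 children each
lots = [(8, 19)] * 14 + [(9, 19)] * 13
estimate, lo, hi = combine_lots(lots)
```

    Because each lot is small, individual lot classifications are coarse, but the combined estimator pools all observations, which is why the abstracts can report an overall coverage percentage with a usable confidence interval.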

  12. THE CLUSTERING OF ALFALFA GALAXIES: DEPENDENCE ON H I MASS, RELATIONSHIP WITH OPTICAL SAMPLES, AND CLUES OF HOST HALO PROPERTIES

    Energy Technology Data Exchange (ETDEWEB)

    Papastergis, Emmanouil; Giovanelli, Riccardo; Haynes, Martha P.; Jones, Michael G. [Center for Radiophysics and Space Research, Space Sciences Building, Cornell University, Ithaca, NY 14853 (United States); Rodríguez-Puebla, Aldo, E-mail: papastergis@astro.cornell.edu, E-mail: riccardo@astro.cornell.edu, E-mail: haynes@astro.cornell.edu, E-mail: jonesmg@astro.cornell.edu, E-mail: apuebla@astro.unam.mx [Instituto de Astronomía, Universidad Nacional Autónoma de México, A. P. 70-264, 04510 México, D.F. (Mexico)

    2013-10-10

    We use a sample of ≈6000 galaxies detected by the Arecibo Legacy Fast ALFA (ALFALFA) 21 cm survey to measure the clustering properties of H I-selected galaxies. We find no convincing evidence for a dependence of clustering on galactic atomic hydrogen (H I) mass, over the range M_HI ≈ 10^8.5-10^10.5 M_☉. We show that previously reported results of weaker clustering for low H I mass galaxies are probably due to finite-volume effects. In addition, we compare the clustering of ALFALFA galaxies with optically selected samples drawn from the Sloan Digital Sky Survey (SDSS). We find that H I-selected galaxies cluster more weakly than even relatively optically faint galaxies, when no color selection is applied. Conversely, when SDSS galaxies are split based on their color, we find that the correlation function of blue optical galaxies is practically indistinguishable from that of H I-selected galaxies. At the same time, SDSS galaxies with red colors are found to cluster significantly more than H I-selected galaxies, a fact that is evident in both the projected as well as the full two-dimensional correlation function. A cross-correlation analysis further reveals that gas-rich galaxies 'avoid' being located within ≈3 Mpc of optical galaxies with red colors. Next, we consider the clustering properties of halo samples selected from the Bolshoi ΛCDM simulation. A comparison with the clustering of ALFALFA galaxies suggests that galactic H I mass is not tightly related to host halo mass and that a sizable fraction of subhalos do not host H I galaxies. Lastly, we find that we can recover fairly well the correlation function of H I galaxies by just excluding halos with low spin parameter. This finding lends support to the hypothesis that halo spin plays a key role in determining the gas content of galaxies.
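
    Correlation functions like those measured here are typically built from data-data, data-random, and random-random pair counts. As a generic illustration only (not the ALFALFA analysis pipeline), here is a brute-force Landy-Szalay estimator on a 2D toy catalogue; the function names and the unit-square geometry are assumptions:

```python
import random
from itertools import combinations, product
from math import dist

def count_pairs(pts_a, pts_b, r_lo, r_hi, auto=False):
    """Brute-force count of pairs with separation in [r_lo, r_hi)."""
    pairs = combinations(pts_a, 2) if auto else product(pts_a, pts_b)
    return sum(1 for p, q in pairs if r_lo <= dist(p, q) < r_hi)

def landy_szalay(data, rand, r_lo, r_hi):
    """xi = (DD - 2*DR + RR) / RR, with each count normalised
    by its total number of pairs (the Landy-Szalay estimator)."""
    nd, nr = len(data), len(rand)
    dd = count_pairs(data, data, r_lo, r_hi, auto=True) / (nd * (nd - 1) / 2)
    dr = count_pairs(data, rand, r_lo, r_hi) / (nd * nr)
    rr = count_pairs(rand, rand, r_lo, r_hi, auto=True) / (nr * (nr - 1) / 2)
    return (dd - 2 * dr + rr) / rr

# For an unclustered 'data' set, xi should scatter around zero.
random.seed(1)
box = lambda n: [(random.random(), random.random()) for _ in range(n)]
xi = landy_szalay(box(300), box(300), 0.1, 0.2)
```

    A clustered data set would yield xi > 0 in bins near the clustering scale; production analyses replace the brute-force O(n²) pair counting with tree-based counters and account for survey geometry via the random catalogue.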

  13. Clinical evaluation of nonsyndromic dental anomalies in Dravidian population: A cluster sample analysis

    OpenAIRE

    Yamunadevi, Andamuthu; Selvamani, M.; Vinitha, V.; Srivandhana, R.; Balakrithiga, M.; Prabhu, S.; Ganapathy, N.

    2015-01-01

    Aim: To record the prevalence rate of dental anomalies in Dravidian population and analyze the percentage of individual anomalies in the population. Methodology: A cluster sample analysis was done, where 244 subjects studying in a dental institution were all included and analyzed for occurrence of dental anomalies by clinical examination, excluding third molars from analysis. Results: 31.55% of the study subjects had dental anomalies and shape anomalies were more prevalent (22.1%), followed b...

  14. Chandra Cluster Cosmology Project. II. Samples and X-Ray Data Reduction

    DEFF Research Database (Denmark)

    Vikhlinin, A.; Burenin, R. A.; Ebeling, H.

    2009-01-01

    We discuss the measurements of the galaxy cluster mass functions at z ≈ 0.05 and z ≈ 0.5 using high-quality Chandra observations of samples derived from the ROSAT PSPC All-Sky and 400 deg² surveys. We provide a full reference for the data analysis procedures, present updated calibration of relati...... at a fixed mass threshold, e.g., by a factor of 5.0 ± 1.2 at M_500 = 2.5 × 10^14 h^-1 M_☉ between z = 0 and 0.5. This evolution reflects the growth of density perturbations, and can be used for the cosmological constraints complementing those from the distance-redshift relation....

  15. Weak-lensing mass calibration of the Atacama Cosmology Telescope equatorial Sunyaev-Zeldovich cluster sample with the Canada-France-Hawaii telescope stripe 82 survey

    Energy Technology Data Exchange (ETDEWEB)

    Battaglia, N.; Miyatake, H.; Hasselfield, M.; Calabrese, E.; Ferrara, S.; Hložek, R. [Dept. of Astrophysical Sciences, Princeton University, Princeton, NJ 08544 (United States); Leauthaud, A. [Kavli IPMU (WPI), UTIAS, The University of Tokyo, Kashiwa, Chiba 277-8583 (Japan); Gralla, M.B.; Crichton, D. [Dept. of Physics and Astronomy, Johns Hopkins University, Baltimore, MD 21218 (United States); Allison, R.; Dunkley, J. [Dept. of Astrophysics, University of Oxford, Oxford OX1 3RH (United Kingdom); Bond, J.R. [Canadian Institute for Theoretical Astrophysics, Toronto, ON M5S 3H8 (Canada); Devlin, M.J. [Dept. of Physics and Astronomy, University of Pennsylvania, Philadelphia, PA 19104 (United States); Dünner, R. [Dept. de Astronomía y Astrofísica, Facultad de Física, Pontificia Universidad Católica de Chile, Santiago (Chile); Erben, T. [Argelander-Institut für Astronomie, University of Bonn, 53121 Bonn (Germany); Halpern, M.; Hincks, A.D. [Dept. of Physics and Astronomy, University of British Columbia, Vancouver, BC, V6T 1Z4 (Canada); Hilton, M. [Astrophysics and Cosmology Research Unit, School of Mathematical, Statistics and Computer Science, University of KwaZulu-Natal, Durban, 4041 (South Africa); Hill, J.C. [Dept. of Astronomy, Columbia University, New York, NY 10027 (United States); Huffenberger, K.M., E-mail: nbatta@astro.princeton.edu [Dept. of Physics, Florida State University, Tallahassee, FL 32306 (United States); and others

    2016-08-01

    Mass calibration uncertainty is the largest systematic effect for using clusters of galaxies to constrain cosmological parameters. We present weak lensing mass measurements from the Canada-France-Hawaii Telescope Stripe 82 Survey for galaxy clusters selected through their high signal-to-noise thermal Sunyaev-Zeldovich (tSZ) signal measured with the Atacama Cosmology Telescope (ACT). For a sample of 9 ACT clusters with a tSZ signal-to-noise greater than five, the average weak lensing mass is (4.8 ± 0.8) × 10^14 M_⊙, consistent with the tSZ mass estimate of (4.70 ± 1.0) × 10^14 M_⊙, which assumes a universal pressure profile for the cluster gas. Our results are consistent with previous weak-lensing measurements of tSZ-detected clusters from the Planck satellite. When comparing our results, we estimate the Eddington bias correction for the sample intersection of Planck and weak-lensing clusters which was previously excluded.

  16. Weak-lensing mass calibration of the Atacama Cosmology Telescope equatorial Sunyaev-Zeldovich cluster sample with the Canada-France-Hawaii telescope stripe 82 survey

    International Nuclear Information System (INIS)

    Battaglia, N.; Miyatake, H.; Hasselfield, M.; Calabrese, E.; Ferrara, S.; Hložek, R.; Leauthaud, A.; Gralla, M.B.; Crichton, D.; Allison, R.; Dunkley, J.; Bond, J.R.; Devlin, M.J.; Dünner, R.; Erben, T.; Halpern, M.; Hincks, A.D.; Hilton, M.; Hill, J.C.; Huffenberger, K.M.

    2016-01-01

    Mass calibration uncertainty is the largest systematic effect for using clusters of galaxies to constrain cosmological parameters. We present weak lensing mass measurements from the Canada-France-Hawaii Telescope Stripe 82 Survey for galaxy clusters selected through their high signal-to-noise thermal Sunyaev-Zeldovich (tSZ) signal measured with the Atacama Cosmology Telescope (ACT). For a sample of 9 ACT clusters with a tSZ signal-to-noise greater than five, the average weak lensing mass is (4.8 ± 0.8) × 10^14 M_⊙, consistent with the tSZ mass estimate of (4.70 ± 1.0) × 10^14 M_⊙, which assumes a universal pressure profile for the cluster gas. Our results are consistent with previous weak-lensing measurements of tSZ-detected clusters from the Planck satellite. When comparing our results, we estimate the Eddington bias correction for the sample intersection of Planck and weak-lensing clusters which was previously excluded.

  17. THE SWIFT AGN AND CLUSTER SURVEY. II. CLUSTER CONFIRMATION WITH SDSS DATA

    International Nuclear Information System (INIS)

    Griffin, Rhiannon D.; Dai, Xinyu; Kochanek, Christopher S.; Bregman, Joel N.

    2016-01-01

    We study 203 (of 442) Swift AGN and Cluster Survey extended X-ray sources located in the SDSS DR8 footprint to search for galaxy over-densities in three-dimensional space using SDSS galaxy photometric redshifts and positions near the Swift cluster candidates. We find 104 Swift clusters with a >3σ galaxy over-density. The remaining targets are potentially located at higher redshifts and require deeper optical follow-up observations for confirmation as galaxy clusters. We present a series of cluster properties including the redshift, brightest cluster galaxy (BCG) magnitude, BCG-to-X-ray center offset, optical richness, and X-ray luminosity. We also detect red sequences in ∼85% of the 104 confirmed clusters. The X-ray luminosity and optical richness for the SDSS-confirmed Swift clusters are correlated and follow previously established relations. The distribution of the separations between the X-ray centroids and the most likely BCG is also consistent with expectation. We compare the observed redshift distribution of the sample with a theoretical model, and find that our sample is complete for z ≲ 0.3 and is still 80% complete up to z ≃ 0.4, consistent with the SDSS survey depth. These results suggest that our Swift cluster selection algorithm has yielded a statistically well-defined cluster sample for further study of cluster evolution and cosmology. We also match our SDSS-confirmed Swift clusters to existing cluster catalogs, and find 42, 23, and 1 matches in optical, X-ray, and Sunyaev–Zel'dovich catalogs, respectively, so the majority of these clusters are new detections.

  18. Clustering in surgical trials - database of intracluster correlations

    Directory of Open Access Journals (Sweden)

    Cook Jonathan A

    2012-01-01

    Background: Randomised trials evaluating surgical interventions are often designed and analysed as if the outcome of an individual patient were independent of the surgeon providing the intervention. There is reason to expect that outcomes for patients treated by the same surgeon tend to be more similar than those under the care of another surgeon, owing to previous experience, individual practice, training, and infrastructure. Such a phenomenon is referred to as the clustering effect; it potentially impacts the design and analysis adopted, and thereby the required sample size. The aim of this work was to inform trial design by quantifying clustering effects (at both centre and surgeon level) for various outcomes using a database of surgical trials. Methods: Intracluster correlation coefficients (ICCs) were calculated for outcomes from a set of 10 multicentre surgical trials, for a range of outcomes and different time points, with clustering at both the centre and surgeon level. Results: ICCs were calculated for 198 outcomes across the 10 trials at both centre and surgeon cluster levels. The number of cases varied from 138 to 1370 across the trials. The median (range) average cluster size was 32 (9 to 51) at the centre level and 6 (3 to 30) at the surgeon level. ICC estimates varied substantially between outcome types, though uncertainty around individual ICC estimates was substantial, reflected in generally wide confidence intervals. Conclusions: This database of surgical trials provides trialists with valuable information on how to design surgical trials. Our data suggest that clustering of outcomes is more of an issue than has previously been acknowledged. We anticipate that, over time, the addition of ICCs from further surgical trial datasets to our database will further inform the design of surgical trials.
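
    ICCs of the kind reported above are commonly computed with the one-way ANOVA estimator. A minimal sketch (one of several possible estimators, shown here for a continuous outcome with clusters, e.g. centres or surgeons, of unequal size):

```python
def anova_icc(clusters):
    """One-way ANOVA estimator of the intracluster correlation coefficient
    for a continuous outcome, with clusters of possibly unequal size."""
    k = len(clusters)
    sizes = [len(c) for c in clusters]
    N = sum(sizes)
    grand_mean = sum(sum(c) for c in clusters) / N
    means = [sum(c) / len(c) for c in clusters]
    # Between- and within-cluster mean squares
    msb = sum(n * (m - grand_mean) ** 2 for n, m in zip(sizes, means)) / (k - 1)
    msw = sum(sum((x - m) ** 2 for x in c) for c, m in zip(clusters, means)) / (N - k)
    # Average cluster size, adjusted for imbalance
    n0 = (N - sum(n * n for n in sizes) / N) / (k - 1)
    return (msb - msw) / (msb + (n0 - 1) * msw)

# Strong clustering: outcomes vary far more between clusters than within
icc = anova_icc([[10.0, 10.1, 9.9], [0.0, 0.1, -0.1], [5.0, 5.1, 4.9]])
```

    Note that this estimator can return small negative values when between-cluster variation is no larger than chance, which is one reason individual ICC estimates from small trials carry wide confidence intervals.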

  19. Interplay between experiments and calculations for organometallic clusters and caged clusters

    International Nuclear Information System (INIS)

    Nakajima, Atsushi

    2015-01-01

    Clusters consisting of 10-1000 atoms exhibit size-dependent electronic and geometric properties. In particular, composite clusters consisting of several elements and/or components provide a promising bottom-up approach for designing functional advanced materials, because the functionality of the composite clusters can be optimized not only by the cluster size but also by their composition. In the formation of composite clusters, their geometric symmetry and dimensionality are emphasized to control the physical and chemical properties, because selective and anisotropic enhancements of optical, chemical, and magnetic properties can be expected. Organometallic clusters and caged clusters are demonstrated as representative examples of designing the functionality of composite clusters. Organometallic vanadium-benzene clusters form a one-dimensional sandwich structure showing ferromagnetic behavior and anomalously large HOMO-LUMO gap differences between the two spin orbitals, which can be regarded as spin-filter components for cluster-based spintronic devices. Caged clusters of aluminum (Al) are well stabilized both geometrically and electronically at Al₁₂X, behaving as a "superatom".

  20. OBSERVED SCALING RELATIONS FOR STRONG LENSING CLUSTERS: CONSEQUENCES FOR COSMOLOGY AND CLUSTER ASSEMBLY

    International Nuclear Information System (INIS)

    Comerford, Julia M.; Moustakas, Leonidas A.; Natarajan, Priyamvada

    2010-01-01

    Scaling relations of observed galaxy cluster properties are useful tools for constraining cosmological parameters as well as cluster formation histories. One of the key cosmological parameters, σ₈, is constrained using observed clusters of galaxies, although current estimates of σ₈ from the scaling relations of dynamically relaxed galaxy clusters are limited by the large scatter in the observed cluster mass-temperature (M-T) relation. With a sample of eight strong lensing clusters at 0.3 < z < 0.8, the M-T relation alone yields only a limited constraint on σ₈, but combining the cluster concentration-mass relation with the M-T relation enables the inclusion of unrelaxed clusters as well. Thus, the resultant gains in the accuracy of σ₈ measurements from clusters are twofold: the errors on σ₈ are reduced and the cluster sample size is increased. Therefore, the statistics on σ₈ determination from clusters are greatly improved by the inclusion of unrelaxed clusters. Exploring cluster scaling relations further, we find that the correlation between brightest cluster galaxy (BCG) luminosity and cluster mass offers insight into the assembly histories of clusters. We find preliminary evidence for a steeper BCG luminosity-cluster mass relation for strong lensing clusters than for the general cluster population, hinting that strong lensing clusters may have had more active merging histories.

  1. Person mobility in the design and analysis of cluster-randomized cohort prevention trials.

    Science.gov (United States)

    Vuchinich, Sam; Flay, Brian R; Aber, Lawrence; Bickman, Leonard

    2012-06-01

    Person mobility is an inescapable fact of life for most cluster-randomized (e.g., schools, hospitals, clinics, cities, states) cohort prevention trials. Mobility rates are an important substantive consideration in estimating the effects of an intervention. In cluster-randomized trials, mobility rates are often correlated with ethnicity, poverty and other variables associated with disparity. This raises the possibility that estimated intervention effects may generalize to only the least mobile segments of a population and, thus, create a threat to external validity. Such mobility can also create threats to the internal validity of conclusions from randomized trials. Researchers must decide how to deal with persons who leave study clusters during a trial (dropouts), persons and clusters that do not comply with an assigned intervention, and persons who enter clusters during a trial (late entrants), in addition to the persons who remain for the duration of a trial (stayers). Statistical techniques alone cannot solve the key issues of internal and external validity raised by the phenomenon of person mobility. This commentary presents a systematic, Campbellian-type analysis of person mobility in cluster-randomized cohort prevention trials. It describes four approaches for dealing with dropouts, late entrants and stayers with respect to data collection, analysis and generalizability. The questions at issue are: 1) From whom should data be collected at each wave of data collection? 2) Which cases should be included in the analyses of an intervention effect? and 3) To what populations can trial results be generalized? The conclusions lead to recommendations for the design and analysis of future cluster-randomized cohort prevention trials.

  2. Modern survey sampling

    CERN Document Server

    Chaudhuri, Arijit

    2014-01-01

    Contents: Exposure to Sampling (Introduction; Concepts of Population, Sample, and Sampling). Initial Ramifications (Introduction; Sampling Design, Sampling Scheme; Random Numbers and Their Uses in Simple Random Sampling (SRS); Drawing Simple Random Samples with and without Replacement; Estimation of Mean, Total, Ratio of Totals/Means: Variance and Variance Estimation; Determination of Sample Sizes; Appendix to Chapter 2: More on Equal Probability Sampling, Horvitz-Thompson Estimator, Sufficiency, Likelihood, Non-Existence Theorem). More Intricacies (Introduction; Unequal Probability Sampling Strategies; PPS Sampling). Exploring Improved Ways (Introduction; Stratified Sampling; Cluster Sampling; Multi-Stage Sampling; Multi-Phase Sampling: Ratio and Regression Estimation; Controlled Sampling). Modeling (Introduction; Super-Population Modeling; Prediction Approach; Model-Assisted Approach; Bayesian Methods; Spatial Smoothing; Sampling on Successive Occasions: Panel Rotation; Non-Response and Not-at-Homes; Weighting Adj...)

  3. Adaptive cluster sampling: An efficient method for assessing inconspicuous species

    Science.gov (United States)

    Andrea M. Silletti; Joan Walker

    2003-01-01

    Restorationists typically evaluate the success of a project by estimating the population sizes of species that have been planted or seeded. Because a total census is rarely feasible, they must rely on sampling methods for population estimates. However, traditional random sampling designs may be inefficient for species that, for one reason or another, are challenging to...

  4. Cluster Analysis of the Yale Global Tic Severity Scale (YGTSS): Symptom Dimensions and Clinical Correlates in an Outpatient Youth Sample

    Science.gov (United States)

    Kircanski, Katharina; Woods, Douglas W.; Chang, Susanna W.; Ricketts, Emily J.; Piacentini, John C.

    2010-01-01

    Tic disorders are heterogeneous, with symptoms varying widely both within and across patients. Exploration of symptom clusters may aid in the identification of symptom dimensions of empirical and treatment import. This article presents the results of two studies investigating tic symptom clusters using a sample of 99 youth (M age = 10.7, 81% male,…

  5. On efficiency of some ratio estimators in double sampling design ...

    African Journals Online (AJOL)

    In this paper, three ratio estimators in double sampling design are proposed, with the intention of finding an alternative double sampling design estimator to the conventional ratio estimator in double sampling design discussed by Cochran (1997), Okafor (2002), Raj (1972) and Raj and Chandhok (1999).

  6. Physical design of time-of-flight mass spectrometer in energetic cluster impact deposition apparatus

    International Nuclear Information System (INIS)

    Yu Guoqing; Shi Ying; Chen Jingsheng; Zhu Dezhang; Pan Haochang; Xu Hongjie

    1999-01-01

    The principle and physical design of the time-of-flight mass spectrometer in the energetic cluster impact deposition apparatus are introduced. Some problems encountered in experiments, and their solutions, are also discussed.

  7. TreeCluster: Massively scalable transmission clustering using phylogenetic trees

    OpenAIRE

    Moshiri, Alexander

    2018-01-01

    Background: The ability to infer transmission clusters from molecular data is critical to designing and evaluating viral control strategies. Viral sequencing datasets are growing rapidly, but standard methods of transmission cluster inference do not scale well beyond thousands of sequences. Results: I present TreeCluster, a cross-platform tool that performs transmission cluster inference on a given phylogenetic tree orders of magnitude faster than existing inference methods and supports multi...

  8. Scientific Cluster Deployment and Recovery - Using puppet to simplify cluster management

    Science.gov (United States)

    Hendrix, Val; Benjamin, Doug; Yao, Yushu

    2012-12-01

    Deployment, maintenance and recovery of a scientific cluster, which has complex, specialized services, can be a time-consuming task requiring the assistance of Linux system administrators, network engineers and domain experts. Universities and small institutions that have a part-time FTE with limited time for, and knowledge of, the administration of such clusters can be strained by such maintenance tasks. This work is the result of an effort to maintain a data analysis cluster (DAC) with minimal effort by a local system administrator. The realized benefit is that the scientist, who is the local system administrator, is able to focus on the data analysis instead of the intricacies of managing a cluster. Our work provides a cluster deployment and recovery process (CDRP) based on the puppet configuration engine, allowing a part-time FTE to easily deploy and recover entire clusters with minimal effort. Puppet is a configuration management system (CMS) used widely in computing centers for the automatic management of resources. Domain experts use Puppet's declarative language to define reusable modules for service configuration and deployment. Our CDRP has three actors: domain experts, a cluster designer and a cluster manager. The domain experts first write the puppet modules for the cluster services. A cluster designer then defines a cluster. This includes the creation of cluster roles, mapping the services to those roles and determining the relationships between the services. Finally, a cluster manager acquires the resources (machines, networking), enters the cluster input parameters (hostnames, IP addresses) and automatically generates deployment scripts used by puppet to configure each machine to act as its designated role. In the event of a machine failure, the originally generated deployment scripts along with puppet can be used to easily reconfigure a new machine.
The cluster definition produced in our CDRP is an integral part of automating cluster deployment

  9. Weak-Lensing Mass Calibration of the Atacama Cosmology Telescope Equatorial Sunyaev-Zeldovich Cluster Sample with the Canada-France-Hawaii Telescope Stripe 82 Survey

    Science.gov (United States)

    Battaglia, N.; Leauthaud, A.; Miyatake, H.; Hasselfield, M.; Gralla, M. B.; Allison, R.; Bond, J. R.; Calabrese, E.; Crichton, D.; Devlin, M. J.; et al.

    2016-01-01

    Mass calibration uncertainty is the largest systematic effect in using clusters of galaxies to constrain cosmological parameters. We present weak lensing mass measurements from the Canada-France-Hawaii Telescope Stripe 82 Survey for galaxy clusters selected through their high signal-to-noise thermal Sunyaev-Zeldovich (tSZ) signal measured with the Atacama Cosmology Telescope (ACT). For a sample of 9 ACT clusters with a tSZ signal-to-noise ratio greater than five, the average weak lensing mass is (4.8 ± 0.8) × 10^14 solar masses, consistent with the tSZ mass estimate of (4.7 ± 1.0) × 10^14 solar masses, which assumes a universal pressure profile for the cluster gas. Our results are consistent with previous weak-lensing measurements of tSZ-detected clusters from the Planck satellite. When comparing our results, we estimate the Eddington bias correction for the sample intersection of Planck and weak-lensing clusters, which was previously excluded.

  10. Design compliance matrix waste sample container filling system for nested, fixed-depth sampling system

    International Nuclear Information System (INIS)

    BOGER, R.M.

    1999-01-01

    This design compliance matrix document provides specific design-related functional characteristics, constraints, and requirements for the container filling system that is part of the nested, fixed-depth sampling system. This document addresses performance, external interface, ALARA, Authorization Basis, environmental, and design code requirements for the container filling system. The container filling system will interface with the waste stream from the fluidic pumping channels of the nested, fixed-depth sampling system and will fill containers with waste that meets the Resource Conservation and Recovery Act (RCRA) criteria for waste containing volatile and semi-volatile organic materials. The specifications for the nested, fixed-depth sampling system are described in a Level 2 Specification document (HNF-3483, Rev. 1). The basis for this design compliance matrix document is the Tank Waste Remediation System (TWRS) desk instructions for design compliance matrix documents (PI-CP-008-00, Rev. 0).

  11. Semi-supervised weighted kernel clustering based on gravitational search for fault diagnosis.

    Science.gov (United States)

    Li, Chaoshun; Zhou, Jianzhong

    2014-09-01

    Supervised learning methods, like the support vector machine (SVM), have been widely applied in diagnosing known faults; however, such methods fail when a new or unknown fault occurs. Traditional unsupervised kernel clustering can be used for unknown fault diagnosis, but it cannot exploit historical classification information to improve diagnosis accuracy. In this paper, a semi-supervised kernel clustering model is designed to diagnose both known and unknown faults. First, a novel semi-supervised weighted kernel clustering algorithm based on gravitational search (SWKC-GS) is proposed for clustering a dataset composed of labeled and unlabeled fault samples. The clustering model of SWKC-GS is defined in terms of the misclassification rate of the labeled samples and a fuzzy clustering index on the whole dataset. The gravitational search algorithm (GSA) is used to solve the clustering model, with the cluster centers, feature weights and kernel function parameter as optimization variables. New fault samples are then identified and diagnosed by calculating the weighted kernel distance between them and the fault cluster centers. If a fault sample is unknown, it is added to the historical dataset, and SWKC-GS is used to partition the mixed dataset and update the clustering results for diagnosing the new fault. In experiments, the proposed method was applied to fault diagnosis of rotatory bearings; SWKC-GS was compared not only with traditional clustering methods but also with SVM and neural networks for known fault diagnosis, and it was additionally applied to unknown fault diagnosis. The results show the effectiveness of the proposed method in achieving the expected diagnosis accuracy for both known and unknown faults of rotatory bearings. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.
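The diagnosis step described above, assigning a new sample by its weighted kernel distance to the fault cluster centers, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the Gaussian kernel, the feature weights, the cluster centers and the rejection threshold are all hypothetical values chosen for the example.

```python
import math

def weighted_kernel_distance(x, center, w, sigma=1.0):
    """Distance in Gaussian-kernel feature space with feature weights w:
    d^2 = k(x,x) + k(c,c) - 2*k(x,c) = 2 - 2*exp(-||w*(x-c)||^2 / (2*sigma^2))."""
    sq = sum((wi * (xi - ci)) ** 2 for wi, xi, ci in zip(w, x, center))
    return 2.0 - 2.0 * math.exp(-sq / (2.0 * sigma ** 2))

def diagnose(x, centers, w, threshold=1.5):
    """Assign a new sample to the nearest fault-cluster center, or report
    it as unknown (a candidate new fault) if every distance exceeds the
    threshold. Threshold and weights are hypothetical."""
    dists = {name: weighted_kernel_distance(x, c, w) for name, c in centers.items()}
    best = min(dists, key=dists.get)
    return best if dists[best] < threshold else "unknown"

# Toy cluster centers learned from labeled fault samples (invented values).
centers = {"fault_A": [0.0, 0.0], "fault_B": [5.0, 5.0]}
w = [1.0, 1.0]

print(diagnose([0.2, -0.1], centers, w))   # -> fault_A
print(diagnose([20.0, 20.0], centers, w))  # -> unknown
```

A sample flagged "unknown" would then be merged into the historical dataset and the clustering re-run, as the abstract describes.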

  12. Social network recruitment for Yo Puedo: an innovative sexual health intervention in an underserved urban neighborhood—sample and design implications.

    Science.gov (United States)

    Minnis, Alexandra M; vanDommelen-Gonzalez, Evan; Luecke, Ellen; Cheng, Helen; Dow, William; Bautista-Arredondo, Sergio; Padian, Nancy S

    2015-02-01

    Most existing evidence-based sexual health interventions focus on individual-level behavior, even though there is substantial evidence highlighting the influential role of social environments in shaping adolescents' behaviors and reproductive health outcomes. We developed Yo Puedo, a combined conditional cash transfer and life skills intervention for youth to promote educational attainment, job training, and reproductive health wellness, and evaluated it for feasibility among 162 youth aged 16-21 years in a predominantly Latino community in San Francisco, CA. The intervention targeted youth's social networks and involved recruitment and randomization of small social network clusters. In this paper we describe the design of the feasibility study and report participants' baseline characteristics. Furthermore, we examine the sample and design implications of recruiting social network clusters as the unit of randomization. Baseline data provide evidence that we successfully enrolled high-risk youth using a social network recruitment approach in community and school-based settings. Nearly all participants (95%) were at high risk of adverse educational and reproductive health outcomes based on multiple measures of low socioeconomic status (81%) and/or reported high-risk behaviors (e.g., gang affiliation, past pregnancy, recent unprotected sex, frequent substance use; 62%). We achieved variability in the study sample through heterogeneity in recruitment of the index participants, whereas the individuals within the small social networks of close friends demonstrated substantial homogeneity across sociodemographic and risk profile characteristics. Social network recruitment was feasible and yielded a sample of high-risk youth willing to enroll in a randomized study to evaluate a novel sexual health intervention.

  13. BRIGHTEST CLUSTER GALAXIES AND CORE GAS DENSITY IN REXCESS CLUSTERS

    International Nuclear Information System (INIS)

    Haarsma, Deborah B.; Leisman, Luke; Donahue, Megan; Bruch, Seth; Voit, G. Mark; Boehringer, Hans; Pratt, Gabriel W.; Pierini, Daniele; Croston, Judith H.; Arnaud, Monique

    2010-01-01

    We investigate the relationship between brightest cluster galaxies (BCGs) and their host clusters using a sample of nearby galaxy clusters from the Representative XMM-Newton Cluster Structure Survey. The sample was imaged with the Southern Observatory for Astrophysical Research in R band to investigate the mass of the old stellar population. Using a metric radius of 12 h^-1 kpc, we found that the BCG luminosity depends weakly on overall cluster mass as L_BCG ∝ M_cl^(0.18±0.07), consistent with previous work. We found that 90% of the BCGs are located within 0.035 r_500 of the peak of the X-ray emission, including all of the cool core (CC) clusters. We also found an unexpected correlation between the BCG metric luminosity and the core gas density for non-cool-core (non-CC) clusters, following a power law of n_e ∝ L_BCG^(2.7±0.4) (where n_e is measured at 0.008 r_500). The correlation is not easily explained by star formation (which is weak in non-CC clusters) or overall cluster mass (which is not correlated with core gas density). The trend persists even when the BCG is not located near the peak of the X-ray emission, so proximity is not necessary. We suggest that, for non-CC clusters, this correlation implies that the same process that sets the central entropy of the cluster gas also determines the central stellar density of the BCG, and that this underlying physical process is likely to be mergers.

  14. Fullerene nanostructure design with cluster ion impacts

    Czech Academy of Sciences Publication Activity Database

    Lavrentiev, Vasyl; Vacík, Jiří; Naramoto, H.; Narumi, K.

    2009-01-01

    Vol. 483 (2009), pp. 479-483. ISSN 0925-8388. R&D Projects: GA AV ČR IAA200480702; GA AV ČR IAA400100701; GA AV ČR(CZ) KAN400480701. Institutional research plan: CEZ:AV0Z10480505. Keywords: fullerene films * C60+ clusters * cluster ion implantation * patterning. Subject RIV: BG - Nuclear, Atomic and Molecular Physics, Colliders. Impact factor: 2.135, year: 2009

  15. Scientific Cluster Deployment and Recovery – Using puppet to simplify cluster management

    International Nuclear Information System (INIS)

    Hendrix, Val; Yao Yushu; Benjamin, Doug

    2012-01-01

    Deployment, maintenance and recovery of a scientific cluster, which has complex, specialized services, can be a time-consuming task requiring the assistance of Linux system administrators and network engineers as well as domain experts. Universities and small institutions that have a part-time FTE with limited time for, and knowledge of, the administration of such clusters can be strained by such maintenance tasks. The current work is the result of an effort to maintain a data analysis cluster (DAC) with minimal effort by a local system administrator. The realized benefit is that the scientist, who is also the local system administrator, is able to focus on the data analysis instead of the intricacies of managing a cluster. Our work provides a cluster deployment and recovery process (CDRP) based on the Puppet configuration engine, allowing a part-time FTE to easily deploy and recover entire clusters with minimal effort. Puppet is a configuration management system (CMS) used widely in computing centers for the automatic management of resources. Domain experts use Puppet's declarative language to define reusable modules for service configuration and deployment. Our CDRP has three actors: domain experts, a cluster designer and a cluster manager. The domain experts first write the Puppet modules for the cluster services. A cluster designer then defines a cluster; this includes creating cluster roles, mapping the services to those roles and determining the relationships between the services. Finally, a cluster manager acquires the resources (machines, networking), enters the cluster input parameters (hostnames, IP addresses) and automatically generates the deployment scripts used by Puppet to configure each machine to act in its designated role. In the event of a machine failure, the originally generated deployment scripts, along with Puppet, can be used to easily reconfigure a new machine. The cluster definition produced in our CDRP is an integral part of automating cluster deployment.

  16. Sample design for the residential energy consumption survey

    Energy Technology Data Exchange (ETDEWEB)

    1994-08-01

    The purpose of this report is to provide detailed information about the multistage area-probability sample design used for the Residential Energy Consumption Survey (RECS). It is intended as a technical report, for use by statisticians, to better understand the theory and procedures followed in the creation of the RECS sample frame. For a more cursory overview of the RECS sample design, refer to the appendix entitled "How the Survey was Conducted," which is included in the statistical reports produced for each RECS survey year.
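A multistage area-probability design of the kind used for RECS can be illustrated with a toy two-stage selection: primary sampling units (PSUs) drawn with probability proportional to size (PPS), followed by a simple random sample of housing units within each selected PSU. The PSU names and housing-unit counts below are invented for the sketch, and drawing with replacement is a simplification of the actual RECS procedure.

```python
import random

random.seed(1)

# Hypothetical primary sampling units (PSUs) with housing-unit counts.
psus = {"A": 5000, "B": 2000, "C": 800, "D": 1200, "E": 1000}

def select_psus(psus, n_draws):
    """First stage: draw PSUs with probability proportional to size,
    with replacement for simplicity."""
    units, sizes = zip(*psus.items())
    return random.choices(units, weights=sizes, k=n_draws)

def select_households(psu_size, m):
    """Second stage: simple random sample of m housing units in a PSU."""
    return random.sample(range(psu_size), m)

stage1 = select_psus(psus, n_draws=2)
stage2 = {psu: select_households(psus[psu], m=10) for psu in stage1}
print(stage1, {k: len(v) for k, v in stage2.items()})
```

Real designs add intermediate stages (counties, segments, listed addresses) and use without-replacement PPS, but the two-stage skeleton is the same.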

  17. EERA-DTOC Project: Design Tools for Offshore Wind Farm Clusters; Proyecto EERA-DTOC: herramientas para el diseno de clusters de Parques Eolicos Marinos

    Energy Technology Data Exchange (ETDEWEB)

    Palomares, A. M.

    2015-07-01

    In the EERA-DTOC project, an integrated and validated software design tool for the optimization of offshore wind farms and wind farm clusters has been developed. The CIEMAT contribution to this project has changed the view on mesoscale wind forecasting models, which until now were not considered capable of modeling wind-farm-scale phenomena. The WRF model has been shown to be able to simulate the wakes caused by wind turbines on downwind turbines (inter-turbine wakes within a wind farm) as well as the wakes between wind farms within a cluster. (Author)

  18. Nurses' beliefs about nursing diagnosis: A study with cluster analysis.

    Science.gov (United States)

    D'Agostino, Fabio; Pancani, Luca; Romero-Sánchez, José Manuel; Lumillo-Gutierrez, Iris; Paloma-Castro, Olga; Vellone, Ercole; Alvaro, Rosaria

    2018-06-01

    To identify clusters of nurses in relation to their beliefs about nursing diagnosis in two populations (Italian and Spanish); to investigate differences among clusters of nurses in each population considering the nurses' socio-demographic data, attitudes towards nursing diagnosis, intentions to make nursing diagnoses and actual behaviours in making nursing diagnoses. Nurses' beliefs concerning nursing diagnosis can influence its use in practice, but this influence is still unclear. A cross-sectional design. A convenience sample of nurses in Italy and Spain was enrolled. Data were collected between 2014 and 2015 using a socio-demographic questionnaire and scales measuring behavioural, normative and control beliefs, attitudes, intentions and behaviours. The sample included 499 nurses (272 Italian and 227 Spanish). Of these, 66.5% of the Italian and 90.7% of the Spanish sample were female. The mean age was 36.5 years in the Italian sample and 45.2 years in the Spanish sample. Six clusters of nurses were identified in Spain and four in Italy; three clusters were similar across the two populations. In each population, similar significant associations were identified between cluster membership and age, years of work, attitudes towards nursing diagnosis, intentions to make nursing diagnoses and behaviours in making nursing diagnoses. Belief profiles identified unique subsets of nurses with distinct characteristics. Categorizing nurses by belief patterns may help administrators and educators tailor interventions aimed at improving the use of nursing diagnosis in practice. © 2018 John Wiley & Sons Ltd.

  19. System design description for sampling fuel in K basins

    International Nuclear Information System (INIS)

    Baker, R.B.

    1996-01-01

    This System Design Description provides: (1) statements of the Spent Nuclear Fuel Project's (SNFP) needs requiring sampling of fuel in the K East and K West Basins, (2) the sampling equipment functions and requirements, (3) a general work plan and the design logic being followed to develop the equipment, and (4) a summary description of the design of the sampling equipment. The report summarizes the integrated application of both the subject equipment and the canister sludge sampler in near-term characterization campaigns at the K Basins.

  20. Sampling design for use by the soil decontamination project

    International Nuclear Information System (INIS)

    Rutherford, D.W.; Stevens, J.R.

    1981-01-01

    This report proposes a general approach to the problem and discusses sampling of soil to map the contaminated area and to provide samples for characterization of soil components and contamination. Basic concepts in sample design are reviewed with reference to environmental transuranic studies. Common designs are reviewed and evaluated for use with specific objectives that might be required by the soil decontamination project. Examples of a hierarchical design pilot study and a combined hierarchical and grid study are proposed for the Rocky Flats 903 pad area.

  1. [Saarland Growth Study: sampling design].

    Science.gov (United States)

    Danker-Hopfe, H; Zabransky, S

    2000-01-01

    The use of reference data to evaluate the physical development of children and adolescents is part of the daily routine in the paediatric outpatient clinic. The construction of such reference values requires the collection of extensive data. There are different kinds of reference data: cross-sectional references, which are based on data collected from a large, representative cross-sectional sample of the population; longitudinal references, which are based on follow-up surveys of usually smaller samples of individuals from birth to maturity; and mixed longitudinal references, which are a combination of longitudinal and cross-sectional reference data. The advantages and disadvantages of the different methods of data collection and the resulting reference data are discussed. The Saarland Growth Study was conducted for several reasons: growth processes are subject to secular changes, there are no specific reference data for children and adolescents from this part of the country, and the growth charts in use in paediatric practice are possibly no longer appropriate. The Saarland Growth Study therefore served two purposes: a) to create up-to-date regional reference data, and b) to create a database for future studies on secular trends in the growth processes of children and adolescents from Saarland. The present contribution focuses on general remarks on the sampling design of (cross-sectional) growth surveys and its implications for the design of the present study.

  2. TimesVector: a vectorized clustering approach to the analysis of time series transcriptome data from multiple phenotypes.

    Science.gov (United States)

    Jung, Inuk; Jo, Kyuri; Kang, Hyejin; Ahn, Hongryul; Yu, Youngjae; Kim, Sun

    2017-12-01

    Identifying biologically meaningful gene expression patterns from time series gene expression data is important for understanding the underlying biological mechanisms. To identify significantly perturbed gene sets between different phenotypes, analysis of time series transcriptome data requires consideration of the time and sample dimensions. The analysis of such time series data thus seeks gene sets that exhibit similar or different expression patterns between two or more sample conditions, constituting three-dimensional data, i.e. gene-time-condition. The computational complexity of analyzing such data is very high, compared to the already difficult NP-hard two-dimensional biclustering algorithms. Because of this challenge, traditional time series clustering algorithms are designed to capture co-expressed genes with similar expression patterns in two sample conditions. We present a triclustering algorithm, TimesVector, specifically designed for clustering three-dimensional time series data to capture distinctively similar or different gene expression patterns between two or more sample conditions. TimesVector identifies clusters with distinctive expression patterns in three steps: (i) dimension reduction and clustering of time-condition concatenated vectors, (ii) post-processing clusters to detect similar and distinct expression patterns and (iii) rescuing genes from unclassified clusters. Using four sets of time series gene expression data, generated by both microarray and high-throughput sequencing platforms, we demonstrate that TimesVector successfully detects biologically meaningful clusters of high quality. TimesVector improved clustering quality compared to existing triclustering tools, and only TimesVector successfully detected clusters with differential expression patterns across conditions. The TimesVector software is available at http://biohealth.snu.ac.kr/software/TimesVector/. sunkim.bioinfo@snu.ac.kr.
Supplementary data are available at
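Step (i) above, clustering genes on their time-condition concatenated vectors, can be illustrated with a toy example. The gene names, expression values and the simple nearest-centroid assignment are illustrative stand-ins, not the TimesVector implementation.

```python
# Toy data: expression of each gene over 3 time points in 2 conditions.
genes = {
    "g1": ([1, 2, 3], [1, 2, 3]),   # same pattern in both conditions
    "g2": ([1, 2, 3], [3, 2, 1]),   # different pattern across conditions
    "g3": ([1, 2, 3], [1, 2, 3]),
    "g4": ([1, 2, 3], [3, 2, 1]),
}

# Concatenate the condition time courses into one vector per gene.
vectors = {g: cond_a + cond_b for g, (cond_a, cond_b) in genes.items()}

def dist(u, v):
    """Squared Euclidean distance between two vectors."""
    return sum((a - b) ** 2 for a, b in zip(u, v))

# Minimal nearest-centroid clustering with fixed seed vectors (g1, g2).
centroids = [vectors["g1"], vectors["g2"]]
labels = {g: min(range(2), key=lambda k: dist(v, centroids[k]))
          for g, v in vectors.items()}
print(labels)
```

Genes whose concatenated vectors agree across conditions (g1, g3) land in one cluster, while genes with condition-specific patterns (g2, g4) land in the other, which is the kind of separation step (i) is meant to expose.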

  3. Design and development of multiple sample counting setup

    International Nuclear Information System (INIS)

    Rath, D.P.; Murali, S.; Babu, D.A.R.

    2010-01-01

    Full text: The analysis of active samples on a regular basis for ambient air activity and floor contamination from the radiochemical lab accounts for a major share of the operational activity under a Health Physicist's responsibility. The requirement for daily air sample analysis, with immediate and delayed counting of samples from various labs in addition to smear swipe checks of the labs, led to the need for a system that could perform multiple sample analysis in a time-programmed manner from a single sample loading. A multiple alpha/beta counting system was therefore designed and fabricated. Ten samples can be loaded into slots and counted in order in a time-programmed manner, with results displayed and records maintained on a PC. The paper describes the design and development of the multiple sample counting setup presently in use at the facility, which has reduced the man-hours consumed in counting and recording results.

  4. Probability sampling design in ethnobotanical surveys of medicinal plants

    Directory of Open Access Journals (Sweden)

    Mariano Martinez Espinosa

    2012-07-01

    Full Text Available Non-probability sampling designs can be used in ethnobotanical surveys of medicinal plants. However, such methods do not allow statistical inferences to be made from the data generated. The aim of this paper is to present a probability sampling design that is applicable in ethnobotanical studies of medicinal plants. The sampling design employed in the research titled "Ethnobotanical knowledge of medicinal plants used by traditional communities of Nossa Senhora Aparecida do Chumbo district (NSACD), Poconé, Mato Grosso, Brazil" was used as a case study. Probability sampling methods (simple random and stratified sampling) were used in this study. In order to determine the sample size, the following data were considered: a population size (N) of 1179 families; a confidence coefficient of 95%; a sampling error (d) of 0.05; and a proportion (p) of 0.5. The application of this sampling method resulted in a sample size (n) of at least 290 families in the district. The present study concludes that probability sampling methods necessarily have to be employed in ethnobotanical studies of medicinal plants, particularly where statistical inferences have to be made from the data obtained. This can be achieved by applying different existing probability sampling methods, or better still, a combination of such methods.
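The sample size reported above can be reproduced with the standard formula for estimating a proportion, applying the finite population correction to the infinite-population size n0 = z²·p·(1-p)/d²:

```python
import math

def sample_size(N, d=0.05, p=0.5, z=1.96):
    """Sample size for estimating a proportion with margin of error d,
    using the finite population correction (z = 1.96 for 95% confidence)."""
    n0 = (z ** 2 * p * (1 - p)) / d ** 2   # infinite-population size
    n = n0 / (1 + (n0 - 1) / N)            # finite population correction
    return math.ceil(n)

print(sample_size(N=1179))  # -> 290
```

With N = 1179, d = 0.05, p = 0.5 and z = 1.96 this gives n0 ≈ 384.2 and n = 290, matching the "at least 290 families" quoted in the abstract.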

  5. Interpretation of custom designed Illumina genotype cluster plots for targeted association studies and next-generation sequence validation

    Directory of Open Access Journals (Sweden)

    Tindall Elizabeth A

    2010-02-01

    Full Text Available Abstract. Background: High-throughput custom designed genotyping arrays are a valuable resource for biologically focused research studies and, increasingly, for validation of variation predicted by next-generation sequencing (NGS) technologies. We investigate the Illumina GoldenGate chemistry using custom designed VeraCode and Sentrix Array Matrix (SAM) assays for each of these applications, respectively. We highlight applications for interpretation of Illumina-generated genotype cluster plots to maximise data inclusion and reduce genotyping errors. Findings: We illustrate the dramatic effect of outliers in genotype calling and data interpretation, and suggest simple means to avoid genotyping errors. Furthermore, we present this platform as a successful method for two-cluster rare or non-autosomal variant calling. The ability of high-throughput technologies to accurately call rare variants will become an essential feature of future association studies. Finally, we highlight additional advantages of the Illumina GoldenGate chemistry in generating unusually segregated cluster plots that identify potential NGS-generated sequencing errors resulting from minimal coverage. Conclusions: We demonstrate the importance of visually inspecting genotype cluster plots generated by the Illumina software and issue warnings regarding commonly accepted quality control parameters. In addition to suggesting applications to minimise data exclusion, we propose that the Illumina cluster plots may be helpful in identifying potential input sequence errors, particularly important for studies to validate NGS-generated variation.

  6. Designing an enhanced groundwater sample collection system

    International Nuclear Information System (INIS)

    Schalla, R.

    1994-10-01

    As part of an ongoing technical support mission to achieve excellence and efficiency in environmental restoration activities at the Laboratory for Energy and Health-Related Research (LEHR), Pacific Northwest Laboratory (PNL) provided guidance on the design and construction of monitoring wells and identified the most suitable type of groundwater sampling pump and accessories for monitoring wells. The goal was to use a monitoring well design that would allow for hydrologic testing and reduce turbidity to minimize the impact of sampling. The sampling results of the newly designed monitoring wells were clearly superior to those of the previously installed monitoring wells. The new wells exhibited reduced turbidity, in addition to improved access for instrumentation and hydrologic testing. The variable-frequency submersible pump was selected as the best choice for obtaining groundwater samples. The literature references are listed at the end of this report. Despite some initial difficulties, the actual performance of the variable-frequency submersible pump and its accessories was effective in reducing sampling time and labor costs, and its ease of use was preferred over the previously used bladder pumps. The surface seal system, called the Dedicator, proved to be a useful accessory for preventing surface contamination while providing easy access for water-level measurements and for connecting the pump. Cost savings resulted from the use of the pre-production pumps (beta units) donated by the manufacturer for the demonstration. However, larger savings resulted from shortened field time due to the ease of using the submersible pumps and the surface seal access system. Proper deployment of the monitoring wells also resulted in cost savings and ensured representative samples.

  7. GAS SURFACE DENSITY, STAR FORMATION RATE SURFACE DENSITY, AND THE MAXIMUM MASS OF YOUNG STAR CLUSTERS IN A DISK GALAXY. II. THE GRAND-DESIGN GALAXY M51

    International Nuclear Information System (INIS)

    González-Lópezlira, Rosa A.; Pflamm-Altenburg, Jan; Kroupa, Pavel

    2013-01-01

    We analyze the relationship between maximum cluster mass and surface densities of total gas (Σ_gas), molecular gas (Σ_H2), neutral gas (Σ_HI), and star formation rate (Σ_SFR) in the grand-design galaxy M51, using published gas data and a catalog of masses, ages, and reddenings of more than 1800 star clusters in its disk, of which 223 are above the completeness limit of the cluster mass distribution function. By comparing the two-dimensional distribution of cluster masses and gas surface densities, we find for clusters older than 25 Myr that M_3rd ∝ Σ_HI^(0.4±0.2), where M_3rd is the median of the five most massive clusters. There is no correlation with Σ_gas, Σ_H2, or Σ_SFR. For clusters younger than 10 Myr, M_3rd ∝ Σ_HI^(0.6±0.1) and M_3rd ∝ Σ_gas^(0.5±0.2); there is no correlation with either Σ_H2 or Σ_SFR. The results could hardly be more different from those found for clusters younger than 25 Myr in M33. For the flocculent galaxy M33, there is no correlation between maximum cluster mass and neutral gas, but we have determined M_3rd ∝ Σ_gas^(3.8±0.3), M_3rd ∝ Σ_H2^(1.2±0.1), and M_3rd ∝ Σ_SFR^(0.9±0.1). For the older sample in M51, the lack of tight correlations is probably due to the combination of strong azimuthal variations in the surface densities of gas and star formation rate, and the cluster ages. These two facts mean that neither the azimuthal average of the surface densities at a given radius nor the surface densities at the present-day location of a stellar cluster represent the true surface densities at the place and time of cluster formation. In the case of the younger sample, even if the clusters have not yet traveled too far from their birth sites, the poor resolution of the radio data compared to the physical sizes of the clusters results in measured Σ values that are likely quite diluted compared to the actual densities relevant for the formation of the clusters.

  8. Hierarchical modeling of cluster size in wildlife surveys

    Science.gov (United States)

    Royle, J. Andrew

    2008-01-01

    Clusters or groups of individuals are the fundamental unit of observation in many wildlife sampling problems, including aerial surveys of waterfowl, marine mammals, and ungulates. Explicit accounting of cluster size in models for estimating abundance is necessary because detection of individuals within clusters is not independent and detectability of clusters is likely to increase with cluster size. This induces a cluster size bias in which the average cluster size in the sample is larger than in the population at large. Thus, failure to account for the relationship between detectability and cluster size will tend to yield a positive bias in estimates of abundance or density. I describe a hierarchical modeling framework for accounting for cluster-size bias in animal sampling. The hierarchical model consists of models for the observation process conditional on the cluster size distribution and the cluster size distribution conditional on the total number of clusters. Optionally, a spatial model can be specified that describes variation in the total number of clusters per sample unit. Parameter estimation, model selection, and criticism may be carried out using conventional likelihood-based methods. An extension of the model is described for the situation where measurable covariates at the level of the sample unit are available. Several candidate models within the proposed class are evaluated for aerial survey data on mallard ducks (Anas platyrhynchos).
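The cluster size bias described above is easy to demonstrate by simulation: if each member of a cluster is detected independently, larger clusters are detected more often, so the mean cluster size among detected clusters exceeds the population mean. The size distribution and per-individual detection probability below are arbitrary choices for the sketch, not values from the paper.

```python
import random

random.seed(42)

def simulate(n_clusters=100000, mean_size=3.0, p_individual=0.3):
    """Simulate size-dependent detection of clusters. Each of the s
    members of a cluster is detected independently with probability
    p_individual, so the cluster is seen with probability 1-(1-p)^s."""
    pop, detected = [], []
    for _ in range(n_clusters):
        # Zero-truncated size: every cluster has at least one individual.
        s = 1 + int(random.expovariate(1.0 / (mean_size - 1)))
        pop.append(s)
        if random.random() < 1 - (1 - p_individual) ** s:
            detected.append(s)
    return sum(pop) / len(pop), sum(detected) / len(detected)

pop_mean, sample_mean = simulate()
print(pop_mean, sample_mean)  # the sample mean is the larger of the two
```

The hierarchical model in the record corrects exactly this: by modeling the observation process conditional on the cluster size distribution, the population-level size distribution can be recovered from the biased sample.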

  9. Cluster Analysis of the Yale Global Tic Severity Scale (YGTSS): Symptom Dimensions and Clinical Correlates in an Outpatient Youth Sample

    OpenAIRE

    Kircanski, Katharina; Woods, Douglas W.; Chang, Susanna W.; Ricketts, Emily J.; Piacentini, John C.

    2010-01-01

    Tic disorders are heterogeneous, with symptoms varying widely both within and across patients. Exploration of symptom clusters may aid in the identification of symptom dimensions of empirical and treatment import. This article presents the results of two studies investigating tic symptom clusters using a sample of 99 youth (M age = 10.7, 81% male, 77% Caucasian) diagnosed with a primary tic disorder (Tourette's disorder or chronic tic disorder), across two university-based outpatient clinics ...

  10. Thermal probe design for Europa sample acquisition

    Science.gov (United States)

    Horne, Mera F.

    2018-01-01

    The planned lander missions to the surface of Europa will access samples from the subsurface of the ice in a search for signs of life. A small thermal drill (probe) is proposed to meet the sample requirement of the Science Definition Team's (SDT) report for the Europa mission. The probe is 2 cm in diameter and 16 cm in length and is designed to access the subsurface to 10 cm deep and to collect five ice samples of 7 cm3 each, approximately. The energy required to penetrate the top 10 cm of ice in a vacuum is 26 Wh, approximately, and to melt 7 cm3 of ice is 1.2 Wh, approximately. The requirement stated in the SDT report of collecting samples from five different sites can be accommodated with repeated use of the same thermal drill. For smaller sample sizes, a smaller probe of 1.0 cm in diameter with the same length of 16 cm could be utilized that would require approximately 6.4 Wh to penetrate the top 10 cm of ice, and 0.02 Wh to collect 0.1 g of sample. The thermal drill has the advantage of simplicity of design and operations and the ability to penetrate ice over a range of densities and hardness while maintaining sample integrity.
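The quoted melting energy can be sanity-checked with a back-of-the-envelope calculation: warm the ice sample from Europa's roughly 100 K surface temperature to 273 K, then supply the latent heat of fusion. The material constants below are rough textbook values assumed for the sketch, not figures from the paper.

```python
# Order-of-magnitude energy check for melting a 7 cm^3 ice sample.
rho_ice = 0.92      # g/cm^3, density of ice (assumed)
volume = 7.0        # cm^3, sample size from the record
mass = rho_ice * volume

c_ice = 1.6         # J/(g K), rough mean specific heat over 100-273 K (assumed)
dT = 273.0 - 100.0  # K, warming from Europa's ~100 K surface
latent = 334.0      # J/g, latent heat of fusion of water ice

energy_J = mass * (c_ice * dT + latent)
energy_Wh = energy_J / 3600.0
print(round(energy_Wh, 2))
```

This lands near 1.1 Wh, the same order as the approximately 1.2 Wh quoted in the abstract for melting 7 cm³ of ice.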

  11. Design and implementation of a scalable monitor system (IF-monitor) for Linux clusters

    International Nuclear Information System (INIS)

    Zhang Weiyi; Yu Chuansong; Sun Gongxing; Gu Ming

    2003-01-01

    PC clusters have become a cost-effective solution for high performance computing, but they usually provide only resource management and job scheduling, and unfortunately lack powerful monitoring for the PC farms built on them. A farm is therefore a 'black box' for administrators, who do not know how it runs or where the bottlenecks are. At present there are several running PC farms at IHEP, CAS, such as BES-Farm, LHC-Farm and YBJ-Farm. As the scale of these PC farms grows and the IHEP campus grid computing environment is implemented, it becomes more difficult to predict how they perform. As a result, an SNMP-based tool called IF-Monitor, which allows effective monitoring of large clusters, has been designed and developed at IHEP. (authors)

  12. Investigating role stress in frontline bank employees: A cluster based approach

    Directory of Open Access Journals (Sweden)

    Arti Devi

    2013-09-01

    Full Text Available An effective role stress management programme would benefit from a segmentation of employees based on their experience of role stressors. This study explores role stressor based segments of frontline bank employees towards providing a framework for designing such a programme. Cluster analysis on a random sample of 501 frontline employees of commercial banks in Jammu and Kashmir (India revealed three distinct segments – “overloaded employees”, “unclear employees”, and “underutilised employees”, based on their experience of role stressors. The findings suggest a customised approach to role stress management, with the role stress management programme designed to address cluster specific needs.

  13. Triangulation based inclusion probabilities: a design-unbiased sampling approach

    OpenAIRE

    Fehrmann, Lutz; Gregoire, Timothy; Kleinn, Christoph

    2011-01-01

    A probabilistic sampling approach for design-unbiased estimation of area-related quantitative characteristics of spatially dispersed population units is proposed. The developed field protocol includes a fixed number of 3 units per sampling location and is based on partial triangulations over their natural neighbors to derive the individual inclusion probabilities. The performance of the proposed design is tested in comparison to fixed area sample plots in a simulation with two forest stands. ...

  14. Model-based and design-based inference goals frame how to account for neighborhood clustering in studies of health in overlapping context types.

    Science.gov (United States)

    Lovasi, Gina S; Fink, David S; Mooney, Stephen J; Link, Bruce G

    2017-12-01

    Accounting for non-independence in health research often warrants attention. Particularly, the availability of geographic information systems data has increased the ease with which studies can add measures of the local "neighborhood" even if participant recruitment was through other contexts, such as schools or clinics. We highlight a tension between two perspectives that is often present, but particularly salient when more than one type of potentially health-relevant context is indexed (e.g., both neighborhood and school). On the one hand, a model-based perspective emphasizes the processes producing outcome variation, and observed data are used to make inference about that process. On the other hand, a design-based perspective emphasizes inference to a well-defined finite population, and is commonly invoked by those using complex survey samples or those with responsibility for the health of local residents. These two perspectives have divergent implications when deciding whether clustering must be accounted for analytically and how to select among candidate cluster definitions, though the perspectives are by no means monolithic. There are tensions within each perspective as well as between perspectives. We aim to provide insight into these perspectives and their implications for population health researchers. We focus on the crucial step of deciding which cluster definition or definitions to use at the analysis stage, as this has consequences for all subsequent analytic and interpretational challenges with potentially clustered data.

  15. Support Policies in Clusters: Prioritization of Support Needs by Cluster Members According to Cluster Life Cycle

    Directory of Open Access Journals (Sweden)

    Gulcin Salıngan

    2012-07-01

    Full Text Available Economic development has always been a moving target. Both national and local governments face the challenge of implementing effective and efficient economic policies and programs in order to best utilize their limited resources. One of the recent approaches in this area is cluster-based economic analysis and strategy development. This study reviews key literature and some of the cluster-based economic policies adopted by different governments. Based on this review, it proposes "the cluster life cycle" as a determining factor in identifying the support requirements of clusters. A survey, designed on the basis of a literature review of international cluster support programs, was conducted with 30 participants from 3 clusters at different maturity stages. This paper discusses the results of this study, conducted among cluster members in the Eskişehir-Bilecik-Kütahya Region in Turkey, on the support required to foster the development of the related clusters.

  16. Time-of-flight secondary ion mass spectrometry of a range of coal samples: a chemometrics (PCA, cluster, and PLS) analysis

    Energy Technology Data Exchange (ETDEWEB)

    Lei Pei; Guilin Jiang; Bonnie J. Tyler; Larry L. Baxter; Matthew R. Linford [Brigham Young University, Provo, UT (United States). Department of Chemistry and Biochemistry

    2008-03-15

    This paper documents time-of-flight secondary ion mass spectrometry (ToF-SIMS) analyses of 34 different coal samples. In many cases, the inorganic Na⁺, Al⁺, Si⁺, and K⁺ ions dominate the spectra, eclipsing the organic peaks. A scores plot of principal component 1 (PC1) versus principal component 2 (PC2) in a principal components analysis (PCA) effectively separates the coal spectra into a triangular pattern, where the different vertices of this pattern come from (i) spectra that have a strong inorganic signature that is dominated by Na⁺, (ii) spectra that have a strong inorganic signature that is dominated by Al⁺, Si⁺, and K⁺, and (iii) spectra that have a strong organic signature. Loadings plots of PC1 and PC2 confirm these observations. The spectra with the more prominent inorganic signatures come from samples with higher ash contents. Cluster analysis with the K-means algorithm was also applied to the data. The progressive clustering revealed in the dendrogram correlates extremely well with the clustering of the data points found in the scores plot of PC1 versus PC2 from the PCA. In addition, this clustering often correlates with properties of the coal samples, as measured by traditional analyses. Partial least-squares (PLS), which included the use of interval PLS and a genetic algorithm for variable selection, shows a good correlation between ToF-SIMS spectra and some of the properties measured by traditional means. Thus, ToF-SIMS appears to be a promising technique for the analysis of this important fuel. 33 refs., 9 figs., 5 tabs.
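    The chemometrics workflow described in this record (PCA scores on mean-centred spectra, then k-means on the scores) can be sketched as follows. The data here are synthetic stand-ins for peak-intensity tables, and the minimal k-means is illustrative rather than the authors' implementation:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    # Synthetic "spectra": 30 samples x 50 peak intensities, three groups
    groups = [rng.normal(loc=m, scale=0.3, size=(10, 50)) for m in (0.0, 2.0, 4.0)]
    X = np.vstack(groups)

    # Mean-centre, then PCA via SVD
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    scores = U * S          # sample scores; columns are PC1, PC2, ...
    loadings = Vt           # rows are the loadings for each PC

    def kmeans(X, k, iters=100, seed=1):
        """Minimal k-means; keeps an old centre if a cluster empties."""
        rng = np.random.default_rng(seed)
        centers = X[rng.choice(len(X), k, replace=False)]
        for _ in range(iters):
            labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
            centers = np.array([X[labels == j].mean(axis=0)
                                if np.any(labels == j) else centers[j]
                                for j in range(k)])
        return labels

    # Cluster the samples in the PC1-PC2 scores plane
    labels = kmeans(scores[:, :2], k=3)
    ```

    On real ToF-SIMS data the same steps apply to the normalized peak table; only the loading of `X` changes.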

  17. Substructure in clusters of galaxies

    International Nuclear Information System (INIS)

    Fitchett, M.J.

    1988-01-01

    Optical observations suggesting the existence of substructure in clusters of galaxies are examined. Models of cluster formation and methods used to detect substructure in clusters are reviewed. Consideration is given to classification schemes based on a departure of bright cluster galaxies from a spherically symmetric distribution, evidence for statistically significant substructure, and various types of substructure, including velocity, spatial, and spatial-velocity substructure. The substructure observed in the galaxy distribution in clusters is discussed, focusing on observations from general cluster samples, the Virgo cluster, the Hydra cluster, Centaurus, the Coma cluster, and the Cancer cluster. 88 refs

  18. Exploring cluster Monte Carlo updates with Boltzmann machines.

    Science.gov (United States)

    Wang, Lei

    2017-11-01

    Boltzmann machines are physics informed generative models with broad applications in machine learning. They model the probability distribution of an input data set with latent variables and generate new samples accordingly. Applying the Boltzmann machines back to physics, they are ideal recommender systems to accelerate the Monte Carlo simulation of physical systems due to their flexibility and effectiveness. More intriguingly, we show that the generative sampling of the Boltzmann machines can even give different cluster Monte Carlo algorithms. The latent representation of the Boltzmann machines can be designed to mediate complex interactions and identify clusters of the physical system. We demonstrate these findings with concrete examples of the classical Ising model with and without four-spin plaquette interactions. In the future, automatic searches in the algorithm space parametrized by Boltzmann machines may discover more innovative Monte Carlo updates.
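    For context, a standard hand-designed cluster Monte Carlo update of the kind this work aims to generalize is the Wolff algorithm for the classical Ising model. A minimal sketch (this is the textbook algorithm, not the Boltzmann-machine-derived update from the paper):

    ```python
    import numpy as np

    def wolff_update(spins, beta, rng):
        """One Wolff cluster flip on a periodic 2D Ising lattice."""
        L = spins.shape[0]
        p_add = 1.0 - np.exp(-2.0 * beta)      # bond-activation probability
        i, j = rng.integers(L, size=2)          # random seed site
        seed_spin = spins[i, j]
        cluster = {(i, j)}
        frontier = [(i, j)]
        while frontier:                         # grow cluster over aligned bonds
            x, y = frontier.pop()
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nx, ny = (x + dx) % L, (y + dy) % L
                if (nx, ny) not in cluster and spins[nx, ny] == seed_spin \
                        and rng.random() < p_add:
                    cluster.add((nx, ny))
                    frontier.append((nx, ny))
        for x, y in cluster:                    # flip the whole cluster at once
            spins[x, y] *= -1
        return len(cluster)

    rng = np.random.default_rng(0)
    L, beta = 16, 0.44                          # near the critical coupling
    spins = rng.choice([-1, 1], size=(L, L))
    for _ in range(100):
        wolff_update(spins, beta, rng)
    ```

    The paper's point is that updates of this family can emerge from the latent structure of a trained Boltzmann machine rather than being designed by hand.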

  20. A Note on the Effect of Data Clustering on the Multiple-Imputation Variance Estimator: A Theoretical Addendum to the Lewis et al. article in JOS 2014

    Directory of Open Access Journals (Sweden)

    He Yulei

    2016-03-01

    Full Text Available Multiple imputation is a popular approach to handling missing data. Although it was originally motivated by survey nonresponse problems, it has been readily applied to other data settings. However, its general behavior still remains unclear when applied to survey data with complex sample designs, including clustering. Recently, Lewis et al. (2014) compared single- and multiple-imputation analyses for certain incomplete variables in the 2008 National Ambulatory Medical Care Survey, which has a nationally representative, multistage, and clustered sampling design. Their study results suggested that the increase of the variance estimate due to multiple imputation compared with single imputation largely disappears for estimates with large design effects. We complement their empirical research by providing some theoretical reasoning. We consider data sampled from an equally weighted, single-stage cluster design and characterize the process using a balanced, one-way normal random-effects model. Assuming that the missingness is completely at random, we derive analytic expressions for the within- and between-multiple-imputation variance estimators for the mean estimator, and thus conveniently reveal the impact of design effects on these variance estimators. We propose approximations for the fraction of missing information in clustered samples, extending previous results for simple random samples. We discuss some generalizations of this research and its practical implications for data release by statistical agencies.
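    The within- and between-imputation variances discussed in this record combine by Rubin's rules into the total multiple-imputation variance. A minimal sketch for a pooled mean, with illustrative numbers (not the survey data analyzed in the note):

    ```python
    import numpy as np

    def combine_mi(estimates, variances):
        """Rubin's rules for m multiply-imputed estimates of a scalar."""
        m = len(estimates)
        q_bar = np.mean(estimates)          # pooled point estimate
        w_bar = np.mean(variances)          # within-imputation variance
        b = np.var(estimates, ddof=1)       # between-imputation variance
        t = w_bar + (1.0 + 1.0 / m) * b     # total variance
        fmi = (1.0 + 1.0 / m) * b / t       # approx. fraction of missing info
        return q_bar, t, fmi

    # Five imputed-data estimates of a mean, each with its complete-data variance
    q, t, fmi = combine_mi([10.1, 9.8, 10.3, 10.0, 9.9],
                           [0.25, 0.24, 0.26, 0.25, 0.25])
    ```

    The note's observation can be read off this formula: when clustering inflates the within-imputation variance (large design effect), the relative contribution of the between-imputation term, and hence the penalty of multiple over single imputation, shrinks.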

  1. ACS sampling system: design, implementation, and performance evaluation

    Science.gov (United States)

    Di Marcantonio, Paolo; Cirami, Roberto; Chiozzi, Gianluca

    2004-09-01

    By means of the ACS (ALMA Common Software) framework we designed and implemented a sampling system which allows sampling of every Characteristic Component Property with a specific, user-defined, sustained frequency limited only by the hardware. Collected data are sent to various clients (one or more Java plotting widgets, a dedicated GUI or a COTS application) using the ACS/CORBA Notification Channel. The data transport is optimized: samples are cached locally and sent in packets with a lower and user-defined frequency to keep network load under control. Simultaneous sampling of the Properties of different Components is also possible. Together with the design and implementation issues we present the performance of the sampling system evaluated on two different platforms: on a VME based system using VxWorks RTOS (currently adopted by ALMA) and on a PC/104+ embedded platform using Red Hat 9 Linux operating system. The PC/104+ solution offers, as an alternative, a low cost PC compatible hardware environment with free and open operating system.
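    The cache-and-send optimization described in this record (samples buffered locally, flushed to the network in packets at a lower, user-defined rate) can be sketched as follows. All class and method names here are hypothetical illustrations, not the actual ACS API:

    ```python
    from collections import deque

    class SampleBuffer:
        """Buffers high-rate samples; flushes one packet per N samples."""

        def __init__(self, flush_every):
            self.flush_every = flush_every   # samples per network packet
            self.buffer = deque()
            self.sent_packets = []           # stand-in for the network channel

        def add(self, timestamp, value):
            self.buffer.append((timestamp, value))
            if len(self.buffer) >= self.flush_every:
                self.flush()

        def flush(self):
            if self.buffer:
                self.sent_packets.append(list(self.buffer))  # send one packet
                self.buffer.clear()

    buf = SampleBuffer(flush_every=100)
    for t in range(1000):                    # 1000 high-rate samples ...
        buf.add(t, t * 0.5)
    # ... reach the "network" as only 10 packets
    ```

    The design choice is the usual latency-versus-load trade-off: larger packets lower network overhead at the cost of delaying when clients see the newest samples.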

  2. Time clustered sampling can inflate the inferred substitution rate in foot-and-mouth disease virus analyses

    DEFF Research Database (Denmark)

    Pedersen, Casper-Emil Tingskov; Frandsen, Peter; Wekesa, Sabenzia N.

    2015-01-01

    abundance of sequence data sampled under widely different schemes, an effort to keep results consistent and comparable is needed. This study emphasizes commonly disregarded problems in the inference of evolutionary rates in viral sequence data when sampling is unevenly distributed on a temporal scale...... through a study of the foot-and-mouth (FMD) disease virus serotypes SAT 1 and SAT 2. Our study shows that clustered temporal sampling in phylogenetic analyses of FMD viruses will strongly bias the inferences of substitution rates and tMRCA because the inferred rates in such data sets reflect a rate closer...... to the mutation rate rather than the substitution rate. Estimating evolutionary parameters from viral sequences should be performed with due consideration of the differences in short-term and longer-term evolutionary processes occurring within sets of temporally sampled viruses, and studies should carefully...

  3. COSMOLOGICAL CONSTRAINTS FROM GALAXY CLUSTERING AND THE MASS-TO-NUMBER RATIO OF GALAXY CLUSTERS

    International Nuclear Information System (INIS)

    Tinker, Jeremy L.; Blanton, Michael R.; Sheldon, Erin S.; Wechsler, Risa H.; Becker, Matthew R.; Rozo, Eduardo; Zu, Ying; Weinberg, David H.; Zehavi, Idit; Busha, Michael T.; Koester, Benjamin P.

    2012-01-01

    We place constraints on the average density (Ω_m) and clustering amplitude (σ_8) of matter using a combination of two measurements from the Sloan Digital Sky Survey: the galaxy two-point correlation function, w_p(r_p), and the mass-to-galaxy-number ratio within galaxy clusters, M/N, analogous to cluster M/L ratios. Our w_p(r_p) measurements are obtained from DR7 while the sample of clusters is the maxBCG sample, with cluster masses derived from weak gravitational lensing. We construct nonlinear galaxy bias models using the Halo Occupation Distribution (HOD) to fit both w_p(r_p) and M/N for different cosmological parameters. HOD models that match the same two-point clustering predict different numbers of galaxies in massive halos when Ω_m or σ_8 is varied, thereby breaking the degeneracy between cosmology and bias. We demonstrate that this technique yields constraints that are consistent and competitive with current results from cluster abundance studies, without the use of abundance information. Using w_p(r_p) and M/N alone, we find Ω_m^0.5 σ_8 = 0.465 ± 0.026, with individual constraints of Ω_m = 0.29 ± 0.03 and σ_8 = 0.85 ± 0.06. Combined with current cosmic microwave background data, these constraints are Ω_m = 0.290 ± 0.016 and σ_8 = 0.826 ± 0.020. All errors are 1σ. The systematic uncertainties to which the M/N technique is most sensitive are the amplitude of the bias function of dark matter halos and the possibility of redshift evolution between the SDSS Main sample and the maxBCG cluster sample. Our derived constraints are insensitive to the current level of uncertainties in the halo mass function and in the mass-richness relation of clusters and its scatter, making the M/N technique complementary to cluster abundances as a method for constraining cosmology with future galaxy surveys.

  4. COSMOLOGICAL CONSTRAINTS FROM GALAXY CLUSTERING AND THE MASS-TO-NUMBER RATIO OF GALAXY CLUSTERS

    Energy Technology Data Exchange (ETDEWEB)

    Tinker, Jeremy L.; Blanton, Michael R. [Center for Cosmology and Particle Physics, Department of Physics, New York University, New York, NY 10013 (United States); Sheldon, Erin S. [Brookhaven National Laboratory, Upton, NY 11973 (United States); Wechsler, Risa H. [Kavli Institute for Particle Astrophysics and Cosmology, Physics Department, and SLAC National Accelerator Laboratory, Stanford University, Stanford, CA 94305 (United States); Becker, Matthew R.; Rozo, Eduardo [Kavli Institute for Cosmological Physics, University of Chicago, Chicago, IL 60637 (United States); Zu, Ying; Weinberg, David H. [Department of Astronomy, Ohio State University, Columbus, OH 43210 (United States); Zehavi, Idit [Department of Astronomy and CERCA, Case Western Reserve University, Cleveland, OH 44106 (United States); Busha, Michael T. [Institute for Theoretical Physics, Department of Physics, University of Zurich, CH-8057 Zurich (Switzerland); Koester, Benjamin P. [Department of Astronomy and Astrophysics, University of Chicago, Chicago, IL 6037 (United States)

    2012-01-20

    We place constraints on the average density (Ω_m) and clustering amplitude (σ_8) of matter using a combination of two measurements from the Sloan Digital Sky Survey: the galaxy two-point correlation function, w_p(r_p), and the mass-to-galaxy-number ratio within galaxy clusters, M/N, analogous to cluster M/L ratios. Our w_p(r_p) measurements are obtained from DR7 while the sample of clusters is the maxBCG sample, with cluster masses derived from weak gravitational lensing. We construct nonlinear galaxy bias models using the Halo Occupation Distribution (HOD) to fit both w_p(r_p) and M/N for different cosmological parameters. HOD models that match the same two-point clustering predict different numbers of galaxies in massive halos when Ω_m or σ_8 is varied, thereby breaking the degeneracy between cosmology and bias. We demonstrate that this technique yields constraints that are consistent and competitive with current results from cluster abundance studies, without the use of abundance information. Using w_p(r_p) and M/N alone, we find Ω_m^0.5 σ_8 = 0.465 ± 0.026, with individual constraints of Ω_m = 0.29 ± 0.03 and σ_8 = 0.85 ± 0.06. Combined with current cosmic microwave background data, these constraints are Ω_m = 0.290 ± 0.016 and σ_8 = 0.826 ± 0.020. All errors are 1σ. The systematic uncertainties to which the M/N technique is most sensitive are the amplitude of the bias function of dark matter halos and the possibility of redshift evolution between the SDSS Main sample and the maxBCG cluster sample. Our derived constraints are insensitive to the current level of uncertainties in the halo mass function and in the mass-richness relation of clusters and its scatter, making the M/N technique complementary to cluster abundances as a method for constraining cosmology with future galaxy surveys.

  5. Design tool for offshore wind farm cluster planning

    DEFF Research Database (Denmark)

    Hasager, Charlotte Bay; Madsen, Peter Hauge; Giebel, Gregor

    2015-01-01

    In the framework of the FP7 project EERA DTOC: Design Tool for Offshore wind farm Cluster, a new software supporting the planning of offshore wind farms was developed, based on state-of-the-art approaches from large scale wind potential to economic benchmarking. The model portfolio includes WAs......P, FUGA, WRF, Net-Op, LCoE model, CorWind, FarmFlow, EeFarm and grid code compliance calculations. The development is done by members from European Energy Research Alliance (EERA) and guided by several industrial partners. A commercial spin-off from the project is the tool ‘Wind & Economy’. The software...... by the software and several tests were performed. The calculations include the smoothing effect on produced energy between wind farms located in different regional wind zones and the short time scales relevant for assessing balancing power. The grid code compliance was tested for several cases and the results...

  6. Sample size estimation to substantiate freedom from disease for clustered binary data with a specific risk profile

    DEFF Research Database (Denmark)

    Kostoulas, P.; Nielsen, Søren Saxmose; Browne, W. J.

    2013-01-01

    and power when applied to these groups. We propose the use of the variance partition coefficient (VPC), which measures the clustering of infection/disease for individuals with a common risk profile. Sample size estimates are obtained separately for those groups that exhibit markedly different heterogeneity......, thus, optimizing resource allocation. A VPC-based predictive simulation method for sample size estimation to substantiate freedom from disease is presented. To illustrate the benefits of the proposed approach we give two examples with the analysis of data from a risk factor study on Mycobacterium avium...
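    The variance partition coefficient for clustered binary data is commonly computed with the latent-variable approximation for a logistic random-intercept model, VPC = σ_u² / (σ_u² + π²/3), and it enters sample size estimation through the design effect. A hedged sketch with illustrative numbers (not this paper's exact simulation-based estimator):

    ```python
    import math

    def vpc_logistic(sigma_u2):
        """VPC under the latent-variable approximation for logistic models."""
        return sigma_u2 / (sigma_u2 + math.pi ** 2 / 3.0)

    def adjusted_sample_size(n_srs, cluster_size, vpc):
        """Inflate a simple-random-sample size by the cluster design effect."""
        deff = 1.0 + (cluster_size - 1.0) * vpc
        return math.ceil(n_srs * deff)

    v = vpc_logistic(0.5)            # e.g. between-herd variance of 0.5
    n = adjusted_sample_size(300, 20, v)   # 300 animals needed under SRS,
                                           # herds of ~20 animals each
    ```

    Estimating the VPC separately for groups with different risk profiles, as the paper proposes, amounts to applying this adjustment per group rather than once overall.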

  7. Mobile Variable Depth Sampling System Design Study

    International Nuclear Information System (INIS)

    BOGER, R.M.

    2000-01-01

    A design study is presented for a mobile, variable depth sampling system (MVDSS) that will support the treatment and immobilization of Hanford LAW and HLW. The sampler can be deployed in a 4-inch tank riser and has a design that is based on requirements identified in the Level 2 Specification (latest revision). The waste feed sequence for the MVDSS is based on Phase 1, Case 3S6 waste feed sequence. Technical information is also presented that supports the design study

  8. Mobile Variable Depth Sampling System Design Study

    Energy Technology Data Exchange (ETDEWEB)

    BOGER, R.M.

    2000-08-25

    A design study is presented for a mobile, variable depth sampling system (MVDSS) that will support the treatment and immobilization of Hanford LAW and HLW. The sampler can be deployed in a 4-inch tank riser and has a design that is based on requirements identified in the Level 2 Specification (latest revision). The waste feed sequence for the MVDSS is based on Phase 1, Case 3S6 waste feed sequence. Technical information is also presented that supports the design study.

  9. Cluster management.

    Science.gov (United States)

    Katz, R

    1992-11-01

    Cluster management is a management model that fosters decentralization of management, develops leadership potential of staff, and creates ownership of unit-based goals. Unlike shared governance models, there is no formal structure created by committees and it is less threatening for managers. There are two parts to the cluster management model. One is the formation of cluster groups, consisting of all staff and facilitated by a cluster leader. The cluster groups function for communication and problem-solving. The second part of the cluster management model is the creation of task forces. These task forces are designed to work on short-term goals, usually in response to solving one of the unit's goals. Sometimes the task forces are used for quality improvement or system problems. Clusters are groups of not more than five or six staff members, facilitated by a cluster leader. A cluster is made up of individuals who work the same shift. For example, people with job titles who work days would be in a cluster. There would be registered nurses, licensed practical nurses, nursing assistants, and unit clerks in the cluster. The cluster leader is chosen by the manager based on certain criteria and is trained for this specialized role. The concept of cluster management, criteria for choosing leaders, training for leaders, using cluster groups to solve quality improvement issues, and the learning process necessary for manager support are described.

  10. The correlation functions for the clustering of galaxies and Abell clusters

    International Nuclear Information System (INIS)

    Jones, B.J.T.; Jones, J.E.; Copenhagen Univ.

    1985-01-01

    The difference in amplitudes between the galaxy-galaxy correlation function and the correlation function between Abell clusters is a consequence of two facts. Firstly, most Abell clusters with z<0.08 lie in a relatively small volume of the sampled space, and secondly, the fraction of galaxies lying in Abell clusters differs considerably inside and outside of this volume. (The Abell clusters are confined to a smaller volume of space than are the galaxies.) We discuss the implications of this interpretation of the clustering correlation functions and present a simple model showing how such a situation may arise quite naturally in standard theories for galaxy formation. (orig.)

  11. Cluster Implantation and Deposition Apparatus

    DEFF Research Database (Denmark)

    Hanif, Muhammad; Popok, Vladimir

    2015-01-01

    In the current report, a design and capabilities of a cluster implantation and deposition apparatus (CIDA) involving two different cluster sources are described. The clusters produced from gas precursors (Ar, N etc.) by PuCluS-2 can be used to study cluster ion implantation in order to develop...

  12. The Local Maximum Clustering Method and Its Application in Microarray Gene Expression Data Analysis

    Directory of Open Access Journals (Sweden)

    Chen Yidong

    2004-01-01

    Full Text Available An unsupervised data clustering method, called the local maximum clustering (LMC) method, is proposed for identifying clusters in experiment data sets based on research interest. A magnitude property is defined according to research purposes, and data sets are clustered around each local maximum of the magnitude property. By properly defining a magnitude property, this method can overcome many difficulties in microarray data clustering such as reduced projection in similarities, noises, and arbitrary gene distribution. To critically evaluate the performance of this clustering method in comparison with other methods, we designed three model data sets with known cluster distributions and applied the LMC method as well as the hierarchical clustering method, the k-means clustering method, and the self-organizing map method to these model data sets. The results show that the LMC method produces the most accurate clustering results. As an example of application, we applied the method to cluster the leukemia samples reported in the microarray study of Golub et al. (1999).
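    The local-maximum idea described in this record can be sketched in one dimension: each data point climbs to its higher-valued neighbour until it reaches a local maximum of the magnitude property, and points sharing a maximum form one cluster. This is an illustrative re-implementation of the concept, not the authors' code:

    ```python
    def local_maximum_clusters(magnitude):
        """Label each point by the index of the local maximum it climbs to."""
        n = len(magnitude)

        def climb(i):
            while True:
                neighbours = [j for j in (i - 1, i + 1) if 0 <= j < n]
                best = max(neighbours, key=lambda j: magnitude[j])
                if magnitude[best] <= magnitude[i]:
                    return i            # i is a local maximum
                i = best                # hill-climb toward the maximum

        return [climb(i) for i in range(n)]

    # Two local maxima (indices 1 and 6) -> two clusters
    labels = local_maximum_clusters([1, 3, 2, 1, 0, 2, 5, 4])
    # labels == [1, 1, 1, 1, 6, 6, 6, 6]
    ```

    In the microarray setting the "magnitude" would be a density- or interest-based score over expression profiles, and the neighbourhood a similarity graph rather than array adjacency.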

  13. Diversity in the stellar velocity dispersion profiles of a large sample of brightest cluster galaxies z ≤ 0.3

    Science.gov (United States)

    Loubser, S. I.; Hoekstra, H.; Babul, A.; O'Sullivan, E.

    2018-06-01

    We analyse spatially resolved deep optical spectroscopy of brightest cluster galaxies (BCGs) located in 32 massive clusters with redshifts of 0.05 ≤ z ≤ 0.30 to investigate their velocity dispersion profiles. We compare these measurements to those of other massive early-type galaxies, as well as central group galaxies, where relevant. This unique, large sample extends to the most extreme of massive galaxies, spanning M_K between -25.7 and -27.8 mag, and host cluster halo mass M_500 up to 1.7 × 10^15 M⊙. To compare the kinematic properties between brightest group and cluster members, we analyse similar spatially resolved long-slit spectroscopy for 23 nearby brightest group galaxies (BGGs) from the Complete Local-Volume Groups Sample. We find a surprisingly large variety in velocity dispersion slopes for BCGs, with a significantly larger fraction of positive slopes, unique compared to other (non-central) early-type galaxies as well as the majority of the brightest members of the groups. We find that the velocity dispersion slopes of the BCGs and BGGs correlate with the luminosity of the galaxies, and we quantify this correlation. It is not clear whether the full diversity in velocity dispersion slopes that we see is reproduced in simulations.

  14. A pilot cluster randomized controlled trial of structured goal-setting following stroke.

    Science.gov (United States)

    Taylor, William J; Brown, Melanie; Levack, William; McPherson, Kathryn M; Reed, Kirk; Dean, Sarah G; Weatherall, Mark

    2012-04-01

    To determine the feasibility, the cluster design effect, and the variance and minimally important difference of the primary outcome in a pilot study of a structured approach to goal-setting. A cluster randomized controlled trial. Inpatient rehabilitation facilities. People who were admitted to inpatient rehabilitation following stroke who had sufficient cognition to engage in structured goal-setting and complete the primary outcome measure. Structured goal elicitation using the Canadian Occupational Performance Measure. Quality of life at 12 weeks using the Schedule for Individualised Quality of Life (SEIQOL-DW), Functional Independence Measure, Short Form 36 and Patient Perception of Rehabilitation (measuring satisfaction with rehabilitation). Assessors were blinded to the intervention. Four rehabilitation services and 41 patients were randomized. We found high values of the intraclass correlation for the outcome measures (ranging from 0.03 to 0.40) and a high variance of the SEIQOL-DW (SD 19.6) relative to its minimally important difference of 2.1, leading to impractically large sample size requirements for a cluster randomized design. A cluster randomized design is not a practical means of avoiding contamination effects in studies of inpatient rehabilitation goal-setting. Other techniques for coping with contamination effects are necessary.
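    The design-effect arithmetic behind this conclusion can be sketched as follows. The numbers (ICC of 0.40, 41 patients across 4 clusters, so roughly 10 per cluster) come from the abstract and are used purely illustratively:

    ```python
    def design_effect(icc, cluster_size):
        """Variance inflation of a cluster design over simple randomization."""
        return 1.0 + (cluster_size - 1.0) * icc

    def effective_n(n_total, icc, cluster_size):
        """Number of independent patients the clustered sample is worth."""
        return n_total / design_effect(icc, cluster_size)

    deff = design_effect(0.40, 10)       # 1 + 9 * 0.4 = 4.6
    n_eff = effective_n(41, 0.40, 10)    # 41 / 4.6, i.e. fewer than 9
    ```

    With an ICC of 0.40 the pilot's 41 patients carry the information of fewer than 9 independent ones, which is why the required full-trial sample size becomes impractically large.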

  15. VizieR Online Data Catalog: 44 SZ-selected galaxy clusters ACT observations (Sifon+, 2016)

    Science.gov (United States)

    Sifon, C.; Battaglia, N.; Hasselfield, M.; Menanteau, F.; Barrientos, L. F.; Bond, J. R.; Crichton, D.; Devlin, M. J.; Dunner, R.; Hilton, M.; Hincks, A. D.; Hlozek, R.; Huffenberger, K. M.; Hughes, J. P.; Infante, L.; Kosowsky, A.; Marsden, D.; Marriage, T. A.; Moodley, K.; Niemack, M. D.; Page, L. A.; Spergel, D. N.; Staggs, S. T.; Trac, H.; Wollack, E. J.

    2017-11-01

    ACT is a 6-metre off-axis Gregorian telescope located at an altitude of 5200 m in the Atacama desert in Chile, designed to observe the CMB at arcminute resolution. Galaxy clusters were detected in the 148GHz band by matched-filtering the maps with the pressure profile suggested by Arnaud et al. (2010A&A...517A..92A), fit to X-ray selected local (zGMOS) on the Gemini-South telescope, split in semesters 2011B (ObsID:GS-2011B-C-1, PI:Barrientos/Menanteau) and 2012A (ObsID:GS-2012A-C-1, PI:Menanteau), prioritizing clusters in the cosmological sample at 0.3sample (Sifon et al. 2013, Cat. J/ApJ/772/25). We also observed seven clusters in S82 with the Robert Stobie Spectrograph (RSS) on the Southern African Large Telescope (SALT), using MOS. Details of these observations are given in Kirk et al. (2015, Cat. J/MNRAS/449/4010). In order to enlarge the sample of studied clusters and member galaxies, we also compiled archival data for the equatorial sample. (1 data file).

  16. Reliability of impingement sampling designs: An example from the Indian Point station

    International Nuclear Information System (INIS)

    Mattson, M.T.; Waxman, J.B.; Watson, D.A.

    1988-01-01

    A 4-year data base (1976-1979) of daily fish impingement counts at the Indian Point electric power station on the Hudson River was used to compare the precision and reliability of three random-sampling designs: (1) simple random, (2) seasonally stratified, and (3) empirically stratified. The precision of daily impingement estimates improved logarithmically for each design as more days in the year were sampled. Simple random sampling was the least, and empirically stratified sampling was the most precise design, and the difference in precision between the two stratified designs was small. Computer-simulated sampling was used to estimate the reliability of the two stratified-random-sampling designs. A seasonally stratified sampling design was selected as the most appropriate reduced-sampling program for Indian Point station because: (1) reasonably precise and reliable impingement estimates were obtained using this design for all species combined and for eight common Hudson River fish by sampling only 30% of the days in a year (110 d); and (2) seasonal strata may be more precise and reliable than empirical strata if future changes in annual impingement patterns occur. The seasonally stratified design applied to the 1976-1983 Indian Point impingement data showed that selection of sampling dates based on daily species-specific impingement variability gave results that were more precise, but not more consistently reliable, than sampling allocations based on the variability of all fish species combined. 14 refs., 1 fig., 6 tabs
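The precision ordering reported above (stratified beats simple random sampling when counts are strongly seasonal) can be reproduced with a small Monte Carlo sketch. The daily counts, seasonal strata and allocation below are invented for illustration; only the 110-day sample size follows the abstract.

```python
import random
import statistics

random.seed(42)

# Hypothetical year of daily impingement counts with a strong winter peak
# (illustrative data only, not the Indian Point record).
year = [max(0, round(random.gauss(500, 80))) if d < 90 else
        max(0, round(random.gauss(50, 15))) for d in range(365)]

def srs_estimate(n):
    """Annual-total estimate from a simple random sample of n days."""
    days = random.sample(range(365), n)
    return 365 * statistics.mean(year[d] for d in days)

STRATA = [range(0, 90), range(90, 181), range(181, 273), range(273, 365)]

def stratified_estimate(n):
    """Annual-total estimate from a seasonally stratified sample,
    proportional allocation across four fixed seasonal strata."""
    total = 0.0
    for h in STRATA:
        nh = max(2, round(n * len(h) / 365))
        days = random.sample(list(h), nh)
        total += len(h) * statistics.mean(year[d] for d in days)
    return total

def replicate_sd(estimator, n=110, reps=400):
    """Monte Carlo standard deviation of an annual-total estimator."""
    return statistics.stdev(estimator(n) for _ in range(reps))

sd_srs = replicate_sd(srs_estimate)
sd_strat = replicate_sd(stratified_estimate)
```

Because the between-stratum variance dominates in seasonal data, the stratified estimator's replication SD comes out well below the simple-random one, matching the ordering found in the study.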

  17. Conditional estimation of exponential random graph models from snowball sampling designs

    NARCIS (Netherlands)

    Pattison, Philippa E.; Robins, Garry L.; Snijders, Tom A. B.; Wang, Peng

    2013-01-01

    A complete survey of a network in a large population may be prohibitively difficult and costly. So it is important to estimate models for networks using data from various network sampling designs, such as link-tracing designs. We focus here on snowball sampling designs, designs in which the members

  18. Filaments and clusters of galaxies

    International Nuclear Information System (INIS)

    Soltan, A.

    1987-01-01

    A statistical test for filaments of galaxies is performed. Only a particular form of filament is considered, viz. filaments connecting Abell clusters of galaxies. The relative position of triplets ''cluster - field object - cluster'' is analysed. Although neither the cluster sample nor the field-object sample is homogeneous and complete, only a peculiar form of selection effect could affect the present statistics. Comparison of observational data with simulations shows that less than 15 per cent of all field galaxies are concentrated in filaments connecting rich clusters. Most of the field objects used in the analysis are not normal galaxies, and it is possible that this conclusion is not in conflict with the apparent filaments seen in the Lick counts and in some nearby 3D maps of the galaxy distribution. 26 refs., 2 figs. (author)

  19. Strategies for achieving high sequencing accuracy for low diversity samples and avoiding sample bleeding using illumina platform.

    Science.gov (United States)

    Mitra, Abhishek; Skrzypczak, Magdalena; Ginalski, Krzysztof; Rowicka, Maga

    2015-01-01

    Sequencing microRNA, reduced representation sequencing, Hi-C technology and any method requiring the use of in-house barcodes result in sequencing libraries with low initial sequence diversity. Sequencing such data on the Illumina platform typically produces low quality data due to the limitations of the Illumina cluster calling algorithm. Moreover, even in the case of diverse samples, these limitations are causing substantial inaccuracies in multiplexed sample assignment (sample bleeding). Such inaccuracies are unacceptable in clinical applications, and in some other fields (e.g. detection of rare variants). Here, we discuss how both problems with quality of low-diversity samples and sample bleeding are caused by incorrect detection of clusters on the flowcell during initial sequencing cycles. We propose simple software modifications (Long Template Protocol) that overcome this problem. We present experimental results showing that our Long Template Protocol remarkably increases data quality for low diversity samples, as compared with the standard analysis protocol; it also substantially reduces sample bleeding for all samples. For comprehensiveness, we also discuss and compare experimental results from alternative approaches to sequencing low diversity samples. First, we discuss how the low diversity problem, if caused by barcodes, can be avoided altogether at the barcode design stage. Second and third, we present modified guidelines, which are more stringent than the manufacturer's, for mixing low diversity samples with diverse samples and lowering cluster density, which in our experience consistently produces high quality data from low diversity samples. Fourth and fifth, we present rescue strategies that can be applied when sequencing results in low quality data and when there is no more biological material available. In such cases, we propose that the flowcell be re-hybridized and sequenced again using our Long Template Protocol. 
Alternatively, we discuss how
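The cluster-calling failure described above arises in cycles where nearly every read shows the same base, e.g. under a constant in-house barcode prefix. A simple pre-flight check is to compute the modal-base fraction per cycle; the helper name and the toy reads below are illustrative, not part of the authors' Long Template Protocol.

```python
from collections import Counter

def per_cycle_modal_fraction(reads):
    """For each sequencing cycle, the fraction of reads sharing the most common base."""
    length = min(len(r) for r in reads)
    fractions = []
    for cycle in range(length):
        counts = Counter(r[cycle] for r in reads)
        fractions.append(counts.most_common(1)[0][1] / len(reads))
    return fractions

# Toy library: a constant 4-base barcode prefix kills diversity in cycles 0-3.
reads = ["ACGT" + tail for tail in ("AACC", "GGTT", "ACAC", "GTGT")]
fractions = per_cycle_modal_fraction(reads)
low_diversity_cycles = [i for i, f in enumerate(fractions) if f > 0.9]
```

Cycles flagged by such a check are exactly the ones where Illumina cluster detection struggles, which motivates both the barcode-redesign and spike-in mitigations discussed in the abstract.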

  20. Strategies for achieving high sequencing accuracy for low diversity samples and avoiding sample bleeding using illumina platform.

    Directory of Open Access Journals (Sweden)

    Abhishek Mitra

    Full Text Available Sequencing microRNA, reduced representation sequencing, Hi-C technology and any method requiring the use of in-house barcodes result in sequencing libraries with low initial sequence diversity. Sequencing such data on the Illumina platform typically produces low quality data due to the limitations of the Illumina cluster calling algorithm. Moreover, even in the case of diverse samples, these limitations are causing substantial inaccuracies in multiplexed sample assignment (sample bleeding). Such inaccuracies are unacceptable in clinical applications, and in some other fields (e.g. detection of rare variants). Here, we discuss how both problems with quality of low-diversity samples and sample bleeding are caused by incorrect detection of clusters on the flowcell during initial sequencing cycles. We propose simple software modifications (Long Template Protocol) that overcome this problem. We present experimental results showing that our Long Template Protocol remarkably increases data quality for low diversity samples, as compared with the standard analysis protocol; it also substantially reduces sample bleeding for all samples. For comprehensiveness, we also discuss and compare experimental results from alternative approaches to sequencing low diversity samples. First, we discuss how the low diversity problem, if caused by barcodes, can be avoided altogether at the barcode design stage. Second and third, we present modified guidelines, which are more stringent than the manufacturer's, for mixing low diversity samples with diverse samples and lowering cluster density, which in our experience consistently produces high quality data from low diversity samples. Fourth and fifth, we present rescue strategies that can be applied when sequencing results in low quality data and when there is no more biological material available. In such cases, we propose that the flowcell be re-hybridized and sequenced again using our Long Template Protocol. 
Alternatively

  1. ANL small-sample calorimeter system design and operation

    International Nuclear Information System (INIS)

    Roche, C.T.; Perry, R.B.; Lewis, R.N.; Jung, E.A.; Haumann, J.R.

    1978-07-01

    The Small-Sample Calorimetric System is a portable instrument designed to measure the thermal power produced by radioactive decay of plutonium-containing fuels. The small-sample calorimeter is capable of measuring samples producing power up to 32 milliwatts at a rate of one sample every 20 min. The instrument is contained in two packages: a data-acquisition module consisting of a microprocessor with an 8K-byte nonvolatile memory, and a measurement module consisting of the calorimeter and a sample preheater. The total weight of the system is 18 kg

  2. Model-based and design-based inference goals frame how to account for neighborhood clustering in studies of health in overlapping context types

    Directory of Open Access Journals (Sweden)

    Gina S. Lovasi

    2017-12-01

    Full Text Available Accounting for non-independence in health research often warrants attention. Particularly, the availability of geographic information systems data has increased the ease with which studies can add measures of the local “neighborhood” even if participant recruitment was through other contexts, such as schools or clinics. We highlight a tension between two perspectives that is often present, but particularly salient when more than one type of potentially health-relevant context is indexed (e.g., both neighborhood and school. On the one hand, a model-based perspective emphasizes the processes producing outcome variation, and observed data are used to make inference about that process. On the other hand, a design-based perspective emphasizes inference to a well-defined finite population, and is commonly invoked by those using complex survey samples or those with responsibility for the health of local residents. These two perspectives have divergent implications when deciding whether clustering must be accounted for analytically and how to select among candidate cluster definitions, though the perspectives are by no means monolithic. There are tensions within each perspective as well as between perspectives. We aim to provide insight into these perspectives and their implications for population health researchers. We focus on the crucial step of deciding which cluster definition or definitions to use at the analysis stage, as this has consequences for all subsequent analytic and interpretational challenges with potentially clustered data.
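When the analyst decides that clustering must be accounted for analytically, the practical consequence is usually a variance correction rather than a change in the point estimate. A minimal sketch, assuming a one-way cluster structure and the standard cluster-robust variance for a sample mean (function name and data are illustrative):

```python
import statistics
from collections import defaultdict

def naive_and_cluster_se(values, clusters):
    """Naive SE of the mean vs. a one-way cluster-robust SE
    built from within-cluster residual totals."""
    n = len(values)
    mean = sum(values) / n
    naive = statistics.stdev(values) / n ** 0.5
    sums = defaultdict(float)          # residual total per cluster
    for v, c in zip(values, clusters):
        sums[c] += v - mean
    g = len(sums)
    robust = (g / (g - 1) * sum(s ** 2 for s in sums.values())) ** 0.5 / n
    return naive, robust

# Two "neighborhoods" with strongly clustered outcomes:
values = [1, 1, 1, 1, 9, 9, 9, 9]
clusters = ["a"] * 4 + ["b"] * 4
naive, robust = naive_and_cluster_se(values, clusters)
```

With outcomes this strongly clustered, the robust SE is several times the naive one, which is why the choice of cluster definition discussed above has real analytic consequences.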

  3. Simultaneous alignment and clustering of peptide data using a Gibbs sampling approach

    DEFF Research Database (Denmark)

    Andreatta, Massimo; Lund, Ole; Nielsen, Morten

    2013-01-01

    Motivation: Proteins recognizing short peptide fragments play a central role in cellular signaling. As a result of high-throughput technologies, peptide-binding protein specificities can be studied using large peptide libraries at dramatically lower cost and time. Interpretation of such large...... peptide datasets, however, is a complex task, especially when the data contain multiple receptor binding motifs, and/or the motifs are found at different locations within distinct peptides.Results: The algorithm presented in this article, based on Gibbs sampling, identifies multiple specificities...... of unaligned peptide datasets of variable length. Example applications described in this article include mixtures of binders to different MHC class I and class II alleles, distinct classes of ligands for SH3 domains and sub-specificities of the HLA-A*02:01 molecule.Availability: The Gibbs clustering method...
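The idea of simultaneously clustering peptides by Gibbs sampling can be caricatured for fixed-length sequences as below. This is a toy collapsed Gibbs sampler over position-specific residue frequencies with a reduced alphabet and invented data, not the published GibbsCluster algorithm (which also handles alignment of variable-length peptides).

```python
import random
from collections import defaultdict

random.seed(0)
ALPHABET = "ACDE"  # toy residue alphabet (real peptides use 20 amino acids)

def gibbs_cluster(seqs, k, iters=200, pseudo=0.5):
    """Assign fixed-length sequences to k clusters by collapsed Gibbs sampling."""
    L = len(seqs[0])
    z = [random.randrange(k) for _ in seqs]  # current cluster assignments
    counts = [[defaultdict(float) for _ in range(L)] for _ in range(k)]
    size = [0] * k
    for s, c in zip(seqs, z):                # initialize sufficient statistics
        size[c] += 1
        for p, a in enumerate(s):
            counts[c][p][a] += 1
    for _ in range(iters):
        for i, s in enumerate(seqs):
            c = z[i]                         # remove sequence i from its cluster
            size[c] -= 1
            for p, a in enumerate(s):
                counts[c][p][a] -= 1
            weights = []                     # posterior weight of each cluster
            for c2 in range(k):
                w = size[c2] + pseudo
                for p, a in enumerate(s):
                    w *= (counts[c2][p][a] + pseudo) / (size[c2] + pseudo * len(ALPHABET))
                weights.append(w)
            c = random.choices(range(k), weights=weights)[0]
            z[i] = c                         # re-insert under the sampled label
            size[c] += 1
            for p, a in enumerate(s):
                counts[c][p][a] += 1
    return z

# Two planted motifs: "AAAA"-like vs "DDDD"-like sequences.
seqs = ["AAAA", "AAAC", "AACA", "DDDD", "DDDE", "DDED"]
z = gibbs_cluster(seqs, k=2)
```

With motifs this well separated, the sampler converges to the planted two-cluster split; real peptide data additionally require sampling motif offsets, which is the alignment half of the published method.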

  4. Adaptive designs for the one-sample log-rank test.

    Science.gov (United States)

    Schmidt, Rene; Faldum, Andreas; Kwiecien, Robert

    2017-09-22

    Traditional designs in phase IIa cancer trials are single-arm designs with a binary outcome, for example, tumor response. In some settings, however, a time-to-event endpoint might appear more appropriate, particularly in the presence of loss to follow-up. Then the one-sample log-rank test might be the method of choice. It allows one to compare the survival curve of the patients under treatment to a prespecified reference survival curve. The reference curve usually represents the expected survival under standard of care. In this work, convergence of the one-sample log-rank statistic to Brownian motion is proven using Rebolledo's martingale central limit theorem while accounting for staggered entry times of the patients. On this basis, a confirmatory adaptive one-sample log-rank test is proposed where provision is made for data-dependent sample size reassessment. The focus is on applying the inverse normal method. This is done in two different directions. The first strategy exploits the independent-increments property of the one-sample log-rank statistic. The second strategy is based on the patient-wise separation principle. It is shown by simulation that the proposed adaptive test might help to rescue an underpowered trial and at the same time lowers the average sample number (ASN) under the null hypothesis as compared to a single-stage fixed sample design. © 2017, The International Biometric Society.
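In its textbook form, the one-sample log-rank test compares the observed death count O with the expectation E = Σᵢ Λ₀(tᵢ) under the reference cumulative hazard, standardized as Z = (O − E)/√E. A sketch assuming an exponential reference curve (the rate and toy data are invented; the paper's adaptive machinery is not shown):

```python
import math

def one_sample_logrank(times, events, ref_rate):
    """One-sample log-rank statistic against an exponential reference
    survival curve with hazard `ref_rate` (cumulative hazard L(t) = rate * t)."""
    observed = sum(events)
    expected = sum(ref_rate * t for t in times)
    z = (observed - expected) / math.sqrt(expected)
    return observed, expected, z

# Five patients: follow-up times (years) and death indicators.
times = [1.0, 2.0, 0.5, 1.5, 3.0]
events = [1, 0, 1, 1, 0]
obs, exp, z = one_sample_logrank(times, events, ref_rate=0.5)
```

A markedly negative Z indicates fewer deaths than the reference curve predicts, i.e. evidence of treatment benefit in the single-arm setting described above.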

  5. Spatial cluster modelling

    CERN Document Server

    Lawson, Andrew B

    2002-01-01

    Research has generated a number of advances in methods for spatial cluster modelling in recent years, particularly in the area of Bayesian cluster modelling. Along with these advances has come an explosion of interest in the potential applications of this work, especially in epidemiology and genome research. In one integrated volume, this book reviews the state-of-the-art in spatial clustering and spatial cluster modelling, bringing together research and applications previously scattered throughout the literature. It begins with an overview of the field, then presents a series of chapters that illuminate the nature and purpose of cluster modelling within different application areas, including astrophysics, epidemiology, ecology, and imaging. The focus then shifts to methods, with discussions on point and object process modelling, perfect sampling of cluster processes, partitioning in space and space-time, spatial and spatio-temporal process modelling, nonparametric methods for clustering, and spatio-temporal ...

  6. Baseline Design Compliance Matrix for the Rotary Mode Core Sampling System

    International Nuclear Information System (INIS)

    LECHELT, J.A.

    2000-01-01

    The purpose of the design compliance matrix (DCM) is to provide a single-source document of all design requirements associated with the fifteen subsystems that make up the rotary mode core sampling (RMCS) system. It is intended to be the baseline requirement document for the RMCS system and to be used in governing all future design and design verification activities associated with it. This document is the DCM for the RMCS system used on Hanford single-shell radioactive waste storage tanks. This includes the Exhauster System, Rotary Mode Core Sample Trucks, Universal Sampling System, Diesel Generator System, Distribution Trailer, X-Ray Cart System, Breathing Air Compressor, Nitrogen Supply Trailer, Casks and Cask Truck, Service Trailer, Core Sampling Riser Equipment, Core Sampling Support Trucks, Foot Clamp, Ramps and Platforms and Purged Camera System. Excluded items are tools such as light plants and light stands. Other items such as the breather inlet filter are covered by a different design baseline. In this case, the inlet breather filter is covered by the Tank Farms Design Compliance Matrix

  7. Combining multiple hypothesis testing and affinity propagation clustering leads to accurate, robust and sample size independent classification on gene expression data

    Directory of Open Access Journals (Sweden)

    Sakellariou Argiris

    2012-10-01

    Full Text Available Abstract Background A feature selection method in microarray gene expression data should be independent of platform, disease and dataset size. Our hypothesis is that among the statistically significant ranked genes in a gene list, there should be clusters of genes that share similar biological functions related to the investigated disease. Thus, instead of keeping N top ranked genes, it would be more appropriate to define and keep a number of gene cluster exemplars. Results We propose a hybrid FS method (mAP-KL), which combines multiple hypothesis testing and the affinity propagation (AP) clustering algorithm along with the Krzanowski & Lai cluster quality index, to select a small yet informative subset of genes. We applied mAP-KL on real microarray data, as well as on simulated data, and compared its performance against 13 other feature selection approaches. Across a variety of diseases and number of samples, mAP-KL presents competitive classification results, particularly in neuromuscular diseases, where its overall AUC score was 0.91. Furthermore, mAP-KL generates concise yet biologically relevant and informative N-gene expression signatures, which can serve as a valuable tool for diagnostic and prognostic purposes, as well as a source of potential disease biomarkers in a broad range of diseases. Conclusions mAP-KL is a data-driven and classifier-independent hybrid feature selection method, which applies to any disease classification problem based on microarray data, regardless of the available samples. Combining multiple hypothesis testing and AP leads to subsets of genes which classify unknown samples from both small and large patient cohorts with high accuracy.

  8. Outcome-Dependent Sampling Design and Inference for Cox's Proportional Hazards Model.

    Science.gov (United States)

    Yu, Jichang; Liu, Yanyan; Cai, Jianwen; Sandler, Dale P; Zhou, Haibo

    2016-11-01

    We propose a cost-effective outcome-dependent sampling (ODS) design for failure time data and develop an efficient inference procedure for data collected with this design. To account for the biased sampling scheme, we derive estimators from a weighted partial likelihood estimating equation. The proposed estimators for regression parameters are shown to be consistent and asymptotically normally distributed. A criterion that can be used to optimally implement the ODS design in practice is proposed and studied. The small-sample performance of the proposed method is evaluated by simulation studies. The proposed design and inference procedure are shown to be statistically more powerful than existing alternative designs with the same sample sizes. We illustrate the proposed method with existing real data from the Cancer Incidence and Mortality of Uranium Miners Study.
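The weighted partial likelihood idea can be sketched for a single covariate: each subject enters the Cox score weighted by the inverse of its (design-determined) sampling probability, and the estimating equation is solved for the root. The data, weights and bisection solver below are illustrative assumptions, not the authors' estimator or its asymptotic machinery.

```python
import math

def weighted_cox_score(beta, times, events, x, w):
    """Inverse-probability-weighted Cox partial-likelihood score,
    one covariate, Breslow-style risk sets (a sketch, no ties handling)."""
    n = len(times)
    score = 0.0
    for i in range(n):
        if not events[i]:
            continue
        risk = [j for j in range(n) if times[j] >= times[i]]
        denom = sum(w[j] * math.exp(beta * x[j]) for j in risk)
        num = sum(w[j] * x[j] * math.exp(beta * x[j]) for j in risk)
        score += w[i] * (x[i] - num / denom)
    return score

def solve_beta(times, events, x, w, lo=-5.0, hi=5.0):
    """Bisection on the monotone-decreasing score function."""
    for _ in range(60):
        mid = (lo + hi) / 2.0
        if weighted_cox_score(mid, times, events, x, w) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

# Toy data: w holds inverse sampling probabilities from a hypothetical ODS scheme.
times  = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
events = [1, 1, 1, 1, 0, 1]
x      = [1, 0, 1, 0, 1, 0]
w      = [2.0, 1.0, 2.0, 1.0, 2.0, 1.0]
beta_hat = solve_beta(times, events, x, w)
```

Bisection suffices here because the weighted partial log-likelihood is concave, so its score crosses zero exactly once.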

  9. Post analysis of AE data of seal plug leakage of NAPS-2 and fatigue crack initiation of three point bend sample using cluster and artificial neural network

    International Nuclear Information System (INIS)

    Singh, A.K.; Mehta, H.R.; Bhattacharya, S.

    2003-01-01

    Acoustic emission (AE) signals are weak and passive in nature, which makes separating AE data from noise a challenging task. This paper describes the post-analysis of acoustic emission data from seal plug leakage of an operating PHWR, NAPS-2, Narora, and from fatigue crack initiation of a three-point bend sample, using cluster analysis and an artificial neural network (ANN). First, known AE data generated in the lab by PCB debonding and pencil lead break were analyzed using the ANN to gain confidence. The AE data acquired by scanning all 306 coolant channels at NAPS-2 were then sorted into five separate clusters for different leakage rates and background noise. The fatigue-crack-initiation AE data generated in the MSD lab on a three-point bend sample were sorted into ten separate clusters, of which one cluster contained 98% of the AE data from the crack initiation period, noted with the help of a travelling microscope, while the remaining clusters indicated AE data from other sources and noise. The above data were further analysed with the self-organizing map of an artificial neural network. (author)

  10. Feasibility Study of Parallel Finite Element Analysis on Cluster-of-Clusters

    Science.gov (United States)

    Muraoka, Masae; Okuda, Hiroshi

    With the rapid growth of WAN infrastructure and the development of Grid middleware, it has become a realistic and attractive methodology to connect cluster machines over a wide-area network for the execution of computation-demanding applications. Many existing parallel finite element (FE) applications, however, have been designed and developed with a single computing resource in mind, since such applications require frequent synchronization and communication among processes. There have been few FE applications that can exploit the distributed environment so far. In this study, we explore the feasibility of FE applications on the cluster-of-clusters. First, we classify FE applications into two types, tightly coupled applications (TCA) and loosely coupled applications (LCA), based on their communication pattern. A prototype of each application is implemented on the cluster-of-clusters. We perform numerical experiments executing TCA and LCA on both the cluster-of-clusters and a single cluster. Through these experiments, by comparing the performances and communication cost in each case, we evaluate the feasibility of FEA on the cluster-of-clusters.

  11. A Frequency Domain Design Method For Sampled-Data Compensators

    DEFF Research Database (Denmark)

    Niemann, Hans Henrik; Jannerup, Ole Erik

    1990-01-01

    A new approach to the design of a sampled-data compensator in the frequency domain is investigated. The starting point is a continuous-time compensator for the continuous-time system which satisfy specific design criteria. The new design method will graphically show how the discrete...

  12. The C4 clustering algorithm: Clusters of galaxies in the Sloan Digital Sky Survey

    Energy Technology Data Exchange (ETDEWEB)

    Miller, Christopher J.; Nichol, Robert; Reichart, Dan; Wechsler, Risa H.; Evrard, August; Annis, James; McKay, Timothy; Bahcall, Neta; Bernardi, Mariangela; Boehringer; Connolly, Andrew; Goto, Tomo; Kniazev, Alexie; Lamb, Donald; Postman, Marc; Schneider, Donald; Sheth, Ravi; Voges, Wolfgang

    2005-03-01

    We present the ''C4 Cluster Catalog'', a new sample of 748 clusters of galaxies identified in the spectroscopic sample of the Second Data Release (DR2) of the Sloan Digital Sky Survey (SDSS). The C4 cluster-finding algorithm identifies clusters as overdensities in a seven-dimensional position and color space, thus minimizing projection effects that have plagued previous optical cluster selection. The present C4 catalog covers ~2600 square degrees of sky and ranges in redshift from z = 0.02 to z = 0.17. The mean cluster membership is 36 galaxies (with redshifts) brighter than r = 17.7, but the catalog includes a range of systems, from groups containing 10 members to massive clusters with over 200 cluster members with redshifts. The catalog provides a large number of measured cluster properties including sky location, mean redshift, galaxy membership, summed r-band optical luminosity (L_r), velocity dispersion, as well as quantitative measures of substructure and the surrounding large-scale environment. We use new, multi-color mock SDSS galaxy catalogs, empirically constructed from the ΛCDM Hubble Volume (HV) Sky Survey output, to investigate the sensitivity of the C4 catalog to the various algorithm parameters (detection threshold, choice of passbands and search aperture), as well as to quantify the purity and completeness of the C4 cluster catalog. These mock catalogs indicate that the C4 catalog is ≈90% complete and 95% pure above M_200 = 1 x 10^14 h^-1 M_⊙ and within 0.03 ≤ z ≤ 0.12. Using the SDSS DR2 data, we show that the C4 algorithm finds 98% of X-ray identified clusters and 90% of Abell clusters within 0.03 ≤ z ≤ 0.12. Using the mock galaxy catalogs and the full HV dark matter simulations, we show that the L_r of a cluster is a more robust estimator of the halo mass (M_200) than the galaxy line-of-sight velocity dispersion or the richness of the cluster.
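The core of an overdensity search like C4's can be sketched as neighbor counting within a fixed aperture in the combined feature space. The 2-D toy below stands in for the real seven-dimensional position-plus-color space; the function names, radius and threshold are illustrative, not the published algorithm parameters.

```python
def count_in_aperture(points, center, radius):
    """Number of points within `radius` of `center` (Euclidean distance
    in the feature space)."""
    r2 = radius ** 2
    return sum(1 for p in points
               if sum((a - b) ** 2 for a, b in zip(p, center)) <= r2)

def overdense_indices(points, radius, min_neighbors):
    """Indices of points with at least `min_neighbors` other points
    inside their aperture, i.e. candidate cluster members."""
    return [i for i, p in enumerate(points)
            if count_in_aperture(points, p, radius) - 1 >= min_neighbors]

# 2-D toy feature space: a tight clump of five "cluster galaxies" plus field points.
points = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1), (0.1, 0.1), (-0.1, 0.0),
          (5.0, 5.0), (8.0, 1.0), (2.0, 9.0)]
members = overdense_indices(points, radius=0.5, min_neighbors=4)
```

Adding color dimensions to the distance, as C4 does, suppresses chance projections: interlopers that overlap on the sky rarely also match the cluster's red sequence in color space.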

  13. Baryon Content in a Sample of 91 Galaxy Clusters Selected by the South Pole Telescope at 0.2 < z < 1.25

    Science.gov (United States)

    Chiu, I.; Mohr, J. J.; McDonald, M.; Bocquet, S.; Desai, S.; Klein, M.; Israel, H.; Ashby, M. L. N.; Stanford, A.; Benson, B. A.; Brodwin, M.; Abbott, T. M. C.; Abdalla, F. B.; Allam, S.; Annis, J.; Bayliss, M.; Benoit-Lévy, A.; Bertin, E.; Bleem, L.; Brooks, D.; Buckley-Geer, E.; Bulbul, E.; Capasso, R.; Carlstrom, J. E.; Rosell, A. Carnero; Carretero, J.; Castander, F. J.; Cunha, C. E.; D'Andrea, C. B.; da Costa, L. N.; Davis, C.; Diehl, H. T.; Dietrich, J. P.; Doel, P.; Drlica-Wagner, A.; Eifler, T. F.; Evrard, A. E.; Flaugher, B.; García-Bellido, J.; Garmire, G.; Gaztanaga, E.; Gerdes, D. W.; Gonzalez, A.; Gruen, D.; Gruendl, R. A.; Gschwend, J.; Gupta, N.; Gutierrez, G.; Hlavacek-L, J.; Honscheid, K.; James, D. J.; Jeltema, T.; Kraft, R.; Krause, E.; Kuehn, K.; Kuhlmann, S.; Kuropatkin, N.; Lahav, O.; Lima, M.; Maia, M. A. G.; Marshall, J. L.; Melchior, P.; Menanteau, F.; Miquel, R.; Murray, S.; Nord, B.; Ogando, R. L. C.; Plazas, A. A.; Rapetti, D.; Reichardt, C. L.; Romer, A. K.; Roodman, A.; Sanchez, E.; Saro, A.; Scarpine, V.; Schindler, R.; Schubnell, M.; Sharon, K.; Smith, R. C.; Smith, M.; Soares-Santos, M.; Sobreira, F.; Stalder, B.; Stern, C.; Strazzullo, V.; Suchyta, E.; Swanson, M. E. C.; Tarle, G.; Vikram, V.; Walker, A. R.; Weller, J.; Zhang, Y.

    2018-05-01

    We estimate total mass (M500), intracluster medium (ICM) mass (MICM) and stellar mass (M⋆) in a Sunyaev-Zel'dovich effect (SZE) selected sample of 91 galaxy clusters with masses M500 ≳ 2.5 × 10^14 M⊙ and redshift 0.2 < z < 1.25, and examine the scaling of the baryonic mass and the cold baryonic fraction with cluster halo mass and redshift. We find significant departures from self-similarity in the mass scaling for all quantities, while the redshift trends are all statistically consistent with zero, indicating that the baryon content of clusters at fixed mass has changed remarkably little over the past ≈9 Gyr. We compare our results to the mean baryon fraction (and the stellar mass fraction) in the field, finding that these values lie above (below) those in cluster virial regions in all but the most massive clusters at low redshift. Using a simple model of the matter assembly of clusters from infalling groups with lower masses and from infalling material from the low density environment or field surrounding the parent halos, we show that the measured mass trends without strong redshift trends in the stellar mass scaling relation could be explained by a mass and redshift dependent fractional contribution from field material. Similar analyses of the ICM and baryon mass scaling relations provide evidence for the so-called "missing baryons" outside cluster virial regions.

  14. Enhancing sampling design in mist-net bat surveys by accounting for sample size optimization

    OpenAIRE

    Trevelin, Leonardo Carreira; Novaes, Roberto Leonan Morim; Colas-Rosas, Paul François; Benathar, Thayse Cristhina Melo; Peres, Carlos A.

    2017-01-01

    The advantages of mist-netting, the main technique used in Neotropical bat community studies to date, include logistical implementation, standardization and sampling representativeness. Nonetheless, study designs still have to deal with issues of detectability related to how different species behave and use the environment. Yet there is considerable sampling heterogeneity across available studies in the literature. Here, we approach the problem of sample size optimization. We evaluated the co...

  15. The expression of Mirc1/Mir17-92 cluster in sputum samples correlates with pulmonary exacerbations in cystic fibrosis patients.

    Science.gov (United States)

    Krause, Kathrin; Kopp, Benjamin T; Tazi, Mia F; Caution, Kyle; Hamilton, Kaitlin; Badr, Asmaa; Shrestha, Chandra; Tumin, Dmitry; Hayes, Don; Robledo-Avila, Frank; Hall-Stoodley, Luanne; Klamer, Brett G; Zhang, Xiaoli; Partida-Sanchez, Santiago; Parinandi, Narasimham L; Kirkby, Stephen E; Dakhlallah, Duaa; McCoy, Karen S; Cormet-Boyaka, Estelle; Amer, Amal O

    2017-12-11

    Cystic fibrosis (CF) is a multi-organ disorder characterized by chronic sino-pulmonary infections and inflammation. Many patients with CF suffer from repeated pulmonary exacerbations that are predictors of worsened long-term morbidity and mortality. There are no reliable markers that associate with the onset or progression of an exacerbation or pulmonary deterioration. Previously, we found that the Mirc1/Mir17-92a cluster, which comprises six microRNAs (Mirs), is highly expressed in CF mice and negatively regulates autophagy, which in turn improves CF transmembrane conductance regulator (CFTR) function. Therefore, here we sought to examine the expression of individual Mirs within the Mirc1/Mir17-92 cluster in human cells and biological fluids and determine their role as biomarkers of pulmonary exacerbations and response to treatment. Mirc1/Mir17-92 cluster expression was measured in human CF and non-CF plasma, blood-derived neutrophils, and sputum samples. Values were correlated with pulmonary function, exacerbations and use of CFTR modulators. Mirc1/Mir17-92 cluster expression was not significantly elevated in CF neutrophils or plasma when compared to the non-CF cohort. Cluster expression in CF sputum was significantly higher than its expression in plasma. Elevated CF sputum Mirc1/Mir17-92 cluster expression positively correlated with pulmonary exacerbations and negatively correlated with lung function. Patients with CF undergoing treatment with the CFTR modulator Ivacaftor/Lumacaftor did not demonstrate significant change in the expression of the Mirc1/Mir17-92 cluster after six months of treatment. Mirc1/Mir17-92 cluster expression is a promising biomarker of respiratory status in patients with CF, including pulmonary exacerbation. Published by Elsevier B.V.

  16. MASS CALIBRATION AND COSMOLOGICAL ANALYSIS OF THE SPT-SZ GALAXY CLUSTER SAMPLE USING VELOCITY DISPERSION σ_v AND X-RAY Y_X MEASUREMENTS

    International Nuclear Information System (INIS)

    Bocquet, S.; Saro, A.; Mohr, J. J.; Bazin, G.; Chiu, I.; Desai, S.; Aird, K. A.; Ashby, M. L. N.; Bayliss, M.; Bautz, M.; Benson, B. A.; Bleem, L. E.; Carlstrom, J. E.; Chang, C. L.; Crawford, T. M.; Crites, A. T.; Brodwin, M.; Cho, H. M.; Clocchiatti, A.; De Haan, T.

    2015-01-01

    We present a velocity-dispersion-based mass calibration of the South Pole Telescope Sunyaev-Zel'dovich effect survey (SPT-SZ) galaxy cluster sample. Using a homogeneously selected sample of 100 cluster candidates from 720 deg² of the survey along with 63 velocity dispersion (σ_v) and 16 X-ray Y_X measurements of sample clusters, we simultaneously calibrate the mass-observable relation and constrain cosmological parameters. Our method accounts for cluster selection, cosmological sensitivity, and uncertainties in the mass calibrators. The calibrations using σ_v and Y_X are consistent at the 0.6σ level, with the σ_v calibration preferring ~16% higher masses. We use the full SPT CL data set (SZ clusters + σ_v + Y_X) to measure σ_8(Ω_m/0.27)^0.3 = 0.809 ± 0.036 within a flat ΛCDM model. The SPT cluster abundance is lower than preferred by either the WMAP9 or Planck+WMAP9 polarization (WP) data, but assuming that the sum of the neutrino masses is ∑m_ν = 0.06 eV, we find the data sets to be consistent at the 1.0σ level for WMAP9 and 1.5σ for Planck+WP. Allowing for larger ∑m_ν further reconciles the results. When we combine the SPT CL and Planck+WP data sets with information from baryon acoustic oscillations and Type Ia supernovae, the preferred cluster masses are 1.9σ higher than the Y_X calibration and 0.8σ higher than the σ_v calibration. Given the scale of these shifts (~44% and ~23% in mass, respectively), we execute a goodness-of-fit test; it reveals no tension, indicating that the best-fit model provides an adequate description of the data. Using the multi-probe data set, we measure Ω_m = 0.299 ± 0.009 and σ_8 = 0.829 ± 0.011. Within a νCDM model we find ∑m_ν = 0.148 ± 0.081 eV. We present a consistency test of the cosmic growth rate using SPT clusters. Allowing both the growth index γ and the dark energy equation-of-state parameter w to vary, we find γ = 0.73 ± 0.28 and w = –1.007 ± 0.065, demonstrating that the

  17. Quantification of physical activity using the QAPACE Questionnaire: a two stage cluster sample design survey of children and adolescents attending urban school.

    Science.gov (United States)

    Barbosa, Nicolas; Sanchez, Carlos E; Patino, Efrain; Lozano, Benigno; Thalabard, Jean C; LE Bozec, Serge; Rieu, Michel

    2016-05-01

    Quantification of physical activity as energy expenditure is important from youth onward for the prevention of chronic non-communicable diseases in adulthood. The aim was to quantify physical activity, expressed as daily energy expenditure (DEE), in school children and adolescents aged 8-16 years, by age, gender and socioeconomic level (SEL) in Bogotá. This is a two-stage cluster survey sample drawn from a universe of 4700 schools and 760,000 students across the three socioeconomic levels in Bogotá (low, medium and high). The random sample was 20 schools and 1840 students (904 boys and 936 girls). Anticipating participant dropout and inconsistency in the questionnaire responses, the sample size was increased: 6 individuals of each gender were selected for each of the nine age groups, giving a total sample of 2160 individuals. Selected students completed the QAPACE questionnaire under supervision. The data were analyzed by comparing means with a multivariate general linear model. Fixed factors were gender (boys and girls), age (8 to 16 years) and tri-strata SEL (low, medium and high); the independent variables were height, weight and leisure time, expressed in hours/day; the dependent variables were daily energy expenditure DEE (kJ·kg⁻¹·day⁻¹) during leisure time (DEE-LT), during school time (DEE-ST), during vacation time (DEE-VT), and total mean DEE per year (DEEm-TY). Results: in boys, differences in DEE were significant for LT and all DEE measures, and with SEL all variables were significant, but the age-SEL interaction was significant only for DEE-VT. In girls, all variables were significant with SEL. Post hoc multiple comparisons using Fisher's Least Significant Difference (LSD) test were significant with age for all variables. For both genders and all SELs, girls had the higher values, except in the high SEL (5-6). Boys had higher values in DEE-LT, DEE-ST and DEE-VT, except for DEEm-TY in SEL (5-6). In SEL (5-6) all DEEs for both genders are highest.
For SEL
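
    The two-stage design just described (a draw of schools, then students within each selected school) can be sketched as below. The PPS-with-replacement first stage and the simple-random second stage are illustrative assumptions, and all school sizes are made up:

    ```python
    import random

    def two_stage_cluster_sample(school_sizes, n_schools, n_per_school, seed=1):
        """First stage: PPS (probability proportional to size) draw of schools,
        with replacement, so fewer than n_schools distinct schools may result.
        Second stage: simple random sample of students within each school."""
        rng = random.Random(seed)
        chosen = rng.choices(range(len(school_sizes)),
                             weights=school_sizes, k=n_schools)
        sample = []
        for s in sorted(set(chosen)):
            # SRS of student indices within the selected school
            students = rng.sample(range(school_sizes[s]),
                                  min(n_per_school, school_sizes[s]))
            sample.extend((s, st) for st in students)
        return sample

    # hypothetical universe: 50 schools with 200-2000 students each;
    # 20 schools and 108 students per school echo the survey's 20/2160 split
    random.seed(0)
    sizes = [random.randint(200, 2000) for _ in range(50)]
    sample = two_stage_cluster_sample(sizes, n_schools=20, n_per_school=108)
    ```

    Larger schools are proportionally more likely to enter the first stage, which is what keeps each student's overall selection probability roughly equal across schools.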

  18. Connection between Seyfert galaxies and clusters

    International Nuclear Information System (INIS)

    Petrosyan, A.R.

    1988-01-01

    To identify Seyfert galaxies that are members of clusters, the sample of known Seyfert galaxies (464 objects) is tested against the Zwicky, Abell, and southern clusters. On the basis of the criteria adopted in the paper, 67 Seyfert galaxies are selected as probable members of Zwicky clusters, 15 as members of Abell clusters, and 18 as members of southern clusters. Lists of these objects are given.

  19. Merging history of three bimodal clusters

    Science.gov (United States)

    Maurogordato, S.; Sauvageot, J. L.; Bourdin, H.; Cappi, A.; Benoist, C.; Ferrari, C.; Mars, G.; Houairi, K.

    2011-01-01

    We present a combined X-ray and optical analysis of three bimodal galaxy clusters selected as merging candidates at z ~ 0.1. These targets are part of MUSIC (MUlti-Wavelength Sample of Interacting Clusters), which is a general project designed to study the physics of merging clusters by means of multi-wavelength observations. Observations include spectro-imaging with the XMM-Newton EPIC camera, multi-object spectroscopy (260 new redshifts), and wide-field imaging at the ESO 3.6 m and 2.2 m telescopes. We build a global picture of these clusters using X-ray luminosity and temperature maps together with galaxy density and velocity distributions. Idealized numerical simulations were used to constrain the merging scenario for each system. We show that A2933 is very likely an equal-mass advanced pre-merger ~200 Myr before the core collapse, while A2440 and A2384 are post-merger systems (~450 Myr and ~1.5 Gyr after core collapse, respectively). In the case of A2384, we detect a spectacular filament of galaxies and gas spreading over more than 1 h⁻¹ Mpc, which we infer to have been stripped during the previous collision. The analysis of the MUSIC sample allows us to outline some general properties of merging clusters: a strong luminosity segregation of galaxies in recent post-mergers; the existence of preferential axes - corresponding to the merging directions - along which the BCGs and structures on various scales are aligned; the concomitance, in most major merger cases, of secondary merging or accretion events, with groups infalling onto the main cluster, and in some cases the evidence of previous merging episodes in one of the main components. These results are in good agreement with the hierarchical scenario of structure formation, in which clusters are expected to form by successive merging events, and matter is accreted along large-scale filaments. Based on data obtained with the European Southern Observatory, Chile (programs 072.A-0595, 075.A-0264, and 079.A-0425).

  20. Molecular-based rapid inventories of sympatric diversity: a comparison of DNA barcode clustering methods applied to geography-based vs clade-based sampling of amphibians.

    Science.gov (United States)

    Paz, Andrea; Crawford, Andrew J

    2012-11-01

    Molecular markers offer a universal source of data for quantifying biodiversity. DNA barcoding uses a standardized genetic marker and a curated reference database to identify known species and to reveal cryptic diversity within well-sampled clades. Rapid biological inventories, e.g. rapid assessment programs (RAPs), unlike most barcoding campaigns, are focused on particular geographic localities rather than on clades. Because of the potentially sparse phylogenetic sampling, the addition of DNA barcoding to RAPs may present a greater challenge for the identification of named species or for revealing cryptic diversity. In this article we evaluate the use of DNA barcoding for quantifying lineage diversity within a single sampling site as compared to clade-based sampling, and present examples from amphibians. We compared algorithms for identifying DNA barcode clusters (e.g. species, cryptic species or Evolutionarily Significant Units) using previously published DNA barcode data obtained from geography-based sampling at a site in Central Panama, and from clade-based sampling in Madagascar. We found that clustering algorithms based on genetic distance performed similarly on sympatric as well as clade-based barcode data, while a promising coalescent-based method performed poorly on sympatric data. The various clustering algorithms were also compared in terms of speed and software implementation. Although each method has its shortcomings in certain contexts, we recommend the use of the ABGD method, which not only performs fairly well under either sampling method, but does so in a few seconds and with a user-friendly Web interface.
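
    A minimal sketch of the distance-based clustering step these algorithms share: group sequences whose pairwise p-distance falls below a threshold. ABGD infers that threshold from the "barcode gap"; here it is fixed by hand, and the four toy barcodes are invented:

    ```python
    def p_distance(a, b):
        """Proportion of differing sites between two aligned sequences."""
        return sum(x != y for x, y in zip(a, b)) / len(a)

    def cluster_barcodes(seqs, threshold):
        """Single-linkage clustering via union-find: any pair of sequences
        closer than the threshold ends up in the same putative cluster."""
        parent = list(range(len(seqs)))

        def find(i):
            while parent[i] != i:
                parent[i] = parent[parent[i]]  # path compression
                i = parent[i]
            return i

        for i in range(len(seqs)):
            for j in range(i + 1, len(seqs)):
                if p_distance(seqs[i], seqs[j]) < threshold:
                    parent[find(i)] = find(j)

        clusters = {}
        for i in range(len(seqs)):
            clusters.setdefault(find(i), []).append(i)
        return list(clusters.values())

    # four toy aligned barcodes: two putative species, two sequences each
    seqs = ["ACGTACGTAC", "ACGTACGTAT", "TTGTGCCTAC", "TTGTGCCTAA"]
    groups = cluster_barcodes(seqs, threshold=0.15)
    ```

    With real barcodes one would use a model-corrected distance and a data-driven threshold, but the partitioning logic is the same.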

  1. Sample design considerations of indoor air exposure surveys

    International Nuclear Information System (INIS)

    Cox, B.G.; Mage, D.T.; Immerman, F.W.

    1988-01-01

    Concern about the potential for indoor air pollution has prompted recent surveys of radon and NO₂ concentrations in homes and personal exposure studies of volatile organics, carbon monoxide and pesticides, to name a few. The statistical problems in designing sample surveys that measure the physical environment are diverse and more complicated than those encountered in traditional surveys of human attitudes and attributes. This paper addresses issues encountered when designing indoor air quality (IAQ) studies. General statistical concepts related to target population definition, frame creation, and sample selection for area household surveys and telephone surveys are presented. The implications of different measurement approaches are discussed, and response rate considerations are described.

  2. Enhanced magnetocrystalline anisotropy in deposited cobalt clusters

    Energy Technology Data Exchange (ETDEWEB)

    Eastham, D.A.; Denby, P.M.; Kirkman, I.W. [Daresbury Laboratory, Daresbury, Warrington (United Kingdom); Harrison, A.; Whittaker, A.G. [Department of Chemistry, University of Edinburgh, Edinburgh (United Kingdom)

    2002-01-28

    The magnetic properties of nanomaterials made by embedding cobalt nanocrystals in a copper matrix have been studied using a SQUID magnetometer. The remanent magnetization at temperatures down to 1.8 K and the RT (room temperature) field-dependent magnetization of 1000- and 8000-atom (average-size) cobalt cluster samples have been measured. In all cases it has been possible to relate the morphology of the material to the magnetic properties. However, it is found that the deposited cluster samples contain a majority of sintered clusters even at cobalt concentrations as low as 5% by volume. The remanent magnetization of the 8000-atom samples was found to be bimodal, consisting of one contribution from spherical particles and one from touching (sintered) clusters. Using a Monte Carlo calculation to simulate the sintering it has been possible to calculate a size distribution which fits the RT superparamagnetic behaviour of the 1000-atom samples. The remanent magnetization for this average size of clusters could then be fitted to a simple model assuming that all the nanoparticles are spherical and have a size distribution which fits the superparamagnetic behaviour. This gives a value for the potential energy barrier height (for reversing the spin direction) of 2.0 μeV/atom which is almost four times the accepted value for face-centred-cubic bulk cobalt. The remanent magnetization for the spherical component of the large-cluster sample could not be fitted with a single barrier height and it is conjectured that this is because the barriers change as a function of cluster size. The average value is 1.5 μeV/atom but presumably this value tends toward the bulk value (0.5 μeV/atom) for the largest clusters in this sample. (author)

  3. Are clusters of dietary patterns and cluster membership stable over time? Results of a longitudinal cluster analysis study.

    Science.gov (United States)

    Walthouwer, Michel Jean Louis; Oenema, Anke; Soetens, Katja; Lechner, Lilian; de Vries, Hein

    2014-11-01

    Developing nutrition education interventions based on clusters of dietary patterns can only be done adequately when it is clear if distinctive clusters of dietary patterns can be derived and reproduced over time, if cluster membership is stable, and if it is predictable which type of people belong to a certain cluster. Hence, this study aimed to: (1) identify clusters of dietary patterns among Dutch adults, (2) test the reproducibility of these clusters and stability of cluster membership over time, and (3) identify sociodemographic predictors of cluster membership and cluster transition. This study had a longitudinal design with online measurements at baseline (N=483) and 6 months follow-up (N=379). Dietary intake was assessed with a validated food frequency questionnaire. A hierarchical cluster analysis was performed, followed by a K-means cluster analysis. Multinomial logistic regression analyses were conducted to identify the sociodemographic predictors of cluster membership and cluster transition. At baseline and follow-up, a comparable three-cluster solution was derived, distinguishing a healthy, moderately healthy, and unhealthy dietary pattern. Male and lower-educated participants were significantly more likely to have a less healthy dietary pattern. Further, 251 (66.2%) participants remained in the same cluster, 45 (11.9%) participants changed to an unhealthier cluster, and 83 (21.9%) participants shifted to a healthier cluster. Men and people living alone were significantly more likely to shift toward a less healthy dietary pattern. Distinctive clusters of dietary patterns can be derived. Yet, cluster membership is unstable and only a few sociodemographic factors were associated with cluster membership and cluster transition. These findings imply that clusters based on dietary intake may not be suitable as a basis for nutrition education interventions. Copyright © 2014 Elsevier Ltd. All rights reserved.
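
    The K-means step of such an analysis can be sketched as below. In the study the initial centroids came from a preceding hierarchical cluster analysis; here they are hand-picked seeds, which is an assumption made for brevity, and the two-variable intake scores are synthetic:

    ```python
    import random

    def kmeans(points, centroids, iters=50):
        """Plain K-means: assign each point to its nearest centroid,
        then move each centroid to the mean of its assigned points."""
        for _ in range(iters):
            groups = [[] for _ in centroids]
            for p in points:
                d2 = [sum((a - b) ** 2 for a, b in zip(p, c)) for c in centroids]
                groups[d2.index(min(d2))].append(p)
            centroids = [
                tuple(sum(vals) / len(vals) for vals in zip(*g)) if g else c
                for g, c in zip(groups, centroids)
            ]
        return centroids, groups

    # synthetic intake scores per participant: (fruit/veg servings, snack servings)
    rng = random.Random(3)
    healthy = [(rng.gauss(5, 0.5), rng.gauss(1, 0.3)) for _ in range(30)]
    unhealthy = [(rng.gauss(1, 0.5), rng.gauss(5, 0.3)) for _ in range(30)]
    cents, groups = kmeans(healthy + unhealthy, centroids=[(4.0, 2.0), (2.0, 4.0)])
    ```

    Seeding K-means from a hierarchical solution, as the study does, makes the final partition less dependent on arbitrary starting centroids.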

  4. Estimating HIES Data through Ratio and Regression Methods for Different Sampling Designs

    Directory of Open Access Journals (Sweden)

    Faqir Muhammad

    2007-01-01

    Full Text Available In this study, a comparison has been made of different sampling designs, using the HIES data of North West Frontier Province (NWFP) for 2001-02 and 1998-99 collected from the Federal Bureau of Statistics, Statistical Division, Government of Pakistan, Islamabad. The performance of the estimators has also been considered using bootstrap and jackknife. A two-stage stratified random sample design is adopted by HIES. In the first stage, enumeration blocks and villages are treated as the first-stage Primary Sampling Units (PSUs). The sample PSUs are selected with probability proportional to size. Secondary Sampling Units (SSUs), i.e., households, are selected by systematic sampling with a random start. They have used a single study variable. We have compared the HIES technique with some other designs: stratified simple random sampling, stratified systematic sampling, stratified ranked set sampling, and stratified two-phase sampling. Ratio and regression methods were applied with two study variables: income (y) and household size (x). Jackknife and bootstrap are used for replication-based variance estimation. Simple random sampling with sample sizes 462 to 561 gave moderate variances both by jackknife and bootstrap. By applying systematic sampling, we received moderate variance with sample size 467. In jackknife with systematic sampling, we obtained a variance of the regression estimator greater than that of the ratio estimator for sample sizes 467 to 631. At a sample size of 952 the variance of the ratio estimator becomes greater than that of the regression estimator. The most efficient design turns out to be ranked set sampling compared with the other designs. Ranked set sampling with jackknife and bootstrap gives minimum variance even with the smallest sample size (467). Two-phase sampling gave poor performance. Multi-stage sampling applied by HIES gave large variances, especially if used with a single study variable.
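
    A sketch of the ratio estimator with a delete-one jackknife variance, the core building blocks of the comparison above; the income (y) and household-size (x) values are invented:

    ```python
    def ratio_estimate(y, x, x_total):
        """Ratio estimator of the population total of y:
        Y_hat = (sum(y) / sum(x)) * x_total, where x_total is the known
        population total of the auxiliary variable x."""
        return sum(y) / sum(x) * x_total

    def jackknife_variance(y, x, x_total):
        """Delete-one jackknife variance of the ratio estimate."""
        n = len(y)
        reps = [
            ratio_estimate(y[:i] + y[i + 1:], x[:i] + x[i + 1:], x_total)
            for i in range(n)
        ]
        mean_rep = sum(reps) / n
        return (n - 1) / n * sum((r - mean_rep) ** 2 for r in reps)

    # toy sample: household income (y) and household size (x)
    y = [320, 410, 150, 600, 275, 380, 220, 510]
    x = [4, 5, 2, 7, 3, 5, 3, 6]
    x_total = 3500  # assumed known population total of household sizes
    est = ratio_estimate(y, x, x_total)  # (2865 / 35) * 3500 = 286500
    var = jackknife_variance(y, x, x_total)
    ```

    The regression estimator the abstract compares against replaces the ratio sum(y)/sum(x) with a fitted slope, but the replication machinery for its variance is identical.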

  5. Sampling designs matching species biology produce accurate and affordable abundance indices

    Directory of Open Access Journals (Sweden)

    Grant Harris

    2013-12-01

    Full Text Available Wildlife biologists often use grid-based designs to sample animals and generate abundance estimates. Although sampling in grids is theoretically sound, in application, the method can be logistically difficult and expensive when sampling elusive species inhabiting extensive areas. These factors make it challenging to sample animals and meet the statistical assumption of all individuals having an equal probability of capture. Violating this assumption biases results. Does an alternative exist? Perhaps by sampling only where resources attract animals (i.e., targeted sampling), it would provide accurate abundance estimates more efficiently and affordably. However, biases from this approach would also arise if individuals have an unequal probability of capture, especially if some failed to visit the sampling area. Since most biological programs are resource limited, and acquiring abundance data drives many conservation and management applications, it becomes imperative to identify economical and informative sampling designs. Therefore, we evaluated abundance estimates generated from grid and targeted sampling designs using simulations based on geographic positioning system (GPS) data from 42 Alaskan brown bears (Ursus arctos). Migratory salmon drew brown bears from the wider landscape, concentrating them at anadromous streams. This provided a scenario for testing the targeted approach. Grid and targeted sampling varied by trap amount, location (traps placed randomly, systematically or by expert opinion), and traps stationary or moved between capture sessions. We began by identifying when to sample, and if bears had equal probability of capture. We compared abundance estimates against seven criteria: bias, precision, accuracy, effort, plus encounter rates, and probabilities of capture and recapture. One grid (49 km² cells) and one targeted configuration provided the most accurate results.
Both placed traps by expert opinion and moved traps between capture

  6. Sampling designs matching species biology produce accurate and affordable abundance indices.

    Science.gov (United States)

    Harris, Grant; Farley, Sean; Russell, Gareth J; Butler, Matthew J; Selinger, Jeff

    2013-01-01

    Wildlife biologists often use grid-based designs to sample animals and generate abundance estimates. Although sampling in grids is theoretically sound, in application, the method can be logistically difficult and expensive when sampling elusive species inhabiting extensive areas. These factors make it challenging to sample animals and meet the statistical assumption of all individuals having an equal probability of capture. Violating this assumption biases results. Does an alternative exist? Perhaps by sampling only where resources attract animals (i.e., targeted sampling), it would provide accurate abundance estimates more efficiently and affordably. However, biases from this approach would also arise if individuals have an unequal probability of capture, especially if some failed to visit the sampling area. Since most biological programs are resource limited, and acquiring abundance data drives many conservation and management applications, it becomes imperative to identify economical and informative sampling designs. Therefore, we evaluated abundance estimates generated from grid and targeted sampling designs using simulations based on geographic positioning system (GPS) data from 42 Alaskan brown bears (Ursus arctos). Migratory salmon drew brown bears from the wider landscape, concentrating them at anadromous streams. This provided a scenario for testing the targeted approach. Grid and targeted sampling varied by trap amount, location (traps placed randomly, systematically or by expert opinion), and traps stationary or moved between capture sessions. We began by identifying when to sample, and if bears had equal probability of capture. We compared abundance estimates against seven criteria: bias, precision, accuracy, effort, plus encounter rates, and probabilities of capture and recapture. One grid (49 km² cells) and one targeted configuration provided the most accurate results. Both placed traps by expert opinion and moved traps between capture sessions.

  7. Sampling designs matching species biology produce accurate and affordable abundance indices

    Science.gov (United States)

    Farley, Sean; Russell, Gareth J.; Butler, Matthew J.; Selinger, Jeff

    2013-01-01

    Wildlife biologists often use grid-based designs to sample animals and generate abundance estimates. Although sampling in grids is theoretically sound, in application, the method can be logistically difficult and expensive when sampling elusive species inhabiting extensive areas. These factors make it challenging to sample animals and meet the statistical assumption of all individuals having an equal probability of capture. Violating this assumption biases results. Does an alternative exist? Perhaps by sampling only where resources attract animals (i.e., targeted sampling), it would provide accurate abundance estimates more efficiently and affordably. However, biases from this approach would also arise if individuals have an unequal probability of capture, especially if some failed to visit the sampling area. Since most biological programs are resource limited, and acquiring abundance data drives many conservation and management applications, it becomes imperative to identify economical and informative sampling designs. Therefore, we evaluated abundance estimates generated from grid and targeted sampling designs using simulations based on geographic positioning system (GPS) data from 42 Alaskan brown bears (Ursus arctos). Migratory salmon drew brown bears from the wider landscape, concentrating them at anadromous streams. This provided a scenario for testing the targeted approach. Grid and targeted sampling varied by trap amount, location (traps placed randomly, systematically or by expert opinion), and traps stationary or moved between capture sessions. We began by identifying when to sample, and if bears had equal probability of capture. We compared abundance estimates against seven criteria: bias, precision, accuracy, effort, plus encounter rates, and probabilities of capture and recapture. One grid (49 km² cells) and one targeted configuration provided the most accurate results. 
Both placed traps by expert opinion and moved traps between capture sessions, which

  8. Lagoa Real design. Description and evaluation of sampling system

    International Nuclear Information System (INIS)

    Hashizume, B.K.

    1982-10-01

    This report describes the sample preparation system for drill cores from the Lagoa Real project, aimed at obtaining a representative fraction of each drill-core half. The combined sampling-plus-analysis error and the analytical accuracy were determined by delayed neutron analysis. (author)

  9. Older drivers' attitudes about instrument cluster designs in vehicles.

    Science.gov (United States)

    Owsley, Cynthia; McGwin, Gerald; Seder, Thomas

    2011-11-01

    Little is known about older drivers' preferences and attitudes about instrumentation design in vehicles. Yet visual processing impairments are common among older adults and could impact their ability to interface with a vehicle's dashboard. The purpose of this study is to obtain information from them about this topic, using focus groups and content analysis methodology. A trained facilitator led 8 focus groups of older adults; discussion, stimulated by an outline of dashboard-interface topics, was audiotaped and transcribed. Using multi-step content analysis, a trained coder placed comments into thematic categories and coded comments as positive, negative, or neutral in meaning. Comments were coded into these categories: gauges, knobs/switches, interior lighting, color, lettering, symbols, location, entertainment, GPS, cost, uniformity, and getting information. Comments on gauges and knobs/switches represented half the comments. Women made more comments about getting information; men made more comments about uniformity. Positive and negative comments were made in each category; individual differences in preferences were broad. The results of this study will be used to guide the design of a population-based survey of older drivers about instrument cluster format, which will also examine how their responses are related to their visual processing capabilities. Copyright © 2011 Elsevier Ltd. All rights reserved.

  10. Ethical implications of excessive cluster sizes in cluster randomised trials.

    Science.gov (United States)

    Hemming, Karla; Taljaard, Monica; Forbes, Gordon; Eldridge, Sandra M; Weijer, Charles

    2018-02-20

    The cluster randomised trial (CRT) is commonly used in healthcare research. It is the gold-standard study design for evaluating healthcare policy interventions. A key characteristic of this design is that as more participants are included, in a fixed number of clusters, the increase in achievable power will level off. CRTs with cluster sizes that exceed the point of levelling-off will have excessive numbers of participants, even if they do not achieve nominal levels of power. Excessively large cluster sizes may have ethical implications due to exposing trial participants unnecessarily to the burdens of both participating in the trial and the potential risks of harm associated with the intervention. We explore these issues through the use of two case studies. Where data are routinely collected, available at minimum cost and the intervention poses low risk, the ethical implications of excessively large cluster sizes are likely to be low (case study 1). However, to maximise the social benefit of the study, identification of excessive cluster sizes can allow for prespecified and fully powered secondary analyses. In the second case study, while there is no burden through trial participation (because the outcome data are routinely collected and non-identifiable), the intervention might be considered to pose some indirect risk to patients and risks to the healthcare workers. In this case study it is therefore important that the inclusion of excessively large cluster sizes is justifiable on other grounds (perhaps to show sustainability). In any randomised controlled trial, including evaluations of health policy interventions, it is important to minimise the burdens and risks to participants. Funders, researchers and research ethics committees should be aware of the ethical issues of excessively large cluster sizes in cluster trials. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. 
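
    The levelling-off the authors describe follows from the design effect: with k clusters of size m and intra-cluster correlation ρ, the effective sample size is k·m / (1 + (m − 1)ρ), which is bounded above by k/ρ however large m grows. A small sketch, with an ICC chosen purely for illustration:

    ```python
    def effective_sample_size(k, m, rho):
        """Effective sample size of a cluster design with k clusters of
        size m and intra-cluster correlation rho. The design effect is
        1 + (m - 1) * rho; the limit as m -> infinity is k / rho."""
        return k * m / (1 + (m - 1) * rho)

    k, rho = 20, 0.05  # hypothetical trial: 20 clusters, ICC of 0.05
    cluster_sizes = [10, 50, 100, 1000, 10000]
    ess = [effective_sample_size(k, m, rho) for m in cluster_sizes]
    # gains flatten: no cluster size can push the design past k / rho = 400
    ```

    Past a few hundred participants per cluster, extra recruitment adds burden and risk while buying almost no additional power, which is exactly the ethical concern the abstract raises.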

  11. Design review report for rotary mode core sample truck (RMCST) modifications for flammable gas tanks, preliminary design

    International Nuclear Information System (INIS)

    Corbett, J.E.

    1996-02-01

    This report documents the completion of a preliminary design review for the Rotary Mode Core Sample Truck (RMCST) modifications for flammable gas tanks. The RMCST modifications are intended to support core sampling operations in waste tanks requiring flammable gas controls. The objective of this review was to validate basic design assumptions and concepts to support a path forward leading to a final design. The conclusion reached by the review committee was that the design was acceptable and efforts should continue toward a final design review.

  12. A GMBCG GALAXY CLUSTER CATALOG OF 55,424 RICH CLUSTERS FROM SDSS DR7

    International Nuclear Information System (INIS)

    Hao Jiangang; Annis, James; Johnston, David E.; McKay, Timothy A.; Evrard, August; Siegel, Seth R.; Gerdes, David; Koester, Benjamin P.; Rykoff, Eli S.; Rozo, Eduardo; Wechsler, Risa H.; Busha, Michael; Becker, Matthew; Sheldon, Erin

    2010-01-01

    We present a large catalog of optically selected galaxy clusters from the application of a new Gaussian Mixture Brightest Cluster Galaxy (GMBCG) algorithm to SDSS Data Release 7 data. The algorithm detects clusters by identifying the red-sequence plus brightest cluster galaxy (BCG) feature, which is unique for galaxy clusters and does not exist among field galaxies. Red-sequence clustering in color space is detected using an Error Corrected Gaussian Mixture Model. We run GMBCG on 8240 deg² of photometric data from SDSS DR7 to assemble the largest ever optical galaxy cluster catalog, consisting of over 55,000 rich clusters across the redshift range 0.1 < z < 0.55. We present Monte Carlo tests of completeness and purity and perform cross-matching with X-ray clusters and with the maxBCG sample at low redshift. These tests indicate high completeness and purity across the full redshift range for clusters with 15 or more members.
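
    The red-sequence detection rests on fitting a mixture of Gaussians in color space. A minimal two-component 1-D EM sketch is below; unlike the Error Corrected Gaussian Mixture Model it does not propagate photometric errors, and the color data are synthetic:

    ```python
    import math
    import random

    def em_two_gaussians(xs, iters=300):
        """Minimal EM fit of a two-component 1-D Gaussian mixture, e.g.
        a blue cloud vs a red sequence in a colour distribution."""
        xs = sorted(xs)
        n = len(xs)
        mu = [xs[n // 4], xs[3 * n // 4]]  # crude quartile initialisation
        var = [0.1, 0.1]
        w = [0.5, 0.5]
        for _ in range(iters):
            # E step: responsibility of each component for each point
            resp = []
            for x in xs:
                p = [w[k] / math.sqrt(2 * math.pi * var[k])
                     * math.exp(-(x - mu[k]) ** 2 / (2 * var[k]))
                     for k in range(2)]
                s = p[0] + p[1]
                resp.append([p[0] / s, p[1] / s])
            # M step: update means, variances and mixing weights
            for k in range(2):
                nk = sum(r[k] for r in resp)
                mu[k] = sum(r[k] * x for r, x in zip(resp, xs)) / nk
                var[k] = max(sum(r[k] * (x - mu[k]) ** 2
                                 for r, x in zip(resp, xs)) / nk, 1e-6)
                w[k] = nk / n
        return mu, var, w

    # synthetic colours: broad "blue cloud" plus a tight "red sequence"
    rng = random.Random(7)
    colors = ([rng.gauss(0.8, 0.25) for _ in range(300)]
              + [rng.gauss(1.6, 0.05) for _ in range(120)])
    mu, var, w = em_two_gaussians(colors)
    ```

    The narrow, well-populated component recovered at the redder colour plays the role of the red-sequence detection; the real algorithm additionally deconvolves measurement errors from the fitted widths.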

  13. A GMBCG galaxy cluster catalog of 55,880 rich clusters from SDSS DR7

    Energy Technology Data Exchange (ETDEWEB)

    Hao, Jiangang; McKay, Timothy A.; Koester, Benjamin P.; Rykoff, Eli S.; Rozo, Eduardo; Annis, James; Wechsler, Risa H.; Evrard, August; Siegel, Seth R.; Becker, Matthew; Busha, Michael; /Fermilab /Michigan U. /Chicago U., Astron. Astrophys. Ctr. /UC, Santa Barbara /KICP, Chicago /KIPAC, Menlo Park /SLAC /Caltech /Brookhaven

    2010-08-01

    We present a large catalog of optically selected galaxy clusters from the application of a new Gaussian Mixture Brightest Cluster Galaxy (GMBCG) algorithm to SDSS Data Release 7 data. The algorithm detects clusters by identifying the red sequence plus Brightest Cluster Galaxy (BCG) feature, which is unique for galaxy clusters and does not exist among field galaxies. Red sequence clustering in color space is detected using an Error Corrected Gaussian Mixture Model. We run GMBCG on 8240 square degrees of photometric data from SDSS DR7 to assemble the largest ever optical galaxy cluster catalog, consisting of over 55,000 rich clusters across the redshift range 0.1 < z < 0.55. We present Monte Carlo tests of completeness and purity and perform cross-matching with X-ray clusters and with the maxBCG sample at low redshift. These tests indicate high completeness and purity across the full redshift range for clusters with 15 or more members.

  14. Time Clustered Sampling Can Inflate the Inferred Substitution Rate in Foot-And-Mouth Disease Virus Analyses.

    Science.gov (United States)

    Pedersen, Casper-Emil T; Frandsen, Peter; Wekesa, Sabenzia N; Heller, Rasmus; Sangula, Abraham K; Wadsworth, Jemma; Knowles, Nick J; Muwanika, Vincent B; Siegismund, Hans R

    2015-01-01

    With the emergence of analytical software for the inference of viral evolution, a number of studies have focused on estimating important parameters such as the substitution rate and the time to the most recent common ancestor (tMRCA) for rapidly evolving viruses. Coupled with an increasing abundance of sequence data sampled under widely different schemes, an effort to keep results consistent and comparable is needed. This study emphasizes commonly disregarded problems in the inference of evolutionary rates in viral sequence data when sampling is unevenly distributed on a temporal scale, through a study of the foot-and-mouth disease (FMD) virus serotypes SAT 1 and SAT 2. Our study shows that clustered temporal sampling in phylogenetic analyses of FMD viruses will strongly bias the inferences of substitution rates and tMRCA because the inferred rates in such data sets reflect a rate closer to the mutation rate rather than the substitution rate. Estimating evolutionary parameters from viral sequences should be performed with due consideration of the differences in short-term and longer-term evolutionary processes occurring within sets of temporally sampled viruses, and studies should carefully consider how samples are combined.

  15. Mechanical design and simulation of an automatized sample exchanger

    International Nuclear Information System (INIS)

    Lopez, Yon; Gora, Jimmy; Bedregal, Patricia; Hernandez, Yuri; Baltuano, Oscar; Gago, Javier

    2013-01-01

    The design of a turntable-type sample exchanger for irradiation, with a capacity of up to 20 capsules, was performed. Its function is the automatic sending of samples contained in polyethylene capsules, via a pneumatic system, for irradiation in the grid position of the reactor core and subsequent analysis by neutron activation. This study shows the structural design analysis and the calculations behind the selection of motors and actuators. This development will improve efficiency in the analysis, reducing manual handling by the workers and also their radiation exposure time. (authors)

  16. X-Ray Morphological Analysis of the Planck ESZ Clusters

    Energy Technology Data Exchange (ETDEWEB)

    Lovisari, Lorenzo; Forman, William R.; Jones, Christine; Andrade-Santos, Felipe; Randall, Scott; Kraft, Ralph [Harvard-Smithsonian Center for Astrophysics, 60 Garden Street, Cambridge, MA 02138 (United States); Ettori, Stefano [INAF, Osservatorio Astronomico di Bologna, via Ranzani 1, I-40127 Bologna (Italy); Arnaud, Monique; Démoclès, Jessica; Pratt, Gabriel W. [Laboratoire AIM, IRFU/Service d’Astrophysique—CEA/DRF—CNRS—Université Paris Diderot, Bât. 709, CEA-Saclay, F-91191 Gif-sur-Yvette Cedex (France)

    2017-09-01

    X-ray observations show that galaxy clusters have a very large range of morphologies. The most disturbed systems, which are useful for studying how clusters form and grow and for testing physical models, may complicate cosmological studies because cluster mass determination becomes more challenging. Thus, we need to understand the cluster properties of our samples to reduce possible biases. This is complicated by the fact that different experiments may detect different cluster populations. For example, Sunyaev–Zeldovich (SZ) selected cluster samples have been found to include a greater fraction of disturbed systems than X-ray selected samples. In this paper we determine eight morphological parameters for the Planck Early Sunyaev–Zeldovich (ESZ) objects observed with XMM-Newton. We found that two parameters, concentration and centroid shift, are the best for distinguishing between relaxed and disturbed systems. For each parameter we provide the values that allow selecting the most relaxed or most disturbed objects from a sample. We found no dependence of the cluster dynamical state on mass. By comparing our results with those obtained for the REXCESS clusters, we also confirm that the ESZ clusters indeed tend to be more disturbed, as found by previous studies.
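As a rough illustration of two of the morphological parameters named above, the sketch below computes a concentration (flux ratio between two apertures) and a centroid shift (rms offset of flux centroids in shrinking apertures) on a synthetic, perfectly relaxed image. The aperture radii and exact definitions are illustrative assumptions, not the paper's recipe:

```python
import numpy as np

# Mock X-ray surface-brightness image: a single symmetric Gaussian "cluster".
# Aperture radii in pixels stand in for the physical apertures (e.g. 100 and
# 500 kpc) often used in the literature.
n = 201
y, x = np.mgrid[0:n, 0:n]
cx = cy = n // 2
img = np.exp(-((x - cx) ** 2 + (y - cy) ** 2) / (2 * 15.0 ** 2))

r = np.hypot(x - cx, y - cy)
concentration = img[r < 20].sum() / img[r < 60].sum()

def centroid(aperture):
    """Flux-weighted centroid within a circular aperture."""
    m = r < aperture
    w = img[m]
    return np.array([(x[m] * w).sum(), (y[m] * w).sum()]) / w.sum()

# Centroid shift: rms offset of the centroid in shrinking apertures,
# normalized by the outermost aperture radius.
apertures = np.arange(20, 61, 5)
cents = np.array([centroid(a) for a in apertures])
offsets = np.hypot(*(cents - cents[-1]).T)
w_shift = offsets.std() / apertures[-1]
```

A disturbed (asymmetric) image would raise `w_shift` and typically lower `concentration`, which is why these two parameters separate relaxed from disturbed systems.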

  17. Extended radio sources in the cluster environment

    International Nuclear Information System (INIS)

    Burns, J.O. Jr.

    1979-01-01

    Extended radio galaxies that lie in rich and poor clusters were studied. A sample of 3CR and 4C radio sources that spatially coincide with poor Zwicky clusters of galaxies was observed to obtain accurate positions and flux densities. Interferometer observations at a resolution of ≈10 arcsec were then performed on the sample. The resulting maps were used to determine the nature of the extended source structure, to make secure optical identifications, and to eliminate possible background sources. The results suggest that the environments around both classical double and head-tail radio sources are similar in rich and poor clusters. The majority of the poor cluster sources exhibit some signs of morphological distortion (i.e., head-tails) indicative of dynamic interaction with a relatively dense intracluster medium. A large fraction (60 to 100%) of all radio sources appear to be members of clusters of galaxies if one includes both poor and rich cluster sources. Detailed total intensity and polarization observations for a more restricted sample of two classical double sources and nine head-tail galaxies were also performed, in order to examine the spatial distributions of spectral index and polarization. Thin streams of radio emission appear to connect the nuclear radio point components to the more extended structures in the head-tail galaxies. It is suggested that a non-relativistic plasma beam can explain the appearance of the thin streams and the larger-scale structure, as well as the energy needed to generate the observed radio emission. The rich and poor radio cluster samples are combined to investigate the relationship between source morphology and the scale sizes of clustering. There is some indication that a large fraction of radio sources, including those in these samples, are in superclusters of galaxies.

  18. Mutation Clusters from Cancer Exome.

    Science.gov (United States)

    Kakushadze, Zura; Yu, Willie

    2017-08-15

    We apply our statistically deterministic machine learning/clustering algorithm *K-means (recently developed in https://ssrn.com/abstract=2908286) to 10,656 published exome samples for 32 cancer types. A majority of cancer types exhibit a mutation clustering structure. Our results are in-sample stable. They are also out-of-sample stable when applied to 1389 published genome samples across 14 cancer types. In contrast, we find in- and out-of-sample instabilities in cancer signatures extracted from exome samples via nonnegative matrix factorization (NMF), a computationally costly, non-deterministic method. Extracting stable mutation structures from exome data could have important implications for speed and cost, which are critical for early-stage cancer diagnostics, such as novel blood-test methods currently in development.
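A hedged sketch of the kind of clustering step involved: plain Lloyd's K-means on a toy mutation-count matrix, with a deterministic farthest-first seeding standing in for the paper's statistically deterministic *K-means initialization (synthetic data; the 96 channels mimic the usual trinucleotide mutation contexts):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for an exome mutation matrix: 90 "samples" over 96 mutation
# channels, drawn from three synthetic profiles (not real cancer data).
profiles = rng.uniform(0, 1, size=(3, 96))
X = np.vstack([p + rng.normal(0, 0.05, size=(30, 96)) for p in profiles])

def kmeans(X, k, iters=50):
    """Lloyd's algorithm with a farthest-first initialization. The paper's
    *K-means adds a statistically deterministic scheme not reproduced here."""
    centers = [X[0]]
    for _ in range(k - 1):                       # deterministic, spread-out seeds
        d = np.min([((X - c) ** 2).sum(1) for c in centers], axis=0)
        centers.append(X[np.argmax(d)])
    centers = np.array(centers)
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
        centers = np.array([X[labels == j].mean(0) if (labels == j).any()
                            else centers[j] for j in range(k)])
    return labels, centers

labels, centers = kmeans(X, k=3)
```

With well-separated synthetic profiles, the recovered labels coincide with the generating groups; "in-sample stability" in the paper's sense would be checked by rerunning on perturbed or resampled data.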

  19. Employing post-DEA cross-evaluation and cluster analysis in a sample of Greek NHS hospitals.

    Science.gov (United States)

    Flokou, Angeliki; Kontodimopoulos, Nick; Niakas, Dimitris

    2011-10-01

    To increase Data Envelopment Analysis (DEA) discrimination of efficient Decision Making Units (DMUs), by complementing "self-evaluated" efficiencies with "peer-evaluated" cross-efficiencies and, based on these results, to classify the DMUs using cluster analysis. Healthcare, which lacks such studies, was chosen as the study area. The sample consisted of 27 small- to medium-sized (70-500 beds) NHS general hospitals distributed throughout Greece, in areas where they are the sole NHS representatives. DEA was performed on 2005 data collected from the Ministry of Health and the General Secretariat of the National Statistical Service. Three inputs (hospital beds, physicians, and other health professionals) and three outputs (case-mix-adjusted hospitalized cases, surgeries, and outpatient visits) were included in input-oriented, constant-returns-to-scale (CRS) and variable-returns-to-scale (VRS) models. In a second stage (post-DEA), aggressive and benevolent cross-efficiency formulations and clustering were employed to validate (or not) the initial DEA scores. The "maverick index" was used to rank the peer-appraised hospitals. All analyses were performed using custom-made software. Ten benchmark hospitals were identified by DEA, but the aggressive and benevolent formulations showed that two and four of them, respectively, were at the lower end of the maverick index list. On the other hand, only one 100% efficient (self-appraised) hospital was at the higher end of the list using either formulation. Cluster analysis produced a hierarchical "tree" structure that dichotomized the hospitals in accordance with the cross-evaluation results and provided insight into the two-dimensional path to improving efficiency. This is, to our knowledge, the first study in the healthcare domain to employ both of these post-DEA techniques (cross-efficiency and clustering) at the hospital (i.e., micro) level. The potential benefit for decision-makers is the capability to examine high
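A minimal sketch of how CCR self-efficiencies and a cross-efficiency matrix of the kind described above can be computed. The data are invented toy values, scipy's `linprog` stands in for the custom software, and the sketch simply uses whichever optimal weights the solver returns rather than the aggressive or benevolent secondary-goal formulations:

```python
import numpy as np
from scipy.optimize import linprog

# Toy data: 6 hypothetical hospitals, 2 inputs (beds, staff) and
# 2 outputs (cases, visits). Values are made up for illustration.
X = np.array([[100, 50], [120, 70], [80, 40],
              [150, 90], [90, 60], [110, 55]], float)
Y = np.array([[900, 300], [1000, 350], [850, 280],
              [1100, 380], [700, 250], [950, 330]], float)
n, m = X.shape
s = Y.shape[1]

def ccr_weights(k):
    """Input-oriented CCR multiplier problem for DMU k:
    max u.y_k  s.t.  v.x_k = 1,  u.y_j - v.x_j <= 0 for all j,  u, v >= 0."""
    c = np.concatenate([-Y[k], np.zeros(m)])           # linprog minimizes
    A_eq = np.concatenate([np.zeros(s), X[k]])[None]   # v.x_k = 1
    A_ub = np.hstack([Y, -X])                          # u.y_j - v.x_j <= 0
    res = linprog(c, A_ub=A_ub, b_ub=np.zeros(n),
                  A_eq=A_eq, b_eq=[1.0], bounds=(0, None))
    return res.x[:s], res.x[s:]

# Cross-efficiency matrix: row k applies DMU k's weights to every DMU.
E = np.zeros((n, n))
for k in range(n):
    u, v = ccr_weights(k)
    E[k] = (Y @ u) / (X @ v)

self_eff = np.diag(E)          # standard CCR ("self-evaluated") scores
cross_eff = E.mean(axis=0)     # peer-appraised mean cross-efficiency
```

A maverick-style index can then be built from the gap between `self_eff` and `cross_eff`: a DMU that looks efficient only under its own weights is flagged as a maverick.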

  20. Outcome-Dependent Sampling Design and Inference for Cox’s Proportional Hazards Model

    Science.gov (United States)

    Yu, Jichang; Liu, Yanyan; Cai, Jianwen; Sandler, Dale P.; Zhou, Haibo

    2016-01-01

    We propose a cost-effective outcome-dependent sampling (ODS) design for failure-time data and develop an efficient inference procedure for data collected with this design. To account for the biased sampling scheme, we derive estimators from a weighted partial likelihood estimating equation. The proposed estimators for regression parameters are shown to be consistent and asymptotically normally distributed. A criterion that can be used to optimally implement the ODS design in practice is proposed and studied. The small-sample performance of the proposed method is evaluated by simulation studies. The proposed design and inference procedure are shown to be statistically more powerful than existing alternative designs with the same sample sizes. We illustrate the proposed method with real data from the Cancer Incidence and Mortality of Uranium Miners Study. PMID:28090134

  1. Micro-scale Spatial Clustering of Cholera Risk Factors in Urban Bangladesh.

    Science.gov (United States)

    Bi, Qifang; Azman, Andrew S; Satter, Syed Moinuddin; Khan, Azharul Islam; Ahmed, Dilruba; Riaj, Altaf Ahmed; Gurley, Emily S; Lessler, Justin

    2016-02-01

    population suggests a possible role for highly targeted interventions. Studies with cluster designs in areas with strong spatial clustering of exposures should increase sample size to account for the correlation of these exposures.

  2. Micro-scale Spatial Clustering of Cholera Risk Factors in Urban Bangladesh.

    Directory of Open Access Journals (Sweden)

    Qifang Bi

    2016-02-01

    cholera endemic population suggests a possible role for highly targeted interventions. Studies with cluster designs in areas with strong spatial clustering of exposures should increase sample size to account for the correlation of these exposures.

  3. A random cluster survey and a convenience sample give comparable estimates of immunity to vaccine preventable diseases in children of school age in Victoria, Australia.

    Science.gov (United States)

    Kelly, Heath; Riddell, Michaela A; Gidding, Heather F; Nolan, Terry; Gilbert, Gwendolyn L

    2002-08-19

    We compared estimates of the age-specific population immunity to measles, mumps, rubella, hepatitis B and varicella zoster viruses in Victorian school children obtained by a national sero-survey, using a convenience sample of residual sera from diagnostic laboratories throughout Australia, with those from a three-stage random cluster survey. When grouped according to school age (primary or secondary school) there was no significant difference in the estimates of immunity to measles, mumps, hepatitis B or varicella. Compared with the convenience sample, the random cluster survey estimated higher immunity to rubella in samples from both primary (98.7% versus 93.6%, P = 0.002) and secondary school students (98.4% versus 93.2%, P = 0.03). Despite some limitations, this study suggests that the collection of a convenience sample of sera from diagnostic laboratories is an appropriate sampling strategy to provide population immunity data that will inform Australia's current and future immunisation policies. Copyright 2002 Elsevier Science Ltd.
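The rubella comparison above is essentially a two-proportion test. A minimal sketch using a pooled two-proportion z-test; the per-group sample sizes are invented (the abstract reports only the percentages), so the resulting p-value is illustrative rather than the paper's:

```python
import math

def two_prop_z(p1, n1, p2, n2):
    """Pooled two-proportion z-test (normal approximation)."""
    x1, x2 = p1 * n1, p2 * n2
    p = (x1 + x2) / (n1 + n2)                     # pooled proportion
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # two-sided p-value from the standard normal tail
    pval = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, pval

# Hypothetical n = 300 per group; proportions from the rubella comparison.
z, pval = two_prop_z(0.987, 300, 0.936, 300)
```

With a few hundred children per group, a 98.7% versus 93.6% difference is comfortably significant, consistent with the abstract's P = 0.002.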

  4. In silico sampling reveals the effect of clustering and shows that the log-normal rank abundance curve is an artefact

    NARCIS (Netherlands)

    Neuteboom, J.H.; Struik, P.C.

    2005-01-01

    The impact of clustering on rank abundance, species-individual (S-N)and species-area curves was investigated using a computer programme for in silico sampling. In a rank abundance curve the abundances of species are plotted on log-scale against species sequence. In an S-N curve the number of species

  5. A large sample of shear-selected clusters from the Hyper Suprime-Cam Subaru Strategic Program S16A Wide field mass maps

    Science.gov (United States)

    Miyazaki, Satoshi; Oguri, Masamune; Hamana, Takashi; Shirasaki, Masato; Koike, Michitaro; Komiyama, Yutaka; Umetsu, Keiichi; Utsumi, Yousuke; Okabe, Nobuhiro; More, Surhud; Medezinski, Elinor; Lin, Yen-Ting; Miyatake, Hironao; Murayama, Hitoshi; Ota, Naomi; Mitsuishi, Ikuyuki

    2018-01-01

    We present the result of searching for clusters of galaxies based on weak gravitational lensing analysis of the ∼160 deg² area surveyed by Hyper Suprime-Cam (HSC) as a Subaru Strategic Program. HSC is a new prime focus optical imager with a 1.5°-diameter field of view on the 8.2 m Subaru telescope. The superb median seeing on the HSC i-band images of 0.56" allows the reconstruction of high angular resolution mass maps via weak lensing, which is crucial for the weak lensing cluster search. We identify 65 mass map peaks with a signal-to-noise (S/N) ratio larger than 4.7, and carefully examine their properties by cross-matching the clusters with optical and X-ray cluster catalogs. We find that all the 39 peaks with S/N > 5.1 have counterparts in the optical cluster catalogs, and only 2 out of the 65 peaks are probably false positives. The upper limits of X-ray luminosities from the ROSAT All Sky Survey (RASS) imply the existence of an X-ray underluminous cluster population. We show that the X-rays from the shear-selected clusters can be statistically detected by stacking the RASS images. The inferred average X-ray luminosity is about half that of the X-ray-selected clusters of the same mass. The radial profile of the dark matter distribution derived from the stacking analysis is well modeled by the Navarro-Frenk-White profile with a small concentration parameter value of c_500 ∼ 2.5, which suggests that the selection bias on the orientation or the internal structure for our shear-selected cluster sample is not strong.

  6. Damage evolution analysis of coal samples under cyclic loading based on single-link cluster method

    Science.gov (United States)

    Zhang, Zhibo; Wang, Enyuan; Li, Nan; Li, Xuelong; Wang, Xiaoran; Li, Zhonghui

    2018-05-01

    In this paper, the acoustic emission (AE) response of coal samples under cyclic loading is measured. The results show a good positive correlation between AE parameters and stress, and the AE signal of coal samples under cyclic loading exhibits a clear Kaiser effect. The single-link cluster (SLC) method is applied to analyze the spatial evolution characteristics of AE events and the damage evolution process of the coal samples. It is found that the subset scale of the SLC structure decreases as the number of loading cycles increases, and there is a negative linear relationship between the subset scale and the degree of damage. The spatial correlation length ξ of the SLC structure is also calculated. It fluctuates around a certain value from the second through the fifth loading cycle but clearly increases in the sixth. Based on the microcrack density criterion, the failure process of the coal samples is a transformation from small-scale to large-scale damage, which explains the changes in the spatial correlation length. This systematic analysis shows that the SLC method is an effective way to study the damage evolution of coal samples under cyclic loading, and it provides useful reference values for studying coal bursts.
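A sketch of single-link clustering applied to synthetic AE event coordinates, with one plausible (assumed) definition of a spatial correlation length for the dominant cluster; scipy's hierarchy module provides the single-linkage step:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(2)

# Synthetic AE event locations: two spatial damage zones in a coal sample
# (coordinates in mm; purely illustrative, not measured data).
zone_a = rng.normal([10, 10, 10], 1.0, size=(40, 3))
zone_b = rng.normal([30, 25, 15], 1.0, size=(40, 3))
events = np.vstack([zone_a, zone_b])

# Single-link (nearest-neighbour) agglomerative clustering,
# cut at a 10 mm link length.
Z = linkage(events, method="single")
labels = fcluster(Z, t=10.0, criterion="distance")

# One plausible "spatial correlation length": rms pairwise distance within
# the largest cluster (the paper's exact estimator may differ).
biggest = np.bincount(labels)[1:].argmax() + 1
pts = events[labels == biggest]
d = np.linalg.norm(pts[:, None] - pts[None], axis=-1)
xi = np.sqrt((d ** 2).sum() / (len(pts) * (len(pts) - 1)))
```

As damage localizes, AE events concentrate into fewer, larger SLC subsets, and a correlation length of this kind grows, qualitatively matching the behaviour reported in the sixth loading cycle.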

  7. Alloy design as an inverse problem of cluster expansion models

    DEFF Research Database (Denmark)

    Larsen, Peter Mahler; Kalidindi, Arvind R.; Schmidt, Søren

    2017-01-01

    Central to a lattice model of an alloy system is the description of the energy of a given atomic configuration, which can be conveniently developed through a cluster expansion. Given a specific cluster expansion, the ground state of the lattice model at 0 K can be solved by finding the configuration that minimizes the energy. Here, we consider the inverse problem in terms of energetically distinct configurations, using a constraint satisfaction model to identify constructible configurations, and show that a convex hull can be used to identify ground states. To demonstrate the approach, we solve for all ground states for a binary alloy in a 2D...

  8. Clustering of near clusters versus cluster compactness

    International Nuclear Information System (INIS)

    Yu Gao; Yipeng Jing

    1989-01-01

    The clustering properties of near Zwicky clusters are studied using the two-point angular correlation function. The angular correlation functions are estimated for compact and medium-compact clusters, for open clusters, and for all near Zwicky clusters. The results show much stronger clustering for compact and medium-compact clusters than for open clusters, and that open clusters have nearly the same clustering strength as galaxies. A more detailed study of how correlation strength depends on cluster compactness is warranted. (author)
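The two-point angular correlation function can be estimated by comparing pair counts in the data with pair counts in an unclustered random catalog. A small flat-sky sketch using the natural (Peebles-Hauser) estimator on synthetic points; the geometry, catalog sizes, and binning are all invented:

```python
import numpy as np

rng = np.random.default_rng(3)

def pair_seps(p):
    """All pairwise separations (upper triangle) for a 2D point set."""
    d = np.linalg.norm(p[:, None] - p[None], axis=-1)
    return d[np.triu_indices(len(p), k=1)]

# "Clustered" catalog: points scattered around 20 group centres inside a
# 10 x 10 deg flat-sky patch (synthetic, for illustration only).
centres = rng.uniform(0, 10, size=(20, 2))
data = centres[rng.integers(0, 20, 400)] + rng.normal(0, 0.1, (400, 2))

rand = rng.uniform(0, 10, size=(2000, 2))        # unclustered comparison

bins = np.linspace(0.05, 2.0, 15)
DD, _ = np.histogram(pair_seps(data), bins)
RR, _ = np.histogram(pair_seps(rand), bins)

nd, nr = len(data), len(rand)
# Natural (Peebles-Hauser) estimator of the angular correlation function.
w = (DD / RR) * (nr * (nr - 1)) / (nd * (nd - 1)) - 1
```

For strongly clustered points, `w` is large and positive at small separations and falls toward zero at large separations, which is the signature the abstract compares across compact and open cluster subsamples.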

  9. Relative efficiency of unequal versus equal cluster sizes in cluster randomized trials using generalized estimating equation models.

    Science.gov (United States)

    Liu, Jingxia; Colditz, Graham A

    2018-05-01

    There is growing interest in conducting cluster randomized trials (CRTs). For simplicity in sample size calculation, cluster sizes are often assumed to be identical across all clusters. However, equal cluster sizes are not guaranteed in practice, so the relative efficiency (RE) of unequal versus equal cluster sizes has been investigated when testing the treatment effect. One of the most important approaches to analyzing correlated data is the generalized estimating equation (GEE) approach proposed by Liang and Zeger, in which a "working correlation structure" is introduced and the association pattern depends on a vector of association parameters denoted by ρ. In this paper, we utilize GEE models to test the treatment effect in a two-group comparison for continuous, binary, or count data in CRTs, and derive the variances of the estimator of the treatment effect for the different types of outcome. RE is defined as the ratio of the variance of the estimator of the treatment effect for equal to unequal cluster sizes. We focus on the exchangeable working correlation structure, commonly used in CRTs, and derive simpler formulas for the RE with continuous, binary, and count outcomes. Finally, REs are investigated for several scenarios of cluster size distributions through simulation studies. We propose an adjusted sample size to compensate for the efficiency loss, as well as an optimal sample size estimation based on the GEE models under a fixed budget, for known and unknown association parameter (ρ) in the working correlation structure within the cluster. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
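For a cluster-level treatment and an exchangeable correlation, a standard textbook approximation says a cluster of size m contributes information proportional to m/(1 + (m - 1)ρ); the RE defined above then follows directly. A sketch under that approximation (the paper derives outcome-specific formulas that this does not reproduce):

```python
import numpy as np

def relative_efficiency(sizes, rho):
    """RE of unequal vs. equal cluster sizes for a cluster-level treatment
    effect under an exchangeable working correlation. Information from a
    cluster of size m is taken proportional to m / (1 + (m - 1) * rho),
    a common approximation; the paper's formulas are outcome-specific."""
    sizes = np.asarray(sizes, float)
    info = lambda m: m / (1 + (m - 1) * rho)
    var_unequal = 1 / info(sizes).sum()
    var_equal = 1 / (len(sizes) * info(sizes.mean()))
    return var_equal / var_unequal

re_equal = relative_efficiency([10, 10, 10, 10], rho=0.05)  # RE = 1
re_varied = relative_efficiency([2, 6, 10, 22], rho=0.05)   # same mean size
```

Because the information function is concave in m, unequal sizes always lose efficiency (RE < 1), motivating the adjusted sample size the abstract proposes.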

  10. Comparison of sampling strategies for tobacco retailer inspections to maximize coverage in vulnerable areas and minimize cost.

    Science.gov (United States)

    Lee, Joseph G L; Shook-Sa, Bonnie E; Bowling, J Michael; Ribisl, Kurt M

    2017-06-23

    In the United States, tens of thousands of inspections of tobacco retailers are conducted each year. Various sampling choices can reduce travel costs, emphasize enforcement in areas with greater non-compliance, and allow for comparability between states and over time. We sought to develop a model sampling strategy for state tobacco retailer inspections. Using a 2014 list of 10,161 North Carolina tobacco retailers, we compared results from simple random sampling; stratified sampling clustered at the ZIP code level; and stratified sampling clustered at the census tract level. We conducted a simulation of repeated sampling and compared the approaches on precision, coverage, and retailer dispersion. While maintaining an adequate design effect and statistical precision appropriate for a public health enforcement program, both the stratified, clustered ZIP- and tract-based approaches were feasible. Both strategies yielded improvements over simple random sampling, with relative improvements, respectively, in average distance between retailers (reduced 5.0% and 1.9%), percent Black residents in sampled neighborhoods (increased 17.2% and 32.6%), percent Hispanic residents in sampled neighborhoods (reduced 2.2% and increased 18.3%), percentage of sampled retailers located near schools (increased 61.3% and 37.5%), and poverty rate in sampled neighborhoods (increased 14.0% and 38.2%). States can make retailer inspections more efficient and targeted with stratified, clustered sampling. Statistically appropriate sampling strategies like these should be considered by states, researchers, and the Food and Drug Administration (FDA) to improve program impact and allow for comparisons over time and across states. The authors present a model tobacco retailer sampling strategy for promoting compliance and reducing costs that could be used by U.S. states and the FDA. The design is feasible to implement in North Carolina. Use of
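A minimal sketch of the stratified, clustered selection being compared above: sample clusters (tracts) within strata, then retailers within the chosen clusters. The frame, strata, and sample sizes are invented:

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic retailer frame: 500 retailers in 50 "tracts" (clusters), each
# tract assigned to one of two strata (e.g., high/low poverty).
tract = np.repeat(np.arange(50), 10)
stratum = np.where(np.arange(50) < 20, "high_poverty", "other")[tract]

def stratified_cluster_sample(tract, stratum, clusters_per_stratum,
                              per_cluster, rng):
    """Two-stage sample: clusters within strata, then units within clusters."""
    chosen = []
    for s in np.unique(stratum):
        tracts_s = np.unique(tract[stratum == s])
        picked = rng.choice(tracts_s, size=clusters_per_stratum, replace=False)
        for t in picked:
            ids = np.flatnonzero(tract == t)
            chosen.extend(rng.choice(ids, size=per_cluster, replace=False))
    return np.array(chosen)

sample = stratified_cluster_sample(tract, stratum, clusters_per_stratum=5,
                                   per_cluster=4, rng=rng)
```

Clustering at the tract level keeps sampled retailers geographically close (cutting travel), while stratification guarantees coverage of priority strata, which is the tradeoff the study quantifies.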

  11. A Unique Sample of Extreme-BCG Clusters at 0.2 < z < 0.5

    Science.gov (United States)

    Garmire, Gordon

    2017-09-01

    The recently discovered Phoenix cluster harbors the most extreme BCG in the known universe. Despite the cluster's high mass and X-ray luminosity, it was consistently identified by surveys as an isolated AGN, due to the bright central point source and the compact cool core. Armed with hindsight, we have undertaken an all-sky survey based on archival X-ray, OIR, and radio data to identify other similarly extreme systems that were likewise missed. A pilot study demonstrated that this strategy works, leading to the discovery of a new, massive cluster at z ∼ 0.2 which was missed by previous X-ray surveys due to the presence of a bright central QSO. We propose here to observe 6 new clusters from our complete northern-sky survey, which harbor some of the most extreme central galaxies known.

  12. Cluster-cluster clustering

    International Nuclear Information System (INIS)

    Barnes, J.; Dekel, A.; Efstathiou, G.; Frenk, C.S.; Yale Univ., New Haven, CT; California Univ., Santa Barbara; Cambridge Univ., England; Sussex Univ., Brighton, England)

    1985-01-01

    The cluster correlation function ξ_c(r) is compared with the particle correlation function ξ(r) in cosmological N-body simulations with a wide range of initial conditions. The experiments include scale-free initial conditions, pancake models with a coherence length in the initial density field, and hybrid models. Three N-body techniques and two cluster-finding algorithms are used. In scale-free models with white noise initial conditions, ξ_c and ξ are essentially identical. In scale-free models with more power on large scales, it is found that the amplitude of ξ_c increases with cluster richness; in this case the clusters give a biased estimate of the particle correlations. In the pancake and hybrid models (with n = 0 or 1), ξ_c is steeper than ξ, but the cluster correlation length exceeds that of the points by less than a factor of 2, independent of cluster richness. Thus the high amplitude of ξ_c found in studies of rich clusters of galaxies is inconsistent with white noise and pancake models and may indicate a primordial fluctuation spectrum with substantial power on large scales. 30 references

  13. Novel Ordered Stepped-Wedge Cluster Trial Designs for Detecting Ebola Vaccine Efficacy Using a Spatially Structured Mathematical Model.

    Directory of Open Access Journals (Sweden)

    Ibrahim Diakite

    2016-08-01

    During the 2014 Ebola virus disease (EVD) outbreak, policy-makers were confronted with difficult decisions on how best to test the efficacy of EVD vaccines. On one hand, many were reluctant to withhold a vaccine that might prevent a fatal disease from study participants randomized to a control arm. On the other, regulatory bodies called for rigorous placebo-controlled trials to permit direct measurement of vaccine efficacy prior to approval of the products. A stepped-wedge cluster study (SWCT) was proposed as an alternative to a more traditional randomized controlled vaccine trial to address these concerns. Here, we propose novel "ordered stepped-wedge cluster trial" (OSWCT) designs to further mitigate tradeoffs between ethical concerns, logistics, and statistical rigor. We constructed a spatially structured mathematical model of the EVD outbreak in Sierra Leone and used its output to simulate and compare a series of stepped-wedge cluster vaccine studies. Our model reproduced the observed order of first case occurrence within districts of Sierra Leone. Depending on the infection risk within the trial population and the trial start dates, the statistical power to detect a vaccine efficacy of 90% varied from 14% to 32% for a standard SWCT, and from 67% to 91% for OSWCTs, at an alpha error of 5%. The model's projection of first case occurrence was robust to changes in disease natural history parameters. Ordering clusters in a stepped-wedge trial based on each cluster's underlying risk of infection, as predicted by a spatial model, can increase the statistical power of a SWCT. In the event of another hemorrhagic fever outbreak, implementation of our proposed OSWCT designs could improve statistical power when a stepped-wedge study is desirable based on either ethical concerns or logistical constraints.

  14. Academic and Behavioral Design Parameters for Cluster Randomized Trials in Kindergarten: An Analysis of the Early Childhood Longitudinal Study 2011 Kindergarten Cohort (ECLS-K 2011).

    Science.gov (United States)

    Hedberg, E C

    2016-06-28

    There is an increased focus on randomized trials for proximal behavioral outcomes in early childhood research. However, planning sample sizes for such designs requires extant information on the size of the effect, the variance decomposition, and the effectiveness of covariates. The purpose of this article is to use a recent, large, representative sample of Early Childhood Longitudinal Study kindergartners to estimate design parameters for use in planning cluster randomized trials. A secondary objective is to compare the math and reading results with those for the previous kindergarten cohort of 1999. For each measure, fall-spring gains in effect size units are calculated. In addition, multilevel models are fit to estimate variance components that are used to calculate intraclass correlations (ICCs) and R² statistics. The implications of the reported parameters are summarized in tables of required school sample sizes to detect small effects. The outcomes include student scores on learning behaviors, general behaviors, and academic abilities. Aside from math and reading, there were small gains in these measures from fall to spring, leading to effect sizes between about .1 and .2. In addition, the nonacademic ICCs are smaller than the academic ICCs but are still nontrivial. Use of a pretest covariate is generally effective in reducing the required sample size in power analyses. The ICCs for math and reading are smaller for the current sample than for the 1999 sample. © The Author(s) 2016.
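The reported ICCs and covariate R² values feed directly into power analyses. A sketch of the standard normal-approximation formula for the number of clusters needed per arm in a two-arm CRT; all parameter values below are hypothetical, not the article's estimates:

```python
import math

def clusters_needed(delta, icc, n_per_cluster, r2=0.0, alpha=0.05, power=0.8):
    """Approximate clusters per arm for a two-arm cluster randomized trial
    (normal approximation). A covariate explaining a fraction r2 of the
    between-cluster variance shrinks the required sample."""
    z = {0.05: 1.96}[alpha] + {0.8: 0.84}[power]   # z_{alpha/2} + z_beta
    var_effect = icc * (1 - r2) + (1 - icc) / n_per_cluster
    return math.ceil(2 * z ** 2 * var_effect / delta ** 2)

# Hypothetical planning values: effect size 0.2, ICC 0.15, 20 pupils/school.
no_cov = clusters_needed(delta=0.2, icc=0.15, n_per_cluster=20)
with_cov = clusters_needed(delta=0.2, icc=0.15, n_per_cluster=20, r2=0.5)
```

The drop from `no_cov` to `with_cov` shows why the article emphasizes pretest covariates: absorbing between-cluster variance directly cuts the number of schools required.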

  15. Design development of robotic system for on line sampling in fuel reprocessing

    International Nuclear Information System (INIS)

    Balasubramanian, G.R.; Venugopal, P.R.; Padmashali, G.K.

    1990-01-01

    This presentation describes the design and development work being carried out on an automated sampling system for fast reactor fuel reprocessing plants. The plant proposes to use an integrated sampling system. The sample is taken across regular process streams from any intermediate hold-up pot. A robot system is planned to take the sample from the sample pot, transfer it to the sample bottle, cap the bottle, and transfer the bottle to a pneumatic conveying station. The system covers a large number of sample pots. Alternative automated systems are also examined (1). (author). 4 refs., 2 figs

  16. THE MASSIVE DISTANT CLUSTERS OF WISE SURVEY: THE FIRST DISTANT GALAXY CLUSTER DISCOVERED BY WISE

    International Nuclear Information System (INIS)

    Gettings, Daniel P.; Gonzalez, Anthony H.; Mancone, Conor; Stanford, S. Adam; Eisenhardt, Peter R. M.; Stern, Daniel; Brodwin, Mark; Zeimann, Gregory R.; Masci, Frank J.; Papovich, Casey; Tanaka, Ichi; Wright, Edward L.

    2012-01-01

    We present spectroscopic confirmation of a z = 0.99 galaxy cluster discovered using data from the Wide-field Infrared Survey Explorer (WISE). This is the first z ∼ 1 cluster candidate from the Massive Distant Clusters of WISE Survey to be confirmed. It was selected as an overdensity of probable z ≳ 1 sources using a combination of WISE and Sloan Digital Sky Survey DR8 photometric catalogs. Deeper follow-up imaging data from Subaru and WIYN reveal the cluster to be a rich system of galaxies, and multi-object spectroscopic observations from Keck confirm five cluster members at z = 0.99. The detection and confirmation of this cluster represents a first step toward constructing a uniformly selected sample of distant, high-mass galaxy clusters over the full extragalactic sky using WISE data.

  17. Composition design of superhigh strength maraging stainless steels using a cluster model

    Directory of Open Access Journals (Sweden)

    Zhen Li

    2014-02-01

    The composition characteristics of maraging stainless steels were studied in the present work using a cluster-plus-glue-atom model. The least solubility limit of high-temperature austenite to form martensite in the basic Fe–Ni–Cr system corresponds to the cluster formula [NiFe12]Cr3, where NiFe12 is a cuboctahedron centered by Ni and surrounded by 12 Fe atoms in the FCC structure, and Cr serves as the glue atoms. A cluster formula [NiFe12](Cr2Ni) with surplus Ni was then determined to ensure precipitation of the second phase (Ni3M), based on which new multi-component alloys [(Ni,Cu)16Fe192](Cr32(Ni,Mo,Ti,Nb,Al,V)16) were designed. These alloys were prepared by the copper mould suction casting method, solution treated at 1273 K for 1 h followed by water quenching, and finally aged at 783 K for 3 h. The experimental results showed that multi-element alloying produces Ni3M precipitates in the martensite, which sharply enhance the strength of the alloys after ageing. Among them, the aged [(Cu4Ni12)Fe192](Cr32(Ni8.5Mo2Ti2Nb0.5Al1V1)) alloy (Fe74.91Ni8.82Cr11.62Mo1.34Ti0.67Nb0.32Al0.19V0.36Cu1.78 wt%) has high tensile strengths, with YS = 1456 MPa and UTS = 1494 MPa. It also exhibits good corrosion resistance in 3.5 wt% NaCl solution.
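The quoted wt% composition follows from the atomic cluster formula via standard atomic masses. A sketch of that conversion; the atom counts below paraphrase the (partly garbled) formula and are best-effort assumptions, so the result only approximates the quoted wt% values:

```python
# Convert a cluster formula (atom counts) to weight percent using standard
# atomic masses. Counts are an assumed reading of the alloy formula
# [(Cu4Ni12)Fe192](Cr32(Ni8.5Mo2Ti2Nb0.5Al1V1)).
masses = {"Fe": 55.85, "Ni": 58.69, "Cr": 52.00, "Mo": 95.95,
          "Ti": 47.87, "Nb": 92.91, "Al": 26.98, "V": 50.94, "Cu": 63.55}
counts = {"Fe": 192, "Cu": 4, "Ni": 12 + 8.5, "Cr": 32,
          "Mo": 2, "Ti": 2, "Nb": 0.5, "Al": 1, "V": 1}

total = sum(counts[e] * masses[e] for e in counts)
wt_pct = {e: 100 * counts[e] * masses[e] / total for e in counts}
```

With these counts the iron fraction comes out near the quoted Fe74.91 wt%, showing how the weight-percent composition is derived from the cluster model rather than chosen independently.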

  18. Connecting Remote Clusters with ATM

    Energy Technology Data Exchange (ETDEWEB)

    Hu, T.C.; Wyckoff, P.S.

    1998-10-01

    Sandia's entry into utilizing clusters of networked workstations is called Computational Plant or CPlant for short. The design of CPlant uses Ethernet to boot the individual nodes, Myrinet to communicate within a node cluster, and ATM to connect between remote clusters. This SAND document covers the work done to enable the use of ATM on the CPlant nodes in the Fall of 1997.

  19. Depressive Symptoms and Deliberate Self-Harm in a Community Sample of Adolescents: A Prospective Study

    Directory of Open Access Journals (Sweden)

    Lars-Gunnar Lundh

    2011-01-01

    The associations between depressive symptoms and deliberate self-harm were studied by means of a two-wave longitudinal design in a community sample of 1052 young adolescents, with longitudinal data for 83.6% of the sample. Evidence was found for a bidirectional relationship in girls, with depressive symptoms being a risk factor for increased self-harm one year later and self-harm a risk factor for increased depressive symptoms. Cluster analysis of profiles of depressive symptoms identified two clusters with clear depressive profiles (one severe, the other mild/moderate), both characterized by an overrepresentation of girls and elevated levels of self-harm. Clusters with more circumscribed problems were also identified; of these, significantly increased levels of self-harm were found in a cluster characterized by negative self-image and in a cluster characterized by dysphoric relations with parents. It is suggested that self-harm serves more to regulate negative self-related feelings than sadness.

  20. Latent spatial models and sampling design for landscape genetics

    Science.gov (United States)

    Hanks, Ephraim M.; Hooten, Mevin B.; Knick, Steven T.; Oyler-McCance, Sara J.; Fike, Jennifer A.; Cross, Todd B.; Schwartz, Michael K.

    2016-01-01

    We propose a spatially explicit approach for modeling genetic variation across space and illustrate how this approach can be used to optimize spatial prediction and sampling design for landscape genetic data. We propose a multinomial data model for the categorical microsatellite allele data commonly used in landscape genetic studies, and introduce a latent spatial random effect to allow for spatial correlation between genetic observations. We illustrate how modern dimension reduction approaches to spatial statistics allow for efficient computation in landscape genetic statistical models covering large spatial domains. We apply our approach to propose a retrospective spatial sampling design for greater sage-grouse (Centrocercus urophasianus) population genetics in the western United States.
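A sketch of the data model described above: a latent, spatially correlated field enters the log-odds of a multinomial over alleles at each sampling site. The covariance form, range, and counts are illustrative assumptions, and no inference (dimension reduction or model fitting) is attempted:

```python
import numpy as np

rng = np.random.default_rng(5)

# Simulate the generative model: multinomial allele counts at sampling
# sites, with spatially correlated latent log-odds per allele.
sites = rng.uniform(0, 100, size=(30, 2))              # site coordinates (km)
d = np.linalg.norm(sites[:, None] - sites[None], axis=-1)
K = np.exp(-d / 25.0) + 1e-8 * np.eye(30)              # exponential covariance

L = np.linalg.cholesky(K)
n_alleles = 4
latent = L @ rng.normal(size=(30, n_alleles))          # correlated field per allele

probs = np.exp(latent)
probs /= probs.sum(axis=1, keepdims=True)              # softmax over alleles
counts = np.array([rng.multinomial(20, p) for p in probs])  # 20 genes per site
```

Because nearby sites share similar latent values, their allele frequencies are correlated; a sampling design can then be chosen to minimize predictive variance of the latent field over the domain.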

  1. A cluster randomized controlled trial of a clinical pathway for hospital treatment of heart failure: study design and population

    Directory of Open Access Journals (Sweden)

    Gardini Andrea

    2007-11-01

    Full Text Available Abstract Background The hospital treatment of heart failure frequently does not follow published guidelines, potentially contributing to the high morbidity, mortality and economic cost of this disorder. Consequently, the development of clinical pathways has the potential to reduce the current variability in care, enhance guideline adherence, and improve outcomes for patients. Despite enthusiasm and diffusion, the widespread acceptance of clinical pathways remains questionable because very few prospective controlled studies have demonstrated their effectiveness. The Experimental Prospective Study on the Effectiveness and Efficiency of the Implementation of Clinical Pathways was designed to conduct a rigorous evaluation of clinical pathways in the hospital treatment of acute heart failure. The primary objective of the trial was to evaluate the effectiveness of the implementation of clinical pathways for hospital treatment of heart failure in Italian hospitals. Methods/design Two-arm, cluster-randomized trial. 14 community hospitals were randomized either to arm 1 (clinical pathway: appropriate use of practice guidelines and supplies of drugs and ancillary services, new organization and procedures, patient education, etc.) or to arm 2 (no intervention, usual care). A sample of 424 patients (212 in each group) provides 80% power at the 5% significance level (two-sided). The primary outcome measure is in-hospital mortality. We will also analyze the impact of the clinical pathways by comparing the length and appropriateness of the stay, the rate of unscheduled readmissions, patient satisfaction, and the costs of treating patients with the pathways versus current practice over the whole observation period. The quality of care will be assessed by monitoring the use of diagnostic and therapeutic procedures during the hospital stay and by measuring key quality indicators at discharge.
Discussion This paper examines the design of the evaluation of a complex
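The clustered randomization is what drives a sample-size figure like the 424 patients above: an individually randomized sample size must be inflated by the design effect DEFF = 1 + (m − 1)·ICC, where m is the average cluster size and ICC the intracluster correlation. A minimal sketch of that calculation (the event rates, cluster size, and ICC below are illustrative assumptions, not the trial's actual planning parameters):

```python
import math

def n_per_arm_proportions(p1, p2):
    """Per-arm sample size for comparing two proportions
    (normal approximation, two-sided 5% test, 80% power)."""
    z_a = 1.959964   # z for two-sided alpha = 0.05
    z_b = 0.841621   # z for 80% power
    pbar = (p1 + p2) / 2
    num = (z_a * math.sqrt(2 * pbar * (1 - pbar))
           + z_b * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(num / (p1 - p2) ** 2)

def inflate_for_clustering(n, cluster_size, icc):
    """Inflate an individually randomized n by DEFF = 1 + (m - 1) * ICC."""
    return math.ceil(n * (1 + (cluster_size - 1) * icc))

# Illustrative inputs only: in-hospital mortality 12% vs 6%,
# ~30 patients per hospital, ICC = 0.02
n_indiv = n_per_arm_proportions(0.12, 0.06)
n_clustered = inflate_for_clustering(n_indiv, cluster_size=30, icc=0.02)
```

With ICC = 0, the clustered and individually randomized sizes coincide; the penalty grows linearly in both cluster size and ICC.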

  2. A Novel Double Cluster and Principal Component Analysis-Based Optimization Method for the Orbit Design of Earth Observation Satellites

    Directory of Open Access Journals (Sweden)

    Yunfeng Dong

    2017-01-01

    Full Text Available The weighted sum and genetic algorithm-based hybrid method (WSGA-based HM), which has been applied to multiobjective orbit optimizations, suffers from two problems: it is influenced by human factors through the manual choice of the weight coefficients in the weighted sum method, and the GA converges slowly. To address these two problems, a cluster and principal component analysis-based optimization method (CPC-based OM) is proposed, in which candidate orbits are progressively generated at random until the optimal orbit is obtained using a data mining method, namely cluster analysis based on principal components. A second cluster analysis, of the orbital elements, is then introduced into CPC-based OM to improve convergence, yielding a novel double cluster and principal component analysis-based optimization method (DCPC-based OM). In DCPC-based OM, the cluster analysis based on principal components reduces the influence of human factors, while the cluster analysis based on the six orbital elements shrinks the search space to accelerate convergence. Test results on a multiobjective numerical benchmark function and the orbit design of an Earth observation satellite show that DCPC-based OM converges more efficiently than WSGA-based HM and, to some degree, reduces the influence of the human factors present in WSGA-based HM.
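The core data-mining step, examining candidate solutions along their principal components, can be illustrated with a toy two-parameter example. This is a generic closed-form 2×2 PCA sketch on invented candidates, not the authors' DCPC algorithm:

```python
import math

def principal_axis(points):
    """First principal component of 2-D points, via the closed-form
    eigendecomposition of the 2x2 covariance matrix [[a, b], [b, c]]."""
    n = len(points)
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    a = sum((p[0] - mx) ** 2 for p in points) / n
    c = sum((p[1] - my) ** 2 for p in points) / n
    b = sum((p[0] - mx) * (p[1] - my) for p in points) / n
    lam = 0.5 * ((a + c) + math.sqrt((a - c) ** 2 + 4 * b * b))  # largest eigenvalue
    if abs(b) < 1e-12:                       # covariance already diagonal
        v = (1.0, 0.0) if a >= c else (0.0, 1.0)
    else:
        norm = math.hypot(b, lam - a)
        v = (b / norm, (lam - a) / norm)     # eigenvector for lam
    return lam, v, (mx, my)

def pc1_scores(points):
    """Centered projections of the points onto the first principal axis."""
    _, v, (mx, my) = principal_axis(points)
    return [(p[0] - mx) * v[0] + (p[1] - my) * v[1] for p in points]

# Invented candidate solutions, each described by two parameters,
# forming two loose groups along the diagonal
cands = [(1.0, 1.1), (1.2, 0.9), (0.9, 1.0), (5.0, 5.2), (5.1, 4.9), (4.8, 5.0)]
scores = pc1_scores(cands)
groups = [0 if sc < 0 else 1 for sc in scores]   # crude split on the PC1 score
```

Clustering on principal-component scores rather than raw objectives is what removes the hand-picked weights of the weighted-sum approach.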

  3. Fermion cluster algorithms

    International Nuclear Information System (INIS)

    Chandrasekharan, Shailesh

    2000-01-01

    Cluster algorithms have recently been used to eliminate sign problems that plague Monte Carlo methods in a variety of systems. In particular, such algorithms can also be used to solve sign problems associated with the permutation of fermion world lines. This solution leads to the possibility of designing fermion cluster algorithms in certain cases. Using the example of free non-relativistic fermions, we discuss the ideas underlying the algorithm.

  4. A Clustering Routing Protocol for Mobile Ad Hoc Networks

    Directory of Open Access Journals (Sweden)

    Jinke Huang

    2016-01-01

    Full Text Available The dynamic topology of a mobile ad hoc network poses a real challenge in the design of a hierarchical routing protocol, which combines proactive with reactive routing protocols and takes advantage of both. As an essential technique of hierarchical routing protocols, clustering of nodes provides an efficient method of establishing a hierarchical structure in mobile ad hoc networks. In this paper, we design a novel clustering algorithm and a corresponding hierarchical routing protocol for large-scale mobile ad hoc networks. Each cluster is composed of a cluster head, several cluster gateway nodes, several cluster guest nodes, and other cluster members. The proposed routing protocol uses a proactive protocol between nodes within individual clusters and a reactive protocol between clusters. Simulation results show that the proposed clustering algorithm and hierarchical routing protocol provide superior performance, with several advantages over an existing clustering algorithm and routing protocol, respectively.
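The cluster structure described (heads, gateways, members) starts from cluster-head election. A common heuristic, sketched here on an invented six-node topology, is highest-degree election; the paper's own algorithm is more elaborate, so treat this only as an illustration of the idea:

```python
def elect_cluster_heads(adj):
    """Highest-degree cluster-head election: nodes claim head status in
    decreasing order of degree (ties broken by lowest ID); every other
    node joins the first adjacent head that claims it."""
    order = sorted(adj, key=lambda n: (-len(adj[n]), n))
    heads, member_of = [], {}
    for node in order:
        if node in member_of:        # already captured by a head
            continue
        heads.append(node)
        member_of[node] = node
        for nb in adj[node]:
            member_of.setdefault(nb, node)
    return heads, member_of

# Invented six-node topology: node 2 is a local hub, nodes 4-5 sit further out
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2, 4}, 4: {3, 5}, 5: {4}}
heads, member_of = elect_cluster_heads(adj)
```

Node 3 here joins head 2's cluster but also neighbours head 4, so it would act as a gateway node between the two clusters.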

  5. Visual Sample Plan (VSP) Software: Designs and Data Analyses for Sampling Contaminated Buildings

    International Nuclear Information System (INIS)

    Pulsipher, Brent A.; Wilson, John E.; Gilbert, Richard O.; Nuffer, Lisa L.; Hassig, Nancy L.

    2005-01-01

    A new module of the Visual Sample Plan (VSP) software has been developed to provide sampling designs and data analyses for potentially contaminated buildings. An important application is assessing levels of contamination in buildings after a terrorist attack. This new module, funded by DHS through the Combating Terrorism Technology Support Office, Technical Support Working Group, was developed to provide a tailored, user-friendly and visually oriented buildings module within the existing VSP software toolkit, the latest version of which can be downloaded from http://dqo.pnl.gov/vsp. In the case of, or when planning against, a chemical, biological, or radionuclide release within a building, the VSP module can be used to quickly and easily develop and visualize technically defensible sampling schemes for walls, floors, ceilings, and other surfaces to statistically determine whether contamination is present, its magnitude and extent throughout the building, and whether decontamination has been effective. This paper demonstrates the features of this new VSP buildings module, which include: the ability to import building floor plans or to easily draw, manipulate, and view rooms in several ways; the ability to insert doors, windows and annotations into a room; 3-D graphic room views with surfaces labeled; and floor plans that show building zones served by separate air handling units. The paper also discusses the statistical design and data analysis options available in the buildings module. Supported design objectives include comparing an average to a threshold when the data distribution is normal or unknown, and comparing measurements to a threshold to detect hotspots or to ensure that most of the area is uncontaminated when the data distribution is normal or unknown.
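The first design objective mentioned, comparing an average to a threshold under a normality assumption, reduces to a one-sided comparison of an upper confidence bound on the mean with the threshold. A rough sketch (the wipe-sample readings and threshold are made up, and a real VSP design would also choose the number of samples from the desired error rates):

```python
import math

def area_is_clean(samples, threshold, z_crit=1.645):
    """Decide 'clean' if an approximate 95% upper confidence bound on
    the mean falls below the threshold (one-sided, normal-theory;
    uses the sample SD, so only approximate for small n)."""
    n = len(samples)
    m = sum(samples) / n
    sd = math.sqrt(sum((x - m) ** 2 for x in samples) / (n - 1))
    ucb = m + z_crit * sd / math.sqrt(n)
    return ucb < threshold, ucb

# Invented surface-wipe readings against a cleanup threshold of 1.0
readings = [0.31, 0.28, 0.40, 0.35, 0.22, 0.30, 0.27, 0.33, 0.29, 0.36]
clean, ucb = area_is_clean(readings, threshold=1.0)
```

When the distribution is unknown, the normal-theory bound would be replaced by a nonparametric test such as the sign test, which is the distinction the module's design objectives draw.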

  6. Two transistor cluster DICE Cells with the minimum area for a hardened 28-nm CMOS and 65-nm SRAM layout design

    International Nuclear Information System (INIS)

    Stenin, V.Ya.; Stepanov, P.V.

    2015-01-01

    A hardened DICE cell layout design is based on two spaced transistor clusters of the DICE cell, each consisting of four transistors. The larger the distance between these two CMOS transistor clusters, the more robust the hardened DICE SRAM is to single event upsets. Versions of the 28-nm and 65-nm DICE CMOS SRAM block composition have been suggested with minimum cluster distances of 2.27–2.32 μm. The area of hardened 28-nm DICE CMOS cells is larger than the area of 28-nm 6T CMOS cells by a factor of 2.1.

  7. Rapid Sampling of Hydrogen Bond Networks for Computational Protein Design.

    Science.gov (United States)

    Maguire, Jack B; Boyken, Scott E; Baker, David; Kuhlman, Brian

    2018-05-08

    Hydrogen bond networks play a critical role in determining the stability and specificity of biomolecular complexes, and the ability to design such networks is important for engineering novel structures, interactions, and enzymes. One key feature of hydrogen bond networks that makes them difficult to rationally engineer is that they are highly cooperative and are not energetically favorable until the hydrogen bonding potential has been satisfied for all buried polar groups in the network. Existing computational methods for protein design are ill-equipped for creating these highly cooperative networks because they rely on energy functions and sampling strategies that are focused on pairwise interactions. To enable the design of complex hydrogen bond networks, we have developed a new sampling protocol in the molecular modeling program Rosetta that explicitly searches for sets of amino acid mutations that can form self-contained hydrogen bond networks. For a given set of designable residues, the protocol often identifies many alternative sets of mutations/networks, and we show that it can readily be applied to large sets of residues at protein-protein interfaces or in the interior of proteins. The protocol builds on a recently developed method in Rosetta for designing hydrogen bond networks that has been experimentally validated for small symmetric systems but was not extensible to many larger protein structures and complexes. The sampling protocol we describe here not only recapitulates previously validated designs with performance improvements but also yields viable hydrogen bond networks for cases where the previous method fails, such as the design of large, asymmetric interfaces relevant to engineering protein-based therapeutics.

  8. Super computer made with Linux cluster

    International Nuclear Information System (INIS)

    Lee, Jeong Hun; Oh, Yeong Eun; Kim, Jeong Seok

    2002-01-01

    This book consists of twelve chapters introducing a supercomputer built as a Linux cluster. The contents cover: Linux clusters and the principles of clustering; Linux cluster design; Linux fundamentals; setting up a terminal server and clients; a Beowulf cluster with Debian GNU/Linux; a cluster system with Red Hat; monitoring systems; application programming with MPI (setup and installation); PVM (PVM programming and XPVM); OpenPBS (composition, installation, and setup); and Grid computing (grid systems, GSI, GRAM, MDS, and toolkit installation and use).

  9. Numerical experiment designs: study of the vibrational behaviour of the control rod cluster of a pressurized water reactor

    International Nuclear Information System (INIS)

    Soulier, B.; Bosselut, D.; Regnier, G.

    1997-01-01

    A finite element model has been developed at EDF to simulate the vibrations of the control rod cluster assembly and to analyse the wear phenomenon of control rods. A parametric study has been performed over a given computer experiment domain using an experimental design method. The construction of the computer experiment design is described. The influence of the parameters on the calculated mean wear power along the rods has been determined, and response surfaces have been easily approximated. The systematic character and economy of the experimental design technique are underlined. (authors)

  10. Random sampling or geostatistical modelling? Choosing between design-based and model-based sampling strategies for soil (with discussion)

    NARCIS (Netherlands)

    Brus, D.J.; Gruijter, de J.J.

    1997-01-01

    Classical sampling theory has repeatedly been identified with classical statistics, which assumes that data are independently and identically distributed. This explains the switch of many soil scientists from design-based sampling strategies, based on classical sampling theory, to the model-based

  11. Synthetic Multiple-Imputation Procedure for Multistage Complex Samples

    Directory of Open Access Journals (Sweden)

    Zhou Hanzhi

    2016-03-01

    Full Text Available Multiple imputation (MI) is commonly used when item-level missing data are present. However, MI requires that survey design information be built into the imputation models. For multistage stratified clustered designs, this requires dummy variables to represent strata as well as primary sampling units (PSUs) nested within each stratum in the imputation model. Such a modeling strategy is not only operationally burdensome but also inferentially inefficient when there are many strata in the sample design. Complexity only increases when sampling weights need to be modeled. This article develops a general-purpose analytic strategy for population inference from complex sample designs with item-level missingness. In a simulation study, the proposed procedures demonstrate efficient estimation and good coverage properties. We also consider an application to accommodate missing body mass index (BMI) data in the analysis of BMI percentiles using National Health and Nutrition Examination Survey (NHANES III) data. We argue that the proposed methods offer an easy-to-implement solution to problems that are not well handled by current MI techniques. Note that, while the proposed method borrows from the MI framework to develop its inferential methods, it is not designed as an alternative strategy to release multiply imputed datasets for complex sample design data, but rather as an analytic strategy in and of itself.
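Whatever imputation model is used, the M completed-data estimates are pooled with Rubin's rules: the total variance is the within-imputation variance plus (1 + 1/M) times the between-imputation variance. A minimal sketch with invented estimates:

```python
import math

def rubins_rules(estimates, variances):
    """Pool M completed-data estimates (Rubin, 1987): total variance =
    within-imputation variance + (1 + 1/M) * between-imputation variance."""
    m = len(estimates)
    qbar = sum(estimates) / m                              # pooled estimate
    w = sum(variances) / m                                 # within variance
    b = sum((q - qbar) ** 2 for q in estimates) / (m - 1)  # between variance
    return qbar, w + (1 + 1 / m) * b

# Invented example: five imputations of a mean-BMI estimate, each with
# its completed-data variance
qbar, t = rubins_rules([26.1, 25.8, 26.4, 26.0, 26.2], [0.20] * 5)
se = math.sqrt(t)
```

The between-imputation term is what propagates the uncertainty about the missing values into the final standard error, which a single imputation would understate.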

  12. Application of clustering methods: Regularized Markov clustering (R-MCL) for analyzing dengue virus similarity

    Science.gov (United States)

    Lestari, D.; Raharjo, D.; Bustamam, A.; Abdillah, B.; Widhianto, W.

    2017-07-01

    Dengue virus consists of 10 different constituent proteins and is classified into 4 major serotypes (DEN 1 - DEN 4). This study was designed to perform clustering on 30 protein sequences of dengue virus taken from the Virus Pathogen Database and Analysis Resource (VIPR) using the Regularized Markov Clustering (R-MCL) algorithm, and then to analyze the result. Implemented in Python 3.4, the R-MCL algorithm produces 8 clusters, with more than one centroid in several clusters. The number of centroids indicates the density of interaction. Interacting proteins connected in a network form protein complexes that serve as units of specific biological processes. The analysis shows that R-MCL clustering groups the dengue viruses by the similarity of the roles of their constituent proteins, regardless of serotype.
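The MCL family of algorithms alternates an expansion step (powering of a column-stochastic matrix) with an inflation step (elementwise powering and renormalization); R-MCL regularizes the expansion step. The sketch below implements plain MCL on an invented six-node graph, not the regularized variant and not the dengue data:

```python
def normalize_cols(m):
    """Scale each column to sum to 1 (column-stochastic matrix)."""
    n = len(m)
    for j in range(n):
        s = sum(m[i][j] for i in range(n))
        for i in range(n):
            m[i][j] /= s
    return m

def matmul(a, b):
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mcl(adj, inflation=2.0, iters=50):
    """Plain Markov clustering: alternate expansion (matrix squaring)
    and inflation (elementwise powering plus renormalization)."""
    n = len(adj)
    # self-loops are customarily added before normalizing
    m = [[float(adj[i][j]) + (1.0 if i == j else 0.0) for j in range(n)]
         for i in range(n)]
    normalize_cols(m)
    for _ in range(iters):
        m = matmul(m, m)                                  # expansion
        m = [[v ** inflation for v in row] for row in m]  # inflation
        normalize_cols(m)
    # at convergence, each surviving ("attractor") row spans one cluster
    clusters = []
    for i in range(n):
        members = frozenset(j for j in range(n) if m[i][j] > 1e-6)
        if members and members not in clusters:
            clusters.append(members)
    return clusters

# Toy graph: two triangles joined by a single edge (2-3)
adj = [[0, 1, 1, 0, 0, 0],
       [1, 0, 1, 0, 0, 0],
       [1, 1, 0, 1, 0, 0],
       [0, 0, 1, 0, 1, 1],
       [0, 0, 0, 1, 0, 1],
       [0, 0, 0, 1, 1, 0]]
clusters = mcl(adj)
```

R-MCL would replace `matmul(m, m)` with `matmul(m, m0)`, where `m0` is the original column-normalized matrix, which damps the tendency of plain MCL to fragment large graphs.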

  13. cluML: A markup language for clustering and cluster validity assessment of microarray data.

    Science.gov (United States)

    Bolshakova, Nadia; Cunningham, Pádraig

    2005-01-01

    cluML is a new markup language for microarray data clustering and cluster validity assessment. The XML-based format has been designed to address some of the limitations observed in traditional formats, such as the inability to store multiple clusterings (including biclusterings) and validation results within a dataset. cluML is an effective tool to support biomedical knowledge representation in gene expression data analysis. Although cluML was developed for DNA microarray analysis applications, it can also be used effectively for the representation and validation of clusterings of other biomedical and physical data.

  14. Testing the accuracy of clustering redshifts with simulations

    Science.gov (United States)

    Scottez, V.; Benoit-Lévy, A.; Coupon, J.; Ilbert, O.; Mellier, Y.

    2018-03-01

    We explore the accuracy of clustering-based redshift inference within the MICE2 simulation. This method uses the spatial clustering of galaxies between a spectroscopic reference sample and an unknown sample, and this study gives an estimate of the accuracy the method can reach. First, we discuss the requirements on the number of objects in the two samples, confirming that this method does not require a representative spectroscopic sample for calibration. In the context of the next generation of cosmological surveys, we estimate that the density of the quasi-stellar objects in BOSS allows us to reach 0.2 per cent accuracy in the mean redshift. Secondly, we estimate individual redshifts for galaxies in the densest regions of colour space (˜30 per cent of the galaxies) without using the photometric redshift procedure. The advantage of this procedure is threefold. It allows: (i) the use of cluster-zs for any field in astronomy, (ii) the possibility to combine photo-zs and cluster-zs to obtain an improved redshift estimate, (iii) the use of cluster-zs to define tomographic bins for weak lensing. Finally, we explore this last option and build five cluster-z selected tomographic bins from redshift 0.2 to 1. We find a bias on the mean redshift estimate of 0.002 per bin. We conclude that cluster-zs could be used as a primary redshift estimator by the next generation of cosmological surveys.

  15. Defining objective clusters for rabies virus sequences using affinity propagation clustering.

    Directory of Open Access Journals (Sweden)

    Susanne Fischer

    2018-01-01

    Full Text Available Rabies is caused by lyssaviruses and is one of the oldest known zoonoses. In recent years, more than 21,000 nucleotide sequences of rabies viruses (RABV), from the prototype species rabies lyssavirus, have been deposited in public databases. Subsequent phylogenetic analyses in combination with metadata suggest geographic distributions of RABV. However, these analyses face technical difficulties in defining verifiable criteria for cluster allocation in phylogenetic trees, inviting a more rational approach. Therefore, we applied a relatively new mathematical clustering algorithm named 'affinity propagation clustering' (AP) to propose a standardized sub-species classification utilizing full-genome RABV sequences. Because AP has the advantage that it is computationally fast and works for any meaningful measure of similarity between data samples, it has previously been applied successfully in bioinformatics for the analysis of microarray and gene expression data; however, cluster analysis of sequences is still in its infancy. Existing (516) and original (46) full-genome RABV sequences were used to demonstrate the application of AP to RABV clustering. On a global scale, AP proposed four clusters, i.e. New World, Arctic/Arctic-like, Cosmopolitan, and Asian, as previously assigned by phylogenetic studies. By combining AP with established phylogenetic analyses, it is possible to resolve phylogenetic relationships between verifiably determined clusters and sequences. This workflow will be useful for confirming cluster distributions in a uniform, transparent manner, not only for RABV but also for other comparative sequence analyses.
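Affinity propagation itself is compact: it iterates "responsibility" and "availability" messages over a similarity matrix until some points emerge as exemplars. A minimal pure-Python sketch on invented one-dimensional data (the damping factor and median-similarity preference mirror common defaults, not this study's settings):

```python
def affinity_propagation(s, damping=0.7, iters=200):
    """Bare-bones affinity propagation (Frey & Dueck, 2007): exchange
    responsibility r(i,k) and availability a(i,k) messages over a
    similarity matrix s until exemplars emerge."""
    n = len(s)
    r = [[0.0] * n for _ in range(n)]
    a = [[0.0] * n for _ in range(n)]
    for _ in range(iters):
        # r(i,k) = s(i,k) - max_{k' != k} (a(i,k') + s(i,k'))
        for i in range(n):
            vals = [a[i][k] + s[i][k] for k in range(n)]
            m1 = max(vals)
            k1 = vals.index(m1)
            m2 = max(v for k, v in enumerate(vals) if k != k1)
            for k in range(n):
                best = m2 if k == k1 else m1
                r[i][k] = damping * r[i][k] + (1 - damping) * (s[i][k] - best)
        # a(i,k) = min(0, r(k,k) + sum_{i' not in {i,k}} max(0, r(i',k)))
        # a(k,k) = sum_{i' != k} max(0, r(i',k))
        for k in range(n):
            pos = [max(0.0, r[i][k]) for i in range(n)]
            total = sum(pos)
            for i in range(n):
                if i == k:
                    new = total - pos[k]
                else:
                    new = min(0.0, r[k][k] + total - pos[i] - pos[k])
                a[i][k] = damping * a[i][k] + (1 - damping) * new
    exemplars = [k for k in range(n) if r[k][k] + a[k][k] > 0]
    labels = [i if i in exemplars else max(exemplars, key=lambda k: s[i][k])
              for i in range(n)]
    return exemplars, labels

# Invented 1-D "data": two tight groups; similarity = negative squared distance
points = [0.0, 0.1, 0.2, 5.0, 5.1, 5.2]
s = [[-abs(x - y) ** 2 for y in points] for x in points]
preference = sorted(v for row in s for v in row)[len(points) ** 2 // 2]
for i in range(len(points)):
    s[i][i] = preference          # median similarity, a common default
exemplars, labels = affinity_propagation(s)
```

The number of clusters is not fixed in advance; it falls out of the self-similarity "preference", which is the property that makes AP attractive for an objective sub-species classification.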

  16. Design and capabilities of an experimental setup based on magnetron sputtering for formation and deposition of size-selected metal clusters on ultra-clean surfaces

    DEFF Research Database (Denmark)

    Hartmann, Hannes; Popok, Vladimir; Barke, Ingo

    2012-01-01

    The design and performance of an experimental setup utilizing a magnetron sputtering source for production of beams of ionized size-selected clusters for deposition in ultra-high vacuum is described. For the case of copper cluster formation the influence of different source parameters is studied...

  17. OPEN CLUSTERS AS PROBES OF THE GALACTIC MAGNETIC FIELD. I. CLUSTER PROPERTIES

    Energy Technology Data Exchange (ETDEWEB)

    Hoq, Sadia; Clemens, D. P., E-mail: shoq@bu.edu, E-mail: clemens@bu.edu [Institute for Astrophysical Research, 725 Commonwealth Avenue, Boston University, Boston, MA 02215 (United States)

    2015-10-15

    Stars in open clusters are powerful probes of the intervening Galactic magnetic field via background starlight polarimetry because they provide constraints on the magnetic field distances. We use 2MASS photometric data for a sample of 31 clusters in the outer Galaxy for which near-IR polarimetric data were obtained to determine the cluster distances, ages, and reddenings via fitting theoretical isochrones to cluster color–magnitude diagrams. The fitting approach uses an objective χ² minimization technique to derive the cluster properties and their uncertainties. We found the ages, distances, and reddenings for 24 of the clusters, and the distances and reddenings for 6 additional clusters that were either sparse or faint in the near-IR. The derived ranges of log(age), distance, and E(B−V) were 7.25–9.63, ∼670–6160 pc, and 0.02–1.46 mag, respectively. The distance uncertainties ranged from ∼8% to 20%. The derived parameters were compared to previous studies, and most cluster parameters agree within our uncertainties. To test the accuracy of the fitting technique, synthetic clusters with 50, 100, or 200 cluster members and a wide range of ages were fit. These tests recovered the input parameters within their uncertainties for more than 90% of the individual synthetic cluster parameters. These results indicate that the fitting technique likely provides reliable estimates of cluster properties. The distances derived will be used in an upcoming study of the Galactic magnetic field in the outer Galaxy.
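The objective χ² minimization can be illustrated with a deliberately tiny one-parameter analogue: recovering a distance modulus by grid search over synthetic photometry. The magnitudes, noise, and grid below are all invented; the real isochrone fits vary age and reddening as well:

```python
# Synthetic one-parameter chi^2 fit: recover the distance modulus that
# shifts model (absolute) magnitudes onto the "observed" ones.
absolute_mags = [2.0, 3.5, 4.1, 5.0, 6.2]
true_dm = 10.0
noise = [0.03, -0.02, 0.04, 0.00, -0.05]   # fixed fake measurement errors
observed = [m + true_dm + e for m, e in zip(absolute_mags, noise)]
sigma = 0.05                               # assumed photometric error (mag)

def chi2(dm):
    return sum(((o - (m + dm)) / sigma) ** 2
               for o, m in zip(observed, absolute_mags))

# Objective minimization by grid search over candidate distance moduli
grid = [9.0 + 0.01 * i for i in range(201)]     # 9.00 .. 11.00 mag
best_dm = min(grid, key=chi2)
```

The best-fit modulus converts to a distance via d = 10^((dm + 5)/5) pc, and the curvature of χ² around the minimum supplies the distance uncertainty the abstract quotes.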

  18. X-ray and optical substructures of the DAFT/FADA survey clusters

    Science.gov (United States)

    Guennou, L.; Durret, F.; Adami, C.; Lima Neto, G. B.

    2013-04-01

    We have undertaken the DAFT/FADA survey with the double aim of setting constraints on dark energy based on weak lensing tomography and of obtaining homogeneous and high quality data for a sample of 91 massive clusters in the redshift range 0.4-0.9 for which there were HST archive data. We have analysed the XMM-Newton data available for 42 of these clusters to derive their X-ray temperatures and luminosities and search for substructures. Out of these, a spatial analysis was possible for 30 clusters, but only 23 had deep enough X-ray data for a really robust analysis. This study was coupled with a dynamical analysis for the 26 clusters having at least 30 spectroscopic galaxy redshifts in the cluster range. Altogether, the X-ray sample of 23 clusters and the optical sample of 26 clusters have 14 clusters in common. We present preliminary results on the coupled X-ray and dynamical analyses of these 14 clusters.

  19. Baryon Content in a Sample of 91 Galaxy Clusters Selected by the South Pole Telescope at 0.2 < z < 1.25

    Energy Technology Data Exchange (ETDEWEB)

    Chiu, I.; et al.

    2017-11-02

    We estimate total mass ($M_{500}$), intracluster medium (ICM) mass ($M_{\mathrm{ICM}}$) and stellar mass ($M_{\star}$) in a Sunyaev-Zel'dovich effect (SZE) selected sample of 91 galaxy clusters with masses $M_{500}\gtrsim2.5\times10^{14}M_{\odot}$ and redshift $0.2 < z < 1.25$ from the 2500 deg$^2$ South Pole Telescope SPT-SZ survey. The total masses $M_{500}$ are estimated from the SZE observable, the ICM masses $M_{\mathrm{ICM}}$ are obtained from the analysis of $Chandra$ X-ray observations, and the stellar masses $M_{\star}$ are derived by fitting spectral energy distribution templates to Dark Energy Survey (DES) $griz$ optical photometry and $WISE$ or $Spitzer$ near-infrared photometry. We study trends in the stellar mass, the ICM mass, the total baryonic mass and the cold baryonic fraction with cluster mass and redshift. We find significant departures from self-similarity in the mass scaling for all quantities, while the redshift trends are all statistically consistent with zero, indicating that the baryon content of clusters at fixed mass has changed remarkably little over the past $\approx9$ Gyr. We compare our results to the mean baryon fraction (and the stellar mass fraction) in the field, finding that these values lie above (below) those in cluster virial regions in all but the most massive clusters at low redshift. Using a simple model of the matter assembly of clusters from infalling groups with lower masses and from infalling material from the low density environment or field surrounding the parent halos, we show that the strong mass and weak redshift trends in the stellar mass scaling relation suggest a mass and redshift dependent fractional contribution from field material. Similar analyses of the ICM and baryon mass scaling relations provide evidence for the so-called 'missing baryons' outside cluster virial regions.

  20. STAR CLUSTER DISRUPTION IN THE STARBURST GALAXY MESSIER 82

    International Nuclear Information System (INIS)

    Li, Shuo; Li, Chengyuan; De Grijs, Richard; Anders, Peter

    2015-01-01

    Using high-resolution, multiple-passband Hubble Space Telescope images spanning the entire optical/near-infrared wavelength range, we obtained a statistically complete U-band-selected sample of 846 extended star clusters across the disk of the nearby starburst galaxy M82. Based on a careful analysis of the clusters' spectral energy distributions, we determined their galaxy-wide age and mass distributions. The M82 clusters exhibit three clear peaks in their age distribution, thus defining relatively young, log(t yr⁻¹) ≤ 7.5, intermediate-age, log(t yr⁻¹) ∈ [7.5, 8.5], and old, log(t yr⁻¹) ≥ 8.5, samples. Comparison of the completeness-corrected mass distributions offers a firm handle on the galaxy's star cluster disruption history. The most massive star clusters in the young and old samples are (almost) all concentrated in the most densely populated central region, while the intermediate-age sample's most massive clusters are more spatially dispersed, which may reflect the distribution of the highest-density gas throughout the galaxy's evolutionary history, combined with the solid-body nature of the galaxy's central region.

  1. Improving person-centred care in nursing homes through dementia-care mapping: design of a cluster-randomised controlled trial

    Science.gov (United States)

    2012-01-01

    -centred approach to dementia care in nursing homes. The major strengths of the study design are the large sample size, the cluster-randomisation, and the one-year follow-up. The generalisability of the implementation strategies may be questionable because the motivation for person-centred care in both the intervention and control nursing homes is above average. The results of this study may be useful in improving the quality of care and are relevant for policymakers. Trial registration The trial is registered in the Netherlands National Trial Register: NTR2314. PMID:22214264

  2. Improving person-centred care in nursing homes through dementia-care mapping: design of a cluster-randomised controlled trial

    Directory of Open Access Journals (Sweden)

    van de Ven Geertje

    2012-01-01

    integral person-centred approach to dementia care in nursing homes. The major strengths of the study design are the large sample size, the cluster-randomisation, and the one-year follow-up. The generalisability of the implementation strategies may be questionable because the motivation for person-centred care in both the intervention and control nursing homes is above average. The results of this study may be useful in improving the quality of care and are relevant for policymakers. Trial registration The trial is registered in the Netherlands National Trial Register: NTR2314.

  3. A proposal of optimal sampling design using a modularity strategy

    Science.gov (United States)

    Simone, A.; Giustolisi, O.; Laucelli, D. B.

    2016-08-01

    In real water distribution networks (WDNs), thousands of nodes are present, and the optimal placement of pressure and flow observations is a relevant issue for different management tasks. Planning pressure observations, in terms of their number and spatial distribution, is named sampling design, and it has traditionally been addressed with model calibration in mind. Nowadays, the design of system monitoring is a relevant issue for water utilities, e.g., in order to manage background leakage, to detect anomalies and bursts, to guarantee service quality, etc. In recent years, the optimal location of flow observations, related to the design of optimal district metering areas (DMAs) and to leakage management purposes, has been addressed by means of optimal network segmentation and the modularity index using a multiobjective strategy. Optimal network segmentation is the basis for identifying network modules by means of optimal conceptual cuts, which are the candidate locations of closed gates or flow meters creating the DMAs. Starting from the WDN-oriented modularity index as a metric for WDN segmentation, this paper proposes a new way to perform sampling design, i.e., the optimal location of pressure meters, using a newly developed sampling-oriented modularity index. The strategy optimizes the pressure monitoring system mainly on the basis of network topology and of weights assigned to pipes according to the specific technical tasks. A multiobjective optimization minimizes the cost of pressure meters while maximizing the sampling-oriented modularity index. The methodology is presented and discussed using the Apulian and Exnet networks.
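The modularity index underlying the segmentation is, in its classical unweighted form, Newman-Girvan Q; the paper's WDN-oriented and sampling-oriented indices are tailored variants of this idea. A sketch of the classical quantity on an invented toy network:

```python
def modularity(adj, labels):
    """Newman-Girvan modularity: Q = sum over communities c of
    e_cc/m - (d_c/2m)^2, with e_cc the edges inside c, d_c the total
    degree of c's nodes, and m the total number of edges."""
    m = sum(len(nb) for nb in adj.values()) / 2
    q = 0.0
    for c in set(labels.values()):
        nodes = [n for n in adj if labels[n] == c]
        e_cc = sum(1 for n in nodes for v in adj[n] if labels[v] == c) / 2
        d_c = sum(len(adj[n]) for n in nodes)
        q += e_cc / m - (d_c / (2 * m)) ** 2
    return q

# Invented toy network: two triangles joined by a single "pipe" (2-3)
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3},
       3: {2, 4, 5}, 4: {3, 5}, 5: {3, 4}}
good = modularity(adj, {0: 'A', 1: 'A', 2: 'A', 3: 'B', 4: 'B', 5: 'B'})
bad = modularity(adj, {0: 'A', 1: 'B', 2: 'A', 3: 'B', 4: 'A', 5: 'B'})
```

Cutting the single 2-3 pipe (the conceptual cut) yields the high-Q partition; a segmentation-driven design searches for exactly such cuts, and the WDN-oriented variants weight the edges by pipe attributes relevant to the task.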

  4. Effects of Group Size and Lack of Sphericity on the Recovery of Clusters in K-Means Cluster Analysis

    Science.gov (United States)

    de Craen, Saskia; Commandeur, Jacques J. F.; Frank, Laurence E.; Heiser, Willem J.

    2006-01-01

    K-means cluster analysis is known for its tendency to produce spherical and equally sized clusters. To assess the magnitude of these effects, a simulation study was conducted, in which populations were created with varying departures from sphericity and group sizes. An analysis of the recovery of clusters in the samples taken from these…
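The spherical-cluster bias the study quantifies is easy to reproduce: plain Lloyd's algorithm with Euclidean distance hands the tail of an elongated cluster to a nearby compact one. A small pure-Python sketch on invented data:

```python
import math, random

def kmeans(points, k, iters=50, seed=0):
    """Plain Lloyd's algorithm with Euclidean distances."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k), key=lambda c: math.dist(p, centers[c]))
            clusters[j].append(p)
        centers = [tuple(sum(x) / len(cl) for x in zip(*cl)) if cl
                   else centers[j] for j, cl in enumerate(clusters)]
    return centers, clusters

# Invented data: one elongated cluster (x = 0..9) and one compact cluster
# near x = 12-13; Euclidean partitioning pulls the elongated tail toward
# the compact group's center, so the "true" split is not even a fixed point.
pts = [(float(x), 0.0) for x in range(10)] + [(12.0, 0.0), (12.5, 0.0), (13.0, 0.0)]
centers, clusters = kmeans(pts, 2)
```

This is the non-spherical, unequal-size regime the simulation study probes systematically: recovery degrades as clusters depart from the spherical, equally sized ideal that K-means implicitly assumes.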

  5. Statistical Significance for Hierarchical Clustering

    Science.gov (United States)

    Kimes, Patrick K.; Liu, Yufeng; Hayes, D. Neil; Marron, J. S.

    2017-01-01

    Summary Cluster analysis has proved to be an invaluable tool for the exploratory and unsupervised analysis of high dimensional datasets. Among methods for clustering, hierarchical approaches have enjoyed substantial popularity in genomics and other fields for their ability to simultaneously uncover multiple layers of clustering structure. A critical and challenging question in cluster analysis is whether the identified clusters represent important underlying structure or are artifacts of natural sampling variation. Few approaches have been proposed for addressing this problem in the context of hierarchical clustering, for which the problem is further complicated by the natural tree structure of the partition, and the multiplicity of tests required to parse the layers of nested clusters. In this paper, we propose a Monte Carlo based approach for testing statistical significance in hierarchical clustering which addresses these issues. The approach is implemented as a sequential testing procedure guaranteeing control of the family-wise error rate. Theoretical justification is provided for our approach, and its power to detect true clustering structure is illustrated through several simulation studies and applications to two cancer gene expression datasets. PMID:28099990
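The Monte Carlo idea, comparing an observed clustering index against its null distribution under a single (Gaussian) population, can be sketched in one dimension. This shows only the generic recipe, not the authors' sequential hierarchical procedure; the index and data are invented:

```python
import random, statistics

def two_means_index(xs):
    """Cluster index for 1-D data: smallest within-group sum of squares
    over all two-way splits of the sorted data, divided by the total
    sum of squares (smaller = stronger two-cluster structure)."""
    xs = sorted(xs)
    tot = sum((x - statistics.fmean(xs)) ** 2 for x in xs)
    best = 1.0
    for cut in range(1, len(xs)):
        left, right = xs[:cut], xs[cut:]
        w = (sum((x - statistics.fmean(left)) ** 2 for x in left)
             + sum((x - statistics.fmean(right)) ** 2 for x in right))
        best = min(best, w / tot)
    return best

def mc_pvalue(xs, n_sim=200, seed=1):
    """Monte Carlo p-value: how often does a single Gaussian population
    (matched mean and SD) produce an index as small as the observed one?"""
    rng = random.Random(seed)
    mu, sd = statistics.fmean(xs), statistics.stdev(xs)
    obs = two_means_index(xs)
    hits = sum(two_means_index([rng.gauss(mu, sd) for _ in xs]) <= obs
               for _ in range(n_sim))
    return (hits + 1) / (n_sim + 1)

# Clearly bimodal invented data: the Gaussian null should almost never beat it
clustered = [0.0, 0.1, 0.15, 0.2, 5.0, 5.05, 5.1, 5.2]
p = mc_pvalue(clustered)
```

The sequential procedure in the paper applies a test of this flavour at each node of the dendrogram while controlling the family-wise error rate across the nested layers.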

  6. Swarm controlled emergence for ant clustering

    DEFF Research Database (Denmark)

    Scheidler, Alexander; Merkle, Daniel; Middendorf, Martin

    2013-01-01

    .g. moving robots, and clustering algorithms. Design/methodology/approach: Different types of control agents for that ant clustering model are designed by introducing slight changes to the behavioural rules of the normal agents. The clustering behaviour of the resulting swarms is investigated by extensive simulation studies. Findings: It is shown that complex behavior can emerge in systems with two types of agents (normal agents and control agents). For a particular behavior of the control agents, an interesting swarm size dependent effect was found. The behaviour prevents clustering when the number...... for future research to investigate the application of the method in other swarm systems. Swarm controlled emergence might be applied to control emergent effects in computing systems that consist of many autonomous components which make decentralized decisions based on local information. Practical......

  7. Secondary Analysis under Cohort Sampling Designs Using Conditional Likelihood

    Directory of Open Access Journals (Sweden)

    Olli Saarela

    2012-01-01

    Full Text Available Under cohort sampling designs, additional covariate data are collected on cases of a specific type and a randomly selected subset of noncases, primarily for the purpose of studying associations with a time-to-event response of interest. With such data available, an interest may arise to reuse them for studying associations between the additional covariate data and a secondary non-time-to-event response variable, usually collected for the whole study cohort at the outset of the study. Following earlier literature, we refer to such a situation as secondary analysis. We outline a general conditional likelihood approach for secondary analysis under cohort sampling designs and discuss the specific situations of case-cohort and nested case-control designs. We also review alternative methods based on full likelihood and inverse probability weighting. We compare the alternative methods for secondary analysis in two simulated settings and apply them in a real-data example.
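
    The inverse-probability-weighting alternative reviewed above can be sketched for a case-cohort design; the systematic 1-in-4 subcohort and all numbers below are illustrative assumptions:

```python
def ht_mean(case_vals, subcohort_vals, sampling_fraction):
    # Horvitz-Thompson estimate of the cohort mean of a covariate measured
    # only on cases (weight 1) and on a subcohort of noncases (weight 1/p)
    w = 1.0 / sampling_fraction
    total = sum(case_vals) + w * sum(subcohort_vals)
    count = len(case_vals) + w * len(subcohort_vals)
    return total / count

cases = [10, 12, 14]            # covariate measured on every case
noncases = list(range(100))     # full noncase values, kept only for checking
subcohort = noncases[::4]       # systematic 1-in-4 sample, p = 0.25
est = ht_mean(cases, subcohort, 0.25)
true_mean = (sum(cases) + sum(noncases)) / (len(cases) + len(noncases))
```

    The conditional-likelihood approach the paper develops is more efficient than this weighting scheme, but the weighted estimator is the simplest baseline to state.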

  8. Optimal design of cluster-based ad-hoc networks using probabilistic solution discovery

    International Nuclear Information System (INIS)

    Cook, Jason L.; Ramirez-Marquez, Jose Emmanuel

    2009-01-01

    The reliability of ad-hoc networks is gaining popularity in two areas: as a topic of academic interest and as a key performance parameter for defense systems employing this type of network. The ad-hoc network is dynamic and scalable and these descriptions are what attract its users. However, these descriptions are also synonymous with undefined and unpredictable when considering the impacts to the reliability of the system. The configuration of an ad-hoc network changes continuously and this fact implies that no single mathematical expression or graphical depiction can describe the system reliability-wise. Previous research has used mobility and stochastic models to address this challenge successfully. In this paper, the authors leverage the stochastic approach and build upon it a probabilistic solution discovery (PSD) algorithm to optimize the topology for a cluster-based mobile ad-hoc wireless network (MAWN). Specifically, the membership of nodes within the back-bone network or networks will be assigned in such a way as to maximize reliability subject to a constraint on cost. The constraint may also be considered as a non-monetary cost, such as weight, volume, power, or the like. When a cost is assigned to each component, a maximum cost threshold is assigned to the network, and the method is run; the result is an optimized allocation of the radios enabling back-bone network(s) to provide the most reliable network possible without exceeding the allowable cost. The method is intended for use directly as part of the architectural design process of a cluster-based MAWN to efficiently determine an optimal or near-optimal design solution. It is capable of optimizing the topology based upon all-terminal reliability (ATR), all-operating terminal reliability (AoTR), or two-terminal reliability (2TR)
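
    The reliability metrics listed at the end are commonly estimated by Monte Carlo simulation; the sketch below estimates two-terminal reliability (2TR) for a hypothetical four-node topology with independent 0.9-reliable links, not the authors' PSD algorithm:

```python
import random
from collections import deque

random.seed(2)

EDGES = [(0, 1), (1, 3), (0, 2), (2, 3)]  # two node-disjoint paths from 0 to 3
P_EDGE = 0.9                              # assumed per-link reliability

def connected(edges, s, t):
    # breadth-first search over the surviving links
    adj = {}
    for a, b in edges:
        adj.setdefault(a, []).append(b)
        adj.setdefault(b, []).append(a)
    seen, q = {s}, deque([s])
    while q:
        u = q.popleft()
        if u == t:
            return True
        for v in adj.get(u, []):
            if v not in seen:
                seen.add(v)
                q.append(v)
    return False

def two_terminal_reliability(n_trials=20000):
    ok = 0
    for _ in range(n_trials):
        up = [e for e in EDGES if random.random() < P_EDGE]
        ok += connected(up, 0, 3)
    return ok / n_trials

est = two_terminal_reliability()
# analytic check: each path works with prob 0.9 * 0.9 = 0.81,
# so the system reliability is 1 - (1 - 0.81)^2 = 0.9639
```

    An optimizer such as PSD would wrap many evaluations like this one inside a search over candidate back-bone memberships.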

  9. Detecting spatial structures in throughfall data: The effect of extent, sample size, sampling design, and variogram estimation method

    Science.gov (United States)

    Voss, Sebastian; Zimmermann, Beate; Zimmermann, Alexander

    2016-09-01

    In the last decades, an increasing number of studies analyzed spatial patterns in throughfall by means of variograms. The estimation of the variogram from sample data requires an appropriate sampling scheme: most importantly, a large sample and a layout of sampling locations that often has to serve both variogram estimation and geostatistical prediction. While some recommendations on these aspects exist, they focus on Gaussian data and high ratios of the variogram range to the extent of the study area. However, many hydrological data, and throughfall data in particular, do not follow a Gaussian distribution. In this study, we examined the effect of extent, sample size, sampling design, and calculation method on variogram estimation of throughfall data. For our investigation, we first generated non-Gaussian random fields based on throughfall data with large outliers. Subsequently, we sampled the fields with three extents (plots with edge lengths of 25 m, 50 m, and 100 m), four common sampling designs (two grid-based layouts, transect and random sampling) and five sample sizes (50, 100, 150, 200, 400). We then estimated the variogram parameters by method-of-moments (non-robust and robust estimators) and residual maximum likelihood. Our key findings are threefold. First, the choice of the extent has a substantial influence on the estimation of the variogram. A comparatively small ratio of the extent to the correlation length is beneficial for variogram estimation. Second, a combination of a minimum sample size of 150, a design that ensures the sampling of small distances and variogram estimation by residual maximum likelihood offers a good compromise between accuracy and efficiency. Third, studies relying on method-of-moments based variogram estimation may have to employ at least 200 sampling points for reliable variogram estimates. These suggested sample sizes exceed the number recommended by studies dealing with Gaussian data by up to 100%. Given that most previous…
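
    The method-of-moments (Matheron) estimator referred to above can be sketched in a few lines; the 1-D transect and the linear trend used as input are illustrative assumptions:

```python
from collections import defaultdict

def empirical_variogram(locs, vals, bin_width=1.0):
    # Matheron's method-of-moments estimator:
    # gamma(h) = (1 / 2N(h)) * sum (z_i - z_j)^2
    # over pairs whose separation falls in the lag bin h
    sums, counts = defaultdict(float), defaultdict(int)
    n = len(locs)
    for i in range(n):
        for j in range(i + 1, n):
            b = int(abs(locs[i] - locs[j]) // bin_width)
            sums[b] += (vals[i] - vals[j]) ** 2
            counts[b] += 1
    return {b: sums[b] / (2 * counts[b]) for b in sums}

# transect with a linear trend: semivariance must grow with lag
locs = [float(i) for i in range(20)]
vals = [0.5 * x for x in locs]
gamma = empirical_variogram(locs, vals)
```

    Robust variants replace the squared differences with functions less sensitive to outliers, which is why the study compares non-robust and robust estimators on heavy-tailed throughfall data.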

  10. Evaluation of cluster-randomized trials on maternal and child health research in developing countries

    DEFF Research Database (Denmark)

    Handlos, Line Neerup; Chakraborty, Hrishikesh; Sen, Pranab Kumar

    2009-01-01

    To summarize and evaluate all publications including cluster-randomized trials used for maternal and child health research in developing countries during the last 10 years. METHODS: All cluster-randomized trials published between 1998 and 2008 were reviewed, and those that met our criteria...... for inclusion were evaluated further. The criteria for inclusion were that the trial should have been conducted in maternal and child health care in a developing country and that the conclusions should have been made on an individual level. Methods of accounting for clustering in design and analysis were......, and the trials generally improved in quality. CONCLUSIONS: Shortcomings exist in the sample-size calculations and in the analysis of cluster-randomized trials conducted during maternal and child health research in developing countries. Even though there has been improvement over time, further progress in the way...
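
    The sample-size shortcoming noted here usually comes down to the design effect for equal cluster sizes, DEFF = 1 + (m - 1) * ICC; a minimal sketch, with all numbers illustrative:

```python
import math

def design_effect(m, icc):
    # variance inflation from sampling clusters of size m with
    # intracluster correlation coefficient icc
    return 1 + (m - 1) * icc

def clusters_needed(n_srs, m, icc):
    # inflate a simple-random-sampling sample size by the design effect,
    # then convert to a whole number of clusters
    n_eff = n_srs * design_effect(m, icc)
    return math.ceil(n_eff / m)

deff = design_effect(20, 0.05)        # 20 subjects per cluster, ICC = 0.05
k = clusters_needed(300, 20, 0.05)    # clusters needed to match n = 300 under SRS
```

    Ignoring this inflation, as some of the reviewed trials did, understates the required sample and overstates the precision of the results.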

  11. Star clusters and K2

    Science.gov (United States)

    Dotson, Jessie; Barentsen, Geert; Cody, Ann Marie

    2018-01-01

    The K2 survey has expanded the Kepler legacy by using the repurposed spacecraft to observe over 20 star clusters. The sample includes open and globular clusters at all ages, including very young (1-10 Myr, e.g. Taurus, Upper Sco, NGC 6530), moderately young (0.1-1 Gyr, e.g. M35, M44, Pleiades, Hyades), middle-aged (e.g. M67, Ruprecht 147, NGC 2158), and old globular clusters (e.g. M9, M19, Terzan 5). K2 observations of stellar clusters are exploring the rotation period-mass relationship to significantly lower masses than was previously possible, shedding light on the angular momentum budget and its dependence on mass and circumstellar disk properties, and illuminating the role of multiplicity in stellar angular momentum. Exoplanets discovered by K2 in stellar clusters provide planetary systems ripe for modeling given the extensive information available about their ages and environment. I will review the star clusters sampled by K2 across 16 fields so far, highlighting several characteristics, caveats, and unexplored uses of the public data set along the way. With fuel expected to run out in 2018, I will discuss the closing Campaigns, highlight the final target selection opportunities, and explain the data archive and TESS-compatible software tools the K2 mission intends to leave behind for posterity.

  12. Numerical experiment designs. Study of vibratory behaviour of PWR'S control rod clusters

    International Nuclear Information System (INIS)

    Bosselut, D.; Soulier, B.; Regnier, G.

    1997-01-01

    The application of the Experiment Design method to Finite Element Model (FEM) calculations is an original way of performing parametric studies. It has been used at EDF to simulate, over a large parametric domain, the vibrations of a PWR's control rod cluster and to analyse the rod wear process. In the first part the FEM and the location of excitation sources are described. The calculated values are: rod displacement in the guiding cards, shock forces on the guiding cards and the wear power produced. In the second part, the computed Experiment Domain is described. This method approaches the response surface by a second degree polynomial. The retained model is composed, for every parameter, of all linear, quadratic and interaction terms (26 coefficients). In all, 34 polynomials have been built to approach the effective shock forces and the mean wear power at each of the 17 guiding points. In the third part the building of the computer Experiment Design is detailed: Doehlert design adaptation to take into account a qualitative parameter, design optimization by adding four well-chosen experiments and, finally, design extension by passing from 4 to 6 parameters. In the last part, all the information deduced from application of this method is presented. The influence of parameters on calculated effective shock forces has been determined along the rods, and response surfaces have been easily approximated. The systematic and concise character of the Experiment Design technique is underlined. Easy simulation of the whole response domain by the polynomial approach allows comparison with experimental results. (authors)
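
    The second-degree response-surface idea can be shown in miniature: fitting a full quadratic in two factors by least squares over a three-level grid. The polynomial and design below are illustrative and far smaller than the 26-coefficient, 6-parameter model of the record:

```python
def features(x1, x2):
    # full second-degree model in two factors: 6 coefficients
    return [1.0, x1, x2, x1 * x1, x2 * x2, x1 * x2]

def solve(A, b):
    # Gaussian elimination with partial pivoting
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def fit_quadratic(xs, ys):
    # least squares via the normal equations X^T X beta = X^T y
    X = [features(*p) for p in xs]
    m = len(X[0])
    XtX = [[sum(row[i] * row[j] for row in X) for j in range(m)] for i in range(m)]
    Xty = [sum(X[r][i] * ys[r] for r in range(len(X))) for i in range(m)]
    return solve(XtX, Xty)

# a 3x3 factorial "computer experiment" on a known quadratic response
grid = [(a, b) for a in (-1, 0, 1) for b in (-1, 0, 1)]
true_poly = lambda x1, x2: 2 + 3 * x1 - x2 + 0.5 * x1 * x1 + x1 * x2
beta = fit_quadratic(grid, [true_poly(*p) for p in grid])
```

    Once the polynomial surrogate is fitted, the whole response domain can be explored at negligible cost, which is the point the authors make when comparing the surrogate against experimental results.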

  13. MASS CALIBRATION AND COSMOLOGICAL ANALYSIS OF THE SPT-SZ GALAXY CLUSTER SAMPLE USING VELOCITY DISPERSION σ_v AND X-RAY Y_X MEASUREMENTS

    Energy Technology Data Exchange (ETDEWEB)

    Bocquet, S.; Saro, A.; Mohr, J. J.; Bazin, G.; Chiu, I.; Desai, S. [Department of Physics, Ludwig-Maximilians-Universität, Scheinerstr. 1, D-81679 München (Germany); Aird, K. A. [University of Chicago, 5640 South Ellis Avenue, Chicago, IL 60637 (United States); Ashby, M. L. N.; Bayliss, M. [Harvard-Smithsonian Center for Astrophysics, 60 Garden Street, Cambridge, MA 02138 (United States); Bautz, M. [Kavli Institute for Astrophysics and Space Research, Massachusetts Institute of Technology, 77 Massachusetts Avenue, Cambridge, MA 02139 (United States); Benson, B. A. [Fermi National Accelerator Laboratory, Batavia, IL 60510-0500 (United States); Bleem, L. E.; Carlstrom, J. E.; Chang, C. L.; Crawford, T. M.; Crites, A. T. [Kavli Institute for Cosmological Physics, University of Chicago, 5640 South Ellis Avenue, Chicago, IL 60637 (United States); Brodwin, M. [Department of Physics and Astronomy, University of Missouri, 5110 Rockhill Road, Kansas City, MO 64110 (United States); Cho, H. M. [NIST Quantum Devices Group, 325 Broadway Mailcode 817.03, Boulder, CO 80305 (United States); Clocchiatti, A. [Departamento de Astronomia y Astrosifica, Pontificia Universidad Catolica (Chile); De Haan, T., E-mail: bocquet@usm.lmu.de [Department of Physics, McGill University, 3600 Rue University, Montreal, Quebec H3A 2T8 (Canada); and others

    2015-02-01

    We present a velocity-dispersion-based mass calibration of the South Pole Telescope Sunyaev-Zel'dovich effect survey (SPT-SZ) galaxy cluster sample. Using a homogeneously selected sample of 100 cluster candidates from 720 deg² of the survey along with 63 velocity dispersion (σ_v) and 16 X-ray Y_X measurements of sample clusters, we simultaneously calibrate the mass-observable relation and constrain cosmological parameters. Our method accounts for cluster selection, cosmological sensitivity, and uncertainties in the mass calibrators. The calibrations using σ_v and Y_X are consistent at the 0.6σ level, with the σ_v calibration preferring ∼16% higher masses. We use the full SPT_CL data set (SZ clusters + σ_v + Y_X) to measure σ_8(Ω_m/0.27)^0.3 = 0.809 ± 0.036 within a flat ΛCDM model. The SPT cluster abundance is lower than preferred by either the WMAP9 or Planck+WMAP9 polarization (WP) data, but assuming that the sum of the neutrino masses is ∑m_ν = 0.06 eV, we find the data sets to be consistent at the 1.0σ level for WMAP9 and 1.5σ for Planck+WP. Allowing for larger ∑m_ν further reconciles the results. When we combine the SPT_CL and Planck+WP data sets with information from baryon acoustic oscillations and Type Ia supernovae, the preferred cluster masses are 1.9σ higher than the Y_X calibration and 0.8σ higher than the σ_v calibration. Given the scale of these shifts (∼44% and ∼23% in mass, respectively), we execute a goodness-of-fit test; it reveals no tension, indicating that the best-fit model provides an adequate description of the data. Using the multi-probe data set, we measure Ω_m = 0.299 ± 0.009 and σ_8 = 0.829 ± 0.011. Within a νCDM model we find ∑m_ν = 0.148 ± 0.081 eV. We present a consistency test of the cosmic growth rate using SPT clusters. Allowing both the growth index γ and the dark energy equation

  14. Reproducibility of Cognitive Profiles in Psychosis Using Cluster Analysis.

    Science.gov (United States)

    Lewandowski, Kathryn E; Baker, Justin T; McCarthy, Julie M; Norris, Lesley A; Öngür, Dost

    2018-04-01

    Cognitive dysfunction is a core symptom dimension that cuts across the psychoses. Recent findings support classification of patients along the cognitive dimension using cluster analysis; however, data-derived groupings may be highly determined by sampling characteristics and the measures used to derive the clusters, and so their interpretability must be established. We examined cognitive clusters in a cross-diagnostic sample of patients with psychosis and associations with clinical and functional outcomes. We then compared our findings to a previous report of cognitive clusters in a separate sample using a different cognitive battery. Participants with affective or non-affective psychosis (n=120) and healthy controls (n=31) were administered the MATRICS Consensus Cognitive Battery, and clinical and community functioning assessments. Cluster analyses were performed on cognitive variables, and clusters were compared on demographic, cognitive, and clinical measures. Results were compared to findings from our previous report. A four-cluster solution provided a good fit to the data; profiles included a neuropsychologically normal cluster, a globally impaired cluster, and two clusters of mixed profiles. Cognitive burden was associated with symptom severity and poorer community functioning. The patterns of cognitive performance by cluster were highly consistent with our previous findings. We found evidence of four cognitive subgroups of patients with psychosis, with cognitive profiles that map closely to those produced in our previous work. Clusters were associated with clinical and community variables and a measure of premorbid functioning, suggesting that they reflect meaningful groupings: replicable, and related to clinical presentation and functional outcomes. (JINS, 2018, 24, 382-390).

  15. Characterization-Based Molecular Design of Biofuel Additives Using Chemometric and Property Clustering Techniques

    Directory of Open Access Journals (Sweden)

    Subin eHada

    2014-06-01

    Full Text Available In this work, multivariate characterization data such as infrared (IR) spectroscopy was used as a source of descriptor data involving information on molecular architecture for designing structured molecules with tailored properties. Application of multivariate statistical techniques such as principal component analysis (PCA) allowed capturing important features of the molecular architecture from complex data to build appropriate latent variable models. Combining the property clustering techniques and group contribution methods (GCM) based on characterization data in a reverse problem formulation enabled identifying candidate components by combining or mixing molecular fragments until the resulting properties match the targets. The developed methodology is demonstrated using the molecular design of a biodiesel additive which, when mixed with off-spec biodiesel, produces biodiesel that meets the desired fuel specifications. The contribution of this work is that the complex structures and orientations of the molecule can be included in the design, thereby allowing enumeration of all feasible candidate molecules that matched the identified target but were not part of the original training set of molecules.
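
    The PCA step can be sketched with a power iteration that extracts the leading latent direction from characterization data; the synthetic "spectra", which vary along a single known direction, are an illustrative assumption:

```python
def first_pc(data, iters=200):
    # leading eigenvector of the sample covariance matrix via power iteration
    n, d = len(data), len(data[0])
    means = [sum(row[j] for row in data) / n for j in range(d)]
    X = [[row[j] - means[j] for j in range(d)] for row in data]  # center
    C = [[sum(X[r][i] * X[r][j] for r in range(n)) / (n - 1)
          for j in range(d)] for i in range(d)]
    v = [1.0] * d
    for _ in range(iters):
        w = [sum(C[i][j] * v[j] for j in range(d)) for i in range(d)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v

# synthetic descriptor vectors varying only along the direction (1, 2, 0)
data = [[t, 2 * t, 0.0] for t in range(-5, 6)]
pc = first_pc(data)
```

    In the paper's workflow the scores along such latent directions, rather than the raw spectra, feed the property clustering and reverse problem formulation.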

  16. A Novel Cluster Head Selection Algorithm Based on Fuzzy Clustering and Particle Swarm Optimization.

    Science.gov (United States)

    Ni, Qingjian; Pan, Qianqian; Du, Huimin; Cao, Cen; Zhai, Yuqing

    2017-01-01

    An important objective of wireless sensor networks is to prolong the network life cycle, and topology control is of great significance for extending the network life cycle. Based on previous work, for cluster head selection in hierarchical topology control, we propose a solution based on fuzzy clustering preprocessing and particle swarm optimization. More specifically, first, a fuzzy clustering algorithm is used for initial clustering of sensor nodes according to geographical locations, where a sensor node belongs to a cluster with a determined probability, and the number of initial clusters is analyzed and discussed. Furthermore, the fitness function is designed considering both the energy consumption and distance factors of the wireless sensor network. Finally, the cluster head nodes in the hierarchical topology are determined based on the improved particle swarm optimization. Experimental results show that, compared with traditional methods, the proposed method achieves the purpose of reducing the mortality rate of nodes and extending the network life cycle.
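
    The fuzzy clustering preprocessing step can be sketched with a 1-D fuzzy c-means; the node positions, the deterministic initialization, and all parameters below are illustrative assumptions, not the paper's algorithm:

```python
import random

random.seed(3)

def fuzzy_c_means(xs, c=2, m=2.0, iters=30):
    # u[i][j] = membership of point i in cluster j; each row sums to 1
    centers = [min(xs), max(xs)]  # deterministic spread-out start (assumes c == 2)
    u = [[0.0] * c for _ in xs]
    for _ in range(iters):
        for i, x in enumerate(xs):
            d = [abs(x - ctr) + 1e-12 for ctr in centers]
            for j in range(c):
                u[i][j] = 1.0 / sum((d[j] / d[k]) ** (2 / (m - 1)) for k in range(c))
        # centers are membership-weighted means
        centers = [sum(u[i][j] ** m * xs[i] for i in range(len(xs))) /
                   sum(u[i][j] ** m for i in range(len(xs))) for j in range(c)]
    return centers, u

# two bands of node positions along a line
xs = [random.gauss(0, 0.5) for _ in range(20)] + [random.gauss(10, 0.5) for _ in range(20)]
centers, u = fuzzy_c_means(xs)
```

    In the proposed method, memberships like these initialize the search, and particle swarm optimization then picks cluster heads under an energy-and-distance fitness function.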

  17. Subaru Weak Lensing Measurements of Four Strong Lensing Clusters: Are Lensing Clusters Over-Concentrated?

    Energy Technology Data Exchange (ETDEWEB)

    Oguri, Masamune; Hennawi, Joseph F.; Gladders, Michael D.; Dahle, Haakon; Natarajan, Priyamvada; Dalal, Neal; Koester, Benjamin P.; Sharon, Keren; Bayliss, Matthew

    2009-01-29

    We derive radial mass profiles of four strong lensing selected clusters which show prominent giant arcs (Abell 1703, SDSS J1446+3032, SDSS J1531+3414, and SDSS J2111-0115), by combining detailed strong lens modeling with weak lensing shear measured from deep Subaru Suprime-cam images. Weak lensing signals are detected at high significance for all four clusters, whose redshifts range from z = 0.28 to 0.64. We demonstrate that adding strong lensing information with known arc redshifts significantly improves constraints on the mass density profile, compared to those obtained from weak lensing alone. While the mass profiles are well fitted by the universal form predicted in N-body simulations of the {Lambda}-dominated cold dark matter model, all four clusters appear to be slightly more centrally concentrated (the concentration parameters c{sub vir} {approx} 8) than theoretical predictions, even after accounting for the bias toward higher concentrations inherent in lensing selected samples. Our results are consistent with previous studies which similarly detected a concentration excess, and increase the total number of clusters studied with the combined strong and weak lensing technique to ten. Combining our sample with previous work, we find that clusters with larger Einstein radii are more anomalously concentrated. We also present a detailed model of the lensing cluster Abell 1703 with constraints from multiple image families, and find the dark matter inner density profile to be cuspy with the slope consistent with -1, in agreement with expectations.

  18. Cluster processing business level monitor

    International Nuclear Information System (INIS)

    Muniz, Francisco J.

    2017-01-01

    This article describes a Cluster Processing Monitor. Several applications with this functionality can be found freely through a Google search. However, those applications may offer more features than are needed in the Processing Monitor proposed here, which makes their output difficult for the user to understand at a glance. In addition, such monitors may add unnecessary processing cost to the Cluster. For these reasons, a completely new Cluster Processing Monitor module was designed and implemented. In the CDTN, Clusters are broadly used, mainly in deterministic methods (CFD) and non-deterministic methods (Monte Carlo). (author)

  19. Cluster processing business level monitor

    Energy Technology Data Exchange (ETDEWEB)

    Muniz, Francisco J., E-mail: muniz@cdtn.br [Centro de Desenvolvimento da Tecnologia Nuclear (CDTN/CNEN-MG), Belo Horizonte, MG (Brazil)

    2017-07-01

    This article describes a Cluster Processing Monitor. Several applications with this functionality can be found freely through a Google search. However, those applications may offer more features than are needed in the Processing Monitor proposed here, which makes their output difficult for the user to understand at a glance. In addition, such monitors may add unnecessary processing cost to the Cluster. For these reasons, a completely new Cluster Processing Monitor module was designed and implemented. In the CDTN, Clusters are broadly used, mainly in deterministic methods (CFD) and non-deterministic methods (Monte Carlo). (author)

  20. A new clustering algorithm for scanning electron microscope images

    Science.gov (United States)

    Yousef, Amr; Duraisamy, Prakash; Karim, Mohammad

    2016-04-01

    A scanning electron microscope (SEM) is a type of electron microscope that produces images of a sample by scanning it with a focused beam of electrons. The electrons interact with the sample atoms, producing various signals that are collected by detectors. The gathered signals contain information about the sample's surface topography and composition. The electron beam is generally scanned in a raster scan pattern, and the beam's position is combined with the detected signal to produce an image. The most common configuration for an SEM produces a single value per pixel, with the results usually rendered as grayscale images. The captured images may suffer from insufficient brightness, anomalous contrast, jagged edges, and poor quality due to low signal-to-noise ratio, grained topography and poor surface details. Segmentation of SEM images is a challenging problem in the presence of the previously mentioned distortions. In this paper, we focus on the clustering of this type of image. In that sense, we evaluate the performance of well-known unsupervised clustering and classification techniques such as connectivity-based clustering (hierarchical clustering), centroid-based clustering, distribution-based clustering and density-based clustering. Furthermore, we propose a new spatial fuzzy clustering technique that works efficiently on this type of image and compare its results against these regular techniques in terms of clustering validation metrics.
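
    Of the families compared here, density-based clustering is the easiest to sketch; below is a DBSCAN-style pass over 1-D pixel intensities, where the thresholds and the data are illustrative assumptions:

```python
def dbscan_1d(xs, eps=1.0, min_pts=3):
    # density-based clustering on intensities: a point is a "core" point if
    # at least min_pts points (itself included) lie within eps of it;
    # clusters grow by expanding neighbourhoods of core points
    labels = [-1] * len(xs)  # -1 marks noise / unassigned
    cid = 0
    for i, x in enumerate(xs):
        if labels[i] != -1:
            continue
        neigh = [j for j, y in enumerate(xs) if abs(x - y) <= eps]
        if len(neigh) < min_pts:
            continue  # not a core point; stays noise unless absorbed later
        labels[i] = cid
        stack = neigh
        while stack:
            j = stack.pop()
            if labels[j] == -1:
                labels[j] = cid
                nj = [k for k, y in enumerate(xs) if abs(xs[j] - y) <= eps]
                if len(nj) >= min_pts:
                    stack.extend(nj)
        cid += 1
    return labels

# intensities: dark background, a mid-gray feature, and one bright outlier
xs = [10, 11, 12, 11, 10, 128, 130, 129, 131, 250]
labels = dbscan_1d(xs, eps=3, min_pts=3)
```

    Unlike centroid-based methods, this family needs no preset cluster count and marks isolated bright pixels as noise, which is useful for the low signal-to-noise images the paper targets.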

  1. An Archival Search For Young Globular Clusters in Galaxies

    Science.gov (United States)

    Whitmore, Brad

    1995-07-01

    One of the most intriguing results from HST has been the discovery of ultraluminous star clusters in interacting and merging galaxies. These clusters have the luminosities, colors, and sizes that would be expected of young globular clusters produced by the interaction. We propose to use the data in the HST Archive to determine how prevalent this phenomenon is, and to determine whether similar clusters are produced in other environments. Three samples will be extracted and studied in a systematic and consistent manner: 1} interacting and merging galaxies, 2} starburst galaxies, 3} a control sample of ``normal'' galaxies. A preliminary search of the archives shows that there are at least 20 galaxies in each of these samples, and the number will grow by about 50% as new observations become available. The data will be used to determine the luminosity function, color histogram, spatial distribution, and structural properties of the clusters using the same techniques employed in our study of NGC 7252 {``Atoms-for-Peace'' galaxy} and NGC 4038/4039 {``The Antennae''}. Our ultimate goals are: 1} to understand how globular clusters form, and 2} to use the clusters as evolutionary tracers to unravel the histories of interacting galaxies.

  2. DAFi: A directed recursive data filtering and clustering approach for improving and interpreting data clustering identification of cell populations from polychromatic flow cytometry data.

    Science.gov (United States)

    Lee, Alexandra J; Chang, Ivan; Burel, Julie G; Lindestam Arlehamn, Cecilia S; Mandava, Aishwarya; Weiskopf, Daniela; Peters, Bjoern; Sette, Alessandro; Scheuermann, Richard H; Qian, Yu

    2018-04-17

    Computational methods for identification of cell populations from polychromatic flow cytometry data are changing the paradigm of cytometry bioinformatics. Data clustering is the most common computational approach to unsupervised identification of cell populations from multidimensional cytometry data. However, interpretation of the identified data clusters is labor-intensive. Certain types of user-defined cell populations are also difficult to identify by fully automated data clustering analysis. Both are roadblocks before a cytometry lab can adopt the data clustering approach for cell population identification in routine use. We found that combining recursive data filtering and clustering with constraints converted from the user manual gating strategy can effectively address these two issues. We named this new approach DAFi: Directed Automated Filtering and Identification of cell populations. Design of DAFi preserves the data-driven characteristics of unsupervised clustering for identifying novel cell subsets, but also makes the results interpretable to experimental scientists through mapping and merging the multidimensional data clusters into the user-defined two-dimensional gating hierarchy. The recursive data filtering process in DAFi helped identify small data clusters which are otherwise difficult to resolve by a single run of the data clustering method due to the statistical interference of the irrelevant major clusters. Our experiment results showed that the proportions of the cell populations identified by DAFi, while being consistent with those by expert centralized manual gating, have smaller technical variances across samples than those from individual manual gating analysis and the nonrecursive data clustering analysis. Compared with manual gating segregation, DAFi-identified cell populations avoided the abrupt cut-offs on the boundaries. DAFi has been implemented to be used with multiple data clustering methods including K-means, FLOCK, FlowSOM, and
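
    The directed recursive filtering idea can be sketched as nested gates applied to event records. The marker names, the bounds, and the use of simple rectangular ranges instead of data-driven cluster boundaries are all illustrative assumptions, not the DAFi implementation:

```python
def in_gate(event, bounds):
    # bounds: {marker: (lo, hi)}; an event passes if every marker is in range
    return all(lo <= event[m] <= hi for m, (lo, hi) in bounds.items())

def filter_recursive(events, gate):
    # gate: {"bounds": {...}, "children": {name: gate, ...}}
    # filter at this level, then recurse so each child sees only the
    # events that survived its parent gate
    inside = [e for e in events if in_gate(e, gate["bounds"])]
    pops = {}
    for name, child in gate.get("children", {}).items():
        sub, sub_pops = filter_recursive(inside, child)
        pops[name] = sub
        pops.update(sub_pops)
    return inside, pops

# five toy events with two hypothetical markers
events = [{"CD3": x, "CD4": y} for x, y in
          [(5, 1), (6, 8), (7, 9), (1, 1), (6, 2)]]
gating = {"bounds": {"CD3": (4, 10)},                      # parent gate
          "children": {"CD4+": {"bounds": {"CD4": (5, 10)}}}}  # child gate
t_cells, pops = filter_recursive(events, gating)
```

    DAFi's contribution is to replace these fixed rectangles with cluster memberships at each level, so small subpopulations are resolved without interference from the irrelevant major clusters.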

  3. Mass Distribution in Galaxy Cluster Cores

    Energy Technology Data Exchange (ETDEWEB)

    Hogan, M. T.; McNamara, B. R.; Pulido, F.; Vantyghem, A. N. [Department of Physics and Astronomy, University of Waterloo, Waterloo, ON, N2L 3G1 (Canada); Nulsen, P. E. J. [Harvard-Smithsonian Center for Astrophysics, 60 Garden Street, Cambridge, MA 02138 (United States); Russell, H. R. [Institute of Astronomy, Madingley Road, Cambridge CB3 0HA (United Kingdom); Edge, A. C. [Centre for Extragalactic Astronomy, Department of Physics, Durham University, Durham DH1 3LE (United Kingdom); Main, R. A., E-mail: m4hogan@uwaterloo.ca [Canadian Institute for Theoretical Astrophysics, University of Toronto, 60 St. George Street, Toronto, ON, M5S 3H8 (Canada)

    2017-03-01

    Many processes within galaxy clusters, such as those believed to govern the onset of thermally unstable cooling and active galactic nucleus feedback, are dependent upon local dynamical timescales. However, accurate mapping of the mass distribution within individual clusters is challenging, particularly toward cluster centers where the total mass budget has substantial radially dependent contributions from the stellar (M_*), gas (M_gas), and dark matter (M_DM) components. In this paper we use a small sample of galaxy clusters with deep Chandra observations and good ancillary tracers of their gravitating mass at both large and small radii to develop a method for determining mass profiles that span a wide radial range and extend down into the central galaxy. We also consider potential observational pitfalls in understanding cooling in hot cluster atmospheres, and find tentative evidence for a relationship between the radial extent of cooling X-ray gas and nebular Hα emission in cool-core clusters. At large radii the entropy profiles of our clusters agree with the baseline power law of K ∝ r^1.1 expected from gravity alone. At smaller radii our entropy profiles become shallower but continue with a power law of the form K ∝ r^0.67 down to our resolution limit. Among this small sample of cool-core clusters we therefore find no support for the existence of a central flat “entropy floor.”
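
    Power-law entropy profiles such as K ∝ r^1.1 flattening to r^0.67 are typically measured as log-log slopes; a minimal sketch with synthetic data (the normalization and radii are illustrative assumptions):

```python
import math

def power_law_slope(rs, ks):
    # least-squares slope of log K versus log r, i.e. alpha in K ∝ r^alpha
    lx = [math.log(r) for r in rs]
    ly = [math.log(k) for k in ks]
    n = len(lx)
    mx, my = sum(lx) / n, sum(ly) / n
    return (sum((x - mx) * (y - my) for x, y in zip(lx, ly)) /
            sum((x - mx) ** 2 for x in lx))

# synthetic entropy profile following the gravity-only baseline K = 100 * r^1.1
rs = [10, 20, 50, 100, 200, 500]
ks = [100 * r ** 1.1 for r in rs]
alpha = power_law_slope(rs, ks)
```

    Fitting separate slopes to the inner and outer radial ranges of a real profile is how a break like the r^1.1 to r^0.67 transition would be quantified.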

  4. Ethical and policy issues in cluster randomized trials: rationale and design of a mixed methods research study

    Directory of Open Access Journals (Sweden)

    Chaudhry Shazia H

    2009-07-01

    Full Text Available Abstract Background Cluster randomized trials are an increasingly important methodological tool in health research. In cluster randomized trials, intact social units or groups of individuals, such as medical practices, schools, or entire communities – rather than individuals themselves – are randomly allocated to intervention or control conditions, while outcomes are then observed on individual cluster members. The substantial methodological differences between cluster randomized trials and conventional randomized trials pose serious challenges to the current conceptual framework for research ethics. The ethical implications of randomizing groups rather than individuals are not addressed in current research ethics guidelines, nor have they even been thoroughly explored. The main objectives of this research are to: (1) identify ethical issues arising in cluster trials and learn how they are currently being addressed; (2) understand how ethics reviews of cluster trials are carried out in different countries (Canada, the USA and the UK); (3) elicit the views and experiences of trial participants and cluster representatives; (4) develop well-grounded guidelines for the ethical conduct and review of cluster trials by conducting an extensive ethical analysis and organizing a consensus process; (5) disseminate the guidelines to researchers, research ethics boards (REBs), journal editors, and research funders. Methods We will use a mixed-methods (qualitative and quantitative) approach incorporating both empirical and conceptual work. Empirical work will include a systematic review of a random sample of published trials, a survey and in-depth interviews with trialists, a survey of REBs, and in-depth interviews and focus group discussions with trial participants and gatekeepers. The empirical work will inform the concurrent ethical analysis which will lead to a guidance document laying out principles, policy options, and rationale for proposed guidelines. An

  5. Clinical evaluation of nonsyndromic dental anomalies in Dravidian population: A cluster sample analysis.

    Science.gov (United States)

    Yamunadevi, Andamuthu; Selvamani, M; Vinitha, V; Srivandhana, R; Balakrithiga, M; Prabhu, S; Ganapathy, N

    2015-08-01

    To record the prevalence rate of dental anomalies in the Dravidian population and analyze the percentage of individual anomalies in the population. A cluster sample analysis was done, in which 244 subjects studying in a dental institution were all included and analyzed for the occurrence of dental anomalies by clinical examination, excluding third molars from the analysis. 31.55% of the study subjects had dental anomalies; shape anomalies were most prevalent (22.1%), followed by size (8.6%), number (3.2%) and position anomalies (0.4%). Retained deciduous teeth were seen in 1.63%. Among the individual anomalies, Talon's cusp (TC) was seen predominantly (14.34%), followed by microdontia (6.6%) and supernumerary cusps (5.73%). The prevalence rate of dental anomalies in the Dravidian population is 31.55% in the present study, exclusive of third molars. Shape anomalies are most common, and TC is the most commonly noted anomaly. Varying prevalence rates are reported in different geographical regions of the world.

  6. INDIVIDUAL AND GROUP GALAXIES IN CNOC1 CLUSTERS

    International Nuclear Information System (INIS)

    Li, I. H.; Yee, H. K. C.; Ellingson, E.

    2009-01-01

    Using wide-field BVR_cI imaging for a sample of 16 intermediate-redshift (0.17 < z < 0.55) CNOC1 clusters, we use the red galaxy fraction (f_red) to infer the evolutionary status of galaxies in clusters, using both individual galaxies and galaxies in groups. We apply the local galaxy density, Σ_5, derived using the fifth-nearest-neighbor distance, as a measure of local environment, and the cluster-centric radius, r_CL, as a proxy for the global cluster environment. Our cluster sample exhibits a Butcher-Oemler effect in both luminosity-selected and stellar-mass-selected samples. We find that f_red depends strongly on Σ_5 and r_CL, and the Butcher-Oemler effect is observed in all Σ_5 and r_CL bins. However, when the cluster galaxies are separated into r_CL bins, or into group and nongroup subsamples, the dependence on local galaxy density becomes much weaker. This suggests that the properties of the dark matter halo in which a galaxy resides have a dominant effect on its galaxy population and evolutionary history. We find that our data are consistent with the scenario that cluster galaxies situated in successively richer groups (i.e., more massive dark matter halos) reach a high f_red value at earlier redshifts. Associated with this, we observe a clear signature of 'preprocessing', in which cluster galaxies belonging to moderately massive infalling galaxy groups show a much stronger evolution in f_red than those classified as nongroup galaxies, especially at the outskirts of the cluster. This result suggests that galaxies in groups infalling into clusters are significant contributors to the Butcher-Oemler effect.

  7. X-ray Spectra of Distant Clusters

    Science.gov (United States)

    Ellingson, E.

    1998-01-01

    The masses of galaxy clusters are dominated by dark matter, and a robust determination of their temperatures and masses has the potential to indicate how much dark matter exists on large scales in the universe, and hence the cosmological parameter Omega. X-ray observations of galaxy clusters provide a direct measure of both the gas mass in the intra-cluster medium and the total gravitating mass of the cluster. We used new and archival ASCA and ROSAT observations to measure these quantities for a sample of intermediate-redshift clusters which have also been subject to intensive dynamical studies, in order to compare the mass estimates from different methods. We used data from 12 of the CNOC cluster sample at 0.18 < z < 0.55 for this study. A direct comparison with dynamical mass estimates from Carlberg, Yee & Ellingson (1997) yielded surprisingly good results: the X-ray/dynamical mass ratios have a mean of 0.96 +/- 0.10, indicating that for this sample both methods are probably yielding very robust mass estimates. Comparison with mass estimates from gravitational lensing studies from the literature showed a small systematic offset with respect to weak lensing estimates, and large discrepancies with strong lensing estimates. The latter is not surprising, given that these measurements are made close to the central core, where optical and X-ray estimates are less certain, and where substructure and the effects of individual galaxies are more pronounced. These results are presented in Lewis, Ellingson, Morris & Carlberg (1998, submitted to the Astrophysical Journal).

  8. Characterization-Based Molecular Design of Bio-Fuel Additives Using Chemometric and Property Clustering Techniques

    International Nuclear Information System (INIS)

    Hada, Subin; Solvason, Charles C.; Eden, Mario R.

    2014-01-01

    In this work, multivariate characterization data such as infrared spectroscopy were used as a source of descriptor data carrying information on molecular architecture for designing structured molecules with tailored properties. Application of multivariate statistical techniques such as principal component analysis allowed capturing the important features of the molecular architecture from an enormous amount of complex data to build appropriate latent variable models. Combining property clustering techniques and characterization-based group contribution methods (cGCM) in a reverse problem formulation enabled identifying candidate components by combining or mixing molecular fragments until the resulting properties matched the targets. The developed methodology is demonstrated through the molecular design of a biodiesel additive, which, when mixed with off-spec biodiesel, produces biodiesel that meets the desired fuel specifications. The contribution of this work is that the complex structures and orientations of the molecules can be included in the design, thereby allowing enumeration of all feasible candidate molecules that match the identified target but were not part of the original training set of molecules.

  9. Characterization-Based Molecular Design of Bio-Fuel Additives Using Chemometric and Property Clustering Techniques

    Energy Technology Data Exchange (ETDEWEB)

    Hada, Subin; Solvason, Charles C.; Eden, Mario R., E-mail: edenmar@auburn.edu [Department of Chemical Engineering, Auburn University, Auburn, AL (United States)

    2014-06-10

    In this work, multivariate characterization data such as infrared spectroscopy were used as a source of descriptor data carrying information on molecular architecture for designing structured molecules with tailored properties. Application of multivariate statistical techniques such as principal component analysis allowed capturing the important features of the molecular architecture from an enormous amount of complex data to build appropriate latent variable models. Combining property clustering techniques and characterization-based group contribution methods (cGCM) in a reverse problem formulation enabled identifying candidate components by combining or mixing molecular fragments until the resulting properties matched the targets. The developed methodology is demonstrated through the molecular design of a biodiesel additive, which, when mixed with off-spec biodiesel, produces biodiesel that meets the desired fuel specifications. The contribution of this work is that the complex structures and orientations of the molecules can be included in the design, thereby allowing enumeration of all feasible candidate molecules that match the identified target but were not part of the original training set of molecules.

  10. Sampling designs and methods for estimating fish-impingement losses at cooling-water intakes

    International Nuclear Information System (INIS)

    Murarka, I.P.; Bodeau, D.J.

    1977-01-01

    Several systems for estimating fish impingement at power plant cooling-water intakes are compared to determine the most statistically efficient sampling designs and methods. Compared to a simple random sampling scheme, the stratified systematic random sampling, systematic random sampling, and stratified random sampling schemes yield higher efficiencies and better estimators for the parameters in two models of fish impingement as a time-series process. Mathematical results and illustrative examples of the application of the sampling schemes to simulated and real data are given. Some sampling designs applicable to fish-impingement studies are presented in appendixes.
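    The efficiency gain from stratification described above can be illustrated with a toy simulation (the counts, diurnal strata, and parameters below are hypothetical, not the paper's data or models): when the between-stratum variance dominates, a proportionally allocated stratified mean has a much smaller sampling error than a simple random sample of the same size.

    ```python
    import random
    import statistics

    random.seed(42)

    # Hypothetical hourly impingement counts with a strong diurnal pattern:
    # nights (stratum 1) see far more impingement than days (stratum 2).
    night = [random.gauss(100, 10) for _ in range(120)]
    day = [random.gauss(20, 10) for _ in range(120)]
    population = night + day
    true_mean = statistics.mean(population)

    def srs_estimate(n):
        """Simple random sample mean of n units from the whole series."""
        return statistics.mean(random.sample(population, n))

    def stratified_estimate(n):
        """Proportionally allocated stratified sample mean (equal strata here)."""
        half = n // 2
        m_night = statistics.mean(random.sample(night, half))
        m_day = statistics.mean(random.sample(day, half))
        return 0.5 * m_night + 0.5 * m_day  # equal stratum weights

    # Repeat each design many times and compare the spread of the estimators.
    srs = [srs_estimate(20) for _ in range(2000)]
    strat = [stratified_estimate(20) for _ in range(2000)]
    print(round(statistics.stdev(srs), 2), round(statistics.stdev(strat), 2))
    ```

    The stratified estimator's standard deviation is several times smaller because stratification removes the night/day component of the variance, which is the same mechanism behind the efficiency rankings reported in the record.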

  11. Community-led trials: Intervention co-design in a cluster randomised controlled trial.

    Science.gov (United States)

    Andersson, Neil

    2017-05-30

    In conventional randomised controlled trials (RCTs), researchers design the interventions. In the Camino Verde trial, each intervention community designed its own programmes to prevent dengue. Instead of fixed actions or menus of activities to choose from, the trial randomised clusters to a participatory research protocol that began with sharing and discussing evidence from a local survey, going on to local authorship of the action plan for vector control. Adding equitable stakeholder engagement to RCT infrastructure anchors the research culturally, making it more meaningful to stakeholders. Replicability in other conditions is straightforward, since all intervention clusters used the same engagement protocol to discuss and to mobilize for dengue prevention. The ethical codes associated with RCTs play out differently in community-led pragmatic trials, where communities essentially choose what they want to do. Several discussion groups in each intervention community produced multiple plans for prevention, recognising different time lines. Some chose fast turnarounds, like elimination of breeding sites, and some chose longer term actions like garbage disposal and improving water supplies. A big part of the skill set for community-led trials is being able to stand back and simply support communities in what they want to do and how they want to do it, something that does not come naturally to many vector control programs or to RCT researchers. Unexpected negative outcomes can come from the turbulence implicit in participatory research. One example was the gender dynamic in the Mexican arm of the Camino Verde trial. Strong involvement of women in dengue control activities seems to have discouraged men in settings where activity in public spaces or outside of the home would ordinarily be considered a "male competence". Community-led trials address the tension between one-size-fits-all programme interventions and local needs. Whatever the conventional wisdom about how

  12. Community-led trials: Intervention co-design in a cluster randomised controlled trial

    Directory of Open Access Journals (Sweden)

    Neil Andersson

    2017-05-01

    Full Text Available Abstract In conventional randomised controlled trials (RCTs), researchers design the interventions. In the Camino Verde trial, each intervention community designed its own programmes to prevent dengue. Instead of fixed actions or menus of activities to choose from, the trial randomised clusters to a participatory research protocol that began with sharing and discussing evidence from a local survey, going on to local authorship of the action plan for vector control. Adding equitable stakeholder engagement to RCT infrastructure anchors the research culturally, making it more meaningful to stakeholders. Replicability in other conditions is straightforward, since all intervention clusters used the same engagement protocol to discuss and to mobilize for dengue prevention. The ethical codes associated with RCTs play out differently in community-led pragmatic trials, where communities essentially choose what they want to do. Several discussion groups in each intervention community produced multiple plans for prevention, recognising different time lines. Some chose fast turnarounds, like elimination of breeding sites, and some chose longer term actions like garbage disposal and improving water supplies. A big part of the skill set for community-led trials is being able to stand back and simply support communities in what they want to do and how they want to do it, something that does not come naturally to many vector control programs or to RCT researchers. Unexpected negative outcomes can come from the turbulence implicit in participatory research. One example was the gender dynamic in the Mexican arm of the Camino Verde trial. Strong involvement of women in dengue control activities seems to have discouraged men in settings where activity in public spaces or outside of the home would ordinarily be considered a “male competence”. Community-led trials address the tension between one-size-fits-all programme interventions and local needs. Whatever the

  13. A multimembership catalogue for 1876 open clusters using UCAC4 data

    Science.gov (United States)

    Sampedro, L.; Dias, W. S.; Alfaro, E. J.; Monteiro, H.; Molino, A.

    2017-10-01

    The main objective of this work is to determine the cluster members of 1876 open clusters, using positions and proper motions from the astrometric fourth United States Naval Observatory (USNO) CCD Astrograph Catalog (UCAC4). For this purpose, we apply three different methods, all based on a Bayesian approach but with different formulations: a purely parametric method, a completely non-parametric algorithm, and a third, recently developed by Sampedro & Alfaro, using both formulations at different steps of the whole process. The first and second statistical moments of the members' phase-space subspace, obtained after applying the three methods, are compared for every cluster. Although, on average, the three methods yield similar results, there are specific differences between them, as well as for some particular clusters. The comparison with other published catalogues shows good agreement. We have also estimated, for the first time, the mean proper motion for a sample of 18 clusters. The results are organized in a single catalogue formed by two main files, one with the most relevant information for each cluster, partially including that in UCAC4, and the other showing the individual membership probabilities for each star in the cluster area. The final catalogue, with an interface design that enables easy interaction with the user, is available in electronic format at the Stellar Systems Group (SSG-IAA) web site (http://ssg.iaa.es/en/content/sampedro-cluster-catalog).
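    The parametric idea behind this kind of membership assignment can be sketched with a toy two-component mixture in proper-motion space: cluster stars cluster tightly around a common proper motion, field stars scatter broadly, and each star gets a posterior membership probability. All means, dispersions, and the mixing weight below are assumed for illustration (a real pipeline would fit them, e.g. with EM); this is not the catalogue's actual algorithm.

    ```python
    import math
    import random

    random.seed(1)

    # Toy proper motions (mas/yr): a tight cluster population plus a broad field.
    cluster = [(random.gauss(-4.0, 0.3), random.gauss(2.0, 0.3)) for _ in range(60)]
    field = [(random.gauss(0.0, 5.0), random.gauss(0.0, 5.0)) for _ in range(140)]
    stars = cluster + field

    def gauss2d(x, y, mx, my, s):
        """Isotropic 2-D Gaussian density."""
        return math.exp(-((x - mx) ** 2 + (y - my) ** 2) / (2 * s * s)) / (2 * math.pi * s * s)

    def membership(x, y, w=0.3):
        """Posterior probability that one star belongs to the cluster component,
        assuming known component parameters and mixing weight w."""
        pc = w * gauss2d(x, y, -4.0, 2.0, 0.3)       # cluster component
        pf = (1 - w) * gauss2d(x, y, 0.0, 0.0, 5.0)  # field component
        return pc / (pc + pf)

    probs = [membership(x, y) for x, y in stars]
    members = [p > 0.5 for p in probs]
    print(sum(members[:60]), sum(members[60:]))  # recovered cluster vs. field
    ```

    With well-separated components, nearly all true cluster stars exceed the 0.5 threshold while almost no field stars do, which is why proper motions alone already give useful membership lists.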

  14. Hierarchical Network Design

    DEFF Research Database (Denmark)

    Thomadsen, Tommy

    2005-01-01

    Communication networks are immensely important today, since both companies and individuals use numerous services that rely on them. This thesis considers the design of hierarchical (communication) networks. Hierarchical networks consist of layers of networks and are well-suited for coping...... with changing and increasing demands. Two-layer networks consist of one backbone network, which interconnects cluster networks. The clusters consist of nodes and links, which connect the nodes. One node in each cluster is a hub node, and the backbone interconnects the hub nodes of each cluster and thus...... the clusters. The design of hierarchical networks involves clustering of nodes, hub selection, and network design, i.e. selection of links and routing of flows. Hierarchical networks have been in use for decades, but integrated design of these networks has only been considered for very special types of networks...

  15. Properties of the disk system of globular clusters

    International Nuclear Information System (INIS)

    Armandroff, T.E.

    1989-01-01

    A large refined data sample is used to study the properties and origin of the disk system of globular clusters. A scale height for the disk cluster system of 800-1500 pc is found, which is consistent with scale-height determinations for samples of field stars identified with the Galactic thick disk. A rotational velocity of 193 ± 29 km/s and a line-of-sight velocity dispersion of 59 ± 14 km/s have been found for the metal-rich clusters. 70 references

  16. Stabilizing ultrasmall Au clusters for enhanced photoredox catalysis.

    Science.gov (United States)

    Weng, Bo; Lu, Kang-Qiang; Tang, Zichao; Chen, Hao Ming; Xu, Yi-Jun

    2018-04-18

    Recently, loading ligand-protected gold (Au) clusters as visible-light photosensitizers onto various supports for photoredox catalysis has attracted considerable attention. However, efficient control of the long-term photostability of Au clusters at the metal-support interface remains challenging. Herein, we report a simple and efficient method for enhancing the photostability of glutathione-protected Au clusters (Au GSH clusters) loaded on the surface of SiO_2 spheres by utilizing multifunctional branched poly-ethylenimine (BPEI) as a surface-charge-modifying, reducing and stabilizing agent. The sequential coating of thickness-controlled TiO_2 shells can further significantly improve the photocatalytic efficiency, while the structurally designed core-shell SiO_2-Au GSH clusters-BPEI@TiO_2 composites maintain high photostability under long-term light illumination. This joint strategy of interfacial modification and composition engineering provides a facile guideline for stabilizing ultrasmall Au clusters and for the rational design of Au cluster-based composites with improved activity toward targeted applications in photoredox catalysis.

  17. Formal And Informal Macro-Regional Transport Clusters As A Primary Step In The Design And Implementation Of Cluster-Based Strategies

    Directory of Open Access Journals (Sweden)

    Nežerenko Olga

    2015-09-01

    Full Text Available The aim of the study is the identification of a formal macro-regional transport and logistics cluster and its development trends on a macro-regional level in 2007-2011 by means of hierarchical cluster analysis. The central approach of the study is based on two concepts: (1) the concept of formal and informal macro-regions, and (2) the concept of clustering, which is based on the similarities shared by the countries of a macro-region and is tightly related to the concept of macro-region. The authors seek to answer the question of whether the formation of a formal transport cluster could provide the BSR a stable competitive position in the global transportation and logistics market.

  18. A cluster-randomised, controlled trial to assess the impact of a workplace osteoporosis prevention intervention on the dietary and physical activity behaviours of working women: study protocol

    OpenAIRE

    Tan, Ai May; LaMontagne, Anthony D; Sarmugam, Rani; Howard, Peter

    2013-01-01

    Background Osteoporosis is a debilitating disease and its risk can be reduced through adequate calcium consumption and physical activity. This protocol paper describes a workplace-based intervention targeting behaviour change in premenopausal women working in sedentary occupations. Method/Design A cluster-randomised design was used, comparing the efficacy of a tailored intervention to standard care. Workplaces were the clusters and units of randomisation and intervention. Sample size calculat...

  19. The X-ray spectra of clusters of galaxies and their relationship to other cluster properties

    International Nuclear Information System (INIS)

    Mitchell, R.J.; Dickens, R.J.; Burnell, S.J.B.; Culhane, J.L.

    1979-01-01

    New observations with the MSSL proportional counter spectrometer on the Ariel V satellite of the X-ray spectra of 20 candidate clusters of galaxies are reported. The data are compared with the results from the OSO-8 satellite, and the combined sample of some 30 cluster X-ray spectra is analysed. The present study finds generally larger values of L_X than do Uhuru or the SSI, which, because of the larger field of view, may indicate significant amounts of hot gas away from the cluster centres. The validity of all X-ray cluster identifications has been examined, and sources have been classified according to certainty of identification. The incidence of X-ray line emission from the clusters has been investigated, and temperatures, kT_X, have been derived on the basis of an isothermal model. Relationships between X-ray, optical and radio properties of the clusters have been studied. The more massive, centrally condensed clusters generally contain higher temperature gas and have a greater luminosity than the less massive, more irregular clusters. (author)

  20. Integrated spectral study of small angular diameter galactic open clusters

    Science.gov (United States)

    Clariá, J. J.; Ahumada, A. V.; Bica, E.; Pavani, D. B.; Parisi, M. C.

    2017-10-01

    This paper presents flux-calibrated integrated spectra obtained at Complejo Astronómico El Leoncito (CASLEO, Argentina) for a sample of 9 Galactic open clusters of small angular diameter. The spectra cover the optical range (3800-6800 Å), with a resolution of ~14 Å. With one exception (Ruprecht 158), the selected clusters are projected into the fourth Galactic quadrant (282° < l < 360°), and we evaluate their membership status. The current cluster sample complements that of 46 open clusters previously studied by our group in an effort to gather a spectral library with several clusters per age bin. The cluster spectral library that we have been building is an important tool to tie together studies of resolved and unresolved stellar content.

  1. Bionic Design for Mars Sampling Scoop Inspired by Himalayan Marmot Claw

    Directory of Open Access Journals (Sweden)

    Long Xue

    2016-01-01

    Full Text Available Cave animals are often adapted to digging and life underground, with claw toes similar in structure and function to a sampling scoop. In this paper, the clawed toes of the Himalayan marmot were selected as a biological prototype for bionic research. Based on geometric parameter optimization of the clawed toes, a bionic sampling scoop for use on Mars was designed. Using a 3D laser scanner, the point cloud data of the second front claw toe were acquired. Parametric equations and contour curves for the claw were then built with cubic polynomial fitting, yielding 18 characteristic curve equations for the internal and external contours of the claw. A bionic sampling scoop was designed according to the structural parameters of Curiosity’s sampling shovel and the contours of the Himalayan marmot’s claw. Verification tests showed that when the penetration angle was 45° and the sampling speed was 0.33 r/min, the bionic sampling scoop’s resistance torque was 49.6% less than that of the prototype sampling scoop. When the penetration angle was 60° and the sampling speed was 0.22 r/min, the resistance torque of the bionic sampling scoop was 28.8% lower than that of the prototype sampling scoop.

  2. Intrinsic alignment of redMaPPer clusters: cluster shape-matter density correlation

    Science.gov (United States)

    van Uitert, Edo; Joachimi, Benjamin

    2017-07-01

    We measure the alignment of the shapes of galaxy clusters, as traced by their satellite distributions, with the matter density field using the public redMaPPer catalogue based on Sloan Digital Sky Survey-Data Release 8 (SDSS-DR8), which contains 26 111 clusters up to z ~ 0.6. The clusters are split into nine redshift and richness samples; in each of them, we detect a positive alignment, showing that clusters point towards density peaks. We interpret the measurements within the tidal alignment paradigm, allowing for a richness and redshift dependence. The intrinsic alignment (IA) amplitude at the pivot redshift z = 0.3 and pivot richness λ = 30 is A_IA^gen = 12.6^{+1.5}_{-1.2}. We obtain tentative evidence that the signal increases towards higher richness and lower redshift. Our measurements agree well with results of maxBCG clusters and with dark-matter-only simulations. Comparing our results to the IA measurements of luminous red galaxies, we find that the IA amplitude of galaxy clusters forms a smooth extension towards higher mass. This suggests that these systems share a common alignment mechanism, which can be exploited to improve our physical understanding of IA.

  3. Investigation of clustering in sets of analytical data

    Energy Technology Data Exchange (ETDEWEB)

    Kajfosz, J [Institute of Nuclear Physics, Cracow (Poland)

    1993-04-01

    The foundations of the statistical method of cluster analysis are briefly presented, and its usefulness for the examination and evaluation of analytical data obtained from series of samples investigated by PIXE, PIGE or other methods is discussed. A simple program for fast examination of dissimilarities between samples within an investigated series is described. Useful information on clustering for several hundreds of samples can be obtained with minimal time and storage requirements. (author). 5 refs, 10 figs.

  4. Investigation of clustering in sets of analytical data

    International Nuclear Information System (INIS)

    Kajfosz, J.

    1993-04-01

    The foundations of the statistical method of cluster analysis are briefly presented, and its usefulness for the examination and evaluation of analytical data obtained from series of samples investigated by PIXE, PIGE or other methods is discussed. A simple program for fast examination of dissimilarities between samples within an investigated series is described. Useful information on clustering for several hundreds of samples can be obtained with minimal time and storage requirements. (author). 5 refs, 10 figs.
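    The kind of dissimilarity screening these records describe can be sketched in a few lines. The elemental concentrations below are hypothetical PIXE-style data, and the original program's internals are not described in the abstract; this only illustrates the standard recipe of standardizing each element, computing pairwise Euclidean dissimilarities, and reporting each sample's nearest neighbour.

    ```python
    import math

    # Hypothetical elemental concentrations (ppm) for six samples, three elements each.
    samples = {
        "S1": [120.0, 35.0, 8.0],
        "S2": [118.0, 33.0, 9.0],
        "S3": [60.0, 80.0, 2.0],
        "S4": [62.0, 78.0, 2.5],
        "S5": [119.0, 36.0, 8.5],
        "S6": [10.0, 5.0, 40.0],
    }

    def standardize(data):
        """Scale each element (column) to zero mean and unit variance so no
        single high-concentration element dominates the dissimilarity."""
        cols = list(zip(*data.values()))
        means = [sum(c) / len(c) for c in cols]
        sds = [math.sqrt(sum((v - m) ** 2 for v in c) / len(c))
               for c, m in zip(cols, means)]
        return {n: [(v - m) / s for v, m, s in zip(data[n], means, sds)]
                for n in data}

    def dissimilarity(a, b):
        """Euclidean distance between two standardized samples."""
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    z = standardize(samples)
    for n in z:
        d, nearest = min((dissimilarity(z[n], z[m]), m) for m in z if m != n)
        print(f"{n}: nearest {nearest} (d = {d:.2f})")
    ```

    On this toy data the screening immediately exposes two tight groups (S1/S2/S5 and S3/S4) and flags S6 as an outlier, which is the sort of quick overview the record's "fast examination of dissimilarities" provides before any full cluster analysis.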

  5. Evaluation of optimized bronchoalveolar lavage sampling designs for characterization of pulmonary drug distribution.

    Science.gov (United States)

    Clewe, Oskar; Karlsson, Mats O; Simonsson, Ulrika S H

    2015-12-01

    Bronchoalveolar lavage (BAL) is a pulmonary sampling technique for characterization of drug concentrations in epithelial lining fluid and alveolar cells. Two hypothetical drugs with different pulmonary distribution rates (fast and slow) were considered. An optimized BAL sampling design was generated assuming no previous information regarding the pulmonary distribution (rate and extent) and with a maximum of two samples per subject. Simulations were performed to evaluate the impact of the number of samples per subject (1 or 2) and the sample size on the relative bias and relative root mean square error of the parameter estimates (rate and extent of pulmonary distribution). The optimized BAL sampling design depends on a characterized plasma concentration-time profile, a population plasma pharmacokinetic model and the limit of quantification (LOQ) of the BAL method, and involves only two BAL sample time points, one early and one late. The early sample should be taken as early as possible, where concentrations in the BAL fluid are ≥ LOQ. The second sample should be taken at a time point on the declining part of the plasma curve, where the plasma concentration is equivalent to the plasma concentration at the time of the early sample. Using a previously described general pulmonary distribution model linked to a plasma population pharmacokinetic model, simulated data using the final BAL sampling design enabled characterization of both the rate and extent of pulmonary distribution. The optimized BAL sampling design enables characterization of both the rate and extent of pulmonary distribution for both fast and slowly equilibrating drugs.
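    The two-point rule described above — an early sample at the first time the concentration clears the LOQ, and a late sample on the declining limb at a matching plasma concentration — can be sketched numerically. The one-compartment oral model, its parameters, and the LOQ below are illustrative assumptions, not the study's models.

    ```python
    import math

    # Hypothetical one-compartment oral-absorption plasma profile.
    ka, ke, A = 1.5, 0.2, 10.0   # absorption /h, elimination /h, amplitude mg/L
    loq = 1.0                     # assumed assay limit of quantification, mg/L

    def conc(t):
        """Plasma concentration (mg/L) at time t (h)."""
        return A * (math.exp(-ke * t) - math.exp(-ka * t))

    grid = [i * 0.01 for i in range(1, 4801)]  # 0.01 h to 48 h

    # Early sample: as early as possible while the concentration is >= LOQ.
    t_early = next(t for t in grid if conc(t) >= loq)

    # Late sample: the point on the declining limb where the concentration
    # has fallen back to the early-sample concentration.
    t_peak = max(grid, key=conc)
    t_late = next(t for t in grid if t > t_peak and conc(t) <= conc(t_early))

    print(f"early {t_early:.2f} h, late {t_late:.2f} h")
    ```

    Pairing the two times this way brackets the whole plasma curve at equal concentrations, which is what lets the design separate the rate of pulmonary distribution (reflected early) from its extent (reflected late).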

  6. Small Launch Vehicle Design Approaches: Clustered Cores Compared with Multi-Stage Inline Concepts

    Science.gov (United States)

    Waters, Eric D.; Beers, Benjamin; Esther, Elizabeth; Philips, Alan; Threet, Grady E., Jr.

    2013-01-01

    In an effort to better define small launch vehicle design options, two approaches from the small launch vehicle trade space were investigated. The primary focus was to evaluate a clustered common core design against a purpose-built inline vehicle. Both designs focused on liquid oxygen (LOX) and rocket-propellant-grade kerosene (RP-1) stages, with the terminal stage later evaluated as a LOX/methane (CH4) stage. A series of performance optimization runs were done in order to minimize gross liftoff weight (GLOW), including alternative thrust levels, delivery altitude for payload, vehicle length-to-diameter ratio, alternative engine feed systems, re-evaluation of mass growth allowances, passive versus active guidance systems, and rail and tower launch methods. Additionally, manufacturability, cost, and operations play a large role in the benefits and detriments of each design. Presented here are the Advanced Concepts Office's Earth to Orbit Launch Team methodology and a high-level discussion of the performance trades and trends of both small launch vehicle solutions, along with the design philosophies that shaped both concepts. Without putting forth a decree that one approach is better than the other, this discussion is meant to educate the community at large and let the reader determine which architecture is truly the most economical, since each path has such a unique set of limitations and potential payoffs.

  7. Long-Ranged Oppositely Charged Interactions for Designing New Types of Colloidal Clusters

    Directory of Open Access Journals (Sweden)

    Ahmet Faik Demirörs

    2015-04-01

    Full Text Available Getting control over the valency of colloids is not trivial and has been a long-desired goal for the colloidal domain. Typically, tuning the preferred number of neighbors for colloidal particles requires directional bonding, as in the case of patchy particles, which is difficult to realize experimentally. Here, we demonstrate a general method for creating the colloidal analogs of molecules and other new regular colloidal clusters without using patchiness or complex bonding schemes (e.g., DNA coating), by using a combination of long-ranged attractive and repulsive interactions between oppositely charged particles that also enable regular clusters of particles not all in close contact. We show that, due to the interplay between their attractions and repulsions, oppositely charged particles dispersed in a medium of intermediate dielectric constant (4 < ϵ < 10) provide a viable approach for the formation of binary colloidal clusters. Tuning the size ratio and interactions of the particles enables control of the type and shape of the resulting regular colloidal clusters. Finally, we present an example of clusters made up of negatively charged large and positively charged small satellite particles, for which the electrostatic properties and interactions can be changed with an electric field. It appears that for sufficiently strong fields the satellite particles can move over the surface of the host particles and polarize the clusters. For even stronger fields, the satellite particles can be completely pulled off, reversing the net charge on the cluster. With computer simulations, we investigate how charged particles distribute on an oppositely charged sphere to minimize their energy and compare the results with the solutions to the well-known Thomson problem. We also use the simulations to explore the dependence of such clusters on the Debye screening length κ^{−1} and the ratio of charges on the particles, showing good agreement with experimental observations.
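    The Thomson problem mentioned at the end — minimizing the Coulomb energy of point charges confined to a sphere — can be sketched with projected gradient descent. This is a generic textbook method with made-up step size and iteration count, not the authors' simulation code; for N = 4 charges the known minimum is the regular tetrahedron.

    ```python
    import math
    import random

    random.seed(3)

    def normalize(p):
        """Project a point back onto the unit sphere."""
        n = math.sqrt(sum(c * c for c in p))
        return [c / n for c in p]

    def energy(pts):
        """Coulomb energy sum_{i<j} 1/r_ij of unit charges."""
        return sum(1.0 / math.dist(pts[i], pts[j])
                   for i in range(len(pts)) for j in range(i + 1, len(pts)))

    def thomson(n, steps=4000, lr=0.01):
        """Push charges apart along the repulsive force, re-projecting
        onto the sphere after every step."""
        pts = [normalize([random.gauss(0, 1) for _ in range(3)]) for _ in range(n)]
        for _ in range(steps):
            grads = [[0.0, 0.0, 0.0] for _ in range(n)]
            for i in range(n):
                for j in range(n):
                    if i != j:
                        d = [a - b for a, b in zip(pts[i], pts[j])]
                        r = math.sqrt(sum(c * c for c in d))
                        for k in range(3):
                            grads[i][k] += d[k] / r ** 3  # repulsive force on i
            pts = [normalize([p + lr * g for p, g in zip(pts[i], grads[i])])
                   for i in range(n)]
        return pts

    pts = thomson(4)
    print(round(energy(pts), 4))  # tetrahedral minimum is ~3.6742
    ```

    The satellite-particle clusters in the record differ from the pure Thomson case because the host sphere's attraction and Debye screening modify the pair interaction, which is why the paper compares its simulated configurations against these classical solutions rather than assuming them.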

  8. Cosmology with EMSS Clusters of Galaxies

    Science.gov (United States)

    Donahue, Megan; Voit, G. Mark

    1999-01-01

    We use ASCA observations of the Extended Medium Sensitivity Survey sample of clusters of galaxies to construct the first z = 0.5-0.8 cluster temperature function. This distant cluster temperature function, when compared to the local (z ≈ 0) and to a similar moderate-redshift (z = 0.3-0.4) temperature function, strongly constrains the matter density of the universe. Best fits to the distributions of temperatures and redshifts of these cluster samples result in Ω_M = 0.45 ± 0.1 if Λ = 0 and Ω_M = 0.27 ± 0.1 if Λ + Ω_M = 1. The uncertainties are 1σ statistical. We examine the systematics of our approach and find that the systematics, stemming mainly from model assumptions and not measurement errors, are about the same size as the statistical uncertainty, ± 0.1. In this poster proceedings, we clarify the issue of σ8 as reported in our paper Donahue & Voit (1999), since this was a matter of discussion at the meeting.

  9. Vacancy-indium clusters in implanted germanium

    KAUST Repository

    Chroneos, Alexander I.

    2010-04-01

    Secondary ion mass spectroscopy measurements of heavily indium doped germanium samples revealed that a significant proportion of the indium dose is immobile. Using electronic structure calculations we address the possibility of indium clustering with point defects by predicting the stability of indium-vacancy clusters, InnVm. We find that the formation of large clusters is energetically favorable, which can explain the immobility of the indium ions. © 2010 Elsevier B.V. All rights reserved.

  10. Vacancy-indium clusters in implanted germanium

    KAUST Repository

    Chroneos, Alexander I.; Kube, R.; Bracht, Hartmut A.; Grimes, Robin W.; Schwingenschlögl, Udo

    2010-01-01

    Secondary ion mass spectroscopy measurements of heavily indium doped germanium samples revealed that a significant proportion of the indium dose is immobile. Using electronic structure calculations we address the possibility of indium clustering with point defects by predicting the stability of indium-vacancy clusters, InnVm. We find that the formation of large clusters is energetically favorable, which can explain the immobility of the indium ions. © 2010 Elsevier B.V. All rights reserved.

  11. Portable ultrahigh-vacuum sample storage system for polarization-dependent total-reflection fluorescence x-ray absorption fine structure spectroscopy

    Energy Technology Data Exchange (ETDEWEB)

    Watanabe, Yoshihide, E-mail: e0827@mosk.tytlabs.co.jp; Nishimura, Yusaku F.; Suzuki, Ryo; Beniya, Atsushi; Isomura, Noritake [Toyota Central R&D Labs., Inc., Yokomichi 41-1, Nagakute, Aichi 480-1192 (Japan); Uehara, Hiromitsu; Asakura, Kiyotaka; Takakusagi, Satoru [Catalysis Research Center, Hokkaido University, Kita 21-10, Sapporo, Hokkaido 001-0021 (Japan); Nimura, Tomoyuki [AVC Co., Ltd., Inada 1450-6, Hitachinaka, Ibaraki 312-0061 (Japan)

    2016-03-15

    A portable ultrahigh-vacuum sample storage system was designed and built to investigate the detailed geometric structures of mass-selected metal clusters on oxide substrates by polarization-dependent total-reflection fluorescence x-ray absorption fine structure spectroscopy (PTRF-XAFS). This ultrahigh-vacuum (UHV) sample storage system provides the handover of samples between two different sample manipulating systems. The sample storage system is adaptable for public transportation, facilitating experiments using air-sensitive samples in synchrotron radiation or other quantum beam facilities. The samples were transferred by the developed portable UHV transfer system via a public transportation at a distance over 400 km. The performance of the transfer system was demonstrated by a successful PTRF-XAFS study of Pt{sub 4} clusters deposited on a TiO{sub 2}(110) surface.

  12. Evidence for the direct ejection of clusters from non-metallic solids during laser vaporization

    International Nuclear Information System (INIS)

    Bloomfield, L.A.; Yang, Y.A.; Xia, P.; Junkin, A.L.

    1991-01-01

    This paper reports on the formation of molecular-scale particles or clusters of alkali halides and semiconductors during laser vaporization of solids. By measuring the abundances of cluster ions produced in several different source configurations, the authors have determined that clusters are ejected directly from the source sample and do not need to grow from atomic or molecular vapor. Using samples of mixed alkali halide powders, the authors have found that unalloyed clusters are easily produced in a source that prevents growth from occurring after the clusters leave the sample surface. However, melting the sample or encouraging growth after vaporization leads to the production of alloyed cluster species. The sizes of the ejected clusters are initially random, but the population spectrum quickly becomes structured as hot, unstable-sized clusters decay into smaller particles. In carbon, large clusters with an odd number of atoms decay almost immediately. The hot even clusters also decay, but much more slowly. The longest-lived clusters are the magic C50 and C60 fullerenes. The mass spectrum of large carbon clusters evolves in time from structureless, to only the even clusters, to primarily C50 and C60. If cluster growth is encouraged, the odd clusters reappear and the population spectrum again becomes relatively structureless.

  13. Energy Aware Clustering Algorithms for Wireless Sensor Networks

    Science.gov (United States)

    Rakhshan, Noushin; Rafsanjani, Marjan Kuchaki; Liu, Chenglian

    2011-09-01

    The sensor nodes deployed in wireless sensor networks (WSNs) are extremely power constrained, so maximizing the lifetime of the entire networks is mainly considered in the design. In wireless sensor networks, hierarchical network structures have the advantage of providing scalable and energy efficient solutions. In this paper, we investigate different clustering algorithms for WSNs and also compare these clustering algorithms based on metrics such as clustering distribution, cluster's load balancing, Cluster Head's (CH) selection strategy, CH's role rotation, node mobility, clusters overlapping, intra-cluster communications, reliability, security and location awareness.

  14. Uniform deposition of size-selected clusters using Lissajous scanning

    International Nuclear Information System (INIS)

    Beniya, Atsushi; Watanabe, Yoshihide; Hirata, Hirohito

    2016-01-01

    Size-selected clusters can be deposited on a surface using size-selected cluster ion beams. However, because of the cross-sectional intensity distribution of the ion beam, it is difficult to define the coverage of the deposited clusters. The aggregation probability of the clusters depends on coverage, so the cluster size on the surface depends on position even though size-selected clusters are deposited. It is crucial, therefore, to deposit clusters uniformly on the surface. In this study, size-selected clusters were deposited uniformly on surfaces by scanning the cluster ions in the form of a Lissajous pattern. Two sets of deflector electrodes set in orthogonal directions were placed in front of the sample surface. Triangular waves were applied to the electrodes with an irrational frequency ratio to ensure that the ion trajectory filled the sample surface. The advantages of this method are the simplicity and low cost of the setup compared with the raster scanning method. The authors further investigated CO adsorption on size-selected Ptn (n = 7, 15, 20) clusters uniformly deposited on the Al2O3/NiAl(110) surface and demonstrated the importance of uniform deposition.

  15. Uniform deposition of size-selected clusters using Lissajous scanning

    Energy Technology Data Exchange (ETDEWEB)

    Beniya, Atsushi; Watanabe, Yoshihide, E-mail: e0827@mosk.tytlabs.co.jp [Toyota Central R&D Labs., Inc., 41-1 Yokomichi, Nagakute, Aichi 480-1192 (Japan); Hirata, Hirohito [Toyota Motor Corporation, 1200 Mishuku, Susono, Shizuoka 410-1193 (Japan)

    2016-05-15

    Size-selected clusters can be deposited on a surface using size-selected cluster ion beams. However, because of the cross-sectional intensity distribution of the ion beam, it is difficult to define the coverage of the deposited clusters. The aggregation probability of the clusters depends on coverage, so the cluster size on the surface depends on position even though size-selected clusters are deposited. It is crucial, therefore, to deposit clusters uniformly on the surface. In this study, size-selected clusters were deposited uniformly on surfaces by scanning the cluster ions in the form of a Lissajous pattern. Two sets of deflector electrodes set in orthogonal directions were placed in front of the sample surface. Triangular waves were applied to the electrodes with an irrational frequency ratio to ensure that the ion trajectory filled the sample surface. The advantages of this method are the simplicity and low cost of the setup compared with the raster scanning method. The authors further investigated CO adsorption on size-selected Pt{sub n} (n = 7, 15, 20) clusters uniformly deposited on the Al{sub 2}O{sub 3}/NiAl(110) surface and demonstrated the importance of uniform deposition.
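
    The scanning scheme can be sketched numerically: two triangular waves with an irrational frequency ratio trace a path that fills the square scan area nearly uniformly. This is our own illustration; the waveform amplitudes, the golden-ratio frequency choice, and the coverage grid are assumptions, not the authors' parameters.

```python
import numpy as np

def triangle(t, freq):
    """Triangular wave in [-1, 1] with period 1/freq."""
    phase = (t * freq) % 1.0
    return 4.0 * np.abs(phase - 0.5) - 1.0

# Two deflection waveforms with an irrational frequency ratio (golden ratio here),
# so the Lissajous path never repeats and gradually fills the scan area.
t = np.linspace(0.0, 200.0, 2_000_000)
ratio = (1.0 + 5.0 ** 0.5) / 2.0
x = triangle(t, 1.0)
y = triangle(t, ratio)

# Estimate how uniformly the trajectory covers the (x, y) scan area.
hist, _, _ = np.histogram2d(x, y, bins=20, range=[[-1, 1], [-1, 1]])
coverage = np.count_nonzero(hist) / hist.size
print(f"fraction of bins visited: {coverage:.2f}")
```

    A rational frequency ratio would instead close the path after a few periods and leave systematic gaps, which is the failure mode the irrational ratio avoids.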

  16. Design and protocol of the weight loss lottery- a cluster randomized trial.

    Science.gov (United States)

    van der Swaluw, Koen; Lambooij, Mattijs S; Mathijssen, Jolanda J P; Schipper, Maarten; Zeelenberg, Marcel; Polder, Johan J; Prast, Henriëtte M

    2016-07-01

    People often intend to exercise but find it difficult to attend their gyms on a regular basis. At times, people seek and accept deadlines with consequences to realize their own goals (i.e. commitment devices). The aim of our cluster randomized controlled trial is to test whether a lottery-based commitment device can promote regular gym attendance. The winners of the lottery always get feedback on the outcome but can only claim their prize if they attended their gyms on a regular basis. In this paper we present the design and baseline characteristics of a three-arm trial which is performed with 163 overweight participants in six in-company fitness centers in the Netherlands. Copyright © 2016 Elsevier Inc. All rights reserved.

  17. Unsupervised active learning based on hierarchical graph-theoretic clustering.

    Science.gov (United States)

    Hu, Weiming; Hu, Wei; Xie, Nianhua; Maybank, Steve

    2009-10-01

    Most existing active learning approaches are supervised. Supervised active learning has the following problems: inefficiency in dealing with the semantic gap between the distribution of samples in the feature space and their labels, lack of ability in selecting new samples that belong to new categories that have not yet appeared in the training samples, and lack of adaptability to changes in the semantic interpretation of sample categories. To tackle these problems, we propose an unsupervised active learning framework based on hierarchical graph-theoretic clustering. In the framework, two promising graph-theoretic clustering algorithms, namely, dominant-set clustering and spectral clustering, are combined in a hierarchical fashion. Our framework has some advantages, such as ease of implementation, flexibility in architecture, and adaptability to changes in the labeling. Evaluations on data sets for network intrusion detection, image classification, and video classification have demonstrated that our active learning framework can effectively reduce the workload of manual classification while maintaining a high accuracy of automatic classification. It is shown that, overall, our framework outperforms the support-vector-machine-based supervised active learning, particularly in terms of dealing much more efficiently with new samples whose categories have not yet appeared in the training samples.
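
    One of the two graph-theoretic building blocks combined in the framework, spectral clustering, can be sketched minimally as follows. This is our own illustration on synthetic 2-D points; the Gaussian affinity bandwidth and the data are assumptions, and the actual framework applies such splits hierarchically together with dominant-set clustering.

```python
import numpy as np

rng = np.random.default_rng(7)
# Two synthetic groups of points; the goal is to recover them unsupervised.
pts = np.vstack([rng.normal((0, 0), 0.3, (40, 2)),
                 rng.normal((2, 2), 0.3, (40, 2))])

d2 = ((pts[:, None] - pts[None]) ** 2).sum(-1)
W = np.exp(-d2 / (2 * 0.5 ** 2))                   # Gaussian affinity graph
D = W.sum(axis=1)
# Symmetric normalized Laplacian: I - D^{-1/2} W D^{-1/2}
L = np.eye(len(pts)) - W / np.sqrt(D[:, None] * D[None, :])

eigvals, eigvecs = np.linalg.eigh(L)               # ascending eigenvalues
labels = (eigvecs[:, 1] > 0).astype(int)           # sign of the Fiedler vector
print(labels[:40].sum(), labels[40:].sum())
```

    The sign split of the second-smallest eigenvector approximates the normalized graph cut; recursing on each side yields the hierarchy described in the abstract.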

  18. Optimal experiment design in a filtering context with application to sampled network data

    OpenAIRE

    Singhal, Harsh; Michailidis, George

    2010-01-01

    We examine the problem of optimal design in the context of filtering multiple random walks. Specifically, we define the steady state E-optimal design criterion and show that the underlying optimization problem leads to a second order cone program. The developed methodology is applied to tracking network flow volumes using sampled data, where the design variable corresponds to controlling the sampling rate. The optimal design is numerically compared to a myopic and a naive strategy. Finally, w...

  19. Exploring the IMF of star clusters: a joint SLUG and LEGUS effort

    Science.gov (United States)

    Ashworth, G.; Fumagalli, M.; Krumholz, M. R.; Adamo, A.; Calzetti, D.; Chandar, R.; Cignoni, M.; Dale, D.; Elmegreen, B. G.; Gallagher, J. S., III; Gouliermis, D. A.; Grasha, K.; Grebel, E. K.; Johnson, K. E.; Lee, J.; Tosi, M.; Wofford, A.

    2017-08-01

    We present the implementation of a Bayesian formalism within the Stochastically Lighting Up Galaxies (slug) stellar population synthesis code, which is designed to investigate variations in the initial mass function (IMF) of star clusters. By comparing observed cluster photometry to large libraries of clusters simulated with a continuously varying IMF, our formalism yields the posterior probability distribution function (PDF) of the cluster mass, age and extinction, jointly with the parameters describing the IMF. We apply this formalism to a sample of star clusters from the nearby galaxy NGC 628, for which broad-band photometry in five filters is available as part of the Legacy ExtraGalactic UV Survey (LEGUS). After allowing the upper-end slope of the IMF (α3) to vary, we recover PDFs for the mass, age and extinction that are broadly consistent with what is found when assuming an invariant Kroupa IMF. However, the posterior PDF for α3 is very broad due to a strong degeneracy with the cluster mass, and it is found to be sensitive to the choice of priors, particularly on the cluster mass. We find only a modest improvement in the constraining power of α3 when adding Hα photometry from the companion Hα-LEGUS survey. Conversely, Hα photometry significantly improves the age determination, reducing the frequency of multi-modal PDFs. With the aid of mock clusters, we quantify the degeneracy between physical parameters, showing how constraints on the cluster mass that are independent of photometry can be used to pin down the IMF properties of star clusters.

  20. Installation and sampling of vadose zone monitoring devices

    International Nuclear Information System (INIS)

    Bergeron, S.M.; Strickland, D.J.; Pearson, R.

    1987-10-01

    A vadose zone monitoring system was installed in a sanitary landfill near the Y-12 facility on the Department of Energy's Oak Ridge, Tennessee Reservation. The work was completed as part of the LLWDDD program to develop, design, and demonstrate new low level radioactive waste disposal monitoring methods. The objective of the project was to evaluate the performance of three types of vadose zone samplers within a similar hydrogeologic environment for use as early detection monitoring devices. The three different types of samplers included the Soil Moisture Equipment Corporation Pressure-Vacuum samplers (Models 1920 and 1940), and the BAT Piezometer (Model MK II) manufactured by BAT Envitech, Inc. All three samplers are designed to remove soil moisture from the vadose (unsaturated) zone. Five clusters of three holes each were drilled to maximum depths of 45 ft around part of the periphery of the landfill. Three samplers, one of each type, were installed at each cluster location. Water samples were obtained from 13 of the 15 samplers and submitted to Martin Marietta for analysis. All three samplers performed satisfactorily when considering ease of installation, required in-hole development, and ability to collect water samples from the vadose zone. Advantages and disadvantages of each sampler type are discussed in the main report

  1. Importance of sampling frequency when collecting diatoms

    KAUST Repository

    Wu, Naicheng

    2016-11-14

    There has been increasing interest in diatom-based bio-assessment but we still lack a comprehensive understanding of how to capture diatoms’ temporal dynamics with an appropriate sampling frequency (ASF). To cover this research gap, we collected and analyzed daily riverine diatom samples over a 1-year period (25 April 2013–30 April 2014) at the outlet of a German lowland river. The samples were classified into five clusters (1–5) by a Kohonen Self-Organizing Map (SOM) method based on similarity between species compositions over time. ASFs were determined to be 25 days at Cluster 2 (June-July 2013) and 13 days at Cluster 5 (February-April 2014), whereas no specific ASFs were found at Cluster 1 (April-May 2013), 3 (August-November 2013) (>30 days) and Cluster 4 (December 2013 - January 2014) (<1 day). ASFs showed dramatic seasonality and were negatively related to hydrological wetness conditions, suggesting that sampling interval should be reduced with increasing catchment wetness. A key implication of our findings for freshwater management is that long-term bio-monitoring protocols should be developed with the knowledge of tracking algal temporal dynamics with an appropriate sampling frequency.

  2. A NEW TEST OF THE STATISTICAL NATURE OF THE BRIGHTEST CLUSTER GALAXIES

    International Nuclear Information System (INIS)

    Lin, Yen-Ting; Ostriker, Jeremiah P.; Miller, Christopher J.

    2010-01-01

    A novel statistic is proposed to examine the hypothesis that all cluster galaxies are drawn from the same luminosity distribution (LD). In such a 'statistical model' of galaxy LD, the brightest cluster galaxies (BCGs) are simply the statistical extreme of the galaxy population. Using a large sample of nearby clusters, we show that BCGs in high luminosity clusters (e.g., L tot ∼> 4 x 10 11 h -2 70 L sun ) are unlikely (probability ≤3 x 10 -4 ) to be drawn from the LD defined by all red cluster galaxies more luminous than M r = -20. On the other hand, BCGs in less luminous clusters are consistent with being the statistical extreme. Applying our method to the second brightest galaxies, we show that they are consistent with being the statistical extreme, which implies that the BCGs are also distinct from non-BCG luminous, red, cluster galaxies. We point out some issues with the interpretation of the classical tests proposed by Tremaine and Richstone (TR) that are designed to examine the statistical nature of BCGs, investigate the robustness of both our statistical test and those of TR against difficulties in photometry of galaxies of large angular size, and discuss the implication of our findings on surveys that use the luminous red galaxies to measure the baryon acoustic oscillation features in the galaxy power spectrum.

  3. Manual hierarchical clustering of regional geochemical data using a Bayesian finite mixture model

    International Nuclear Information System (INIS)

    Ellefsen, Karl J.; Smith, David B.

    2016-01-01

    Interpretation of regional scale, multivariate geochemical data is aided by a statistical technique called “clustering.” We investigate a particular clustering procedure by applying it to geochemical data collected in the State of Colorado, United States of America. The clustering procedure partitions the field samples for the entire survey area into two clusters. The field samples in each cluster are partitioned again to create two subclusters, and so on. This manual procedure generates a hierarchy of clusters, and the different levels of the hierarchy show geochemical and geological processes occurring at different spatial scales. Although there are many different clustering methods, we use Bayesian finite mixture modeling with two probability distributions, which yields two clusters. The model parameters are estimated with Hamiltonian Monte Carlo sampling of the posterior probability density function, which usually has multiple modes. Each mode has its own set of model parameters; each set is checked to ensure that it is consistent both with the data and with independent geologic knowledge. The set of model parameters that is most consistent with the independent geologic knowledge is selected for detailed interpretation and partitioning of the field samples. - Highlights: • We evaluate a clustering procedure by applying it to geochemical data. • The procedure generates a hierarchy of clusters. • Different levels of the hierarchy show geochemical processes at different spatial scales. • The clustering method is Bayesian finite mixture modeling. • Model parameters are estimated with Hamiltonian Monte Carlo sampling.

  4. The use of cluster sampling to determine aid needs in Grozny, Chechnya in 1995.

    Science.gov (United States)

    Drysdale, S; Howarth, J; Powell, V; Healing, T

    2000-09-01

    War broke out in Chechnya in November 1994 following a three-year economic blockade. It caused widespread destruction in the capital Grozny. In April 1995 Medical Relief International--or Merlin, a British medical non-governmental organisation (NGO)--began a programme to provide medical supplies, support health centres, control communicable disease and promote preventive health-care in Grozny. In July 1995 the agency undertook a city-wide needs assessment using a modification of the cluster sampling technique developed by the Expanded Programme on Immunisation. This showed that most people had enough drinking-water, food and fuel but that provision of medical care was inadequate. The survey allowed Merlin to redirect resources earmarked for a clean water programme towards health education and improving primary health-care services. It also showed that rapid assessment by a statistically satisfactory method is both possible and useful in such a situation.

  5. Design/Operations review of core sampling trucks and associated equipment

    International Nuclear Information System (INIS)

    Shrivastava, H.P.

    1996-01-01

    A systematic review of the design and operations of the core sampling trucks was commissioned by Characterization Equipment Engineering of the Westinghouse Hanford Company in October 1995. The review team reviewed the design documents, specifications, operating procedure, training manuals and safety analysis reports. The review process, findings and corrective actions are summarized in this supporting document

  6. Design, synthesis and photochemical properties of the first examples of iminosugar clusters based on fluorescent cores

    Directory of Open Access Journals (Sweden)

    Mathieu L. Lepage

    2015-05-01

    Full Text Available The synthesis and photophysical properties of the first examples of iminosugar clusters based on a BODIPY or a pyrene core are reported. The tri- and tetravalent systems designed as molecular probes and synthesized by way of Cu(I-catalysed azide–alkyne cycloadditions are fluorescent analogues of potent pharmacological chaperones/correctors recently reported in the field of Gaucher disease and cystic fibrosis, two rare genetic diseases caused by protein misfolding.

  7. Mismatch of Posttraumatic Stress Disorder (PTSD) Symptoms and DSM-IV Symptom Clusters in a Cancer Sample: Exploratory Factor Analysis of the PTSD Checklist-Civilian Version

    Science.gov (United States)

    Shelby, Rebecca A.; Golden-Kreutz, Deanna M.; Andersen, Barbara L.

    2007-01-01

    The Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition (DSM-IV; American Psychiatric Association, 1994a) conceptualization of posttraumatic stress disorder (PTSD) includes three symptom clusters: reexperiencing, avoidance/numbing, and arousal. The PTSD Checklist-Civilian Version (PCL-C) corresponds to the DSM-IV PTSD symptoms. In the current study, we conducted exploratory factor analysis (EFA) of the PCL-C with two aims: (a) to examine whether the PCL-C evidenced the three-factor solution implied by the DSM-IV symptom clusters, and (b) to identify a factor solution for the PCL-C in a cancer sample. Women (N = 148) with Stage II or III breast cancer completed the PCL-C after completion of cancer treatment. We extracted two-, three-, four-, and five-factor solutions using EFA. Our data did not support the DSM-IV PTSD symptom clusters. Instead, EFA identified a four-factor solution including reexperiencing, avoidance, numbing, and arousal factors. Four symptom items, which may be confounded with illness and cancer treatment-related symptoms, exhibited poor factor loadings. Using these symptom items in cancer samples may lead to overdiagnosis of PTSD and inflated rates of PTSD symptoms. PMID:16281232

  8. Dispersed metal cluster catalysts by design. Synthesis, characterization, structure, and performance

    Energy Technology Data Exchange (ETDEWEB)

    Arslan, Ilke [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Dixon, David A. [Univ. of Alabama, Tuscaloosa, AL (United States); Gates, Bruce C. [Univ. of California, Davis, CA (United States); Katz, Alexander [Univ. of California, Berkeley, CA (United States)

    2015-09-30

    To understand the class of metal cluster catalysts better and to lay a foundation for the prediction of properties leading to improved catalysts, we have synthesized metal catalysts with well-defined structures and varied the cluster structures and compositions systematically—including the ligands bonded to the metals. These ligands include supports and bulky organics that are being tuned to control both the electron transfer to or from the metal and the accessibility of reactants to influence catalytic properties. We have developed novel syntheses to prepare these well-defined catalysts with atomic-scale control the environment by choice and placement of ligands and applied state-of-the art spectroscopic, microscopic, and computational methods to determine their structures, reactivities, and catalytic properties. The ligands range from nearly flat MgO surfaces to enveloping zeolites to bulky calixarenes to provide controlled coverages of the metal clusters, while also enforcing unprecedented degrees of coordinative unsaturation at the metal site—thereby facilitating bonding and catalysis events at exposed metal atoms. With this wide range of ligand properties and our arsenal of characterization tools, we worked to achieve a deep, fundamental understanding of how to synthesize robust supported and ligand-modified metal clusters with controlled catalytic properties, thereby bridging the gap between active site structure and function in unsupported and supported metal catalysts. We used methods of organometallic and inorganic chemistry combined with surface chemistry for the precise synthesis of metal clusters and nanoparticles, characterizing them at various stages of preparation and under various conditions (including catalytic reaction conditions) and determining their structures and reactivities and how their catalytic properties depend on their compositions and structures. Key characterization methods included IR, NMR, and EXAFS spectroscopies to identify

  9. The properties of the disk system of globular clusters

    Science.gov (United States)

    Armandroff, Taft E.

    1989-01-01

    A large refined data sample is used to study the properties and origin of the disk system of globular clusters. A scale height for the disk cluster system of 800-1500 pc is found which is consistent with scale-height determinations for samples of field stars identified with the Galactic thick disk. A rotational velocity of 193 + or - 29 km/s and a line-of-sight velocity dispersion of 59 + or - 14 km/s have been found for the metal-rich clusters.

  10. Practical iterative learning control with frequency domain design and sampled data implementation

    CERN Document Server

    Wang, Danwei; Zhang, Bin

    2014-01-01

    This book is on the iterative learning control (ILC) with focus on the design and implementation. We approach the ILC design based on the frequency domain analysis and address the ILC implementation based on the sampled data methods. This is the first book of ILC from frequency domain and sampled data methodologies. The frequency domain design methods offer ILC users insights to the convergence performance which is of practical benefits. This book presents a comprehensive framework with various methodologies to ensure the learnable bandwidth in the ILC system to be set with a balance between learning performance and learning stability. The sampled data implementation ensures effective execution of ILC in practical dynamic systems. The presented sampled data ILC methods also ensure the balance of performance and stability of learning process. Furthermore, the presented theories and methodologies are tested with an ILC controlled robotic system. The experimental results show that the machines can work in much h...

  11. EVOLUTION OF SHOCKS AND TURBULENCE IN MAJOR CLUSTER MERGERS

    International Nuclear Information System (INIS)

    Paul, S.; Mannheim, K.; Iapichino, L.; Miniati, F.; Bagchi, J.

    2011-01-01

    We performed a set of cosmological simulations of major mergers in galaxy clusters, in order to study the evolution of merger shocks and the subsequent injection of turbulence in the post-shock region and in the intra-cluster medium (ICM). The computations have been performed with the grid-based, adaptive mesh refinement hydrodynamical code Enzo, using a refinement criterion especially designed for refining turbulent flows in the vicinity of shocks. When a major merger event occurs, a substantial amount of turbulence energy is injected in the ICM of the newly formed cluster. Our simulations show that the shock launched after a major merger develops an ellipsoidal shape and gets broken by the interaction with the filamentary cosmic web around the merging cluster. The size of the post-shock region along the direction of shock propagation is of the order of 300 kpc h -1 , and the turbulent velocity dispersion in this region is larger than 100 km s -1 . We performed a scaling analysis of the turbulence energy within our cluster sample. The best fit for the scaling of the turbulence energy with the cluster mass is consistent with M 5/3 , which is also the scaling law for the thermal energy in the self-similar cluster model. This clearly indicates the close relation between virialization and injection of turbulence in the cluster evolution. As for the turbulence in the cluster core, we found that within 2 Gyr after the major merger (the timescale for the shock propagation in the ICM), the ratio of the turbulent to total pressure is larger than 10%, and after about 4 Gyr it is still larger than 5%, a typical value for nearly relaxed clusters. Turbulence at the cluster center is thus sustained for several gigayears, which is substantially longer than typically assumed in the turbulent re-acceleration models, invoked to explain the statistics of observed radio halos. Striking similarities in the morphology and other physical parameters between our simulations and the

  12. Design of sampling tools for Monte Carlo particle transport code JMCT

    International Nuclear Information System (INIS)

    Shangguan Danhua; Li Gang; Zhang Baoyin; Deng Li

    2012-01-01

    A class of sampling tools for general Monte Carlo particle transport code JMCT is designed. Two ways are provided to sample from distributions. One is the utilization of special sampling methods for special distribution; the other is the utilization of general sampling methods for arbitrary discrete distribution and one-dimensional continuous distribution on a finite interval. Some open source codes are included in the general sampling method for the maximum convenience of users. The sampling results show sampling correctly from distribution which are popular in particle transport can be achieved with these tools, and the user's convenience can be assured. (authors)

  13. RAP-3A Computer code for thermal and hydraulic calculations in steady state conditions for fuel element clusters

    International Nuclear Information System (INIS)

    Popescu, C.; Biro, L.; Iftode, I.; Turcu, I.

    1975-10-01

    The RAP-3A computer code is designed for calculating the main steady state thermo-hydraulic parameters of multirod fuel clusters with liquid metal cooling. The programme provides a double accuracy computation of temperatures and axial enthalpy distributions of pressure losses and axial heat flux distributions in fuel clusters before boiling conditions occur. Physical and mathematical models as well as a sample problem are presented. The code is written in FORTRAN-4 language and is running on a IBM-370/135 computer

  14. Infrared study of new star cluster candidates associated to dusty globules

    Science.gov (United States)

    Soto King, P.; Barbá, R.; Roman-Lopes, A.; Jaque, M.; Firpo, V.; Nilo, J. L.; Soto, M.; Minniti, D.

    2014-10-01

    We present results from a study of a sample of small star clusters associated with dusty globules and bright-rimmed clouds that have been observed under the ESO/Chile public infrared survey Vista Variables in the Vía Láctea (VVV). In this short communication, we analyse the near-infrared properties of a set of four small cluster candidates associated with dark clouds. This sample of clusters associated with dusty globules is selected from the new VVV stellar cluster candidates compiled by members of the La Serena VVV Group (Barbá et al. 2014). Firstly, we are producing color-color and color-magnitude diagrams, through PSF photometry, for both the cluster candidates and their surrounding areas for comparison. The cluster positions are determined from the morphology seen in the images and also from a comparison of the observed luminosity functions of the cluster candidates and the surrounding star fields. We are now working on the procedures to establish the full sample of clusters to be analyzed and on methods for subtracting the star-field contamination. These clusters associated with dusty globules are simple laboratories in which to study star formation relatively free of the influence of large star-forming regions and populous clusters, and they will be compared with clusters associated with bright-rimmed globules, which are influenced by the energetic action of nearby massive O and B stars.

  15. Types of Obesity and Its Association with the Clustering of Cardiovascular Disease Risk Factors in Jilin Province of China

    OpenAIRE

    Zhang, Peng; Wang, Rui; Gao, Chunshi; Song, Yuanyuan; Lv, Xin; Jiang, Lingling; Yu, Yaqin; Wang, Yuhan; Li, Bo

    2016-01-01

    Cardiovascular disease (CVD) has become a serious public health problem in recent years in China. Aggregation of CVD risk factors in one individual increases the risk of CVD and the risk increases substantially with each additional risk factor. This study aims to explore the relationship between the number of clustered CVD risk factors and different types of obesity. A multistage stratified random cluster sampling design was used in this population-based cross-sectional study in 2012. Informa...

  16. Stochastic sampling of the RNA structural alignment space.

    Science.gov (United States)

    Harmanci, Arif Ozgun; Sharma, Gaurav; Mathews, David H

    2009-07-01

    A novel method is presented for predicting the common secondary structures and alignment of two homologous RNA sequences by sampling the 'structural alignment' space, i.e. the joint space of their alignments and common secondary structures. The structural alignment space is sampled according to a pseudo-Boltzmann distribution based on a pseudo-free energy change that combines base pairing probabilities from a thermodynamic model and alignment probabilities from a hidden Markov model. By virtue of the implicit comparative analysis between the two sequences, the method offers an improvement over single sequence sampling of the Boltzmann ensemble. A cluster analysis shows that the samples obtained from joint sampling of the structural alignment space cluster more closely than samples generated by the single sequence method. On average, the representative (centroid) structure and alignment of the most populated cluster in the sample of structures and alignments generated by joint sampling are more accurate than single sequence sampling and alignment based on sequence alone, respectively. The 'best' centroid structure that is closest to the known structure among all the centroids is, on average, more accurate than structure predictions of other methods. Additionally, cluster analysis identifies, on average, a few clusters, whose centroids can be presented as alternative candidates. The source code for the proposed method can be downloaded at http://rna.urmc.rochester.edu.

  17. Color Gradients Within Globular Clusters: Restricted Numerical Simulation

    Directory of Open Access Journals (Sweden)

    Young-Jong Sohn

    1997-06-01

    Full Text Available The results of a restricted numerical simulation of the color gradients within globular clusters are presented. The standard luminosity function of M3 and Salpeter's initial mass function were used to generate model clusters as a fundamental population. Color gradients within the sample clusters for both King and power-law cusp models of the surface brightness distribution are discussed for the case of the standard luminosity function. The dependence of the color gradients on several parameters of the simulations with Salpeter's initial mass function, such as the slope of the initial mass function, cluster age, metallicity, the concentration parameter of the King model, and the slope of the power law, is also discussed. No significant radial color gradients appear in the sample clusters, which are regenerated by a random-number-generation technique with various parameters in both the King and power-law cusp models of the surface brightness distribution. Dynamical mass segregation and the stellar evolution of horizontal-branch stars and blue stragglers should be included in the general case of model simulations to reproduce the observed radial color gradients within globular clusters.

  18. Optimizing incomplete sample designs for item response model parameters

    NARCIS (Netherlands)

    van der Linden, Willem J.

    Several models for optimizing incomplete sample designs with respect to information on the item parameters are presented. The following cases are considered: (1) known ability parameters; (2) unknown ability parameters; (3) item sets with multiple ability scales; and (4) response models with

  19. Design and synthesis of polyoxometalate-framework materials from cluster precursors

    Science.gov (United States)

    Vilà-Nadal, Laia; Cronin, Leroy

    2017-10-01

    Inorganic oxide materials are used in semiconductor electronics, ion exchange, catalysis, coatings, gas sensors and as separation materials. Although their synthesis is well understood, the scope for new materials is reduced because of the stability limits imposed by high-temperature processing and top-down synthetic approaches. In this Review, we describe the derivatization of polyoxometalate (POM) clusters, which enables their assembly into a range of frameworks by use of organic or inorganic linkers. Additionally, bottom-up synthetic approaches can be used to make metal oxide framework materials, and the features of the molecular POM precursors are retained in these structures. Highly robust all-inorganic frameworks can be made using metal-ion linkers, which combine molecular synthetic control without the need for organic components. The resulting frameworks have high stability, and high catalytic, photochemical and electrochemical activity. Conceptually, these inorganic oxide materials bridge the gap between zeolites and metal-organic frameworks (MOFs) and establish a new class of all-inorganic POM frameworks that can be designed using topological and reactivity principles similar to MOFs.

  20. Star formation and substructure in galaxy clusters

    International Nuclear Information System (INIS)

    Cohen, Seth A.; Hickox, Ryan C.; Wegner, Gary A.; Einasto, Maret; Vennik, Jaan

    2014-01-01

    We investigate the relationship between star formation (SF) and substructure in a sample of 107 nearby galaxy clusters using data from the Sloan Digital Sky Survey. Several past studies of individual galaxy clusters have suggested that cluster mergers enhance cluster SF, while others find no such relationship. The SF fraction in multi-component clusters (0.228 ± 0.007) is higher than that in single-component clusters (0.175 ± 0.016) for galaxies with M_r^0.1 < −20.5. In both single- and multi-component clusters, the fraction of star-forming galaxies increases with clustercentric distance and decreases with local galaxy number density, and multi-component clusters show a higher SF fraction than single-component clusters at almost all clustercentric distances and local densities. Comparing the SF fraction in individual clusters to several statistical measures of substructure, we find weak, but in most cases significant at greater than 2σ, correlations between substructure and SF fraction. These results could indicate that cluster mergers may cause weak but significant SF enhancement in clusters, or that unrelaxed clusters exhibit slightly stronger SF due to their less evolved states relative to relaxed clusters.

  1. Using the latent class approach to cluster firms in benchmarking: An application to the US electricity transmission industry

    Directory of Open Access Journals (Sweden)

    Manuel Llorca

    2014-03-01

    Full Text Available In this paper we advocate using the latent class model (LCM) approach to control for technological differences in traditional efficiency analysis of regulated electricity networks. Our proposal relies on the fact that latent class models are designed to cluster firms by uncovering differences in technology parameters. Moreover, it can be viewed as a supervised method for clustering data that takes into account the same (production or cost) relationship that is analysed later, often using nonparametric frontier techniques. The simulation exercises show that the proposed approach outperforms other sample selection procedures. The proposed methodology is illustrated with an application to a sample of US electricity transmission firms for the period 2001–2009.
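
The latent class idea, letting the data softly assign each observation to one of several classes while class-specific parameters are estimated, can be illustrated with a toy EM algorithm for a two-class mixture of linear regressions. This is a simplified stand-in for the stochastic-frontier LCM of the paper; all data, starting values, and class structure below are invented:

```python
import numpy as np

# Synthetic data: two hidden classes with different "technologies"
# (all numbers invented for illustration).
rng = np.random.default_rng(0)
n = 200
x = rng.uniform(0, 10, n)
z = rng.random(n) < 0.5                       # latent class membership
y = np.where(z, 1.0 + 2.0 * x, 5.0 + 0.5 * x) + rng.normal(0, 0.3, n)

X = np.column_stack([np.ones(n), x])
beta = np.array([[0.0, 2.0], [4.0, 0.5]])     # rough starting values
sigma = np.array([1.0, 1.0])
pi = np.array([0.5, 0.5])

for _ in range(50):
    # E-step: posterior class probabilities per observation
    dens = np.stack([pi[k] / sigma[k] *
                     np.exp(-0.5 * ((y - X @ beta[k]) / sigma[k]) ** 2)
                     for k in range(2)])
    w = dens / dens.sum(0)
    # M-step: weighted least squares and variance update per class
    for k in range(2):
        W = w[k]
        beta[k] = np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (W * y))
        sigma[k] = np.sqrt((W * (y - X @ beta[k]) ** 2).sum() / W.sum())
        pi[k] = W.mean()

print(np.round(beta[np.argsort(beta[:, 1])], 1))  # class parameters, sorted by slope
```

The E-step weights play the role of cluster membership; unlike an unsupervised pre-clustering, the assignment is driven by the same regression relationship that is analysed afterwards, which is the point the authors make.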

  2. Profiling Local Optima in K-Means Clustering: Developing a Diagnostic Technique

    Science.gov (United States)

    Steinley, Douglas

    2006-01-01

    Using the cluster generation procedure proposed by D. Steinley and R. Henson (2005), the author investigated the performance of K-means clustering under the following scenarios: (a) different probabilities of cluster overlap; (b) different types of cluster overlap; (c) varying sample sizes, clusters, and dimensions; (d) different multivariate…
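
The sensitivity of K-means to its starting point, which motivates this kind of diagnostic, is easy to reproduce: run Lloyd's algorithm from several random initializations on the same data and compare the within-cluster sums of squares reached. The sketch below uses synthetic overlapping blobs, not the author's cluster generation procedure:

```python
import numpy as np

def kmeans(X, k, rng, iters=100):
    """Plain Lloyd's algorithm from a random initialization; returns the
    within-cluster sum of squares (inertia) at convergence."""
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1).argmin(1)
        new = np.array([X[labels == j].mean(0) if (labels == j).any() else centers[j]
                        for j in range(k)])
        if np.allclose(new, centers):
            break
        centers = new
    return ((X - centers[labels]) ** 2).sum()

# Three overlapping Gaussian blobs (synthetic, invented for illustration)
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(m, 0.7, (50, 2)) for m in ([0, 0], [3, 0], [1.5, 2.5])])

# Restart from 20 different seeds and compare the local optima reached
inertias = sorted(kmeans(X, 3, np.random.default_rng(s)) for s in range(20))
print(round(inertias[0], 1), round(inertias[-1], 1))  # best vs worst restart
```

The spread between the best and worst restart is one crude profile of the local-optima landscape; the cited work develops much finer diagnostics, but the multi-restart comparison is the usual first check.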

  3. Active learning for semi-supervised clustering based on locally linear propagation reconstruction.

    Science.gov (United States)

    Chang, Chin-Chun; Lin, Po-Yi

    2015-03-01

    The success of semi-supervised clustering relies on the effectiveness of side information. To get effective side information, a new active learner learning pairwise constraints known as must-link and cannot-link constraints is proposed in this paper. Three novel techniques are developed for learning effective pairwise constraints. The first technique is used to identify samples less important to cluster structures. This technique makes use of a kernel version of locally linear embedding for manifold learning. Samples neither important to locally linear propagation reconstructions of other samples nor on flat patches in the learned manifold are regarded as unimportant samples. The second is a novel criterion for query selection. This criterion considers not only the importance of a sample to expanding the space coverage of the learned samples but also the expected number of queries needed to learn the sample. To facilitate semi-supervised clustering, the third technique yields inferred must-links for passing information about flat patches in the learned manifold to semi-supervised clustering algorithms. Experimental results have shown that the learned pairwise constraints can capture the underlying cluster structures and proven the feasibility of the proposed approach. Copyright © 2014 Elsevier Ltd. All rights reserved.

  4. CLUSTER LENSING PROFILES DERIVED FROM A REDSHIFT ENHANCEMENT OF MAGNIFIED BOSS-SURVEY GALAXIES

    International Nuclear Information System (INIS)

    Coupon, Jean; Umetsu, Keiichi; Broadhurst, Tom

    2013-01-01

    We report the first detection of a redshift-depth enhancement of background galaxies magnified by foreground clusters. Using 300,000 BOSS survey galaxies with accurate spectroscopic redshifts, we measure their mean redshift depth behind four large samples of optically selected clusters from the Sloan Digital Sky Survey (SDSS) surveys, totaling 5000-15,000 clusters. A clear trend of increasing mean redshift toward the cluster centers is found, averaged over each of the four cluster samples. In addition, we find similar but noisier behavior for an independent X-ray sample of 158 clusters lying in the foreground of the current BOSS sky area. By adopting the mass-richness relationships appropriate for each survey, we compare our results with theoretical predictions for each of the four SDSS cluster catalogs. The radial form of this redshift enhancement is well fitted by a richness-to-mass weighted composite Navarro-Frenk-White profile with an effective mass ranging between M_200 ∼ 1.4-1.8 × 10^14 M_☉ for the optically detected cluster samples, and M_200 ∼ 5.0 × 10^14 M_☉ for the X-ray sample. This lensing detection helps to establish the credibility of these SDSS cluster surveys, and provides a normalization for their respective mass-richness relations. In the context of the upcoming bigBOSS, Subaru Prime Focus Spectrograph, and EUCLID-NISP spectroscopic surveys, this method represents an independent means of deriving the masses of cluster samples for examining the cosmological evolution, and provides a relatively clean consistency check of weak-lensing measurements, free from the systematic limitations of shear calibration.

  5. DCE: A Distributed Energy-Efficient Clustering Protocol for Wireless Sensor Network Based on Double-Phase Cluster-Head Election.

    Science.gov (United States)

    Han, Ruisong; Yang, Wei; Wang, Yipeng; You, Kaiming

    2017-05-01

    Clustering is an effective technique used to reduce energy consumption and extend the lifetime of wireless sensor network (WSN). The characteristic of energy heterogeneity of WSNs should be considered when designing clustering protocols. We propose and evaluate a novel distributed energy-efficient clustering protocol called DCE for heterogeneous wireless sensor networks, based on a Double-phase Cluster-head Election scheme. In DCE, the procedure of cluster head election is divided into two phases. In the first phase, tentative cluster heads are elected with the probabilities which are decided by the relative levels of initial and residual energy. Then, in the second phase, the tentative cluster heads are replaced by their cluster members to form the final set of cluster heads if any member in their cluster has more residual energy. Employing two phases for cluster-head election ensures that the nodes with more energy have a higher chance to be cluster heads. Energy consumption is well-distributed in the proposed protocol, and the simulation results show that DCE achieves longer stability periods than other typical clustering protocols in heterogeneous scenarios.
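
The two-phase election described above can be sketched in a few lines. This is an illustrative reconstruction from the abstract, not the authors' protocol: the election probability, the energy model, and the nearest-head cluster-formation rule are all simplifying assumptions:

```python
import random
import math

def dce_elect(nodes, p=0.1, seed=0):
    """Double-phase cluster-head election sketch.
    nodes: id -> dict(pos=(x, y), e0=initial energy, e=residual energy)."""
    rng = random.Random(seed)
    avg_e0 = sum(n["e0"] for n in nodes.values()) / len(nodes)
    avg_e = sum(n["e"] for n in nodes.values()) / len(nodes)
    # Phase 1: tentative heads, with election probability weighted by the
    # node's initial and residual energy relative to the network averages.
    tentative = [i for i, n in nodes.items()
                 if rng.random() < p * 0.5 * (n["e0"] / avg_e0 + n["e"] / avg_e)]
    if not tentative:  # ensure at least one head
        tentative = [max(nodes, key=lambda i: nodes[i]["e"])]
    # Cluster formation: every node joins its nearest tentative head.
    members = {h: [] for h in tentative}
    for i, n in nodes.items():
        h = min(tentative, key=lambda t: math.dist(n["pos"], nodes[t]["pos"]))
        members[h].append(i)
    # Phase 2: a tentative head cedes the role to the cluster member
    # holding the most residual energy, if that member has more than it.
    return {max(ms, key=lambda i: nodes[i]["e"]) for ms in members.values()}

random.seed(1)
net = {i: {"pos": (random.random() * 100, random.random() * 100),
           "e0": 1.0, "e": random.uniform(0.2, 1.0)} for i in range(50)}
heads = dce_elect(net)
print(len(heads), "cluster heads elected")
```

The second phase is what distinguishes the scheme: even if a low-energy node wins the probabilistic lottery, the role migrates to the most energetic member of its cluster, which is how the protocol keeps energy consumption well distributed.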

  6. Improved Density Based Spatial Clustering of Applications of Noise Clustering Algorithm for Knowledge Discovery in Spatial Data

    Directory of Open Access Journals (Sweden)

    Arvind Sharma

    2016-01-01

    Full Text Available Many techniques in data mining and its subfield spatial data mining aim to understand the relationships between data objects. Databases whose objects carry spatial features are called spatial databases, and their relationships can be used for prediction and trend detection between spatial and nonspatial objects for social and scientific purposes. Huge data sets may be collected from sources as diverse as satellite images, X-rays, medical images, traffic cameras, and GIS systems. The primary purpose of this paper is to handle this large amount of data and to establish relationships within it that yield useful results. The paper describes how spatial data differs from other kinds of data sets and how it is refined to obtain useful results and to set trends for prediction in geographic information systems and the spatial data mining process. A new improved clustering algorithm is designed here, because the role of clustering is indispensable in the spatial data mining process. Clustering methods are useful in various fields of human life, such as GIS (Geographic Information Systems), GPS (Global Positioning Systems), weather forecasting, air traffic control, water treatment, area selection, cost estimation, planning of rural and urban areas, remote sensing, and VLSI design. This paper presents a study of various clustering methods and algorithms and an improved version of DBSCAN, called IDBSCAN (Improved Density-Based Spatial Clustering of Applications with Noise). The algorithm adds several important attributes that are responsible for generating better clusters from existing data sets than other methods.
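
For reference, the baseline DBSCAN that IDBSCAN builds on is compact: a point with at least `min_pts` neighbours within radius `eps` seeds a cluster, the cluster is grown through density-reachable points, and whatever remains is noise. A minimal sketch of the standard algorithm (not the paper's improved variant):

```python
import math

def dbscan(points, eps, min_pts):
    """Minimal textbook DBSCAN; returns one label per point, -1 = noise."""
    n = len(points)
    labels = [None] * n

    def neighbors(i):
        return [j for j in range(n) if math.dist(points[i], points[j]) <= eps]

    cluster = -1
    for i in range(n):
        if labels[i] is not None:
            continue
        seeds = neighbors(i)
        if len(seeds) < min_pts:
            labels[i] = -1           # tentative noise; may later become a border point
            continue
        cluster += 1
        labels[i] = cluster
        queue = [j for j in seeds if j != i]
        while queue:
            j = queue.pop()
            if labels[j] == -1:
                labels[j] = cluster  # noise reclassified as border point, not expanded
            if labels[j] is not None:
                continue
            labels[j] = cluster
            nb = neighbors(j)
            if len(nb) >= min_pts:   # j is a core point: keep expanding
                queue.extend(nb)
    return labels

# Two dense groups plus one outlier
pts = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10), (50, 50)]
print(dbscan(pts, eps=2.0, min_pts=2))  # → [0, 0, 0, 1, 1, 1, -1]
```

The naive neighbour scan makes this O(n²); practical implementations (and improvements such as IDBSCAN) attack exactly this cost and the sensitivity to the `eps`/`min_pts` choice.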

  7. Cluster Analysis of Clinical Data Identifies Fibromyalgia Subgroups

    Science.gov (United States)

    Docampo, Elisa; Collado, Antonio; Escaramís, Geòrgia; Carbonell, Jordi; Rivera, Javier; Vidal, Javier; Alegre, José

    2013-01-01

    Introduction Fibromyalgia (FM) is mainly characterized by widespread pain and multiple accompanying symptoms, which hinder FM assessment and management. In order to reduce FM heterogeneity we classified clinical data into simplified dimensions that were used to define FM subgroups. Material and Methods 48 variables were evaluated in 1,446 Spanish FM cases fulfilling 1990 ACR FM criteria. A partitioning analysis was performed to find groups of variables similar to each other. Similarities between variables were identified and the variables were grouped into dimensions. This was performed in a subset of 559 patients, and cross-validated in the remaining 887 patients. For each sample and dimension, a composite index was obtained based on the weights of the variables included in the dimension. Finally, a clustering procedure was applied to the indexes, resulting in FM subgroups. Results Variables clustered into three independent dimensions: “symptomatology”, “comorbidities” and “clinical scales”. Only the two first dimensions were considered for the construction of FM subgroups. Resulting scores classified FM samples into three subgroups: low symptomatology and comorbidities (Cluster 1), high symptomatology and comorbidities (Cluster 2), and high symptomatology but low comorbidities (Cluster 3), showing differences in measures of disease severity. Conclusions We have identified three subgroups of FM samples in a large cohort of FM by clustering clinical data. Our analysis stresses the importance of family and personal history of FM comorbidities. Also, the resulting patient clusters could indicate different forms of the disease, relevant to future research, and might have an impact on clinical assessment. PMID:24098674

  8. Cluster analysis of clinical data identifies fibromyalgia subgroups.

    Directory of Open Access Journals (Sweden)

    Elisa Docampo

    Full Text Available INTRODUCTION: Fibromyalgia (FM) is mainly characterized by widespread pain and multiple accompanying symptoms, which hinder FM assessment and management. In order to reduce FM heterogeneity we classified clinical data into simplified dimensions that were used to define FM subgroups. MATERIAL AND METHODS: 48 variables were evaluated in 1,446 Spanish FM cases fulfilling 1990 ACR FM criteria. A partitioning analysis was performed to find groups of variables similar to each other. Similarities between variables were identified and the variables were grouped into dimensions. This was performed in a subset of 559 patients, and cross-validated in the remaining 887 patients. For each sample and dimension, a composite index was obtained based on the weights of the variables included in the dimension. Finally, a clustering procedure was applied to the indexes, resulting in FM subgroups. RESULTS: Variables clustered into three independent dimensions: "symptomatology", "comorbidities" and "clinical scales". Only the first two dimensions were considered for the construction of FM subgroups. The resulting scores classified FM samples into three subgroups: low symptomatology and comorbidities (Cluster 1), high symptomatology and comorbidities (Cluster 2), and high symptomatology but low comorbidities (Cluster 3), showing differences in measures of disease severity. CONCLUSIONS: We have identified three subgroups of FM samples in a large cohort of FM by clustering clinical data. Our analysis stresses the importance of family and personal history of FM comorbidities. Also, the resulting patient clusters could indicate different forms of the disease, relevant to future research, and might have an impact on clinical assessment.
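
The final step of the pipeline, clustering patients on a small number of composite indexes, can be sketched with ordinary hierarchical (Ward) clustering on a two-dimensional index space. The data below are invented for illustration and stand in for the symptomatology and comorbidity indexes; the paper's actual clustering procedure may differ:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Hypothetical composite indexes per patient:
# column 0 = symptomatology, column 1 = comorbidities.
rng = np.random.default_rng(0)
idx = np.vstack([
    rng.normal([-1, -1], 0.3, (30, 2)),  # low symptoms, low comorbidities
    rng.normal([1, 1], 0.3, (30, 2)),    # high symptoms, high comorbidities
    rng.normal([1, -1], 0.3, (30, 2)),   # high symptoms, low comorbidities
])

# Agglomerative clustering with Ward linkage, cut at three clusters
Z = linkage(idx, method="ward")
groups = fcluster(Z, t=3, criterion="maxclust")
print(sorted(np.bincount(groups)[1:].tolist()))  # sizes of the recovered subgroups
```

Working on two or three composite indexes rather than 48 raw variables is what makes the subgroup structure this easy to recover and to visualize.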

  9. Fault Detection Using the Clustering-kNN Rule for Gas Sensor Arrays

    Directory of Open Access Journals (Sweden)

    Jingli Yang

    2016-12-01

    Full Text Available The k-nearest neighbour (kNN) rule, which naturally handles the possible non-linearity of data, is introduced to solve the fault detection problem of gas sensor arrays. In traditional fault detection methods based on the kNN rule, the detection process of each new test sample involves all samples in the entire training sample set. Therefore, these methods can be computation intensive in monitoring processes with a large volume of variables and training samples and may be impossible for real-time monitoring. To address this problem, a novel clustering-kNN rule is presented. The landmark-based spectral clustering (LSC) algorithm, which has low computational complexity, is employed to divide the entire training sample set into several clusters. Further, the kNN rule is only conducted in the cluster that is nearest to the test sample; thus, the efficiency of the fault detection methods can be enhanced by reducing the number of training samples involved in the detection process of each test sample. The performance of the proposed clustering-kNN rule is fully verified in numerical simulations with both linear and non-linear models and a real gas sensor array experimental system with different kinds of faults. The results of simulations and experiments demonstrate that the clustering-kNN rule can greatly enhance both the accuracy and efficiency of fault detection methods and provide an excellent solution to reliable and real-time monitoring of gas sensor arrays.

  10. Fault Detection Using the Clustering-kNN Rule for Gas Sensor Arrays

    Science.gov (United States)

    Yang, Jingli; Sun, Zhen; Chen, Yinsheng

    2016-01-01

    The k-nearest neighbour (kNN) rule, which naturally handles the possible non-linearity of data, is introduced to solve the fault detection problem of gas sensor arrays. In traditional fault detection methods based on the kNN rule, the detection process of each new test sample involves all samples in the entire training sample set. Therefore, these methods can be computation intensive in monitoring processes with a large volume of variables and training samples and may be impossible for real-time monitoring. To address this problem, a novel clustering-kNN rule is presented. The landmark-based spectral clustering (LSC) algorithm, which has low computational complexity, is employed to divide the entire training sample set into several clusters. Further, the kNN rule is only conducted in the cluster that is nearest to the test sample; thus, the efficiency of the fault detection methods can be enhanced by reducing the number of training samples involved in the detection process of each test sample. The performance of the proposed clustering-kNN rule is fully verified in numerical simulations with both linear and non-linear models and a real gas sensor array experimental system with different kinds of faults. The results of simulations and experiments demonstrate that the clustering-kNN rule can greatly enhance both the accuracy and efficiency of fault detection methods and provide an excellent solution to reliable and real-time monitoring of gas sensor arrays. PMID:27929412
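
The speed-up idea behind the clustering-kNN rule, partitioning the training set offline and then running the kNN distance test only against the cluster nearest to the incoming sample, can be sketched as follows. Plain k-means stands in here for the paper's landmark-based spectral clustering, and the data and score definition are invented for illustration:

```python
import numpy as np

def fit_clusters(X, k, seed=0, iters=50):
    """Partition the training set with plain k-means (a stand-in for the
    paper's landmark-based spectral clustering)."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = ((X[:, None] - centers[None]) ** 2).sum(-1).argmin(1)
        centers = np.array([X[labels == j].mean(0) if (labels == j).any()
                            else centers[j] for j in range(k)])
    return centers, labels

def clustering_knn_score(x, centers, X, labels, n_neighbors=3):
    """Mean kNN distance of x, evaluated only inside its nearest cluster
    instead of against the whole training set."""
    c = np.linalg.norm(centers - x, axis=1).argmin()
    d = np.sort(np.linalg.norm(X[labels == c] - x, axis=1))
    return d[:n_neighbors].mean()

# Two operating regimes in the training data (synthetic)
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (100, 4)), rng.normal(5, 1, (100, 4))])
centers, labels = fit_clusters(X, 2)

normal = clustering_knn_score(rng.normal(0, 1, 4), centers, X, labels)
faulty = clustering_knn_score(np.full(4, 20.0), centers, X, labels)
print(normal, faulty)  # a fault scores far higher than a normal reading
```

Each test sample is now compared against roughly 1/k of the training set, which is the efficiency gain the abstract describes; a threshold on the score (calibrated on fault-free data) would turn it into a detector.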

  11. Towards automating the discovery of certain innovative design principles through a clustering-based optimization technique

    Science.gov (United States)

    Bandaru, Sunith; Deb, Kalyanmoy

    2011-09-01

    In this article, a methodology is proposed for automatically extracting innovative design principles which make a system or process (subject to conflicting objectives) optimal using its Pareto-optimal dataset. Such 'higher knowledge' would not only help designers to execute the system better, but also enable them to predict how changes in one variable would affect other variables if the system has to retain its optimal behaviour. This in turn would help solve other similar systems with different parameter settings easily without the need to perform a fresh optimization task. The proposed methodology uses a clustering-based optimization technique and is capable of discovering hidden functional relationships between the variables, objective and constraint functions and any other function that the designer wishes to include as a 'basis function'. A number of engineering design problems are considered for which the mathematical structure of these explicit relationships exists and has been revealed by a previous study. A comparison with the multivariate adaptive regression splines (MARS) approach reveals the practicality of the proposed approach due to its ability to find meaningful design principles. The success of this procedure for automated innovization is highly encouraging and indicates its suitability for further development in tackling more complex design scenarios.

  12. Hierarchical Cluster Analysis of Three-Dimensional Reconstructions of Unbiased Sampled Microglia Shows not Continuous Morphological Changes from Stage 1 to 2 after Multiple Dengue Infections in Callithrix penicillata

    Science.gov (United States)

    Diniz, Daniel G.; Silva, Geane O.; Naves, Thaís B.; Fernandes, Taiany N.; Araújo, Sanderson C.; Diniz, José A. P.; de Farias, Luis H. S.; Sosthenes, Marcia C. K.; Diniz, Cristovam G.; Anthony, Daniel C.; da Costa Vasconcelos, Pedro F.; Picanço Diniz, Cristovam W.

    2016-01-01

    It is known that microglial morphology and function are related, but few studies have explored the subtleties of microglial morphological changes in response to specific pathogens. In the present report we quantitated microglial morphological changes in a monkey model of dengue disease with virus CNS invasion. To mimic the multiple infections that usually occur in endemic areas, where higher dengue infection incidence and abundant mosquito vectors carrying different serotypes coexist, subjects received weekly subcutaneous injections of DENV3 (genotype III)-infected culture supernatant, followed 24 h later by an injection of anti-DENV2 antibody. Control animals received either weekly anti-DENV2 antibodies or no injections. Brain sections were immunolabeled for DENV3 antigens and IBA-1. Random and systematic microglial samples were taken from the polymorphic layer of the dentate gyrus for 3-D reconstructions, where we found intense immunostaining for TNFα and DENV3 virus antigens. We submitted all bi- or multimodal morphological parameters of microglia to hierarchical cluster analysis and found two major morphological phenotypes, designated types I and II. Compared to type I (stage 1), type II microglia were more complex, displaying a higher number of nodes, processes and trees and larger surface areas and volumes (stage 2). Type II microglia were found only in infected monkeys, whereas type I microglia were found in both control and infected subjects. Hierarchical cluster analysis of the morphological parameters of 3-D reconstructions of randomly and systematically selected samples in control and ADE dengue infected monkeys suggests that microglial morphological changes from stage 1 to stage 2 may not be continuous. PMID:27047345

  13. Aligning experimental design with bioinformatics analysis to meet discovery research objectives.

    Science.gov (United States)

    Kane, Michael D

    2002-01-01

    The utility of genomic technology and bioinformatic analytical support to provide new and needed insight into the molecular basis of disease, development, and diversity continues to grow as more research model systems and populations are investigated. Yet deriving results that meet a specific set of research objectives requires aligning or coordinating the design of the experiment, the laboratory techniques, and the data analysis. The following paragraphs describe several important interdependent factors that need to be considered to generate high quality data from the microarray platform. These factors include aligning oligonucleotide probe design with the sample labeling strategy if oligonucleotide probes are employed, recognizing that compromises are inherent in different sample procurement methods, normalizing 2-color microarray raw data, and distinguishing the difference between gene clustering and sample clustering. These factors do not represent an exhaustive list of technical variables in microarray-based research, but this list highlights those variables that span both experimental execution and data analysis. Copyright 2001 Wiley-Liss, Inc.

  14. Completion Report for Well Cluster ER-6-1

    Energy Technology Data Exchange (ETDEWEB)

    Bechtel Nevada

    2004-10-01

    Well Cluster ER-6-1 was constructed for the U.S. Department of Energy, National Nuclear Security Administration Nevada Site Office in support of the Nevada Environmental Restoration Division at the Nevada Test Site, Nye County, Nevada. This work was initiated as part of the Groundwater Characterization Project, now known as the Underground Test Area Project. The well cluster is located in southeastern Yucca Flat. Detailed lithologic descriptions with stratigraphic assignments for Well Cluster ER-6-1 are included in this report. These are based on composite drill cuttings collected every 3 meters and conventional core samples taken below 639 meters, supplemented by geophysical log data. Detailed petrographic, chemical, and mineralogical studies of rock samples were conducted on 11 samples to resolve complex interrelationships between several of the Tertiary tuff units. Additionally, paleontological analyses by the U.S. Geological Survey confirmed the stratigraphic assignments below 539 meters within the Paleozoic sedimentary section. All three wells in the Well ER-6-1 cluster were drilled within the Quaternary and Tertiary alluvium section, the Tertiary volcanic section, and into the Paleozoic sedimentary section.

  15. STAR-FORMING GALAXIES IN THE HERCULES CLUSTER: Hα IMAGING OF A2151

    International Nuclear Information System (INIS)

    Cedres, Bernabe; Iglesias-Paramo, Jorge; VIlchez, Jose Manuel; Reverte, Daniel; Petropoulou, Vasiliki; Hernandez-Fernandez, Jonathan

    2009-01-01

    This paper presents the first results of an Hα imaging survey of galaxies in the central regions of the A2151 cluster. A total of 50 sources were detected in Hα, from which 41 were classified as secure members of the cluster and 2 as likely members based on spectroscopic and photometric redshift considerations. The remaining seven galaxies were classified as background contaminants and thus excluded from our study on the Hα properties of the cluster. The morphologies of the 43 Hα selected galaxies range from grand design spirals and interacting galaxies to blue compacts and tidal dwarfs or isolated extragalactic H II regions, spanning a range of magnitudes of −21 ≤ M_B ≤ −12.5 mag. From these 43 galaxies, 7 have been classified as active galactic nucleus (AGN) candidates. These AGN candidates follow the L(Hα) versus M_B relationship of the normal galaxies, implying that the emission associated with the nuclear engine has a rather secondary impact on the total Hα emission of these galaxies. A comparison with the clusters Coma and A1367 and a sample of field galaxies has shown the presence of cluster galaxies with L(Hα) lower than expected for their M_B, a consequence of the cluster environment. This fact results in differences in the L(Hα) versus EW(Hα) and L(Hα) distributions of the clusters with respect to the field, and in cluster-to-cluster variations of these quantities, which we propose are driven by a global cluster property as the total mass. In addition, the cluster Hα emitting galaxies tend to avoid the central regions of the clusters, again with different intensity depending on the cluster total mass. For the particular case of A2151, we find that most Hα emitting galaxies are located close to the regions with the higher galaxy density, offset from the main X-ray peak. Overall, we conclude that both the global cluster environment and the cluster merging history play a non-negligible role in the integral star formation properties of

  16. Radio emission of Abell galaxy clusters with red shifts from 0.02 to 0.075 at 102.5 MHz. Observations of clusters southward from the galactic plane

    International Nuclear Information System (INIS)

    Gubanov, A.G.

    1983-01-01

A sample of 121 Abell clusters of galaxies with measured red shifts from 0.02 to 0.075, δ = 10° to +80°, and within the completeness galactic-latitude region is presented. The completeness with respect to Abell's catalog is 80%. The completeness of the sample as a function of distance (the completeness function) was constructed, and a mean cluster density of 1.5×10^-6 Mpc^-3 was derived. Observations at 102.5 MHz of 39 clusters south of the galactic plane were carried out with the BSA radio telescope. Flux density measurements for radio sources in the directions of the clusters have been made; integrated fluxes of the clusters and luminosity estimates for their radio halos are presented. Radio emission was detected from 11 clusters, and for two of these, as well as for other clusters, radio sources were detected in the directions of the cluster outskirts. Radio halos with a luminosity comparable to that of the A1656 (Coma) cluster are not typical for clusters.

  17. Performance of the cluster-jet target for PANDA

    Energy Technology Data Exchange (ETDEWEB)

    Hergemoeller, Ann-Katrin; Bonaventura, Daniel; Grieser, Silke; Hetz, Benjamin; Koehler, Esperanza; Khoukaz, Alfons [Institut fuer Kernphysik, Westfaelische Wilhelms-Universitaet Muenster, 48149 Muenster (Germany)

    2016-07-01

The success of storage ring experiments strongly depends on the choice of the target. A very appropriate internal target for such an experiment is a cluster-jet target, which will be the first target operated at the PANDA experiment at FAIR. In this kind of target the cluster beam is formed by the expansion of pre-cooled gases within a Laval nozzle and is prepared afterwards by two orifices, the skimmer and the collimator. The target prototype, operating successfully for years at the University of Muenster, routinely provides target thicknesses of more than 2 x 10{sup 15} atoms/cm{sup 2} at a distance of 2.1 m behind the nozzle. Based on the performance of the cluster target prototype, the final cluster-jet target source was designed and set into operation in Muenster as well. Besides the monitoring of the cluster beam itself and of the target thickness with two different monitoring systems, investigations of the cluster mass via Mie scattering will be performed. In this presentation an overview of the cluster target design, its performance, and the Mie scattering method is presented and discussed.

  18. The observed clustering of damaging extratropical cyclones in Europe

    Science.gov (United States)

    Cusack, Stephen

    2016-04-01

    The clustering of severe European windstorms on annual timescales has substantial impacts on the (re-)insurance industry. Our knowledge of the risk is limited by large uncertainties in estimates of clustering from typical historical storm data sets covering the past few decades. Eight storm data sets are gathered for analysis in this study in order to reduce these uncertainties. Six of the data sets contain more than 100 years of severe storm information to reduce sampling errors, and observational errors are reduced by the diversity of information sources and analysis methods between storm data sets. All storm severity measures used in this study reflect damage, to suit (re-)insurance applications. The shortest storm data set of 42 years provides indications of stronger clustering with severity, particularly for regions off the main storm track in central Europe and France. However, clustering estimates have very large sampling and observational errors, exemplified by large changes in estimates in central Europe upon removal of one stormy season, 1989/1990. The extended storm records place 1989/1990 into a much longer historical context to produce more robust estimates of clustering. All the extended storm data sets show increased clustering between more severe storms from return periods (RPs) of 0.5 years to the longest measured RPs of about 20 years. Further, they contain signs of stronger clustering off the main storm track, and weaker clustering for smaller-sized areas, though these signals are more uncertain as they are drawn from smaller data samples. These new ultra-long storm data sets provide new information on clustering to improve our management of this risk.

  19. Sampling design for long-term regional trends in marine rocky intertidal communities

    Science.gov (United States)

    Irvine, Gail V.; Shelley, Alice

    2013-01-01

    Probability-based designs reduce bias and allow inference of results to the pool of sites from which they were chosen. We developed and tested probability-based designs for monitoring marine rocky intertidal assemblages at Glacier Bay National Park and Preserve (GLBA), Alaska. A multilevel design was used that varied in scale and inference. The levels included aerial surveys, extensive sampling of 25 sites, and more intensive sampling of 6 sites. Aerial surveys of a subset of intertidal habitat indicated that the original target habitat of bedrock-dominated sites with slope ≤30° was rare. This unexpected finding illustrated one value of probability-based surveys and led to a shift in the target habitat type to include steeper, more mixed rocky habitat. Subsequently, we evaluated the statistical power of different sampling methods and sampling strategies to detect changes in the abundances of the predominant sessile intertidal taxa: barnacles Balanomorpha, the mussel Mytilus trossulus, and the rockweed Fucus distichus subsp. evanescens. There was greatest power to detect trends in Mytilus and lesser power for barnacles and Fucus. Because of its greater power, the extensive, coarse-grained sampling scheme was adopted in subsequent years over the intensive, fine-grained scheme. The sampling attributes that had the largest effects on power included sampling of “vertical” line transects (vs. horizontal line transects or quadrats) and increasing the number of sites. We also evaluated the power of several management-set parameters. Given equal sampling effort, sampling more sites fewer times had greater power. The information gained through intertidal monitoring is likely to be useful in assessing changes due to climate, including ocean acidification; invasive species; trampling effects; and oil spills.

  20. Automated detection of very Low Surface Brightness galaxies in the Virgo Cluster

    Science.gov (United States)

    Prole, D. J.; Davies, J. I.; Keenan, O. C.; Davies, L. J. M.

    2018-04-01

We report the automatic detection of a new sample of very low surface brightness (LSB) galaxies, likely members of the Virgo cluster. We introduce our new software, DeepScan, designed specifically to detect extended LSB features automatically using the DBSCAN algorithm. We demonstrate the technique by applying it over a 5 deg² portion of the Next-Generation Virgo Survey (NGVS) data to reveal 53 low surface brightness galaxies that are candidate cluster members based on their sizes and colours. Thirty of these sources are new detections, despite the region having been searched specifically for LSB galaxies previously. Our final sample contains galaxies with 26.0 ≤ ⟨μe⟩ ≤ 28.5 and 19 ≤ mg ≤ 21, making them some of the faintest known in Virgo. The majority of them have colours consistent with the red sequence, and they have a mean stellar mass of 10^(6.3 ± 0.5) M⊙ assuming cluster membership. After using ProFit to fit Sérsic profiles to our detections, we find that none of the new sources has an effective radius larger than 1.5 kpc; they therefore do not meet the criteria for ultra-diffuse galaxy (UDG) classification, so we classify them as ultra-faint dwarfs.
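The grouping step that DeepScan builds on is the standard DBSCAN density-based clustering algorithm. The following is a minimal, self-contained sketch of DBSCAN itself, not the authors' DeepScan code; the point data and the eps/min_pts parameters are purely illustrative:

```python
# Minimal DBSCAN: labels each 2D point with a cluster id, or -1 for noise.
# eps: neighbourhood radius; min_pts: density threshold for a core point.
def dbscan(points, eps, min_pts):
    n = len(points)
    labels = [None] * n                  # None = unvisited

    def neighbours(i):
        xi, yi = points[i]
        return [j for j, (xj, yj) in enumerate(points)
                if (xi - xj) ** 2 + (yi - yj) ** 2 <= eps ** 2]

    cluster = -1
    for i in range(n):
        if labels[i] is not None:
            continue
        seeds = neighbours(i)
        if len(seeds) < min_pts:
            labels[i] = -1               # noise (may later become a border point)
            continue
        cluster += 1                     # start a new cluster from this core point
        labels[i] = cluster
        queue = [j for j in seeds if j != i]
        while queue:
            j = queue.pop()
            if labels[j] == -1:
                labels[j] = cluster      # previously-noise point becomes a border point
            if labels[j] is not None:
                continue
            labels[j] = cluster
            nb = neighbours(j)
            if len(nb) >= min_pts:       # core point: keep expanding the cluster
                queue.extend(nb)
    return labels

# Two dense blobs plus one isolated point: two clusters and one noise point.
pts = [(0, 0), (0.5, 0), (0, 0.5), (0.5, 0.5),
       (5, 5), (5.5, 5), (5, 5.5), (5.5, 5.5),
       (10, 0)]
labels = dbscan(pts, eps=1.0, min_pts=3)
```

In DeepScan the same idea is applied to the coordinates of low-significance pixels, so that extended LSB structure emerges as dense point groups while isolated noise is discarded.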

  1. Merging Galaxy Clusters: Analysis of Simulated Analogs

    Science.gov (United States)

    Nguyen, Jayke; Wittman, David; Cornell, Hunter

    2018-01-01

    The nature of dark matter can be better constrained by observing merging galaxy clusters. However, uncertainty in the viewing angle leads to uncertainty in dynamical quantities such as 3-d velocities, 3-d separations, and time since pericenter. The classic timing argument links these quantities via equations of motion, but neglects effects of nonzero impact parameter (i.e. it assumes velocities are parallel to the separation vector), dynamical friction, substructure, and larger-scale environment. We present a new approach using n-body cosmological simulations that naturally incorporate these effects. By uniformly sampling viewing angles about simulated cluster analogs, we see projected merger parameters in the many possible configurations of a given cluster. We select comparable simulated analogs and evaluate the likelihood of particular merger parameters as a function of viewing angle. We present viewing angle constraints for a sample of observed mergers including the Bullet cluster and El Gordo, and show that the separation vectors are closer to the plane of the sky than previously reported.

  2. Beams of mass-selected clusters: realization and first experiments

    International Nuclear Information System (INIS)

    Kamalou, O.

    2007-04-01

The main objective of this work concerns the production of beams of mass-selected clusters of metallic and semiconductor materials. Clusters are produced in a magnetron sputtering source combined with a gas aggregation chamber cooled by liquid nitrogen circulation. Downstream of the cluster source, a Wiley-McLaren time-of-flight setup allows the selection of a given cluster size or a narrow size range. The pulsed mass-selected cluster ion beam is separated from the continuous neutral beam by an electrostatic 90° quadrupole deflector. After the deflector, the density of the pulsed beam amounts to about 10^3 particles/cm^3. Preliminary deposition experiments with mass-selected copper clusters at a deposition energy of about 0.5 eV/atom have been performed on highly oriented pyrolytic graphite (HOPG) substrates, indicating that copper clusters are mobile on the HOPG surface until they reach cleavage steps, dislocation lines, or other surface defects. In order to lower the cluster mobility on the HOPG surface, we first irradiated HOPG samples with slow highly charged ions (at high dose) to create superficial defects. In a second step we deposited mass-selected copper clusters on these pre-irradiated samples. A first analysis by AFM (Atomic Force Microscopy) showed that the copper clusters are trapped on the defects produced by the highly charged ions. (author)

  3. A multi purpose source chamber at the PLEIADES beamline at SOLEIL for spectroscopic studies of isolated species: cold molecules, clusters, and nanoparticles.

    Science.gov (United States)

    Lindblad, Andreas; Söderström, Johan; Nicolas, Christophe; Robert, Emmanuel; Miron, Catalin

    2013-11-01

This paper describes the philosophy and design goals behind the construction of a versatile sample environment: a source capable of producing beams of atoms, molecules, clusters, and nanoparticles in view of studying their interaction with short-wavelength (vacuum ultraviolet and x-ray) synchrotron radiation. In the design, specific care has been taken to (a) use standard components and (b) ensure modularity, i.e., the ability to switch swiftly between different experimental configurations. To demonstrate the efficiency of the design, proof-of-principle experiments were conducted by recording x-ray absorption and photoelectron spectra from isolated nanoparticles (SiO2) and free mixed clusters (Ar/Xe). The results from those experiments are showcased and briefly discussed.

  4. Calculating Cluster Masses via the Sunyaev-Zel'dovich Effect

    Science.gov (United States)

    Lindley, Ashley; Landry, D.; Bonamente, M.; Joy, M.; Bulbul, E.; Carlstrom, J. E.; Culverhouse, T. L.; Gralla, M.; Greer, C.; Hawkins, D.; Lamb, J. W.; Leitch, E. M.; Marrone, D. P.; Miller, A.; Mroczkowski, T.; Muchovej, S.; Plagge, T.; Woody, D.

    2012-05-01

    Accurate measurements of the total mass of galaxy clusters are key for measuring the cluster mass function and therefore investigating the evolution of the universe. We apply two new methods to measure cluster masses for five galaxy clusters contained within the Brightest Cluster Sample (BCS), an X-ray luminous statistically complete sample of 35 clusters at z=0.15-0.30. These methods distinctively use only observations of the Sunyaev-Zel'dovich (SZ) effect, for which the brightness is redshift independent. At the low redshifts of the BCS, X-ray observations can easily be used to determine cluster masses, providing convenient calibrators for our SZ mass calculations. These clusters have been observed with the Sunyaev-Zel'dovich Array (SZA), an interferometer that is part of the Combined Array for Research in Millimeter-wave Astronomy (CARMA) that has been optimized for accurate measurement of the SZ effect in clusters of galaxies at 30 GHz. One method implements a scaling relation that relates the integrated pressure, Y, as determined by the SZ observations to the mass of the cluster calculated via optical weak lensing. The second method makes use of the Virial theorem to determine the mass given the integrated pressure of the cluster. We find that masses calculated utilizing these methods within a radius r500 are consistent with X-ray masses, calculated by manipulating the surface brightness and temperature data within the same radius, thus concluding that these are viable methods for the determination of cluster masses via the SZ effect. We present preliminary results of our analysis for five galaxy clusters.
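The first method rests on the fact that, in the self-similar picture, the spherically integrated Comptonization scales as a power law of total mass. The relation below is only the standard self-similar form; the specific weak-lensing calibration used by the authors is not reproduced here:

```latex
% Self-similar SZ scaling: integrated Comptonization vs. total mass
Y_{500}\, D_A^2 \;\propto\; E(z)^{2/3}\, M_{500}^{5/3},
\qquad E(z) \equiv \sqrt{\Omega_m (1+z)^3 + \Omega_\Lambda}
```

Calibrating the normalization of this relation against weak-lensing masses then turns a measured Y into a mass estimate, which is the essence of the first method described above.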

  5. Extraction of design rules from multi-objective design exploration (MODE) using rough set theory

    International Nuclear Information System (INIS)

    Obayashi, Shigeru

    2011-01-01

Multi-objective design exploration (MODE) and its application to design rule extraction are presented. MODE reveals the structure of the design space from trade-off information. The self-organizing map (SOM) is incorporated into MODE as a visual data-mining tool for the design space. SOM divides the design space into clusters with specific design features. The sufficient conditions for belonging to a cluster of interest are extracted using rough set theory. MODE was applied to a multidisciplinary wing design problem; it revealed a cluster of good designs, and the design rules of such designs were extracted successfully.

  6. Dynamical Competition of IC-Industry Clustering from Taiwan to China

    Science.gov (United States)

    Tsai, Bi-Huei; Tsai, Kuo-Hui

    2009-08-01

Most studies employ a qualitative approach to explore industrial clusters; however, little research has objectively quantified the evolution of industry clustering. The purpose of this paper is to quantitatively analyze clustering among the IC design, IC manufacturing, and IC packaging and testing industries using foreign direct investment (FDI) data. The Lotka-Volterra system of equations is adopted here to capture the competition or cooperation among these three industries, thus explaining their clustering inclinations. The results indicate that the evolution of FDI into China by the IC design industry significantly inspired the subsequent FDI of the IC manufacturing and IC packaging and testing industries. Since the IC design industry lies in the upstream stage of IC production, the middle-stream IC manufacturing and downstream IC packaging and testing enterprises tend to cluster together with IC design firms in order to sustain a steady business. Finally, Taiwan's IC industry FDI into China is predicted to increase cumulatively, which supports the industrial clustering tendency of the Taiwan IC industry. In particular, the FDI prediction of the Lotka-Volterra model proves superior to that of the conventional Bass model when the forecast accuracy of the two models is compared. The prediction ability is dramatically improved when the industrial mutualism among the IC production stages is taken into account.
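The competition/cooperation structure described above can be illustrated by integrating a two-species Lotka-Volterra competition system. The sketch below uses a fixed-step RK4 integrator with purely illustrative coefficients; the paper's fitted parameters and its three-industry system are not reproduced here:

```python
# Two-population Lotka-Volterra competition model integrated with a
# simple fixed-step RK4 scheme. All coefficients are illustrative only.
def lotka_volterra(x, y, r1=0.8, r2=0.6, k1=100.0, k2=80.0, a12=0.5, a21=0.4):
    """Growth rates (dx/dt, dy/dt) for two competing populations."""
    dx = r1 * x * (1 - (x + a12 * y) / k1)
    dy = r2 * y * (1 - (y + a21 * x) / k2)
    return dx, dy

def simulate(x0, y0, dt=0.05, steps=4000):
    x, y = x0, y0
    for _ in range(steps):
        # one classical RK4 step
        k1x, k1y = lotka_volterra(x, y)
        k2x, k2y = lotka_volterra(x + dt * k1x / 2, y + dt * k1y / 2)
        k3x, k3y = lotka_volterra(x + dt * k2x / 2, y + dt * k2y / 2)
        k4x, k4y = lotka_volterra(x + dt * k3x, y + dt * k3y)
        x += dt * (k1x + 2 * k2x + 2 * k3x + k4x) / 6
        y += dt * (k1y + 2 * k2y + 2 * k3y + k4y) / 6
    return x, y

x_final, y_final = simulate(5.0, 5.0)
```

With these coefficients (a12·a21 < 1) the two populations settle at the stable coexistence equilibrium (x, y) = (75, 50): both grow together rather than excluding each other, the analogue of the mutualistic clustering the paper finds among the IC production stages.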

  7. Android Malware Clustering through Malicious Payload Mining

    OpenAIRE

    Li, Yuping; Jang, Jiyong; Hu, Xin; Ou, Xinming

    2017-01-01

    Clustering has been well studied for desktop malware analysis as an effective triage method. Conventional similarity-based clustering techniques, however, cannot be immediately applied to Android malware analysis due to the excessive use of third-party libraries in Android application development and the widespread use of repackaging in malware development. We design and implement an Android malware clustering system through iterative mining of malicious payload and checking whether malware s...

  8. Concept of the Ural pharmaceutical cluster formation

    Directory of Open Access Journals (Sweden)

    Aleksandr Petrovich Petrov

    2011-06-01

cluster is made; its impact on the economic development of the areas where cluster participants are located is estimated. Areas of state and community support for clustered forms of business organization are designated. A complex of proposed actions was designed to match the life cycle of the cluster. This positive experience of the formation and development of a cluster structure in Sverdlovsk region can be implemented in other regions of Russia.

  9. Segmentation of Residential Gas Consumers Using Clustering Analysis

    Directory of Open Access Journals (Sweden)

    Marta P. Fernandes

    2017-12-01

The growing environmental concerns and liberalization of energy markets have resulted in an increased competition between utilities and a strong focus on efficiency. To develop new energy efficiency measures and optimize operations, utilities seek new market-related insights and customer engagement strategies. This paper proposes a clustering-based methodology to define the segmentation of residential gas consumers. The segments of gas consumers are obtained through a detailed clustering analysis using smart metering data. Insights are derived from the segmentation, where the segments result from the clustering process and are characterized based on the consumption profiles, as well as according to information regarding consumers’ socio-economic and household key features. The study is based on a sample of approximately one thousand households over one year. The representative load profiles of consumers are essentially characterized by two evident consumption peaks, one in the morning and the other in the evening, and an off-peak consumption. Significant insights can be derived from this methodology regarding typical consumption curves of the different segments of consumers in the population. This knowledge can assist energy utilities and policy makers in the development of consumer engagement strategies, demand forecasting tools and in the design of more sophisticated tariff systems.
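The segmentation step can be illustrated with a toy k-means clustering of synthetic daily gas-load profiles; the two-peak shape mirrors the representative curves described above, but the data generator and all parameters here are invented for illustration only:

```python
import random

random.seed(0)

# Synthetic 24-hour load profiles (arbitrary units): "double-peak"
# households with morning and evening peaks vs. "flat" households.
def profile(kind):
    base = [1.0 + 0.1 * random.random() for _ in range(24)]
    if kind == "double_peak":
        for h in (7, 8, 19, 20):
            base[h] += 3.0
    return base

data = [profile("double_peak") for _ in range(20)] + \
       [profile("flat") for _ in range(20)]

def kmeans(data, centres, iters=20):
    """Lloyd's algorithm with given initial centres (deterministic)."""
    for _ in range(iters):
        # assignment step: nearest centre by squared Euclidean distance
        groups = [[] for _ in centres]
        labels = []
        for v in data:
            j = min(range(len(centres)),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(v, centres[c])))
            groups[j].append(v)
            labels.append(j)
        # update step: each centre moves to the mean of its group
        for j, g in enumerate(groups):
            if g:
                centres[j] = [sum(col) / len(g) for col in zip(*g)]
    return labels, centres

# Initialize with one observed profile of each kind for determinism;
# a real application would use k-means++ or similar.
labels, centres = kmeans(data, [data[0], data[-1]])
```

Here the peaked and flat households separate cleanly; on real smart-metering data the paper's methodology additionally characterizes the resulting segments with socio-economic and household features.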

  10. Cluster Ion Implantation in Graphite and Diamond

    DEFF Research Database (Denmark)

    Popok, Vladimir

    2014-01-01

Cluster ion beam technique is a versatile tool which can be used for the controllable formation of nanosize objects as well as the modification and processing of surfaces and shallow layers on an atomic scale. The current paper presents an overview and analysis of data obtained on a few sets of graphite and diamond samples implanted with keV-energy size-selected cobalt and argon clusters. One emphasis is put on the pinning of metal clusters on graphite, with the possibility of subsequent selective etching of graphene layers. The other topic of concern is the development of a scaling law for cluster implantation. Implantation of cobalt and argon clusters into two different allotropic forms of carbon, namely graphite and diamond, is analysed and compared in order to approach a universal theory of cluster stopping in matter.

  11. Testing the Large-scale Environments of Cool-core and Non-cool-core Clusters with Clustering Bias

    Energy Technology Data Exchange (ETDEWEB)

Medezinski, Elinor; Battaglia, Nicholas; Cen, Renyue; Gaspari, Massimo; Strauss, Michael A.; Spergel, David N. [Department of Astrophysical Sciences, 4 Ivy Lane, Princeton, NJ 08544 (United States); Coupon, Jean, E-mail: elinorm@astro.princeton.edu [Department of Astronomy, University of Geneva, ch. d'Ecogia 16, CH-1290 Versoix (Switzerland)

    2017-02-10

    There are well-observed differences between cool-core (CC) and non-cool-core (NCC) clusters, but the origin of this distinction is still largely unknown. Competing theories can be divided into internal (inside-out), in which internal physical processes transform or maintain the NCC phase, and external (outside-in), in which the cluster type is determined by its initial conditions, which in turn leads to different formation histories (i.e., assembly bias). We propose a new method that uses the relative assembly bias of CC to NCC clusters, as determined via the two-point cluster-galaxy cross-correlation function (CCF), to test whether formation history plays a role in determining their nature. We apply our method to 48 ACCEPT clusters, which have well resolved central entropies, and cross-correlate with the SDSS-III/BOSS LOWZ galaxy catalog. We find that the relative bias of NCC over CC clusters is b = 1.42 ± 0.35 (1.6 σ different from unity). Our measurement is limited by the small number of clusters with core entropy information within the BOSS footprint, 14 CC and 34 NCC clusters. Future compilations of X-ray cluster samples, combined with deep all-sky redshift surveys, will be able to better constrain the relative assembly bias of CC and NCC clusters and determine the origin of the bimodality.

  13. A fast learning method for large scale and multi-class samples of SVM

    Science.gov (United States)

    Fan, Yu; Guo, Huiming

    2017-06-01

A fast learning method for multi-class SVM (Support Vector Machine) classification, based on a binary tree, is presented to address SVM's low learning efficiency on large-scale multi-class samples. A bottom-up method is adopted to build the binary-tree hierarchy, and, according to the resulting hierarchy, each node's sub-classifier learns from the corresponding samples. During learning, several class clusters are generated by a first clustering of the training samples. Central points are extracted from those clusters that contain only one class of samples. For clusters containing two classes, the numbers of clusters for the positive and negative samples are set according to their degree of mixing, and a secondary clustering is performed, after which central points are extracted from the resulting sub-class clusters. Sub-classifiers are then obtained by learning from the reduced sample set formed by the extracted central points. Simulation experiments show that this fast learning method, based on multi-level clustering, guarantees higher classification accuracy, greatly reduces sample numbers, and effectively improves learning efficiency.
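The central sample-reduction idea, clustering each class and keeping only cluster central points as the reduced training set, can be sketched as follows. This toy version uses a small hand-rolled k-means and a nearest-centre classifier as a stand-in for the paper's SVM sub-classifiers; all data and parameters are illustrative:

```python
import random

random.seed(1)

# k-means on 2D points with deterministic initialization: returns the
# cluster centres only (the "central points" kept as reduced samples).
def cluster_centres(samples, k, iters=15):
    centres = samples[:k]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for x, y in samples:
            j = min(range(k),
                    key=lambda c: (x - centres[c][0]) ** 2 + (y - centres[c][1]) ** 2)
            groups[j].append((x, y))
        centres = [(sum(p[0] for p in g) / len(g), sum(p[1] for p in g) / len(g))
                   if g else centres[j] for j, g in enumerate(groups)]
    return centres

# Two classes, each a mixture of two Gaussian blobs (200 points per class).
def blob(cx, cy, n=100):
    return [(cx + random.gauss(0, 0.3), cy + random.gauss(0, 0.3)) for _ in range(n)]

class_a = blob(0, 0) + blob(0, 4)
class_b = blob(4, 0) + blob(4, 4)

# Reduced training set: 2 centres per class instead of 200 raw samples.
reduced = [(c, 0) for c in cluster_centres(class_a, 2)] + \
          [(c, 1) for c in cluster_centres(class_b, 2)]

# Nearest-centre classifier trained on the reduced set (the paper trains
# SVM sub-classifiers on the reduced samples at each tree node instead).
def predict(pt):
    centre, label = min(reduced,
                        key=lambda cl: (pt[0] - cl[0][0]) ** 2 + (pt[1] - cl[0][1]) ** 2)
    return label
```

Training on 4 central points instead of 400 raw points is the source of the learning speed-up; the paper applies the same reduction at each node of its binary tree before SVM training.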

  14. Determination of the linear aperture of the SSC [Superconducting Supercollider] clustered lattice used for the conceptual design report

    International Nuclear Information System (INIS)

    Dell, G.F.

    1986-01-01

    A study is made of the linear aperture for the clustered lattice used for the SSC Conceptual Design Report. Random multipole errors are included in all magnetic elements including the insertion dipoles and quadrupoles. Based on the concept of smear, the linear aperture is equal to the dynamic aperture in the range -0.1 ≤ ΔP/P ≤ 0.03%. Strong coupling for ΔP/P > 0% produces large smears. A variation of the smear parameter that is insensitive to coupling is proposed. A comparison is made with results reported in the SSC Conceptual Design Report

  15. Photo-induced transformation process at gold clusters-semiconductor interface: Implications for the complexity of gold clusters-based photocatalysis

    Science.gov (United States)

    Liu, Siqi; Xu, Yi-Jun

    2016-03-01

The recent thrust in utilizing atomically precise organic-ligand-protected gold clusters (Au clusters) as photosensitizers coupled with semiconductors in nano-catalysts has led to claims of improved efficiency in photocatalysis. Nonetheless, the influence of the photo-stability of organic-ligand-protected Au clusters at the Au/semiconductor interface on the photocatalytic properties remains rather elusive. Taking Au clusters-TiO2 composites as a prototype, we demonstrate for the first time the photo-induced transformation of small molecular-like Au clusters into larger metallic Au nanoparticles under different illumination conditions, which leads to diverse photocatalytic reaction mechanisms. This transformation follows a diffusion/aggregation mechanism accompanied by the attack of the Au clusters by active oxygen species and holes resulting from photo-excited TiO2 and Au clusters. Such Au cluster aggregation can, however, be efficiently inhibited by tuning the reaction conditions. This work should trigger the rational structural design and fine condition control of organic-ligand-protected metal cluster-semiconductor composites for diverse photocatalytic applications with long-term photo-stability.

  16. Understanding 3D human torso shape via manifold clustering

    Science.gov (United States)

    Li, Sheng; Li, Peng; Fu, Yun

    2013-05-01

Discovering the variations in human torso shape plays a key role in many design-oriented applications, such as suit design. With recent advances in 3D surface imaging technologies, people can obtain 3D human torso data that provide more information than traditional measurements. However, how to identify different human shapes from 3D torso data is still an open problem. In this paper, we propose to use a spectral clustering approach on the torso manifold to address this problem. We first represent the high-dimensional torso data in a low-dimensional space using a manifold learning algorithm. Then spectral clustering is performed to obtain several disjoint clusters. Experimental results show that the clusters discovered by our approach can describe the discrepancies in both gender and human shape, and that our approach achieves better performance than the compared clustering method.
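The embed-then-cluster pipeline can be sketched in miniature. The code below performs spectral bisection directly on 2D points: it builds a Gaussian affinity graph, forms the Laplacian L = D - W, and splits the points by the sign of the Fiedler vector, found by power iteration on (cI - L) with the constant vector projected out. The data and parameters are illustrative; the paper clusters points on a learned torso manifold, not raw 2D coordinates:

```python
import math

# Two groups of points in the plane (a toy stand-in for the embedding).
pts = [(0, 0), (0.5, 0.2), (0.2, 0.6), (0.7, 0.5),
       (5, 5), (5.4, 5.1), (5.1, 5.6), (5.6, 5.5)]
n = len(pts)

# Gaussian affinity matrix W and unnormalized Laplacian L = D - W.
sigma = 1.0
W = [[math.exp(-((pts[i][0] - pts[j][0]) ** 2 + (pts[i][1] - pts[j][1]) ** 2)
               / (2 * sigma ** 2)) if i != j else 0.0
      for j in range(n)] for i in range(n)]
D = [sum(row) for row in W]
L = [[(D[i] if i == j else 0.0) - W[i][j] for j in range(n)] for i in range(n)]

# Fiedler vector by power iteration on (c*I - L); c > max eigenvalue of L
# keeps all shifted eigenvalues positive, and subtracting the mean each
# step projects out the trivial constant eigenvector of L.
c = 2 * max(D) + 1
v = [float(i) for i in range(n)]         # deterministic start vector
for _ in range(500):
    v = [c * v[i] - sum(L[i][j] * v[j] for j in range(n)) for i in range(n)]
    mean = sum(v) / n
    v = [x - mean for x in v]
    norm = math.sqrt(sum(x * x for x in v)) or 1.0
    v = [x / norm for x in v]

# Cluster labels from the sign of the Fiedler components.
labels = [0 if x < 0 else 1 for x in v]
```

For k > 2 clusters one would instead take the first k non-trivial eigenvectors and run k-means in that spectral embedding, which is what standard spectral clustering does.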

  17. Atomically precise arrays of fluorescent silver clusters: a modular approach for metal cluster photonics on DNA nanostructures.

    Science.gov (United States)

    Copp, Stacy M; Schultz, Danielle E; Swasey, Steven; Gwinn, Elisabeth G

    2015-03-24

    The remarkable precision that DNA scaffolds provide for arraying nanoscale optical elements enables optical phenomena that arise from interactions of metal nanoparticles, dye molecules, and quantum dots placed at nanoscale separations. However, control of ensemble optical properties has been limited by the difficulty of achieving uniform particle sizes and shapes. Ligand-stabilized metal clusters offer a route to atomically precise arrays that combine desirable attributes of both metals and molecules. Exploiting the unique advantages of the cluster regime requires techniques to realize controlled nanoscale placement of select cluster structures. Here we show that atomically monodisperse arrays of fluorescent, DNA-stabilized silver clusters can be realized on a prototypical scaffold, a DNA nanotube, with attachment sites separated by <10 nm. Cluster attachment is mediated by designed DNA linkers that enable isolation of specific clusters prior to assembly on nanotubes and preserve cluster structure and spectral purity after assembly. The modularity of this approach generalizes to silver clusters of diverse sizes and DNA scaffolds of many types. Thus, these silver cluster nano-optical elements, which themselves have colors selected by their particular DNA templating oligomer, bring unique dimensions of control and flexibility to the rapidly expanding field of nano-optics.

  18. Search for optical millisecond pulsars in globular clusters

    International Nuclear Information System (INIS)

    Middleditch, J.H.; Imamura, J.N.; Steiman-Cameron, T.Y.

    1988-01-01

A search for millisecond optical pulsars in several bright, compact globular clusters was conducted. The sample included M28 and the X-ray clusters 47 Tuc, NGC 6441, NGC 6624, M22, and M15. The globular cluster M28 contains the recently discovered 327 Hz radio pulsar. Upper limits (4σ) on pulsed emission of 1-20 solar luminosities were found for the globular clusters tested, and of 0.3 solar luminosity for the M28 pulsar, for frequencies up to 500 Hz. 8 references

  19. Deployment Strategies and Clustering Protocols Efficiency

    Directory of Open Access Journals (Sweden)

    Chérif Diallo

    2017-06-01

Wireless sensor networks face significant design challenges due to limited computing and storage capacities and, most importantly, dependence on limited battery power. Energy is a critical resource and is often an important issue in the deployment of sensor applications that claim to be omnipresent in the world of the future. Optimizing the deployment of sensors thus becomes a major constraint in the design and implementation of a WSN in order to ensure better network operations. In wireless networking, clustering techniques add scalability, reduce the computation complexity of routing protocols, allow data aggregation, and thereby enhance network performance. The well-known MaxMin clustering algorithm was previously generalized, corrected, and validated. In a previous work we then improved MaxMin by proposing a Single-node Cluster Reduction (SNCR) mechanism, which eliminates single-node clusters and thus improves energy efficiency. In this paper, we show that MaxMin, because of its original pathological case, does not support the grid deployment topology, which is frequently used in WSN architectures. The unreliability of wireless links can have negative impacts on Link Quality Indicator (LQI) based clustering protocols. So, in the second part of this paper we show that our distributed Link Quality based d-Clustering Protocol (LQI-DCP) performs well in both stable and highly unreliable link environments. Finally, performance evaluation results also show that LQI-DCP fully supports the grid deployment topology and is more energy efficient than MaxMin.

  20. THE MULTI-EPOCH NEARBY CLUSTER SURVEY: TYPE Ia SUPERNOVA RATE MEASUREMENT IN z {approx} 0.1 CLUSTERS AND THE LATE-TIME DELAY TIME DISTRIBUTION

    Energy Technology Data Exchange (ETDEWEB)

    Sand, David J.; Graham, Melissa L. [Las Cumbres Observatory Global Telescope Network, 6740 Cortona Drive, Suite 102, Santa Barbara, CA 93117 (United States); Bildfell, Chris; Pritchet, Chris [Department of Physics and Astronomy, University of Victoria, P.O. Box 3055, STN CSC, Victoria BC V8W 3P6 (Canada); Zaritsky, Dennis; Just, Dennis W.; Herbert-Fort, Stephane [Steward Observatory, University of Arizona, Tucson, AZ 85721 (United States); Hoekstra, Henk [Leiden Observatory, Leiden University, Niels Bohrweg 2, NL-2333 CA Leiden (Netherlands); Sivanandam, Suresh [Dunlap Institute for Astronomy and Astrophysics, 50 St. George Street, Toronto, ON M5S 3H4 (Canada); Foley, Ryan J. [Harvard-Smithsonian Center for Astrophysics, 60 Garden Street, Cambridge, MA 02138 (United States); Mahdavi, Andisheh, E-mail: dsand@lcogt.net [Department of Physics and Astronomy, San Francisco State University, San Francisco, CA 94132 (United States)

    2012-02-20

    We describe the Multi-Epoch Nearby Cluster Survey, designed to measure the cluster Type Ia supernova (SN Ia) rate in a sample of 57 X-ray selected galaxy clusters, with redshifts of 0.05 < z < 0.15. Utilizing our real-time analysis pipeline, we spectroscopically confirmed twenty-three cluster SNe Ia, four of which were intracluster events. Using our deep Canada-France-Hawaii Telescope/MegaCam imaging, we measured total stellar luminosities in each of our galaxy clusters, and we performed detailed supernova (SN) detection efficiency simulations. Bringing these ingredients together, we measure an overall cluster SN Ia rate within R_200 (1 Mpc) of 0.042^{+0.012}_{-0.010}^{+0.010}_{-0.008} SNuM (0.049^{+0.016}_{-0.014}^{+0.005}_{-0.004} SNuM) and an SN Ia rate within red-sequence galaxies of 0.041^{+0.015}_{-0.015}^{+0.005}_{-0.010} SNuM (0.041^{+0.019}_{-0.015}^{+0.005}_{-0.004} SNuM). The red-sequence SN Ia rate is consistent with published rates in early-type/elliptical galaxies in the 'field'. Using our red-sequence SN Ia rate, and other cluster SN measurements in early-type galaxies up to z ∼ 1, we derive the late-time (>2 Gyr) delay time distribution (DTD) of SNe Ia assuming a cluster early-type galaxy star formation epoch of z_f = 3. Assuming a power-law form for the DTD, Ψ(t) ∝ t^s, we find s = -1.62 ± 0.54. This result is consistent with predictions for the double degenerate SN Ia progenitor scenario (s ∼ -1) and is also in line with recent calculations for the double detonation explosion mechanism (s ∼ -2). The most recent calculations of the single degenerate scenario DTD predict an order-of-magnitude drop-off in the SN Ia rate ∼6-7 Gyr after stellar formation, and the observed cluster rates cannot rule this out.

  1. The Most Massive Star Clusters: Supermassive Globular Clusters or Dwarf Galaxy Nuclei?

    Science.gov (United States)

    Harris, William

    2004-07-01

    Evidence is mounting that the most massive globular clusters, such as Omega Centauri and M31-G1, may be related to the recently discovered "Ultra-Compact Dwarfs" and the dense nuclei of dE, N galaxies. However, no systematic imaging investigation of these supermassive globular clusters - at the level of Omega Cen and beyond - has been done, and we do not know what fraction of them might bear the signatures (such as large effective radii or tidal tails) of having originated as dE nuclei. We propose to use the ACS/WFC to obtain deep images of 18 such clusters in NGC 5128 and M31, the two nearest rich globular cluster systems. These globulars are the richest star clusters that can be found in nature, the biggest of them reaching 10^7 Solar masses, and they are likely to represent the results of star formation under the densest and most extreme conditions known. Using the profiles of the clusters including their faint outer envelopes, we will carry out state-of-the-art dynamical modelling of their structures, and look for any clear evidence which would indicate that they are associated with stripped satellites. This study will build on our previous work with STIS and WFPC2 imaging designed to study the 'Fundamental Plane' of globular clusters. When our new work is combined with Archival WFPC2, STIS, and ACS material, we will also be able to construct the definitive mapping of the Fundamental Plane of globular clusters at its uppermost mass range, and confirm whether or not the UCD and dE, N objects occupy a different structural parameter space.

  2. ENERGY OPTIMIZATION IN CLUSTER BASED WIRELESS SENSOR NETWORKS

    Directory of Open Access Journals (Sweden)

    T. SHANKAR

    2014-04-01

    Full Text Available Wireless sensor networks (WSN) are made up of sensor nodes which are usually battery-operated devices, and hence energy saving of sensor nodes is a major design issue. To prolong the network's lifetime, minimization of energy consumption should be implemented at all layers of the network protocol stack, from the physical to the application layer, including cross-layer optimization. Optimizing energy consumption is the main concern when designing and planning the operation of a WSN. Clustering is one of the techniques used to extend network lifetime by applying data aggregation and balancing energy consumption among the sensor nodes of the network. This paper proposes new versions of the Low Energy Adaptive Clustering Hierarchy (LEACH) protocol, called Advanced Optimized Low Energy Adaptive Clustering Hierarchy (AOLEACH), Optimal Deterministic Low Energy Adaptive Clustering Hierarchy (ODLEACH), and Varying Probability Distance Low Energy Adaptive Clustering Hierarchy (VPDL), in combination with the Shuffled Frog Leap Algorithm (SFLA), which enable selecting the best adaptive cluster heads using an improved threshold energy distribution compared to LEACH, and rotating the cluster head position for uniform energy dissipation based on energy levels. The proposed algorithms optimize network lifetime by increasing the first node death (FND) time and the number of alive nodes.
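    The rotating cluster-head election that these LEACH variants build on can be made concrete. The sketch below implements only the standard LEACH threshold, not the proposed AOLEACH/ODLEACH/VPDL variants; the node representation and function names are illustrative assumptions:

    ```python
    import random

    def leach_threshold(p, r, was_recent_head):
        """Standard LEACH threshold T(n) for round r: a node that served
        as cluster head within the last 1/p rounds is ineligible (T = 0);
        otherwise the threshold rises as the round advances within each
        epoch, so every node eventually serves."""
        if was_recent_head:
            return 0.0
        return p / (1 - p * (r % int(round(1 / p))))

    def elect_cluster_heads(nodes, p, r):
        """Each eligible node independently becomes a cluster head when a
        uniform random draw falls below its threshold."""
        return [n["id"] for n in nodes
                if random.random() < leach_threshold(p, r, n["recent_head"])]

    # Toy network: 100 nodes, none recently a head; about p*100 = 5 heads
    # are elected per round on average.
    nodes = [{"id": i, "recent_head": False} for i in range(100)]
    heads = elect_cluster_heads(nodes, p=0.05, r=0)
    ```

    Rotating the head role this way spreads the energy cost of aggregation and long-range transmission across the network, which is the property the proposed variants refine.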

  3. VizieR Online Data Catalog: Cool-core clusters with Chandra obs. (Andrade-Santos+, 2017)

    Science.gov (United States)

    Andrade-Santos, F.; Jones, C.; Forman, W. R.; Lovisari, L.; Vikhlinin, A.; van Weeren, R. J.; Murray, S. S.; Arnaud, M.; Pratt, G. W.; Democles, J.; Kraft, R.; Mazzotta, P.; Bohringer, H.; Chon, G.; Giacintucci, S.; Clarke, T. E.; Borgani, S.; David, L.; Douspis, M.; Pointecouteau, E.; Dahle, H.; Brown, S.; Aghanim, N.; Rasia, E.

    2018-02-01

    The main goal of this work is to compare the fraction of cool-core (CC) clusters in X-ray-selected and SZ-selected samples. The first catalog of 189 SZ clusters detected by the Planck mission was released in early 2011 (Planck Collaboration 2011, VIII/88/esz). A Chandra XVP (X-ray Visionary Program--PI: Jones) and HRC Guaranteed Time Observations (PI: Murray) combined to form the Chandra-Planck Legacy Program for Massive Clusters of Galaxies. For each of the 164 ESZ Planck clusters at z<=0.35, we obtained Chandra exposures sufficient to collect at least 10000 source counts. The X-ray sample used here is an extension of the Voevodkin & Vikhlinin (2004ApJ...601..610V) sample. This sample contains 100 clusters and has an effective redshift depth of z<0.3. All have Chandra observations. Of the 100 X-ray-selected clusters, 49 are also in the ESZ sample, and 47 are in the HIFLUGCS (Reiprich & Boehringer 2002ApJ...567..716R) catalog. (2 data files).

  4. AN EXAMINATION OF THE OPTICAL SUBSTRUCTURE OF GALAXY CLUSTERS HOSTING RADIO SOURCES

    International Nuclear Information System (INIS)

    Wing, Joshua D.; Blanton, Elizabeth L.

    2013-01-01

    Using radio sources from the Faint Images of the Radio Sky at Twenty-cm survey, and optical counterparts in the Sloan Digital Sky Survey, we have identified a large number of galaxy clusters. The radio sources within these clusters are driven by active galactic nuclei, and our cluster samples include clusters with both bent and straight double-lobed radio sources. We also included a single-radio-component comparison sample. We examine these galaxy clusters for evidence of optical substructure, testing the possibility that bent double-lobed radio sources are formed as a result of large-scale cluster mergers. We use a suite of substructure analysis tools to determine the location and extent of substructure visible in the optical distribution of cluster galaxies, and compare the rates of substructure in clusters with different types of radio sources. We found no preference for significant substructure in clusters hosting bent double-lobed radio sources compared to those with other types of radio sources.

  5. THE MULTI-EPOCH NEARBY CLUSTER SURVEY: TYPE Ia SUPERNOVA RATE MEASUREMENT IN z ∼ 0.1 CLUSTERS AND THE LATE-TIME DELAY TIME DISTRIBUTION

    International Nuclear Information System (INIS)

    Sand, David J.; Graham, Melissa L.; Bildfell, Chris; Pritchet, Chris; Zaritsky, Dennis; Just, Dennis W.; Herbert-Fort, Stéphane; Hoekstra, Henk; Sivanandam, Suresh; Foley, Ryan J.; Mahdavi, Andisheh

    2012-01-01

    We describe the Multi-Epoch Nearby Cluster Survey, designed to measure the cluster Type Ia supernova (SN Ia) rate in a sample of 57 X-ray selected galaxy clusters, with redshifts of 0.05 < z < 0.15. Utilizing our real-time analysis pipeline, we spectroscopically confirmed twenty-three cluster SNe Ia, four of which were intracluster events. Using our deep Canada-France-Hawaii Telescope/MegaCam imaging, we measured total stellar luminosities in each of our galaxy clusters, and we performed detailed supernova (SN) detection efficiency simulations. Bringing these ingredients together, we measure an overall cluster SN Ia rate within R_200 (1 Mpc) of 0.042^{+0.012}_{-0.010}^{+0.010}_{-0.008} SNuM (0.049^{+0.016}_{-0.014}^{+0.005}_{-0.004} SNuM) and an SN Ia rate within red-sequence galaxies of 0.041^{+0.015}_{-0.015}^{+0.005}_{-0.010} SNuM (0.041^{+0.019}_{-0.015}^{+0.005}_{-0.004} SNuM). The red-sequence SN Ia rate is consistent with published rates in early-type/elliptical galaxies in the 'field'. Using our red-sequence SN Ia rate, and other cluster SN measurements in early-type galaxies up to z ∼ 1, we derive the late-time (>2 Gyr) delay time distribution (DTD) of SNe Ia assuming a cluster early-type galaxy star formation epoch of z_f = 3. Assuming a power-law form for the DTD, Ψ(t) ∝ t^s, we find s = -1.62 ± 0.54. This result is consistent with predictions for the double degenerate SN Ia progenitor scenario (s ∼ -1) and is also in line with recent calculations for the double detonation explosion mechanism (s ∼ -2). The most recent calculations of the single degenerate scenario DTD predict an order-of-magnitude drop-off in the SN Ia rate ∼6-7 Gyr after stellar formation, and the observed cluster rates cannot rule this out.

  6. THE REST-FRAME OPTICAL LUMINOSITY FUNCTION OF CLUSTER GALAXIES AT z < 0.8 AND THE ASSEMBLY OF THE CLUSTER RED SEQUENCE

    International Nuclear Information System (INIS)

    Rudnick, Gregory; Von der Linden, Anja; De Lucia, Gabriella; White, Simon; Pello, Roser; Aragon-Salamanca, Alfonso; Marchesini, Danilo; Clowe, Douglas; Halliday, Claire; Jablonka, Pascale; Milvang-Jensen, Bo; Poggianti, Bianca; Saglia, Roberto; Simard, Luc; Zaritsky, Dennis

    2009-01-01

    We present the rest-frame optical luminosity function (LF) of red-sequence galaxies in 16 clusters at 0.4 < z < 0.8 drawn from the ESO Distant Cluster Survey (EDisCS). We compare our clusters to an analogous sample from the Sloan Digital Sky Survey (SDSS) and match the EDisCS clusters to their most likely descendants. We measure all LFs down to M ∼ M* + (2.5-3.5). At z < 0.8, the bright end of the LF is consistent with passive evolution but there is a significant buildup of the faint end of the red sequence toward lower redshift. There is a weak dependence of the LF on cluster velocity dispersion for EDisCS but no such dependence for the SDSS clusters. We find tentative evidence that red-sequence galaxies brighter than a threshold magnitude are already in place, and that this threshold evolves to fainter magnitudes toward lower redshifts. We compare the EDisCS LFs with the LF of coeval red-sequence galaxies in the field and find that the bright end of the LFs agree. However, relative to the number of bright red galaxies, the field has more faint red galaxies than clusters at 0.6 < z < 0.8 but fewer at 0.4 < z < 0.6, implying differential evolution. We compare the total light in the EDisCS cluster red sequences to the total red-sequence light in our SDSS cluster sample. Clusters at 0.4 < z < 0.8 must increase their luminosity on the red sequence (and therefore stellar mass in red galaxies) by a factor of 1-3 by z = 0. The necessary processes that add mass to the red sequence in clusters predict local clusters that are overluminous as compared to those observed in the SDSS. The predicted cluster luminosities can be reconciled with observed local cluster luminosities by combining multiple previously known effects.

  7. An observational study of disk-population globular clusters

    International Nuclear Information System (INIS)

    Armandroff, T.E.

    1988-01-01

    Integrated-light spectroscopy was obtained for twenty-seven globular clusters at the Ca II infrared triplet. Line strengths and radial velocities were measured from the spectra. For the well-studied clusters in the sample, the strength of the Ca II lines is very well correlated with previous metallicity estimates obtained using a variety of techniques. The greatly reduced effect of interstellar extinction at these wavelengths compared to the blue region of the spectrum has permitted observations of some of the most heavily reddened clusters in the Galaxy. For several such clusters, the Ca II triplet metallicities are in poor agreement with metallicity estimates from infrared photometry by Malkan. Color-magnitude diagrams were constructed for six previously unstudied metal-rich globular clusters and for the well-studied cluster 47 Tuc. The V magnitudes of the horizontal branch stars in the six clusters are in poor agreement with previous estimates based on secondary methods. The horizontal branch morphologies and reddenings of the program clusters were also determined. Using the improved set of metallicities, radial velocities, and distance moduli, the spatial distribution, kinematics, and metallicity distribution of the Galactic globulars were analyzed. The revised data support Zinn's conclusion that the metal-rich clusters form a highly flattened, rapidly rotating disk system, while the metal-poor clusters make up the familiar, spherically distributed, slowly rotating halo population. The scale height, metallicity distribution, and kinematics of the metal-rich globulars are in good agreement with those of the stellar thick disk. Luminosity functions were constructed, and no significant difference is found between disk and halo samples. Metallicity gradients seem to be present in the disk cluster system. The implications of these results for the formation and evolution of the Galaxy are discussed.

  8. Design of a gravity corer for near shore sediment sampling

    Digital Repository Service at National Institute of Oceanography (India)

    Bhat, S.T.; Sonawane, A.V.; Nayak, B.U.

    For the purpose of geotechnical investigation a gravity corer has been designed and fabricated to obtain undisturbed sediment core samples from near shore waters. The corer was successfully operated at 75 stations up to water depth 30 m. Simplicity...

  9. Two generalizations of Kohonen clustering

    Science.gov (United States)

    Bezdek, James C.; Pal, Nikhil R.; Tsao, Eric C. K.

    1993-01-01

    The relationship between the sequential hard c-means (SHCM), learning vector quantization (LVQ), and fuzzy c-means (FCM) clustering algorithms is discussed. LVQ and SHCM suffer from several major problems. For example, they depend heavily on initialization. If the initial values of the cluster centers are outside the convex hull of the input data, such algorithms, even if they terminate, may not produce meaningful results in terms of prototypes for cluster representation. This is due in part to the fact that they update only the winning prototype for every input vector. The impact and interaction of these two families with Kohonen's self-organizing feature mapping (SOFM), which is not a clustering method but often lends ideas to clustering algorithms, is discussed. Two generalizations of LVQ that are explicitly designed as clustering algorithms are then presented; these algorithms are referred to as generalized LVQ (GLVQ) and fuzzy LVQ (FLVQ). Learning rules are derived to optimize an objective function whose goal is to produce 'good clusters'. GLVQ/FLVQ (may) update every node in the clustering net for each input vector. Neither GLVQ nor FLVQ depends upon a choice for the update neighborhood or learning rate distribution; these are taken care of automatically. Segmentation of a gray-tone image is used as a typical application of these algorithms to illustrate the performance of GLVQ/FLVQ.
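    The "winner-only" update blamed above for LVQ's initialization sensitivity is easy to state in code. The following is a minimal LVQ1-style step for illustration, not the GLVQ/FLVQ learning rules derived in the paper; all names are hypothetical:

    ```python
    import numpy as np

    def lvq1_step(prototypes, proto_labels, x, y, lr=0.1):
        """One LVQ1 update: only the winning (nearest) prototype moves,
        toward x if its label matches y, away from x otherwise. Because
        losing prototypes never move, bad initial placements can persist
        indefinitely."""
        dists = np.linalg.norm(prototypes - x, axis=1)
        w = int(np.argmin(dists))                  # index of the winner
        sign = 1.0 if proto_labels[w] == y else -1.0
        prototypes[w] += sign * lr * (x - prototypes[w])  # in-place move
        return w
    ```

    GLVQ/FLVQ address exactly this weakness by (possibly) updating every prototype for each input vector instead of only the winner.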

  10. The status of dental caries and related factors in a sample of Iranian adolescents

    DEFF Research Database (Denmark)

    Pakpour, Amir H.; Hidarnia, Alireza; Hajizadeh, Ebrahim

    2011-01-01

    Objective: To describe the status of dental caries in a sample of Iranian adolescents aged 14 to 18 years in Qazvin, and to identify caries-related factors affecting this group. Study design: Qazvin was divided into three zones according to socio-economic status. The sampling procedure used was a stratified cluster sampling technique, incorporating 3 stratified zones, for each of which a cluster of school children was recruited from randomly selected high schools. The adolescents agreed to participate in the study and to complete a questionnaire. Dental caries status was assessed in terms of decayed … their teeth on a regular basis. Although the incidence of caries was found to be moderate, it was influenced by demographic factors such as age and gender in addition to socio-behavioral variables such as family income, the level of education attained by parents, and the frequency of dental brushing and flossing …

  11. Exploring the Internal Dynamics of Globular Clusters

    Science.gov (United States)

    Watkins, Laura L.; van der Marel, Roeland; Bellini, Andrea; Luetzgendorf, Nora; HSTPROMO Collaboration

    2018-01-01

    The formation histories and structural properties of globular clusters are imprinted on their internal dynamics. Energy equipartition results in velocity differences for stars of different mass, and leads to mass segregation, which results in different spatial distributions for stars of different mass. Intermediate-mass black holes significantly increase the velocity dispersions at the centres of clusters. By combining accurate measurements of their internal kinematics with state-of-the-art dynamical models, we can characterise both the velocity dispersion and mass profiles of clusters, tease apart the different effects, and understand how clusters may have formed and evolved. Using proper motions from the Hubble Space Telescope Proper Motion (HSTPROMO) Collaboration for a set of 22 Milky Way globular clusters, and our discrete dynamical modelling techniques designed to work with large, high-quality datasets, we are studying a variety of internal cluster properties. We will present the results of theoretical work on simulated clusters that demonstrates the efficacy of our approach, and preliminary results from application to real clusters.

  12. A binary logistic regression model with complex sampling design of ...

    African Journals Online (AJOL)

    2017-09-03

    Bi-variable and multi-variable binary logistic regression models with complex sampling design were fitted. Data were entered into STATA-12 and analyzed using SPSS-21.

  13. Interacting star clusters in the Large Magellanic Cloud. Overmerging problem solved by cluster group formation

    Science.gov (United States)

    Leon, Stéphane; Bergond, Gilles; Vallenari, Antonella

    1999-04-01

    We present the tidal tail distributions of a sample of candidate binary clusters located in the bar of the Large Magellanic Cloud (LMC). One isolated cluster, SL 268, is presented in order to study the effect of the LMC tidal field. All the candidate binary clusters show tidal tails, confirming that the pairs are formed by physically linked objects. The stellar mass in the tails covers a large range, from 1.8×10^3 to 3×10^4 M_⊙. We derive a total mass estimate for SL 268 and SL 356. At large radii, the projected density profiles of SL 268 and SL 356 fall off as r^(-γ), with γ = 2.27 and γ = 3.44, respectively. Out of 4 pairs or multiple systems, 2 are older than the theoretical survival time of binary clusters (ranging from a few 10^6 years to 10^8 years). One pair shows too large an age difference between its components to be consistent with classical theoretical models of binary cluster formation (Fujimoto & Kumai 1997). We refer to this as the "overmerging" problem. A different scenario is proposed: formation proceeds in large molecular complexes giving birth to groups of clusters over a few 10^7 years. In these groups the expected cluster encounter rate is larger, and tidal capture has a higher probability. Cluster pairs are not born together through the splitting of the parent cloud, but are formed later by tidal capture. For 3 pairs, we tentatively identify the star cluster group (SCG) memberships. SCG formation, through the recent cluster starburst triggered by the LMC-SMC encounter, in contrast with the quiescent open cluster formation in the Milky Way, may explain the paucity of binary clusters observed in our Galaxy. Based on observations collected at the European Southern Observatory, La Silla, Chile.

  14. A cluster-randomized trial of a college health center-based alcohol and sexual violence intervention (GIFTSS): Design, rationale, and baseline sample.

    Science.gov (United States)

    Abebe, Kaleab Z; Jones, Kelley A; Rofey, Dana; McCauley, Heather L; Clark, Duncan B; Dick, Rebecca; Gmelin, Theresa; Talis, Janine; Anderson, Jocelyn; Chugani, Carla; Algarroba, Gabriela; Antonio, Ashley; Bee, Courtney; Edwards, Clare; Lethihet, Nadia; Macak, Justin; Paley, Joshua; Torres, Irving; Van Dusen, Courtney; Miller, Elizabeth

    2018-02-01

    Sexual violence (SV) on college campuses is common, especially alcohol-related SV. This is a 2-arm cluster randomized controlled trial to test a brief intervention to reduce risk for alcohol-related SV among students receiving care from college health centers (CHCs). Intervention CHC staff are trained to deliver universal SV education to all students seeking care, to facilitate patient and provider comfort in discussing SV and related abusive experiences (including the role of alcohol). Control sites provide participants with information about drinking responsibly. Across 28 participating campuses (12 randomized to intervention and 16 to control), 2292 students seeking care at CHCs complete surveys prior to their appointment (baseline), immediately after (exit), 4 months later (T2) and one year later (T3). The primary outcome is change in recognition of SV and sexual risk. Among those reporting SV exposure at baseline, changes in SV victimization, disclosure, and use of SV services are additional outcomes. Intervention effects will be assessed using generalized linear mixed models that account for clustering of repeated observations both within CHCs and within students. Slightly more than half of the participating colleges have undergraduate enrollment of ≥3000 students; two-thirds are public and almost half are urban. Among participants there were relatively more Asian (10 v 1%) and Black/African American (13 v 7%) and fewer White (58 v 74%) participants in the intervention compared to control. This study will offer the first formal assessment for SV prevention in the CHC setting. Clinical Trials #: NCT02355470. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.
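    A generalized linear mixed model that accounts for clustering both within health centers and within students is often written as a logistic model with nested random intercepts. The notation below is an illustrative sketch, not the trial's registered analysis plan:

    $$\operatorname{logit}\,\Pr(Y_{ijt} = 1) = \beta_0 + \beta_1\,\mathrm{Arm}_j + \beta_2\, t + u_j + v_{ij}, \qquad u_j \sim N(0, \sigma^2_{\mathrm{CHC}}),\quad v_{ij} \sim N(0, \sigma^2_{\mathrm{student}}),$$

    where $u_j$ captures clustering of students within college health center $j$, $v_{ij}$ captures repeated observations within student $i$ of center $j$, and $t$ indexes the survey wave.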

  15. Clustering problems for geochemical data

    International Nuclear Information System (INIS)

    Kane, V.E.; Larson, N.M.

    1977-01-01

    The Union Carbide Corporation, Nuclear Division, Uranium Resource Evaluation Project uses a two-stage sampling program to identify potential uranium districts. Cluster analysis techniques are used in locating high density sampling areas as well as in identifying potential uranium districts. Problems are considered involving the analysis of multivariate censored data, laboratory measurement error, and data standardization

  16. 2-Way k-Means as a Model for Microbiome Samples.

    Science.gov (United States)

    Jackson, Weston J; Agarwal, Ipsita; Pe'er, Itsik

    2017-01-01

    Motivation. Microbiome sequencing allows defining clusters of samples with shared composition. However, this paradigm poorly accounts for samples whose composition is a mixture of cluster-characterizing ones and which therefore lie in between them in the cluster space. This paper addresses unsupervised learning of 2-way clusters. It defines a mixture model that allows 2-way cluster assignment and describes a variant of generalized k-means for learning such a model. We demonstrate applicability to microbial 16S rDNA sequencing data from the Human Vaginal Microbiome Project.
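    One way to picture 2-way assignment is a k-means variant in which the candidate prototypes include the midpoints of cluster-center pairs, so an in-between sample can be labeled with two clusters at once. This is a deliberate simplification of the paper's mixture model, sketched here under that assumption with illustrative names:

    ```python
    import numpy as np
    from itertools import combinations

    def assign_2way(X, centers):
        """Assign each row of X to its nearest candidate prototype:
        either a single center, encoded (i, i), or the midpoint of a
        pair of centers, encoded (i, j), representing a 50/50 two-way
        mixture of clusters i and j."""
        k = len(centers)
        cands = [(i, i) for i in range(k)] + list(combinations(range(k), 2))
        return [min(cands,
                    key=lambda c: float(np.linalg.norm(
                        x - (centers[c[0]] + centers[c[1]]) / 2.0)))
                for x in X]
    ```

    A full generalized k-means would alternate this assignment step with re-estimating the centers from singly-assigned samples; the sketch shows only the assignment step that distinguishes 2-way clustering from the ordinary kind.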

  17. Supra-galactic colour patterns in globular cluster systems

    Science.gov (United States)

    Forte, Juan C.

    2017-07-01

    An analysis of globular cluster systems associated with galaxies included in the Virgo and Fornax Hubble Space Telescope-Advanced Camera Surveys reveals distinct (g - z) colour modulation patterns. These features appear on composite samples of globular clusters and, most evidently, in galaxies with absolute magnitudes Mg in the range from -20.2 to -19.2. These colour modulations are also detectable on some samples of globular clusters in the central galaxies NGC 1399 and NGC 4486 (and confirmed on data sets obtained with different instruments and photometric systems), as well as in other bright galaxies in these clusters. After discarding field contamination, photometric errors and statistical effects, we conclude that these supra-galactic colour patterns are real and reflect some previously unknown characteristic. These features suggest that the globular cluster formation process was not entirely stochastic but included a fraction of clusters that formed in a rather synchronized fashion over large spatial scales, and in a tentative time lapse of about 1.5 Gyr at redshifts z between 2 and 4. We speculate that the putative mechanism leading to that synchronism may be associated with large-scale feedback effects connected with violent star-forming events and/or with supermassive black holes.

  18. An Integrated Mixed Methods Research Design: Example of the Project Foreign Language Learning Strategies and Achievement: Analysis of Strategy Clusters and Sequences

    OpenAIRE

    Vlčková Kateřina

    2014-01-01

    The presentation focused on a so-called integrated mixed-methods research design, using as an example the Czech Science Foundation Project No. GAP407/12/0432, "Foreign Language Learning Strategies and Achievement: Analysis of Strategy Clusters and Sequences". All main integrated parts of the mixed-methods research design were discussed: the aim, theoretical framework, research questions, methods, and validity threats.

  19. Are quantitative trait-dependent sampling designs cost-effective for analysis of rare and common variants?

    Science.gov (United States)

    Yilmaz, Yildiz E; Bull, Shelley B

    2011-11-29

    Use of trait-dependent sampling designs in whole-genome association studies of sequence data can reduce total sequencing costs with modest losses of statistical efficiency. In a quantitative trait (QT) analysis of data from the Genetic Analysis Workshop 17 mini-exome for unrelated individuals in the Asian subpopulation, we investigate alternative designs that sequence only 50% of the entire cohort. In addition to a simple random sampling design, we consider extreme-phenotype designs that are of increasing interest in genetic association analysis of QTs, especially in studies concerned with the detection of rare genetic variants. We also evaluate a novel sampling design in which all individuals have a nonzero probability of being selected into the sample but in which individuals with extreme phenotypes have a proportionately larger probability. We take differential sampling of individuals with informative trait values into account by inverse probability weighting using standard survey methods which thus generalizes to the source population. In replicate 1 data, we applied the designs in association analysis of Q1 with both rare and common variants in the FLT1 gene, based on knowledge of the generating model. Using all 200 replicate data sets, we similarly analyzed Q1 and Q4 (which is known to be free of association with FLT1) to evaluate relative efficiency, type I error, and power. Simulation study results suggest that the QT-dependent selection designs generally yield greater than 50% relative efficiency compared to using the entire cohort, implying cost-effectiveness of 50% sample selection and worthwhile reduction of sequencing costs.
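    The inverse-probability-weighting step described here is a standard survey-sampling device: each sampled unit is weighted by the reciprocal of its selection probability, so estimates generalize back to the source population. Below is a minimal sketch of a Hajek-style weighted mean plus a toy extreme-phenotype selection rule; both are illustrative assumptions, not the Genetic Analysis Workshop 17 analysis:

    ```python
    import numpy as np

    def selection_probs(q, base=0.3, extreme=0.9, tail=0.1):
        """Toy extreme-phenotype design: every individual has a nonzero
        selection probability, but trait values in the top or bottom
        `tail` fraction of the distribution are selected with a
        proportionately larger probability."""
        q = np.asarray(q, dtype=float)
        lo, hi = np.quantile(q, tail), np.quantile(q, 1.0 - tail)
        return np.where((q < lo) | (q > hi), extreme, base)

    def ipw_mean(y_sampled, p_select):
        """Hajek-style inverse-probability-weighted mean: each sampled
        outcome is weighted by 1 / its selection probability, which
        corrects for the over-representation of extreme phenotypes."""
        w = 1.0 / np.asarray(p_select, dtype=float)
        y = np.asarray(y_sampled, dtype=float)
        return float(np.sum(w * y) / np.sum(w))
    ```

    With equal selection probabilities the estimator reduces to the ordinary sample mean; unequal probabilities down-weight the deliberately over-sampled extremes.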

  20. Clustervision: Visual Supervision of Unsupervised Clustering.

    Science.gov (United States)

    Kwon, Bum Chul; Eysenbach, Ben; Verma, Janu; Ng, Kenney; De Filippi, Christopher; Stewart, Walter F; Perer, Adam

    2018-01-01

    Clustering, the process of grouping together similar items into distinct partitions, is a common type of unsupervised machine learning that can be useful for summarizing and aggregating complex multi-dimensional data. However, data can be clustered in many ways, and there exists a large body of algorithms designed to reveal different patterns. While having access to a wide variety of algorithms is helpful, in practice it is quite difficult for data scientists to choose and parameterize algorithms to get the clustering results relevant for their dataset and analytical tasks. To alleviate this problem, we built Clustervision, a visual analytics tool that helps ensure data scientists find the right clustering among the large number of techniques and parameters available. Our system clusters data using a variety of clustering techniques and parameters and then ranks clustering results utilizing five quality metrics. In addition, users can guide the system to produce more relevant results by providing task-relevant constraints on the data. Our visual user interface allows users to find high-quality clustering results, explore the clusters using several coordinated visualization techniques, and select the cluster result that best suits their task. We demonstrate this novel approach using a case study with a team of researchers in the medical domain and showcase that our system empowers users to choose an effective representation of their complex data.

  1. Ananke: temporal clustering reveals ecological dynamics of microbial communities

    Directory of Open Access Journals (Sweden)

    Michael W. Hall

    2017-09-01

    Full Text Available Taxonomic markers such as the 16S ribosomal RNA gene are widely used in microbial community analysis. A common first step in marker-gene analysis is grouping genes into clusters to reduce data sets to a more manageable size and potentially mitigate the effects of sequencing error. Instead of clustering based on sequence identity, marker-gene data sets collected over time can be clustered based on temporal correlation to reveal ecologically meaningful associations. We present Ananke, a free and open-source algorithm and software package that complements existing sequence-identity-based clustering approaches by clustering marker-gene data based on time-series profiles and provides interactive visualization of clusters, including highlighting of internal OTU inconsistencies. Ananke is able to cluster distinct temporal patterns from simulations of multiple ecological patterns, such as periodic seasonal dynamics and organism appearances/disappearances. We apply our algorithm to two longitudinal marker gene data sets: faecal communities from the human gut of an individual sampled over one year, and communities from a freshwater lake sampled over eleven years. Within the gut, the segregation of the bacterial community around a food-poisoning event was immediately clear. In the freshwater lake, we found that high sequence identity between marker genes does not guarantee similar temporal dynamics, and Ananke time-series clusters revealed patterns obscured by clustering based on sequence identity or taxonomy. Ananke is free and open-source software available at https://github.com/beiko-lab/ananke.
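    Clustering marker genes by temporal correlation rather than sequence identity can be illustrated with a simple greedy scheme: a series joins an existing cluster only if it correlates strongly with that cluster's founding series. This is a generic sketch of the idea, not Ananke's actual algorithm; the function and parameter names are assumptions:

    ```python
    import numpy as np

    def temporal_clusters(series, min_corr=0.7):
        """Greedy clustering of time-series profiles: each series joins
        the first cluster whose founding series it matches with Pearson
        correlation >= min_corr, otherwise it founds a new cluster."""
        founders, labels = [], []
        for row in series:
            for k, rep in enumerate(founders):
                if np.corrcoef(row, rep)[0, 1] >= min_corr:
                    labels.append(k)
                    break
            else:
                founders.append(row)
                labels.append(len(founders) - 1)
        return labels
    ```

    Note that two taxa with nearly identical 16S sequences can land in different temporal clusters here, which is exactly the kind of pattern the abstract reports being obscured by sequence-identity clustering.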

  2. THE GROWTH OF COOL CORES AND EVOLUTION OF COOLING PROPERTIES IN A SAMPLE OF 83 GALAXY CLUSTERS AT 0.3 < z < 1.2 SELECTED FROM THE SPT-SZ SURVEY

    Energy Technology Data Exchange (ETDEWEB)

    McDonald, M.; Bautz, M. W. [Kavli Institute for Astrophysics and Space Research, Massachusetts Institute of Technology, 77 Massachusetts Avenue, Cambridge, MA 02139 (United States); Benson, B. A.; Bleem, L. E.; Carlstrom, J. E.; Chang, C. L.; Crawford, T. M.; Crites, A. T. [Kavli Institute for Cosmological Physics, University of Chicago, 5640 South Ellis Avenue, Chicago, IL 60637 (United States); Vikhlinin, A.; Stalder, B.; Ashby, M. L. N.; Bayliss, M. [Harvard-Smithsonian Center for Astrophysics, 60 Garden Street, Cambridge, MA 02138 (United States); De Haan, T. [Department of Physics, McGill University, 3600 Rue University, Montreal, Quebec H3A 2T8 (Canada); Lin, H. W. [Caddo Parish Magnet High School, Shreveport, LA 71101 (United States); Aird, K. A. [University of Chicago, 5640 South Ellis Avenue, Chicago, IL 60637 (United States); Bocquet, S.; Desai, S. [Department of Physics, Ludwig-Maximilians-Universitaet, Scheinerstr. 1, D-81679 Muenchen (Germany); Brodwin, M. [Department of Physics and Astronomy, University of Missouri, 5110 Rockhill Road, Kansas City, MO 64110 (United States); Cho, H. M. [NIST Quantum Devices Group, 325 Broadway Mailcode 817.03, Boulder, CO 80305 (United States); Clocchiatti, A., E-mail: mcdonald@space.mit.edu [Departamento de Astronomia y Astrofisica, Pontificia Universidad Catolica (Chile); and others

    2013-09-01

    We present first results on the cooling properties derived from Chandra X-ray observations of 83 high-redshift (0.3 < z < 1.2) massive galaxy clusters selected by their Sunyaev-Zel'dovich signature in the South Pole Telescope data. We measure each cluster's central cooling time, central entropy, and mass deposition rate, and compare these properties to those for local cluster samples. We find no significant evolution from z ∼ 0 to z ∼ 1 in the distribution of these properties, suggesting that cooling in cluster cores is stable over long periods of time. We also find that the average cool core entropy profile in the inner ∼100 kpc has not changed dramatically since z ∼ 1, implying that feedback must be providing nearly constant energy injection to maintain the observed "entropy floor" at ∼10 keV cm². While the cooling properties appear roughly constant over long periods of time, we observe strong evolution in the gas density profile, with the normalized central density (ρ_g,0/ρ_crit) increasing by an order of magnitude from z ∼ 1 to z ∼ 0. When using metrics defined by the inner surface brightness profile of clusters, we find an apparent lack of classical, cuspy, cool-core clusters at z > 0.75, consistent with earlier reports for clusters at z > 0.5 using similar definitions. Our measurements indicate that cool cores have been steadily growing over the 8 Gyr spanned by our sample, consistent with a constant, ∼150 M_⊙ yr⁻¹ cooling flow that is unable to cool below entropies of 10 keV cm² and, instead, accumulates in the cluster center. We estimate that cool cores began to assemble in these massive systems at z_cool = 1.0^{+1.0}_{-0.2}, which represents the first constraints on the onset of cooling in galaxy cluster cores. At high redshift (z ≳ 0.75), galaxy clusters may be classified as "cooling flows

  3. Role of Anions Associated with the Formation and Properties of Silver Clusters.

    Science.gov (United States)

    Wang, Quan-Ming; Lin, Yu-Mei; Liu, Kuan-Guan

    2015-06-16

    Metal clusters have been very attractive due to their aesthetic structures and fascinating properties. Different from nanoparticles, each cluster of a macroscopic sample has a well-defined structure with identical composition, size, and shape. As the disadvantages of polydispersity are ruled out, informative structure-property relationships of metal clusters can be established. The formation of a high-nuclearity metal cluster involves the organization of metal ions into a complex entity in an ordered way. To achieve controllable preparation of metal clusters, it is helpful to introduce a directing agent in the formation process of a cluster. To this end, anion templates have been used to direct the formation of high-nuclearity clusters. In this Account, the role played by anions in the formation of a variety of silver clusters is reviewed. Silver ions are positively charged, so anionic species can be utilized to control the formation of silver clusters on the basis of electrostatic interactions, and the size and shape of the resulting clusters can be dictated by the templating anions. In addition, since the anion is an integral component of the silver clusters described, the physical properties of the clusters can be modulated by functional anions. The templating effects of simple inorganic anions and polyoxometalates are shown in silver alkynyl clusters and silver thiolate clusters. Intercluster compounds are also described, regarding the importance of anions in determining the packing of the ion pairs and their contribution to electron communication between the positive and negative counterparts. The role of the anions is threefold: (a) an anion is advantageous in stabilizing a cluster via balancing local positive charges of the metal cations; (b) an anion template can help control the size and shape of a cluster product; (c) an anion can be a key factor in influencing the function of a cluster by bringing in its intrinsic properties. Properties

  4. RELICS: Strong Lens Models for Five Galaxy Clusters from the Reionization Lensing Cluster Survey

    Science.gov (United States)

    Cerny, Catherine; Sharon, Keren; Andrade-Santos, Felipe; Avila, Roberto J.; Bradač, Maruša; Bradley, Larry D.; Carrasco, Daniela; Coe, Dan; Czakon, Nicole G.; Dawson, William A.; Frye, Brenda L.; Hoag, Austin; Huang, Kuang-Han; Johnson, Traci L.; Jones, Christine; Lam, Daniel; Lovisari, Lorenzo; Mainali, Ramesh; Oesch, Pascal A.; Ogaz, Sara; Past, Matthew; Paterno-Mahler, Rachel; Peterson, Avery; Riess, Adam G.; Rodney, Steven A.; Ryan, Russell E.; Salmon, Brett; Sendra-Server, Irene; Stark, Daniel P.; Strolger, Louis-Gregory; Trenti, Michele; Umetsu, Keiichi; Vulcani, Benedetta; Zitrin, Adi

    2018-06-01

    Strong gravitational lensing by galaxy clusters magnifies background galaxies, enhancing our ability to discover statistically significant samples of galaxies at z > 6, in order to constrain the high-redshift galaxy luminosity functions. Here, we present the first five lens models out of the Reionization Lensing Cluster Survey (RELICS) Hubble Treasury Program, based on new HST WFC3/IR and ACS imaging of the clusters RXC J0142.9+4438, Abell 2537, Abell 2163, RXC J2211.7–0349, and ACT-CLJ0102–49151. The derived lensing magnification is essential for estimating the intrinsic properties of high-redshift galaxy candidates, and properly accounting for the survey volume. We report on new spectroscopic redshifts of multiply imaged lensed galaxies behind these clusters, which are used as constraints, and detail our strategy to reduce systematic uncertainties due to lack of spectroscopic information. In addition, we quantify the uncertainty on the lensing magnification due to statistical and systematic errors related to the lens modeling process, and find that in all but one cluster, the magnification is constrained to better than 20% in at least 80% of the field of view, including statistical and systematic uncertainties. The five clusters presented in this paper span the range of masses and redshifts of the clusters in the RELICS program. We find that they exhibit similar strong lensing efficiencies to the clusters targeted by the Hubble Frontier Fields within the WFC3/IR field of view. Outputs of the lens models are made available to the community through the Mikulski Archive for Space Telescopes.

  5. Cluster Correlation in Mixed Models

    Science.gov (United States)

    Gardini, A.; Bonometto, S. A.; Murante, G.; Yepes, G.

    2000-10-01

    We evaluate the dependence of the cluster correlation length, rc, on the mean intercluster separation, Dc, for three models with critical matter density, vanishing vacuum energy (Λ=0), and COBE normalization: a tilted cold dark matter (tCDM) model (n=0.8) and two blue mixed models with two light massive neutrinos, yielding Ωh=0.26 and 0.14 (MDM1 and MDM2, respectively). All models approach the observational value of σ8 (and hence the observed cluster abundance) and are consistent with the observed abundance of damped Lyα systems. Mixed models have a motivation in recent results of neutrino physics; they also agree with the observed value of the ratio σ8/σ25, yielding the spectral slope parameter Γ, and nicely fit Las Campanas Redshift Survey (LCRS) reconstructed spectra. We use parallel AP3M simulations, performed in a wide box (of side 360 h-1 Mpc) and with high mass and distance resolution, enabling us to build artificial samples of clusters, whose total number and mass range allow us to cover the same Dc interval inspected through Automatic Plate Measuring Facility (APM) and Abell cluster clustering data. We find that the tCDM model performs substantially better than n=1 critical density CDM models. Our main finding, however, is that mixed models provide a surprisingly good fit to cluster clustering data.

  6. The Atacama Cosmology Telescope: Cosmology from Galaxy Clusters Detected via the Sunyaev-Zeldovich Effect

    International Nuclear Information System (INIS)

    Sehgal, N.

    2011-01-01

    We present constraints on cosmological parameters based on a sample of Sunyaev-Zeldovich-selected galaxy clusters detected in a millimeter-wave survey by the Atacama Cosmology Telescope. The cluster sample used in this analysis consists of 9 optically-confirmed high-mass clusters comprising the high-significance end of the total cluster sample identified in 455 square degrees of sky surveyed during 2008 at 148 GHz. We focus on the most massive systems to reduce the degeneracy between unknown cluster astrophysics and cosmology derived from SZ surveys. We describe the scaling relation between cluster mass and SZ signal with a 4-parameter fit. Marginalizing over the values of the parameters in this fit with conservative priors gives σ8 = 0.851 ± 0.115 and w = -1.14 ± 0.35 for a spatially-flat wCDM cosmological model with WMAP 7-year priors on cosmological parameters. This gives a modest improvement in statistical uncertainty over WMAP 7-year constraints alone. Fixing the scaling relation between cluster mass and SZ signal to a fiducial relation obtained from numerical simulations and calibrated by X-ray observations, we find σ8 = 0.821 ± 0.044 and w = -1.05 ± 0.20. These results are consistent with constraints from WMAP 7-year data plus baryon acoustic oscillations plus type Ia supernovae, which give σ8 = 0.802 ± 0.038 and w = -0.98 ± 0.053. A stacking analysis of the clusters in this sample compared to clusters simulated assuming the fiducial model also shows good agreement. These results suggest that, given the sample of clusters used here, both the astrophysics of massive clusters and the cosmological parameters derived from them are broadly consistent with current models.

  7. Accounting for One-Group Clustering in Effect-Size Estimation

    Science.gov (United States)

    Citkowicz, Martyna; Hedges, Larry V.

    2013-01-01

    In some instances, intentionally or not, study designs are such that there is clustering in one group but not in the other. This paper describes methods for computing effect size estimates and their variances when there is clustering in only one group and the analysis has not taken that clustering into account. The authors provide the effect size…

  8. blockcluster: An R Package for Model-Based Co-Clustering

    Directory of Open Access Journals (Sweden)

    Parmeet Singh Bhatia

    2017-02-01

    Full Text Available Simultaneous clustering of rows and columns, usually designated by bi-clustering, co-clustering or block clustering, is an important technique in two-way data analysis. A new standard and efficient approach has recently been proposed based on the latent block model (Govaert and Nadif 2003), which takes into account the block clustering problem on both the individual and variable sets. This article presents our R package blockcluster for co-clustering of binary, contingency and continuous data based on these models. In this document, we give a brief review of model-based block clustering methods and show how the R package blockcluster can be used for co-clustering.
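
blockcluster itself is an R package implementing the latent block model. Purely to make the co-clustering concept concrete, here is a Python sketch using scikit-learn's SpectralCoclustering, a different algorithm that also clusters rows and columns simultaneously:

```python
# Co-clustering demo: recover planted biclusters in a shuffled matrix.
# SpectralCoclustering assigns every row and every column to a bicluster.
from sklearn.cluster import SpectralCoclustering
from sklearn.datasets import make_biclusters

data, rows, cols = make_biclusters(
    shape=(30, 20), n_clusters=3, noise=0.5, random_state=0
)
model = SpectralCoclustering(n_clusters=3, random_state=0)
model.fit(data)

# Joint labels: one per row and one per column of the data matrix.
print(model.row_labels_[:10], model.column_labels_[:10])
```

The latent block model additionally provides a full probabilistic treatment (mixture densities per block), which is what blockcluster fits for binary, contingency, and continuous data.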

  9. X-ray aspects of the DAFT/FADA clusters

    Science.gov (United States)

    Guennou, L.; Durret, F.; Lima Neto, G. B.; Adami, C.

    2012-12-01

    We have undertaken the DAFT/FADA survey with the aim of applying constraints on dark energy based on weak lensing tomography as well as obtaining homogeneous and high quality data for a sample of 91 massive clusters in the redshift range [0.4,0.9] for which there are HST archive data. We have analysed the XMM-Newton data available for 42 of these clusters to derive their X-ray temperatures and luminosities and search for substructures. This study was coupled with a dynamical analysis for the 26 clusters having at least 30 spectroscopic galaxy redshifts in the cluster range. We present preliminary results on the coupled X-ray and dynamical analyses of these clusters.

  10. A cross-sectional, randomized cluster sample survey of household vulnerability to extreme heat among slum dwellers in Ahmedabad, India.

    Science.gov (United States)

    Tran, Kathy V; Azhar, Gulrez S; Nair, Rajesh; Knowlton, Kim; Jaiswal, Anjali; Sheffield, Perry; Mavalankar, Dileep; Hess, Jeremy

    2013-06-18

    Extreme heat is a significant public health concern in India; extreme heat hazards are projected to increase in frequency and severity with climate change. Few of the factors driving population heat vulnerability are documented, though poverty is a presumed risk factor. To facilitate public health preparedness, an assessment of factors affecting vulnerability among slum dwellers was conducted in summer 2011 in Ahmedabad, Gujarat, India. Indicators of heat exposure, susceptibility to heat illness, and adaptive capacity, all of which feed into heat vulnerability, were assessed through a cross-sectional household survey using randomized multistage cluster sampling. Associations between heat-related morbidity and vulnerability factors were identified using multivariate logistic regression with generalized estimating equations to account for clustering effects. Age, preexisting medical conditions, work location, and access to health information and resources were associated with self-reported heat illness. Several of these variables were unique to this study. As sociodemographics, occupational heat exposure, and access to resources were shown to increase vulnerability, future interventions (e.g., health education) might target specific populations among Ahmedabad urban slum dwellers to reduce vulnerability to extreme heat. Surveillance and evaluations of future interventions may also be worthwhile.

  11. STAR CLUSTERS IN M31. IV. A COMPARATIVE ANALYSIS OF ABSORPTION LINE INDICES IN OLD M31 AND MILKY WAY CLUSTERS

    International Nuclear Information System (INIS)

    Schiavon, Ricardo P.; Caldwell, Nelson; Morrison, Heather; Harding, Paul; Courteau, Stéphane; MacArthur, Lauren A.; Graves, Genevieve J.

    2012-01-01

    We present absorption line indices measured in the integrated spectra of globular clusters both from the Galaxy and from M31. Our samples include 41 Galactic globular clusters, and more than 300 clusters in M31. The conversion of instrumental equivalent widths into the Lick system is described, and zero-point uncertainties are provided. Comparison of line indices of old M31 clusters and Galactic globular clusters suggests an absence of important differences in chemical composition between the two cluster systems. In particular, CN indices in the spectra of M31 and Galactic clusters are essentially consistent with each other, in disagreement with several previous works. We reanalyze some of the previous data, and conclude that reported CN differences between M31 and Galactic clusters were mostly due to data calibration uncertainties. Our data support the conclusion that the chemical compositions of Milky Way and M31 globular clusters are not substantially different, and that there is no need to resort to enhanced nitrogen abundances to account for the optical spectra of M31 globular clusters.

  12. STAR CLUSTERS IN M31. IV. A COMPARATIVE ANALYSIS OF ABSORPTION LINE INDICES IN OLD M31 AND MILKY WAY CLUSTERS

    Energy Technology Data Exchange (ETDEWEB)

    Schiavon, Ricardo P. [Gemini Observatory, Hilo, HI 96720 (United States); Caldwell, Nelson [Smithsonian Astrophysical Observatory, Cambridge, MA 02138 (United States); Morrison, Heather; Harding, Paul [Department of Astronomy, Case Western Reserve University, Cleveland, OH 44106-7215 (United States); Courteau, Stephane [Department of Physics, Engineering Physics and Astronomy, Queen's University, Kingston, ON K7L 3N6 (Canada); MacArthur, Lauren A. [Herzberg Institute of Astrophysics, National Research Council of Canada/University of Victoria, Victoria, B.C. V9E 2E7 (Canada); Graves, Genevieve J., E-mail: rschiavon@gemini.edu, E-mail: caldwell@cfa.harvard.edu, E-mail: paul.harding@case.edu, E-mail: heather@vegemite.case.edu, E-mail: courteau@astro.queensu.ca, E-mail: Lauren.MacArthur@nrc-cnrc.gc.ca, E-mail: graves@astro.berkeley.edu [Department of Astronomy, University of California, Berkeley, CA 94720 (United States)

    2012-01-15

    We present absorption line indices measured in the integrated spectra of globular clusters both from the Galaxy and from M31. Our samples include 41 Galactic globular clusters, and more than 300 clusters in M31. The conversion of instrumental equivalent widths into the Lick system is described, and zero-point uncertainties are provided. Comparison of line indices of old M31 clusters and Galactic globular clusters suggests an absence of important differences in chemical composition between the two cluster systems. In particular, CN indices in the spectra of M31 and Galactic clusters are essentially consistent with each other, in disagreement with several previous works. We reanalyze some of the previous data, and conclude that reported CN differences between M31 and Galactic clusters were mostly due to data calibration uncertainties. Our data support the conclusion that the chemical compositions of Milky Way and M31 globular clusters are not substantially different, and that there is no need to resort to enhanced nitrogen abundances to account for the optical spectra of M31 globular clusters.

  13. THE DETECTION AND STATISTICS OF GIANT ARCS BEHIND CLASH CLUSTERS

    International Nuclear Information System (INIS)

    Xu, Bingxiao; Zheng, Wei; Postman, Marc; Bradley, Larry; Meneghetti, Massimo; Koekemoer, Anton; Seitz, Stella; Zitrin, Adi; Merten, Julian; Maoz, Dani; Frye, Brenda; Umetsu, Keiichi; Vega, Jesus

    2016-01-01

    We developed an algorithm to find and characterize gravitationally lensed galaxies (arcs) to perform a comparison of the observed and simulated arc abundance. Observations are from the Cluster Lensing And Supernova survey with Hubble (CLASH). Simulated CLASH images are created using the MOKA package and also clusters selected from the high-resolution, hydrodynamical simulations, MUSIC, over the same mass and redshift range as the CLASH sample. The algorithm's arc elongation accuracy, completeness, and false positive rate are determined and used to compute an estimate of the true arc abundance. We derive a lensing efficiency of 4 ± 1 arcs (with length ≥6″ and length-to-width ratio ≥7) per cluster for the X-ray-selected CLASH sample, 4 ± 1 arcs per cluster for the MOKA-simulated sample, and 3 ± 1 arcs per cluster for the MUSIC-simulated sample. The observed and simulated arc statistics are in full agreement. We measure the photometric redshifts of all detected arcs and find a median redshift zs = 1.9 with 33% of the detected arcs having zs > 3. We find that the arc abundance does not depend strongly on the source redshift distribution but is sensitive to the mass distribution of the dark matter halos (e.g., the c–M relation). Our results show that consistency between the observed and simulated distributions of lensed arc sizes and axial ratios can be achieved by using cluster-lensing simulations that are carefully matched to the selection criteria used in the observations.

  14. The Detection and Statistics of Giant Arcs behind CLASH Clusters

    Science.gov (United States)

    Xu, Bingxiao; Postman, Marc; Meneghetti, Massimo; Seitz, Stella; Zitrin, Adi; Merten, Julian; Maoz, Dani; Frye, Brenda; Umetsu, Keiichi; Zheng, Wei; Bradley, Larry; Vega, Jesus; Koekemoer, Anton

    2016-02-01

    We developed an algorithm to find and characterize gravitationally lensed galaxies (arcs) to perform a comparison of the observed and simulated arc abundance. Observations are from the Cluster Lensing And Supernova survey with Hubble (CLASH). Simulated CLASH images are created using the MOKA package and also clusters selected from the high-resolution, hydrodynamical simulations, MUSIC, over the same mass and redshift range as the CLASH sample. The algorithm's arc elongation accuracy, completeness, and false positive rate are determined and used to compute an estimate of the true arc abundance. We derive a lensing efficiency of 4 ± 1 arcs (with length ≥6″ and length-to-width ratio ≥7) per cluster for the X-ray-selected CLASH sample, 4 ± 1 arcs per cluster for the MOKA-simulated sample, and 3 ± 1 arcs per cluster for the MUSIC-simulated sample. The observed and simulated arc statistics are in full agreement. We measure the photometric redshifts of all detected arcs and find a median redshift zs = 1.9 with 33% of the detected arcs having zs > 3. We find that the arc abundance does not depend strongly on the source redshift distribution but is sensitive to the mass distribution of the dark matter halos (e.g., the c-M relation). Our results show that consistency between the observed and simulated distributions of lensed arc sizes and axial ratios can be achieved by using cluster-lensing simulations that are carefully matched to the selection criteria used in the observations.

  15. PHAT STELLAR CLUSTER SURVEY. I. YEAR 1 CATALOG AND INTEGRATED PHOTOMETRY

    International Nuclear Information System (INIS)

    Johnson, L. Clifton; Dalcanton, Julianne J.; Fouesneau, Morgan; Hodge, Paul W.; Weisz, Daniel R.; Williams, Benjamin F.; Beerman, Lori C.; Seth, Anil C.; Caldwell, Nelson; Gouliermis, Dimitrios A.; Larsen, Søren S.; Olsen, Knut A. G.; San Roman, Izaskun; Sarajedini, Ata; Bianchi, Luciana; Dolphin, Andrew E.; Girardi, Léo; Guhathakurta, Puragra; Kalirai, Jason; Lang, Dustin

    2012-01-01

    The Panchromatic Hubble Andromeda Treasury (PHAT) survey is an ongoing Hubble Space Telescope (HST) multi-cycle program to obtain high spatial resolution imaging of one-third of the M31 disk at ultraviolet through near-infrared wavelengths. In this paper, we present the first installment of the PHAT stellar cluster catalog. When completed, the PHAT cluster catalog will be among the largest and most comprehensive surveys of resolved star clusters in any galaxy. The exquisite spatial resolution achieved with HST has allowed us to identify hundreds of new clusters that were previously inaccessible with existing ground-based surveys. We identify 601 clusters in the Year 1 sample, representing more than a factor of four increase over previous catalogs within the current survey area (390 arcmin²). This work presents results derived from the first ∼25% of the survey data; we estimate that the final sample will include ∼2500 clusters. For the Year 1 objects, we present a catalog with positions, radii, and six-band integrated photometry. Along with a general characterization of the cluster luminosities and colors, we discuss the cluster luminosity function, the cluster size distributions, and highlight a number of individually interesting clusters found in the Year 1 search.

  16. PHAT STELLAR CLUSTER SURVEY. I. YEAR 1 CATALOG AND INTEGRATED PHOTOMETRY

    Energy Technology Data Exchange (ETDEWEB)

    Johnson, L. Clifton; Dalcanton, Julianne J.; Fouesneau, Morgan; Hodge, Paul W.; Weisz, Daniel R.; Williams, Benjamin F.; Beerman, Lori C. [Department of Astronomy, University of Washington, Box 351580, Seattle, WA 98195 (United States); Seth, Anil C. [Department of Physics and Astronomy, University of Utah, Salt Lake City, UT 84112 (United States); Caldwell, Nelson [Harvard-Smithsonian Center for Astrophysics, 60 Garden Street, Cambridge, MA 02138 (United States); Gouliermis, Dimitrios A. [Institut fuer Theoretische Astrophysik, Zentrum fuer Astronomie der Universitaet Heidelberg, Albert-Ueberle-Strasse 2, D-69120 Heidelberg (Germany); Larsen, Soren S. [Department of Astrophysics, IMAPP, Radboud University Nijmegen, P.O. Box 9010, 6500 GL Nijmegen (Netherlands); Olsen, Knut A. G. [National Optical Astronomy Observatory, 950 North Cherry Avenue, Tucson, AZ 85719 (United States); San Roman, Izaskun; Sarajedini, Ata [Department of Astronomy, University of Florida, 211 Bryant Space Science Center, Gainesville, FL 32611-2055 (United States); Bianchi, Luciana [Department of Physics and Astronomy, Johns Hopkins University, 3400 North Charles Street, Baltimore, MD 21218 (United States); Dolphin, Andrew E. [Raytheon Company, 1151 East Hermans Road, Tucson, AZ 85756 (United States); Girardi, Leo [Osservatorio Astronomico di Padova-INAF, Vicolo dell' Osservatorio 5, I-35122 Padova (Italy); Guhathakurta, Puragra [Department of Astronomy and Astrophysics, University of California Observatories/Lick Observatory, University of California, 1156 High Street, Santa Cruz, CA 95064 (United States); Kalirai, Jason [Space Telescope Science Institute, 3700 San Martin Drive, Baltimore, MD 21218 (United States); Lang, Dustin, E-mail: lcjohnso@astro.washington.edu [Department of Astrophysical Sciences, Princeton University, Princeton, NJ 08544 (United States); and others

    2012-06-20

    The Panchromatic Hubble Andromeda Treasury (PHAT) survey is an ongoing Hubble Space Telescope (HST) multi-cycle program to obtain high spatial resolution imaging of one-third of the M31 disk at ultraviolet through near-infrared wavelengths. In this paper, we present the first installment of the PHAT stellar cluster catalog. When completed, the PHAT cluster catalog will be among the largest and most comprehensive surveys of resolved star clusters in any galaxy. The exquisite spatial resolution achieved with HST has allowed us to identify hundreds of new clusters that were previously inaccessible with existing ground-based surveys. We identify 601 clusters in the Year 1 sample, representing more than a factor of four increase over previous catalogs within the current survey area (390 arcmin²). This work presents results derived from the first ∼25% of the survey data; we estimate that the final sample will include ∼2500 clusters. For the Year 1 objects, we present a catalog with positions, radii, and six-band integrated photometry. Along with a general characterization of the cluster luminosities and colors, we discuss the cluster luminosity function, the cluster size distributions, and highlight a number of individually interesting clusters found in the Year 1 search.

  17. Sample size reassessment for a two-stage design controlling the false discovery rate.

    Science.gov (United States)

    Zehetmayer, Sonja; Graf, Alexandra C; Posch, Martin

    2015-11-01

    Sample size calculations for gene expression microarray and NGS-RNA-Seq experiments are challenging because the overall power depends on unknown quantities such as the proportion of true null hypotheses and the distribution of the effect sizes under the alternative. We propose a two-stage design with an adaptive interim analysis in which these quantities are estimated from the interim data. The second-stage sample size is chosen based on these estimates to achieve a specified overall power. The proposed procedure controls the power in all considered scenarios except for very low first-stage sample sizes. The false discovery rate (FDR) is controlled despite the data-dependent choice of sample size. The two-stage design can be a useful tool for determining the sample size of high-dimensional studies when, in the planning phase, there is high uncertainty regarding the expected effect sizes and variability.
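
One ingredient of such an interim analysis, estimating the proportion of true null hypotheses from first-stage p-values, can be sketched as follows. This is a schematic using Storey's estimator with λ = 0.5 on simulated data, not the authors' exact procedure; the second-stage sample size calculation would then plug in this estimate.

```python
# Simulate first-stage test statistics: 800 true nulls, 200 alternatives
# shifted by 2.5, then estimate pi0 from the resulting p-values.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)
m, m1 = 1000, 200                 # total hypotheses, true alternatives
z = rng.normal(size=m)
z[:m1] += 2.5                     # effect under the alternative
pvals = norm.sf(z)                # one-sided p-values

# Storey's estimator: p-values above lambda are mostly from true nulls,
# which are uniform, so their excess frequency estimates pi0.
lam = 0.5
pi0_hat = min(1.0, np.mean(pvals > lam) / (1 - lam))
print(round(pi0_hat, 2))
```

Here the true proportion of nulls is 0.8, and the estimate should land near it; in the adaptive design this quantity (with the estimated effect-size distribution) drives the stage-two sample size.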

  18. HOW TO FIND YOUNG MASSIVE CLUSTER PROGENITORS

    Energy Technology Data Exchange (ETDEWEB)

    Bressert, E.; Longmore, S.; Testi, L. [European Southern Observatory, Karl Schwarzschild Str. 2, D-85748 Garching bei Muenchen (Germany); Ginsburg, A.; Bally, J.; Battersby, C. [Center for Astrophysics and Space Astronomy, University of Colorado, Boulder, CO 80309 (United States)

    2012-10-20

    We propose that bound, young massive stellar clusters form from dense clouds that have escape speeds greater than the sound speed in photo-ionized gas. In these clumps, radiative feedback in the form of gas ionization is bottled up, enabling star formation to proceed to sufficiently high efficiency so that the resulting star cluster remains bound even after gas removal. We estimate the observable properties of the massive proto-clusters (MPCs) for existing Galactic plane surveys and suggest how they may be sought in recent and upcoming extragalactic observations. These surveys will potentially provide a significant sample of MPC candidates that will allow us to better understand extreme star-formation and massive cluster formation in the Local Universe.

  19. Dairy Herd Mastitis Program in Argentina: Farm Clusters and Effects on Bulk Milk Somatic Cell Counts

    Directory of Open Access Journals (Sweden)

    C. Vissio, S. A. Dieser, C. G. Raspanti, J. A. Giraudo, C. I. Bogni, L. M. Odierno and A. J. Larriestra

    2013-01-01

    Full Text Available This research was conducted to characterize dairy farm clusters according to the mastitis control practices of small and medium dairy producers in Argentina, and to evaluate the effect of such farm cluster patterns on bulk milk somatic cell count (BMSCC). Two samples of 51 (cross-sectional) and 38 (longitudinal) herds were selected to identify farm clusters and to study the influence of management on monthly BMSCC, respectively. The cross-sectional sample involved an assessment of the milking routine and facilities of each herd visited. Hierarchical cluster analysis was used to find the most discriminating farm attributes in the cross-sectional sample. Afterward, the herd cluster typologies were identified in the longitudinal sample. Herd monthly BMSCC averages were evaluated over 12 months by fitting a linear mixed model. Two clusters were identified: the farms in Cluster I applied a comprehensive mastitis program, in contrast to Cluster II. Post-dipping, dry cow therapy and milking machine testing were routinely applied in Cluster I. In the longitudinal study, 14 of the 38 dairy herds were labeled as Cluster I and the rest were assigned to Cluster II. A significant difference in BMSCC (60,000 cells/mL) was found between Clusters I and II. The present study showed the relevance and potential impact of promoting mastitis control practices among small and medium sized dairy producers in Argentina.
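
The farm-typology step, hierarchical clustering of herds on management-practice indicators, can be sketched as follows; the binary practice matrix is simulated and hypothetical, not the study's data:

```python
# Hierarchical clustering of herds on binary practice indicators
# (e.g. post-dipping, dry cow therapy, machine testing), using Hamming
# distance and average linkage, cut into two typologies.
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import pdist

rng = np.random.default_rng(3)
# Rows = herds, columns = practices; one group applies most practices,
# the other applies few (probabilities are illustrative).
comprehensive = rng.binomial(1, 0.9, size=(14, 3))
minimal = rng.binomial(1, 0.1, size=(24, 3))
practices = np.vstack([comprehensive, minimal])

clusters = fcluster(
    linkage(pdist(practices, metric="hamming"), method="average"),
    t=2, criterion="maxclust",
)
print(clusters)
```

The subsequent step in the study, relating typology to monthly BMSCC, would then fit a linear mixed model with herd as a random effect.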

  20. The Nature and Origin of UCDs in the Coma Cluster

    Science.gov (United States)

    Chiboucas, Kristin; Tully, R. Brent; Madrid, Juan; Phillipps, Steven; Carter, David; Peng, Eric

    2018-01-01

    UCDs are super-massive star clusters found largely in dense regions, but they have also been found around individual galaxies and in smaller groups. Their origin is still under debate, but currently favored scenarios include formation as giant star clusters, either as the brightest globular clusters or through mergers of super star clusters, themselves formed during major galaxy mergers, or as remnant nuclei from the tidal stripping of nucleated dwarf ellipticals. Establishing the nature of these enigmatic objects has important implications for our understanding of star formation, star cluster formation, the missing satellite problem, and galaxy evolution. We are attempting to disentangle these competing formation scenarios with a large survey of UCDs in the Coma cluster. Using two-passband imaging from the HST/ACS Coma Cluster Treasury Survey, we use colors and sizes to identify the UCD cluster members. With a large, size-limited sample of the UCD population within the core region of the Coma cluster, we are investigating the population size, properties, and spatial distribution, and comparing these with the Coma globular cluster and nuclear star cluster populations to discriminate between the threshing and globular cluster scenarios. In previous work, we found a possible correlation of UCD colors with host galaxy and a possible excess of UCDs around a non-central giant galaxy with an unusually large globular cluster population, both suggestive of a globular cluster origin. With a larger sample size and additional imaging fields that encompass the regions around these giant galaxies, we find that the color correlation with host persists and that the giant galaxy with the unusually large globular cluster population does appear to host a large UCD population as well. We present the current status of the survey.

  1. The Atacama Cosmology Telescope: Cosmology from Galaxy Clusters Detected Via the Sunyaev-Zel'dovich Effect

    Science.gov (United States)

    Sehgal, Neelima; Trac, Hy; Acquaviva, Viviana; Ade, Peter A. R.; Aguirre, Paula; Amiri, Mandana; Appel, John W.; Barrientos, L. Felipe; Battistelli, Elia S.; Bond, J. Richard; hide

    2010-01-01

    We present constraints on cosmological parameters based on a sample of Sunyaev-Zel'dovich-selected galaxy clusters detected in a millimeter-wave survey by the Atacama Cosmology Telescope. The cluster sample used in this analysis consists of 9 optically-confirmed high-mass clusters comprising the high-significance end of the total cluster sample identified in 455 square degrees of sky surveyed during 2008 at 148 GHz. We focus on the most massive systems to reduce the degeneracy between unknown cluster astrophysics and cosmology derived from SZ surveys. We describe the scaling relation between cluster mass and SZ signal with a 4-parameter fit. Marginalizing over the values of the parameters in this fit with conservative priors gives σ8 = 0.851 +/- 0.115 and w = -1.14 +/- 0.35 for a spatially-flat wCDM cosmological model with WMAP 7-year priors on cosmological parameters. This gives a modest improvement in statistical uncertainty over WMAP 7-year constraints alone. Fixing the scaling relation between cluster mass and SZ signal to a fiducial relation obtained from numerical simulations and calibrated by X-ray observations, we find σ8 = 0.821 +/- 0.044 and w = -1.05 +/- 0.20. These results are consistent with constraints from WMAP 7 plus baryon acoustic oscillations plus type Ia supernovae, which give σ8 = 0.802 +/- 0.038 and w = -0.98 +/- 0.053. A stacking analysis of the clusters in this sample compared to clusters simulated assuming the fiducial model also shows good agreement. These results suggest that, given the sample of clusters used here, both the astrophysics of massive clusters and the cosmological parameters derived from them are broadly consistent with current models.

  2. Vacancy clustering and acceptor activation in nitrogen-implanted ZnO

    Science.gov (United States)

    Børseth, Thomas Moe; Tuomisto, Filip; Christensen, Jens S.; Monakhov, Edouard V.; Svensson, Bengt G.; Kuznetsov, Andrej Yu.

    2008-01-01

    The role of vacancy clustering and acceptor activation on resistivity evolution in N ion-implanted n-type hydrothermally grown bulk ZnO has been investigated by positron annihilation spectroscopy, resistivity measurements, and chemical profiling. Room temperature 220 keV N implantation using doses in the low 10^15 cm^-2 range induces small and big vacancy clusters containing at least 2 and 3-4 Zn vacancies, respectively. The small clusters are present already in as-implanted samples and remain stable up to 1000°C with no significant effect on the resistivity evolution. In contrast, formation of the big clusters at 600°C is associated with a significant increase in the free electron concentration attributed to gettering of amphoteric Li impurities by these clusters. Further annealing at 800°C results in a dramatic decrease in the free electron concentration correlated with activation of 10^16-10^17 cm^-3 acceptors likely to be N and/or Li related. The samples remain n-type, however, and further annealing at 1000°C results in passivation of the acceptor states while the big clusters dissociate.

  3. Sample size estimation and sampling techniques for selecting a representative sample

    Directory of Open Access Journals (Sweden)

    Aamir Omair

    2014-01-01

    Full Text Available Introduction: The purpose of this article is to provide a general understanding of the concepts of sampling as applied to health-related research. Sample Size Estimation: It is important to select a representative sample in quantitative research in order to be able to generalize the results to the target population. The sample should be of the required sample size and must be selected using an appropriate probability sampling technique. There are many hidden biases which can adversely affect the outcome of the study. Important factors to consider for estimating the sample size include the size of the study population, the confidence level, the expected proportion of the outcome variable (for categorical variables) or the standard deviation of the outcome variable (for numerical variables), and the required precision (margin of accuracy) of the study. The greater the required precision, the larger the required sample size. Sampling Techniques: The probability sampling techniques applied in health-related research include simple random sampling, systematic random sampling, stratified random sampling, cluster sampling, and multistage sampling. These are recommended over the nonprobability sampling techniques because the results of the study can be generalized to the target population.
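
    The sample-size factors listed above combine in Cochran's standard formula for estimating a proportion, n0 = z^2 p(1-p)/d^2, optionally shrunk by a finite-population correction. A small sketch; the example numbers are illustrative, not from the article:

```python
import math

def sample_size_proportion(p, margin, confidence_z=1.96, population=None):
    """Cochran's sample-size formula for a proportion:
    n0 = z^2 * p * (1 - p) / d^2, with optional finite-population correction."""
    n0 = (confidence_z ** 2) * p * (1 - p) / margin ** 2
    if population is not None:
        n0 = n0 / (1 + (n0 - 1) / population)  # finite-population correction
    return math.ceil(n0)

# 50% expected proportion, 5% margin of accuracy, 95% confidence:
print(sample_size_proportion(0.5, 0.05))                   # 385
# Same precision from a small study population of 2000:
print(sample_size_proportion(0.5, 0.05, population=2000))  # 323
```

Halving the margin to 2.5% roughly quadruples the required sample size, which is the "greater precision, larger sample" trade-off described above.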

  4. Kappa statistic for clustered matched-pair data.

    Science.gov (United States)

    Yang, Zhao; Zhou, Ming

    2014-07-10

    The kappa statistic is widely used to assess the agreement between two procedures for independent matched-pair data. For matched-pair data collected in clusters, on the basis of the delta method and sampling techniques, we propose a nonparametric variance estimator for the kappa statistic that requires neither a within-cluster correlation structure nor distributional assumptions. The results of an extensive Monte Carlo simulation study demonstrate that the proposed kappa statistic provides consistent estimation, and the proposed variance estimator behaves reasonably well for at least a moderately large number of clusters (e.g., K ≥ 50). Compared with the variance estimator ignoring dependence within a cluster, the proposed variance estimator performs better in maintaining the nominal coverage probability when the intra-cluster correlation is moderate (ρ ≥ 0.3), with more pronounced improvement as ρ increases further. To illustrate the practical application of the proposed estimator, we analyze two real data examples of clustered matched-pair data. Copyright © 2014 John Wiley & Sons, Ltd.
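
    For reference, the kappa point estimate itself can be computed from a pooled 2x2 agreement table; the paper's clustered variance estimator is not reproduced here, and the counts below are hypothetical:

```python
def cohen_kappa(table):
    """Cohen's kappa from a 2x2 matched-pair agreement table.
    table[i][j] = count of pairs rated i by procedure 1 and j by procedure 2."""
    n = sum(sum(row) for row in table)
    po = sum(table[i][i] for i in range(2)) / n              # observed agreement
    row = [sum(table[i][j] for j in range(2)) / n for i in range(2)]
    col = [sum(table[i][j] for i in range(2)) / n for j in range(2)]
    pe = sum(row[k] * col[k] for k in range(2))              # chance agreement
    return (po - pe) / (1 - pe)

# Hypothetical pooled table: 40 (+,+), 5 (+,-), 10 (-,+), 45 (-,-)
print(round(cohen_kappa([[40, 5], [10, 45]]), 3))  # 0.7
```

With clustered data the point estimate is unchanged; it is the variance of this statistic that the proposed estimator corrects for within-cluster dependence.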

  5. 4C radio sources in clusters of galaxies

    International Nuclear Information System (INIS)

    McHardy, I.M.

    1979-01-01

    Observations of a complete sample of 4C and 4CT radio sources in Abell clusters with the Cambridge One-Mile telescope are analysed. It is concluded that radio sources are strongly concentrated towards the cluster centres and are equally likely to be found in clusters of any richness. The probability of a galaxy of a given absolute magnitude producing a source above a given luminosity does not depend on cluster membership. 4C and 4CT radio sources in clusters, selected at 178 MHz, occur preferentially in Bautz-Morgan (BM) class I clusters, whereas those selected at 1.4 GHz do not. The most powerful radio source in the cluster is almost always associated with the optically brightest galaxy. The average spectrum of 4C sources in the range 408 to 1407 MHz is steeper in BM class I than in other classes. Spectra also steepen with cluster richness. The morphology of 4C sources in clusters depends strongly on BM class and, in particular, radio-trail sources occur only in BM classes II, II-III and III. (author)

  6. THE EXTENDED VIRGO CLUSTER CATALOG

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Suk; Rey, Soo-Chang; Lee, Youngdae; Chung, Jiwon; Pak, Mina; Yi, Wonhyeong; Lee, Woong [Department of Astronomy and Space Science, Chungnam National University, 99 Daehak-ro, Daejeon 305-764 (Korea, Republic of); Jerjen, Helmut [Research School of Astronomy and Astrophysics, The Australian National University, Cotter Road, Weston, ACT 2611 (Australia); Lisker, Thorsten [Astronomisches Rechen-Institut, Zentrum für Astronomie der Universität Heidelberg (ZAH), Mönchhofstraße 12-14, D-69120 Heidelberg (Germany); Sung, Eon-Chang [Korea Astronomy and Space Science institute, 776 Daedeokdae-ro, Daejeon 305-348 (Korea, Republic of)

    2015-01-01

    We present a new catalog of galaxies in the wider region of the Virgo cluster, based on the Sloan Digital Sky Survey (SDSS) Data Release 7. The Extended Virgo Cluster Catalog (EVCC) covers an area of 725 deg^2 or 60.1 Mpc^2. It is 5.2 times larger than the footprint of the classical Virgo Cluster Catalog (VCC) and reaches out to 3.5 times the virial radius of the Virgo cluster. We selected 1324 spectroscopically targeted galaxies with radial velocities less than 3000 km s^-1. In addition, 265 galaxies that have been overlooked in the SDSS spectroscopic survey but have available redshifts in the NASA Extragalactic Database are also included. Our selection process secured a total of 1589 galaxies, 676 of which are not included in the VCC. The certain and possible cluster members are defined by means of redshift comparison with a cluster infall model. We employed two independent and complementary galaxy classification schemes: the traditional morphological classification based on the visual inspection of optical images and a characterization of galaxies from their spectroscopic features. SDSS u, g, r, i, and z passband photometry of all EVCC galaxies was performed using Source Extractor. We compare the EVCC galaxies with the VCC in terms of morphology, spatial distribution, and luminosity function. The EVCC defines a comprehensive galaxy sample covering a wider range in galaxy density that is significantly different from the inner region of the Virgo cluster. It will be the foundation for forthcoming galaxy evolution studies in the extended Virgo cluster region, complementing ongoing and planned Virgo cluster surveys at various wavelengths.

  7. Combining cluster number counts and galaxy clustering

    Energy Technology Data Exchange (ETDEWEB)

    Lacasa, Fabien; Rosenfeld, Rogerio, E-mail: fabien@ift.unesp.br, E-mail: rosenfel@ift.unesp.br [ICTP South American Institute for Fundamental Research, Instituto de Física Teórica, Universidade Estadual Paulista, São Paulo (Brazil)

    2016-08-01

    The abundance of clusters and the clustering of galaxies are two of the important cosmological probes for current and future large scale surveys of galaxies, such as the Dark Energy Survey. In order to combine them one has to account for the fact that they are not independent quantities, since they probe the same density field. It is important to develop a good understanding of their correlation in order to extract parameter constraints. We present a detailed modelling of the joint covariance matrix between cluster number counts and the galaxy angular power spectrum. We employ the framework of the halo model complemented by a Halo Occupation Distribution model (HOD). We demonstrate the importance of accounting for non-Gaussianity to produce accurate covariance predictions. Indeed, we show that the non-Gaussian covariance becomes dominant at small scales, low redshifts or high cluster masses. We discuss in particular the case of the super-sample covariance (SSC), including the effects of galaxy shot-noise, halo second order bias and non-local bias. We demonstrate that the SSC obeys mathematical inequalities and positivity. Using the joint covariance matrix and a Fisher matrix methodology, we examine the prospects of combining these two probes to constrain cosmological and HOD parameters. We find that the combination indeed results in noticeably better constraints, with improvements of order 20% on cosmological parameters compared to the best single probe, and even greater improvement on HOD parameters, with reduction of error bars by a factor 1.4-4.8. This happens in particular because the cross-covariance introduces a synergy between the probes on small scales. We conclude that accounting for non-Gaussian effects is required for the joint analysis of these observables in galaxy surveys.

  8. Sampling procedures for inventory of commercial volume tree species in Amazon Forest.

    Science.gov (United States)

    Netto, Sylvio P; Pelissari, Allan L; Cysneiros, Vinicius C; Bonazza, Marcelo; Sanquetta, Carlos R

    2017-01-01

    The spatial distribution of tropical tree species can affect the consistency of the estimators in commercial forest inventories; therefore, appropriate sampling procedures are required to survey species with different spatial patterns in the Amazon Forest. The present study aims to evaluate the conventional sampling procedures and to introduce adaptive cluster sampling for volumetric inventories of Amazonian tree species, considering the hypotheses that the density, the spatial distribution, and the zero-plots affect the consistency of the estimators, and that adaptive cluster sampling yields more accurate volumetric estimation. We use data from a census carried out in Jamari National Forest, Brazil, where trees with diameters equal to or greater than 40 cm were measured in 1,355 plots. Species with different spatial patterns were selected and sampled with simple random sampling, systematic sampling, linear cluster sampling, and adaptive cluster sampling, and the accuracy of the volumetric estimation and the presence of zero-plots were evaluated. The sampling procedures applied to these species were affected by the low density of trees and the large number of zero-plots; the adaptive clusters allowed the sampling effort to be concentrated in plots with trees, thus gathering more representative samples for estimating the commercial volume.
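
    The adaptive cluster sampling idea, expanding the sample around any plot that satisfies the condition of interest, can be sketched on a toy grid of plots. The grid, the 4-neighbour expansion rule, and the "trees present" condition are simplified assumptions for illustration, not the study's protocol:

```python
import random

def adaptive_cluster_sample(grid, n_initial, condition=lambda v: v > 0, seed=1):
    """Minimal adaptive cluster sampling sketch on a square grid of plot values:
    any sampled plot meeting `condition` triggers sampling of its 4 neighbours."""
    rng = random.Random(seed)
    size = len(grid)
    cells = [(r, c) for r in range(size) for c in range(size)]
    sampled = set(rng.sample(cells, n_initial))       # initial simple random sample
    frontier = [p for p in sampled if condition(grid[p[0]][p[1]])]
    while frontier:                                   # adaptive expansion
        r, c = frontier.pop()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < size and 0 <= nc < size and (nr, nc) not in sampled:
                sampled.add((nr, nc))
                if condition(grid[nr][nc]):
                    frontier.append((nr, nc))
    return sampled

# Clumped toy population: mostly empty plots with one occupied patch.
grid = [[0] * 6 for _ in range(6)]
for r, c in [(1, 1), (1, 2), (2, 1), (2, 2)]:
    grid[r][c] = 3  # trees present in these plots
plots = adaptive_cluster_sample(grid, n_initial=5)
print(len(plots), sum(grid[r][c] for r, c in plots))
```

If any initial plot lands in the occupied patch, the whole patch is swept in, which is how the design concentrates effort on non-zero plots for clumped species.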

  9. Weighing galaxy clusters with gas. II. On the origin of hydrostatic mass bias in ΛCDM galaxy clusters

    International Nuclear Information System (INIS)

    Nelson, Kaylea; Nagai, Daisuke; Yu, Liang; Lau, Erwin T.; Rudd, Douglas H.

    2014-01-01

    The use of galaxy clusters as cosmological probes hinges on our ability to measure their masses accurately and with high precision. The hydrostatic method is one of the most common ways of estimating the masses of individual galaxy clusters, but such estimates suffer from biases due to departures from hydrostatic equilibrium. Using a large, mass-limited sample of massive galaxy clusters from a high-resolution hydrodynamical cosmological simulation, we show that, in addition to turbulent and bulk gas velocities, the acceleration of gas introduces biases in the hydrostatic mass estimate of galaxy clusters. In unrelaxed clusters, the acceleration bias is comparable to the bias due to non-thermal pressure associated with merger-induced turbulent and bulk gas motions. In relaxed clusters, the mean mass bias due to acceleration is small (≲ 3%), but the scatter in the mass bias can be reduced by accounting for gas acceleration. Additionally, this acceleration bias is greater in the outskirts of higher redshift clusters where mergers are more frequent and clusters are accreting more rapidly. Since gas acceleration cannot be observed directly, it introduces an irreducible bias for hydrostatic mass estimates. This acceleration bias places limits on how well we can recover cluster masses from future X-ray and microwave observations. We discuss implications for cluster mass estimates based on X-ray, Sunyaev-Zel'dovich effect, and gravitational lensing observations and their impact on cluster cosmology.
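
    The estimate whose biases are discussed above is the standard hydrostatic mass, M(<r) = -kT r / (G mu m_p) * (dln rho/dln r + dln T/dln r). A sketch with illustrative cluster values; the temperature, radius, logarithmic slopes, and mean molecular weight are assumptions, not numbers from the paper:

```python
G = 6.674e-11        # m^3 kg^-1 s^-2
M_P = 1.6726e-27     # proton mass, kg
MU = 0.6             # mean molecular weight of ionized intracluster gas (assumed)
KPC = 3.086e19       # kiloparsec, m
M_SUN = 1.989e30     # kg
KEV = 1.602e-16      # keV in joules

def hydrostatic_mass_msun(r_kpc, kT_keV, dlnrho_dlnr, dlnT_dlnr):
    """Hydrostatic mass M(<r) = -kT r / (G mu m_p) * (dln rho/dln r + dln T/dln r),
    in solar masses, assuming spherical symmetry and hydrostatic equilibrium."""
    kT = kT_keV * KEV
    r = r_kpc * KPC
    return -kT * r / (G * MU * M_P) * (dlnrho_dlnr + dlnT_dlnr) / M_SUN

# Illustrative massive cluster: kT = 7 keV at r = 1000 kpc,
# density and temperature log-slopes of -2 and -0.5.
print(f"{hydrostatic_mass_msun(1000, 7.0, -2.0, -0.5):.2e}")  # of order 10^14-10^15 M_sun
```

Gas acceleration enters as an extra (unobservable) term in the momentum equation, which is why the abstract calls the resulting bias irreducible for this estimator.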

  10. Probing BL Lac and Cluster Evolution via a Wide-angle, Deep X-ray Selected Sample

    Science.gov (United States)

    Perlman, E.; Jones, L.; White, N.; Angelini, L.; Giommi, P.; McHardy, I.; Wegner, G.

    1994-12-01

    The WARPS survey (Wide-Angle ROSAT Pointed Survey) has been constructed from the archive of all public ROSAT PSPC observations, and is a subset of the WGACAT catalog. WARPS will include a complete sample of >= 100 BL Lacs at F_x >= 10^-13 erg s^-1 cm^-2. A second selection technique will identify ~ 100 clusters at z >= 0.15. The mean redshifts are <z> = 0.304 +/- 0.062 for XBLs but <z> = 0.60 +/- 0.05 for RBLs. Models of the X-ray luminosity function (XLF) are also poorly constrained. WARPS will allow us to compute an accurate XLF, decreasing the error bars above by over a factor of two. We will also test for low-luminosity BL Lacs, whose non-thermal nuclear sources are dim compared to the host galaxy. Browne and Marcha (1993) claim the EMSS missed most of these objects and is incomplete. If their predictions are correct, 20-40% of the BL Lacs we find will fall in this category, enabling us to probe the evolution and internal workings of BL Lacs at lower luminosities than ever before. By removing likely QSOs before optical spectroscopy, WARPS requires only modest amounts of telescope time. It will extend measurement of the cluster XLF both to higher redshifts (z > 0.5) and lower luminosities (L_X < 1x10^44 erg s^-1) than previous measurements, confirming or rejecting the 3σ detection of negative evolution found in the EMSS, and constraining Cold Dark Matter cosmologies. Faint NELGs are a recently discovered major contributor to the X-ray background. They are a mixture of Sy2s, starbursts and galaxies of unknown type. Detailed classification and evolution of their XLF will be determined for the first time.

  11. Recognition of genetically modified product based on affinity propagation clustering and terahertz spectroscopy

    Science.gov (United States)

    Liu, Jianjun; Kan, Jianquan

    2018-04-01

    In this paper, a new method for identifying genetically modified material from its terahertz spectrum is proposed, using a support vector machine (SVM) based on affinity propagation clustering. The algorithm uses affinity propagation clustering to analyze and label the unlabeled training samples, and the existing SVM training data are continuously updated in the iterative process. Because establishing the identification model does not require manually labeling the training samples, the error caused by human-labeled samples is reduced and the identification accuracy of the model is greatly improved.
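
    A single-pass sketch of the cluster-then-classify idea: affinity propagation pseudo-labels the unlabeled data, and an SVM is trained on those labels. The paper's method is iterative; this toy version uses scikit-learn with synthetic stand-ins for terahertz spectra, so all data and parameter choices are assumptions:

```python
import numpy as np
from sklearn.cluster import AffinityPropagation
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Toy stand-in for terahertz spectra: two groups of 40-point "spectra".
X = np.vstack([rng.normal(0.0, 0.3, (30, 40)),
               rng.normal(1.0, 0.3, (30, 40))])

# Step 1: affinity propagation groups the unlabeled spectra;
# the cluster indices serve as pseudo-labels (no manual labeling).
ap = AffinityPropagation(random_state=0).fit(X)
pseudo_labels = ap.labels_

# Step 2: train an SVM on the pseudo-labeled data.
clf = SVC(kernel="rbf", gamma="scale").fit(X, pseudo_labels)
print(len(set(pseudo_labels)), clf.score(X, pseudo_labels))
```

In the full algorithm this would loop, refining the SVM training set at each iteration; the sketch shows only one pass of the pseudo-labeling step.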

  12. Improved multi-objective clustering algorithm using particle swarm optimization.

    Science.gov (United States)

    Gong, Congcong; Chen, Haisong; He, Weixiong; Zhang, Zhanliang

    2017-01-01

    Multi-objective clustering has received widespread attention recently, as it can obtain more accurate and reasonable solutions. In this paper, an improved multi-objective clustering framework using particle swarm optimization (IMCPSO) is proposed. Firstly, a novel particle representation for the clustering problem is designed to help PSO search for clustering solutions in continuous space. Secondly, the distribution of the Pareto set is analyzed, and the results of this analysis are applied to the leader selection strategy to keep the algorithm from becoming trapped in a local optimum. Moreover, a method for improving clustering solutions is proposed, which greatly increases the efficiency of the search. In the experiments, 28 datasets are used and nine state-of-the-art clustering algorithms are compared; the proposed method is superior to the other approaches on the evaluation index ARI (adjusted Rand index).
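
    The evaluation index used above, the adjusted Rand index (ARI), scores a found partition against ground truth and is invariant to label permutation. A minimal example with made-up labels, using scikit-learn's implementation:

```python
from sklearn.metrics import adjusted_rand_score

true_labels     = [0, 0, 0, 1, 1, 1, 2, 2, 2]
found_by_method = [0, 0, 1, 1, 1, 1, 2, 2, 2]  # one point misassigned

# Imperfect partition: ARI strictly between 0 (chance level) and 1 (perfect).
print(adjusted_rand_score(true_labels, found_by_method))

# Relabeling the clusters does not change the score.
print(adjusted_rand_score([0, 0, 1, 1], [1, 1, 0, 0]))  # 1.0
```

This permutation invariance is why ARI is a standard choice for comparing clustering algorithms, whose cluster indices are arbitrary.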

  13. A statistically rigorous sampling design to integrate avian monitoring and management within Bird Conservation Regions.

    Science.gov (United States)

    Pavlacky, David C; Lukacs, Paul M; Blakesley, Jennifer A; Skorkowsky, Robert C; Klute, David S; Hahn, Beth A; Dreitz, Victoria J; George, T Luke; Hanni, David J

    2017-01-01

    Monitoring is an essential component of wildlife management and conservation. However, the usefulness of monitoring data is often undermined by the lack of 1) coordination across organizations and regions, 2) meaningful management and conservation objectives, and 3) rigorous sampling designs. Although many improvements to avian monitoring have been discussed, the recommendations have been slow to emerge in large-scale programs. We introduce the Integrated Monitoring in Bird Conservation Regions (IMBCR) program designed to overcome the above limitations. Our objectives are to outline the development of a statistically defensible sampling design to increase the value of large-scale monitoring data and provide example applications to demonstrate the ability of the design to meet multiple conservation and management objectives. We outline the sampling process for the IMBCR program with a focus on the Badlands and Prairies Bird Conservation Region (BCR 17). We provide two examples for the Brewer's sparrow (Spizella breweri) in BCR 17 demonstrating the ability of the design to 1) determine hierarchical population responses to landscape change and 2) estimate hierarchical habitat relationships to predict the response of the Brewer's sparrow to conservation efforts at multiple spatial scales. The collaboration across organizations and regions provided economy of scale by leveraging a common data platform over large spatial scales to promote the efficient use of monitoring resources. We designed the IMBCR program to address the information needs and core conservation and management objectives of the participating partner organizations. Although it has been argued that probabilistic sampling designs are not practical for large-scale monitoring, the IMBCR program provides a precedent for implementing a statistically defensible sampling design from local to bioregional scales. We demonstrate that integrating conservation and management objectives with rigorous statistical

  14. A statistically rigorous sampling design to integrate avian monitoring and management within Bird Conservation Regions.

    Directory of Open Access Journals (Sweden)

    David C Pavlacky

    Full Text Available Monitoring is an essential component of wildlife management and conservation. However, the usefulness of monitoring data is often undermined by the lack of 1) coordination across organizations and regions, 2) meaningful management and conservation objectives, and 3) rigorous sampling designs. Although many improvements to avian monitoring have been discussed, the recommendations have been slow to emerge in large-scale programs. We introduce the Integrated Monitoring in Bird Conservation Regions (IMBCR) program designed to overcome the above limitations. Our objectives are to outline the development of a statistically defensible sampling design to increase the value of large-scale monitoring data and provide example applications to demonstrate the ability of the design to meet multiple conservation and management objectives. We outline the sampling process for the IMBCR program with a focus on the Badlands and Prairies Bird Conservation Region (BCR 17). We provide two examples for the Brewer's sparrow (Spizella breweri) in BCR 17 demonstrating the ability of the design to 1) determine hierarchical population responses to landscape change and 2) estimate hierarchical habitat relationships to predict the response of the Brewer's sparrow to conservation efforts at multiple spatial scales. The collaboration across organizations and regions provided economy of scale by leveraging a common data platform over large spatial scales to promote the efficient use of monitoring resources. We designed the IMBCR program to address the information needs and core conservation and management objectives of the participating partner organizations. Although it has been argued that probabilistic sampling designs are not practical for large-scale monitoring, the IMBCR program provides a precedent for implementing a statistically defensible sampling design from local to bioregional scales. We demonstrate that integrating conservation and management objectives with rigorous

  15. Design for mosquito abundance, diversity, and phenology sampling within the National Ecological Observatory Network

    Science.gov (United States)

    Hoekman, D.; Springer, Yuri P.; Barker, C.M.; Barrera, R.; Blackmore, M.S.; Bradshaw, W.E.; Foley, D. H.; Ginsberg, Howard; Hayden, M. H.; Holzapfel, C. M.; Juliano, S. A.; Kramer, L. D.; LaDeau, S. L.; Livdahl, T. P.; Moore, C. G.; Nasci, R.S.; Reisen, W.K.; Savage, H. M.

    2016-01-01

    The National Ecological Observatory Network (NEON) intends to monitor mosquito populations across its broad geographical range of sites because of their prevalence in food webs, sensitivity to abiotic factors and relevance for human health. We describe the design of mosquito population sampling in the context of NEON’s long term continental scale monitoring program, emphasizing the sampling design schedule, priorities and collection methods. Freely available NEON data and associated field and laboratory samples, will increase our understanding of how mosquito abundance, demography, diversity and phenology are responding to land use and climate change.

  16. Design tool for offshore wind farm clusters

    DEFF Research Database (Denmark)

    Hasager, Charlotte Bay; Giebel, Gregor; Waldl, Igor

    2015-01-01

    ...European Energy Research Alliance (EERA) and a number of industrial partners. The approach has been to develop a robust, efficient, easy to use and flexible tool, which integrates software relevant for planning offshore wind farms and wind farm clusters and supports the user with a clear optimization work flow. The software includes wind farm wake models, energy yield models, inter-array and long cable and grid component models, grid code compliance and ancillary services models. The common score used to evaluate and compare different layouts is the levelized cost of energy (LCoE). The integrated DTOC software is developed within the project using open interface standards and is now available as the commercial software product Wind&Economy.
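
    LCoE, the score used to compare layouts, is the ratio of discounted lifetime cost to discounted lifetime energy. A generic sketch of the calculation; the farm figures below are hypothetical and unrelated to the DTOC tool:

```python
def lcoe(capex, opex_per_year, energy_mwh_per_year, years, discount_rate):
    """Levelized cost of energy: discounted lifetime cost / discounted lifetime energy."""
    costs = capex + sum(opex_per_year / (1 + discount_rate) ** t
                        for t in range(1, years + 1))
    energy = sum(energy_mwh_per_year / (1 + discount_rate) ** t
                 for t in range(1, years + 1))
    return costs / energy  # currency units per MWh

# Hypothetical offshore farm: 300 M capex, 10 M/yr O&M, 450 GWh/yr, 25 years, 7% rate.
print(round(lcoe(300e6, 10e6, 450e3, 25, 0.07), 1))
```

Because every layout choice (wake losses, cable lengths, grid components) moves either the cost or the energy term, a single LCoE number lets the tool rank otherwise incomparable designs.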

  17. MULTI-K: accurate classification of microarray subtypes using ensemble k-means clustering

    Directory of Open Access Journals (Sweden)

    Ashlock Daniel

    2009-08-01

    Full Text Available Abstract Background Uncovering subtypes of disease from microarray samples has important clinical implications such as survival time and sensitivity of individual patients to specific therapies. Unsupervised clustering methods have been used to classify this type of data. However, most existing methods focus on clusters with compact shapes and do not reflect the geometric complexity of the high dimensional microarray clusters, which limits their performance. Results We present a cluster-number-based ensemble clustering algorithm, called MULTI-K, for microarray sample classification, which demonstrates remarkable accuracy. The method amalgamates multiple k-means runs by varying the number of clusters and identifies clusters that manifest the most robust co-memberships of elements. In addition to the original algorithm, we newly devised the entropy-plot to control the separation of singletons or small clusters. MULTI-K, unlike the simple k-means or other widely used methods, was able to capture clusters with complex and high-dimensional structures accurately. MULTI-K outperformed other methods including a recently developed ensemble clustering algorithm in tests with five simulated and eight real gene-expression data sets. Conclusion The geometric complexity of clusters should be taken into account for accurate classification of microarray data, and ensemble clustering applied to the number of clusters tackles the problem very well. The C++ code and the data sets tested are available from the authors.

  18. MULTI-K: accurate classification of microarray subtypes using ensemble k-means clustering.

    Science.gov (United States)

    Kim, Eun-Youn; Kim, Seon-Young; Ashlock, Daniel; Nam, Dougu

    2009-08-22

    Uncovering subtypes of disease from microarray samples has important clinical implications such as survival time and sensitivity of individual patients to specific therapies. Unsupervised clustering methods have been used to classify this type of data. However, most existing methods focus on clusters with compact shapes and do not reflect the geometric complexity of the high dimensional microarray clusters, which limits their performance. We present a cluster-number-based ensemble clustering algorithm, called MULTI-K, for microarray sample classification, which demonstrates remarkable accuracy. The method amalgamates multiple k-means runs by varying the number of clusters and identifies clusters that manifest the most robust co-memberships of elements. In addition to the original algorithm, we newly devised the entropy-plot to control the separation of singletons or small clusters. MULTI-K, unlike the simple k-means or other widely used methods, was able to capture clusters with complex and high-dimensional structures accurately. MULTI-K outperformed other methods including a recently developed ensemble clustering algorithm in tests with five simulated and eight real gene-expression data sets. The geometric complexity of clusters should be taken into account for accurate classification of microarray data, and ensemble clustering applied to the number of clusters tackles the problem very well. The C++ code and the data sets tested are available from the authors.
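
    The co-membership idea behind MULTI-K, amalgamating many k-means runs with varying k and clustering on how often samples land together, can be sketched as follows. This is a generic consensus-clustering illustration with synthetic data, not the authors' C++ implementation:

```python
import numpy as np
from sklearn.cluster import KMeans
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

rng = np.random.default_rng(0)
# Synthetic "expression" data: two well-separated groups of 20 samples.
X = np.vstack([rng.normal(0, 0.2, (20, 5)), rng.normal(2, 0.2, (20, 5))])

# Run k-means repeatedly while varying k, counting how often each pair
# of samples is assigned to the same cluster (co-membership matrix).
n = len(X)
co = np.zeros((n, n))
runs = 0
for k in range(2, 6):
    for seed in range(10):
        labels = KMeans(n_clusters=k, n_init=5, random_state=seed).fit_predict(X)
        co += (labels[:, None] == labels[None, :])
        runs += 1
co /= runs

# Cut a hierarchical tree built on consensus distances into 2 final groups.
dist = squareform(1 - co, checks=False)
consensus = fcluster(linkage(dist, method="average"), t=2, criterion="maxclust")
print(sorted(set(consensus)))
```

Pairs that co-occur across most runs, regardless of k, end up in the same consensus cluster, which is how the ensemble captures robust structure that a single k-means run can miss.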

  19. Charge exchange in galaxy clusters

    Science.gov (United States)

    Gu, Liyi; Mao, Junjie; de Plaa, Jelle; Raassen, A. J. J.; Shah, Chintan; Kaastra, Jelle S.

    2018-03-01

    Context. Though theoretically expected, the charge exchange emission from galaxy clusters has never been confidently detected. Accumulating hints were reported recently, including a rather marginal detection with the Hitomi data of the Perseus cluster. As previously suggested, a detection of charge exchange line emission from galaxy clusters would not only impact the interpretation of the newly discovered 3.5 keV line, but also open up a new research topic on the interaction between hot and cold matter in clusters. Aim. We aim to perform the most systematic search for the O VIII charge exchange line in cluster spectra using the RGS on board XMM-Newton. Methods: We introduce a sample of 21 clusters observed with the RGS. In order to search for O VIII charge exchange, the sample selection criterion is a >35σ detection of the O VIII Lyα line in the archival RGS spectra. The dominating thermal plasma emission is modeled and subtracted with a two-temperature thermal component, and the residuals are stacked for the line search. The systematic uncertainties in the fits are quantified by refitting the spectra with a varying continuum and line broadening. Results: By the residual stacking, we do find a hint of a line-like feature at 14.82 Å, the characteristic wavelength expected for oxygen charge exchange. This feature has a marginal significance of 2.8σ, and the average equivalent width is 2.5 × 10-4 keV. We further demonstrate that the putative feature can be barely affected by the systematic errors from continuum modeling and instrumental effects, or the atomic uncertainties of the neighboring thermal lines. Conclusions: Assuming a realistic temperature and abundance pattern, the physical model implied by the possible oxygen line agrees well with the theoretical model proposed previously to explain the reported 3.5 keV line. If the charge exchange source indeed exists, we expect that the oxygen abundance could have been overestimated by 8-22% in previous X

  20. Topology in two dimensions. II - The Abell and ACO cluster catalogues

    Science.gov (United States)

    Plionis, Manolis; Valdarnini, Riccardo; Coles, Peter

    1992-09-01

    We apply a method for quantifying the topology of projected galaxy clustering to the Abell and ACO catalogues of rich clusters. We use numerical simulations to quantify the statistical bias involved in using high peaks to define the large-scale structure, and we use the results obtained to correct our observational determinations for this known selection effect and also for possible errors introduced by boundary effects. We find that the Abell cluster sample is consistent with clusters being identified with high peaks of a Gaussian random field, but that the ACO shows a slight meatball shift away from the Gaussian behavior over and above that expected purely from the high-peak selection. The most conservative explanation of this effect is that it is caused by some artefact of the procedure used to select the clusters in the two samples.

  1. Evolution of the cluster X-ray luminosity function

    DEFF Research Database (Denmark)

    Mullis, C.R.; Vikhlinin, A.; Henry, J.P.

    2004-01-01

    We report measurements of the cluster X-ray luminosity function out to z = 0.8 based on the final sample of 201 galaxy systems from the 160 Square Degree ROSAT Cluster Survey. There is little evidence for any measurable change in cluster abundance out to z ∼ 0.6 at luminosities of less...... than a few times 10⁴⁴ h₅₀⁻² erg s⁻¹ (0.5-2.0 keV). However, for 0.6 cluster deficit using integrated number counts...... independently confirm the presence of evolution. Whereas the bulk of the cluster population does not evolve, the most luminous and presumably most massive structures evolve appreciably between z = 0.8 and the present. Interpreted in the context of hierarchical structure formation, we are probing sufficiently......

  2. THE EFFECTIVENESS OF USING CLUSTER CONNECTION TOWARDS STUDENTS’ VOCABULARY MASTERY AT THE EIGHTH GRADE OF MTs DARUL IHSAN DURI

    Directory of Open Access Journals (Sweden)

    Setiawati

    2017-12-01

    Full Text Available This study aimed to determine the effectiveness of using cluster connection towards students' vocabulary mastery in the eighth grade of MTs Darul Ihsan Duri. Related to the object of the research, the researcher used the experimental method. The design of the research was a control and experiment group, pretest-posttest design. The research was conducted at MTs Darul Ihsan Duri in the academic year 2015/2016. The population of this research was the eighth-grade students of MTs Darul Ihsan Duri, 62 students in total. The sample of the research was class VIII.A as the control class and class VIII.B as the experimental class; each class consisted of 31 students. In analyzing the research data, the researcher used an independent-samples t-test to determine whether the cluster connection technique is effective towards the students' vocabulary mastery. The result of the research showed that using the cluster connection technique was effective towards the students' vocabulary mastery. Based on the statistical calculation in the data analysis, the researcher interpreted the posttest scores in the experiment class and the control class. From the calculation, the t-test value was 2.627 and the t-table value was 2.00. Because the t-test value (2.627) was higher than the t-table value (2.00), the alternative hypothesis (Ha) was accepted and the null hypothesis (Ho) was rejected. This means that teaching vocabulary by using cluster connection is effective.
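The decision rule in the study above (compare a computed independent-samples t statistic against a critical t-table value) can be sketched as follows. The scores are hypothetical stand-ins, not the actual MTs Darul Ihsan data:

```python
import math
from statistics import mean, variance

def independent_t(a, b):
    """Pooled-variance independent-samples t statistic (equal variances assumed)."""
    na, nb = len(a), len(b)
    pooled = ((na - 1) * variance(a) + (nb - 1) * variance(b)) / (na + nb - 2)
    return (mean(a) - mean(b)) / math.sqrt(pooled * (1 / na + 1 / nb))

# Hypothetical posttest scores for illustration only.
experiment = [78, 85, 82, 88, 75, 90, 84, 80, 86, 83]
control    = [70, 72, 75, 68, 74, 71, 77, 69, 73, 70]

t_value = independent_t(experiment, control)

# As in the study: reject the null hypothesis when the computed t value
# exceeds the critical t-table value (2.00 at the study's degrees of freedom).
reject_null = t_value > 2.00
```

With these toy scores the t value clearly exceeds the critical value, so the null hypothesis would be rejected, mirroring the paper's reasoning.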

  3. Cluster-based global firms' use of local capabilities

    DEFF Research Database (Denmark)

    Andersen, Poul Houman; Bøllingtoft, Anne

    2011-01-01

    Purpose – Despite growing interest in clusters' role in the global competitiveness of firms, there has been little research into how globalization affects cluster-based firms’ (CBFs) use of local knowledge resources and the combination of local and global knowledge used. Using the cluster......’s knowledge base as a mediating variable, the purpose of this paper is to examine how globalization affected the studied firms’ use of local cluster-based knowledge, integration of local and global knowledge, and networking capabilities. Design/methodology/approach – Qualitative case studies of nine firms...... in three clusters strongly affected by increasing global division of labour. Findings – The paper suggests that globalization has affected how firms use local resources and combine local and global knowledge. Unexpectedly, clustered firms with explicit procedures and established global fora for exchanging...

  4. Brightest Cluster Galaxies in REXCESS Clusters

    Science.gov (United States)

    Haarsma, Deborah B.; Leisman, L.; Bruch, S.; Donahue, M.

    2009-01-01

    Most galaxy clusters contain a Brightest Cluster Galaxy (BCG) which is larger than the other cluster ellipticals and has a more extended profile. In the hierarchical model, the BCG forms through many galaxy mergers in the crowded center of the cluster, and thus its properties give insight into the assembly of the cluster as a whole. In this project, we are working with the Representative XMM-Newton Cluster Structure Survey (REXCESS) team (Boehringer et al 2007) to study BCGs in 33 X-ray luminous galaxy clusters, 0.055 < z < 0.183. We are imaging the BCGs in R band at the Southern Observatory for Astrophysical Research (SOAR) in Chile. In this poster, we discuss our methods and give preliminary measurements of the BCG magnitudes, morphology, and stellar mass. We compare these BCG properties with the properties of their host clusters, particularly of the X-ray emitting gas.

  5. REVISITING SCALING RELATIONS FOR GIANT RADIO HALOS IN GALAXY CLUSTERS

    Energy Technology Data Exchange (ETDEWEB)

    Cassano, R.; Brunetti, G.; Venturi, T.; Kale, R. [INAF/IRA, via Gobetti 101, I-40129 Bologna (Italy); Ettori, S. [INAF/Osservatorio Astronomico di Bologna, via Ranzani 1, I-40127 Bologna (Italy); Giacintucci, S. [Department of Astronomy, University of Maryland, College Park, MD 20742-2421 (United States); Pratt, G. W. [Laboratoire AIM, IRFU/Service dAstrophysique-CEA/DSM-CNRS-Université Paris Diderot, Bât. 709, CEA-Saclay, F-91191 Gif-sur-Yvette Cedex (France); Dolag, K. [University Observatory Munich, Scheinerstr. 1, D-81679 Munich (Germany); Markevitch, M. [Astrophysics Science Division, NASA/Goddard Space Flight Center, Greenbelt, MD 20771 (United States)

    2013-11-10

    Many galaxy clusters host megaparsec-scale radio halos, generated by ultrarelativistic electrons in the magnetized intracluster medium. Correlations between the synchrotron power of radio halos and the thermal properties of the hosting clusters were established in the last decade, including the connection between the presence of a halo and cluster mergers. The X-ray luminosity and redshift-limited Extended GMRT Radio Halo Survey provides a rich and unique dataset for statistical studies of the halos. We uniformly analyze the radio and X-ray data for the GMRT cluster sample, and use the new Planck Sunyaev-Zel'dovich (SZ) catalog to revisit the correlations between the power of radio halos and the thermal properties of galaxy clusters. We find that the radio power at 1.4 GHz scales with the cluster X-ray (0.1-2.4 keV) luminosity computed within R{sub 500} as P{sub 1.4}∼L{sup 2.1±0.2}{sub 500}. Our larger and more homogeneous sample confirms that the X-ray luminous (L{sub 500} > 5 × 10{sup 44} erg s{sup –1}) clusters branch into two populations: radio halos lie on the correlation, while clusters without radio halos have their radio upper limits well below that correlation. This bimodality remains if we excise cool cores from the X-ray luminosities. We also find that P{sub 1.4} scales with the cluster integrated SZ signal within R{sub 500}, measured by Planck, as P{sub 1.4}∼Y{sup 2.05±0.28}{sub 500}, in line with previous findings. However, contrary to previous studies that were limited by incompleteness and small sample size, we find that 'SZ-luminous' Y{sub 500} > 6 × 10{sup –5} Mpc{sup 2} clusters show a bimodal behavior for the presence of radio halos, similar to that in the radio-X-ray diagram. Bimodality of both correlations can be traced to cluster dynamics, with radio halos found exclusively in merging clusters. These results confirm the key role of mergers for the origin of giant radio halos, suggesting that they trigger the
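A scaling relation of the form P ∝ L^α, like the P–L correlation above, is conventionally estimated by a linear fit in log-log space, where the slope is the exponent. A minimal sketch with synthetic, noise-free data (not the GMRT sample; normalization is arbitrary):

```python
import math

# Synthetic luminosities and radio powers lying exactly on P = A * L**2.1.
L = [1e44, 2e44, 5e44, 1e45, 2e45]   # erg/s, illustrative values
P = [1e-70 * l ** 2.1 for l in L]    # arbitrary normalization A = 1e-70

# Ordinary least squares on (log10 L, log10 P); the slope recovers alpha.
x = [math.log10(v) for v in L]
y = [math.log10(v) for v in P]
mx, my = sum(x) / len(x), sum(y) / len(y)
slope = sum((a - mx) * (b - my) for a, b in zip(x, y)) / sum((a - mx) ** 2 for a in x)
```

On real data with scatter and upper limits, more careful regression methods (e.g. accounting for errors in both variables) are used, but the log-log slope is the quantity being quoted as 2.1 ± 0.2.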

  6. Three-Dimensional Scaffold Chip with Thermosensitive Coating for Capture and Reversible Release of Individual and Cluster of Circulating Tumor Cells.

    Science.gov (United States)

    Cheng, Shi-Bo; Xie, Min; Chen, Yan; Xiong, Jun; Liu, Ya; Chen, Zhen; Guo, Shan; Shu, Ying; Wang, Ming; Yuan, Bi-Feng; Dong, Wei-Guo; Huang, Wei-Hua

    2017-08-01

    Tumor metastasis is attributed to circulating tumor cells (CTCs) or CTC clusters. Many strategies have hitherto been designed to isolate CTCs, but few methods can capture and gently release CTC clusters as efficiently as single CTCs. Herein, we developed a three-dimensional (3D) scaffold chip with a thermosensitive coating for high-efficiency capture and release of individual CTCs and CTC clusters. The 3D scaffold chip combines specific recognition with the physical obstruction effect of the 3D scaffold structure to significantly improve cluster capture efficiency. The thermosensitive gelatin hydrogel uniformly coated on the scaffold dissolves quickly at 37 °C, and the captured cells are gently released from the chip with high viability. Notably, this platform was applied to isolate CTCs from cancer patients' blood samples, allowing global DNA and RNA methylation analysis of collected single CTCs and CTC clusters and indicating the great potential of this platform in cancer diagnosis and downstream analysis at the molecular level.

  7. Multi-saline sample distillation apparatus for hydrogen isotope analyses : design and accuracy

    Science.gov (United States)

    Hassan, Afifa Afifi

    1981-01-01

    A distillation apparatus for saline water samples was designed and tested. Six samples may be distilled simultaneously. The temperature was maintained at 400 C to ensure complete dehydration of the precipitating salts. Consequently, the error in the measured ratio of stable hydrogen isotopes resulting from incomplete dehydration of hydrated salts during distillation was eliminated. (USGS)

  8. Data clustering in C++ an object-oriented approach

    CERN Document Server

    Gan, Guojun

    2011-01-01

    Data clustering is a highly interdisciplinary field, the goal of which is to divide a set of objects into homogeneous groups such that objects in the same group are similar and objects in different groups are quite distinct. Thousands of theoretical papers and a number of books on data clustering have been published over the past 50 years. However, few books exist to teach people how to implement data clustering algorithms. This book was written for anyone who wants to implement or improve their data clustering algorithms. Using object-oriented design and programming techniques, Data Clusterin
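As a minimal illustration of the clustering task described above (this is not code from the book, which is written in C++; Python is used here for brevity), a tiny one-dimensional k-means partitions points into homogeneous groups:

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Naive 1-D k-means: alternate nearest-center assignment and centroid update."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)      # initialize centers from the data
    clusters = []
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:                 # assign each point to its nearest center
            clusters[min(range(k), key=lambda i: abs(p - centers[i]))].append(p)
        centers = [sum(c) / len(c) if c else centers[i]   # recompute centroids
                   for i, c in enumerate(clusters)]
    return centers, clusters

# Two well-separated groups; k-means recovers centroids near 1.0 and 8.03.
centers, clusters = kmeans([1.0, 1.2, 0.8, 8.0, 8.2, 7.9], k=2)
```

An object-oriented C++ implementation, as the book advocates, would wrap the same assign/update loop in classes for datasets, distance measures, and algorithms.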

  9. Intra Cluster Light properties in the CLASH-VLT cluster MACS J1206.2-0847

    CERN Document Server

    Presotto, V; Nonino, M; Mercurio, A; Grillo, C; Rosati, P; Biviano, A; Annunziatella, M; Balestra, I; Cui, W; Sartoris, B; Lemze, D; Ascaso, B; Moustakas, J; Ford, H; Fritz, A; Czoske, O; Ettori, S; Kuchner, U; Lombardi, M; Maier, C; Medezinski, E; Molino, A; Scodeggio, M; Strazzullo, V; Tozzi, P; Ziegler, B; Bartelmann, M; Benitez, N; Bradley, L; Brescia, M; Broadhurst, T; Coe, D; Donahue, M; Gobat, R; Graves, G; Kelson, D; Koekemoer, A; Melchior, P; Meneghetti, M; Merten, J; Moustakas, L; Munari, E; Postman, M; Regős, E; Seitz, S; Umetsu, K; Zheng, W; Zitrin, A

    2014-01-01

    We aim at constraining the assembly history of clusters by studying the intra cluster light (ICL) properties, estimating its contribution to the fraction of baryons in stars, f*, and understanding possible systematics/bias using different ICL detection techniques. We developed an automated method, GALtoICL, based on the software GALAPAGOS to obtain a refined version of typical BCG+ICL maps. We applied this method to our test case MACS J1206.2-0847, a massive cluster located at z=0.44, that is part of the CLASH sample. Using deep multi-band SUBARU images, we extracted the surface brightness (SB) profile of the BCG+ICL and we studied the ICL morphology, color, and contribution to f* out to R500. We repeated the same analysis using a different definition of the ICL, SBlimit method, i.e. a SB cut-off level, to compare the results. The most peculiar feature of the ICL in MACS1206 is its asymmetric radial distribution, with an excess in the SE direction and extending towards the 2nd brightest cluster galaxy which i...

  10. THE DETECTION AND STATISTICS OF GIANT ARCS BEHIND CLASH CLUSTERS

    Energy Technology Data Exchange (ETDEWEB)

    Xu, Bingxiao; Zheng, Wei [Department of Physics and Astronomy, The Johns Hopkins University, 3400 North Charles Street, Baltimore, MD 21218 (United States); Postman, Marc; Bradley, Larry [Space Telescope Science Institute, 3700 San Martin Drive, Baltimore, MD 21208 (United States); Meneghetti, Massimo; Koekemoer, Anton [INAF, Osservatorio Astronomico di Bologna, and INFN, Sezione di Bologna, Via Ranzani 1, I-40127 Bologna (Italy); Seitz, Stella [Universitaets-Sternwarte, Fakultaet fuer Physik, Ludwig-Maximilians Universitaet Muenchen, Scheinerstr. 1, D-81679 Muenchen (Germany); Zitrin, Adi [California Institute of Technology, MC 249-17, Pasadena, CA 91125 (United States); Merten, Julian [University of Oxford, Department of Physics, Denys Wilkinson Building, Keble Road, Oxford, OX1 3RH (United Kingdom); Maoz, Dani [School of Physics and Astronomy, Tel Aviv University, Tel-Aviv 69978 (Israel); Frye, Brenda [Steward Observatory/Department of Astronomy, University of Arizona, 933 N. Cherry Ave., Tucson, AZ 85721 (United States); Umetsu, Keiichi [Institute of Astronomy and Astrophysics, Academia Sinica, P.O. Box 23-141, Taipei 10617, Taiwan (China); Vega, Jesus, E-mail: bxu6@jhu.edu [Universidad Autonoma de Madrid, Ciudad Universitaria de Cantoblanco, E-28049 Madrid (Spain)

    2016-02-01

    We developed an algorithm to find and characterize gravitationally lensed galaxies (arcs) to perform a comparison of the observed and simulated arc abundance. Observations are from the Cluster Lensing And Supernova survey with Hubble (CLASH). Simulated CLASH images are created using the MOKA package and also clusters selected from the high-resolution, hydrodynamical simulations, MUSIC, over the same mass and redshift range as the CLASH sample. The algorithm's arc elongation accuracy, completeness, and false positive rate are determined and used to compute an estimate of the true arc abundance. We derive a lensing efficiency of 4 ± 1 arcs (with length ≥6″ and length-to-width ratio ≥7) per cluster for the X-ray-selected CLASH sample, 4 ± 1 arcs per cluster for the MOKA-simulated sample, and 3 ± 1 arcs per cluster for the MUSIC-simulated sample. The observed and simulated arc statistics are in full agreement. We measure the photometric redshifts of all detected arcs and find a median redshift z{sub s} = 1.9 with 33% of the detected arcs having z{sub s} > 3. We find that the arc abundance does not depend strongly on the source redshift distribution but is sensitive to the mass distribution of the dark matter halos (e.g., the c–M relation). Our results show that consistency between the observed and simulated distributions of lensed arc sizes and axial ratios can be achieved by using cluster-lensing simulations that are carefully matched to the selection criteria used in the observations.

  11. The reflection of hierarchical cluster analysis of co-occurrence matrices in SPSS

    NARCIS (Netherlands)

    Zhou, Q.; Leng, F.; Leydesdorff, L.

    2015-01-01

    Purpose: To discuss the problems arising from hierarchical cluster analysis of co-occurrence matrices in SPSS, and the corresponding solutions. Design/methodology/approach: We design different methods of using the SPSS hierarchical clustering module for co-occurrence matrices in order to compare
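The central pitfall the paper addresses is feeding a similarity (co-occurrence) matrix to a routine that expects distances. A hedged, illustrative sketch (plain Python, not SPSS syntax; the matrix values are invented): convert similarities to distances first, then agglomerate.

```python
# A symmetric co-occurrence matrix is a *similarity* matrix; hierarchical
# clustering routines generally expect *distances*, so convert first.
cooc = [
    [5, 4, 0],
    [4, 5, 1],
    [0, 1, 5],
]
n = len(cooc)
off_diag_max = max(cooc[i][j] for i in range(n) for j in range(i + 1, n))
dist = {(i, j): off_diag_max - cooc[i][j] for i in range(n) for j in range(i + 1, n)}

def d(i, j):
    return 0 if i == j else dist[(min(i, j), max(i, j))]

# Naive single-linkage agglomeration: repeatedly merge the two closest clusters.
clusters = [{i} for i in range(n)]
merge_order = []
while len(clusters) > 1:
    a, b = min(
        ((x, y) for x in range(len(clusters)) for y in range(x + 1, len(clusters))),
        key=lambda p: min(d(i, j) for i in clusters[p[0]] for j in clusters[p[1]]),
    )
    merge_order.append(sorted(clusters[a] | clusters[b]))
    clusters = [c for k, c in enumerate(clusters) if k not in (a, b)] + [clusters[a] | clusters[b]]
```

Items 0 and 1, which co-occur most often, merge first; treating the raw co-occurrence counts as distances would invert that ordering.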

  12. The Swift/BAT AGN Spectroscopic Survey. IX. The Clustering Environments of an Unbiased Sample of Local AGNs

    Science.gov (United States)

    Powell, M. C.; Cappelluti, N.; Urry, C. M.; Koss, M.; Finoguenov, A.; Ricci, C.; Trakhtenbrot, B.; Allevato, V.; Ajello, M.; Oh, K.; Schawinski, K.; Secrest, N.

    2018-05-01

    We characterize the environments of local accreting supermassive black holes by measuring the clustering of AGNs in the Swift/BAT Spectroscopic Survey (BASS). With 548 AGN in the redshift range 0.01 2MASS galaxies, and interpreting it via halo occupation distribution and subhalo-based models, we constrain the occupation statistics of the full sample, as well as in bins of absorbing column density and black hole mass. We find that AGNs tend to reside in galaxy group environments, in agreement with previous studies of AGNs throughout a large range of luminosity and redshift, and that on average they occupy their dark matter halos similar to inactive galaxies of comparable stellar mass. We also find evidence that obscured AGNs tend to reside in denser environments than unobscured AGNs, even when samples were matched in luminosity, redshift, stellar mass, and Eddington ratio. We show that this can be explained either by significantly different halo occupation distributions or statistically different host halo assembly histories. Lastly, we see that massive black holes are slightly more likely to reside in central galaxies than black holes of smaller mass.

  13. STAR FORMATION AND RELAXATION IN 379 NEARBY GALAXY CLUSTERS

    International Nuclear Information System (INIS)

    Cohen, Seth A.; Hickox, Ryan C.; Wegner, Gary A.

    2015-01-01

    We investigate the relationship between star formation (SF) and level of relaxation in a sample of 379 galaxy clusters at z < 0.2. We use data from the Sloan Digital Sky Survey to measure cluster membership and level of relaxation, and to select star-forming galaxies based on mid-infrared emission detected with the Wide-Field Infrared Survey Explorer. For galaxies with absolute magnitudes M r < −19.5, we find an inverse correlation between SF fraction and cluster relaxation: as a cluster becomes less relaxed, its SF fraction increases. Furthermore, in general, the subtracted SF fraction in all unrelaxed clusters (0.117 ± 0.003) is higher than that in all relaxed clusters (0.097 ± 0.005). We verify the validity of our SF calculation methods and membership criteria through analysis of previous work. Our results agree with previous findings that a weak correlation exists between cluster SF and dynamical state, possibly because unrelaxed clusters are less evolved relative to relaxed clusters

  14. NanoClusters Enhance Drug Delivery in Mechanical Ventilation

    Science.gov (United States)

    Pornputtapitak, Warangkana

    The overall goal of this thesis was to develop a dry powder delivery system for patients on mechanical ventilation. The studies were divided into two parts: the formulation development and the device design. The pulmonary system is an attractive route for drug delivery since the lungs have a large accessible surface area for treatment or drug absorption. For ventilated patients, inhaled drugs have to successfully navigate ventilator tubing and an endotracheal tube. Agglomerates of drug nanoparticles (also known as 'NanoClusters') are fine dry powder aerosols that were hypothesized to enable drug delivery through ventilator circuits. This Thesis systematically investigated formulations of NanoClusters and their aerosol performance in a conventional inhaler and a device designed for use during mechanical ventilation. These engineered powders of budesonide (NC-Bud) were delivered via a MonodoseRTM inhaler or a novel device through commercial endotracheal tubes, and analyzed by cascade impaction. NC-Bud had a higher efficiency of aerosol delivery compared to micronized stock budesonide. The delivery efficiency was independent of ventilator parameters such as inspiration patterns, inspiration volumes, and inspiration flow rates. A novel device designed to fit directly to the ventilator and endotracheal tubing connections and the MonodoseRTM inhaler showed the same efficiency of drug delivery. The new device combined with NanoCluster formulation technology, therefore, allowed convenient and efficient drug delivery through endotracheal tubes. Furthermore, itraconazole (ITZ), a triazole antifungal agent, was formulated as a NanoCluster powder via milling (top-down process) or precipitation (bottom-up process) without using any excipients. ITZ NanoClusters prepared by wet milling showed better aerosol performance compared to micronized stock ITZ and ITZ NanoClusters prepared by precipitation. 
ITZ NanoClusters prepared by precipitation methods also showed an amorphous state

  15. Design and implementation of streaming media server cluster based on FFMpeg.

    Science.gov (United States)

    Zhao, Hong; Zhou, Chun-long; Jin, Bao-zhao

    2015-01-01

    Poor performance and network congestion are commonly observed in single-server streaming media systems. This paper proposes a scheme to construct a streaming media server cluster system based on FFMpeg. In this scheme, different users are distributed to different servers according to their locations, and the balance among servers is maintained by a dynamic load-balancing algorithm based on active feedback. Furthermore, a service redirection algorithm is proposed to improve the transmission efficiency of streaming media data. The experimental results show that the server cluster system significantly alleviates network congestion and improves performance in comparison with the single-server system.
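A least-loaded redirection policy of the kind described can be sketched as follows. The abstract does not give the algorithm's internals, so the class and method names below are hypothetical illustrations of "active feedback" plus "service redirection":

```python
class FeedbackBalancer:
    """Toy dispatcher: servers push load reports; new clients go to the least loaded."""

    def __init__(self, servers):
        self.load = {s: 0.0 for s in servers}

    def report(self, server, load):
        # "Active feedback": each media server periodically reports its own load.
        self.load[server] = load

    def redirect(self):
        # Service redirection: send the next client to the least-loaded server.
        return min(self.load, key=self.load.get)

lb = FeedbackBalancer(["media-1", "media-2", "media-3"])
lb.report("media-1", 0.8)
lb.report("media-2", 0.3)
lb.report("media-3", 0.6)
```

After the reports above, the next client would be redirected to "media-2". A production system would also weight by location, as the paper's scheme does when assigning users to servers.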

  16. Design and Implementation of Streaming Media Server Cluster Based on FFMpeg

    Science.gov (United States)

    Zhao, Hong; Zhou, Chun-long; Jin, Bao-zhao

    2015-01-01

    Poor performance and network congestion are commonly observed in single-server streaming media systems. This paper proposes a scheme to construct a streaming media server cluster system based on FFMpeg. In this scheme, different users are distributed to different servers according to their locations, and the balance among servers is maintained by a dynamic load-balancing algorithm based on active feedback. Furthermore, a service redirection algorithm is proposed to improve the transmission efficiency of streaming media data. The experimental results show that the server cluster system significantly alleviates network congestion and improves performance in comparison with the single-server system. PMID:25734187

  17. Note: Design and development of wireless controlled aerosol sampling network for large scale aerosol dispersion experiments

    International Nuclear Information System (INIS)

    Gopalakrishnan, V.; Subramanian, V.; Baskaran, R.; Venkatraman, B.

    2015-01-01

    A wireless-based, custom-built aerosol sampling network is designed, developed, and implemented for environmental aerosol sampling. These aerosol sampling systems are used in a field measurement campaign in which sodium aerosol dispersion experiments have been conducted as part of environmental impact studies related to the sodium-cooled fast reactor. The sampling network contains 40 aerosol sampling units, each with a custom-built sampling head and wireless control networking designed with a Programmable System on Chip (PSoC™) and Xbee Pro RF modules. The base station control is designed using the graphical programming language LabVIEW. The sampling network is programmed to operate at a preset time, and the running status of the samplers in the network is visualized from the base station. The system is developed in such a way that it can be used for any other environmental sampling system deployed over a wide area and uneven terrain, where manual operation is difficult due to the requirement of simultaneous operation and status logging.

  18. Note: Design and development of wireless controlled aerosol sampling network for large scale aerosol dispersion experiments

    Energy Technology Data Exchange (ETDEWEB)

    Gopalakrishnan, V.; Subramanian, V.; Baskaran, R.; Venkatraman, B. [Radiation Impact Assessment Section, Radiological Safety Division, Indira Gandhi Centre for Atomic Research, Kalpakkam 603 102 (India)

    2015-07-15

    A wireless-based, custom-built aerosol sampling network is designed, developed, and implemented for environmental aerosol sampling. These aerosol sampling systems are used in a field measurement campaign in which sodium aerosol dispersion experiments have been conducted as part of environmental impact studies related to the sodium-cooled fast reactor. The sampling network contains 40 aerosol sampling units, each with a custom-built sampling head and wireless control networking designed with a Programmable System on Chip (PSoC™) and Xbee Pro RF modules. The base station control is designed using the graphical programming language LabVIEW. The sampling network is programmed to operate at a preset time, and the running status of the samplers in the network is visualized from the base station. The system is developed in such a way that it can be used for any other environmental sampling system deployed over a wide area and uneven terrain, where manual operation is difficult due to the requirement of simultaneous operation and status logging.

  19. Clustering Vehicle Temporal and Spatial Travel Behavior Using License Plate Recognition Data

    OpenAIRE

    Huiyu Chen; Chao Yang; Xiangdong Xu

    2017-01-01

    Understanding the travel patterns of vehicles can support the planning and design of better services. In addition, vehicle clustering can improve management efficiency through more targeted access to groups of interest and facilitate planning through more specific survey design. This paper clustered 854,712 vehicles over one week using the K-means clustering algorithm, based on license plate recognition (LPR) data obtained in Shenzhen, China. Firstly, several travel characteristics related to temporal and spati...
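Before K-means can run, each vehicle's LPR detections must be condensed into a feature vector. A hedged sketch of that preprocessing step (the record fields and the two features below are hypothetical simplifications; the paper's temporal-spatial features are richer):

```python
from collections import defaultdict

# Hypothetical LPR records: (plate, day_of_week, hour_of_detection).
records = [
    ("A123", 0, 8), ("A123", 1, 8), ("A123", 2, 9),
    ("B456", 0, 22), ("B456", 3, 23),
]

by_plate = defaultdict(list)
for plate, day, hour in records:
    by_plate[plate].append((day, hour))

# Two simple temporal features per vehicle; a real study would add spatial
# features (e.g. camera locations) before running K-means on these vectors.
features = {
    plate: (
        len({day for day, _ in obs}) / 7.0,       # fraction of active days
        sum(hour for _, hour in obs) / len(obs),  # mean detection hour
    )
    for plate, obs in by_plate.items()
}
```

Here the two toy vehicles separate cleanly into a frequent daytime commuter and an occasional night traveler, which is exactly the kind of behavioral grouping K-means would then formalize.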

  20. Shielding design of highly activated sample storage at reactor TRIGA PUSPATI

    International Nuclear Information System (INIS)

    Naim Syauqi Hamzah; Julia Abdul Karim; Mohamad Hairie Rabir; Muhd Husamuddin Abdul Khalil; Mohd Amin Sharifuldin Salleh

    2010-01-01

    Radiation protection has always been one of the most important considerations in Reaktor TRIGA PUSPATI (RTP) management. Currently, demand for sample activation has increased from a variety of applicants in different research areas. A radiological hazard may occur if sample evaluations are misjudged or miscalculated. At present, there is no appropriate storage for highly activated samples. For that purpose, a special irradiated-sample storage box should be provided in order to segregate highly activated samples that produce high dose levels from typical activated samples that produce lower dose levels (1-2 mR/hr). In this study, the thicknesses required by common shielding materials such as lead and concrete to reduce a highly activated radiotracer sample (potassium bromide) with an initial exposure dose of 5 R/hr to background level (0.05 mR/hr) were determined. Analyses were done using several methods, including the conventional shielding equation, half-value-layer calculation, and the MicroShield computer code. A design for a new irradiated-sample storage box for RTP capable of containing high-level gamma radioactivity is then proposed. (author)
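The half-value-layer method mentioned above reduces to counting how many halvings separate the initial and target dose rates. A sketch using the dose rates from the text; the HVL value for the shield material is an assumed illustrative number (it depends on the gamma energy and material), not the study's:

```python
import math

initial_mr_hr = 5000.0   # initial exposure rate: 5 R/hr expressed in mR/hr
target_mr_hr = 0.05      # background level quoted in the text, mR/hr
hvl_cm = 1.2             # ASSUMED half-value layer of the shield, cm (illustrative)

# Each HVL halves the dose rate, so the required count is log2 of the ratio.
n_halvings = math.log2(initial_mr_hr / target_mr_hr)
thickness_cm = n_halvings * hvl_cm
```

Reducing the dose rate by a factor of 10⁵ requires about 16.6 half-value layers, independent of material; the material only sets how many centimeters each halving costs.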

  1. M31 GLOBULAR CLUSTER STRUCTURES AND THE PRESENCE OF X-RAY BINARIES

    International Nuclear Information System (INIS)

    Agar, J. R. R.; Barmby, P.

    2013-01-01

    The Andromeda galaxy, M31, has several times the number of globular clusters found in the Milky Way. It contains a correspondingly larger number of low-mass X-ray binaries (LMXBs) associated with globular clusters, and as such can be used to investigate the cluster properties that lead to X-ray binary formation. The best tracer of the spatial structure of M31 globulars is the high-resolution imaging available from the Hubble Space Telescope (HST), and we have used HST data to derive structural parameters for 29 LMXB-hosting M31 globular clusters. These measurements are combined with structural parameters from the literature for a total of 41 (of 50 known) LMXB clusters and a comparison sample of 65 non-LMXB clusters. Structural parameters measured in blue bandpasses are found to be slightly different (smaller core radii and higher concentrations) than those measured in red bandpasses; this difference is enhanced in LMXB clusters and could be related to stellar population differences. Clusters with LMXBs show higher collision rates for their mass compared to clusters without LMXBs, and collision rates estimated at the core radius show larger offsets than rates estimated at the half-light radius. These results are consistent with the dynamical formation scenario for LMXBs. A logistic regression analysis finds that, as expected, the probability of a cluster hosting an LMXB increases with increasing collision rate and proximity to the galaxy center. The same analysis finds that the probability of a cluster hosting an LMXB decreases with increasing cluster mass at a fixed collision rate, although we caution that this could be due to sample selection effects. Metallicity is found to be a less important predictor of LMXB probability than collision rate, mass, or distance, even though LMXB clusters have a higher metallicity on average. This may be due to the interaction of location and metallicity: a sample of M31 LMXBs with a greater range in galactocentric distance would
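The logistic regression used above models a binary outcome (cluster hosts an LMXB or not) as a function of predictors like collision rate. A minimal from-scratch sketch on synthetic data (not the M31 measurements), with one predictor standing in for collision rate:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit_logistic(xs, ys, lr=0.5, steps=2000):
    """Plain gradient descent on the logistic log-loss (one feature plus bias)."""
    w = b = 0.0
    for _ in range(steps):
        gw = gb = 0.0
        for x, y in zip(xs, ys):
            err = sigmoid(w * x + b) - y   # prediction minus label
            gw += err * x
            gb += err
        w -= lr * gw / len(xs)
        b -= lr * gb / len(xs)
    return w, b

# Synthetic stand-in: higher "collision rate" -> more likely to host an LMXB.
rates = [0.1, 0.3, 0.5, 1.2, 1.5, 2.0]
hosts = [0, 0, 0, 1, 1, 1]
w, b = fit_logistic(rates, hosts)
```

A positive fitted coefficient w corresponds to the paper's finding that hosting probability rises with collision rate; the real analysis fits several predictors (collision rate, mass, distance, metallicity) jointly.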

  2. Molecular Clusters: Nanoscale Building Blocks for Solid-State Materials.

    Science.gov (United States)

    Pinkard, Andrew; Champsaur, Anouck M; Roy, Xavier

    2018-04-17

    The programmed assembly of nanoscale building blocks into multicomponent hierarchical structures is a powerful strategy for the bottom-up construction of functional materials. To develop this concept, our team has explored the use of molecular clusters as superatomic building blocks to fabricate new classes of materials. The library of molecular clusters is rich with exciting properties, including diverse functionalization, redox activity, and magnetic ordering, so the resulting cluster-assembled solids, which we term superatomic crystals (SACs), hold the promise of high tunability, atomic precision, and robust architectures among a diverse range of other material properties. Molecular clusters have only seldom been used as precursors for functional materials. Our team has been at the forefront of new developments in this exciting research area, and this Account focuses on our progress toward designing materials from cluster-based precursors. In particular, this Account discusses (1) the design and synthesis of molecular cluster superatomic building blocks, (2) their self-assembly into SACs, and (3) their resulting collective properties. The set of molecular clusters discussed herein is diverse, with different cluster cores and ligand arrangements to create an impressive array of solids. The cluster cores include octahedral M₆E₈ and cubane M₄E₄ (M = metal; E = chalcogen), which are typically passivated by a shell of supporting ligands, a feature upon which we have expanded by designing and synthesizing more exotic ligands that can be used to direct solid-state assembly. Building from this library, we have designed whole families of binary SACs where the building blocks are held together through electrostatic, covalent, or van der Waals interactions. Using single-crystal X-ray diffraction (SCXRD) to determine the atomic structure, a remarkable range of compositional variability is accessible. We can also use this technique, in tandem with vibrational

  3. Metal cluster compounds - chemistry and importance; clusters containing isolated main group element atoms, large metal cluster compounds, cluster fluxionality

    International Nuclear Information System (INIS)

    Walther, B.

    1988-01-01

    This part of the review on metal cluster compounds deals with clusters containing isolated main group element atoms, with high nuclearity clusters and with metal cluster fluxionality. It will be obvious that main group element atoms strongly influence the geometry, stability and reactivity of the clusters. High nuclearity clusters are of interest in their own right due to the diversity of the structures adopted, but their intermediate position between molecules and the metallic state makes them a fascinating research object too. Both these sides of metal cluster chemistry, as well as the frequently observed ligand and core fluxionality, are related to the cluster-metal surface analogy. (author)

  4. The x-ray luminous galaxy cluster population at 0.9 < z ≲ 1.6 as revealed by the XMM-Newton Distant Cluster Project

    International Nuclear Information System (INIS)

    Fassbender, R; Böhringer, H; Nastasi, A; Šuhada, R; Mühlegger, M; Mohr, J J; Pierini, D; De Hoon, A; Kohnert, J; Lamer, G; Schwope, A D; Pratt, G W; Quintana, H; Rosati, P; Santos, J S

    2011-01-01

    We present the largest sample to date of spectroscopically confirmed x-ray luminous high-redshift galaxy clusters, comprising 22 systems in the range 0.9 < z ≲ 1.6, selected from non-contiguous deep archival XMM-Newton coverage, of which 49.4 deg² are part of the core survey with a quantifiable selection function and 17.7 deg² are classified as 'gold' coverage as the starting point for upcoming cosmological applications. Distant cluster candidates were followed up with moderately deep optical and near-infrared imaging in at least two bands to photometrically identify the cluster galaxy populations and obtain redshift estimates based on the colors of simple stellar population models. We test and calibrate the most promising redshift estimation techniques based on the R-z and z-H colors for efficient distant cluster identifications and find a good redshift accuracy performance of the z-H color out to at least z ∼ 1.5, while the redshift evolution of the R-z color leads to increasingly large uncertainties at z ≳ 0.9. Photometrically identified high-z systems are spectroscopically confirmed with VLT/FORS2 with a minimum of three concordant cluster member redshifts. We present first details of two newly identified clusters, XDCP J0338.5+0029 at z = 0.916 and XDCP J0027.2+1714 at z = 0.959, and investigate the x-ray properties of SpARCS J003550-431224 at z = 1.335, which shows evidence for ongoing major merger activity along the line of sight. We provide x-ray properties and luminosity-based total mass estimates for the full sample of 22 high-z clusters, of which 17 are at z ⩾ 1.0 and seven populate the highest redshift bin at z > 1.3. The median system mass of the sample is M200 ≃ 2 × 10¹⁴ M⊙, while the probed mass range for the distant clusters spans approximately (0.7-7) × 10¹⁴ M⊙. The majority (>70%) of the x-ray selected clusters show rather regular x-ray morphologies, albeit in most cases with a discernible elongation along one axis. In contrast to

  5. QCS: a system for querying, clustering and summarizing documents.

    Energy Technology Data Exchange (ETDEWEB)

    Dunlavy, Daniel M.; Schlesinger, Judith D. (Center for Computing Sciences, Bowie, MD); O' Leary, Dianne P. (University of Maryland, College Park, MD); Conroy, John M. (Center for Computing Sciences, Bowie, MD)

    2006-10-01

    Information retrieval systems consist of many complicated components. Research and development of such systems is often hampered by the difficulty in evaluating how each particular component would behave across multiple systems. We present a novel hybrid information retrieval system--the Query, Cluster, Summarize (QCS) system--which is portable, modular, and permits experimentation with different instantiations of each of the constituent text analysis components. Most importantly, the combination of the three types of components in the QCS design improves retrievals by providing users with more focused information organized by topic. We demonstrate the improved performance by a series of experiments using standard test sets from the Document Understanding Conferences (DUC) along with the best known automatic metric for summarization system evaluation, ROUGE. Although the DUC data and evaluations were originally designed to test multidocument summarization, we developed a framework to extend them to the task of evaluation for each of the three components: query, clustering, and summarization. Under this framework, we then demonstrate that the QCS system (end-to-end) achieves performance as good as or better than the best summarization engines. Given a query, QCS retrieves relevant documents, separates the retrieved documents into topic clusters, and creates a single summary for each cluster. In the current implementation, Latent Semantic Indexing is used for retrieval, generalized spherical k-means is used for the document clustering, and a method coupling sentence 'trimming' and a hidden Markov model, followed by a pivoted QR decomposition, is used to create a single extract summary for each cluster. The user interface is designed to provide access to detailed information in a compact and useful format. Our system demonstrates the feasibility of assembling an effective IR system from existing software libraries, the usefulness of the modularity of the design
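The generalized spherical k-means step used for the document clustering can be sketched in a minimal form. The toy term-frequency vectors and the naive first-k initialization below are illustrative assumptions, not the QCS implementation:

```python
def normalize(v):
    norm = sum(x * x for x in v) ** 0.5
    return [x / norm for x in v]

def spherical_kmeans(docs, k, iters=10):
    """Spherical k-means: vectors live on the unit sphere, assignment uses
    cosine similarity (a dot product of unit vectors), and each centroid is
    re-normalized after every update.  Naive init: the first k documents."""
    docs = [normalize(d) for d in docs]
    centroids = [list(d) for d in docs[:k]]
    labels = [0] * len(docs)
    for _ in range(iters):
        # Assign each document to the centroid with highest cosine similarity.
        labels = [max(range(k),
                      key=lambda c: sum(a * b for a, b in zip(d, centroids[c])))
                  for d in docs]
        # Recompute each centroid as the normalized sum of its members.
        for c in range(k):
            members = [d for d, lab in zip(docs, labels) if lab == c]
            if members:
                centroids[c] = normalize([sum(col) for col in zip(*members)])
    return labels

# Toy term-frequency vectors: two documents per topic.
docs = [[1.0, 0.1, 0.0], [0.9, 0.2, 0.0],   # topic A
        [0.0, 0.1, 1.0], [0.0, 0.2, 0.9]]   # topic B
print(spherical_kmeans(docs, 2))  # [0, 0, 1, 1]
```

A production system would add random restarts and convergence checks; the normalization step is what distinguishes spherical k-means from ordinary k-means on raw document vectors.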

  7. Searching for the 3.5 keV Line in the Stacked Suzaku Observations of Galaxy Clusters

    Science.gov (United States)

    Bulbul, Esra; Markevitch, Maxim; Foster, Adam; Miller, Eric; Bautz, Mark; Lowenstein, Mike; Randall, Scott W.; Smith, Randall K.

    2016-01-01

    We perform a detailed study of the stacked Suzaku observations of 47 galaxy clusters, spanning a redshift range of 0.01-0.45, to search for the unidentified 3.5 keV line. This sample provides an independent test for the previously detected line. We detect a 2σ-significant spectral feature at 3.5 keV in the spectrum of the full sample. When the sample is divided into two subsamples (cool-core and non-cool-core clusters), the cool-core subsample shows no statistically significant positive residuals at the line energy. A very weak (approx. 2σ confidence) spectral feature at 3.5 keV is permitted by the data from the non-cool-core cluster sample. The upper limit on a neutrino decay mixing angle of sin²(2θ) = 6.1 × 10⁻¹¹ from the full Suzaku sample is consistent with the previous detections in the stacked XMM-Newton sample of galaxy clusters (which had a higher statistical sensitivity to faint lines), M31, and the Galactic center, at a 90% confidence level. However, the constraint from the present sample, which does not include the Perseus cluster, is in tension with the line flux previously observed in the core of the Perseus cluster with XMM-Newton and Suzaku.

  8. Sampling Methods in Cardiovascular Nursing Research: An Overview.

    Science.gov (United States)

    Kandola, Damanpreet; Banner, Davina; O'Keefe-McCarthy, Sheila; Jassal, Debbie

    2014-01-01

    Cardiovascular nursing research covers a wide array of topics from health services to psychosocial patient experiences. The selection of specific participant samples is an important part of the research design and process. The sampling strategy employed is of utmost importance to ensure that a representative sample of participants is chosen. There are two main categories of sampling methods: probability and non-probability. Probability sampling is the random selection of elements from the population, where each element of the population has an equal and independent chance of being included in the sample. There are five main types of probability sampling: simple random sampling, systematic sampling, stratified sampling, cluster sampling, and multi-stage sampling. Non-probability sampling methods are those in which elements are chosen through non-random methods for inclusion in the research study, and they include convenience sampling, purposive sampling, and snowball sampling. Each approach offers distinct advantages and disadvantages and must be considered critically. In this research column, we provide an introduction to these key sampling techniques and draw on examples from cardiovascular research. Understanding the differences in sampling techniques may aid nurses in effective appraisal of research literature and provide a reference point for nurses who engage in cardiovascular research.
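The contrast between simple random sampling and cluster sampling described above can be illustrated with a short sketch. The clinic/patient data are hypothetical and purely illustrative:

```python
import random

def simple_random_sample(population, n, seed=0):
    """Simple random sampling: every element has an equal and independent
    chance of being selected."""
    rng = random.Random(seed)
    return rng.sample(population, n)

def cluster_sample(clusters, n_clusters, seed=0):
    """Cluster sampling: randomly select whole clusters, then include every
    element within the chosen clusters."""
    rng = random.Random(seed)
    chosen = rng.sample(list(clusters), n_clusters)  # pick cluster keys
    return [elem for key in chosen for elem in clusters[key]]

# Hypothetical population: 200 patients grouped into 10 clinics of 20.
clinics = {f"clinic_{i}": [f"patient_{i}_{j}" for j in range(20)]
           for i in range(10)}
patients = [p for members in clinics.values() for p in members]

srs = simple_random_sample(patients, 40)  # 40 patients scattered everywhere
cs = cluster_sample(clinics, 2)           # 2 whole clinics, also 40 patients
print(len(srs), len(cs))  # 40 40
```

Both designs yield 40 participants here, but the cluster sample only requires visiting two sites, which is the usual cost motivation for cluster designs.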

  9. VizieR Online Data Catalog: Star clusters distances and extinctions. II. (Buckner+, 2014)

    Science.gov (United States)

    Buckner, A. S. M.; Froebrich, D.

    2015-04-01

    Until now, it has been impossible to observationally measure how star cluster scaleheight evolves beyond 1Gyr as only small samples have been available. Here, we establish a novel method to determine the scaleheight of a cluster sample using modelled distributions and Kolmogorov-Smirnov tests. This allows us to determine the scaleheight with a 25% accuracy for samples of 38 clusters or more. We apply our method to investigate the temporal evolution of cluster scaleheight, using homogeneously selected sub-samples of Kharchenko et al. (MWSC, 2012, Cat. J/A+A/543/A156, 2013, J/A+A/558/A53), Dias et al. (DAML02, 2002A&A...389..871D, Cat. B/ocl), WEBDA, and Froebrich et al. (FSR, 2007MNRAS.374..399F, Cat. J/MNRAS/374/399). We identify a linear relationship between scaleheight and log(age/yr) of clusters, considerably different from field stars. The scaleheight increases from about 40pc at 1Myr to 75pc at 1Gyr, most likely due to internal evolution and external scattering events. After 1Gyr, there is a marked change of the behaviour, with the scaleheight linearly increasing with log(age/yr) to about 550pc at 3.5Gyr. The most likely interpretation is that the surviving clusters are only observable because they have been scattered away from the mid-plane in their past. A detailed understanding of this observational evidence can only be achieved with numerical simulations of the evolution of cluster samples in the Galactic disc. Furthermore, we find a weak trend of an age-independent increase in scaleheight with Galactocentric distance. There are no significant temporal or spatial variations of the cluster distribution zero-point. We determine the Sun's vertical displacement from the Galactic plane as Z⊙ = 18.5 ± 1.2 pc. (1 data file).
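The scaleheight method above rests on Kolmogorov-Smirnov comparisons between modelled and observed distributions. A minimal two-sample KS statistic, the core quantity in such a test, can be sketched as follows (the data are hypothetical, not the catalogue's):

```python
import bisect

def ks_statistic(sample_a, sample_b):
    """Two-sample Kolmogorov-Smirnov statistic: the maximum vertical distance
    between the two empirical cumulative distribution functions (ECDFs)."""
    a, b = sorted(sample_a), sorted(sample_b)

    def ecdf(sample, x):
        # Fraction of the sample that is <= x.
        return bisect.bisect_right(sample, x) / len(sample)

    # The maximum ECDF gap always occurs at an observed data point.
    return max(abs(ecdf(a, x) - ecdf(b, x)) for x in a + b)

same = [1, 2, 3, 4, 5]
shifted = [11, 12, 13, 14, 15]
print(ks_statistic(same, same))     # 0.0
print(ks_statistic(same, shifted))  # 1.0
```

In practice one compares this statistic against its sampling distribution (or a library routine such as a standard KS test implementation) to obtain a p-value; the sketch only shows the distance itself.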

  10. Assessment of surface water quality using hierarchical cluster analysis

    Directory of Open Access Journals (Sweden)

    Dheeraj Kumar Dabgerwal

    2016-02-01

    Full Text Available This study was carried out to assess the physicochemical quality of the river Varuna in Varanasi, India. Water samples were collected from 10 sites during January-June 2015. Pearson correlation analysis was used to assess the direction and strength of relationships between physicochemical parameters. Hierarchical cluster analysis was also performed to determine the sources of pollution in the river Varuna. The results showed quite high values of DO, nitrate, BOD, COD and total alkalinity, above the BIS permissible limits. The correlation analysis identified pH, electrical conductivity, total alkalinity and nitrate as key water parameters that influence the concentrations of other water parameters. Cluster analysis grouped the 10 sampling sites into three major clusters according to similarity in water quality. This study illustrates the usefulness of correlation and cluster analysis for obtaining better information about river water quality. International Journal of Environment Vol. 5 (1) 2016, pp. 32-44
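A basic single-linkage hierarchical (agglomerative) clustering of sampling sites, the kind of analysis the study describes, can be sketched as follows. The site coordinates are invented standardized parameter values, not the paper's data:

```python
def euclidean(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def agglomerative(points, n_clusters):
    """Single-linkage hierarchical clustering: start with one cluster per
    site and repeatedly merge the pair of clusters whose closest members
    are nearest, until n_clusters remain."""
    clusters = [[i] for i in range(len(points))]
    while len(clusters) > n_clusters:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                # Single linkage: distance between closest members.
                d = min(euclidean(points[a], points[b])
                        for a in clusters[i] for b in clusters[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        _, i, j = best
        clusters[i] += clusters.pop(j)  # merge the closest pair
    return clusters

# Hypothetical standardized (pH, BOD) values for 10 sampling sites,
# loosely grouped into low-, moderate- and high-pollution sites.
sites = [(0.10, 0.20), (0.20, 0.10), (0.15, 0.25),
         (1.00, 1.10), (1.10, 0.90), (0.95, 1.00),
         (2.00, 2.10), (2.10, 2.20), (1.90, 2.00), (2.05, 1.95)]

groups = agglomerative(sites, 3)
print(sorted(sorted(g) for g in groups))  # [[0, 1, 2], [3, 4, 5], [6, 7, 8, 9]]
```

Real analyses typically standardize many parameters per site and inspect the full dendrogram rather than cutting at a fixed cluster count.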

  11. Improved multi-objective clustering algorithm using particle swarm optimization.

    Directory of Open Access Journals (Sweden)

    Congcong Gong

    Full Text Available Multi-objective clustering has received widespread attention recently, as it can obtain more accurate and reasonable solutions. In this paper, an improved multi-objective clustering framework using particle swarm optimization (IMCPSO) is proposed. First, a novel particle representation for the clustering problem is designed to help PSO search for clustering solutions in continuous space. Second, the distribution of the Pareto set is analyzed, and the results are applied to the leader selection strategy, helping the algorithm avoid becoming trapped in local optima. Moreover, a method for improving clustering solutions is proposed, which greatly increases the efficiency of the search. In the experiments, 28 datasets are used and nine state-of-the-art clustering algorithms are compared; the proposed method is superior to the other approaches on the adjusted Rand index (ARI) evaluation metric.
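The ARI used for evaluation can be computed directly from its definition; a compact sketch (not the paper's code) is:

```python
from math import comb
from collections import Counter

def adjusted_rand_index(labels_a, labels_b):
    """Adjusted Rand Index: agreement between two clusterings of the same
    items, corrected for chance (1.0 = identical partitions, ~0 = random)."""
    n = len(labels_a)
    # Contingency counts: how many items fall in cluster i of A and j of B.
    contingency = Counter(zip(labels_a, labels_b))
    sum_ij = sum(comb(c, 2) for c in contingency.values())
    sum_a = sum(comb(c, 2) for c in Counter(labels_a).values())
    sum_b = sum(comb(c, 2) for c in Counter(labels_b).values())
    expected = sum_a * sum_b / comb(n, 2)   # chance-level agreement
    max_index = (sum_a + sum_b) / 2
    return (sum_ij - expected) / (max_index - expected)

# Identical partitions score 1.0 even when the label names differ.
print(adjusted_rand_index([0, 0, 1, 1], [1, 1, 0, 0]))  # 1.0
```

Because ARI is corrected for chance, a random relabeling scores near zero (and can go negative), which is why it is preferred over raw accuracy for comparing clustering algorithms.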

  12. Two specialized delayed-neutron detector designs for assays of fissionable elements in water and sediment samples

    International Nuclear Information System (INIS)

    Balestrini, S.J.; Balagna, J.P.; Menlove, H.O.

    1976-01-01

    Two specialized neutron-sensitive detectors are described that are employed for rapid assays of fissionable elements by sensing the delayed neutrons emitted by samples after they have been irradiated in a nuclear reactor. The more sensitive of the two detectors, designed to assay uranium in water samples, is 40% efficient; the other, designed for sediment sample assays, is 27% efficient. Both detectors are also designed to operate under water, which serves as inexpensive shielding against neutron leakage from the reactor and neutrons from cosmic rays. (Auth.)

  13. Identifying seizure clusters in patients with psychogenic nonepileptic seizures.

    Science.gov (United States)

    Baird, Grayson L; Harlow, Lisa L; Machan, Jason T; Thomas, Dave; LaFrance, W C

    2017-08-01

    The present study explored how seizure clusters may be defined for those with psychogenic nonepileptic seizures (PNES), a topic for which there is a paucity of literature. The sample was drawn from a multisite randomized clinical trial for PNES; seizure data are from participants' seizure diaries. Three possible cluster definitions were examined: the common clinical definition, where ≥3 seizures in a day is considered a cluster, along with two novel statistical definitions, where ≥3 seizures in a day are considered a cluster only if the observed number of seizures statistically exceeds what would be expected relative to (a) the patient's average seizure rate prior to the trial, or (b) the patient's observed seizure rate for the previous seven days. The prevalence of clusters was 62-68% and the occurrence rate of clusters was 6-19%, depending on the cluster definition used. Based on these data, clusters seem to be common in patients with PNES, and more research is needed to determine whether clusters are related to triggers and outcomes. Copyright © 2017 Elsevier Inc. All rights reserved.
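One plausible reading of the statistical definitions — "the observed daily count statistically exceeds what the baseline rate predicts" — is a Poisson tail test. The sketch below is an illustrative assumption, not the trial's exact rule:

```python
from math import exp, factorial

def poisson_tail(k, lam):
    """P(X >= k) for X ~ Poisson(lam)."""
    return 1.0 - sum(exp(-lam) * lam**i / factorial(i) for i in range(k))

def is_statistical_cluster(count, baseline_rate, alpha=0.05):
    """A day with >=3 seizures counts as a cluster only if seeing that many
    seizures is unlikely (p < alpha) under the patient's baseline daily rate.
    The Poisson model and alpha cutoff are assumptions for illustration."""
    return count >= 3 and poisson_tail(count, baseline_rate) < alpha

print(is_statistical_cluster(3, 0.2))  # True: 3 seizures vastly exceed 0.2/day
print(is_statistical_cluster(3, 2.5))  # False: 3 in a day is typical at 2.5/day
```

The baseline rate can be taken either from the pre-trial average or from the previous seven days, matching the two statistical definitions in the abstract.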

  14. WIYN OPEN CLUSTER STUDY. XXIV. STELLAR RADIAL-VELOCITY MEASUREMENTS IN NGC 6819

    International Nuclear Information System (INIS)

    Tabetha Hole, K.; Geller, Aaron M.; Mathieu, Robert D.; Meibom, Soeren; Platais, Imants; Latham, David W.

    2009-01-01

    We present the current results from our ongoing radial-velocity (RV) survey of the intermediate-age (2.4 Gyr) open cluster NGC 6819. Using both newly observed and other available photometry and astrometry, we define a primary target sample of 1454 stars that includes main-sequence, subgiant, giant, and blue straggler stars, spanning a magnitude range of 11 ≤ V ≤ 16.5 and an approximate mass range of 1.1-1.6 M⊙. Our sample covers a 23 arcminute (13 pc) square field of view centered on the cluster. We have measured 6571 radial velocities for an unbiased sample of 1207 stars in the direction of the open cluster NGC 6819, with a single-measurement precision of 0.4 km s⁻¹ for most narrow-lined stars. We use our RV data to calculate membership probabilities for stars with ≥3 measurements, providing the first comprehensive membership study of the cluster core that includes stars from the giant branch through the upper main sequence. We identify 480 cluster members. Additionally, we identify velocity-variable systems, all of which are likely hard binaries that dynamically power the cluster. Using our single cluster members, we find a cluster average RV of 2.34 ± 0.05 km s⁻¹. We use our kinematic cluster members to construct a cleaned color-magnitude diagram from which we identify rich giant, subgiant, and blue straggler populations and a well-defined red clump. The cluster displays a morphology near the cluster turnoff clearly indicative of core convective overshoot. Finally, we discuss a few stars of note, one of which is a short-period red-clump binary that we suggest may be the product of a dynamical encounter.

  16. The kinematic properties of dwarf early-type galaxies in the Virgo cluster

    NARCIS (Netherlands)

    Toloba, E.; Boselli, A.; Peletier, R. F.; Gorgas, J.; Zapatero Osorio, M.R.; Gorgas, J.; Maíz Apellániz, J.; Pardo, J.R.; Gil de Paz, A.

    2011-01-01

    We present new medium resolution kinematic data for a sample of 21 dwarf early-type galaxies (dEs) mainly in the Virgo cluster. These data are used to study the origin of dEs inhabiting clusters. Within them we detect two populations: half of the sample (52%) are rotationally supported and the other

  17. Elliptical shape of the coma cluster

    International Nuclear Information System (INIS)

    Schipper, L.; King, I.R.

    1978-01-01

    The elliptical shape of the Coma cluster is examined quantitatively. The degree of ellipticity is high and depends to some extent on the radial distance of the sample from the Coma center, as well as on the brightness of the sample. The elliptical shape does not appear to be caused by rotation; other possible causes are briefly discussed.

  18. Globular Cluster Candidates for Hosting a Central Black Hole

    Science.gov (United States)

    Noyola, Eva

    2009-07-01

    We are continuing our study of the dynamical properties of globular clusters, and we propose to obtain surface brightness profiles for high-concentration clusters. Our results to date show that the distribution of central surface brightness slopes does not conform to standard models. This has important implications for how these clusters form and evolve, and suggests the possible presence of central intermediate-mass black holes. From our previous archival proposals (AR-9542 and AR-10315), we find that many high-concentration globular clusters do not have flat cores or steep central cusps; instead they show weak cusps. Numerical simulations suggest that clusters with weak cusps may harbor intermediate-mass black holes, and we have one confirmation of this connection with omega Centauri. This cluster shows a shallow cusp in its surface brightness profile, while kinematical measurements suggest the presence of a black hole at its center. Our goal is to extend these studies to a sample containing 85% of the Galactic globular clusters with concentrations higher than 1.7 and look for objects departing from isothermal behavior. The ACS globular cluster survey (GO-10775) provides enough objects for excellent coverage of a wide range of Galactic clusters, but it contains only a couple with high concentration. The proposed sample consists of clusters whose light profiles can only be adequately measured from space-based imaging. This would take us close to completeness for the high-concentration cases and therefore provide a more complete list of candidates for hosting a central black hole. The dataset will also be combined with our existing kinematic measurements and enhanced with future kinematic studies to perform detailed dynamical modeling.

  19. A phoswich detector design for improved spatial sampling in PET

    Science.gov (United States)

    Thiessen, Jonathan D.; Koschan, Merry A.; Melcher, Charles L.; Meng, Fang; Schellenberg, Graham; Goertzen, Andrew L.

    2018-02-01

    Block detector designs, utilizing a pixelated scintillator array coupled to a photosensor array in a light-sharing design, are commonly used for positron emission tomography (PET) imaging applications. In practice, the spatial sampling of these designs is limited by the crystal pitch, which must be large enough for individual crystals to be resolved in the detector flood image. Replacing the conventional 2D scintillator array with an array of phoswich elements, each consisting of an optically coupled side-by-side scintillator pair, may improve spatial sampling in one direction of the array without requiring smaller crystal elements to be resolved. To test the feasibility of this design, a 4 × 4 phoswich array was constructed, with each phoswich element consisting of two optically coupled, 3.17 × 1.58 × 10 mm³ LSO crystals co-doped with cerium and calcium. The amount of calcium doping was varied to create a 'fast' LSO crystal with a decay time of 32.9 ns and a 'slow' LSO crystal with a decay time of 41.2 ns. Using a Hamamatsu R8900U-00-C12 position-sensitive photomultiplier tube (PS-PMT) and a CAEN V1720 250 MS/s waveform digitizer, we were able to show effective discrimination of the fast and slow LSO crystals in the phoswich array. Although a side-by-side phoswich array is feasible, reflections at the crystal boundary due to a mismatch between the refractive index of the optical adhesive (n = 1.5) and LSO (n = 1.82) caused it to behave optically as an 8 × 4 array rather than a 4 × 4 array. Direct coupling of each phoswich element to individual photodetector elements may be necessary with the current phoswich array design. Alternatively, in order to implement this phoswich design with a conventional light-sharing PET block detector, a high refractive index optical adhesive is necessary to closely match the refractive index of LSO.
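The fast/slow crystal discrimination by decay time can be illustrated with a simple tail-fraction pulse-shape discriminator on idealized exponential pulses. The pulse model, sampling parameters, and split point are assumptions for illustration, not the paper's analysis:

```python
from math import exp

def make_pulse(tau_ns, n_samples=60, dt_ns=4.0):
    """Idealized exponential scintillation pulse sampled in 4 ns bins
    (roughly a 250 MS/s digitizer)."""
    return [exp(-i * dt_ns / tau_ns) for i in range(n_samples)]

def tail_fraction(pulse, split):
    """Pulse-shape discrimination: fraction of the total integrated signal
    arriving after the split index.  A slower scintillation decay leaves a
    larger tail fraction."""
    return sum(pulse[split:]) / sum(pulse)

fast = make_pulse(32.9)  # Ca co-doped 'fast' LSO decay time from the abstract
slow = make_pulse(41.2)  # 'slow' LSO decay time from the abstract
print(tail_fraction(fast, 10) < tail_fraction(slow, 10))  # True
```

Thresholding the tail fraction then assigns each detected event to the fast or slow crystal of the phoswich pair; real pulses add noise and rise-time effects that this sketch omits.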

  20. PREFACE: Nuclear Cluster Conference; Cluster'07

    Science.gov (United States)

    Freer, Martin

    2008-05-01

    The Cluster Conference is a long-running conference series dating back to the 1960s, the first being initiated by Wildermuth in Bochum, Germany, in 1969. The most recent meeting was held in Nara, Japan, in 2003, and in 2007 the 9th Cluster Conference was held in Stratford-upon-Avon, UK. As the name suggests, the town of Stratford lies upon the River Avon, and shortly before the conference, due to unprecedented rainfall in the area (approximately 10 cm within half a day), lay in the River Avon! Stratford is the birthplace of the 'Bard of Avon', William Shakespeare, and this formed an intriguing conference backdrop. The meeting was attended by some 90 delegates and the programme contained 65-70 oral presentations; it was opened by a historical perspective presented by Professor Brink (Oxford) and closed by Professor Horiuchi (RCNP) with an overview of the conference and future perspectives. In between, the conference covered aspects of clustering in exotic nuclei (both neutron- and proton-rich), molecular structures in which valence neutrons are exchanged between cluster cores, condensates in nuclei, neutron clusters, superheavy nuclei, clusters in nuclear astrophysical processes, and exotic cluster decays such as 2p and ternary cluster decay. The field of nuclear clustering has become strongly influenced by the physics of radioactive beam facilities (reflected in the programme), and by the excitement that clustering may have an important impact on the structure of nuclei at the neutron drip-line. It was clear that since Nara the field had progressed substantially and that new themes had emerged while others had crystallized. Two particular topics resonated strongly: condensates and nuclear molecules. These topics are thus likely to be central at the next cluster conference, which will be held in 2011 in the Hungarian city of Debrecen. Martin Freer