WorldWideScience

Sample records for two-stage random cluster-sampling

  1. Don't spin the pen: two alternative methods for second-stage sampling in urban cluster surveys

    Directory of Open Access Journals (Sweden)

    Rose Angela MC

    2007-06-01

    Full Text Available Abstract In two-stage cluster surveys, the traditional method used in second-stage sampling (in which the first household in a cluster is selected) is time-consuming and may result in biased estimates of the indicator of interest. Firstly, a random direction from the center of the cluster is selected, usually by spinning a pen. The houses along that direction are then counted out to the boundary of the cluster, and one is then selected at random to be the first household surveyed. This process favors households towards the center of the cluster, but it could easily be improved. During a recent meningitis vaccination coverage survey in Maradi, Niger, we compared this method of first household selection to two alternatives in urban zones: (1) using a superimposed grid on the map of the cluster area and randomly selecting an intersection; and (2) drawing the perimeter of the cluster area using a Global Positioning System (GPS) and randomly selecting one point within the perimeter. Although we only compared a limited number of clusters using each method, we found the sampling grid method to be the fastest and easiest for field survey teams, although it does require a map of the area. Selecting a random GPS point was also found to be a good method, provided adequate training is given. Spinning the pen and counting households to the boundary was the most complicated and time-consuming. The two methods tested here represent simpler, quicker and potentially more robust alternatives to spinning the pen for cluster surveys in urban areas. However, in rural areas, these alternatives would favor initial household selection from lower-density (or even potentially empty) areas. Bearing in mind these limitations, as well as available resources and feasibility, investigators should choose the most appropriate method for their particular survey context.
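The grid method in record 1 lends itself to a simple sketch. The code below is illustrative only; the cluster bounds, grid spacing and coordinate units are hypothetical. It superimposes a uniform grid on a cluster's bounding box and draws one intersection at random as the starting point for household selection:

```python
import random

def grid_intersection_sample(min_x, min_y, max_x, max_y, spacing):
    """Pick a random grid intersection inside a cluster's bounding box.

    A uniform grid with the given spacing is superimposed on the
    bounding box; one intersection is drawn uniformly at random to
    serve as the starting point for household selection.
    """
    n_cols = int((max_x - min_x) // spacing) + 1
    n_rows = int((max_y - min_y) // spacing) + 1
    col = random.randrange(n_cols)
    row = random.randrange(n_rows)
    return (min_x + col * spacing, min_y + row * spacing)

random.seed(1)
# Hypothetical 500 m x 300 m cluster area, 50 m grid spacing.
x, y = grid_intersection_sample(0.0, 0.0, 500.0, 300.0, 50.0)
```

In the field, the selected intersection would be located on the printed map and the nearest household taken as the first to survey.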

  2. GENERALISED MODEL BASED CONFIDENCE INTERVALS IN TWO STAGE CLUSTER SAMPLING

    Directory of Open Access Journals (Sweden)

    Christopher Ouma Onyango

    2010-09-01

    Full Text Available Chambers and Dorfman (2002) constructed bootstrap confidence intervals in model-based estimation for finite population totals, assuming that auxiliary values are available throughout a target population and that the auxiliary values are independent. They also assumed that the cluster sizes are known throughout the target population. We now extend this to two-stage sampling, in which the cluster sizes are known only for the sampled clusters, and we therefore predict the unobserved part of the population total. Jan and Elinor (2008) have done similar work, but unlike them, we use a general model in which the auxiliary values are not necessarily independent. We demonstrate that the asymptotic properties of our proposed estimator and its coverage rates are better than those constructed under the model-assisted local polynomial regression model.

  3. Relative efficiency and sample size for cluster randomized trials with variable cluster sizes.

    Science.gov (United States)

    You, Zhiying; Williams, O Dale; Aban, Inmaculada; Kabagambe, Edmond Kato; Tiwari, Hemant K; Cutter, Gary

    2011-02-01

    The statistical power of cluster randomized trials depends on two sample size components: the number of clusters per group and the number of individuals within clusters (cluster size). Variable cluster sizes are common, and this variation alone may have a significant impact on study power. Previous approaches have taken this into account either by adjusting the total sample size using a designated design effect or by adjusting the number of clusters according to an assessment of the relative efficiency of unequal versus equal cluster sizes. This article defines a relative efficiency of unequal versus equal cluster sizes using noncentrality parameters, investigates properties of this measure, and proposes an approach for adjusting the required sample size accordingly. We focus on comparing two groups with normally distributed outcomes using a t-test, use the noncentrality parameter to define the relative efficiency of unequal versus equal cluster sizes, and show that statistical power depends only on this parameter for a given number of clusters. We calculate the sample size required for a trial with unequal cluster sizes to have the same power as one with equal cluster sizes. Relative efficiency based on the noncentrality parameter is straightforward to calculate and easy to interpret. It connects the required mean cluster size directly to the required sample size with equal cluster sizes. Consequently, our approach first determines the sample size requirements with equal cluster sizes for a pre-specified study power and then calculates the required mean cluster size while keeping the number of clusters unchanged. Our approach allows adjustment in mean cluster size alone or simultaneous adjustment in mean cluster size and number of clusters, and is a flexible alternative to, and a useful complement to, existing methods. Comparison indicated that we have defined a relative efficiency that is greater than the relative efficiency in the literature under some conditions. Our measure
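The noncentrality-parameter argument in record 3 can be illustrated under a standard compound-symmetry (exchangeable correlation) model, where a cluster of size m contributes an effective sample size of m / (1 + (m - 1)ρ). The following is a minimal sketch, not the authors' exact derivation; the cluster sizes and ICC are hypothetical:

```python
def effective_n(cluster_sizes, icc):
    # Under compound symmetry, a cluster of size m contributes an
    # effective sample size of m / (1 + (m - 1) * icc).
    return sum(m / (1 + (m - 1) * icc) for m in cluster_sizes)

def relative_efficiency(cluster_sizes, icc):
    # Noncentrality parameters scale with sqrt(effective n), so the
    # ratio of effective sizes (unequal vs. equal clusters with the
    # same mean size) serves as a relative efficiency.
    k = len(cluster_sizes)
    mean_m = sum(cluster_sizes) / k
    equal = k * mean_m / (1 + (mean_m - 1) * icc)
    return effective_n(cluster_sizes, icc) / equal

re = relative_efficiency([10, 20, 30, 40, 50], icc=0.05)
```

Because m / (1 + (m - 1)ρ) is concave in m, unequal sizes always give a relative efficiency at or below 1, which is the loss the paper's sample size adjustment repairs.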

  4. Precision of systematic and random sampling in clustered populations: habitat patches and aggregating organisms.

    Science.gov (United States)

    McGarvey, Richard; Burch, Paul; Matthews, Janet M

    2016-01-01

    Natural populations of plants and animals cluster spatially because (1) suitable habitat is patchy, and (2) within suitable habitat, individuals aggregate further into clusters of higher density. We compare the precision of random and systematic field sampling survey designs under these two processes of species clustering. Second, we evaluate the performance of 13 estimators for the variance of the sample mean from a systematic survey. Replicated simulated surveys, as counts from 100 transects allocated either randomly or systematically within the study region, were used to estimate population density in six spatial point populations including habitat patches and Matérn circular clustered aggregations of organisms, separately and in combination. The standard one-start aligned systematic survey design, a uniform 10 × 10 grid of transects, was much more precise. Variances of the 10,000 replicated systematic survey mean densities were one-third to one-fifth of those from randomly allocated transects, implying transect sample sizes giving equivalent precision by random survey would need to be three to five times larger. Organisms being restricted to patches of habitat was alone sufficient to yield this precision advantage for the systematic design. But this improved precision for systematic sampling in clustered populations is underestimated by the standard variance estimators used to compute confidence intervals. True variance for the survey sample mean was computed from the variance of 10,000 simulated survey mean estimates. Testing 10 published and three newly proposed variance estimators, the two variance estimators that corrected for inter-transect correlation (v₈ and v_W) were the most accurate and also the most precise in clustered populations. These greatly outperformed the two "post-stratification" variance estimators (v₂ and v₃) that are now more commonly applied in systematic surveys. Similar variance estimator performance rankings were found with
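A toy one-dimensional analogue of the simulation in record 4 (the patchy density, transect counts and replicate numbers are all illustrative, far simpler than the paper's six point populations) shows why one-start aligned systematic transects outperform random placement when organisms are confined to habitat patches:

```python
import random

def patchy_density(x):
    # Two habitat patches with higher density; sparse elsewhere.
    return 5.0 if 0.2 <= x < 0.4 else (8.0 if 0.6 <= x < 0.7 else 0.5)

def survey_mean(positions):
    return sum(patchy_density(x) for x in positions) / len(positions)

def variance_of_means(design, n_reps=2000, n_transects=20):
    means = []
    for _ in range(n_reps):
        if design == "random":
            pos = [random.random() for _ in range(n_transects)]
        else:  # one-start aligned systematic sample
            start = random.random() / n_transects
            pos = [start + i / n_transects for i in range(n_transects)]
        means.append(survey_mean(pos))
    mu = sum(means) / n_reps
    return sum((m - mu) ** 2 for m in means) / n_reps

random.seed(7)
v_rand = variance_of_means("random")
v_sys = variance_of_means("systematic")
```

Here the patch widths are exact multiples of the transect spacing, so every systematic start intersects the patches equally often; random placement does not, inflating the variance of the estimated mean density.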

  5. Extending cluster Lot Quality Assurance Sampling designs for surveillance programs

    OpenAIRE

    Hund, Lauren; Pagano, Marcello

    2014-01-01

    Lot quality assurance sampling (LQAS) has a long history of applications in industrial quality control. LQAS is frequently used for rapid surveillance in global health settings, with areas classified as poor or acceptable performance based on the binary classification of an indicator. Historically, LQAS surveys have relied on simple random samples from the population; however, implementing two-stage cluster designs for surveillance sampling is often more cost-effective than ...

  6. Study of shallow junction formation by boron-containing cluster ion implantation of silicon and two-stage annealing

    Science.gov (United States)

    Lu, Xin-Ming

    Shallow junction formation by low-energy ion implantation and rapid thermal annealing faces a major challenge for ULSI (ultra-large-scale integration) as line widths decrease into the sub-micrometer region. The issues include low beam current and the channeling effect in low-energy ion implantation, and TED (transient enhanced diffusion) during post-implantation annealing. In this work, boron-containing small cluster ions, such as GeB, SiB and SiB2, were generated using a SNICS (source of negative ions by cesium sputtering) ion source and implanted into Si substrates to form shallow junctions. The use of boron-containing cluster ions effectively reduces the boron energy while keeping the energy of the cluster ion beam at a high level. At the same time, it reduces the channeling effect through amorphization by the co-implanted heavy atoms such as Ge and Si. Cluster ions have been used to produce 0.65–2 keV boron for low-energy ion implantation. Two-stage annealing, a combination of low-temperature (550°C) preannealing and high-temperature (1000°C) annealing, was carried out on the Si samples implanted with GeB and SiBn clusters. The key concept of two-stage annealing, namely the separation of crystal regrowth and point-defect removal with dopant activation from dopant diffusion, is discussed in detail. The advantages of two-stage annealing include better lattice structure, better dopant activation and retarded boron diffusion. The junction depth of the two-stage annealed GeB sample was only half that of the one-step annealed sample, indicating that TED was suppressed by two-stage annealing. Junction depths as small as 30 nm have been achieved by two-stage annealing at 1000°C for 1 second of samples implanted with 5 × 10¹⁴/cm² of 5 keV GeB. The samples were evaluated by SIMS (secondary ion mass spectrometry) profiling, TEM (transmission electron microscopy) and RBS (Rutherford backscattering spectrometry)/channeling. Cluster ion implantation

  7. Extending cluster lot quality assurance sampling designs for surveillance programs.

    Science.gov (United States)

    Hund, Lauren; Pagano, Marcello

    2014-07-20

    Lot quality assurance sampling (LQAS) has a long history of applications in industrial quality control. LQAS is frequently used for rapid surveillance in global health settings, with areas classified as poor or acceptable performance on the basis of the binary classification of an indicator. Historically, LQAS surveys have relied on simple random samples from the population; however, implementing two-stage cluster designs for surveillance sampling is often more cost-effective than simple random sampling. By applying survey sampling results to the binary classification procedure, we develop a simple and flexible nonparametric procedure to incorporate clustering effects into the LQAS sample design to appropriately inflate the sample size, accommodating finite numbers of clusters in the population when relevant. We use this framework to then discuss principled selection of survey design parameters in longitudinal surveillance programs. We apply this framework to design surveys to detect rises in malnutrition prevalence in nutrition surveillance programs in Kenya and South Sudan, accounting for clustering within villages. By combining historical information with data from previous surveys, we design surveys to detect spikes in the childhood malnutrition rate. Copyright © 2014 John Wiley & Sons, Ltd.
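The basic inflation step behind a clustered LQAS design can be sketched with the textbook design effect DEFF = 1 + (m - 1) * ICC. The paper's nonparametric procedure is more refined than this, and the numbers below are purely illustrative:

```python
import math

def deff(mean_cluster_size, icc):
    # Standard design effect for cluster sampling with equal cluster sizes.
    return 1 + (mean_cluster_size - 1) * icc

def clustered_lqas_n(n_srs, mean_cluster_size, icc):
    """Inflate an SRS-based LQAS sample size for a two-stage cluster design.

    Returns the inflated total sample size and the number of clusters
    needed at the given mean cluster size (both rounded up).
    """
    n = math.ceil(n_srs * deff(mean_cluster_size, icc))
    return n, math.ceil(n / mean_cluster_size)

# Hypothetical inputs: an SRS design of 19, clusters of ~10, ICC of 0.1.
n_total, n_clusters = clustered_lqas_n(n_srs=19, mean_cluster_size=10, icc=0.1)
```

With these inputs the design effect is 1.9, so the 19-person SRS design inflates to 37 observations spread over 4 clusters.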

  8. Comparing cluster-level dynamic treatment regimens using sequential, multiple assignment, randomized trials: Regression estimation and sample size considerations.

    Science.gov (United States)

    NeCamp, Timothy; Kilbourne, Amy; Almirall, Daniel

    2017-08-01

    Cluster-level dynamic treatment regimens can be used to guide sequential treatment decision-making at the cluster level in order to improve outcomes at the individual or patient-level. In a cluster-level dynamic treatment regimen, the treatment is potentially adapted and re-adapted over time based on changes in the cluster that could be impacted by prior intervention, including aggregate measures of the individuals or patients that compose it. Cluster-randomized sequential multiple assignment randomized trials can be used to answer multiple open questions preventing scientists from developing high-quality cluster-level dynamic treatment regimens. In a cluster-randomized sequential multiple assignment randomized trial, sequential randomizations occur at the cluster level and outcomes are observed at the individual level. This manuscript makes two contributions to the design and analysis of cluster-randomized sequential multiple assignment randomized trials. First, a weighted least squares regression approach is proposed for comparing the mean of a patient-level outcome between the cluster-level dynamic treatment regimens embedded in a sequential multiple assignment randomized trial. The regression approach facilitates the use of baseline covariates which is often critical in the analysis of cluster-level trials. Second, sample size calculators are derived for two common cluster-randomized sequential multiple assignment randomized trial designs for use when the primary aim is a between-dynamic treatment regimen comparison of the mean of a continuous patient-level outcome. The methods are motivated by the Adaptive Implementation of Effective Programs Trial which is, to our knowledge, the first-ever cluster-randomized sequential multiple assignment randomized trial in psychiatry.

  9. A two-stage cluster sampling method using gridded population data, a GIS, and Google Earth™ imagery in a population-based mortality survey in Iraq

    Directory of Open Access Journals (Sweden)

    Galway LP

    2012-04-01

    Full Text Available Abstract Background Mortality estimates can measure and monitor the impacts of conflict on a population, guide humanitarian efforts, and help to better understand the public health impacts of conflict. Vital statistics registration and surveillance systems are rarely functional in conflict settings, posing the challenge of estimating mortality using retrospective population-based surveys. Results We present a two-stage cluster sampling method for application in population-based mortality surveys. The sampling method utilizes gridded population data and a geographic information system (GIS) to select clusters in the first sampling stage, and Google Earth™ imagery and sampling grids to select households in the second sampling stage. The sampling method was implemented in a household mortality study in Iraq in 2011. Factors affecting feasibility and methodological quality are described. Conclusion Sampling is a challenge in retrospective population-based mortality studies, and alternatives that improve on the conventional approaches are needed. The sampling strategy presented here was designed to generate a representative sample of the Iraqi population while reducing the potential for bias and considering the context-specific challenges of the study setting. This sampling strategy, or variations on it, is adaptable and should be considered and tested in other conflict settings.
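The first sampling stage in record 9 (selecting clusters with probability proportional to gridded population counts) can be sketched with systematic PPS sampling. The grid, the cell populations and the cluster count below are hypothetical:

```python
import random

def pps_select_clusters(grid_pop, n_clusters):
    """First-stage selection: draw grid cells with probability
    proportional to their population counts, using systematic
    PPS sampling over the cumulated totals."""
    total = sum(grid_pop.values())
    interval = total / n_clusters
    start = random.uniform(0, interval)
    targets = [start + i * interval for i in range(n_clusters)]
    chosen, cum = [], 0.0
    cells = iter(sorted(grid_pop.items()))
    cell, pop = next(cells)
    for t in targets:
        while cum + pop < t:       # advance until the cell whose
            cum += pop             # cumulative range contains t
            cell, pop = next(cells)
        chosen.append(cell)
    return chosen

random.seed(3)
# Hypothetical 10 x 10 population grid with 50-500 people per cell.
grid = {(r, c): random.randint(50, 500) for r in range(10) for c in range(10)}
clusters = pps_select_clusters(grid, 5)
```

The second stage (random points within each selected cell, located on imagery) would then pick households near each point.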

  10. A two-stage method for microcalcification cluster segmentation in mammography by deformable models

    International Nuclear Information System (INIS)

    Arikidis, N.; Kazantzi, A.; Skiadopoulos, S.; Karahaliou, A.; Costaridou, L.; Vassiou, K.

    2015-01-01

    Purpose: Segmentation of microcalcification (MC) clusters in x-ray mammography is a difficult task for radiologists. Accurate segmentation is a prerequisite for quantitative image analysis of MC clusters and subsequent feature extraction and classification in computer-aided diagnosis schemes. Methods: In this study, a two-stage semiautomated segmentation method of MC clusters is investigated. The first stage is targeted to accurate and time-efficient segmentation of the majority of the particles of a MC cluster, by means of a level set method. The second stage is targeted to shape refinement of selected individual MCs, by means of an active contour model. Both methods are applied in the framework of a rich scale-space representation, provided by the wavelet transform at integer scales. Segmentation reliability of the proposed method in terms of inter- and intraobserver agreement was evaluated in a case sample of 80 MC clusters originating from the Digital Database for Screening Mammography, corresponding to 4 morphology types (punctate: 22, fine linear branching: 16, pleomorphic: 18, and amorphous: 24) of MC clusters, assessing radiologists' segmentations quantitatively by two distance metrics (Hausdorff distance, HDIST_cluster; average of minimum distance, AMINDIST_cluster) and the area overlap measure (AOM_cluster). The effect of the proposed segmentation method on MC cluster characterization accuracy was evaluated in a case sample of 162 pleomorphic MC clusters (72 malignant and 90 benign). Ten MC cluster features, targeted to capture morphologic properties of individual MCs in a cluster (area, major length, perimeter, compactness, and spread), were extracted, and a correlation-based feature selection method yielded a feature subset to feed into a support vector machine classifier. Classification performance of the MC cluster features was estimated by means of the area under the receiver operating characteristic curve (Az ± standard error) utilizing tenfold cross-validation.

  11. Two-Stage Variable Sample-Rate Conversion System

    Science.gov (United States)

    Tkacenko, Andre

    2009-01-01

    A two-stage variable sample-rate conversion (SRC) system has been proposed as part of a digital signal-processing system in a digital communication radio receiver that utilizes a variety of data rates. The proposed system would be used as an interface between (1) an analog-to-digital converter used in the front end of the receiver to sample an intermediate-frequency signal at a fixed input rate and (2) digitally implemented tracking loops in subsequent stages that operate at various sample rates that are generally lower than the input sample rate. This two-stage system would be capable of converting from an input sample rate to a desired lower output sample rate that could be variable and not necessarily a rational fraction of the input rate.
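The two stages described in record 11 can be sketched in simplified form: a fixed integer decimation followed by an arbitrary-ratio fractional resampler. A real receiver would use proper anti-aliasing filters and a polynomial (e.g. Farrow) interpolator; the linear interpolation and the rates below are illustrative only:

```python
def decimate(x, m):
    # Stage 1: fixed integer decimation (keep every m-th sample).
    # A real design would low-pass filter first to avoid aliasing.
    return x[::m]

def fractional_resample(x, ratio):
    # Stage 2: arbitrary (possibly irrational) rate change by
    # linear interpolation between neighbouring samples.
    out, t = [], 0.0
    while t < len(x) - 1:
        i = int(t)
        frac = t - i
        out.append((1 - frac) * x[i] + frac * x[i + 1])
        t += ratio
    return out

# 48 input samples of a ramp: decimate by 2, then resample by a
# non-integer step of 2.4 samples per output sample.
x = [float(i) for i in range(48)]
stage1 = decimate(x, 2)
y = fractional_resample(stage1, 2.4)
```

Splitting the conversion this way lets the first stage run at a fixed, hardware-friendly rate while only the second stage tracks the variable output rate.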

  12. A two-stage approach to estimate spatial and spatio-temporal disease risks in the presence of local discontinuities and clusters.

    Science.gov (United States)

    Adin, A; Lee, D; Goicoa, T; Ugarte, María Dolores

    2018-01-01

    Disease risk maps for areal unit data are often estimated from Poisson mixed models with local spatial smoothing, for example by incorporating random effects with a conditional autoregressive prior distribution. However, one of the limitations is that local discontinuities in the spatial pattern are not usually modelled, leading to over-smoothing of the risk maps and a masking of clusters of hot/cold spot areas. In this paper, we propose a novel two-stage approach to estimate and map disease risk in the presence of such local discontinuities and clusters. We propose approaches in both spatial and spatio-temporal domains, where for the latter the clusters can either be fixed or allowed to vary over time. In the first stage, we apply an agglomerative hierarchical clustering algorithm to training data to provide sets of potential clusters, and in the second stage, a two-level spatial or spatio-temporal model is applied to each potential cluster configuration. The superiority of the proposed approach with regard to a previous proposal is shown by simulation, and the methodology is applied to two important public health problems in Spain, namely stomach cancer mortality across Spain and brain cancer incidence in the Navarre and Basque Country regions of Spain.

  13. Cluster-randomized Studies in Educational Research: Principles and Methodological Aspects.

    Science.gov (United States)

    Dreyhaupt, Jens; Mayer, Benjamin; Keis, Oliver; Öchsner, Wolfgang; Muche, Rainer

    2017-01-01

    An increasing number of studies are being performed in educational research to evaluate new teaching methods and approaches. These studies could be performed more efficiently and deliver more convincing results if they more strictly applied and complied with recognized standards of scientific studies. Such an approach could substantially increase the quality in particular of prospective, two-arm (intervention) studies that aim to compare two different teaching methods. A key standard in such studies is randomization, which can minimize systematic bias in study findings; such bias may result if the two study arms are not structurally equivalent. If possible, educational research studies should also achieve this standard, although this is not yet generally the case. Some difficulties and concerns exist, particularly regarding organizational and methodological aspects. An important point to consider in educational research studies is that usually individuals cannot be randomized, because of the teaching situation, and instead whole groups have to be randomized (so-called "cluster randomization"). Compared with studies with individual randomization, studies with cluster randomization normally require (significantly) larger sample sizes and more complex methods for calculating sample size. Furthermore, cluster-randomized studies require more complex methods for statistical analysis. The consequence of the above is that a competent expert with respective special knowledge needs to be involved in all phases of cluster-randomized studies. Studies to evaluate new teaching methods need to make greater use of randomization in order to achieve scientifically convincing results. Therefore, in this article we describe the general principles of cluster randomization and how to implement these principles, and we also outline practical aspects of using cluster randomization in prospective, two-arm comparative educational research studies.

  14. Cluster-randomized Studies in Educational Research: Principles and Methodological Aspects

    Directory of Open Access Journals (Sweden)

    Dreyhaupt, Jens

    2017-05-01

    Full Text Available An increasing number of studies are being performed in educational research to evaluate new teaching methods and approaches. These studies could be performed more efficiently and deliver more convincing results if they more strictly applied and complied with recognized standards of scientific studies. Such an approach could substantially increase the quality in particular of prospective, two-arm (intervention) studies that aim to compare two different teaching methods. A key standard in such studies is randomization, which can minimize systematic bias in study findings; such bias may result if the two study arms are not structurally equivalent. If possible, educational research studies should also achieve this standard, although this is not yet generally the case. Some difficulties and concerns exist, particularly regarding organizational and methodological aspects. An important point to consider in educational research studies is that usually individuals cannot be randomized, because of the teaching situation, and instead whole groups have to be randomized (so-called “cluster randomization”). Compared with studies with individual randomization, studies with cluster randomization normally require (significantly) larger sample sizes and more complex methods for calculating sample size. Furthermore, cluster-randomized studies require more complex methods for statistical analysis. The consequence of the above is that a competent expert with respective special knowledge needs to be involved in all phases of cluster-randomized studies. Studies to evaluate new teaching methods need to make greater use of randomization in order to achieve scientifically convincing results. Therefore, in this article we describe the general principles of cluster randomization and how to implement these principles, and we also outline practical aspects of using cluster randomization in prospective, two-arm comparative educational research studies.

  15. The effect of clustering on lot quality assurance sampling: a probabilistic model to calculate sample sizes for quality assessments.

    Science.gov (United States)

    Hedt-Gauthier, Bethany L; Mitsunaga, Tisha; Hund, Lauren; Olives, Casey; Pagano, Marcello

    2013-10-26

    Traditional Lot Quality Assurance Sampling (LQAS) designs assume observations are collected using simple random sampling. Alternatively, randomly sampling clusters of observations and then individuals within clusters reduces costs but decreases the precision of the classifications. In this paper, we develop a general framework for designing the cluster(C)-LQAS system and illustrate the method with the design of data quality assessments for the community health worker program in Rwanda. To determine sample size and decision rules for C-LQAS, we use the beta-binomial distribution to account for the inflated risk of errors introduced by sampling clusters at the first stage. We present general theory and code for sample size calculations. The C-LQAS sample sizes provided in this paper constrain misclassification risks below user-specified limits. Multiple C-LQAS systems meet the specified risk requirements, but numerous considerations, including per-cluster versus per-individual sampling costs, help identify optimal systems for distinct applications. We show the utility of C-LQAS for data quality assessments, but the method generalizes to numerous applications. This paper provides the necessary technical detail and supplemental code to support the design of C-LQAS for specific programs.
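The beta-binomial risk calculation at the heart of C-LQAS can be sketched with the standard mean/ICC parameterization a = p(1 - ρ)/ρ, b = (1 - p)(1 - ρ)/ρ. The decision rule, thresholds and ICC below are hypothetical, not the Rwanda design:

```python
import math

def betabinom_pmf(k, n, a, b):
    # Beta-binomial pmf via log-gamma (standard library only).
    return math.exp(
        math.lgamma(n + 1) - math.lgamma(k + 1) - math.lgamma(n - k + 1)
        + math.lgamma(k + a) + math.lgamma(n - k + b) - math.lgamma(n + a + b)
        + math.lgamma(a + b) - math.lgamma(a) - math.lgamma(b)
    )

def cluster_lqas_risks(n, d, p_low, p_high, icc):
    """Misclassification risks for an LQAS rule 'accept if X >= d' when
    the n observations are clustered, modelled as beta-binomial with
    mean p and intra-cluster correlation icc (an illustrative model,
    not the paper's exact design procedure)."""
    def p_accept(p):
        a = p * (1 - icc) / icc
        b = (1 - p) * (1 - icc) / icc
        return sum(betabinom_pmf(k, n, a, b) for k in range(d, n + 1))
    alpha = p_accept(p_low)             # risk of accepting a truly poor area
    beta_risk = 1 - p_accept(p_high)    # risk of rejecting an acceptable area
    return alpha, beta_risk

alpha, beta_risk = cluster_lqas_risks(n=60, d=45, p_low=0.5, p_high=0.8, icc=0.05)
```

Relative to a plain binomial model, the extra-binomial variance widens both tails, so the same decision rule carries larger risks; that is the inflation the paper's sample size calculations constrain.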

  16. Hierarchical Cluster Analysis of Three-Dimensional Reconstructions of Unbiased Sampled Microglia Shows not Continuous Morphological Changes from Stage 1 to 2 after Multiple Dengue Infections in Callithrix penicillata

    Science.gov (United States)

    Diniz, Daniel G.; Silva, Geane O.; Naves, Thaís B.; Fernandes, Taiany N.; Araújo, Sanderson C.; Diniz, José A. P.; de Farias, Luis H. S.; Sosthenes, Marcia C. K.; Diniz, Cristovam G.; Anthony, Daniel C.; da Costa Vasconcelos, Pedro F.; Picanço Diniz, Cristovam W.

    2016-01-01

    It is known that microglial morphology and function are related, but few studies have explored the subtleties of microglial morphological changes in response to specific pathogens. In the present report we quantitated microglial morphological changes in a monkey model of dengue disease with virus CNS invasion. To mimic the multiple infections that usually occur in endemic areas, where higher dengue infection incidence and abundant mosquito vectors carrying different serotypes coexist, subjects received once-weekly subcutaneous injections of DENV3 (genotype III)-infected culture supernatant followed 24 h later by an injection of anti-DENV2 antibody. Control animals received either weekly anti-DENV2 antibodies, or no injections. Brain sections were immunolabeled for DENV3 antigens and IBA-1. Random and systematic microglial samples were taken from the polymorphic layer of the dentate gyrus for 3-D reconstructions, where we found intense immunostaining for TNFα and DENV3 virus antigens. We submitted all bi- or multimodal morphological parameters of microglia to hierarchical cluster analysis and found two major morphological phenotypes designated types I and II. Compared to type I (stage 1), type II microglia were more complex, displaying a higher number of nodes, processes and trees and larger surface area and volumes (stage 2). Type II microglia were found only in infected monkeys, whereas type I microglia were found in both control and infected subjects. Hierarchical cluster analysis of morphological parameters of 3-D reconstructions of random and systematic selected samples in control and ADE dengue infected monkeys suggests that microglial morphological changes from stage 1 to stage 2 may not be continuous. PMID:27047345
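The hierarchical (agglomerative) clustering step used to separate the two morphological phenotypes can be sketched with plain single-linkage clustering. The two-dimensional "morphology features" below are invented stand-ins for the paper's multimodal parameters:

```python
import math

def single_linkage(points, n_clusters):
    """Plain agglomerative (single-linkage) clustering, standard
    library only: repeatedly merge the two closest clusters until
    n_clusters remain."""
    clusters = [[p] for p in points]
    def dist(c1, c2):
        # Single linkage: distance between nearest members.
        return min(math.dist(a, b) for a in c1 for b in c2)
    while len(clusters) > n_clusters:
        i, j = min(
            ((i, j) for i in range(len(clusters))
                    for j in range(i + 1, len(clusters))),
            key=lambda ij: dist(clusters[ij[0]], clusters[ij[1]]),
        )
        clusters[i].extend(clusters.pop(j))
    return clusters

# Hypothetical 2-D morphology features (e.g. rescaled surface area and
# number of nodes): two clearly separated groups of reconstructions.
type_i = [(1.0, 2.0), (1.2, 1.9), (0.9, 2.2)]
type_ii = [(5.0, 6.0), (5.2, 6.3), (4.8, 5.9)]
groups = single_linkage(type_i + type_ii, n_clusters=2)
```

Cutting the merge tree at two clusters recovers the two groups; in the paper, whether intermediate morphologies fall between the two branches is what bears on the continuity question.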

  17. A random cluster survey and a convenience sample give comparable estimates of immunity to vaccine preventable diseases in children of school age in Victoria, Australia.

    Science.gov (United States)

    Kelly, Heath; Riddell, Michaela A; Gidding, Heather F; Nolan, Terry; Gilbert, Gwendolyn L

    2002-08-19

    We compared estimates of the age-specific population immunity to measles, mumps, rubella, hepatitis B and varicella zoster viruses in Victorian school children obtained by a national sero-survey, using a convenience sample of residual sera from diagnostic laboratories throughout Australia, with those from a three-stage random cluster survey. When grouped according to school age (primary or secondary school) there was no significant difference in the estimates of immunity to measles, mumps, hepatitis B or varicella. Compared with the convenience sample, the random cluster survey estimated higher immunity to rubella in samples from both primary (98.7% versus 93.6%, P = 0.002) and secondary school students (98.4% versus 93.2%, P = 0.03). Despite some limitations, this study suggests that the collection of a convenience sample of sera from diagnostic laboratories is an appropriate sampling strategy to provide population immunity data that will inform Australia's current and future immunisation policies. Copyright 2002 Elsevier Science Ltd.

  18. Efficient sampling of complex network with modified random walk strategies

    Science.gov (United States)

    Xie, Yunya; Chang, Shuhua; Zhang, Zhipeng; Zhang, Mi; Yang, Lei

    2018-02-01

    We present two novel random walk strategies: choosing-seed-node (CSN) random walk and no-retracing (NR) random walk. Unlike classical random walk sampling, the CSN and NR strategies focus on the influence of the seed node choice and of path overlap, respectively. The three random walk samplings are applied to the Erdős–Rényi (ER), Barabási–Albert (BA), and Watts–Strogatz (WS) networks, and to the weighted USAir network. The major properties of the sampled subnets, such as sampling efficiency, degree distributions, average degree and average clustering coefficient, are then studied. Similar conclusions can be reached with all three random walk strategies. Firstly, networks with small scales and simple structures are conducive to sampling. Secondly, the average degree and the average clustering coefficient of the sampled subnet tend to the corresponding values of the original networks within a limited number of steps. Thirdly, all the degree distributions of the subnets are slightly biased to the high-degree side. However, the NR strategy performs better for the average clustering coefficient of the subnet. In the real weighted USAir network, some salient characteristics, such as the larger clustering coefficient and the fluctuation of the degree distribution, are reproduced well by these random walk strategies.
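The no-retracing (NR) strategy is straightforward to sketch: at each step the walker avoids returning directly to the node it just came from whenever any other neighbour exists. The adjacency list below is a hypothetical toy graph, not one of the paper's networks:

```python
import random

def nr_random_walk(adj, seed_node, steps):
    """No-retracing (NR) random walk sketch: avoid stepping straight
    back to the previous node when another neighbour is available."""
    walk = [seed_node]
    prev, current = None, seed_node
    for _ in range(steps):
        choices = [v for v in adj[current] if v != prev]
        if not choices:          # dead end: retracing is the only option
            choices = adj[current]
        nxt = random.choice(choices)
        walk.append(nxt)
        prev, current = current, nxt
    return walk

# Hypothetical toy graph: a 6-node ring with two chords.
adj = {0: [1, 5], 1: [0, 2, 4], 2: [1, 3], 3: [2, 4], 4: [3, 5, 1], 5: [4, 0]}
random.seed(0)
walk = nr_random_walk(adj, seed_node=0, steps=50)
sampled = set(walk)   # nodes covered by the walk form the sampled subnet
```

Compared with a classical walk, suppressing immediate backtracking reduces path overlap, so the walk covers more distinct nodes per step.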

  19. A simple sample size formula for analysis of covariance in cluster randomized trials.

    NARCIS (Netherlands)

    Teerenstra, S.; Eldridge, S.; Graff, M.J.; Hoop, E. de; Borm, G.F.

    2012-01-01

    For cluster randomized trials with a continuous outcome, the sample size is often calculated as if an analysis of the outcomes at the end of the treatment period (follow-up scores) would be performed. However, often a baseline measurement of the outcome is available or feasible to obtain. An

  20. Re-estimating sample size in cluster randomized trials with active recruitment within clusters

    NARCIS (Netherlands)

    van Schie, Sander; Moerbeek, Mirjam

    2014-01-01

    Often only a limited number of clusters can be obtained in cluster randomised trials, although many potential participants can be recruited within each cluster. Thus, active recruitment is feasible within the clusters. To obtain an efficient sample size in a cluster randomised trial, the cluster

  1. Sample size adjustments for varying cluster sizes in cluster randomized trials with binary outcomes analyzed with second-order PQL mixed logistic regression.

    Science.gov (United States)

    Candel, Math J J M; Van Breukelen, Gerard J P

    2010-06-30

    Adjustments of sample size formulas are given for varying cluster sizes in cluster randomized trials with a binary outcome when testing the treatment effect with mixed effects logistic regression using second-order penalized quasi-likelihood estimation (PQL). Starting from first-order marginal quasi-likelihood (MQL) estimation of the treatment effect, the asymptotic relative efficiency of unequal versus equal cluster sizes is derived. A Monte Carlo simulation study shows this asymptotic relative efficiency to be rather accurate for realistic sample sizes, when employing second-order PQL. An approximate, simpler formula is presented to estimate the efficiency loss due to varying cluster sizes when planning a trial. In many cases sampling 14 per cent more clusters is sufficient to repair the efficiency loss due to varying cluster sizes. Since current closed-form formulas for sample size calculation are based on first-order MQL, planning a trial also requires a conversion factor to obtain the variance of the second-order PQL estimator. In a second Monte Carlo study, this conversion factor turned out to be 1.25 at most. (c) 2010 John Wiley & Sons, Ltd.
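
    For intuition about the efficiency loss from varying cluster sizes, a common design-effect approximation can be sketched. This uses the widely cited formula with the coefficient of variation (CV) of cluster sizes, as an illustrative stand-in rather than the second-order PQL correction derived in the record:

```python
import math

def deff_unequal(m_bar, cv, icc):
    """Design effect with varying cluster sizes (common approximation):
    DEFF = 1 + (((1 + CV^2) * m_bar) - 1) * ICC."""
    return 1.0 + (((1.0 + cv ** 2) * m_bar) - 1.0) * icc

def clusters_needed(n_indep, m_bar, cv, icc):
    """Clusters required to match a simple random sample of n_indep
    subjects; round() guards against floating-point fuzz before ceil."""
    return math.ceil(round(deff_unequal(m_bar, cv, icc) * n_indep / m_bar, 6))

k_equal = clusters_needed(400, m_bar=20, cv=0.0, icc=0.05)    # equal sizes
k_varying = clusters_needed(400, m_bar=20, cv=0.6, icc=0.05)  # size CV of 0.6
```

    With these invented inputs the varying-size design needs several more clusters than the equal-size design, the kind of inflation the record quantifies (about 14 per cent in many of its cases).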

  2. TWO-STAGE CHARACTER CLASSIFICATION: A COMBINED APPROACH OF CLUSTERING AND SUPPORT VECTOR CLASSIFIERS

    NARCIS (Netherlands)

    Vuurpijl, L.; Schomaker, L.

    2000-01-01

    This paper describes a two-stage classification method for (1) classification of isolated characters and (2) verification of the classification result. Character prototypes are generated using hierarchical clustering. For those prototypes known to sometimes produce wrong classification results, a

  3. Cluster randomization and political philosophy.

    Science.gov (United States)

    Chwang, Eric

    2012-11-01

    In this paper, I will argue that, while the ethical issues raised by cluster randomization can be challenging, they are not new. My thesis divides neatly into two parts. In the first, easier part I argue that many of the ethical challenges posed by cluster randomized human subjects research are clearly present in other types of human subjects research, and so are not novel. In the second, more difficult part I discuss the thorniest ethical challenge for cluster randomized research--cases where consent is genuinely impractical to obtain. I argue that once again these cases require no new analytic insight; instead, we should look to political philosophy for guidance. In other words, the most serious ethical problem that arises in cluster randomized research also arises in political philosophy. © 2011 Blackwell Publishing Ltd.

  4. Quantification of physical activity using the QAPACE Questionnaire: a two stage cluster sample design survey of children and adolescents attending urban school.

    Science.gov (United States)

    Barbosa, Nicolas; Sanchez, Carlos E; Patino, Efrain; Lozano, Benigno; Thalabard, Jean C; LE Bozec, Serge; Rieu, Michel

    2016-05-01

    Quantification of physical activity as energy expenditure from youth onward is important for the prevention of chronic non-communicable diseases in adulthood. The aim was to quantify physical activity, expressed as daily energy expenditure (DEE), in schoolchildren and adolescents aged 8-16 years in Bogotá, by age, gender and socioeconomic level (SEL). This is a two-stage cluster survey sample drawn from a universe of 4,700 schools and 760,000 students across the three socioeconomic levels in Bogotá (low, medium and high). The random sample comprised 20 schools and 1,840 students (904 boys and 936 girls). Anticipating participant dropout and inconsistent questionnaire responses, the sample size was increased: 6 individuals of each gender for each of the nine age groups were selected in each school, giving a total sample of 2,160 individuals. Selected students completed the QAPACE questionnaire under supervision. The data were analyzed by comparing means with a multivariate general linear model. Fixed factors were gender (boys and girls), age (8 to 16 years) and tri-strata SEL (low, medium and high); the independent variables assessed were height, weight and leisure time (hours/day); the dependent variable was DEE (kJ·kg⁻¹·day⁻¹) during leisure time (DEE-LT), school time (DEE-ST) and vacation time (DEE-VT), and total mean DEE per year (DEEm-TY). Differences in DEE by gender and SEL were significant for all variables in boys (age-SEL only for DEE-VT) and in girls. Post hoc multiple comparisons with age, using Fisher's Least Significant Difference (LSD) test, were significant for all variables. Across genders and SELs, girls had the higher values except in the high SEL (5-6); boys had higher values in DEE-LT, DEE-ST and DEE-VT, except for DEEm-TY in SEL (5-6). In SEL (5-6), all DEEs for both genders were highest.

  5. Multiobjective Two-Stage Stochastic Programming Problems with Interval Discrete Random Variables

    Directory of Open Access Journals (Sweden)

    S. K. Barik

    2012-01-01

    Most real-life decision-making problems have more than one conflicting and incommensurable objective function. In this paper, we present a multiobjective two-stage stochastic linear programming problem in which some parameters of the linear constraints are interval-type discrete random variables with known probability distribution. Randomness of the discrete intervals is considered for the model parameters. Further, the concepts of best-optimum and worst-optimum solutions are analyzed in two-stage stochastic programming. To solve the stated problem, we first remove the randomness of the problem and formulate an equivalent deterministic linear programming model with multiobjective interval coefficients. The deterministic multiobjective model is then solved using the weighting method, where we apply the solution procedure of the interval linear programming technique. We obtain the upper and lower bounds of the objective function as the best and worst values, respectively, which highlights the possible risk involved in the decision making. A numerical example demonstrates the proposed solution procedure.

  6. Evaluation of single and two-stage adaptive sampling designs for estimation of density and abundance of freshwater mussels in a large river

    Science.gov (United States)

    Smith, D.R.; Rogala, J.T.; Gray, B.R.; Zigler, S.J.; Newton, T.J.

    2011-01-01

    Reliable estimates of abundance are needed to assess consequences of proposed habitat restoration and enhancement projects on freshwater mussels in the Upper Mississippi River (UMR). Although there is general guidance on sampling techniques for population assessment of freshwater mussels, the actual performance of sampling designs can depend critically on the population density and spatial distribution at the project site. To evaluate various sampling designs, we simulated sampling of populations, which varied in density and degree of spatial clustering. Because of logistics and costs of large river sampling and spatial clustering of freshwater mussels, we focused on adaptive and non-adaptive versions of single and two-stage sampling. The candidate designs performed similarly in terms of precision (CV) and probability of species detection for fixed sample size. Both CV and species detection were determined largely by density, spatial distribution and sample size. However, designs did differ in the rate that occupied quadrats were encountered. Occupied units had a higher probability of selection using adaptive designs than conventional designs. We used two measures of cost: sample size (i.e. number of quadrats) and distance travelled between the quadrats. Adaptive and two-stage designs tended to reduce distance between sampling units, and thus performed better when distance travelled was considered. Based on the comparisons, we provide general recommendations on the sampling designs for the freshwater mussels in the UMR, and presumably other large rivers.
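
    The simulation idea, generating a spatially clustered population and measuring the precision of a candidate design by Monte Carlo, can be sketched as follows. This is a toy one-dimensional stand-in with invented parameters, evaluating only a simple-random-sampling baseline, not the authors' UMR simulation or the adaptive designs:

```python
import random
import statistics

def simulate_population(n_cells=1000, n_patches=20, patch_mean=30, rng=None):
    """Toy clustered population: most cells empty, a few dense patches
    (a stand-in for mussel beds; all parameters are invented)."""
    rng = rng or random.Random(42)
    pop = [0] * n_cells
    for _ in range(n_patches):
        centre = rng.randrange(n_cells)
        for offset in range(-2, 3):          # each patch spans 5 adjacent cells
            pop[(centre + offset) % n_cells] += rng.randint(0, patch_mean)
    return pop

def estimate_cv(pop, n_quadrats=50, reps=500, rng=None):
    """Monte Carlo CV of the mean-density estimator under simple random
    sampling of quadrats (one possible non-adaptive baseline)."""
    rng = rng or random.Random(7)
    ests = []
    for _ in range(reps):
        sample = rng.sample(pop, n_quadrats)
        ests.append(sum(sample) / n_quadrats)
    return statistics.stdev(ests) / statistics.mean(ests)

pop = simulate_population()
cv = estimate_cv(pop)
```

    Comparing designs then amounts to swapping the sampling routine while holding the simulated population fixed, which is how precision and detection probability can be contrasted at equal cost.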

  7. Effect of Behavior Modification on Outcome in Early- to Moderate-Stage Chronic Kidney Disease: A Cluster-Randomized Trial.

    Science.gov (United States)

    Yamagata, Kunihiro; Makino, Hirofumi; Iseki, Kunitoshi; Ito, Sadayoshi; Kimura, Kenjiro; Kusano, Eiji; Shibata, Takanori; Tomita, Kimio; Narita, Ichiei; Nishino, Tomoya; Fujigaki, Yoshihide; Mitarai, Tetsuya; Watanabe, Tsuyoshi; Wada, Takashi; Nakamura, Teiji; Matsuo, Seiichi

    2016-01-01

    Owing to recent changes in our understanding of the underlying cause of chronic kidney disease (CKD), the importance of lifestyle modification for preventing the progression of kidney dysfunction and complications has become obvious. In addition, effective cooperation between general physicians (GPs) and nephrologists is essential to ensure a better care system for CKD treatment. In this cluster-randomized study, we studied the effect of behavior modification on the outcome of early- to moderate-stage CKD. Stratified open cluster-randomized trial. A total of 489 GPs belonging to 49 local medical associations (clusters) in Japan. A total of 2,379 patients (1,195 in group A (standard intervention) and 1,184 in group B (advanced intervention)) aged between 40 and 74 years, who had CKD and were under consultation with GPs. All patients were managed in accordance with the current CKD guidelines. The group B clusters received three additional interventions: patients received both educational intervention for lifestyle modification and a CKD status letter, attempting to prevent their withdrawal from treatment, and the group B GPs received data sheets to facilitate reducing the gap between target and practice. The primary outcome measures were 1) the non-adherence rate of accepting continuous medical follow-up of the patients, 2) the collaboration rate between GPs and nephrologists, and 3) the progression of CKD. The rate of discontinuous clinical visits was significantly lower in group B (16.2% in group A vs. 11.5% in group B, p = 0.01). Significantly higher referral and co-treatment rates were observed in group B (p < 0.01). Our care system achieved behavior modification of CKD patients, namely, significantly lower discontinuous clinical visits, and behavior modification of both GPs and nephrologists, namely significantly higher referral and co-treatment rates, resulting in the retardation of CKD progression, especially in patients with proteinuric Stage 3 CKD.

  8. A two-stage Bayesian design with sample size reestimation and subgroup analysis for phase II binary response trials.

    Science.gov (United States)

    Zhong, Wei; Koopmeiners, Joseph S; Carlin, Bradley P

    2013-11-01

    Frequentist sample size determination for binary outcome data in a two-arm clinical trial requires initial guesses of the event probabilities for the two treatments. Misspecification of these event rates may lead to a poor estimate of the necessary sample size. In contrast, the Bayesian approach, which considers the treatment effect to be a random variable having some distribution, may offer a better, more flexible approach. The Bayesian sample size proposed by Whitehead et al. (2008) for exploratory studies on efficacy justifies the acceptable minimum sample size by a "conclusiveness" condition. In this work, we introduce a new two-stage Bayesian design with sample size reestimation at the interim stage. Our design inherits the properties of good interpretation and easy implementation from Whitehead et al. (2008), generalizes their method to a two-sample setting, and uses a fully Bayesian predictive approach to reduce an overly large initial sample size when necessary. Moreover, our design can be extended to allow patient-level covariates via logistic regression, now adjusting sample size within each subgroup based on interim analyses. We illustrate the benefits of our approach with a design in non-Hodgkin lymphoma with a simple binary covariate (patient gender), offering an initial step toward within-trial personalized medicine. Copyright © 2013 Elsevier Inc. All rights reserved.

  9. Evaluating the Validity of a Two-stage Sample in a Birth Cohort Established from Administrative Databases.

    Science.gov (United States)

    El-Zein, Mariam; Conus, Florence; Benedetti, Andrea; Parent, Marie-Elise; Rousseau, Marie-Claude

    2016-01-01

    When using administrative databases for epidemiologic research, a subsample of subjects can be interviewed, eliciting information on undocumented confounders. This article presents a thorough investigation of the validity of a two-stage sample encompassing an assessment of nonparticipation and quantification of the extent of bias. Established through record linkage of administrative databases, the Québec Birth Cohort on Immunity and Health (n = 81,496) aims to study the association between Bacillus Calmette-Guérin vaccination and asthma. Among 76,623 subjects classified in four Bacillus Calmette-Guérin-asthma strata, a two-stage sampling strategy with a balanced design was used to randomly select individuals for interviews. We compared stratum-specific sociodemographic characteristics and healthcare utilization of stage 2 participants (n = 1,643) with those of eligible nonparticipants (n = 74,980) and nonrespondents (n = 3,157). We used logistic regression to determine whether participation varied across strata according to these characteristics. The effect of nonparticipation was described by the relative odds ratio (ROR = OR_participants / OR_source population) for the association between sociodemographic characteristics and asthma. Parental age at childbirth, area of residence, family income, and healthcare utilization were comparable between groups. Participants were slightly more likely to be women and have a mother born in Québec. Participation did not vary across strata by sex, parental birthplace, or material and social deprivation. Estimates were not biased by nonparticipation; most RORs were below one and bias never exceeded 20%. Our analyses evaluate and provide a detailed demonstration of the validity of a two-stage sample for researchers assembling similar research infrastructures.
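
    The relative odds ratio used here is simple to compute from 2×2 tables. A sketch with invented counts (a ROR near 1 indicates little participation bias):

```python
def odds_ratio(a, b, c, d):
    """OR for a 2x2 table: exposed cases a, exposed controls b,
    unexposed cases c, unexposed controls d."""
    return (a * d) / (b * c)

def relative_odds_ratio(table_participants, table_source):
    """ROR = OR among stage-2 participants / OR in the source population,
    as defined in the record above."""
    return odds_ratio(*table_participants) / odds_ratio(*table_source)

# Invented counts: (exposed cases, exposed controls, unexposed cases,
# unexposed controls) for the interviewed subsample and the full cohort.
ror = relative_odds_ratio((40, 60, 30, 70), (4000, 6200, 3100, 6900))
bias_pct = abs(ror - 1) * 100
```

    With these made-up tables the subsample slightly overstates the association, a bias of under 10 per cent, comfortably inside the 20 per cent bound reported in the record.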

  10. Sample size reassessment for a two-stage design controlling the false discovery rate.

    Science.gov (United States)

    Zehetmayer, Sonja; Graf, Alexandra C; Posch, Martin

    2015-11-01

    Sample size calculations for gene expression microarray and NGS-RNA-Seq experiments are challenging because the overall power depends on unknown quantities such as the proportion of true null hypotheses and the distribution of the effect sizes under the alternative. We propose a two-stage design with an adaptive interim analysis where these quantities are estimated from the interim data. The second-stage sample size is chosen based on these estimates to achieve a specific overall power. The proposed procedure controls the power in all considered scenarios except for very low first-stage sample sizes. The false discovery rate (FDR) is controlled despite the data-dependent choice of sample size. The two-stage design can be a useful tool to determine the sample size of high-dimensional studies if in the planning phase there is high uncertainty regarding the expected effect sizes and variability.
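
    Two building blocks of such a design, a Storey-type interim estimate of the proportion of true nulls and Benjamini-Hochberg FDR control, can be sketched as follows. This is illustrative only, not the authors' full reestimation rule:

```python
import random

def estimate_pi0(pvals, lam=0.5):
    """Storey-type estimate of the proportion of true nulls:
    pi0 ~= #{p > lambda} / ((1 - lambda) * m)."""
    m = len(pvals)
    return min(1.0, sum(p > lam for p in pvals) / ((1 - lam) * m))

def bh_rejections(pvals, alpha=0.05):
    """Benjamini-Hochberg step-up procedure; returns the number of
    rejected hypotheses."""
    m = len(pvals)
    k = 0
    for i, p in enumerate(sorted(pvals), start=1):
        if p <= alpha * i / m:
            k = i
    return k

# Simulated interim stage: 1000 tests, 10% true alternatives with
# very small p-values (all numbers invented).
rng = random.Random(9)
m, true_alt = 1000, 100
pvals = [rng.uniform(0, 0.001) for _ in range(true_alt)] + \
        [rng.random() for _ in range(m - true_alt)]
pi0_hat = estimate_pi0(pvals)
n_rejected = bh_rejections(pvals)
```

    In the actual design, estimates like `pi0_hat` and the interim effect-size distribution drive the choice of the second-stage sample size; the sketch stops at the estimation step.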

  11. A simple and efficient alternative to implementing systematic random sampling in stereological designs without a motorized microscope stage.

    Science.gov (United States)

    Melvin, Neal R; Poda, Daniel; Sutherland, Robert J

    2007-10-01

    When properly applied, stereology is a very robust and efficient method to quantify a variety of parameters from biological material. A common sampling strategy in stereology is systematic random sampling, which involves choosing a random start point outside the structure of interest and then sampling relevant objects at sites placed at pre-determined, equidistant intervals. This has proven to be a very efficient sampling strategy and is used widely in stereological designs. At the microscopic level, this is most often achieved through the use of a motorized stage that facilitates the systematic random stepping across the structure of interest. Here, we report a simple, precise and cost-effective software-based alternative to accomplishing systematic random sampling under the microscope. We believe that this approach will facilitate the use of stereological designs that employ systematic random sampling in laboratories that lack the resources to acquire costly, fully automated systems.
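
    Systematic random sampling itself is easy to reproduce in software: one random offset, then equidistant sites. A sketch of the 1-D and 2-D (grid) versions, with invented field dimensions:

```python
import random

def systematic_positions(extent, interval, rng=None):
    """1-D systematic random sampling: one random start in [0, interval),
    then equidistant sites across the extent."""
    rng = rng or random.Random(1)
    pos = []
    x = rng.uniform(0, interval)
    while x < extent:
        pos.append(x)
        x += interval
    return pos

def systematic_grid(width, height, dx, dy, rng=None):
    """2-D version: a random offset on both axes, then a regular grid of
    sampling sites, a software stand-in for motorized-stage stepping."""
    rng = rng or random.Random(2)
    x0, y0 = rng.uniform(0, dx), rng.uniform(0, dy)
    xs = [x0 + i * dx for i in range(int((width - x0) // dx) + 1)]
    ys = [y0 + j * dy for j in range(int((height - y0) // dy) + 1)]
    return [(x, y) for x in xs for y in ys]

# Invented specimen extent in micrometres, 100 um sampling steps.
sites = systematic_grid(1000, 800, dx=100, dy=100)
```

    Every site is then visited manually, with the software merely telling the operator where the next field of view lies.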

  12. Meta-analysis of Gaussian individual patient data: Two-stage or not two-stage?

    Science.gov (United States)

    Morris, Tim P; Fisher, David J; Kenward, Michael G; Carpenter, James R

    2018-04-30

    Quantitative evidence synthesis through meta-analysis is central to evidence-based medicine. For well-documented reasons, the meta-analysis of individual patient data is held in higher regard than aggregate data. With access to individual patient data, the analysis is not restricted to a "two-stage" approach (combining estimates and standard errors) but can estimate parameters of interest by fitting a single model to all of the data, a so-called "one-stage" analysis. There has been debate about the merits of one- and two-stage analysis. Arguments for one-stage analysis have typically noted that a wider range of models can be fitted and overall estimates may be more precise. The two-stage side has emphasised that the models that can be fitted in two stages are sufficient to answer the relevant questions, with less scope for mistakes because there are fewer modelling choices to be made in the two-stage approach. For Gaussian data, we consider the statistical arguments for flexibility and precision in small-sample settings. Regarding flexibility, several of the models that can be fitted only in one stage may not be of serious interest to most meta-analysis practitioners. Regarding precision, we consider fixed- and random-effects meta-analysis and see that, for a model making certain assumptions, the number of stages used to fit this model is irrelevant; the precision will be approximately equal. Meta-analysts should choose modelling assumptions carefully. Sometimes relevant models can only be fitted in one stage. Otherwise, meta-analysts are free to use whichever procedure is most convenient to fit the identified model. © 2018 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.
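
    The "two-stage" approach referred to here is the classic inverse-variance pooling of per-study estimates. A minimal fixed-effect sketch with invented study results:

```python
import math

def fixed_effect_meta(estimates, variances):
    """Two-stage fixed-effect meta-analysis: inverse-variance weighted
    pooling of per-study estimates (the 'combine estimates and standard
    errors' approach described above)."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, estimates)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))
    return pooled, se

# Invented per-study treatment effects and their variances.
est, se = fixed_effect_meta([0.30, 0.10, 0.25], [0.04, 0.02, 0.08])
```

    The record's point is that, under matching assumptions, fitting one hierarchical model to all individual patient data gives approximately the same precision as this pooling, so the number of stages is a matter of convenience.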

  13. Comparing the performance of cluster random sampling and integrated threshold mapping for targeting trachoma control, using computer simulation.

    Directory of Open Access Journals (Sweden)

    Jennifer L Smith

    Implementation of trachoma control strategies requires reliable district-level estimates of trachomatous inflammation-follicular (TF), generally collected using the recommended gold-standard cluster randomized surveys (CRS). Integrated Threshold Mapping (ITM) has been proposed as an integrated and cost-effective means of rapidly surveying trachoma in order to classify districts according to treatment thresholds. ITM differs from CRS in a number of important ways, including the use of a school-based sampling platform for children aged 1-9 and a different age distribution of participants. This study uses computerised sampling simulations to compare the performance of these survey designs and evaluate the impact of varying key parameters. Realistic pseudo gold-standard data for 100 districts were generated that maintained the relative risk of disease between important sub-groups and incorporated empirical estimates of disease clustering at the household, village and district level. To simulate the different sampling approaches, 20 clusters were selected from each district, with individuals sampled according to the protocol for ITM and CRS. Results showed that ITM generally under-estimated the true prevalence of TF over a range of epidemiological settings and introduced more district misclassification according to treatment thresholds than did CRS. However, the extent of underestimation and resulting misclassification was found to depend on three main factors: (i) the district prevalence of TF; (ii) the relative risk of TF between enrolled and non-enrolled children within clusters; and (iii) the enrollment rate in schools. Although in some contexts the two methodologies may be equivalent, ITM can introduce a bias-dependent shift as prevalence of TF increases, resulting in a greater risk of misclassification around treatment thresholds.
In addition to strengthening the evidence base around choice of trachoma survey methodologies, this study illustrates

  14. Text Clustering Algorithm Based on Random Cluster Core

    Directory of Open Access Journals (Sweden)

    Huang Long-Jun

    2016-01-01

    Clustering has become a popular text mining algorithm, but huge data volumes place higher demands on the accuracy and performance of text mining. In view of the performance bottleneck of traditional text clustering algorithms, this paper proposes a text clustering algorithm with random features. It is a density-based text clustering algorithm; using neighboring heuristic rules, the concept of a random cluster core is introduced, which effectively reduces the complexity of the distance calculation.

  15. Effect of Behavior Modification on Outcome in Early- to Moderate-Stage Chronic Kidney Disease: A Cluster-Randomized Trial.

    Directory of Open Access Journals (Sweden)

    Kunihiro Yamagata

    Owing to recent changes in our understanding of the underlying cause of chronic kidney disease (CKD), the importance of lifestyle modification for preventing the progression of kidney dysfunction and complications has become obvious. In addition, effective cooperation between general physicians (GPs) and nephrologists is essential to ensure a better care system for CKD treatment. In this cluster-randomized study, we studied the effect of behavior modification on the outcome of early- to moderate-stage CKD in a stratified open cluster-randomized trial. A total of 489 GPs belonging to 49 local medical associations (clusters) in Japan participated, with a total of 2,379 patients (1,195 in group A (standard intervention) and 1,184 in group B (advanced intervention)) aged between 40 and 74 years, who had CKD and were under consultation with GPs. All patients were managed in accordance with the current CKD guidelines. The group B clusters received three additional interventions: patients received both educational intervention for lifestyle modification and a CKD status letter, attempting to prevent their withdrawal from treatment, and the group B GPs received data sheets to facilitate reducing the gap between target and practice. The primary outcome measures were 1) the non-adherence rate of accepting continuous medical follow-up of the patients, 2) the collaboration rate between GPs and nephrologists, and 3) the progression of CKD. The rate of discontinuous clinical visits was significantly lower in group B (16.2% in group A vs. 11.5% in group B, p = 0.01). Significantly higher referral and co-treatment rates were observed in group B (p < 0.01). The average eGFR deterioration rate tended to be lower in group B (group A: 2.6±5.8 ml/min/1.73 m²/year, group B: 2.4±5.1 ml/min/1.73 m²/year, p = 0.07). A significant difference in eGFR deterioration rate was observed in subjects with Stage 3 CKD (group A: 2.4±5.9 ml/min/1.73 m²/year, group B: 1.9±4.4 ml/min/1.73 m²/year, p = 0.03). Our care system achieved behavior modification of CKD patients, namely, significantly lower discontinuous clinical visits, and of both GPs and nephrologists, namely, significantly higher referral and co-treatment rates, resulting in the retardation of CKD progression, especially in patients with proteinuric Stage 3 CKD.

  16. Adjuvant therapy in stage I and stage II epithelial ovarian cancer. Results of two prospective randomized trials

    International Nuclear Information System (INIS)

    Young, R.C.; Walton, L.A.; Ellenberg, S.S.; Homesley, H.D.; Wilbanks, G.D.; Decker, D.G.; Miller, A.; Park, R.; Major, F. Jr.

    1990-01-01

    About a third of patients with ovarian cancer present with localized disease; despite surgical resection, up to half the tumors recur. Since it has not been established whether adjuvant treatment can benefit such patients, we conducted two prospective, randomized national cooperative trials of adjuvant therapy in patients with localized ovarian carcinoma. All patients underwent surgical resection plus comprehensive staging and, 18 months later, surgical re-exploration. In the first trial, 81 patients with well-differentiated or moderately well differentiated cancers confined to the ovaries (Stages Ia(i) and Ib(i)) were assigned to receive either no chemotherapy or melphalan (0.2 mg per kilogram of body weight per day for five days, repeated every four to six weeks for up to 12 cycles). After a median follow-up of more than six years, there were no significant differences between the patients given no chemotherapy and those treated with melphalan with respect to either five-year disease-free survival or overall survival. In the second trial, 141 patients with poorly differentiated Stage I tumors or with cancer outside the ovaries but limited to the pelvis (Stage II) were randomly assigned to treatment with either melphalan (in the same regimen as above) or a single intraperitoneal dose of ³²P (15 mCi) at the time of surgery. In this trial (median follow-up, greater than 6 years) the outcomes for the two treatment groups were similar with respect to five-year disease-free survival (80 percent in both groups) and overall survival (81 percent with melphalan vs. 78 percent with ³²P; P = 0.48). We conclude that in patients with localized ovarian cancer, comprehensive staging at the time of surgical resection can serve to identify those patients (as defined by the first trial) who can be followed without adjuvant chemotherapy.

  17. Topology in two dimensions. II - The Abell and ACO cluster catalogues

    Science.gov (United States)

    Plionis, Manolis; Valdarnini, Riccardo; Coles, Peter

    1992-09-01

    We apply a method for quantifying the topology of projected galaxy clustering to the Abell and ACO catalogues of rich clusters. We use numerical simulations to quantify the statistical bias involved in using high peaks to define the large-scale structure, and we use the results obtained to correct our observational determinations for this known selection effect and also for possible errors introduced by boundary effects. We find that the Abell cluster sample is consistent with clusters being identified with high peaks of a Gaussian random field, but that the ACO shows a slight meatball shift away from the Gaussian behavior over and above that expected purely from the high-peak selection. The most conservative explanation of this effect is that it is caused by some artefact of the procedure used to select the clusters in the two samples.

  18. Cluster designs to assess the prevalence of acute malnutrition by lot quality assurance sampling: a validation study by computer simulation.

    Science.gov (United States)

    Olives, Casey; Pagano, Marcello; Deitchler, Megan; Hedt, Bethany L; Egge, Kari; Valadez, Joseph J

    2009-04-01

    Traditional lot quality assurance sampling (LQAS) methods require simple random sampling to guarantee valid results. However, cluster sampling has been proposed to reduce the number of random starting points. This study uses simulations to examine the classification error of two such designs, a 67x3 (67 clusters of three observations) and a 33x6 (33 clusters of six observations) sampling scheme to assess the prevalence of global acute malnutrition (GAM). Further, we explore the use of a 67x3 sequential sampling scheme for LQAS classification of GAM prevalence. Results indicate that, for independent clusters with moderate intracluster correlation for the GAM outcome, the three sampling designs maintain approximate validity for LQAS analysis. Sequential sampling can substantially reduce the average sample size that is required for data collection. The presence of intercluster correlation can impact dramatically the classification error that is associated with LQAS analysis.
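
    The essence of this validation can be sketched by simulating a 67×3 cluster-LQAS design, with intracluster correlation induced through a beta-binomial model. The decision rule and all parameters below are illustrative assumptions, not those of the study:

```python
import random

def simulate_lqas(n_clusters=67, m=3, prev=0.10, rho=0.1, threshold=0.15,
                  decision_rule=None, reps=2000, rng=None):
    """Cluster-LQAS simulation sketch: cluster-level prevalences follow a
    Beta distribution whose spread encodes intracluster correlation rho,
    children within a cluster are Bernoulli(p_cluster), and the lot is
    'accepted' (prevalence judged below threshold) when the total number
    of cases is at most a decision rule d."""
    rng = rng or random.Random(11)
    n_total = n_clusters * m
    d = decision_rule if decision_rule is not None else int(threshold * n_total)
    a = prev * (1 - rho) / rho          # Beta parameters giving mean=prev
    b = (1 - prev) * (1 - rho) / rho    # and intracluster correlation rho
    accept = 0
    for _ in range(reps):
        cases = 0
        for _ in range(n_clusters):
            p_c = rng.betavariate(a, b)
            cases += sum(rng.random() < p_c for _ in range(m))
        if cases <= d:
            accept += 1
    return accept / reps

accept_rate = simulate_lqas()   # true prevalence 10%, threshold 15%
```

    Classification error is then the acceptance rate when the true prevalence is above the threshold, and one minus it when below; varying `rho` shows the sensitivity to intracluster (and, in the study's terms, intercluster) correlation.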

  19. A Smoothing Algorithm for a New Two-Stage Stochastic Model of Supply Chain Based on Sample Average Approximation

    Directory of Open Access Journals (Sweden)

    Liu Yang

    2017-01-01

    We construct a new two-stage stochastic model of a supply chain with multiple factories and distributors for a perishable product. By introducing a second-order stochastic dominance (SSD) constraint, we can describe the preference consistency of the risk taker while minimizing the expected cost of the company. To solve this problem, we convert it into an equivalent one-stage stochastic model; we then use the sample average approximation (SAA) method to approximate the expected values of the underlying random functions. A smoothing approach is proposed with which we can get the global solution while avoiding the introduction of new variables and constraints. Meanwhile, we investigate the convergence of the optimal value of the transformed model and show that, with probability approaching one at an exponential rate, the optimal value converges to its counterpart as the sample size increases. Numerical results show the effectiveness of the proposed algorithm and analysis.
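
    Sample average approximation itself is simple to illustrate: replace the expectation by an average over sampled scenarios, then optimise the first-stage decision. A newsvendor-style stand-in with invented parameters, not the authors' supply-chain model or its SSD constraint:

```python
import random

def saa_newsvendor(order_grid, cost=1.0, price=2.5,
                   n_scenarios=2000, rng=None):
    """SAA sketch: approximate E[profit(x, D)] by the average over sampled
    demand scenarios and pick the best first-stage order x on a grid."""
    rng = rng or random.Random(3)
    demands = [rng.gauss(100, 20) for _ in range(n_scenarios)]

    def avg_profit(x):
        # Second stage: sell min(x, d) at `price`, having paid `cost` per unit.
        return sum(price * min(x, d) - cost * x for d in demands) / n_scenarios

    return max(order_grid, key=avg_profit)

best = saa_newsvendor(range(50, 151))
```

    As the record notes, the SAA optimum converges to the true optimum as the number of scenarios grows; here the true optimum is the 60th demand percentile, about 105 units.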

  20. A Two-Stage Method to Determine Optimal Product Sampling considering Dynamic Potential Market

    Science.gov (United States)

    Hu, Zhineng; Lu, Wei; Han, Bing

    2015-01-01

    This paper develops an optimization model for the diffusion effects of free samples under dynamic changes in the potential market, based on the characteristics of an independent product, and presents a two-stage method to determine the sampling level. The impact analysis of the key factors on the sampling level shows that an increase in the external coefficient or internal coefficient has a negative influence on the sampling level, the changing rate of the potential market has no significant influence, and repeat purchase has a positive one. Using logistic analysis and regression analysis, the global sensitivity analysis gives a whole-picture analysis of the interaction of all parameters, which provides a two-stage method to estimate the impact of the relevant parameters in the case of parameter inaccuracy and to construct a 95% confidence interval for the predicted sampling level. Finally, the paper provides the operational steps to improve the accuracy of the parameter estimation and an innovative way to estimate the sampling level. PMID:25821847
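
    The external and internal coefficients referred to here are the p and q of a Bass-type diffusion model, with free samples seeding the initial adopter base. A discrete-time sketch with invented parameters, not the paper's optimisation model:

```python
def bass_adopters(p, q, market, sample_level, periods=20):
    """Discrete-time Bass diffusion sketch: external coefficient p
    (innovation), internal coefficient q (word of mouth); free samples
    convert a fraction of the potential market at t=0."""
    adopters = sample_level * market
    path = [adopters]
    for _ in range(periods):
        remaining = market - adopters
        new = (p + q * adopters / market) * remaining
        adopters += new
        path.append(adopters)
    return path

# Invented coefficients and market size.
with_samples = bass_adopters(0.03, 0.4, market=10000, sample_level=0.05)
no_samples = bass_adopters(0.03, 0.4, market=10000, sample_level=0.0)
```

    Comparing the two trajectories shows the head start that sampling buys; an optimisation like the paper's would trade that gain against the cost of the samples.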

  1. Relative efficiency of unequal versus equal cluster sizes in cluster randomized trials using generalized estimating equation models.

    Science.gov (United States)

    Liu, Jingxia; Colditz, Graham A

    2018-05-01

    There is growing interest in conducting cluster randomized trials (CRTs). For simplicity in sample size calculation, the cluster sizes are assumed to be identical across all clusters. However, equal cluster sizes are not guaranteed in practice. Therefore, the relative efficiency (RE) of unequal versus equal cluster sizes has been investigated when testing the treatment effect. One of the most important approaches to analyze a set of correlated data is the generalized estimating equation (GEE) proposed by Liang and Zeger, in which the "working correlation structure" is introduced and the association pattern depends on a vector of association parameters denoted by ρ. In this paper, we utilize GEE models to test the treatment effect in a two-group comparison for continuous, binary, or count data in CRTs. The variances of the estimator of the treatment effect are derived for the different types of outcome. RE is defined as the ratio of the variance of the estimator of the treatment effect for equal to unequal cluster sizes. We discuss a working correlation structure commonly used in CRTs, the exchangeable structure, and derive a simpler formula for the RE with continuous, binary, and count outcomes. Finally, REs are investigated for several scenarios of cluster size distributions through simulation studies. We propose an adjusted sample size due to efficiency loss. Additionally, we also propose an optimal sample size estimation based on the GEE models under a fixed budget for known and unknown association parameter (ρ) in the working correlation structure within the cluster. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
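
    Under an exchangeable working correlation, the information a cluster of size m contributes to the treatment-effect estimate is proportional to m / (1 + (m - 1)ρ), which makes the RE easy to sketch. This standard GEE result is used here as an assumption; it is not the paper's full derivation for binary and count outcomes:

```python
def treatment_var(cluster_sizes, rho, sigma2=1.0):
    """Variance of the treatment-effect estimator, up to a constant:
    cluster i contributes information m_i / (1 + (m_i - 1) * rho)."""
    info = sum(m / (1.0 + (m - 1) * rho) for m in cluster_sizes)
    return sigma2 / info

def relative_efficiency(unequal_sizes, rho):
    """RE = Var(equal) / Var(unequal) at the same total sample size."""
    n, k = sum(unequal_sizes), len(unequal_sizes)
    equal_sizes = [n / k] * k
    return treatment_var(equal_sizes, rho) / treatment_var(unequal_sizes, rho)

# Invented cluster-size distribution, same total as five clusters of 20.
re = relative_efficiency([5, 10, 15, 20, 50], rho=0.05)
```

    Because the information term is concave in m, any inequality in cluster sizes pushes the RE below one, which is exactly the efficiency loss the adjusted sample size is meant to repair.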

  2. A comparison of two sampling approaches for assessing the urban forest canopy cover from aerial photography.

    Science.gov (United States)

    Ucar Zennure; Pete Bettinger; Krista Merry; Jacek Siry; J.M. Bowker

    2016-01-01

    Two different sampling approaches for estimating urban tree canopy cover were applied to two medium-sized cities in the United States, in conjunction with two freely available remotely sensed imagery products. A random point-based sampling approach, which involved 1000 sample points, was compared against a plot/grid sampling (cluster sampling) approach that involved a...

  3. Single-cluster dynamics for the random-cluster model

    NARCIS (Netherlands)

    Deng, Y.; Qian, X.; Blöte, H.W.J.

    2009-01-01

    We formulate a single-cluster Monte Carlo algorithm for the simulation of the random-cluster model. This algorithm is a generalization of the Wolff single-cluster method for the q-state Potts model to noninteger values q>1. Its results for static quantities are in a satisfactory agreement with those

  4. Analysis of cost data in a cluster-randomized, controlled trial: comparison of methods

    DEFF Research Database (Denmark)

    Sokolowski, Ineta; Ørnbøl, Eva; Rosendal, Marianne

    We consider health care data from a cluster-randomized intervention study in primary care to test whether the average health care costs among study patients differ between the two groups. The problem with analysing cost data is that most data are severely skewed, and previous studies have used non-valid analyses of skewed data. We propose two different methods to compare mean costs in the two groups. Firstly, we use a non-parametric bootstrap method in which the re-sampling takes place on two levels in order to take the cluster effect into account. Secondly, we apply a log-transformation to the cost data and use normal theory on the transformed data, again accounting for the cluster effect. The performance of these two methods is investigated in a simulation study, and the advantages and disadvantages of the different approaches are discussed.

  5. 40 CFR 761.308 - Sample selection by random number generation on any two-dimensional square grid.

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 30 2010-07-01 2010-07-01 false Sample selection by random number... § 761.79(b)(3) § 761.308 Sample selection by random number generation on any two-dimensional square... area created in accordance with paragraph (a) of this section, select two random numbers: one each for...
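The selection the rule describes, drawing two random numbers per sample point (one for each axis of the square grid), can be sketched as follows; the grid size and the use of `random.Random` are illustrative assumptions, not part of the regulation:

```python
import random

def select_grid_points(n_points, grid_max=100, seed=None):
    """Pick distinct sample positions on a two-dimensional square grid by
    generating two random numbers per point: one for the x grid index and
    one for the y grid index. Assumes n_points <= grid_max ** 2.
    """
    rng = random.Random(seed)
    points = set()  # a set discards duplicate grid intersections
    while len(points) < n_points:
        points.add((rng.randint(1, grid_max), rng.randint(1, grid_max)))
    return sorted(points)
```

Using a set guarantees that the same grid intersection is never sampled twice, which matters when the requested number of points approaches the number of grid cells.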

  6. Non-ideal magnetohydrodynamic simulations of the two-stage fragmentation model for cluster formation

    International Nuclear Information System (INIS)

    Bailey, Nicole D.; Basu, Shantanu

    2014-01-01

    We model molecular cloud fragmentation with thin-disk, non-ideal magnetohydrodynamic simulations that include ambipolar diffusion and partial ionization that transitions from primarily ultraviolet-dominated to cosmic-ray-dominated regimes. These simulations are used to determine the conditions required for star clusters to form through a two-stage fragmentation scenario. Recent linear analyses have shown that the fragmentation length scales and timescales can undergo a dramatic drop across the column density boundary that separates the ultraviolet- and cosmic-ray-dominated ionization regimes. As found in earlier studies, the absence of an ionization drop and regular perturbations leads to a single-stage fragmentation on pc scales in transcritical clouds, so that the nonlinear evolution yields the same fragment sizes as predicted by linear theory. However, we find that a combination of initial transcritical mass-to-flux ratio, evolution through a column density regime in which the ionization drop takes place, and regular small perturbations to the mass-to-flux ratio is sufficient to cause a second stage of fragmentation during the nonlinear evolution. Cores of size ∼0.1 pc are formed within an initial fragment of ∼pc size. Regular perturbations to the mass-to-flux ratio also accelerate the onset of runaway collapse.

  7. Two-stage clustering (TSC): a pipeline for selecting operational taxonomic units for the high-throughput sequencing of PCR amplicons.

    Directory of Open Access Journals (Sweden)

    Xiao-Tao Jiang

    Full Text Available Clustering 16S/18S rRNA amplicon sequences into operational taxonomic units (OTUs) is a critical step for the bioinformatic analysis of microbial diversity. Here, we report a pipeline for selecting OTUs with a relatively low computational demand and a high degree of accuracy. This pipeline is referred to as two-stage clustering (TSC) because it divides tags into two groups according to their abundance and clusters them sequentially. The more abundant group is clustered using a hierarchical algorithm similar to that in ESPRIT, which has a high degree of accuracy but is computationally costly for large datasets. The rarer group, which includes the majority of tags, is then heuristically clustered to improve efficiency. To further improve the computational efficiency and accuracy, two preclustering steps are implemented. To maintain clustering accuracy, all tags are grouped into an OTU depending on their pairwise Needleman-Wunsch distance. This method not only improved the computational efficiency but also mitigated spurious OTU estimation from 'noise' sequences. In addition, OTUs clustered using TSC showed comparable or improved performance in beta-diversity comparisons compared to existing OTU selection methods. This study suggests that the distribution of sequencing datasets is a useful property for improving the computational efficiency and increasing the clustering accuracy of the high-throughput sequencing of PCR amplicons. The software and user guide are freely available at http://hwzhoulab.smu.edu.cn/paperdata/.

  8. Random matrix improved subspace clustering

    KAUST Repository

    Couillet, Romain; Kammoun, Abla

    2017-01-01

    This article introduces a spectral method for statistical subspace clustering. The method is built upon standard kernel spectral clustering techniques, however carefully tuned by theoretical understanding arising from random matrix findings. We show

  9. Two-sample discrimination of Poisson means

    Science.gov (United States)

    Lampton, M.

    1994-01-01

    This paper presents a statistical test for detecting significant differences between two random count accumulations. The null hypothesis is that the two samples share a common random arrival process with a mean count proportional to each sample's exposure. The model represents the partition of N total events into two counts, A and B, as a sequence of N independent Bernoulli trials whose partition fraction, f, is determined by the ratio of the exposures of A and B. The detection of a significant difference is claimed when the background (null) hypothesis is rejected, which occurs when the observed sample falls in a critical region of (A, B) space. The critical region depends on f and the desired significance level, alpha. The model correctly takes into account the fluctuations in both the signals and the background data, including the important case of small numbers of counts in the signal, the background, or both. The significance can be exactly determined from the cumulative binomial distribution, which in turn can be inverted to determine the critical A(B) or B(A) contour. This paper gives efficient implementations of these tests, based on lookup tables. Applications include the detection of clustering of astronomical objects, the detection of faint emission or absorption lines in photon-limited spectroscopy, the detection of faint emitters or absorbers in photon-limited imaging, and dosimetry.
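A sketch of the exact conditional test described above, computing the binomial pmf directly rather than using the paper's lookup-table implementation:

```python
from math import comb

def poisson_two_sample_pvalue(a, b, exposure_a, exposure_b):
    """Exact conditional test that two Poisson counts share a common rate.

    Conditional on N = a + b total events, the count a is Binomial(N, f)
    under the null, with partition fraction f determined by the exposures.
    Returns a two-sided p-value summing all outcomes no more probable
    than the observed one.
    """
    n = a + b
    f = exposure_a / (exposure_a + exposure_b)
    pmf = [comb(n, k) * f**k * (1 - f) ** (n - k) for k in range(n + 1)]
    p_obs = pmf[a]
    # small tolerance guards against float ties at the observed probability
    return min(1.0, sum(p for p in pmf if p <= p_obs + 1e-12))
```

For equal exposures and counts (5, 5) the observed split is the most probable outcome and the p-value is 1; a split of (10, 0) is rejected at any conventional level.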

  10. Choosing a Cluster Sampling Design for Lot Quality Assurance Sampling Surveys.

    Directory of Open Access Journals (Sweden)

    Lauren Hund

    Full Text Available Lot quality assurance sampling (LQAS) surveys are commonly used for monitoring and evaluation in resource-limited settings. Recently several methods have been proposed to combine LQAS with cluster sampling for more timely and cost-effective data collection. For some of these methods, the standard binomial model can be used for constructing decision rules as the clustering can be ignored. For other designs, considered here, clustering is accommodated in the design phase. In this paper, we compare these latter cluster LQAS methodologies and provide recommendations for choosing a cluster LQAS design. We compare technical differences in the three methods and determine situations in which the choice of method results in a substantively different design. We consider two different aspects of the methods: the distributional assumptions and the clustering parameterization. Further, we provide software tools for implementing each method and clarify misconceptions about these designs in the literature. We illustrate the differences in these methods using vaccination and nutrition cluster LQAS surveys as example designs. The cluster methods are not sensitive to the distributional assumptions but can result in substantially different designs (sample sizes) depending on the clustering parameterization. However, none of the clustering parameterizations used in the existing methods appears to be consistent with the observed data, and, consequently, choice between the cluster LQAS methods is not straightforward. Further research should attempt to characterize clustering patterns in specific applications and provide suggestions for best-practice cluster LQAS designs on a setting-specific basis.

  11. Choosing a Cluster Sampling Design for Lot Quality Assurance Sampling Surveys.

    Science.gov (United States)

    Hund, Lauren; Bedrick, Edward J; Pagano, Marcello

    2015-01-01

    Lot quality assurance sampling (LQAS) surveys are commonly used for monitoring and evaluation in resource-limited settings. Recently several methods have been proposed to combine LQAS with cluster sampling for more timely and cost-effective data collection. For some of these methods, the standard binomial model can be used for constructing decision rules as the clustering can be ignored. For other designs, considered here, clustering is accommodated in the design phase. In this paper, we compare these latter cluster LQAS methodologies and provide recommendations for choosing a cluster LQAS design. We compare technical differences in the three methods and determine situations in which the choice of method results in a substantively different design. We consider two different aspects of the methods: the distributional assumptions and the clustering parameterization. Further, we provide software tools for implementing each method and clarify misconceptions about these designs in the literature. We illustrate the differences in these methods using vaccination and nutrition cluster LQAS surveys as example designs. The cluster methods are not sensitive to the distributional assumptions but can result in substantially different designs (sample sizes) depending on the clustering parameterization. However, none of the clustering parameterizations used in the existing methods appears to be consistent with the observed data, and, consequently, choice between the cluster LQAS methods is not straightforward. Further research should attempt to characterize clustering patterns in specific applications and provide suggestions for best-practice cluster LQAS designs on a setting-specific basis.
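The standard binomial decision rule that the cluster methods build on can be sketched as follows. The rule "accept the lot if at least d of n sampled subjects are covered" has two classification risks; the cutoffs n = 19, d = 13 and the coverage thresholds in the test are illustrative values, not taken from this paper, and clustering (the paper's subject) would inflate both risks:

```python
from math import comb

def lqas_errors(n, d, p_hi, p_lo):
    """Classification risks of a binomial LQAS rule that accepts an area
    when at least d of n sampled subjects are covered.

    Returns (alpha, beta):
      alpha = P(reject | true coverage p_hi)  -- misses a well-covered area
      beta  = P(accept | true coverage p_lo)  -- passes a poorly covered area
    """
    def binom_cdf(k, n, p):
        return sum(comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(k + 1))

    alpha = binom_cdf(d - 1, n, p_hi)
    beta = 1 - binom_cdf(d - 1, n, p_lo)
    return alpha, beta
```

Tuning n and d trades the two risks against each other; the cluster LQAS designs compared in the paper additionally inflate n to absorb within-cluster correlation.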

  12. Relative Efficiencies of a Three-Stage Versus a Two-Stage Sample Design For a New NLS Cohort Study. 22U-884-38.

    Science.gov (United States)

    Folsom, R. E.; Weber, J. H.

    Two sampling designs were compared for the planned 1978 national longitudinal survey of high school seniors with respect to statistical efficiency and cost. The 1972 survey used a stratified two-stage sample of high schools and seniors within schools. In order to minimize interviewer travel costs, an alternate sampling design was proposed,…

  13. Exploring biological network structure with clustered random networks

    Directory of Open Access Journals (Sweden)

    Bansal Shweta

    2009-12-01

    Full Text Available Abstract Background Complex biological systems are often modeled as networks of interacting units. Networks of biochemical interactions among proteins, epidemiological contacts among hosts, and trophic interactions in ecosystems, to name a few, have provided useful insights into the dynamical processes that shape and traverse these systems. The degrees of nodes (numbers of interactions) and the extent of clustering (the tendency for a set of three nodes to be interconnected) are two of many well-studied network properties that can fundamentally shape a system. Disentangling the interdependent effects of the various network properties, however, can be difficult. Simple network models can help us quantify the structure of empirical networked systems and understand the impact of various topological properties on dynamics. Results Here we develop and implement a new Markov chain simulation algorithm to generate simple, connected random graphs that have a specified degree sequence and level of clustering, but are random in all other respects. The implementation of the algorithm (ClustRNet: Clustered Random Networks) provides the generation of random graphs optimized according to a local or global, and relative or absolute measure of clustering. We compare our algorithm to other similar methods and show that ours more successfully produces desired network characteristics. Finding appropriate null models is crucial in bioinformatics research, and is often difficult, particularly for biological networks. As we demonstrate, the networks generated by ClustRNet can serve as random controls when investigating the impacts of complex network features beyond the byproduct of degree and clustering in empirical networks. Conclusion ClustRNet generates ensembles of graphs of specified edge structure and clustering. These graphs allow for systematic study of the impacts of connectivity and redundancies on network function and dynamics.
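A greatly simplified, greedy variant of the degree-preserving rewiring idea can be sketched as below; this is not the published ClustRNet algorithm (which uses a more careful Markov chain and connectivity checks), only an illustration of double edge swaps steered toward a target clustering level:

```python
import random

def clustering_coeff(adj):
    """Global clustering (transitivity): 3 * triangles / connected triples.
    Here `triangles` already counts each triangle once per vertex (3x)."""
    triangles = triples = 0
    for v in adj:
        nbrs = list(adj[v])
        d = len(nbrs)
        triples += d * (d - 1) // 2
        for i in range(d):
            for j in range(i + 1, d):
                if nbrs[j] in adj[nbrs[i]]:
                    triangles += 1
    return triangles / triples if triples else 0.0

def rewire_toward_clustering(adj, target, steps=2000, seed=None):
    """Degree-preserving double edge swaps on adj (dict: node -> set of
    neighbours), accepting a swap only if it moves global clustering
    toward `target`. A greedy sketch; connectivity is not preserved."""
    rng = random.Random(seed)
    edges = [(u, v) for u in adj for v in adj[u] if u < v]
    for _ in range(steps):
        (a, b), (c, d) = rng.sample(edges, 2)
        # require 4 distinct nodes and no pre-existing replacement edges
        if len({a, b, c, d}) < 4 or d in adj[a] or b in adj[c]:
            continue
        before = clustering_coeff(adj)
        # swap (a,b),(c,d) -> (a,d),(c,b); degrees are unchanged
        adj[a].discard(b); adj[b].discard(a); adj[c].discard(d); adj[d].discard(c)
        adj[a].add(d); adj[d].add(a); adj[c].add(b); adj[b].add(c)
        if abs(clustering_coeff(adj) - target) <= abs(before - target):
            edges.remove((min(a, b), max(a, b)))
            edges.remove((min(c, d), max(c, d)))
            edges.append((min(a, d), max(a, d)))
            edges.append((min(c, b), max(c, b)))
        else:  # revert the swap
            adj[a].discard(d); adj[d].discard(a); adj[c].discard(b); adj[b].discard(c)
            adj[a].add(b); adj[b].add(a); adj[c].add(d); adj[d].add(c)
    return adj
```

Because every accepted move preserves the degree sequence and never moves clustering away from the target, the degree distribution of the input graph survives intact while its clustering drifts toward the requested level.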

  14. A stratified two-stage sampling design for digital soil mapping in a Mediterranean basin

    Science.gov (United States)

    Blaschek, Michael; Duttmann, Rainer

    2015-04-01

    The quality of environmental modelling results often depends on reliable soil information. In order to obtain soil data in an efficient manner, several sampling strategies are at hand depending on the level of prior knowledge and the overall objective of the planned survey. This study focuses on the collection of soil samples considering available continuous secondary information in an undulating, 16 km²-sized river catchment near Ussana in southern Sardinia (Italy). A design-based, stratified, two-stage sampling design has been applied aiming at the spatial prediction of soil property values at individual locations. The stratification was based on quantiles from density functions of two land-surface parameters - topographic wetness index and potential incoming solar radiation - derived from a digital elevation model. Combined with four main geological units, the applied procedure led to 30 different classes in the given test site. Up to six polygons of each available class were selected randomly, excluding those areas smaller than 1 ha to avoid incorrect location of the points in the field. Further exclusion rules were applied before polygon selection, masking out roads and buildings using a 20 m buffer. The selection procedure was repeated ten times and the set of polygons with the best geographical spread was chosen. Finally, exact point locations were selected randomly from inside the chosen polygon features. A second selection based on the same stratification and following the same methodology (selecting one polygon instead of six) was made in order to create an appropriate validation set. Supplementary samples were obtained during a second survey focusing on polygons that had either not been considered during the first phase at all or were not adequately represented with respect to feature size. In total, both field campaigns produced an interpolation set of 156 samples and a validation set of 41 points.

  15. Random matrix improved subspace clustering

    KAUST Repository

    Couillet, Romain

    2017-03-06

    This article introduces a spectral method for statistical subspace clustering. The method is built upon standard kernel spectral clustering techniques, however carefully tuned by theoretical understanding arising from random matrix findings. We show in particular that our method provides high clustering performance while standard kernel choices provably fail. An application to user grouping based on vector channel observations in the context of massive MIMO wireless communication networks is provided.

  16. Systematic Sampling and Cluster Sampling of Packet Delays

    OpenAIRE

    Lindh, Thomas

    2006-01-01

    Based on experiences of a traffic flow performance meter this paper suggests and evaluates cluster sampling and systematic sampling as methods to estimate average packet delays. Systematic sampling facilitates for example time analysis, frequency analysis and jitter measurements. Cluster sampling with repeated trains of periodically spaced sampling units separated by random starting periods, and systematic sampling are evaluated with respect to accuracy and precision. Packet delay traces have been ...

  17. Percolation and epidemics in random clustered networks

    Science.gov (United States)

    Miller, Joel C.

    2009-08-01

    The social networks that infectious diseases spread along are typically clustered. Because of the close relation between percolation and epidemic spread, the behavior of percolation in such networks gives insight into infectious disease dynamics. A number of authors have studied percolation or epidemics in clustered networks, but the networks often contain preferential contacts in high degree nodes. We introduce a class of random clustered networks and a class of random unclustered networks with the same preferential mixing. Percolation in the clustered networks reduces the component sizes and increases the epidemic threshold compared to the unclustered networks.

  18. A pilot cluster randomized controlled trial of structured goal-setting following stroke.

    Science.gov (United States)

    Taylor, William J; Brown, Melanie; Levack, William; McPherson, Kathryn M; Reed, Kirk; Dean, Sarah G; Weatherall, Mark

    2012-04-01

    To determine the feasibility, the cluster design effect, and the variance and minimally important difference of the primary outcome in a pilot study of a structured approach to goal-setting. A cluster randomized controlled trial. Inpatient rehabilitation facilities. People who were admitted to inpatient rehabilitation following stroke who had sufficient cognition to engage in structured goal-setting and complete the primary outcome measure. Structured goal elicitation using the Canadian Occupational Performance Measure. Quality of life at 12 weeks using the Schedule for Individualised Quality of Life (SEIQOL-DW), Functional Independence Measure, Short Form 36 and Patient Perception of Rehabilitation (measuring satisfaction with rehabilitation). Assessors were blinded to the intervention. Four rehabilitation services and 41 patients were randomized. We found high values of the intraclass correlation for the outcome measures (ranging from 0.03 to 0.40) and high variance of the SEIQOL-DW (SD 19.6) in relation to the minimally important difference of 2.1, leading to impractically large sample size requirements for a cluster randomized design. A cluster randomized design is not a practical means of avoiding contamination effects in studies of inpatient rehabilitation goal-setting. Other techniques for coping with contamination effects are necessary.

  19. [Automatic Sleep Stage Classification Based on an Improved K-means Clustering Algorithm].

    Science.gov (United States)

    Xiao, Shuyuan; Wang, Bei; Zhang, Jian; Zhang, Qunfeng; Zou, Junzhong

    2016-10-01

    Sleep stage scoring is a hotspot in the fields of medicine and neuroscience. Visual inspection of sleep is laborious and the results may vary between clinicians. An automatic sleep stage classification algorithm can be used to reduce the manual workload. However, there are still limitations when it encounters complicated and changeable clinical cases. The purpose of this paper is to develop an automatic sleep staging algorithm based on the characteristics of actual sleep data. In the proposed improved K-means clustering algorithm, points were selected as the initial centers by using a concept of density, to avoid the randomness of the original K-means algorithm. Meanwhile, the cluster centers were updated according to the 'Three-Sigma Rule' during the iteration to abate the influence of outliers. The proposed method was tested and analyzed on overnight sleep data from healthy persons and from patients with sleep disorders after continuous positive airway pressure (CPAP) treatment. The automatic sleep stage classification results were compared with visual inspection by qualified clinicians, and the averaged accuracy reached 76%. With the analysis of the morphological diversity of the sleep data, it was shown that the proposed improved K-means algorithm is feasible and valid for clinical practice.
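The two ideas described, density-based seeding and Three-Sigma-trimmed center updates, can be sketched on one-dimensional feature values as below. The radius parameter and the 1-D setting are illustrative assumptions, not the authors' implementation:

```python
import statistics

def density_seeds(data, k, radius):
    """Pick k initial centers by local density (neighbors within `radius`),
    preferring dense points that lie far from already chosen seeds."""
    density = {x: sum(1 for y in data if abs(x - y) <= radius) for x in data}
    seeds = [max(data, key=lambda x: density[x])]
    while len(seeds) < k:
        # among points farther than `radius` from all seeds, take the densest
        cands = [x for x in data if all(abs(x - s) > radius for s in seeds)]
        seeds.append(max(cands or data, key=lambda x: density[x]))
    return seeds

def kmeans_3sigma(data, k, radius, iters=20):
    """K-means with density-based initialization; center updates ignore
    points more than 3 standard deviations from their cluster mean."""
    centers = density_seeds(data, k, radius)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for x in data:
            clusters[min(range(k), key=lambda i: abs(x - centers[i]))].append(x)
        new_centers = []
        for c, pts in zip(centers, clusters):
            if len(pts) > 2:
                mu, sd = statistics.mean(pts), statistics.pstdev(pts)
                kept = [x for x in pts if abs(x - mu) <= 3 * sd] or pts
                new_centers.append(statistics.mean(kept))
            else:
                new_centers.append(c)  # keep tiny clusters' centers fixed
        if new_centers == centers:
            break
        centers = new_centers
    return sorted(centers)
```

Seeding from high-density points removes the run-to-run variability of random initialization, and the 3σ trim keeps a single outlying epoch from dragging a stage's center.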

  20. Modern survey sampling

    CERN Document Server

    Chaudhuri, Arijit

    2014-01-01

    Exposure to Sampling: Abstract; Introduction; Concepts of Population, Sample, and Sampling. Initial Ramifications: Abstract; Introduction; Sampling Design, Sampling Scheme; Random Numbers and Their Uses in Simple Random Sampling (SRS); Drawing Simple Random Samples with and without Replacement; Estimation of Mean, Total, Ratio of Totals/Means: Variance and Variance Estimation; Determination of Sample Sizes; Appendix to Chapter 2: More on Equal Probability Sampling; Horvitz-Thompson Estimator; Sufficiency; Likelihood; Non-Existence Theorem. More Intricacies: Abstract; Introduction; Unequal Probability Sampling Strategies; PPS Sampling. Exploring Improved Ways: Abstract; Introduction; Stratified Sampling; Cluster Sampling; Multi-Stage Sampling; Multi-Phase Sampling: Ratio and Regression Estimation; Controlled Sampling. Modeling: Introduction; Super-Population Modeling; Prediction Approach; Model-Assisted Approach; Bayesian Methods; Spatial Smoothing; Sampling on Successive Occasions: Panel Rotation; Non-Response and Not-at-Homes; Weighting Adj...

  1. Personalized PageRank Clustering: A graph clustering algorithm based on random walks

    Science.gov (United States)

    A. Tabrizi, Shayan; Shakery, Azadeh; Asadpour, Masoud; Abbasi, Maziar; Tavallaie, Mohammad Ali

    2013-11-01

    Graph clustering has been an essential part in many methods and thus its accuracy has a significant effect on many applications. In addition, exponential growth of real-world graphs such as social networks, biological networks and electrical circuits demands clustering algorithms with nearly-linear time and space complexity. In this paper we propose Personalized PageRank Clustering (PPC) that employs the inherent cluster exploratory property of random walks to reveal the clusters of a given graph. We combine random walks and modularity to precisely and efficiently reveal the clusters of a graph. PPC is a top-down algorithm so it can reveal inherent clusters of a graph more accurately than other nearly-linear approaches that are mainly bottom-up. It also gives a hierarchy of clusters that is useful in many applications. PPC has a linear time and space complexity and has been superior to most of the available clustering algorithms on many datasets. Furthermore, its top-down approach makes it a flexible solution for clustering problems with different requirements.
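A toy sketch of the underlying primitive follows: personalized PageRank computed by power iteration, with the seed's cluster taken as the top-scoring nodes. PPC itself additionally uses modularity and a top-down hierarchy, which this sketch omits; the graph, the teleport probability of 0.15, and the fixed cluster size are illustrative assumptions (isolated nodes are not handled):

```python
def personalized_pagerank(adj, seed, alpha=0.15, iters=100):
    """Power-iteration personalized PageRank from a single seed node.
    adj: dict mapping node -> list of neighbours (undirected graph,
    every node assumed to have at least one neighbour)."""
    pr = {v: 0.0 for v in adj}
    pr[seed] = 1.0
    for _ in range(iters):
        nxt = {v: 0.0 for v in adj}
        for v, mass in pr.items():
            share = (1 - alpha) * mass / len(adj[v])
            for u in adj[v]:
                nxt[u] += share
        nxt[seed] += alpha  # all teleport mass returns to the seed
        pr = nxt
    return pr

def ppr_cluster(adj, seed, size):
    """Take the `size` nodes with the highest PPR score as the seed's cluster."""
    pr = personalized_pagerank(adj, seed)
    return set(sorted(adj, key=lambda v: -pr[v])[:size])
```

On a "barbell" of two triangles joined by one edge, a random walk that keeps teleporting back to a seed in one triangle concentrates its mass there, so the top-ranked nodes recover that triangle as the seed's cluster.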

  2. Genome-wide association data classification and SNPs selection using two-stage quality-based Random Forests.

    Science.gov (United States)

    Nguyen, Thanh-Tung; Huang, Joshua; Wu, Qingyao; Nguyen, Thuy; Li, Mark

    2015-01-01

    Single-nucleotide polymorphism (SNP) selection and identification are the most important tasks in genome-wide association data analysis. The problem is difficult because genome-wide association data are very high dimensional and a large portion of the SNPs in the data are irrelevant to the disease. Advanced machine learning methods have been used successfully in genome-wide association studies (GWAS) to identify genetic variants that have relatively large effects in some common, complex diseases. Among them, the most successful one is Random Forests (RF). Despite performing well in terms of prediction accuracy on some data sets of moderate size, RF still struggles in GWAS when selecting informative SNPs and building accurate prediction models. In this paper, we propose a new two-stage quality-based sampling method in random forests, named ts-RF, for SNP subspace selection in GWAS. The method first applies p-value assessment to find a cut-off point that separates informative and irrelevant SNPs into two groups. The informative SNP group is further divided into two sub-groups: highly informative and weakly informative SNPs. When sampling the SNP subspace for building trees for the forest, only SNPs from these two sub-groups are taken into account, so the feature subspace used to split a tree node always contains highly informative SNPs. This approach enables one to generate more accurate trees with a lower prediction error, while possibly avoiding overfitting. It allows one to detect interactions of multiple SNPs with the diseases, and to reduce the dimensionality and the amount of genome-wide association data needed for learning the RF model.
Extensive experiments on two genome-wide SNP data sets (Parkinson case-control data comprised of 408,803 SNPs and Alzheimer case-control data comprised of 380,157 SNPs) and 10 gene data sets have demonstrated that the proposed model significantly reduced prediction errors and outperformed

  3. A two-arm cluster randomized control trial to determine the effectiveness of a pressure ulcer prevention bundle for critically ill patients.

    Science.gov (United States)

    Tayyib, Nahla; Coyer, Fiona; Lewis, Peter A

    2015-05-01

    This study tested the effectiveness of a pressure ulcer (PU) prevention bundle in reducing the incidence of PUs in critically ill patients in two Saudi intensive care units (ICUs). A two-arm cluster randomized experimental control trial. Participants in the intervention group received the PU prevention bundle, while the control group received standard skin care as per the local ICU policies. Data collected included demographic variables (age, diagnosis, comorbidities, admission trajectory, length of stay) and clinical variables (Braden Scale score, severity of organ function score, mechanical ventilation, PU presence, and staging). All patients were followed every two days from admission through to discharge, death, or up to a maximum of 28 days. Data were analyzed with descriptive correlation statistics, Kaplan-Meier survival analysis, and Poisson regression. The total number of participants recruited was 140: 70 control participants (with a total of 728 days of observation) and 70 intervention participants (784 days of observation). PU cumulative incidence was significantly lower in the intervention group (7.14%) compared to the control group (32.86%). Poisson regression revealed the likelihood of PU development was 70% lower in the intervention group. The intervention group had significantly less Stage I (p = .002) and Stage II PU development (p = .026). Significant improvements were observed in PU-related outcomes with the implementation of the PU prevention bundle in the ICU; PU incidence, severity, and total number of PUs per patient were reduced. Utilizing a bundle approach and standardized nursing language through skin assessment and translation of the knowledge to practice has the potential to impact positively on the quality of care and patient outcome. © 2015 Sigma Theta Tau International.

  4. Representing Degree Distributions, Clustering, and Homophily in Social Networks With Latent Cluster Random Effects Models.

    Science.gov (United States)

    Krivitsky, Pavel N; Handcock, Mark S; Raftery, Adrian E; Hoff, Peter D

    2009-07-01

    Social network data often involve transitivity, homophily on observed attributes, clustering, and heterogeneity of actor degrees. We propose a latent cluster random effects model to represent all of these features, and we describe a Bayesian estimation method for it. The model is applicable to both binary and non-binary network data. We illustrate the model using two real datasets. We also apply it to two simulated network datasets with the same, highly skewed, degree distribution, but very different network behavior: one unstructured and the other with transitivity and clustering. Models based on degree distributions, such as scale-free, preferential attachment and power-law models, cannot distinguish between these very different situations, but our model does.

  5. Generating Random Samples of a Given Size Using Social Security Numbers.

    Science.gov (United States)

    Erickson, Richard C.; Brauchle, Paul E.

    1984-01-01

    The purposes of this article are (1) to present a method by which social security numbers may be used to draw cluster samples of a predetermined size and (2) to describe procedures used to validate this method of drawing random samples. (JOW)
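One way such a procedure might work is sketched below, under the assumption that two-digit social security number endings define the clusters; the article's exact procedure is not reproduced here, and the SSN format and record layout are hypothetical:

```python
import random

def ssn_cluster_sample(population, sample_size, seed=None):
    """Draw a cluster sample of at least `sample_size` people using
    social security number endings as clusters: shuffle the 100
    possible two-digit endings, then add whole ending-clusters until
    the target size is reached.

    population: dict mapping 9-digit SSN strings to person records.
    """
    rng = random.Random(seed)
    endings = list(range(100))
    rng.shuffle(endings)  # random order of the 00-99 ending clusters
    sample = []
    for e in endings:
        cluster = [person for ssn, person in sorted(population.items())
                   if int(ssn[-2:]) == e]
        sample.extend(cluster)
        if len(sample) >= sample_size:
            break
    return sample
```

Because whole ending-defined clusters are added at once, the realized sample can overshoot the target by up to one cluster; SSN endings are essentially arbitrary with respect to personal characteristics, which is what makes them usable as random cluster labels.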

  6. Evaluation of cluster-randomized trials on maternal and child health research in developing countries

    DEFF Research Database (Denmark)

    Handlos, Line Neerup; Chakraborty, Hrishikesh; Sen, Pranab Kumar

    2009-01-01

    To summarize and evaluate all publications including cluster-randomized trials used for maternal and child health research in developing countries during the last 10 years. METHODS: All cluster-randomized trials published between 1998 and 2008 were reviewed, and those that met our criteria for inclusion were evaluated further. The criteria for inclusion were that the trial should have been conducted in maternal and child health care in a developing country and that the conclusions should have been made on an individual level. Methods of accounting for clustering in design and analysis were evaluated, and the trials generally improved in quality over time. CONCLUSIONS: Shortcomings exist in the sample-size calculations and in the analysis of cluster-randomized trials conducted during maternal and child health research in developing countries. Even though there has been improvement over time, further progress in the way...

  7. Core condensation in heavy halos: a two-stage theory for galaxy formation and clustering

    Energy Technology Data Exchange (ETDEWEB)

    White, S D.M.; Rees, M J [Cambridge Univ. Inst. of Astronomy (UK)

    1978-05-01

    It is suggested that most of the material in the Universe condensed at an early epoch into small 'dark' objects. Irrespective of their nature, these objects must subsequently have undergone hierarchical clustering, whose present scale is inferred from the large-scale distribution of galaxies. As each stage of the hierarchy forms and collapses, relaxation effects wipe out its substructure, leading to a self-similar distribution of bound masses. The entire luminous content of galaxies, however, results from the cooling and fragmentation of residual gas within the transient potential wells provided by the dark matter. Every galaxy thus forms as a concentrated luminous core embedded in an extensive dark halo. The observed sizes of galaxies and their survival through later stages of the hierarchy seem inexplicable without invoking substantial dissipation; this dissipation allows the galaxies to become sufficiently concentrated to survive the disruption of their halos in groups and clusters of galaxies. A specific model is proposed in which Ω ≈ 0.2, the dark matter makes up 80 per cent of the total mass, and half the residual gas has been converted into luminous galaxies by the present time. This model is consistent with the inferred proportions of dark matter and gas in rich clusters, with the observed luminosity density of the Universe and with the observed radii of galaxies; further, it predicts the characteristic luminosities of bright galaxies and can give a luminosity function of the observed shape.

  8. The covariance matrix of the Potts model: A random cluster analysis

    International Nuclear Information System (INIS)

    Borgs, C.; Chayes, J.T.

    1996-01-01

    We consider the covariance matrix, G_mn(x−y) = q²⟨δ(σ_x, m); δ(σ_y, n)⟩, of the d-dimensional q-state Potts model, rewriting it in the random cluster representation of Fortuin and Kasteleyn. In many of the q ordered phases, we identify the eigenvalues of this matrix both in terms of representations of the unbroken symmetry group of the model and in terms of random cluster connectivities and covariances, thereby attributing algebraic significance to these stochastic geometric quantities. We also show that the correlation length corresponding to the decay rate of one of the eigenvalues is the same as the inverse decay rate of the diameter of finite clusters. In dimension d=2, we show that this correlation length and the correlation length of the two-point function with free boundary conditions at the corresponding dual temperature are equal up to a factor of two. For systems with first-order transitions, this relation helps to resolve certain inconsistencies between recent exact and numerical work on correlation lengths at the self-dual point β₀. For systems with second-order transitions, this relation implies the equality of the correlation length exponents from above and below threshold, as well as an amplitude ratio of two. In the course of proving the above results, we establish several properties of independent interest, including left continuity of the inverse correlation length with free boundary conditions and upper semicontinuity of the decay rate for finite clusters in all dimensions, and left continuity of the two-dimensional free boundary condition percolation probability at β₀. We also introduce DLR equations for the random cluster model and use them to establish ergodicity of the free measure. In order to prove these results, we introduce a new class of events which we call decoupling events and two inequalities for these events.

  9. Spectral embedded clustering: a framework for in-sample and out-of-sample spectral clustering.

    Science.gov (United States)

    Nie, Feiping; Zeng, Zinan; Tsang, Ivor W; Xu, Dong; Zhang, Changshui

    2011-11-01

    Spectral clustering (SC) methods have been successfully applied to many real-world applications. The success of these SC methods is largely based on the manifold assumption, namely, that two nearby data points in the high-density region of a low-dimensional data manifold have the same cluster label. However, such an assumption might not always hold on high-dimensional data. When the data do not exhibit a clear low-dimensional manifold structure (e.g., high-dimensional and sparse data), the clustering performance of SC will be degraded and become even worse than K-means clustering. In this paper, motivated by the observation that the true cluster assignment matrix for high-dimensional data can always be embedded in a linear space spanned by the data, we propose the spectral embedded clustering (SEC) framework, in which a linearity regularization is explicitly added into the objective function of SC methods. More importantly, the proposed SEC framework can naturally deal with out-of-sample data. We also present a new Laplacian matrix constructed from a local regression of each pattern and incorporate it into our SEC framework to capture both local and global discriminative information for clustering. Comprehensive experiments on eight real-world high-dimensional datasets demonstrate the effectiveness and advantages of our SEC framework over existing SC methods and K-means-based clustering methods. Our SEC framework significantly outperforms SC using the Nyström algorithm on unseen data.

  10. Economic Design of Acceptance Sampling Plans in a Two-Stage Supply Chain

    Directory of Open Access Journals (Sweden)

    Lie-Fern Hsu

    2012-01-01

    Supply chain management, which is concerned with material and information flows between facilities and the final customers, has come to be considered one of the most popular operations strategies for improving organizational competitiveness. With the advanced development of computer technology, it is getting easier to derive an acceptance sampling plan satisfying both the producer's and consumer's quality and risk requirements. However, all the available QC tables and computer software determine the sampling plan on a noneconomic basis. In this paper, we design an economic model to determine the optimal sampling plan in a two-stage supply chain that minimizes the producer's and the consumer's total quality cost while satisfying both the producer's and consumer's quality and risk requirements. Numerical examples show that the optimal sampling plan is quite sensitive to the producer's product quality. The product's inspection, internal failure, and postsale failure costs also have an effect on the optimal sampling plan.
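
    The economic selection described above can be sketched numerically. The following is an illustration, not the authors' model: it assumes a binomial single-sampling plan (n, c), screens for the plans that meet both parties' risk requirements, and then picks the plan minimizing an assumed per-lot cost built from the inspection, internal failure, and post-sale failure costs mentioned in the abstract (all cost figures and quality levels are hypothetical).

```python
from math import comb

def accept_prob(n, c, p):
    """P(accept lot) under a single-sampling plan (n, c) with binomial defect rate p."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(c + 1))

def feasible_plans(aql, ltpd, alpha, beta, n_max=150):
    """All (n, c) meeting producer's risk alpha at the AQL and consumer's risk beta at the LTPD."""
    plans = []
    for n in range(1, n_max + 1):
        for c in range(n):
            if accept_prob(n, c, ltpd) > beta:
                break  # acceptance probability grows with c, so larger c cannot help
            if accept_prob(n, c, aql) >= 1 - alpha:
                plans.append((n, c))
    return plans

def total_cost(n, c, p, insp, internal, postsale, lot_size):
    """Hypothetical per-lot expected quality cost at true defect rate p."""
    pa = accept_prob(n, c, p)
    rest = lot_size - n
    return (n * insp + n * p * internal          # sample inspected; sample defects fixed in-house
            + pa * rest * p * postsale           # accepted lot: remaining defects fail post-sale
            + (1 - pa) * rest * (insp + p * internal))  # rejected lot: 100% screening

def economic_plan(p, aql, ltpd, alpha, beta, costs, lot_size):
    """Among risk-feasible plans, pick the one with the lowest expected total cost."""
    return min(feasible_plans(aql, ltpd, alpha, beta),
               key=lambda nc: total_cost(nc[0], nc[1], p, *costs, lot_size))

plan = economic_plan(p=0.02, aql=0.01, ltpd=0.08, alpha=0.05, beta=0.10,
                     costs=(1.0, 5.0, 50.0), lot_size=1000)
```

Raising the assumed post-sale failure cost pushes the optimum toward larger samples, which is the kind of sensitivity the abstract reports.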

  11. How large are the consequences of covariate imbalance in cluster randomized trials: a simulation study with a continuous outcome and a binary covariate at the cluster level.

    Science.gov (United States)

    Moerbeek, Mirjam; van Schie, Sander

    2016-07-11

    The number of clusters in a cluster randomized trial is often low. It is therefore likely that random assignment of clusters to treatment conditions results in covariate imbalance. There are no studies that quantify the consequences of covariate imbalance in cluster randomized trials on parameter and standard error bias and on power to detect treatment effects. The consequences of covariate imbalance in unadjusted and adjusted linear mixed models are investigated by means of a simulation study. The factors in this study are the degree of imbalance, the covariate effect size, the cluster size and the intraclass correlation coefficient. The covariate is binary and measured at the cluster level; the outcome is continuous and measured at the individual level. The results show that covariate imbalance results in negligible parameter bias and small standard error bias in adjusted linear mixed models. Ignoring the possibility of covariate imbalance while calculating the sample size at the cluster level may result in a loss in power of at most 25% in the adjusted linear mixed model. The results are more severe for the unadjusted linear mixed model: parameter biases up to 100% and standard error biases up to 200% may be observed. Power levels based on the unadjusted linear mixed model are often too low. The consequences are most severe for large clusters and/or small intraclass correlation coefficients, since then the required number of clusters to achieve a desired power level is smallest. The possibility of covariate imbalance should be taken into account while calculating the sample size of a cluster randomized trial. Otherwise, more sophisticated methods to randomize clusters to treatments should be used, such as stratification or balance algorithms. All relevant covariates should be carefully identified, actually measured, and included in the statistical model to avoid severe levels of parameter and standard error bias and insufficient power levels.
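
    The mechanism behind the reported biases can be reproduced in a toy Monte Carlo. The sketch below is not the authors' simulation: it uses ordinary least squares on cluster means (equivalent to the mixed model here because cluster sizes are equal), an assumed true treatment effect of 0.5, a binary cluster-level covariate whose prevalence differs between arms, and illustrative values for the ICC and covariate effect size.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_trial(n_clusters=10, cluster_size=50, icc=0.1, beta_cov=1.0, imbalance=0.3):
    """One cluster randomized trial with a binary cluster-level covariate.
    Returns the unadjusted and covariate-adjusted treatment-effect estimates."""
    treat = np.repeat([0, 1], n_clusters // 2)
    # covariate prevalence deliberately differs between arms by `imbalance`
    cov = rng.binomial(1, 0.5 + imbalance * (2 * treat - 1) / 2)
    u = rng.normal(0, np.sqrt(icc), n_clusters)            # cluster random effects
    mu = 0.5 * treat + beta_cov * cov + u                  # true treatment effect = 0.5
    y = mu[:, None] + rng.normal(0, np.sqrt(1 - icc), (n_clusters, cluster_size))
    ybar = y.mean(axis=1)                                  # equal sizes: cluster means suffice
    X_un = np.column_stack([np.ones(n_clusters), treat])
    X_ad = np.column_stack([np.ones(n_clusters), treat, cov])
    b_un = np.linalg.lstsq(X_un, ybar, rcond=None)[0][1]
    b_ad = np.linalg.lstsq(X_ad, ybar, rcond=None)[0][1]
    return b_un, b_ad

est = np.array([simulate_trial() for _ in range(2000)])
bias_un, bias_ad = est.mean(axis=0) - 0.5   # unadjusted bias is roughly beta_cov * imbalance
```

The unadjusted estimate absorbs the covariate difference between arms, while the adjusted model is close to unbiased, mirroring the abstract's conclusion.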

  12. The Hubble Space Telescope Medium Deep Survey Cluster Sample: Methodology and Data

    Science.gov (United States)

    Ostrander, E. J.; Nichol, R. C.; Ratnatunga, K. U.; Griffiths, R. E.

    1998-12-01

    We present a new, objectively selected, sample of galaxy overdensities detected in the Hubble Space Telescope Medium Deep Survey (MDS). These clusters/groups were found using an automated procedure that involved searching for statistically significant galaxy overdensities. The contrast of the clusters against the field galaxy population is increased when morphological data are used to search around bulge-dominated galaxies. In total, we present 92 overdensities above a probability threshold of 99.5%. We show, via extensive Monte Carlo simulations, that at least 60% of these overdensities are likely to be real clusters and groups and not random line-of-sight superpositions of galaxies. For each overdensity in the MDS cluster sample, we provide a richness and the average of the bulge-to-total ratio of galaxies within each system. This MDS cluster sample potentially contains some of the most distant clusters/groups ever detected, with about 25% of the overdensities having estimated redshifts z > ~0.9. We have made this sample publicly available to facilitate spectroscopic confirmation of these clusters and to aid more detailed studies of cluster and galaxy evolution. We also report the serendipitous discovery of a new cluster close on the sky to the rich optical cluster Cl 0016+16 at z = 0.546. This new overdensity, HST 001831+16208, may be coincident with both an X-ray source and a radio source. HST 001831+16208 is the third cluster/group discovered near to Cl 0016+16 and appears to strengthen the claims of Connolly et al. of superclustering at high redshift.

  13. Teen Dating Violence Prevention: Cluster-Randomized Trial of Teen Choices, an Online, Stage-Based Program for Healthy, Nonviolent Relationships.

    Science.gov (United States)

    Levesque, Deborah A; Johnson, Janet L; Welch, Carol A; Prochaska, Janice M; Paiva, Andrea L

    2016-07-01

    Teen dating violence is a serious public health problem. A cluster-randomized trial was conducted to assess the efficacy of Teen Choices, a 3-session online program that delivers assessments and individualized guidance matched to dating history, dating violence experiences, and stage of readiness for using healthy relationship skills. For high-risk victims of dating violence, the program addresses readiness to keep oneself safe in relationships. Twenty high schools were randomly assigned to the Teen Choices condition (n = 2,000) or a Comparison condition (n = 1,901). Emotional and physical dating violence victimization and perpetration were assessed at 6 and 12 months in the subset of participants (total n = 2,605) who reported a past-year history of dating violence at baseline, and/or who dated during the study. The Teen Choices program was associated with significantly reduced odds of all four types of dating violence (adjusted ORs ranging from 0.45 to 0.63 at 12 months follow-up). For three of the four violence outcomes, participants with a past-year history of that type of violence benefited significantly more from the intervention than students without a past-year history. The Teen Choices program provides an effective and practicable strategy for teen dating violence prevention.

  14. Evaluation of stability of k-means cluster ensembles with respect to random initialization.

    Science.gov (United States)

    Kuncheva, Ludmila I; Vetrov, Dmitry P

    2006-11-01

    Many clustering algorithms, including cluster ensembles, rely on a random component. Stability of the results across different runs is considered to be an asset of the algorithm. The cluster ensembles considered here are based on k-means clusterers. Each clusterer is assigned a random target number of clusters, k, and is started from a random initialization. Here, we use 10 artificial and 10 real data sets to study ensemble stability with respect to random k and random initialization. The data sets were chosen to have a small number of clusters (two to seven) and a moderate number of data points (up to a few hundred). Pairwise stability is defined as the adjusted Rand index between pairs of clusterers in the ensemble, averaged across all pairs. Nonpairwise stability is defined as the entropy of the consensus matrix of the ensemble. An experimental comparison with the stability of the standard k-means algorithm was carried out for k from 2 to 20. The results revealed that ensembles are generally more stable, markedly so for larger k. To establish whether stability can serve as a cluster validity index, we first looked at the relationship between stability and accuracy with respect to the number of clusters, k. We found that such a relationship strongly depends on the data set, varying from almost perfect positive correlation (0.97, for the glass data) to almost perfect negative correlation (-0.93, for the crabs data). We propose a new combined stability index, defined as the sum of the pairwise individual and ensemble stabilities. This index was found to correlate better with the ensemble accuracy. Following the hypothesis that a point of stability of a clustering algorithm corresponds to a structure found in the data, we used the stability measures to pick the number of clusters. The combined stability index gave the best results.
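
    The pairwise stability measure is straightforward to compute. The following sketch (not the authors' code) builds a small ensemble of k-means clusterers with random k and random initialization on synthetic two-blob data, then averages the adjusted Rand index over all pairs; both k-means and the index are implemented inline to keep the example self-contained.

```python
import numpy as np

rng = np.random.default_rng(0)

def kmeans(X, k, rng, iters=30):
    """Plain Lloyd's algorithm started from a random choice of k data points as centers."""
    C = X[rng.choice(len(X), size=k, replace=False)].copy()
    for _ in range(iters):
        labels = ((X[:, None, :] - C[None]) ** 2).sum(-1).argmin(1)
        for j in range(k):
            if np.any(labels == j):
                C[j] = X[labels == j].mean(0)
    return labels

def adjusted_rand(a, b):
    """Adjusted Rand index between two labelings of the same points."""
    M = np.array([[np.sum((a == i) & (b == j)) for j in np.unique(b)]
                  for i in np.unique(a)])
    c2 = lambda x: x * (x - 1) / 2.0
    sum_ij, sum_a, sum_b = c2(M).sum(), c2(M.sum(1)).sum(), c2(M.sum(0)).sum()
    expected = sum_a * sum_b / c2(len(a))
    return (sum_ij - expected) / (0.5 * (sum_a + sum_b) - expected)

# two well-separated blobs; each ensemble member gets a random k and a random start
X = np.vstack([rng.normal(0, 0.5, (100, 2)), rng.normal(10, 0.5, (100, 2))])
ensemble = [kmeans(X, int(rng.integers(2, 6)), rng) for _ in range(10)]

# pairwise stability: adjusted Rand index averaged over all pairs of clusterers
stability = np.mean([adjusted_rand(ensemble[i], ensemble[j])
                     for i in range(10) for j in range(i + 1, 10)])
```

Repeating this for a grid of fixed k values and looking for a peak in stability is the model-selection use the abstract describes.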

  15. Smoothing the redshift distributions of random samples for the baryon acoustic oscillations: applications to the SDSS-III BOSS DR12 and QPM mock samples

    Science.gov (United States)

    Wang, Shao-Jiang; Guo, Qi; Cai, Rong-Gen

    2017-12-01

    We investigate the impact of different redshift distributions of random samples on the baryon acoustic oscillations (BAO) measurements of D_V(z) r_d^{fid}/r_d from the two-point correlation functions of galaxies in the Data Release 12 of the Baryon Oscillation Spectroscopic Survey (BOSS). Big surveys, such as BOSS, usually assign redshifts to the random samples by randomly drawing values from the measured redshift distributions of the data, which necessarily introduces fiducial signals of fluctuations into the random samples, weakening the BAO signals, if the cosmic variance cannot be ignored. We propose a smooth function of redshift distribution that fits the data well to populate the random galaxy samples. The resulting cosmological parameters match the input parameters of the mock catalogue very well. The significance of the BAO signals has been improved by 0.33σ for a low-redshift sample and by 0.03σ for a constant-stellar-mass sample, though the absolute values do not change significantly. Given the precision of the measurements of current cosmological parameters, such improvements would be valuable for future measurements of galaxy clustering.
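
    The idea of replacing data-resampled redshifts with draws from a smooth n(z) can be illustrated as follows. This is a schematic stand-in for the paper's fitting procedure: it boxcar-smooths the redshift histogram and then draws random-catalogue redshifts by inverse-CDF sampling; the mock data, bin count, and smoothing window are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(42)

def smooth_nz(z_data, bins=50, window=7):
    """Boxcar-smoothed estimate of the redshift distribution n(z)."""
    hist, edges = np.histogram(z_data, bins=bins, density=True)
    smooth = np.convolve(hist, np.ones(window) / window, mode="same")
    return smooth, edges

def draw_randoms(smooth, edges, n, rng):
    """Draw redshifts for the random catalogue by inverse-CDF sampling of the smooth n(z)."""
    centers = 0.5 * (edges[:-1] + edges[1:])
    cdf = np.cumsum(smooth)
    cdf = cdf / cdf[-1]
    return np.interp(rng.random(n), cdf, centers)

# mock "survey" redshifts; a real analysis would fit the measured dN/dz instead
z = rng.normal(0.5, 0.1, 20000)
pdf, edges = smooth_nz(z)
randoms = draw_randoms(pdf, edges, 100000, rng)
```

Because the randoms follow a smooth curve rather than the data's own fluctuations, bin-to-bin cosmic-variance wiggles in the data are not imprinted on the random catalogue.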

  16. Systematic versus random sampling in stereological studies.

    Science.gov (United States)

    West, Mark J

    2012-12-01

    The sampling that takes place at all levels of an experimental design must be random if the estimate is to be unbiased in a statistical sense. There are two fundamental ways by which one can make a random sample of the sections and positions to be probed on the sections. Using a card-sampling analogy, one can pick any card at all out of a deck of cards. This is referred to as independent random sampling because the sampling of any one card is made without reference to the position of the other cards. The other approach to obtaining a random sample would be to pick a card within a set number of cards and others at equal intervals within the deck. Systematic sampling along one axis of many biological structures is more efficient than random sampling, because most biological structures are not randomly organized. This article discusses the merits of systematic versus random sampling in stereological studies.
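
    The card-deck analogy maps directly onto code. A minimal sketch of the two schemes, with a hypothetical 52-card deck and sample size of four:

```python
import random

rng = random.Random(1)

def independent_sample(items, n, rng):
    """Pick any n cards from the deck, each chosen without reference to the others."""
    return rng.sample(items, n)

def systematic_sample(items, n, rng):
    """Pick a random start within the first interval, then every k-th card after it."""
    k = len(items) // n
    start = rng.randrange(k)
    return items[start::k][:n]

deck = list(range(52))
independent = independent_sample(deck, 4, rng)
systematic = systematic_sample(deck, 4, rng)
```

Only the starting position is random in the systematic scheme; the fixed spacing is what makes it more efficient when the material has structure along the sampling axis.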

  17. A semi-supervised method to detect seismic random noise with fuzzy GK clustering

    International Nuclear Information System (INIS)

    Hashemi, Hosein; Javaherian, Abdolrahim; Babuska, Robert

    2008-01-01

    We present a new method to detect random noise in seismic data using fuzzy Gustafson–Kessel (GK) clustering. First, using an adaptive distance norm, a matrix is constructed from the observed seismic amplitudes. The next step is to find the centres of ellipsoidal clusters and construct a partition matrix which determines the soft decision boundaries between seismic events and random noise. The GK algorithm updates the cluster centres in order to iteratively minimize the cluster variance. Multiplication of the fuzzy membership function with the values of each sample yields new sections; we name them 'clustered sections'. The seismic amplitude values of the clustered sections are assigned so as to decrease the level of noise in the original noisy seismic input. In pre-stack data, it is essential to study the clustered sections in the f–k domain; finding the quantitative index for weighting the post-stack data needs a similar approach. Using the knowledge of a human specialist together with the fuzzy unsupervised clustering, the method amounts to semi-supervised random noise detection. The efficiency of this method is investigated on synthetic and real seismic data for both pre- and post-stack data. The results show a significant improvement of the input noisy sections without harming the important amplitude and phase information of the original data. The procedure for finding the final weights of each clustered section should be carefully done in order to keep almost all the evident seismic amplitudes in the output section. The method interactively uses the knowledge of the seismic specialist in detecting the noise.

  18. Stochastic coupled cluster theory: Efficient sampling of the coupled cluster expansion

    Science.gov (United States)

    Scott, Charles J. C.; Thom, Alex J. W.

    2017-09-01

    We consider the sampling of the coupled cluster expansion within stochastic coupled cluster theory. Observing the limitations of previous approaches due to the inherently non-linear behavior of a coupled cluster wavefunction representation, we propose new approaches based on an intuitive, well-defined condition for sampling weights and on sampling the expansion in cluster operators of different excitation levels. We term these modifications even and truncated selections, respectively. Utilising both approaches demonstrates dramatically improved calculation stability as well as reduced computational and memory costs. These modifications are particularly effective at higher truncation levels owing to the large number of terms within the cluster expansion that can be neglected, as demonstrated by the reduction of the number of terms to be sampled when truncating at triple excitations by 77% and hextuple excitations by 98%.

  19. A cluster randomized trial of strategies to increase uptake amongst young women invited for their first cervical screen: The STRATEGIC trial.

    Science.gov (United States)

    Kitchener, H; Gittins, M; Cruickshank, M; Moseley, C; Fletcher, S; Albrow, R; Gray, A; Brabin, L; Torgerson, D; Crosbie, E J; Sargent, A; Roberts, C

    2018-06-01

    Objectives To measure the feasibility and effectiveness of interventions to increase cervical screening uptake amongst young women. Methods A two-phase cluster randomized trial conducted in general practices in the NHS Cervical Screening Programme. In Phase 1, women in practices randomized to intervention who were due for their first invitation to cervical screening received a pre-invitation leaflet and, separately, access to online booking. In Phase 2, non-attenders at six months were randomized to one of: vaginal self-sample kits sent unrequested or offered; timed appointments; nurse navigator; or the choice between nurse navigator or self-sample kits. The primary outcome was uplift in intervention vs. control practices, at 3 and 12 months post invitation. Results Phase 1 randomized 20,879 women. Neither the pre-invitation leaflet nor online booking increased screening uptake by three months (18.8% pre-invitation leaflet vs. 19.2% control; 17.8% online booking vs. 17.2% control). Uptake was higher amongst human papillomavirus vaccinees at three months (OR 2.07, 95% CI 1.69-2.53, p < 0.001). Phase 2 randomized 10,126 non-attenders, with 32-34 clusters for each intervention and 100 clusters as controls. Sending self-sample kits increased uptake at 12 months (OR 1.51, 95% CI 1.20-1.91, p = 0.001), as did timed appointments (OR 1.41, 95% CI 1.14-1.74, p = 0.001). The offer of a nurse navigator, the offer of self-sample kits on request, and the choice between timed appointments and nurse navigator were ineffective. Conclusions Amongst non-attenders, self-sample kits sent and timed appointments achieved an uplift in screening over the short term; the longer term impact is less certain. Prior human papillomavirus vaccination was associated with increased screening uptake.

  20. The Design of Cluster Randomized Trials with Random Cross-Classifications

    Science.gov (United States)

    Moerbeek, Mirjam; Safarkhani, Maryam

    2018-01-01

    Data from cluster randomized trials do not always have a pure hierarchical structure. For instance, students are nested within schools that may be crossed by neighborhoods, and soldiers are nested within army units that may be crossed by mental health-care professionals. It is important that the random cross-classification is taken into account…

  1. An intervention to improve program implementation: findings from a two-year cluster randomized trial of Assets-Getting To Outcomes

    Science.gov (United States)

    2013-01-01

    Background Studies have shown that communities have not always been able to implement evidence-based prevention programs with quality and achieve outcomes demonstrated by prevention science. Implementation support interventions are needed to bridge this gap between science and practice. The purpose of this article is to present two-year outcomes from an evaluation of the Assets-Getting To Outcomes (AGTO) intervention in 12 Maine communities engaged in promoting Developmental Assets, a positive youth development approach to prevention. AGTO is an implementation support intervention that consists of a manual of text and tools, face-to-face training, and onsite technical assistance, all focused on activities shown to be associated with obtaining positive results across any prevention program. Methods This study uses a nested and cross-sectional, cluster randomized controlled design. Participants were coalition members and program staff from 12 communities in Maine. Each coalition nominated up to five prevention programs to participate. At random, six coalitions and their respective 30 programs received the two-year AGTO intervention and the other six maintained routine operations. The study assessed prevention practitioner capacity (efficacy and behaviors), practitioner exposure to and use of AGTO, practitioner perceptions of AGTO, and prevention program performance. Capacity of coalition members and performance of their programs were compared between the two groups across the baseline, one-, and two-year time points. Results We found no significant differences between the AGTO and control groups' prevention capacity. However, within the AGTO group, significant differences were found between those with greater exposure to and use of AGTO. Programs that received the highest number of technical assistance hours showed the most program improvement. Conclusions This study is the first of its kind to show that use of an implementation support intervention, AGTO, yielded …

  2. Sampling in health geography: reconciling geographical objectives and probabilistic methods. An example of a health survey in Vientiane (Lao PDR

    Directory of Open Access Journals (Sweden)

    Bochaton Audrey

    2007-06-01

    Abstract Background Geographical objectives and probabilistic methods are difficult to reconcile in a unique health survey. Probabilistic methods focus on individuals to provide estimates of a variable's prevalence with a certain precision, while geographical approaches emphasise the selection of specific areas to study interactions between spatial characteristics and health outcomes. A sample selected from a small number of specific areas creates statistical challenges: the observations are not independent at the local level, and this results in poor statistical validity at the global level. Therefore, it is difficult to construct a sample that is appropriate for both geographical and probability methods. Methods We used a two-stage selection procedure with a first non-random stage of selection of clusters. Instead of randomly selecting clusters, we deliberately chose a group of clusters, which as a whole would contain all the variation in health measures in the population. As there was no health information available before the survey, we selected a priori determinants that can influence the spatial homogeneity of the health characteristics. This method yields a distribution of variables in the sample that closely resembles that in the overall population, something that cannot be guaranteed with randomly-selected clusters, especially if the number of selected clusters is small. In this way, we were able to survey specific areas while minimising design effects and maximising statistical precision. Application We applied this strategy in a health survey carried out in Vientiane, Lao People's Democratic Republic. We selected well-known health determinants with unequal spatial distribution within the city: nationality and literacy. We deliberately selected a combination of clusters whose distribution of nationality and literacy is similar to the distribution in the general population. Conclusion This paper describes the conceptual reasoning behind the construction of the survey sample.

  3. Sampling in health geography: reconciling geographical objectives and probabilistic methods. An example of a health survey in Vientiane (Lao PDR).

    Science.gov (United States)

    Vallée, Julie; Souris, Marc; Fournet, Florence; Bochaton, Audrey; Mobillion, Virginie; Peyronnie, Karine; Salem, Gérard

    2007-06-01

    Geographical objectives and probabilistic methods are difficult to reconcile in a unique health survey. Probabilistic methods focus on individuals to provide estimates of a variable's prevalence with a certain precision, while geographical approaches emphasise the selection of specific areas to study interactions between spatial characteristics and health outcomes. A sample selected from a small number of specific areas creates statistical challenges: the observations are not independent at the local level, and this results in poor statistical validity at the global level. Therefore, it is difficult to construct a sample that is appropriate for both geographical and probability methods. We used a two-stage selection procedure with a first non-random stage of selection of clusters. Instead of randomly selecting clusters, we deliberately chose a group of clusters, which as a whole would contain all the variation in health measures in the population. As there was no health information available before the survey, we selected a priori determinants that can influence the spatial homogeneity of the health characteristics. This method yields a distribution of variables in the sample that closely resembles that in the overall population, something that cannot be guaranteed with randomly-selected clusters, especially if the number of selected clusters is small. In this way, we were able to survey specific areas while minimising design effects and maximising statistical precision. We applied this strategy in a health survey carried out in Vientiane, Lao People's Democratic Republic. We selected well-known health determinants with unequal spatial distribution within the city: nationality and literacy. We deliberately selected a combination of clusters whose distribution of nationality and literacy is similar to the distribution in the general population. This paper describes the conceptual reasoning behind the construction of the survey sample and shows that it can be
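
    The deliberate first-stage selection described above can be mimicked with a small search. The sketch below is illustrative, not the survey's actual procedure: given hypothetical clusters with known population sizes and literacy/nationality shares, it exhaustively picks the subset of clusters whose pooled, population-weighted covariate profile best matches the city-wide distribution.

```python
import itertools
import random

rng = random.Random(7)

# hypothetical clusters: (population, literacy share, nationality share)
clusters = [(rng.randint(200, 1000), rng.random(), rng.random()) for _ in range(12)]

def pooled_profile(subset):
    """Population-weighted covariate profile of a set of clusters."""
    total = sum(pop for pop, _, _ in subset)
    lit = sum(pop * lit_share for pop, lit_share, _ in subset) / total
    nat = sum(pop * nat_share for pop, _, nat_share in subset) / total
    return lit, nat

target = pooled_profile(clusters)   # the city-wide distribution

def best_subset(clusters, k):
    """Deliberately choose the k clusters whose pooled profile best matches the city."""
    def distance(sub):
        lit, nat = pooled_profile(sub)
        return abs(lit - target[0]) + abs(nat - target[1])
    return min(itertools.combinations(clusters, k), key=distance)

chosen = best_subset(clusters, 4)
```

With a realistic number of clusters the exhaustive search stays cheap; for larger frames a greedy or exchange heuristic would replace `itertools.combinations`.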

  4. A combined heating cooling stage for cluster thermalization in the gas phase

    International Nuclear Information System (INIS)

    Ievlev, D.N.; Kuester, A.; Enders, A.; Malinowski, N.; Schaber, H.; Kern, K.

    2003-01-01

    We report on the design and performance of a combined heating/cooling stage for the thermalization of clusters in a gas-phase time-of-flight mass spectrometer. With this setup the cluster temperature can be adjusted sensitively within the range from 100 K up to 800 K and higher. The unique combination of a heating stage with a subsequent cooling stage allows us to perform thermodynamic investigations on clusters at very high temperatures without quality losses in the spectra due to delayed fragmentation in the drift tube of the mass spectrometer. The performance of the setup is demonstrated by the example of (C60)n clusters.

  5. Cluster Tails for Critical Power-Law Inhomogeneous Random Graphs

    Science.gov (United States)

    van der Hofstad, Remco; Kliem, Sandra; van Leeuwaarden, Johan S. H.

    2018-04-01

    Recently, the scaling limit of cluster sizes for critical inhomogeneous random graphs of rank-1 type having finite variance but infinite third moment degrees was obtained in Bhamidi et al. (Ann Probab 40:2299-2361, 2012). It was proved that when the degrees obey a power law with exponent τ ∈ (3,4), the sequence of clusters ordered in decreasing size and multiplied through by n^(-(τ-2)/(τ-1)) converges as n → ∞ to a sequence of decreasing non-degenerate random variables. Here, we study the tails of the limit of the rescaled largest cluster, i.e., the probability that the scaling limit of the largest cluster takes a large value u, as a function of u. This extends a related result of Pittel (J Combin Theory Ser B 82(2):237-269, 2001) for the Erdős-Rényi random graph to the setting of rank-1 inhomogeneous random graphs with infinite third moment degrees. We make use of delicate large deviations and weak convergence arguments.

  6. A Two-Stage Estimation Method for Random Coefficient Differential Equation Models with Application to Longitudinal HIV Dynamic Data.

    Science.gov (United States)

    Fang, Yun; Wu, Hulin; Zhu, Li-Xing

    2011-07-01

    We propose a two-stage estimation method for random coefficient ordinary differential equation (ODE) models. A maximum pseudo-likelihood estimator (MPLE) is derived based on a mixed-effects modeling approach and its asymptotic properties for population parameters are established. The proposed method does not require repeatedly solving ODEs, and is computationally efficient although it does pay a price with the loss of some estimation efficiency. However, the method does offer an alternative approach when the exact likelihood approach fails due to model complexity and high-dimensional parameter space, and it can also serve as a method to obtain the starting estimates for more accurate estimation methods. In addition, the proposed method does not need to specify the initial values of state variables and preserves all the advantages of the mixed-effects modeling approach. The finite sample properties of the proposed estimator are studied via Monte Carlo simulations and the methodology is also illustrated with application to an AIDS clinical data set.

  7. Two-stage revision of septic knee prosthesis with articulating knee spacers yields better infection eradication rate than one-stage or two-stage revision with static spacers.

    Science.gov (United States)

    Romanò, C L; Gala, L; Logoluso, N; Romanò, D; Drago, L

    2012-12-01

    The best method for treating chronic periprosthetic knee infection remains controversial. Randomized, comparative studies on treatment modalities are lacking. This systematic review of the literature compares the infection eradication rate after two-stage versus one-stage revision and static versus articulating spacers in two-stage procedures. We reviewed full-text papers and those with an abstract in English published from 1966 through 2011 that reported the success rate of infection eradication after one-stage or two-stage revision with two different types of spacers. In all, 6 original articles reporting the results after one-stage knee exchange arthroplasty (n = 204) and 38 papers reporting on two-stage revision (n = 1,421) were reviewed. The average success rate in the eradication of infection was 89.8% after a two-stage revision and 81.9% after a one-stage procedure at a mean follow-up of 44.7 and 40.7 months, respectively. The average infection eradication rate after a two-stage procedure was slightly, although significantly, higher when an articulating spacer rather than a static spacer was used (91.2 versus 87%). The methodological limitations of this study and the heterogeneous material in the studies reviewed notwithstanding, this systematic review shows that, on average, a two-stage procedure is associated with a higher rate of eradication of infection than one-stage revision for septic knee prosthesis and that articulating spacers are associated with a lower recurrence of infection than static spacers at a comparable mean duration of follow-up. Level of evidence: IV.

  8. Randomization-Based Inference about Latent Variables from Complex Samples: The Case of Two-Stage Sampling

    Science.gov (United States)

    Li, Tiandong

    2012-01-01

    In large-scale assessments, such as the National Assessment of Educational Progress (NAEP), plausible values based on Multiple Imputations (MI) have been used to estimate population characteristics for latent constructs under complex sample designs. Mislevy (1991) derived a closed-form analytic solution for a fixed-effect model in creating…

  9. A cross-sectional, randomized cluster sample survey of household vulnerability to extreme heat among slum dwellers in Ahmedabad, India.

    Science.gov (United States)

    Tran, Kathy V; Azhar, Gulrez S; Nair, Rajesh; Knowlton, Kim; Jaiswal, Anjali; Sheffield, Perry; Mavalankar, Dileep; Hess, Jeremy

    2013-06-18

    Extreme heat is a significant public health concern in India; extreme heat hazards are projected to increase in frequency and severity with climate change. Few of the factors driving population heat vulnerability are documented, though poverty is a presumed risk factor. To facilitate public health preparedness, an assessment of factors affecting vulnerability among slum dwellers was conducted in summer 2011 in Ahmedabad, Gujarat, India. Indicators of heat exposure, susceptibility to heat illness, and adaptive capacity, all of which feed into heat vulnerability, were assessed through a cross-sectional household survey using randomized multistage cluster sampling. Associations between heat-related morbidity and vulnerability factors were identified using multivariate logistic regression with generalized estimating equations to account for clustering effects. Age, preexisting medical conditions, work location, and access to health information and resources were associated with self-reported heat illness. Several of these variables were unique to this study. As sociodemographics, occupational heat exposure, and access to resources were shown to increase vulnerability, future interventions (e.g., health education) might target specific populations among Ahmedabad urban slum dwellers to reduce vulnerability to extreme heat. Surveillance and evaluations of future interventions may also be worthwhile.

  10. Poor Prognosis Indicated by Venous Circulating Tumor Cell Clusters in Early-Stage Lung Cancers.

    Science.gov (United States)

    Murlidhar, Vasudha; Reddy, Rishindra M; Fouladdel, Shamileh; Zhao, Lili; Ishikawa, Martin K; Grabauskiene, Svetlana; Zhang, Zhuo; Lin, Jules; Chang, Andrew C; Carrott, Philip; Lynch, William R; Orringer, Mark B; Kumar-Sinha, Chandan; Palanisamy, Nallasivam; Beer, David G; Wicha, Max S; Ramnath, Nithya; Azizi, Ebrahim; Nagrath, Sunitha

    2017-09-15

    Early detection of metastasis can be aided by circulating tumor cells (CTC), which also show potential to predict early relapse. Because of the limited CTC numbers in peripheral blood in early stages, we investigated CTCs in pulmonary vein blood accessed during surgical resection of tumors. Pulmonary vein (PV) and peripheral vein (Pe) blood specimens from patients with lung cancer were drawn during the perioperative period and assessed for CTC burden using a microfluidic device. Across 108 blood samples analyzed from 36 patients, PV samples had a significantly higher number of CTCs than preoperative Pe samples. Gene ontology analysis revealed enrichment of cell migration and immune-related pathways in CTC clusters, suggesting a survival advantage of clusters in circulation. Clusters display characteristics of therapeutic resistance, indicating the aggressive nature of these cells. Thus, CTCs isolated from early stages of lung cancer are predictive of poor prognosis and can be interrogated to determine biomarkers predictive of recurrence. Cancer Res; 77(18); 5194-206. ©2017 American Association for Cancer Research.

  11. Grouped fuzzy SVM with EM-based partition of sample space for clustered microcalcification detection.

    Science.gov (United States)

    Wang, Huiya; Feng, Jun; Wang, Hongyu

    2017-07-20

    Detection of clustered microcalcifications (MC) from mammograms plays an essential role in computer-aided diagnosis of early-stage breast cancer. To tackle problems associated with the diversity of data structures of MC lesions and the variability of normal breast tissues, multi-pattern sample space learning is required. In this paper, a novel grouped fuzzy Support Vector Machine (SVM) algorithm with sample space partition based on Expectation-Maximization (EM) (called G-FSVM) is proposed for clustered MC detection. The diversified pattern of training data is partitioned into several groups by the EM algorithm, and a series of fuzzy SVMs are then integrated for classification, each trained on one group of samples from the MC lesions and normal breast tissues. From the DDSM database, a total of 1,064 suspicious regions were selected from 239 mammograms; the measured Accuracy, True Positive Rate (TPR), False Positive Rate (FPR), and EVL = TPR × (1 − FPR) are 0.82, 0.78, 0.14, and 0.72, respectively. The proposed method incorporates the merits of fuzzy SVM and multi-pattern sample space learning, decomposing the MC detection problem into a series of simple two-class classifications. Experimental results on synthetic data and the DDSM database demonstrate that our integrated classification framework reduces the false positive rate significantly while maintaining the true positive rate.
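A minimal sketch of the evaluation metric named in the abstract above, EVL = TPR × (1 − FPR), computed from raw confusion counts. The counts are invented for illustration; this is not the paper's code.

```python
# Illustrative sketch (not the G-FSVM paper's code): computing accuracy,
# TPR, FPR, and the combined score EVL = TPR * (1 - FPR) from a binary
# confusion matrix. Counts below are invented for illustration.

def detection_metrics(tp, fn, fp, tn):
    """Return (accuracy, tpr, fpr, evl) for binary detection counts."""
    total = tp + fn + fp + tn
    accuracy = (tp + tn) / total
    tpr = tp / (tp + fn)          # sensitivity / recall
    fpr = fp / (fp + tn)          # fall-out
    evl = tpr * (1 - fpr)         # rewards high TPR at low FPR
    return accuracy, tpr, fpr, evl

acc, tpr, fpr, evl = detection_metrics(tp=78, fn=22, fp=14, tn=86)
print(acc, tpr, fpr, evl)
```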

  12. A Random Walk Approach to Query Informative Constraints for Clustering.

    Science.gov (United States)

    Abin, Ahmad Ali

    2017-08-09

    This paper presents a random walk approach to the problem of querying informative constraints for clustering. The proposed method is based on the properties of the commute time, that is, the expected time taken for a random walk to travel between two nodes and return, on the adjacency graph of the data. Commute time has the nice property that the more short paths connect two given nodes in a graph, the more similar those nodes are. Since computing the commute time takes the Laplacian eigenspectrum into account, we use this property in a recursive fashion to query informative constraints for clustering. At each recursion, the proposed method constructs the adjacency graph of the data and utilizes the spectral properties of the commute time matrix to bipartition the adjacency graph. Thereafter, the proposed method uses the commute-time distance on the graph to query informative constraints between partitions. This process iterates for each partition until the stopping condition becomes true. Experiments on real-world data show the efficiency of the proposed method for constraint selection.
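The commute time mentioned above can be computed from the Laplacian spectrum via its Moore-Penrose pseudoinverse. A hedged sketch, not the paper's implementation, using the standard identity C(i, j) = vol(G) · (L⁺ᵢᵢ + L⁺ⱼⱼ − 2 L⁺ᵢⱼ):

```python
# Hedged sketch (not the paper's code): pairwise commute times of a
# weighted undirected graph from the pseudoinverse of its Laplacian.
import numpy as np

def commute_times(W):
    """W: symmetric adjacency matrix. Returns the matrix of commute times."""
    d = W.sum(axis=1)                 # node degrees
    L = np.diag(d) - W                # graph Laplacian
    Lp = np.linalg.pinv(L)            # Moore-Penrose pseudoinverse
    diag = np.diag(Lp)
    vol = d.sum()                     # total volume = sum of degrees
    return vol * (diag[:, None] + diag[None, :] - 2 * Lp)

# Path graph 0-1-2: nodes 0 and 2 are farther apart than 0 and 1.
W = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
C = commute_times(W)
print(C[0, 1], C[0, 2])   # commute time grows with graph distance
```

On this path graph the commute time equals the volume times the effective resistance, so adjacent nodes are at 4 and the end nodes at 8.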

  13. BWIP-RANDOM-SAMPLING, Random Sample Generation for Nuclear Waste Disposal

    International Nuclear Information System (INIS)

    Sagar, B.

    1989-01-01

    1 - Description of program or function: Random samples for different distribution types are generated, as required for performance-assessment modeling of geologic nuclear waste disposal. The supported distributions are: uniform; log-uniform (base 10 or natural); normal; lognormal (base 10 or natural); exponential; Bernoulli; and user-defined continuous distributions. 2 - Method of solution: A linear congruential generator is used for uniform random numbers, and a set of transformation functions maps the uniform distribution to the other distributions. Stratified, rather than random, sampling can be chosen. Truncation limits can be specified for many of the distributions whose usual definition has infinite support. 3 - Restrictions on the complexity of the problem: Generation of correlated random variables is not included

  14. Sampling Methods in Cardiovascular Nursing Research: An Overview.

    Science.gov (United States)

    Kandola, Damanpreet; Banner, Davina; O'Keefe-McCarthy, Sheila; Jassal, Debbie

    2014-01-01

    Cardiovascular nursing research covers a wide array of topics, from health services to psychosocial patient experiences. The selection of specific participant samples is an important part of the research design and process. The sampling strategy employed is of utmost importance to ensure that a representative sample of participants is chosen. There are two main categories of sampling methods: probability and non-probability. Probability sampling is the random selection of elements from the population, where each element of the population has an equal and independent chance of being included in the sample. There are five main types of probability sampling: simple random sampling, systematic sampling, stratified sampling, cluster sampling, and multi-stage sampling. Non-probability sampling methods are those in which elements are chosen through non-random methods for inclusion into the research study, and include convenience sampling, purposive sampling, and snowball sampling. Each approach offers distinct advantages and disadvantages and must be considered critically. In this research column, we provide an introduction to these key sampling techniques and draw on examples from cardiovascular research. Understanding the differences in sampling techniques may aid nurses in effective appraisal of research literature and provide a reference point for nurses who engage in cardiovascular research.
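Of the probability methods listed above, the multi-stage (here, two-stage) cluster design is the one this record collection centers on. A minimal sketch, with an invented population of clinics: first sample clusters, then sample elements within each selected cluster.

```python
# Hedged sketch of two-stage cluster sampling: stage 1 samples clusters,
# stage 2 samples elements within each chosen cluster. The clinic/patient
# population is invented for illustration.
import random

def two_stage_sample(population, n_clusters, n_per_cluster, seed=0):
    """population: dict mapping cluster_id -> list of elements."""
    rng = random.Random(seed)
    chosen = rng.sample(sorted(population), n_clusters)       # stage 1
    sample = []
    for cid in chosen:
        members = population[cid]
        k = min(n_per_cluster, len(members))
        sample.extend(rng.sample(members, k))                 # stage 2
    return chosen, sample

clinics = {c: [f"{c}-patient{i}" for i in range(20)] for c in "ABCDEF"}
clusters, sample = two_stage_sample(clinics, n_clusters=3, n_per_cluster=5)
print(clusters, len(sample))   # 3 clusters, 15 sampled patients
```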

  15. A Denoising Scheme for Randomly Clustered Noise Removal in ICCD Sensing Image

    Directory of Open Access Journals (Sweden)

    Fei Wang

    2017-01-01

    Full Text Available An Intensified Charge-Coupled Device (ICCD) image is captured by the ICCD image sensor in extremely low-light conditions. Its noise has two distinctive characteristics: (a) different from the independent identically distributed (i.i.d.) noise in natural images, the noise in the ICCD sensing image is spatially clustered, which induces unexpected structure information; (b) the pattern of the clustered noise is formed randomly. In this paper, we propose a denoising scheme to remove the randomly clustered noise in the ICCD sensing image. First, we decompose the image into non-overlapping patches and classify them into flat patches and structure patches according to whether real structure information is included. Then, two denoising algorithms are designed for them, respectively. For each flat patch, we simulate multiple similar patches in a pseudo-time domain and remove its noise by averaging all the simulated patches, considering that the structure information induced by the noise varies randomly over time. For each structure patch, we design a structure-preserving sparse coding algorithm to reconstruct the real structure information. It reconstructs each patch by describing it as a weighted summation of its neighboring patches and incorporating the weights into the sparse representation of the current patch. Based on all the reconstructed patches, we generate a reconstructed image. After that, we repeat the whole process with different parameters, since blocking artifacts exist in any single reconstructed image. Finally, we obtain the final result by merging all the generated images into one. Experiments conducted on an ICCD sensing image dataset verify the scheme's subjective performance in removing the randomly clustered noise while preserving the real structure information.
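The first step above, splitting the image into non-overlapping patches and deciding which ones carry structure, can be sketched with a simple variance threshold. This is a hedged stand-in for the paper's (unspecified) classification rule; the image, patch size, and threshold are all invented.

```python
# Hedged sketch, not the paper's algorithm: classify non-overlapping
# patches as "flat" or "structure" by thresholding intensity variance,
# the kind of decision the denoising scheme above makes before applying
# its two branches.
import numpy as np

def classify_patches(image, patch=8, var_threshold=10.0):
    h, w = image.shape
    labels = {}
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            block = image[i:i + patch, j:j + patch]
            labels[(i, j)] = "structure" if block.var() > var_threshold else "flat"
    return labels

rng = np.random.default_rng(0)
img = rng.normal(100.0, 1.0, size=(32, 32))   # mostly flat background
img[4:12, 4:12] += 40.0                       # a bright square straddling 4 patches
labels = classify_patches(img)
print(sum(v == "structure" for v in labels.values()), "structure patches")
```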

  16. Randomized Comparison of Two Vaginal Self-Sampling Methods for Human Papillomavirus Detection: Dry Swab versus FTA Cartridge

    OpenAIRE

    Catarino, Rosa; Vassilakos, Pierre; Bilancioni, Aline; Vanden Eynde, Mathieu; Meyer-Hamme, Ulrike; Menoud, Pierre-Alain; Guerry, Frédéric; Petignat, Patrick

    2015-01-01

    Background Human papillomavirus (HPV) self-sampling (self-HPV) is valuable in cervical cancer screening. HPV testing is usually performed on physician-collected cervical smears stored in liquid-based medium. Dry filters and swabs are an alternative. We evaluated the adequacy of self-HPV using two dry storage and transport devices, the FTA cartridge and swab. Methods A total of 130 women performed two consecutive self-HPV samples. Randomization determined which of the two tests was performed f...

  17. A sero-survey of rinderpest in nomadic pastoral systems in central and southern Somalia from 2002 to 2003, using a spatially integrated random sampling approach.

    Science.gov (United States)

    Tempia, S; Salman, M D; Keefe, T; Morley, P; Freier, J E; DeMartini, J C; Wamwayi, H M; Njeumi, F; Soumaré, B; Abdi, A M

    2010-12-01

    A cross-sectional sero-survey, using a two-stage cluster sampling design, was conducted between 2002 and 2003 in ten administrative regions of central and southern Somalia, to estimate the seroprevalence and geographic distribution of rinderpest (RP) in the study area, as well as to identify potential risk factors for the observed seroprevalence distribution. The study was also used to test the feasibility of the spatially integrated investigation technique in nomadic and semi-nomadic pastoral systems. In the absence of a systematic list of livestock holdings, the primary sampling units were selected by generating random map coordinates. A total of 9,216 serum samples were collected from cattle aged 12 to 36 months at 562 sampling sites. Two apparent clusters of RP seroprevalence were detected. Four potential risk factors associated with the observed seroprevalence were identified: the mobility of cattle herds, the cattle population density, the proximity of cattle herds to cattle trade routes and cattle herd size. Risk maps were then generated to assist in designing more targeted surveillance strategies. The observed seroprevalence in these areas declined over time. In subsequent years, similar seroprevalence studies in neighbouring areas of Kenya and Ethiopia also showed a very low seroprevalence of RP or the absence of antibodies against RP. The progressive decline in RP antibody prevalence is consistent with virus extinction. Verification of freedom from RP infection in the Somali ecosystem is currently in progress.

  18. A systematic review of the usage of flow diagram in cluster randomized trials

    Directory of Open Access Journals (Sweden)

    Kostić M.

    2014-01-01

    Full Text Available A flow diagram is an integral part of the Consolidated Standards of Reporting Trials (CONSORT), and its use in reporting cluster randomized trials is highly recommended. The aim of this article is to present the frequency of flow-diagram use in cluster randomized trials, in accordance with the reporting standards. The team searched the Medline database and singled out 474 studies with cluster randomization for analysis. The studies were reviewed to identify the use of graphic representation, compliance with the reporting standards, and the date of publication. Depending on their duration, studies were divided into completed studies and those still ongoing. Use of CONSORT was recorded in 145 (31%) of the articles. The frequency of flow diagrams was significantly higher in studies that complied with the standards (86.2%) than in those that did not use the CONSORT guidelines (71.4%), and higher in completed studies (81.2%) than in pilot-project studies (54.3%). The number of cluster randomized trials retrieved through MEDLINE searches for the key words 'cluster randomized trial [ti]' and 'cluster randomised trial [ti]', as well as the use of CONSORT in reports of cluster randomized trials, show linear growth over time (p < 0.001). The frequency of flow diagrams is higher in reports of cluster randomized trials conducted in accordance with the reporting standards.

  19. Choosing a Cluster Sampling Design for Lot Quality Assurance Sampling Surveys

    OpenAIRE

    Hund, Lauren; Bedrick, Edward J.; Pagano, Marcello

    2015-01-01

    Lot quality assurance sampling (LQAS) surveys are commonly used for monitoring and evaluation in resource-limited settings. Recently several methods have been proposed to combine LQAS with cluster sampling for more timely and cost-effective data collection. For some of these methods, the standard binomial model can be used for constructing decision rules as the clustering can be ignored. For other designs, considered here, clustering is accommodated in the design phase. In this paper, we comp...

  20. Ab initio random structure search for 13-atom clusters of fcc elements

    International Nuclear Information System (INIS)

    Chou, J P; Hsing, C R; Wei, C M; Cheng, C; Chang, C M

    2013-01-01

    The 13-atom metal clusters of fcc elements (Al, Rh, Ir, Ni, Pd, Pt, Cu, Ag, Au) were studied by density functional theory calculations. The global minima were searched for by the ab initio random structure searching method. In addition to some new lowest-energy structures for Pd13 and Au13, we found that the effective coordination numbers of the lowest-energy clusters increase with the dimer-to-bulk bond length ratio. This correlation, together with the electronic structures of the lowest-energy clusters, divides the 13-atom clusters of these fcc elements into two groups (except for Au13, which prefers a two-dimensional structure due to the relativistic effect). Compact clusters composed exclusively of triangular motifs are preferred for elements without d-electrons (Al) or with (nearly) filled d-bands (Ni, Pd, Cu, Ag). Non-compact clusters composed mainly of square motifs connected by some triangular motifs are favored for elements with unfilled d-bands (Rh, Ir, Pt). (paper)

  1. Functional Principal Component Analysis and Randomized Sparse Clustering Algorithm for Medical Image Analysis

    Science.gov (United States)

    Lin, Nan; Jiang, Junhai; Guo, Shicheng; Xiong, Momiao

    2015-01-01

    Due to advances in sensor technology, growing volumes of medical image data make it possible to visualize anatomical changes in biological tissues. As a consequence, medical images have the potential to enhance the diagnosis of disease, the prediction of clinical outcomes, and the characterization of disease progression. At the same time, the growing data dimensions pose great methodological and computational challenges for the representation and selection of features in image cluster analysis. To address these challenges, we first extend functional principal component analysis (FPCA) from one dimension to two dimensions to fully capture the spatial variation of the image signals. The image signals contain a large number of redundant features which provide no additional information for clustering analysis. The widely used methods for removing irrelevant features are sparse clustering algorithms that use a lasso-type penalty to select the features. However, the accuracy of clustering with a lasso-type penalty depends on the selection of the penalty parameters and the threshold value, which are difficult to determine in practice. Recently, randomized algorithms have received a great deal of attention in big data analysis. This paper presents a randomized algorithm for accurate feature selection in image clustering analysis. The proposed method is applied to both liver and kidney cancer histology image data from the TCGA database. The results demonstrate that the randomized feature selection method coupled with functional principal component analysis substantially outperforms the current sparse clustering algorithms in image cluster analysis. PMID:26196383

  2. The clustering of local maxima in random noise

    International Nuclear Information System (INIS)

    Coles, P.

    1989-01-01

    A mixture of analytic and numerical techniques is used to study the clustering properties of local maxima of random noise. Technical complexities restrict us to the case of 1D noise, but the results obtained should give a reasonably accurate picture of the behaviour of cosmological density peaks in noise defined on a 3D domain. We give estimates of the two-point correlation function of local maxima, for both Gaussian and non-Gaussian noise and show that previous approximations are not accurate. (author)

  3. Can group-based reassuring information alter low back pain behavior? A cluster-randomized controlled trial

    DEFF Research Database (Denmark)

    Frederiksen, Pernille; Indahl, Aage; Andersen, Lars L

    2017-01-01

    -randomized controlled trial. METHODS: Publicly employed workers (n = 505) from 11 Danish municipality centers were randomized at center level (cluster) to either intervention (two 1-hour group-based talks at the workplace) or control. The talks provided reassuring information together with a simple non

  4. Clustering for high-dimension, low-sample size data using distance vectors

    OpenAIRE

    Terada, Yoshikazu

    2013-01-01

    In high-dimension, low-sample size (HDLSS) data, it is not always true that closeness of two objects reflects a hidden cluster structure. We point out the important fact that it is not the closeness, but the "values" of distance that contain information of the cluster structure in high-dimensional space. Based on this fact, we propose an efficient and simple clustering approach, called distance vector clustering, for HDLSS data. Under the assumptions given in the work of Hall et al. (2005), w...

  5. Combining evidence from multiple electronic health care databases: performances of one-stage and two-stage meta-analysis in matched case-control studies.

    Science.gov (United States)

    La Gamba, Fabiola; Corrao, Giovanni; Romio, Silvana; Sturkenboom, Miriam; Trifirò, Gianluca; Schink, Tania; de Ridder, Maria

    2017-10-01

    Clustering of patients in databases is usually ignored in one-stage meta-analysis of multi-database studies using matched case-control data. The aim of this study was to compare bias and efficiency of such a one-stage meta-analysis with a two-stage meta-analysis. First, we compared the approaches by generating matched case-control data under 5 simulated scenarios, built by varying: (1) the exposure-outcome association; (2) its variability among databases; (3) the confounding strength of one covariate on this association; (4) its variability; and (5) the (heterogeneous) confounding strength of two covariates. Second, we made the same comparison using empirical data from the ARITMO project, a multiple database study investigating the risk of ventricular arrhythmia following the use of medications with arrhythmogenic potential. In our study, we specifically investigated the effect of current use of promethazine. Bias increased for one-stage meta-analysis with increasing (1) between-database variance of exposure effect and (2) heterogeneous confounding generated by two covariates. The efficiency of one-stage meta-analysis was slightly lower than that of two-stage meta-analysis for the majority of investigated scenarios. Based on ARITMO data, there were no evident differences between one-stage (OR = 1.50, CI = [1.08; 2.08]) and two-stage (OR = 1.55, CI = [1.12; 2.16]) approaches. When the effect of interest is heterogeneous, a one-stage meta-analysis ignoring clustering gives biased estimates. Two-stage meta-analysis generates estimates at least as accurate and precise as one-stage meta-analysis. However, in a study using small databases and rare exposures and/or outcomes, a correct one-stage meta-analysis becomes essential. Copyright © 2017 John Wiley & Sons, Ltd.
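The second stage of the two-stage approach compared above is a standard meta-analytic pooling step. A minimal sketch under assumed inputs: each database contributes a log odds ratio and standard error (the numbers below are invented), which are pooled by fixed-effect inverse-variance weighting.

```python
# Hedged sketch of the pooling step of a two-stage meta-analysis
# (fixed-effect inverse-variance weighting; per-database estimates
# are invented numbers, not ARITMO data).
import math

def pool_fixed_effect(log_ors, ses):
    """Return the inverse-variance pooled log-OR and its standard error."""
    weights = [1.0 / se**2 for se in ses]
    pooled = sum(w * b for w, b in zip(weights, log_ors)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    return pooled, pooled_se

# Three hypothetical databases.
log_ors = [math.log(1.4), math.log(1.7), math.log(1.5)]
ses = [0.20, 0.25, 0.15]
b, se = pool_fixed_effect(log_ors, ses)
lo, hi = math.exp(b - 1.96 * se), math.exp(b + 1.96 * se)
print(f"pooled OR = {math.exp(b):.2f}, 95% CI = [{lo:.2f}; {hi:.2f}]")
```

A random-effects model would widen the interval when the between-database variance of the exposure effect is large, which is exactly the situation in which the abstract reports the naive one-stage analysis becomes biased.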

  6. Selection bias and subject refusal in a cluster-randomized controlled trial

    Directory of Open Access Journals (Sweden)

    Rochelle Yang

    2017-07-01

    Full Text Available Abstract Background Selection bias and non-participation bias are major methodological concerns which impact external validity. Cluster-randomized controlled trials are especially prone to selection bias as it is impractical to blind clusters to their allocation into intervention or control. This study assessed the impact of selection bias in a large cluster-randomized controlled trial. Methods The Improved Cardiovascular Risk Reduction to Enhance Rural Primary Care (ICARE study examined the impact of a remote pharmacist-led intervention in twelve medical offices. To assess eligibility, a standardized form containing patient demographics and medical information was completed for each screened patient. Eligible patients were approached by the study coordinator for recruitment. Both the study coordinator and the patient were aware of the site’s allocation prior to consent. Patients who consented or declined to participate were compared across control and intervention arms for differing characteristics. Statistical significance was determined using a two-tailed, equal variance t-test and a chi-square test with adjusted Bonferroni p-values. Results were adjusted for random cluster variation. Results There were 2749 completed screening forms returned to research staff with 461 subjects who had either consented or declined participation. Patients with poorly controlled diabetes were found to be significantly more likely to decline participation in intervention sites compared to those in control sites. A higher mean diastolic blood pressure was seen in patients with uncontrolled hypertension who declined in the control sites compared to those who declined in the intervention sites. However, these findings were no longer significant after adjustment for random variation among the sites. After this adjustment, females were now found to be significantly more likely to consent than males (odds ratio = 1.41; 95% confidence interval = 1.03, 1

  7. Cluster emission at pre-equilibrium stage in Heavy Nuclear Reactions. A Model considering the Thermodynamics of Small Systems

    International Nuclear Information System (INIS)

    Bermudez Martinez, A.; Damiani, D.; Guzman Martinez, F.; Rodriguez Hoyos, O.; Rodriguez Manso, A.

    2015-01-01

    Cluster emission at the pre-equilibrium stage in heavy-ion fusion reactions of 12C and 16O nuclei with 116Sn, 208Pb, and 238U is studied. The energy of the projectile nuclei was chosen as 0.25 GeV, 0.5 GeV, and 1 GeV. A cluster formation model is developed in order to calculate the cluster size. The thermodynamics of small systems is used to examine the cluster behavior inside the nuclear medium. The model is based on considering two phases inside the compound nucleus: on one hand the nuclear-medium phase, and on the other hand the cluster itself. The cluster acts like an instability inside the compound nucleus, provoking an exchange of nucleons with the nuclear medium through its surface. The processes were simulated using Monte Carlo methods. We obtained that the cluster emission probability shows a strong dependence on the cluster size. This project aims to implement cluster emission processes during the pre-equilibrium stage in the framework of the CRISP code (Collaboration Rio-Sao Paulo). (Author)

  8. A Smoothing Algorithm for a New Two-Stage Stochastic Model of Supply Chain Based on Sample Average Approximation

    OpenAIRE

    Liu Yang; Yao Xiong; Xiao-jiao Tong

    2017-01-01

    We construct a new two-stage stochastic model of supply chain with multiple factories and distributors for perishable product. By introducing a second-order stochastic dominance (SSD) constraint, we can describe the preference consistency of the risk taker while minimizing the expected cost of company. To solve this problem, we convert it into a one-stage stochastic model equivalently; then we use sample average approximation (SAA) method to approximate the expected values of the underlying r...

  9. Single-stage laparoscopic common bile duct exploration and cholecystectomy versus two-stage endoscopic stone extraction followed by laparoscopic cholecystectomy for patients with concomitant gallbladder stones and common bile duct stones: a randomized controlled trial.

    Science.gov (United States)

    Bansal, Virinder Kumar; Misra, Mahesh C; Rajan, Karthik; Kilambi, Ragini; Kumar, Subodh; Krishna, Asuri; Kumar, Atin; Pandav, Chandrakant S; Subramaniam, Rajeshwari; Arora, M K; Garg, Pramod Kumar

    2014-03-01

    The ideal method for managing concomitant gallbladder stones and common bile duct (CBD) stones is debatable. The currently preferred method is two-stage endoscopic stone extraction followed by laparoscopic cholecystectomy (LC). This prospective randomized trial compared the success and cost effectiveness of single- and two-stage management of patients with concomitant gallbladder and CBD stones. Consecutive patients with concomitant gallbladder and CBD stones were randomized to either single-stage laparoscopic CBD exploration and cholecystectomy (group 1) or endoscopic retrograde cholangiopancreatography (ERCP) for endoscopic extraction of CBD stones followed by LC (group 2). Success was defined as complete clearance of the CBD and cholecystectomy by the intended method. Cost effectiveness was measured using the incremental cost-effectiveness ratio. Intention-to-treat analysis was performed to compare outcomes. From February 2009 to October 2012, 168 patients were randomized: 84 to the single-stage procedure (group 1) and 84 to the two-stage procedure (group 2). Both groups were matched with regard to demographic and clinical parameters. The success rates of laparoscopic CBD exploration and ERCP for clearance of the CBD were similar (91.7 vs. 88.1%). The overall success rate also was comparable: 88.1% in group 1 and 79.8% in group 2 (p = 0.20). Direct choledochotomy was performed in 83 of the 84 patients. The mean operative time was significantly longer in group 1 (135.7 ± 36.6 vs. 72.4 ± 27.6 min; p ≤ 0.001), but the overall hospital stay was significantly shorter (4.6 ± 2.4 vs. 5.3 ± 6.2 days; p = 0.03). Group 2 required a significantly greater number of procedures per patient. Both single-stage and two-stage management of concomitant gallbladder and CBD stones had similar success and complication rates, but the single-stage strategy was better in terms of shorter hospital stay, need for fewer procedures, and cost effectiveness.

  10. Two-stage electrolysis to enrich tritium in environmental water

    International Nuclear Information System (INIS)

    Shima, Nagayoshi; Muranaka, Takeshi

    2007-01-01

    We present a two-stage electrolysis procedure to enrich tritium in environmental waters. Tritium is first enriched rapidly in a commercially available electrolyser with a large 50 A current, and then in a newly designed electrolyser that avoids the memory effect, with a 6 A current. The tritium recovery factor obtained by such two-stage electrolysis was greater than that obtained using the commercially available device alone. Water samples collected in 2006 in lakes and along the Pacific coast of Aomori prefecture, Japan, were electrolyzed using the two-stage method. Tritium concentrations in these samples ranged from 0.2 to 0.9 Bq/L and were half or less of those in samples collected at the same sites in 1992. (author)

  11. On Angular Sampling Methods for 3-D Spatial Channel Models

    DEFF Research Database (Denmark)

    Fan, Wei; Jämsä, Tommi; Nielsen, Jesper Ødum

    2015-01-01

    This paper discusses generating three-dimensional (3D) spatial channel models with emphasis on the angular sampling methods. Three angular sampling methods, i.e. modified uniform power sampling, modified uniform angular sampling, and random pairing, are proposed and investigated in detail. The random pairing method, which uses only twenty sinusoids in the ray-based model for generating the channels, gives good results if the spatial channel cluster has a small elevation angle spread. For spatial clusters with large elevation angle spreads, however, the random pairing method would fail, and the other two methods should be considered.
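The random pairing idea above can be sketched as follows: draw azimuth and elevation angle samples independently from their respective spreads, then pair them at random to form the rays of a ray-based model. The Gaussian angle draws and spread values are assumptions for illustration, not the paper's spectra.

```python
# Hedged sketch of random pairing for ray-based channel generation:
# independently drawn azimuth and elevation samples are paired at
# random. Angle distributions and spreads are invented for illustration.
import random

def random_pairing(azimuths, elevations, seed=0):
    """Randomly pair pre-drawn azimuth and elevation angle samples."""
    rng = random.Random(seed)
    shuffled = elevations[:]
    rng.shuffle(shuffled)
    return list(zip(azimuths, shuffled))

rng = random.Random(1)
az = [rng.gauss(0.0, 30.0) for _ in range(20)]   # azimuth spread ~30 deg
el = [rng.gauss(0.0, 5.0) for _ in range(20)]    # small elevation spread
rays = random_pairing(az, el)
print(len(rays), rays[0])
```

With only twenty rays, the pairing determines how well the joint azimuth-elevation power spectrum is represented, which is why the method degrades for large elevation spreads.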

  12. Ethical and policy issues in cluster randomized trials: rationale and design of a mixed methods research study

    Directory of Open Access Journals (Sweden)

    Chaudhry Shazia H

    2009-07-01

    Full Text Available Abstract Background Cluster randomized trials are an increasingly important methodological tool in health research. In cluster randomized trials, intact social units or groups of individuals, such as medical practices, schools, or entire communities, rather than individuals themselves, are randomly allocated to intervention or control conditions, while outcomes are then observed on individual cluster members. The substantial methodological differences between cluster randomized trials and conventional randomized trials pose serious challenges to the current conceptual framework for research ethics. The ethical implications of randomizing groups rather than individuals are not addressed in current research ethics guidelines, nor have they been thoroughly explored. The main objectives of this research are to: (1) identify ethical issues arising in cluster trials and learn how they are currently being addressed; (2) understand how ethics reviews of cluster trials are carried out in different countries (Canada, the USA and the UK); (3) elicit the views and experiences of trial participants and cluster representatives; (4) develop well-grounded guidelines for the ethical conduct and review of cluster trials by conducting an extensive ethical analysis and organizing a consensus process; (5) disseminate the guidelines to researchers, research ethics boards (REBs), journal editors, and research funders. Methods We will use a mixed-methods (qualitative and quantitative) approach incorporating both empirical and conceptual work. Empirical work will include a systematic review of a random sample of published trials, a survey and in-depth interviews with trialists, a survey of REBs, and in-depth interviews and focus group discussions with trial participants and gatekeepers. The empirical work will inform the concurrent ethical analysis, which will lead to a guidance document laying out principles, policy options, and rationale for proposed guidelines.

  13. Clustering problems for geochemical data

    International Nuclear Information System (INIS)

    Kane, V.E.; Larson, N.M.

    1977-01-01

    The Union Carbide Corporation, Nuclear Division, Uranium Resource Evaluation Project uses a two-stage sampling program to identify potential uranium districts. Cluster analysis techniques are used in locating high density sampling areas as well as in identifying potential uranium districts. Problems are considered involving the analysis of multivariate censored data, laboratory measurement error, and data standardization
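
    The two-stage sampling design this record refers to (first sampling areas, then sampling units within the selected areas) can be illustrated generically. The sketch below is illustrative only: the frame, area names, and sample sizes are hypothetical and are not taken from the Uranium Resource Evaluation Project.

    ```python
    import random

    def two_stage_sample(frame, n_areas, n_per_area, seed=0):
        """Two-stage sampling: stage 1 draws areas (clusters) at random,
        stage 2 draws sampling units within each selected area."""
        rng = random.Random(seed)
        # Stage 1: simple random sample of areas, without replacement.
        areas = rng.sample(sorted(frame), n_areas)
        # Stage 2: simple random sample of units within each chosen area.
        return {a: rng.sample(frame[a], n_per_area) for a in areas}

    # Hypothetical survey frame: area id -> list of candidate sample sites.
    frame = {f"area{i}": [f"site{i}-{j}" for j in range(20)] for i in range(10)}
    chosen = two_stage_sample(frame, n_areas=3, n_per_area=5)
    ```

    High-density sampling areas would then be identified from the measurements made at the selected sites, but that analysis step depends on the geochemical data themselves.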

  14. Random clustering ferns for multimodal object recognition

    OpenAIRE

    Villamizar Vergel, Michael Alejandro; Garrell Zulueta, Anais; Sanfeliu Cortés, Alberto; Moreno-Noguer, Francesc

    2017-01-01

    The final publication is available at link.springer.com We propose an efficient and robust method for the recognition of objects exhibiting multiple intra-class modes, where each one is associated with a particular object appearance. The proposed method, called random clustering ferns, synergistically combines a single real-time classifier, based on the boosted assembling of extremely randomized trees (ferns), with an unsupervised and probabilistic approach in order to recognize efficient...

  15. Measurement Error Correction Formula for Cluster-Level Group Differences in Cluster Randomized and Observational Studies

    Science.gov (United States)

    Cho, Sun-Joo; Preacher, Kristopher J.

    2016-01-01

    Multilevel modeling (MLM) is frequently used to detect cluster-level group differences in cluster randomized trial and observational studies. Group differences on the outcomes (posttest scores) are detected by controlling for the covariate (pretest scores) as a proxy variable for unobserved factors that predict future attributes. The pretest and…

  16. A Bayesian Justification for Random Sampling in Sample Survey

    Directory of Open Access Journals (Sweden)

    Glen Meeden

    2012-07-01

    Full Text Available In the usual Bayesian approach to survey sampling, the sampling design plays a minimal role at best. Although a close relationship between exchangeable prior distributions and simple random sampling has been noted, how to formally integrate simple random sampling into the Bayesian paradigm is not clear. Recently it has been argued that the sampling design can be thought of as part of a Bayesian's prior distribution. We show here that, under this scenario, simple random sampling can be given a Bayesian justification in survey sampling.

  17. Cluster-randomized xylitol toothpaste trial for early childhood caries prevention.

    Science.gov (United States)

    Chi, Donald L; Tut, Ohnmar; Milgrom, Peter

    2014-01-01

    The purpose of this study was to assess the efficacy of supervised toothbrushing with xylitol toothpaste to prevent early childhood caries (ECC) and reduce mutans streptococci. In this cluster-randomized efficacy trial, 196 four- to five-year-old children in four Head Start classrooms in the Marshall Islands were randomly assigned to supervised toothbrushing with 1,400 ppm/31 percent fluoride-xylitol or 1,450 ppm fluoride-sorbitol toothpaste. We hypothesized that there would be no difference in efficacy between the two types of toothpaste. The primary outcome was the surface-level primary molar caries increment (d(2-3)mfs) after six months. A single examiner was blinded to classroom assignments. Two classrooms were assigned to the fluoride-xylitol group (85 children), and two classrooms were assigned to the fluoride-sorbitol group (83 children). The child-level analyses accounted for clustering. There was no difference between the two groups in baseline or end-of-trial mean d(2-3)mfs. The mean d(2-3)mfs increment was greater in the fluoride-xylitol group than in the fluoride-sorbitol group (2.5 and 1.4 d(2-3)mfs, respectively), but the difference was not significant (95% confidence interval: -0.17, 2.37; P=.07). No adverse effects were reported. After six months, brushing with a low-strength xylitol/fluoride toothpaste is no more efficacious in reducing ECC than a fluoride-only toothpaste in a high-caries-risk child population.

  18. Remote Sensing Based Two-Stage Sampling for Accuracy Assessment and Area Estimation of Land Cover Changes

    Directory of Open Access Journals (Sweden)

    Heinz Gallaun

    2015-09-01

    Full Text Available Land cover change processes are accelerating at the regional to global level. The remote sensing community has developed reliable and robust methods for wall-to-wall mapping of land cover changes; however, land cover changes often occur at rates below the mapping errors. In the current publication, we propose a cost-effective approach to complement wall-to-wall land cover change maps with a sampling approach, which is used for accuracy assessment and accurate estimation of areas undergoing land cover changes, including provision of confidence intervals. We propose a two-stage sampling approach in order to keep the accuracy, efficiency, and effort of the estimations in balance. Stratification is applied in both stages in order to gain control over the sample size allocated to rare land cover change classes on the one hand and the cost constraints for very high resolution reference imagery on the other. Bootstrapping is used to complement the accuracy measures and the area estimates with confidence intervals. The area estimates and accuracy assessments rely on a high-quality visual interpretation of the sampling units based on time series of satellite imagery. To demonstrate the cost-effective operational applicability of the approach, we applied it to the assessment of deforestation in an area characterized by frequent cloud cover and a very low change rate in the Republic of Congo, which makes accurate deforestation monitoring particularly challenging.
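
    The final estimation step described above, turning sampled reference interpretations into an area estimate with a bootstrap confidence interval, can be sketched in a few lines. This is a minimal illustration under stated assumptions: the per-block deforestation fractions are invented, and the simple percentile bootstrap over equally weighted blocks stands in for the authors' stratified estimator.

    ```python
    import random

    def bootstrap_ci(values, stat, n_boot=2000, alpha=0.05, seed=0):
        """Percentile bootstrap confidence interval for a statistic."""
        rng = random.Random(seed)
        reps = sorted(stat([rng.choice(values) for _ in values])
                      for _ in range(n_boot))
        return reps[int(n_boot * alpha / 2)], reps[int(n_boot * (1 - alpha / 2)) - 1]

    def mean(xs):
        return sum(xs) / len(xs)

    # Hypothetical reference data: for each sampled block, the fraction of its
    # visually interpreted second-stage units labelled as deforested.
    block_fractions = [0.0, 0.01, 0.0, 0.03, 0.02, 0.0, 0.05, 0.01, 0.0, 0.02]

    estimate = mean(block_fractions)
    low, high = bootstrap_ci(block_fractions, mean)
    ```

    Multiplying `estimate`, `low`, and `high` by the total mapped area would convert the change proportion and its interval into hectares.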

  19. Effect of food additives on hyperphosphatemia among patients with end-stage renal disease: a randomized controlled trial.

    Science.gov (United States)

    Sullivan, Catherine; Sayre, Srilekha S; Leon, Janeen B; Machekano, Rhoderick; Love, Thomas E; Porter, David; Marbury, Marquisha; Sehgal, Ashwini R

    2009-02-11

    High dietary phosphorus intake has deleterious consequences for renal patients and is possibly harmful for the general public as well. To prevent hyperphosphatemia, patients with end-stage renal disease limit their intake of foods that are naturally high in phosphorus. However, phosphorus-containing additives are increasingly being added to processed and fast foods. The effect of such additives on serum phosphorus levels is unclear. To determine the effect of limiting the intake of phosphorus-containing food additives on serum phosphorus levels among patients with end-stage renal disease. Cluster randomized controlled trial at 14 long-term hemodialysis facilities in northeast Ohio. Two hundred seventy-nine patients with elevated baseline serum phosphorus levels (>5.5 mg/dL) were recruited between May and October 2007. Two shifts at each of 12 large facilities and 1 shift at each of 2 small facilities were randomly assigned to an intervention or control group. Intervention participants (n=145) received education on avoiding foods with phosphorus additives when purchasing groceries or visiting fast food restaurants. Control participants (n=134) continued to receive usual care. Change in serum phosphorus level after 3 months. At baseline, there was no significant difference in serum phosphorus levels between the 2 groups. After 3 months, the decline in serum phosphorus levels was 0.6 mg/dL larger among intervention vs control participants (95% confidence interval, -1.0 to -0.1 mg/dL). Intervention participants also had statistically significant increases in reading ingredient lists, but not in food knowledge scores (P = .13). Educating end-stage renal disease patients to avoid phosphorus-containing food additives resulted in modest improvements in hyperphosphatemia. clinicaltrials.gov Identifier: NCT00583570.

  20. Sleep stages identification in patients with sleep disorder using k-means clustering

    Science.gov (United States)

    Fadhlullah, M. U.; Resahya, A.; Nugraha, D. F.; Yulita, I. N.

    2018-05-01

    Data mining is a computational intelligence discipline in which a large dataset is processed with a particular method to look for patterns within it. Those patterns can then be used in real-time applications or to develop new knowledge, making data mining a valuable tool for solving complex problems, discovering knowledge, analyzing data, and supporting decision making. Clustering is one way to find such patterns: it groups data that look similar, so that structure within the large dataset becomes visible, and several algorithms exist for assigning data to the appropriate cluster. This research used data from patients who suffer from sleep disorders and aims to help medical practitioners reduce the time required to classify the sleep stages of such patients. The study used the K-Means algorithm with silhouette evaluation and found that 3 clusters are optimal for this dataset, meaning the data can be divided into three sleep stages.
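
    The study's procedure, choosing the number of sleep-stage clusters by maximizing the mean silhouette coefficient over K-Means solutions, can be sketched in a self-contained way. The patients' recordings are not available here, so three well-separated Gaussian blobs stand in for the feature vectors, and plain Lloyd's algorithm with restarts stands in for whichever K-Means implementation the authors used.

    ```python
    import numpy as np

    def kmeans(X, k, seed, iters=100):
        """Plain Lloyd's algorithm; returns labels and centers."""
        rng = np.random.default_rng(seed)
        centers = X[rng.choice(len(X), size=k, replace=False)]
        for _ in range(iters):
            dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
            labels = dists.argmin(axis=1)
            new_centers = np.array(
                [X[labels == j].mean(axis=0) if (labels == j).any() else centers[j]
                 for j in range(k)])
            if np.allclose(new_centers, centers):
                break
            centers = new_centers
        return labels, centers

    def fit_with_restarts(X, k, restarts=10):
        """Keep the restart with the lowest within-cluster sum of squares."""
        best = min((kmeans(X, k, seed=s) for s in range(restarts)),
                   key=lambda lc: ((X - lc[1][lc[0]]) ** 2).sum())
        return best[0]

    def mean_silhouette(X, labels):
        """Average silhouette coefficient over all points."""
        D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
        scores = []
        for i in range(len(X)):
            same = labels == labels[i]
            a = D[i, same].sum() / max(same.sum() - 1, 1)   # mean intra-cluster distance
            b = min(D[i, labels == c].mean()                # nearest other cluster
                    for c in np.unique(labels) if c != labels[i])
            scores.append((b - a) / max(a, b))
        return float(np.mean(scores))

    # Synthetic stand-in for the sleep-recording feature vectors.
    rng = np.random.default_rng(7)
    X = np.vstack([rng.normal(c, 0.5, size=(40, 2))
                   for c in [(0, 0), (10, 0), (0, 10)]])

    sil = {k: mean_silhouette(X, fit_with_restarts(X, k)) for k in range(2, 6)}
    best_k = max(sil, key=sil.get)
    ```

    On data this cleanly separated, the silhouette curve peaks at `best_k = 3`, mirroring the three-cluster optimum the study reports for its sleep data.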

  1. Ferromagnetic clusters induced by a nonmagnetic random disorder in diluted magnetic semiconductors

    Energy Technology Data Exchange (ETDEWEB)

    Bui, Dinh-Hoi [Institute of Research and Development, Duy Tan University, K7/25 Quang Trung, Danang (Viet Nam); Physics Department, Hue University’s College of Education, 34 Le Loi, Hue (Viet Nam); Phan, Van-Nham, E-mail: phanvannham@dtu.edu.vn [Institute of Research and Development, Duy Tan University, K7/25 Quang Trung, Danang (Viet Nam)

    2016-12-15

    In this work, we analyze how nonmagnetic random disorder leads to the formation of ferromagnetic clusters in diluted magnetic semiconductors. The nonmagnetic random disorder arises from randomness in the host lattice. Including the disorder in the Kondo lattice model with a random distribution of magnetic dopants, the ferromagnetic–paramagnetic transition in the system is investigated in the framework of dynamical mean-field theory. At a certain low temperature one finds a fraction of ferromagnetic sites transitioning to the paramagnetic state. As the nonmagnetic random disorder strength increases, the paramagnetic regimes expand, resulting in the formation of ferromagnetic clusters.

  2. Accurate recapture identification for genetic mark–recapture studies with error-tolerant likelihood-based match calling and sample clustering

    Science.gov (United States)

    Sethi, Suresh; Linden, Daniel; Wenburg, John; Lewis, Cara; Lemons, Patrick R.; Fuller, Angela K.; Hare, Matthew P.

    2016-01-01

    Error-tolerant likelihood-based match calling presents a promising technique to accurately identify recapture events in genetic mark–recapture studies by combining probabilities of latent genotypes and probabilities of observed genotypes, which may contain genotyping errors. Combined with clustering algorithms to group samples into sets of recaptures based upon pairwise match calls, these tools can be used to reconstruct accurate capture histories for mark–recapture modelling. Here, we assess the performance of a recently introduced error-tolerant likelihood-based match-calling model and sample clustering algorithm for genetic mark–recapture studies. We assessed both biallelic (i.e. single nucleotide polymorphisms; SNP) and multiallelic (i.e. microsatellite; MSAT) markers using a combination of simulation analyses and case study data on Pacific walrus (Odobenus rosmarus divergens) and fishers (Pekania pennanti). A novel two-stage clustering approach is demonstrated for genetic mark–recapture applications. First, repeat captures within a sampling occasion are identified. Subsequently, recaptures across sampling occasions are identified. The likelihood-based matching protocol performed well in simulation trials, demonstrating utility for use in a wide range of genetic mark–recapture studies. Moderately sized SNP (64+) and MSAT (10–15) panels produced accurate match calls for recaptures and accurate non-match calls for samples from closely related individuals in the face of low to moderate genotyping error. Furthermore, matching performance remained stable or increased as the number of genetic markers increased, genotyping error notwithstanding.

  3. Expanding Comparative Literature into Comparative Sciences Clusters with Neutrosophy and Quad-stage Method

    Directory of Open Access Journals (Sweden)

    Fu Yuhua

    2016-08-01

    Full Text Available By using Neutrosophy and Quad-stage Method, the expansions of comparative literature include: comparative social sciences clusters, comparative natural sciences clusters, comparative interdisciplinary sciences clusters, and so on. Among them, comparative social sciences clusters include: comparative literature, comparative history, comparative philosophy, and so on; comparative natural sciences clusters include: comparative mathematics, comparative physics, comparative chemistry, comparative medicine, comparative biology, and so on.

  4. Clustering, randomness, and regularity in cloud fields. 4: Stratocumulus cloud fields

    Science.gov (United States)

    Lee, J.; Chou, J.; Weger, R. C.; Welch, R. M.

    1994-01-01

    To complete the analysis of the spatial distribution of boundary layer cloudiness, the present study focuses on nine stratocumulus Landsat scenes. The results indicate many similarities between stratocumulus and cumulus spatial distributions. Most notably, at full spatial resolution all scenes exhibit a decidedly clustered distribution. The strength of the clustering signal decreases with increasing cloud size; the clusters themselves consist of a few clouds (less than 10), occupy a small percentage of the cloud field area (less than 5%), contain between 20% and 60% of the cloud field population, and are randomly located within the scene. In contrast, stratocumulus in almost every respect are more strongly clustered than are cumulus cloud fields. For instance, stratocumulus clusters contain more clouds per cluster, occupy a larger percentage of the total area, and have a larger percentage of clouds participating in clusters than the corresponding cumulus examples. To investigate clustering at intermediate spatial scales, the local dimensionality statistic is introduced. Results obtained from this statistic provide the first direct evidence for regularity among large (more than 900 m in diameter) clouds in stratocumulus and cumulus cloud fields, in support of the inhibition hypothesis of Ramirez and Bras (1990). Also, the size compensated point-to-cloud cumulative distribution function statistic is found to be necessary to obtain a consistent description of stratocumulus cloud distributions. A hypothesis regarding the underlying physical mechanisms responsible for cloud clustering is presented. It is suggested that cloud clusters often arise from 4 to 10 triggering events localized within regions less than 2 km in diameter and randomly distributed within the cloud field. As the size of the cloud surpasses the scale of the triggering region, the clustering signal weakens and the larger cloud locations become more random.

  6. Projection correlation between two random vectors.

    Science.gov (United States)

    Zhu, Liping; Xu, Kai; Li, Runze; Zhong, Wei

    2017-12-01

    We propose the use of projection correlation to characterize dependence between two random vectors. Projection correlation has several appealing properties. It equals zero if and only if the two random vectors are independent, it is not sensitive to the dimensions of the two random vectors, it is invariant with respect to the group of orthogonal transformations, and its estimation is free of tuning parameters and does not require moment conditions on the random vectors. We show that the sample estimate of the projection correlation is n-consistent if the two random vectors are independent and root-n-consistent otherwise. Monte Carlo simulation studies indicate that the projection correlation has higher power than the distance correlation and the ranks of distances in tests of independence, especially when the dimensions are relatively large or the moment conditions required by the distance correlation are violated.

  7. Two-stage revision surgery with preformed spacers and cementless implants for septic hip arthritis: a prospective, non-randomized cohort study

    Directory of Open Access Journals (Sweden)

    Logoluso Nicola

    2011-05-01

    Full Text Available Abstract Background Outcome data on two-stage revision surgery for deep infection after septic hip arthritis are limited and inconsistent. This study presents the medium-term results of a new, standardized two-stage arthroplasty with preformed hip spacers and cementless implants in a consecutive series of adult patients with septic arthritis of the hip treated according to the same protocol. Methods Nineteen patients (20 hips) were enrolled in this prospective, non-randomized cohort study between 2000 and 2008. The first stage comprised femoral head resection, debridement, and insertion of a preformed, commercially available, antibiotic-loaded cement hip spacer. After eradication of infection, a cementless total hip arthroplasty was implanted in the second stage. Patients were assessed for infection recurrence, pain (visual analog scale [VAS]), and hip joint function (Harris Hip score). Results The mean time between first diagnosis of infection and revision surgery was 5.8 ± 9.0 months; the average duration of follow-up was 56.6 months (range, 24-104); all 20 hips were successfully converted to prosthesis an average of 22 ± 5.1 weeks after spacer implantation. Reinfection after total hip joint replacement occurred in 1 patient. The mean VAS pain score improved from 48 (range, 35-84) pre-operatively to 18 (range, 0-38) prior to spacer removal and to 8 (range, 0-15) at the last follow-up assessment after prosthesis implantation. The average Harris Hip score improved from 27.5 before surgery to 61.8 between the two stages and to 92.3 at the final follow-up assessment. Conclusions Satisfactory outcomes can be obtained with two-stage revision hip arthroplasty using preformed spacers and cementless implants for prosthetic hip joint infections of various etiologies.

  8. Employing post-DEA cross-evaluation and cluster analysis in a sample of Greek NHS hospitals.

    Science.gov (United States)

    Flokou, Angeliki; Kontodimopoulos, Nick; Niakas, Dimitris

    2011-10-01

    To increase Data Envelopment Analysis (DEA) discrimination of efficient Decision Making Units (DMUs) by complementing "self-evaluated" efficiencies with "peer-evaluated" cross-efficiencies and, based on these results, to classify the DMUs using cluster analysis. Healthcare, which lacks such studies, was chosen as the study area. The sample consisted of 27 small- to medium-sized (70-500 beds) NHS general hospitals distributed throughout Greece, in areas where they are the sole NHS representatives. DEA was performed on 2005 data collected from the Ministry of Health and the General Secretariat of the National Statistical Service. Three inputs (hospital beds, physicians, and other health professionals) and three outputs (case-mix adjusted hospitalized cases, surgeries, and outpatient visits) were included in input-oriented, constant-returns-to-scale (CRS) and variable-returns-to-scale (VRS) models. In a second stage (post-DEA), aggressive and benevolent cross-efficiency formulations and clustering were employed to validate (or not) the initial DEA scores. The "maverick index" was used to sort the peer-appraised hospitals. All analyses were performed using custom-made software. Ten benchmark hospitals were identified by DEA, but using the aggressive and benevolent formulations showed that two and four of them, respectively, were at the lower end of the maverick index list. On the other hand, only one 100% efficient (self-appraised) hospital was at the higher end of the list using either formulation. Cluster analysis produced a hierarchical "tree" structure which dichotomized the hospitals in accordance with the cross-evaluation results and provided insight into the two-dimensional path to improving efficiency. This is, to our awareness, the first study in the healthcare domain to employ both of these post-DEA techniques (cross-efficiency and clustering) at the hospital (i.e. micro) level.
The potential benefit for decision-makers is the capability to examine high

  9. An unbiased estimator of the variance of simple random sampling using mixed random-systematic sampling

    OpenAIRE

    Padilla, Alberto

    2009-01-01

    Systematic sampling is a commonly used technique due to its simplicity and ease of implementation. The drawback of this simplicity is that it is not possible to estimate the design variance without bias. There are several ways to circumvent this problem. One method is to suppose that the variable of interest has a random order in the population, so that the sample variance of simple random sampling without replacement can be used. By means of a mixed random-systematic sample, an unbiased estimator of the design variance is obtained.
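
    The common workaround the abstract refers to, treating a systematic sample as if it were a simple random sample when estimating the variance of the mean, can be sketched as follows. The population size, sample size, and y-values below are hypothetical, and this shows only the naive SRS-based approximation, not the paper's mixed random-systematic estimator.

    ```python
    import random

    def systematic_sample(N, n, seed=None):
        """1-in-k systematic sample: random start, then every k-th unit."""
        k = N // n
        start = random.Random(seed).randrange(k)
        return list(range(start, start + n * k, k))

    def srs_var_of_mean(sample, N):
        """Variance estimator of the sample mean under the simple-random-
        sampling-without-replacement assumption: (1 - n/N) * s^2 / n."""
        n = len(sample)
        m = sum(sample) / n
        s2 = sum((x - m) ** 2 for x in sample) / (n - 1)
        return (1 - n / N) * s2 / n

    units = systematic_sample(N=100, n=10, seed=3)
    y = [float(u) for u in units]   # pretend the unit index is the y-value
    v = srs_var_of_mean(y, N=100)
    ```

    The estimator is unbiased only if the population order really is random with respect to y; with a trending population, as in this toy example, it can be badly biased, which is the problem the mixed design addresses.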

  10. The Transeurope Footrace Project: longitudinal data acquisition in a cluster randomized mobile MRI observational cohort study on 44 endurance runners at a 64-stage 4,486km transcontinental ultramarathon

    Directory of Open Access Journals (Sweden)

    Schütz Uwe HW

    2012-07-01

    Full Text Available Abstract Background The TransEurope FootRace 2009 (TEFR09) was one of the longest transcontinental ultramarathons, with an extreme endurance physical load of running nearly 4,500 km in 64 days. The aim of this study was to assess the wide spectrum of adaptive responses in humans regarding the different tissues, organs and functional systems exposed to such a chronic physical endurance load with limited time for regeneration and a resulting negative energy balance. A detailed description of the TEFR project and its implemented measuring methods in relation to the hypotheses is presented. Methods The most important research tool was a 1.5 Tesla magnetic resonance imaging (MRI) scanner mounted on a mobile unit following the ultra runners from stage to stage each day. Forty-four study volunteers (67% of the participants) were cluster randomized into two groups for MRI measurements (22 subjects each) according to the project protocol with its different research modules: musculoskeletal system, brain and pain perception, cardiovascular system, body composition, and oxidative stress and inflammation. Complementary to the diverse daily mobile MR measurements on different topics (muscle and joint MRI, T2*-mapping of cartilage, MR spectroscopy of muscles, functional MRI of the brain, cardiac and vascular cine MRI, whole-body MRI), other methods were also used: ice-water pain test, psychometric questionnaires, bioelectrical impedance analysis (BIA), skinfold thickness and limb circumference measurements, daily urine samples, periodic blood samples and electrocardiograms (ECG). Results Thirty volunteers (68%) reached the finish line at North Cape. The mean total race speed was 8.35 km/hour. Finishers invested 552 hours in total. The completion rate for planned MRI investigations was more than 95%: 741 MR examinations with 2,637 MRI sequences (more than 200,000 picture data), 5,720 urine samples, 244 blood samples, 205 ECG, 1,018 BIA, 539 anthropological

  11. Cluster-randomized xylitol toothpaste trial for early childhood caries prevention

    Science.gov (United States)

    Chi, Donald L.; Tut, Ohnmar K.; Milgrom, Peter

    2013-01-01

    Purpose We assessed the efficacy of supervised toothbrushing with xylitol toothpaste to prevent early childhood caries (ECC) and to reduce mutans streptococci (MS). Methods In this cluster-randomized efficacy trial, 4 Head Start classrooms in the Marshall Islands were randomly assigned to supervised toothbrushing with 1,400ppm/31% fluoride-xylitol (Epic Dental, Provo, UT) or 1,450ppm fluoride-sorbitol toothpaste (Colgate-Palmolive, New York, NY) (N=196 children, ages 4–5 yrs). We hypothesized no difference in efficacy between the two types of toothpaste. The primary outcome was primary molar d2-3mfs increment after 6 mos. A single examiner was blinded to classroom assignments. Two classrooms were assigned to the fluoride-xylitol group (85 children) and 2 classrooms to the fluoride-sorbitol group (83 children). The child-level analyses accounted for clustering. Results There was no difference between the two groups in baseline or end-of-trial mean d2-3mfs. The mean d2-3mfs increment was greater in the fluoride-xylitol group compared to the fluoride-sorbitol group (2.5 and 1.4 d2-3mfs, respectively), but the difference was not significant (95% CI:−0.17, 2.37;P=0.07). No adverse effects were reported. Conclusion After 6 mos, brushing with a low strength xylitol/fluoride toothpaste is no more efficacious in reducing ECC than a fluoride only toothpaste in a high caries risk child population. PMID:24709430

  12. Production of complex particles in low energy spallation and in fragmentation reactions by in-medium random clusterization

    International Nuclear Information System (INIS)

    Lacroix, D.; Durand, D.

    2005-09-01

    Rules for in-medium complex particle production in nuclear reactions are proposed. These rules have been implemented in two models to simulate nucleon-nucleus and nucleus-nucleus reactions around the Fermi energy. Our work emphasizes the effect of randomness in cluster formation, the importance of the nucleonic Fermi motion as well as the role of conservation laws. The concepts of total available phase-space and explored phase-space under constraint imposed by the reaction are clarified. The compatibility of experimental observations with a random clusterization is illustrated in a schematic scenario of a proton-nucleus collision. The role of randomness under constraint is also illustrated in the nucleus-nucleus case. (authors)

  13. Person mobility in the design and analysis of cluster-randomized cohort prevention trials.

    Science.gov (United States)

    Vuchinich, Sam; Flay, Brian R; Aber, Lawrence; Bickman, Leonard

    2012-06-01

    Person mobility is an inescapable fact of life for most cluster-randomized (e.g., schools, hospitals, clinic, cities, state) cohort prevention trials. Mobility rates are an important substantive consideration in estimating the effects of an intervention. In cluster-randomized trials, mobility rates are often correlated with ethnicity, poverty and other variables associated with disparity. This raises the possibility that estimated intervention effects may generalize to only the least mobile segments of a population and, thus, create a threat to external validity. Such mobility can also create threats to the internal validity of conclusions from randomized trials. Researchers must decide how to deal with persons who leave study clusters during a trial (dropouts), persons and clusters that do not comply with an assigned intervention, and persons who enter clusters during a trial (late entrants), in addition to the persons who remain for the duration of a trial (stayers). Statistical techniques alone cannot solve the key issues of internal and external validity raised by the phenomenon of person mobility. This commentary presents a systematic, Campbellian-type analysis of person mobility in cluster-randomized cohort prevention trials. It describes four approaches for dealing with dropouts, late entrants and stayers with respect to data collection, analysis and generalizability. The questions at issue are: 1) From whom should data be collected at each wave of data collection? 2) Which cases should be included in the analyses of an intervention effect? and 3) To what populations can trial results be generalized? The conclusions lead to recommendations for the design and analysis of future cluster-randomized cohort prevention trials.

  14. Bond percolation on a class of correlated and clustered random graphs

    International Nuclear Information System (INIS)

    Allard, A; Hébert-Dufresne, L; Noël, P-A; Marceau, V; Dubé, L J

    2012-01-01

    We introduce a formalism for computing bond percolation properties of a class of correlated and clustered random graphs. This class of graphs is a generalization of the configuration model where nodes of different types are connected via different types of hyperedges, edges that can link more than two nodes. We argue that the multitype approach coupled with the use of clustered hyperedges can reproduce a wide spectrum of complex patterns, and thus enhances our capability to model real complex networks. As an illustration of this claim, we use our formalism to highlight unusual behaviours of the size and composition of the components (small and giant) in a synthetic, albeit realistic, social network. (paper)
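The formalism in this record is analytic; as a point of comparison, plain bond percolation on an ordinary (uncorrelated, unclustered) configuration-model graph can be simulated directly. A minimal stand-alone sketch in Python (all names here are illustrative, not from the paper):

```python
import random
from collections import defaultdict, deque

def configuration_model(degrees, rng):
    """Pair edge stubs uniformly at random (self-loops/multi-edges allowed)."""
    stubs = [v for v, d in enumerate(degrees) for _ in range(d)]
    rng.shuffle(stubs)
    return list(zip(stubs[::2], stubs[1::2]))

def largest_component_size(n, edges):
    """Size of the largest connected component, by breadth-first search."""
    adj = defaultdict(list)
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    seen, best = set(), 0
    for start in range(n):
        if start in seen:
            continue
        seen.add(start)
        queue, size = deque([start]), 0
        while queue:
            u = queue.popleft()
            size += 1
            for w in adj[u]:
                if w not in seen:
                    seen.add(w)
                    queue.append(w)
        best = max(best, size)
    return best

def bond_percolation(n, edges, p, rng):
    """Keep each edge independently with probability p; return the fraction
    of nodes in the largest remaining component."""
    kept = [e for e in edges if rng.random() < p]
    return largest_component_size(n, kept) / n

rng = random.Random(42)
n = 2000
edges = configuration_model([3] * n, rng)    # random 3-regular graph
low = bond_percolation(n, edges, 0.3, rng)   # below the 1/2 threshold: no giant component
high = bond_percolation(n, edges, 0.9, rng)  # above it: a giant component emerges
```

For a 3-regular configuration-model graph the bond percolation threshold is 1/2, so `low` stays near zero while `high` approaches the theoretical giant-component fraction; the paper's hyperedge formalism generalizes exactly this kind of calculation to correlated, clustered graphs.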

  15. Sampling large random knots in a confined space

    International Nuclear Information System (INIS)

    Arsuaga, J; Blackstone, T; Diao, Y; Hinson, K; Karadayi, E; Saito, M

    2007-01-01

DNA knots formed under extreme conditions of condensation, as in bacteriophage P4, are difficult to analyze experimentally and theoretically. In this paper, we propose to use the uniform random polygon model as a supplementary method to the existing methods for generating random knots in confinement. The uniform random polygon model allows us to sample knots with large crossing numbers and also to generate large diagrammatically prime knot diagrams. We show numerically that uniform random polygons sample knots with large minimum crossing numbers and certain complicated knot invariants (as those observed experimentally). We do this in terms of the knot determinants or colorings. Our numerical results suggest that the average determinant of a uniform random polygon of n vertices grows faster than O(e^{n^2}). We also investigate the complexity of prime knot diagrams. We show rigorously that the probability that a randomly selected 2D uniform random polygon of n vertices is almost diagrammatically prime goes to 1 as n goes to infinity. Furthermore, the average number of crossings in such a diagram is at the order of O(n^2). Therefore, the two-dimensional uniform random polygons offer an effective way of sampling large (prime) knots, which can be useful in various applications.
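A 2D uniform random polygon is simple to generate, and the O(n^2) growth of the average crossing number can be checked empirically. A stand-alone sketch (not the authors' code; knot invariants such as determinants are beyond this snippet):

```python
import random

def uniform_random_polygon(n, rng):
    """n vertices drawn i.i.d. uniformly in the unit square, joined in order."""
    return [(rng.random(), rng.random()) for _ in range(n)]

def segments_cross(p, q, r, s):
    """True if segments pq and rs cross (general position assumed)."""
    def orient(a, b, c):
        v = (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])
        return (v > 0) - (v < 0)
    return orient(p, q, r) != orient(p, q, s) and orient(r, s, p) != orient(r, s, q)

def crossing_number(poly):
    """Number of crossings between non-adjacent edges of the closed polygon."""
    n = len(poly)
    edges = [(poly[i], poly[(i + 1) % n]) for i in range(n)]
    count = 0
    for i in range(n):
        for j in range(i + 2, n):
            if i == 0 and j == n - 1:      # these two edges share a vertex
                continue
            if segments_cross(*edges[i], *edges[j]):
                count += 1
    return count

rng = random.Random(1)
# Average crossing number over 20 polygons of 40 vertices; since any two
# non-adjacent edges cross with a fixed probability, this grows like O(n^2).
avg = sum(crossing_number(uniform_random_polygon(40, rng)) for _ in range(20)) / 20
```

Each non-adjacent edge pair has four i.i.d. uniform endpoints, so the expected crossing count is a constant times the number of such pairs, which is quadratic in n, consistent with the record's O(n^2) statement.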

  16. Sampling large random knots in a confined space

    Science.gov (United States)

    Arsuaga, J.; Blackstone, T.; Diao, Y.; Hinson, K.; Karadayi, E.; Saito, M.

    2007-09-01

DNA knots formed under extreme conditions of condensation, as in bacteriophage P4, are difficult to analyze experimentally and theoretically. In this paper, we propose to use the uniform random polygon model as a supplementary method to the existing methods for generating random knots in confinement. The uniform random polygon model allows us to sample knots with large crossing numbers and also to generate large diagrammatically prime knot diagrams. We show numerically that uniform random polygons sample knots with large minimum crossing numbers and certain complicated knot invariants (as those observed experimentally). We do this in terms of the knot determinants or colorings. Our numerical results suggest that the average determinant of a uniform random polygon of n vertices grows faster than O(e^{n^2}). We also investigate the complexity of prime knot diagrams. We show rigorously that the probability that a randomly selected 2D uniform random polygon of n vertices is almost diagrammatically prime goes to 1 as n goes to infinity. Furthermore, the average number of crossings in such a diagram is at the order of O(n^2). Therefore, the two-dimensional uniform random polygons offer an effective way of sampling large (prime) knots, which can be useful in various applications.

  17. Sampling large random knots in a confined space

    Energy Technology Data Exchange (ETDEWEB)

    Arsuaga, J [Department of Mathematics, San Francisco State University, 1600 Holloway Ave, San Francisco, CA 94132 (United States); Blackstone, T [Department of Computer Science, San Francisco State University, 1600 Holloway Ave., San Francisco, CA 94132 (United States); Diao, Y [Department of Mathematics and Statistics, University of North Carolina at Charlotte, Charlotte, NC 28223 (United States); Hinson, K [Department of Mathematics and Statistics, University of North Carolina at Charlotte, Charlotte, NC 28223 (United States); Karadayi, E [Department of Mathematics, University of South Florida, 4202 E Fowler Avenue, Tampa, FL 33620 (United States); Saito, M [Department of Mathematics, University of South Florida, 4202 E Fowler Avenue, Tampa, FL 33620 (United States)

    2007-09-28

DNA knots formed under extreme conditions of condensation, as in bacteriophage P4, are difficult to analyze experimentally and theoretically. In this paper, we propose to use the uniform random polygon model as a supplementary method to the existing methods for generating random knots in confinement. The uniform random polygon model allows us to sample knots with large crossing numbers and also to generate large diagrammatically prime knot diagrams. We show numerically that uniform random polygons sample knots with large minimum crossing numbers and certain complicated knot invariants (as those observed experimentally). We do this in terms of the knot determinants or colorings. Our numerical results suggest that the average determinant of a uniform random polygon of n vertices grows faster than O(e^{n^2}). We also investigate the complexity of prime knot diagrams. We show rigorously that the probability that a randomly selected 2D uniform random polygon of n vertices is almost diagrammatically prime goes to 1 as n goes to infinity. Furthermore, the average number of crossings in such a diagram is at the order of O(n^2). Therefore, the two-dimensional uniform random polygons offer an effective way of sampling large (prime) knots, which can be useful in various applications.

  18. A Coupled Hidden Markov Random Field Model for Simultaneous Face Clustering and Tracking in Videos

    KAUST Repository

    Wu, Baoyuan

    2016-10-25

Face clustering and face tracking are two areas of active research in automatic facial video processing. They have, however, long been studied separately, despite the inherent link between them. In this paper, we propose to perform simultaneous face clustering and face tracking from real-world videos. The motivation for the proposed research is that face clustering and face tracking can provide useful information and constraints to each other, and thus can bootstrap and improve each other's performance. To this end, we introduce a Coupled Hidden Markov Random Field (CHMRF) to simultaneously model face clustering, face tracking, and their interactions. We provide an effective algorithm based on constrained clustering and optimal tracking for the joint optimization of cluster labels and face tracking. We demonstrate significant improvements over state-of-the-art results in face clustering and tracking on several videos.

  19. Air quality comparison between two European ceramic tile clusters

    Science.gov (United States)

    Minguillón, M. C.; Monfort, E.; Escrig, A.; Celades, I.; Guerra, L.; Busani, G.; Sterni, A.; Querol, X.

    2013-08-01

The European ceramic tile industry is mostly concentrated in two clusters, one in Castelló (Spain) and another one in Modena (Italy). Industrial clusters may have problems complying with the EU air quality regulations because of the concentration of some specific pollutants and, hence, the feasibility of the industrial clusters can be jeopardised. The present work assesses the air quality in these ceramic clusters in 2008, when the new EU emission regulations were put into force. PM10 samples were collected at two sampling sites in the Modena ceramic cluster and one sampling site in the Castelló ceramic cluster. PM10 annual average concentrations were 12-14 μg m-3 higher in Modena than in Castelló, and were close to or exceeded the European limit. Air quality in Modena was mainly influenced by road traffic and, to a lesser degree, by the metal-mechanical industry, as evidenced by the high concentrations of Mn, Cu, Zn, Sn and Sb registered. The stagnant weather conditions in Modena, which hinder the dispersion of pollutants, also contributed to the relatively high pollution levels. In Castelló, the influence of the ceramic industry is evidenced by the high concentrations of Ti, Se, Tl and Pb, whereas this influence is not seen in Modena. The difference in the impact of the ceramic industry on the air quality in the two areas was attributed to: better abatement systems in the spray-drier facilities in Modena, higher coverage of the areas for storage and handling of dusty raw materials in Modena, presence of two open air quarries in the Castelló region, low degree of abatement systems in the ceramic tile kilns in Castelló, and abundance of ceramic frit, glaze and pigment manufacture in Castelló as opposed to scarce manufacture of these products in Modena. The necessity of additional measures to fulfil the EU air quality requirements in the Modena region is evidenced, despite the high degree of environmental measures implemented in the ceramic industry. The Principal

  20. Meaningful Effect Sizes, Intraclass Correlations, and Proportions of Variance Explained by Covariates for Planning Two- and Three-Level Cluster Randomized Trials of Social and Behavioral Outcomes.

    Science.gov (United States)

    Dong, Nianbo; Reinke, Wendy M; Herman, Keith C; Bradshaw, Catherine P; Murray, Desiree W

    2016-09-30

There is a need for greater guidance regarding design parameters and empirical benchmarks for social and behavioral outcomes to inform assumptions in the design and interpretation of cluster randomized trials (CRTs). We calculated empirical reference values for critical research design parameters associated with statistical power for children's social and behavioral outcomes, including effect sizes, intraclass correlations (ICCs), and proportions of variance explained by a covariate at different levels (R²). Children from kindergarten to Grade 5 in the samples from four large CRTs evaluating the effectiveness of two classroom- and two school-level preventive interventions. Teacher ratings of students' social and behavioral outcomes using the Teacher Observation of Classroom Adaptation-Checklist and the Social Competence Scale-Teacher. Two types of effect size benchmarks were calculated: (1) normative expectations for change and (2) policy-relevant demographic performance gaps. The ICCs and R² were calculated using two-level hierarchical linear modeling (HLM), where students are nested within schools, and three-level HLM, where students are nested within classrooms and classrooms are nested within schools. Comprehensive tables of benchmarks and ICC values are provided to inform prevention researchers in interpreting the effect sizes of interventions and in conducting power analyses for designing CRTs of children's social and behavioral outcomes. The discussion also provides a demonstration of how to use the parameter reference values provided in this article to calculate the sample size for two- and three-level CRT designs. © The Author(s) 2016.
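To illustrate how an ICC from tables like these enters a power calculation, here is the textbook design-effect formula for a two-arm, two-level CRT (a simplified normal-approximation sketch, not the article's own procedure, which also incorporates covariate R² values; the parameter values below are hypothetical):

```python
from math import ceil
from statistics import NormalDist

def clusters_per_arm(delta, n, icc, alpha=0.05, power=0.8):
    """Clusters needed per arm of a two-level CRT, via the normal approximation:
    the usual two-sample formula inflated by the design effect 1 + (n-1)*ICC.
    delta: standardized effect size; n: students per cluster; icc: intraclass corr."""
    z = NormalDist().inv_cdf
    deff = 1 + (n - 1) * icc
    return ceil(2 * (z(1 - alpha / 2) + z(power)) ** 2 * deff / (n * delta ** 2))

# e.g. a modest behavioural effect with 20 students per school and ICC = 0.05
J = clusters_per_arm(delta=0.25, n=20, icc=0.05)
```

Even a small ICC inflates the required number of clusters substantially relative to the ICC = 0 case, which is why empirical ICC benchmarks of the kind tabulated in this article matter for planning.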

  1. MAPS OF MASSIVE CLUMPS IN THE EARLY STAGE OF CLUSTER FORMATION: TWO MODES OF CLUSTER FORMATION, COEVAL OR NON-COEVAL?

    Energy Technology Data Exchange (ETDEWEB)

    Higuchi, Aya E.; Saito, Masao; Mauersberger, Rainer; Kawabe, Ryohei [Joint ALMA Observatory, Alonso de Cordova 3107, Vitacura, Santiago (Chile); Kurono, Yasutaka; Naoi, Takahiro, E-mail: ahiguchi@alma.cl [National Astronomical Observatory of Japan, 2-21-1 Osawa, Mitaka, Tokyo 181-8588 (Japan)

    2013-03-10

We present maps of seven young massive molecular clumps within five target regions in C¹⁸O (J = 1-0) line emission, using the Nobeyama 45 m telescope. These clumps, which are not associated with clusters, lie at distances between 0.7 and 2.1 kpc. We find C¹⁸O clumps with radii of 0.5-1.7 pc, masses of 470-4200 M☉, and velocity widths of 1.4-3.3 km s⁻¹. All of the clumps are massive and approximately in virial equilibrium, suggesting they will potentially form clusters. Three of our target regions are associated with H II regions (CWHRs), while the other two are unassociated with H II regions (CWOHRs). The C¹⁸O clumps can be classified into two morphological types: CWHRs with a filamentary or shell-like structure and spherical CWOHRs. The two CWOHRs have systematic velocity gradients. Using the publicly released WISE database, Class I and Class II protostellar candidates are identified within the C¹⁸O clumps. The fraction of Class I candidates among all YSO candidates (Class I + Class II) is ≥50% in CWHRs and ≤50% in CWOHRs. We conclude that effects from the H II regions can be seen in (1) the spatial distributions of the clumps: filamentary or shell-like structure running along the H II regions; (2) the velocity structures of the clumps: large velocity dispersion along shells; and (3) the small age spreads of YSOs. The small spreads in age of the YSOs show that the presence of H II regions tends to trigger coeval cluster formation.

  2. k-Means: Random Sampling Procedure

    Indian Academy of Sciences (India)

k-Means: Random Sampling Procedure. The optimal 1-mean is approximated by the centroid of a random sample (Inaba et al.): let S be a random sample of size O(1/ε); the centroid of S is then a (1+ε)-approximate centroid of P with constant probability.
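The claim above (due to Inaba et al.) is easy to check numerically: the centroid of a small random sample is a near-optimal 1-mean center for the whole point set. A minimal sketch with synthetic data:

```python
import random

def centroid(points):
    """Arithmetic mean of a set of 2D points."""
    n = len(points)
    return (sum(x for x, _ in points) / n, sum(y for _, y in points) / n)

def cost(points, c):
    """1-means cost: total squared distance from the points to center c."""
    return sum((x - c[0]) ** 2 + (y - c[1]) ** 2 for x, y in points)

rng = random.Random(0)
P = [(rng.gauss(0, 1), rng.gauss(0, 1)) for _ in range(10000)]

opt = cost(P, centroid(P))           # the centroid of P is the exact optimal 1-mean
S = rng.sample(P, 25)                # a small sample, size O(1/eps)
ratio = cost(P, centroid(S)) / opt   # close to 1 with constant probability
```

In expectation the sample centroid's cost is (1 + 1/|S|) times optimal, so with |S| = 25 the ratio is typically within a few percent of 1.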

  3. HICOSMO - X-ray analysis of a complete sample of galaxy clusters

    Science.gov (United States)

    Schellenberger, G.; Reiprich, T.

    2017-10-01

Galaxy clusters are known to be the largest virialized objects in the Universe. Based on the theory of structure formation one can use them as cosmological probes, since they originate from collapsed overdensities in the early Universe and witness its history. The X-ray regime provides the unique possibility to measure in detail the most massive visible component, the intracluster medium. Using Chandra observations of a local sample of 64 bright clusters (HIFLUGCS) we provide total (hydrostatic) and gas mass estimates of each cluster individually. Making use of the completeness of the sample we quantify two interesting cosmological parameters by a Bayesian cosmological likelihood analysis. We find Ω_{M}=0.3±0.01 and σ_{8}=0.79±0.03 (statistical uncertainties) using our default analysis strategy, which combines a mass function analysis with the gas mass fraction results. The main sources of bias that we discuss and correct here are (1) the influence of galaxy groups (higher incompleteness in parent samples and a differing behavior of the L_{x} - M relation), (2) the hydrostatic mass bias (as determined by recent hydrodynamical simulations), (3) the extrapolation of the total mass (comparing various methods), (4) the theoretical halo mass function and (5) other cosmological (non-negligible neutrino mass) and instrumental (calibration) effects.

  4. Randomized Comparison of Two Vaginal Self-Sampling Methods for Human Papillomavirus Detection: Dry Swab versus FTA Cartridge.

    Science.gov (United States)

    Catarino, Rosa; Vassilakos, Pierre; Bilancioni, Aline; Vanden Eynde, Mathieu; Meyer-Hamme, Ulrike; Menoud, Pierre-Alain; Guerry, Frédéric; Petignat, Patrick

    2015-01-01

Human papillomavirus (HPV) self-sampling (self-HPV) is valuable in cervical cancer screening. HPV testing is usually performed on physician-collected cervical smears stored in liquid-based medium. Dry filters and swabs are an alternative. We evaluated the adequacy of self-HPV using two dry storage and transport devices, the FTA cartridge and swab. A total of 130 women performed two consecutive self-HPV samples. Randomization determined which of the two tests was performed first: self-HPV using dry swabs (s-DRY) or vaginal specimen collection using a cytobrush applied to an FTA cartridge (s-FTA). After self-HPV, a physician collected a cervical sample using liquid-based medium (Dr-WET). HPV types were identified by real-time PCR. Agreement between collection methods was measured using the kappa statistic. HPV prevalence for high-risk types was 62.3% (95%CI: 53.7-70.2) detected by s-DRY, 56.2% (95%CI: 47.6-64.4) by Dr-WET, and 54.6% (95%CI: 46.1-62.9) by s-FTA. There was overall agreement of 70.8% between s-FTA and s-DRY samples (kappa = 0.34), and of 82.3% between self-HPV and Dr-WET samples (kappa = 0.56). Detection sensitivities for low-grade squamous intraepithelial lesion or worse (LSIL+) were: 64.0% (95%CI: 44.5-79.8) for s-FTA, 84.6% (95%CI: 66.5-93.9) for s-DRY, and 76.9% (95%CI: 58.0-89.0) for Dr-WET. The preferred self-collection method among patients was s-DRY (40.8% vs. 15.4%). Regarding costs, the FTA card was five times more expensive than the swab (~5 US dollars (USD) per card vs. ~1 USD per swab). Self-HPV using dry swabs is sensitive for detecting LSIL+ and less expensive than s-FTA. International Standard Randomized Controlled Trial Number (ISRCTN): 43310942.

  5. Analyzing indirect effects in cluster randomized trials. The effect of estimation method, number of groups and group sizes on accuracy and power.

    Directory of Open Access Journals (Sweden)

    Joop eHox

    2014-02-01

Cluster randomized trials assess the effect of an intervention that is carried out at the group or cluster level. Ajzen's theory of planned behaviour is often used to model the effect of the intervention as an indirect effect mediated in turn by attitude, norms and behavioural intention. Structural equation modelling (SEM) is the technique of choice to estimate indirect effects and their significance. However, this is a large-sample technique, and its application in a cluster randomized trial assumes a relatively large number of clusters. In practice, the number of clusters in these studies tends to be relatively small, e.g. much less than fifty. This study uses simulation methods to find the lowest number of clusters needed when multilevel SEM is used to estimate the indirect effect. Maximum likelihood estimation is compared to Bayesian analysis, with the central quality criteria being accuracy of the point estimate and the confidence interval. We also investigate the power of the test for the indirect effect. We conclude that Bayesian estimation works well with much smaller cluster-level sample sizes, such as 20 clusters, than maximum likelihood estimation: although the bias is larger, the coverage is much better. When only 5 to 10 clusters are available per treatment condition, problems occur even with Bayesian estimation.

  6. Hierarchical Bayesian modelling of gene expression time series across irregularly sampled replicates and clusters.

    Science.gov (United States)

    Hensman, James; Lawrence, Neil D; Rattray, Magnus

    2013-08-20

    Time course data from microarrays and high-throughput sequencing experiments require simple, computationally efficient and powerful statistical models to extract meaningful biological signal, and for tasks such as data fusion and clustering. Existing methodologies fail to capture either the temporal or replicated nature of the experiments, and often impose constraints on the data collection process, such as regularly spaced samples, or similar sampling schema across replications. We propose hierarchical Gaussian processes as a general model of gene expression time-series, with application to a variety of problems. In particular, we illustrate the method's capacity for missing data imputation, data fusion and clustering.The method can impute data which is missing both systematically and at random: in a hold-out test on real data, performance is significantly better than commonly used imputation methods. The method's ability to model inter- and intra-cluster variance leads to more biologically meaningful clusters. The approach removes the necessity for evenly spaced samples, an advantage illustrated on a developmental Drosophila dataset with irregular replications. The hierarchical Gaussian process model provides an excellent statistical basis for several gene-expression time-series tasks. It has only a few additional parameters over a regular GP, has negligible additional complexity, is easily implemented and can be integrated into several existing algorithms. Our experiments were implemented in python, and are available from the authors' website: http://staffwww.dcs.shef.ac.uk/people/J.Hensman/.

  7. Generation and Analysis of Constrained Random Sampling Patterns

    DEFF Research Database (Denmark)

    Pierzchlewski, Jacek; Arildsen, Thomas

    2016-01-01

Random sampling is a technique for signal acquisition which is gaining popularity in practical signal processing systems. Nowadays, event-driven analog-to-digital converters make random sampling feasible in practical applications. A process of random sampling is defined by a sampling pattern, which indicates signal sampling points in time. Practical random sampling patterns are constrained by ADC characteristics and application requirements. In this paper, we introduce statistical methods which evaluate random sampling pattern generators with emphasis on practical applications. Furthermore, we propose an algorithm which generates random sampling patterns dedicated to event-driven ADCs better than existing sampling pattern generators. Finally, implementation issues of random sampling patterns are discussed.
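A common practical constraint of the kind this record mentions is a minimum spacing between sampling points (e.g. an ADC's minimum conversion time). A simple generator honouring such a constraint, as an illustrative sketch (not the paper's algorithm; all parameter values are made up):

```python
import random

def constrained_pattern(k, grid_len, min_gap, rng):
    """Draw k sampling points on a time grid of grid_len slots such that
    consecutive points are at least min_gap grid periods apart.
    Reserve the mandatory gaps first, then spread the leftover slack at random."""
    slack = grid_len - 1 - (k - 1) * min_gap
    if slack < 0:
        raise ValueError("constraint is infeasible for these parameters")
    cuts = sorted(rng.randint(0, slack) for _ in range(k))
    # Adding i*min_gap to the i-th sorted cut guarantees every gap >= min_gap.
    return [c + i * min_gap for i, c in enumerate(cuts)]

rng = random.Random(7)
pattern = constrained_pattern(k=16, grid_len=256, min_gap=4, rng=rng)
gaps = [b - a for a, b in zip(pattern, pattern[1:])]
```

The statistical evaluation the paper describes would then be applied to ensembles of such patterns, e.g. testing how uniformly the sampling points cover the grid.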

  8. On the limiting characteristics of quantum random number generators at various clusterings of photocounts

    Science.gov (United States)

    Molotkov, S. N.

    2017-03-01

    Various methods for the clustering of photocounts constituting a sequence of random numbers are considered. It is shown that the clustering of photocounts resulting in the Fermi-Dirac distribution makes it possible to achieve the theoretical limit of the random number generation rate.

  9. Health plan auditing: 100-percent-of-claims vs. random-sample audits.

    Science.gov (United States)

    Sillup, George P; Klimberg, Ronald K

    2011-01-01

    The objective of this study was to examine the relative efficacy of two different methodologies for auditing self-funded medical claim expenses: 100-percent-of-claims auditing versus random-sampling auditing. Multiple data sets of claim errors or 'exceptions' from two Fortune-100 corporations were analysed and compared to 100 simulated audits of 300- and 400-claim random samples. Random-sample simulations failed to identify a significant number and amount of the errors that ranged from $200,000 to $750,000. These results suggest that health plan expenses of corporations could be significantly reduced if they audited 100% of claims and embraced a zero-defect approach.
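The study's point, that a 300- or 400-claim random sample misses most errors in a large claims population, can be seen in a toy simulation (illustrative numbers only, not the study's data):

```python
import random

rng = random.Random(3)
N, n_errors = 10_000, 100
errors = set(rng.sample(range(N), n_errors))   # erroneous claims, unknown to the auditor

def random_sample_audit(size):
    """Fraction of the erroneous claims that a random-sample audit uncovers."""
    audited = set(rng.sample(range(N), size))
    return len(errors & audited) / n_errors

caught_300 = random_sample_audit(300)   # expect to find only ~3% of the errors
caught_all = 1.0                        # a 100%-of-claims audit finds every error
```

A 300-claim sample inspects 3% of the population, so on average it uncovers only about 3% of the error instances (and thus of the recoverable dollars), which is the gap the authors quantify.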

  10. E-learning or educational leaflet: does it make a difference in oral health promotion? A clustered randomized trial.

    Science.gov (United States)

    Al Bardaweel, Susan; Dashash, Mayssoon

    2018-05-10

The early recognition of technology together with great ability to use computers and smart systems have promoted researchers to investigate the possibilities of utilizing technology for improving health care in children. The aim of this study was to compare between the traditional educational leaflets and E-applications in improving oral health knowledge, oral hygiene and gingival health in schoolchildren of Damascus city, Syria. A clustered randomized controlled trial at two public primary schools was performed. About 220 schoolchildren aged 10-11 years were included in this study and grouped into two clusters. Children in Leaflet cluster received oral health education through leaflets, while children in E-learning cluster received oral health education through an E-learning program. A questionnaire was designed to register information related to oral health knowledge and to record Plaque and Gingival indices. Questionnaire administration and clinical assessment were undertaken at baseline, 6 and at 12 weeks of oral health education. Data was analysed using one way repeated measures ANOVA, post hoc Bonferroni test and independent samples t-test. Leaflet cluster (107 participants) had statistically significantly better oral health knowledge than E-learning cluster (104 participants) at 6 weeks (P E-learning cluster:100 participants). The mean knowledge gain compared to baseline was higher in Leaflet cluster than in E-learning cluster. A significant reduction in the PI means at 6 weeks and 12 weeks was observed in both clusters (P E-learning cluster at 6 weeks (P E-learning cluster at 6 weeks (P < 0.05) and 12 weeks (P < 0.05). Traditional educational leaflets are an effective tool in the improvement of both oral health knowledge as well as clinical indices of oral hygiene and care among Syrian children. Leaflets can be used in school-based oral health education for a positive outcome. Australian New Zealand Clinical Trials Registry ( ACTRN

  11. Precipitation and clustering in the early stages of ageing in Inconel 718

    International Nuclear Information System (INIS)

    Alam, Talukder; Chaturvedi, Mahesh; Ringer, Simon P.; Cairney, Julie M.

    2010-01-01

Research highlights: → IN718 could be age hardened rapidly by secondary phase formation. → Co-located phases were observed in the earliest stage of detection. → Clustering of Ti/Al and Nb atoms was observed prior to precipitation. - Abstract: In this report we investigate the onset and evolution of precipitation in the early stages of ageing in the alloy WE 91, a variant of the Ni-Fe-Cr superalloy Inconel 718 (IN718). Transmission electron microscopy and atom probe tomography were used to study the size and volume fraction of γ' and γ'' precipitates and the extent of pre-precipitate clustering of Al/Ti and Nb. Co-located γ' and γ'' precipitates were observed from the shortest ageing times that precipitates could be visualised using atom probe. At shorter times, prior to the observation of precipitates, clustering of Al/Ti and Nb was shown to occur. The respective volume fraction of the γ' and γ'' precipitates and the clustering of Al/Ti and Nb suggest that γ'' nucleates prior to γ' during ageing at 706 °C for this alloy.

  12. Cluster II quartet take the stage together

    Science.gov (United States)

    1999-11-01

    This is the only occasion on which all four of ESA's Cluster II spacecraft will be on display together in Europe. Four Spacecraft, One Mission The unique event takes place near the end of the lengthy assembly and test programme, during which each individual spacecraft is being assembled in sequence, one after the other. Two have already completed their assembly and systems testing and are about to be stored in special containers at IABG prior to shipment to the Baikonur launch site in Kazakhstan next spring. In the case of the other two, flight models 5 and 8, installation of the science payloads has finished, but their exhaustive series of environmental tests at IABG have yet to begin. Following delivery to the launch site next April, the satellites will be launched in pairs in June and July 2000. Two Soyuz rockets, each with a newly designed Fregat upper stage, are being provided by the Russian-French Starsem company. This will be the first time ESA satellites have been launched from the former Soviet Union. Cluster II is a replacement for the original Cluster mission, which was lost during the maiden launch of Ariane 5 in June 1996. ESA, given the mission's importance in its overall strategy in the area of the Sun-Earth connection, decided to rebuild this unique project. ESA member states supported that proposal. On 3 April 1997, the Agency's Science Programme Committee agreed. Cluster II was born. European Teamwork Scientific institutions and industrial enterprises in almost all the 14 ESA member states and the United States are taking part in the Cluster II project. Construction of the eight Cluster / Cluster II spacecraft has been a major undertaking for European industry. Built into each 1200 kg satellite are six propellant tanks, two pressure tanks, eight thrusters, 80 metres of pipework, about 5 km of wiring, 380 connectors and more than 14 000 electrical contacts. All the spacecraft were assembled in the giant clean room at the Friedrichshafen plant of

  13. Universal Prevention for Anxiety and Depressive Symptoms in Children: A Meta-analysis of Randomized and Cluster-Randomized Trials.

    Science.gov (United States)

    Ahlen, Johan; Lenhard, Fabian; Ghaderi, Ata

    2015-12-01

    Although under-diagnosed, anxiety and depression are among the most prevalent psychiatric disorders in children and adolescents, leading to severe impairment, increased risk of future psychiatric problems, and a high economic burden to society. Universal prevention may be a potent way to address these widespread problems. There are several benefits to universal relative to targeted interventions because there is limited knowledge as to how to screen for anxiety and depression in the general population. Earlier meta-analyses of the prevention of depression and anxiety symptoms among children suffer from methodological inadequacies such as combining universal, selective, and indicated interventions in the same analyses, and comparing cluster-randomized trials with randomized trials without any correction for clustering effects. The present meta-analysis attempted to determine the effectiveness of universal interventions to prevent anxiety and depressive symptoms after correcting for clustering effects. A systematic search of randomized studies in PsychINFO, Cochrane Library, and Google Scholar resulted in 30 eligible studies meeting inclusion criteria, namely peer-reviewed, randomized or cluster-randomized trials of universal interventions for anxiety and depressive symptoms in school-aged children. Sixty-three percent of the studies reported outcome data regarding anxiety and 87 % reported outcome data regarding depression. Seventy percent of the studies used randomization at the cluster level. There were small but significant effects regarding anxiety (.13) and depressive (.11) symptoms as measured at immediate posttest. At follow-up, which ranged from 3 to 48 months, effects were significantly larger than zero regarding depressive (.07) but not anxiety (.11) symptoms. There was no significant moderation effect of the following pre-selected variables: the primary aim of the intervention (anxiety or depression), deliverer of the intervention, gender distribution

  14. Two-stage nonrecursive filter/decimator

    International Nuclear Information System (INIS)

    Yoder, J.R.; Richard, B.D.

    1980-08-01

    A two-stage digital filter/decimator has been designed and implemented to reduce the sampling rate associated with the long-term computer storage of certain digital waveforms. This report describes the design selection and implementation process and serves as documentation for the system actually installed. A filter design with finite-impulse response (nonrecursive) was chosen for implementation via direct convolution. A newly-developed system-test statistic validates the system under different computer-operating environments
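The report's actual filter coefficients are not given here, but the structure it describes (a nonrecursive FIR filter applied by direct convolution, followed by sample-rate reduction, done in two cascaded stages) can be sketched as follows, with hypothetical 3-tap coefficients:

```python
def fir_filter(x, taps):
    """Direct-convolution FIR (nonrecursive) filter with zero initial state."""
    return [sum(t * x[n - k] for k, t in enumerate(taps) if n - k >= 0)
            for n in range(len(x))]

def decimate(x, taps, m):
    """One stage: anti-alias FIR filter, then keep every m-th output sample."""
    return fir_filter(x, taps)[::m]

# Two-stage decimation by 4 (2 x 2); each stage uses a crude lowpass
# so that aliasing is suppressed before each rate reduction.
taps = [0.25, 0.5, 0.25]
x = [float(n % 8) for n in range(64)]          # a test waveform
y = decimate(decimate(x, taps, 2), taps, 2)    # 64 samples -> 16 samples
```

Splitting a large decimation factor into two stages is the standard design choice the report reflects: each stage's anti-alias filter can then be much shorter than a single filter for the full factor would need to be.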

  15. Acceptance sampling using judgmental and randomly selected samples

    Energy Technology Data Exchange (ETDEWEB)

    Sego, Landon H.; Shulman, Stanley A.; Anderson, Kevin K.; Wilson, John E.; Pulsipher, Brent A.; Sieber, W. Karl

    2010-09-01

We present a Bayesian model for acceptance sampling where the population consists of two groups, each with different levels of risk of containing unacceptable items. Expert opinion, or judgment, may be required to distinguish between the high- and low-risk groups. Hence, high-risk items are likely to be identified (and sampled) using expert judgment, while the remaining low-risk items are sampled randomly. We focus on the situation where all observed samples must be acceptable. Consequently, the objective of the statistical inference is to quantify the probability that a large percentage of the unsampled items in the population are also acceptable. We demonstrate that traditional (frequentist) acceptance sampling and simpler Bayesian formulations of the problem are essentially special cases of the proposed model. We explore the properties of the model in detail, and discuss the conditions necessary to ensure that required sample sizes are a non-decreasing function of the population size. The method is applicable to a variety of acceptance sampling problems, and, in particular, to environmental sampling where the objective is to demonstrate the safety of reoccupying a remediated facility that has been contaminated with a lethal agent.
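The simplest special case the abstract alludes to (a single homogeneous group, all observed samples acceptable, uniform prior on the acceptable fraction) reduces to a one-line Beta-posterior calculation. A sketch of that special case only, not the paper's two-group judgmental model:

```python
def prob_exceeds(n_ok, p=0.95):
    """Posterior probability that the acceptable fraction theta exceeds p,
    given a uniform Beta(1, 1) prior and n_ok acceptable samples with no
    failures: the posterior is Beta(n_ok + 1, 1), whose CDF at p is p**(n_ok + 1)."""
    return 1 - p ** (n_ok + 1)

# Smallest number of all-acceptable samples giving 95% posterior confidence
# that more than 95% of the population is acceptable.
n = 0
while prob_exceeds(n) < 0.95:
    n += 1
```

Under these assumptions 59 all-acceptable samples suffice for a 95/95 statement, matching the classical binomial rule of thumb; the paper's contribution is to reduce this burden when expert judgment isolates the high-risk items.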

  16. Energy Balance 4 Kids with Play: Results from a Two-Year Cluster-Randomized Trial.

    Science.gov (United States)

    Madsen, Kristine; Linchey, Jennifer; Gerstein, Dana; Ross, Michelle; Myers, Esther; Brown, Katie; Crawford, Patricia

    2015-08-01

    Identifying sustainable approaches to improving the physical activity (PA) and nutrition environments in schools is an important public health goal. This study examined the impact of Energy Balance for Kids with Play (EB4K with Play), a school-based intervention developed by the Academy of Nutrition and Dietetics Foundation and Playworks, on students' PA, dietary habits and knowledge, and weight status over 2 years. This cluster-randomized, controlled trial took place in four intervention and two control schools over 2 years (n=879; third- to fifth-grade students). PA (fourth and fifth grades only), dietary knowledge and behaviors, school policies, and BMI z-score were assessed at baseline (fall 2011), midpoint (spring 2012), and endpoint (fall 2012 for accelerometers; spring 2013 for all other outcomes). At endpoint, there were no group differences in change in PA or dietary behaviors, although BMI z-score decreased overall by -0.07 (p=0.05). Students' dietary knowledge significantly increased, as did the amount of vegetables schools served. Post-hoc analyses stratified by grade revealed that, relative to control students, fourth-grade intervention students reduced school-day sedentary time by 15 minutes (p=0.023) and third-grade intervention students reduced BMI z-score by -0.2. The intervention increased children's dietary knowledge and may improve weight status and decrease sedentary behaviors among younger children. Future iterations should examine programming specific to different age groups.

  17. Random-effects linear modeling and sample size tables for two special crossover designs of average bioequivalence studies: the four-period, two-sequence, two-formulation and six-period, three-sequence, three-formulation designs.

    Science.gov (United States)

    Diaz, Francisco J; Berg, Michel J; Krebill, Ron; Welty, Timothy; Gidal, Barry E; Alloway, Rita; Privitera, Michael

    2013-12-01

    Due to concern and debate in the epilepsy medical community and to the current interest of the US Food and Drug Administration (FDA) in revising approaches to the approval of generic drugs, the FDA is currently supporting ongoing bioequivalence studies of antiepileptic drugs, the EQUIGEN studies. During the design of these crossover studies, the researchers could not find commercial or non-commercial statistical software that quickly allowed computation of sample sizes for their designs, particularly software implementing the FDA requirement of using random-effects linear models for the analyses of bioequivalence studies. This article presents tables for sample-size evaluations of average bioequivalence studies based on the two crossover designs used in the EQUIGEN studies: the four-period, two-sequence, two-formulation design, and the six-period, three-sequence, three-formulation design. Sample-size computations assume that random-effects linear models are used in bioequivalence analyses with crossover designs. Random-effects linear models have been traditionally viewed by many pharmacologists and clinical researchers as just mathematical devices to analyze repeated-measures data. In contrast, a modern view of these models attributes to them an important mathematical role in theoretical formulations of personalized medicine, because these models not only have parameters that represent average patients, but also have parameters that represent individual patients. Moreover, the notation and language of random-effects linear models have evolved over the years. Thus, another goal of this article is to provide a presentation of the statistical modeling of data from bioequivalence studies that highlights the modern view of these models, with special emphasis on power analyses and sample-size computations.
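    Lacking the paper's tables, a rough idea of the required sample size can be obtained by simulation. The sketch below is an assumption-laden stand-in, not the EQUIGEN analysis: it simulates the four-period, two-sequence replicate design, forms within-subject test-minus-reference contrasts (so subject random effects cancel), and applies a two-one-sided-tests (TOST) procedure, with a normal approximation standing in for the t critical value.

```python
import numpy as np
from statistics import NormalDist

def tost_power(n, cv_w=0.25, gmr=1.0, alpha=0.05, n_sim=2000, seed=1):
    """Simulated power of average-bioequivalence TOST for the four-period,
    two-sequence, two-formulation replicate crossover. Each subject's
    test-minus-reference contrast cancels the subject random effect, so
    only within-subject variability (cv_w) enters. The z critical value
    is a normal approximation to the t quantile (rough planning sketch)."""
    rng = np.random.default_rng(seed)
    sd_w = np.sqrt(np.log(cv_w ** 2 + 1))   # log-scale within-subject SD
    theta = np.log(1.25)                    # standard BE limits on log scale
    z = NormalDist().inv_cdf(1 - alpha)
    hits = 0
    for _ in range(n_sim):
        # mean of two test periods minus mean of two reference periods
        d = (rng.normal(np.log(gmr), sd_w, (n, 2)).mean(axis=1)
             - rng.normal(0.0, sd_w, (n, 2)).mean(axis=1))
        se = d.std(ddof=1) / np.sqrt(n)
        if (d.mean() - theta) / se < -z and (d.mean() + theta) / se > z:
            hits += 1
    return hits / n_sim
```

    Scanning n until the simulated power crosses the target (e.g. 80% or 90%) gives an approximate sample size under these assumptions.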

  18. A cluster-randomized trial of a college health center-based alcohol and sexual violence intervention (GIFTSS): Design, rationale, and baseline sample.

    Science.gov (United States)

    Abebe, Kaleab Z; Jones, Kelley A; Rofey, Dana; McCauley, Heather L; Clark, Duncan B; Dick, Rebecca; Gmelin, Theresa; Talis, Janine; Anderson, Jocelyn; Chugani, Carla; Algarroba, Gabriela; Antonio, Ashley; Bee, Courtney; Edwards, Clare; Lethihet, Nadia; Macak, Justin; Paley, Joshua; Torres, Irving; Van Dusen, Courtney; Miller, Elizabeth

    2018-02-01

    Sexual violence (SV) on college campuses is common, especially alcohol-related SV. This is a 2-arm cluster randomized controlled trial to test a brief intervention to reduce risk for alcohol-related sexual violence (SV) among students receiving care from college health centers (CHCs). Intervention CHC staff are trained to deliver universal SV education to all students seeking care, to facilitate patient and provider comfort in discussing SV and related abusive experiences (including the role of alcohol). Control sites provide participants with information about drinking responsibly. Across 28 participating campuses (12 randomized to intervention and 16 to control), 2292 students seeking care at CHCs complete surveys prior to their appointment (baseline), immediately after (exit), 4 months later (T2), and one year later (T3). The primary outcome is change in recognition of SV and sexual risk. Among those reporting SV exposure at baseline, changes in SV victimization, disclosure, and use of SV services are additional outcomes. Intervention effects will be assessed using generalized linear mixed models that account for clustering of repeated observations both within CHCs and within students. Slightly more than half of the participating colleges have undergraduate enrollment of ≥3000 students; two-thirds are public and almost half are urban. Among participants there were relatively more Asian (10 v 1%) and Black/African American (13 v 7%) and fewer White (58 v 74%) participants in the intervention compared to control. This study will offer the first formal assessment of SV prevention in the CHC setting. Clinical Trials #: NCT02355470. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.

  19. Two generalizations of Kohonen clustering

    Science.gov (United States)

    Bezdek, James C.; Pal, Nikhil R.; Tsao, Eric C. K.

    1993-01-01

    The relationship between the sequential hard c-means (SHCM), learning vector quantization (LVQ), and fuzzy c-means (FCM) clustering algorithms is discussed. LVQ and SHCM suffer from several major problems. For example, they depend heavily on initialization. If the initial values of the cluster centers are outside the convex hull of the input data, such algorithms, even if they terminate, may not produce meaningful results in terms of prototypes for cluster representation. This is due in part to the fact that they update only the winning prototype for every input vector. The impact and interaction of these two families with Kohonen's self-organizing feature mapping (SOFM), which is not a clustering method, but which often lends ideas to clustering algorithms, is discussed. Then two generalizations of LVQ that are explicitly designed as clustering algorithms are presented; these algorithms are referred to as generalized LVQ (GLVQ) and fuzzy LVQ (FLVQ). Learning rules are derived to optimize an objective function whose goal is to produce 'good clusters'. GLVQ/FLVQ (may) update every node in the clustering net for each input vector. Neither GLVQ nor FLVQ depends upon a choice for the update neighborhood or learning rate distribution - these are taken care of automatically. Segmentation of a gray tone image is used as a typical application of these algorithms to illustrate the performance of GLVQ/FLVQ.
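    The key idea, updating every prototype for each input with fuzzy membership weights rather than only the winner, can be sketched as follows. This is an FCM-style illustration of that idea, not the paper's exact GLVQ/FLVQ learning rules.

```python
import numpy as np

def flvq_step(X, V, m=2.0, lr=0.1):
    """One online pass over the data X (rows are inputs) updating the
    prototype matrix V. Every prototype is pulled toward each input,
    weighted by a fuzzy membership raised to the power m, so the update
    is not limited to the winning prototype."""
    for x in X:
        d2 = ((V - x) ** 2).sum(axis=1) + 1e-12       # squared distances
        u = (d2[:, None] / d2[None, :]) ** (1.0 / (m - 1))
        u = 1.0 / u.sum(axis=1)                        # FCM memberships, sum to 1
        V += lr * (u ** m)[:, None] * (x - V)          # update every node
    return V
```

    On well-separated data the prototypes migrate to the cluster centers without any winner-take-all step or update-neighborhood schedule.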

  20. Inadequacy of ethical conduct and reporting of stepped wedge cluster randomized trials: Results from a systematic review.

    Science.gov (United States)

    Taljaard, Monica; Hemming, Karla; Shah, Lena; Giraudeau, Bruno; Grimshaw, Jeremy M; Weijer, Charles

    2017-08-01

    Background/aims The use of the stepped wedge cluster randomized design is rapidly increasing. This design is commonly used to evaluate health policy and service delivery interventions. Stepped wedge cluster randomized trials have unique characteristics that complicate their ethical interpretation. The 2012 Ottawa Statement provides comprehensive guidance on the ethical design and conduct of cluster randomized trials, and the 2010 CONSORT extension for cluster randomized trials provides guidelines for reporting. Our aims were to assess the adequacy of the ethical conduct and reporting of stepped wedge trials to date, focusing on research ethics review and informed consent. Methods We conducted a systematic review of stepped wedge cluster randomized trials in health research published up to 2014 in English language journals. We extracted details of study intervention and data collection procedures, as well as reporting of research ethics review and informed consent. Two reviewers independently extracted data from each trial; discrepancies were resolved through discussion. We identified the presence of any research participants at the cluster level and the individual level. We assessed ethical conduct by tabulating reporting of research ethics review and informed consent against the presence of research participants. Results Of 32 identified stepped wedge trials, only 24 (75%) reported review by a research ethics committee, and only 16 (50%) reported informed consent from any research participants; yet all trials included research participants at some level. In the subgroup of 20 trials with research participants at cluster level, only 4 (20%) reported informed consent from such participants; in 26 trials with individual-level research participants, only 15 (58%) reported their informed consent.
Interventions (regardless of whether targeting cluster- or individual-level participants) were delivered at the group level in more than two-thirds of trials.

  1. A Coupled Hidden Conditional Random Field Model for Simultaneous Face Clustering and Naming in Videos

    KAUST Repository

    Zhang, Yifan

    2016-08-18

    For face naming in TV series or movies, a typical way is using subtitles/script alignment to get the time stamps of the names, and tagging them to the faces. We study the problem of face naming in videos when subtitles are not available. To this end, we divide the problem into two tasks: face clustering, which groups the faces depicting a certain person into a cluster, and name assignment, which associates a name to each face. Each task is formulated as a structured prediction problem and modeled by a hidden conditional random field (HCRF) model. We argue that the two tasks are correlated problems whose outputs can provide prior knowledge of the target prediction for each other. The two HCRFs are coupled in a unified graphical model called coupled HCRF, where the joint dependence of the cluster labels and face name association is naturally embedded in the correlation between the two HCRFs. We provide an effective algorithm to optimize the two HCRFs iteratively and show that the performance of both tasks on a real-world data set can be improved.

  2. Sampling problems for randomly broken sticks

    Energy Technology Data Exchange (ETDEWEB)

    Huillet, Thierry [Laboratoire de Physique Theorique et Modelisation, CNRS-UMR 8089 et Universite de Cergy-Pontoise, 5 mail Gay-Lussac, 95031, Neuville sur Oise (France)

    2003-04-11

    Consider the random partitioning model of a population (represented by a stick of length 1) into n species (fragments) with identically distributed random weights (sizes). Upon ranking the fragments' weights according to ascending sizes, let S_{m:n} be the size of the mth smallest fragment. Assume that some observer is sampling such populations as follows: drop at random k points (the sample size) onto this stick and record the corresponding numbers of visited fragments. We shall investigate the following sampling problems: (1) what is the sample size if the sampling is carried out until the first visit of the smallest fragment (size S_{1:n})? (2) For a given sample size, have all the fragments of the stick been visited at least once or not? This question is related to Feller's random coupon collector problem. (3) In what order are new fragments being discovered, and what is the random number of samples separating the discovery of consecutive new fragments until exhaustion of the list? For this problem, the distribution of the size-biased permutation of the species' weights (that is, the sequence of their weights in their order of appearance) is needed and studied.
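    The model is straightforward to explore numerically. A minimal simulation (not from the paper) of question (2), whether k sample points visit all n fragments of a uniformly broken stick:

```python
import random

def broken_stick(n, rng):
    """Break the unit stick at n - 1 uniform points; return the n
    fragments as (left, right) intervals."""
    cuts = sorted(rng.random() for _ in range(n - 1))
    pts = [0.0] + cuts + [1.0]
    return [(pts[i], pts[i + 1]) for i in range(n)]

def all_visited(n, k, rng):
    """Drop k uniform sample points on a freshly broken stick; report
    whether every fragment was visited at least once (question 2)."""
    frags = broken_stick(n, rng)
    hits = set()
    for _ in range(k):
        u = rng.random()
        hits.add(next(i for i, (a, b) in enumerate(frags) if a <= u < b))
    return len(hits) == n
```

    Averaging all_visited over many replicates estimates the coupon-collector-style coverage probability as a function of k.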

  3. The Effect of Cluster-Based Instruction on Mathematic Achievement in Inclusive Schools

    Science.gov (United States)

    Gunarhadi, Sunardi; Anwar, Mohammad; Andayani, Tri Rejeki; Shaari, Abdull Sukor

    2016-01-01

    The research aimed to investigate the effect of Cluster-Based Instruction (CBI) on the academic achievement of Mathematics in inclusive schools. The sample was 68 students in two intact classes, including those with learning disabilities, selected using a cluster random technique among 17 inclusive schools in the regency of Surakarta. The two…

  4. Assessment of heavy metals in Averrhoa bilimbi and A. carambola fruit samples at two developmental stages.

    Science.gov (United States)

    Soumya, S L; Nair, Bindu R

    2016-05-01

    Though the fruits of Averrhoa bilimbi and A. carambola are economically and medicinally important, they remain underutilized. The present study reports heavy metal quantitation in the fruit samples of A. bilimbi and A. carambola (Oxalidaceae), collected at two stages of maturity. Heavy metals are known to interfere with the functioning of vital cellular components. Although toxic, some elements are considered essential for human health in trace quantities. Heavy metals such as Cr, Mn, Co, Cu, Zn, As, Se, Pb, and Cd were analyzed by atomic absorption spectroscopy (AAS). The samples under investigation included A. bilimbi unripe (BU) and ripe (BR), A. carambola sour unripe (CSU) and ripe (CSR), and A. carambola sweet unripe (CTU) and ripe (CTR). Heavy metal analysis showed that relatively higher levels of heavy metals were present in BR samples compared to the rest of the samples. The highest amounts of As and Se were recorded in BU samples, while Mn content was highest in CSU samples and Co in CSR. The lowest amounts of Cr, Zn, Se, Cd, and Pb were noted in CTU, while Mn, Cu, and As were lowest in CTR. Thus, the sweet types of A. carambola (CTU, CTR) had comparatively lower heavy metal content. There appears to be no reason for concern, since the different fruit samples of Averrhoa studied presently showed the presence of various heavy metals only in trace quantities.

  5. Precipitation and clustering in the early stages of ageing in Inconel 718

    Energy Technology Data Exchange (ETDEWEB)

    Alam, Talukder, E-mail: talukder.alam@sydney.edu.au [Australian Centre for Microscopy and Microanalysis, University of Sydney, NSW 2006 (Australia); Chaturvedi, Mahesh [Department of Mechanical and Industrial Engineering, University of Manitoba, Winnipeg, MB R3T 5V6 (Canada); Ringer, Simon P.; Cairney, Julie M. [Australian Centre for Microscopy and Microanalysis, University of Sydney, NSW 2006 (Australia)

    2010-11-15

    Research highlights: IN718 could be age hardened rapidly by secondary phase formation. Co-located phases were observed in the earliest stage of detection. Clustering of Ti/Al and Nb atoms was observed prior to precipitation. - Abstract: In this report we investigate the onset and evolution of precipitation in the early stages of ageing in the alloy WE 91, a variant of the Ni-Fe-Cr superalloy Inconel 718 (IN718). Transmission electron microscopy and atom probe tomography were used to study the size and volume fraction of γ' and γ'' precipitates and the extent of pre-precipitate clustering of Al/Ti and Nb. Co-located γ' and γ'' precipitates were observed from the shortest ageing times at which precipitates could be visualised using the atom probe. At shorter times, prior to the observation of precipitates, clustering of Al/Ti and Nb was shown to occur. The respective volume fractions of the γ' and γ'' precipitates and the clustering of Al/Ti and Nb suggest that γ'' nucleates prior to γ' during ageing at 706 °C for this alloy.

  6. A comparison of confidence interval methods for the intraclass correlation coefficient in community-based cluster randomization trials with a binary outcome.

    Science.gov (United States)

    Braschel, Melissa C; Svec, Ivana; Darlington, Gerarda A; Donner, Allan

    2016-04-01

    Many investigators rely on previously published point estimates of the intraclass correlation coefficient rather than on their associated confidence intervals to determine the required size of a newly planned cluster randomized trial. Although confidence interval methods for the intraclass correlation coefficient that can be applied to community-based trials have been developed for a continuous outcome variable, fewer methods exist for a binary outcome variable. The aim of this study is to evaluate confidence interval methods for the intraclass correlation coefficient applied to binary outcomes in community intervention trials enrolling a small number of large clusters. Existing methods for confidence interval construction are examined and compared to a new ad hoc approach based on dividing clusters into a large number of smaller sub-clusters and subsequently applying existing methods to the resulting data. Monte Carlo simulation is used to assess the width and coverage of confidence intervals for the intraclass correlation coefficient based on Smith's large sample approximation of the standard error of the one-way analysis of variance estimator, an inverted modified Wald test for the Fleiss-Cuzick estimator, and intervals constructed using a bootstrap-t applied to a variance-stabilizing transformation of the intraclass correlation coefficient estimate. In addition, a new approach is applied in which clusters are randomly divided into a large number of smaller sub-clusters with the same methods applied to these data (with the exception of the bootstrap-t interval, which assumes large cluster sizes). These methods are also applied to a cluster randomized trial on adolescent tobacco use for illustration. When applied to a binary outcome variable in a small number of large clusters, existing confidence interval methods for the intraclass correlation coefficient provide poor coverage. 
However, confidence intervals constructed using the new approach combined with Smith's large sample approximation provide improved coverage.
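    For illustration, a minimal version of the one-way ANOVA point estimator of the intraclass correlation coefficient can be written as follows. This is the point estimator only; Smith's standard-error approximation and the interval constructions compared in the paper are not included.

```python
import numpy as np

def anova_icc(y):
    """One-way ANOVA estimator of the ICC; y is a list of 1-D arrays,
    one per cluster, holding binary (or continuous) responses."""
    k = len(y)
    n_i = np.array([len(c) for c in y])
    N = n_i.sum()
    grand = np.concatenate(y).mean()
    msb = sum(n * (c.mean() - grand) ** 2 for n, c in zip(n_i, y)) / (k - 1)
    msw = sum(((c - c.mean()) ** 2).sum() for c in y) / (N - k)
    n0 = (N - (n_i ** 2).sum() / N) / (k - 1)   # adjusted mean cluster size
    return (msb - msw) / (msb + (n0 - 1) * msw)
```

    On beta-binomial data with a known ICC (the beta prior's variance fixes rho = 1/(a + b + 1)), the estimator recovers the target value when clusters are plentiful; the paper's setting of few, large clusters is exactly where it becomes unstable.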

  7. Randomized Comparison of Two Vaginal Self-Sampling Methods for Human Papillomavirus Detection: Dry Swab versus FTA Cartridge.

    Directory of Open Access Journals (Sweden)

    Rosa Catarino

    Full Text Available Human papillomavirus (HPV) self-sampling (self-HPV) is valuable in cervical cancer screening. HPV testing is usually performed on physician-collected cervical smears stored in liquid-based medium. Dry filters and swabs are an alternative. We evaluated the adequacy of self-HPV using two dry storage and transport devices, the FTA cartridge and swab. A total of 130 women performed two consecutive self-HPV samples. Randomization determined which of the two tests was performed first: self-HPV using dry swabs (s-DRY) or vaginal specimen collection using a cytobrush applied to an FTA cartridge (s-FTA). After self-HPV, a physician collected a cervical sample using liquid-based medium (Dr-WET). HPV types were identified by real-time PCR. Agreement between collection methods was measured using the kappa statistic. HPV prevalence for high-risk types was 62.3% (95% CI: 53.7-70.2) detected by s-DRY, 56.2% (95% CI: 47.6-64.4) by Dr-WET, and 54.6% (95% CI: 46.1-62.9) by s-FTA. There was overall agreement of 70.8% between s-FTA and s-DRY samples (kappa = 0.34), and of 82.3% between self-HPV and Dr-WET samples (kappa = 0.56). Detection sensitivities for low-grade squamous intraepithelial lesion or worse (LSIL+) were: 64.0% (95% CI: 44.5-79.8) for s-FTA, 84.6% (95% CI: 66.5-93.9) for s-DRY, and 76.9% (95% CI: 58.0-89.0) for Dr-WET. The preferred self-collection method among patients was s-DRY (40.8% vs. 15.4%). Regarding costs, the FTA card was five times more expensive than the swab (~5 US dollars (USD) per card vs. ~1 USD per swab). Self-HPV using dry swabs is sensitive for detecting LSIL+ and less expensive than s-FTA. International Standard Randomized Controlled Trial Number (ISRCTN): 43310942.
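    The kappa statistic used here to quantify agreement between collection methods is standard; a minimal implementation (illustrated with toy tables, not the study's data):

```python
def cohens_kappa(table):
    """Unweighted Cohen's kappa from a KxK agreement table (rows: one
    method's result, columns: the other's). Chance-corrected agreement:
    (observed - expected) / (1 - expected)."""
    n = sum(sum(row) for row in table)
    po = sum(table[i][i] for i in range(len(table))) / n   # observed agreement
    pe = sum(sum(row) * sum(col)                           # chance agreement
             for row, col in zip(table, zip(*table))) / n ** 2
    return (po - pe) / (1 - pe)
```

    Perfect agreement gives kappa = 1, and agreement no better than chance gives kappa = 0, which is why the paper's 82.3% raw agreement maps to a kappa of only 0.56.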

  8. Preoperative staging of lung cancer with PET/CT: cost-effectiveness evaluation alongside a randomized controlled trial

    DEFF Research Database (Denmark)

    Søgaard, Rikke; Fischer, Barbara Malene B; Mortensen, Jann

    2011-01-01

    PURPOSE: Positron emission tomography (PET)/CT has become a widely used technology for preoperative staging of non-small cell lung cancer (NSCLC). Two recent randomized controlled trials (RCTs) have established its efficacy over conventional staging, but no studies have assessed its cost-effectiveness. The objective of this study was to assess the cost-effectiveness of PET/CT as an adjunct to conventional workup for preoperative staging of NSCLC. METHODS: The study was conducted alongside an RCT in which 189 patients were allocated to conventional staging (n = 91) or conventional staging + PET/CT (n = 98) and followed for 1 year, after which the numbers of futile thoracotomies in each group were monitored. A full health care sector perspective was adopted for costing resource use. The outcome parameter was defined as the number needed to treat (NNT), here the number of PET/CT scans needed, to avoid one futile thoracotomy.
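    The NNT-style outcome parameter is a simple function of the futile-thoracotomy rates in the two arms; the sketch below uses hypothetical counts and costs, not the trial's results:

```python
def number_needed_to_scan(futile_conv, n_conv, futile_pet, n_pet):
    """Number of PET/CT scans needed to avoid one futile thoracotomy:
    the reciprocal of the absolute risk reduction between arms."""
    arr = futile_conv / n_conv - futile_pet / n_pet
    return 1.0 / arr

def cost_per_futile_avoided(scan_cost, nnt):
    """Incremental scan cost per futile thoracotomy avoided (ignoring
    downstream cost offsets, which the full evaluation would include)."""
    return scan_cost * nnt
```

    For example, hypothetical futile-thoracotomy rates of 30% versus 20% give an NNT of 10 scans per futile thoracotomy avoided.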

  9. Myeloid clusters are associated with a pro-metastatic environment and poor prognosis in smoking-related early stage non-small cell lung cancer.

    Directory of Open Access Journals (Sweden)

    Wang Zhang

    Full Text Available This study aimed to understand the role of myeloid cell clusters in uninvolved regional lymph nodes from early stage non-small cell lung cancer patients. Uninvolved regional lymph node sections from 67 patients with stage I-III resected non-small cell lung cancer were immunostained to detect myeloid clusters, STAT3 activity and occult metastasis. Anthracosis intensity, myeloid cluster infiltration associated with anthracosis and pSTAT3 level were scored and correlated with patient survival. Multivariate Cox regression analysis was performed with prognostic variables. Human macrophages were used for in vitro nicotine treatment. CD68+ myeloid clusters associated with anthracosis and with an immunosuppressive and metastasis-promoting phenotype and elevated overall STAT3 activity were observed in uninvolved lymph nodes. In patients with a smoking history, myeloid cluster score significantly correlated with anthracosis intensity and pSTAT3 level (P<0.01). Nicotine activated STAT3 in macrophages in long-term culture. CD68+ myeloid clusters correlated and colocalized with occult metastasis. Myeloid cluster score was an independent prognostic factor (P = 0.049) and was associated with survival by Kaplan-Meier estimate in patients with a history of smoking (P = 0.055). The combination of myeloid cluster score with either lymph node stage or pSTAT3 level defined two populations with a significant difference in survival (P = 0.024 and P = 0.004, respectively). Myeloid clusters facilitate a pro-metastatic microenvironment in uninvolved regional lymph nodes and associate with occult metastasis in early stage non-small cell lung cancer. Myeloid cluster score is an independent prognostic factor for survival in patients with a history of smoking, and may present a novel method to inform therapy choices in the adjuvant setting. Further validation studies are warranted.

  10. A cluster expansion approach to exponential random graph models

    International Nuclear Information System (INIS)

    Yin, Mei

    2012-01-01

    The exponential family of random graphs is among the most widely studied network models. We show that any exponential random graph model may alternatively be viewed as a lattice gas model with a finite Banach space norm. The system may then be treated using cluster expansion methods from statistical mechanics. In particular, we derive a convergent power series expansion for the limiting free energy in the case of small parameters. Since the free energy is the generating function for the expectations of other random variables, this characterizes the structure and behavior of the limiting network in this parameter region

  11. Estimating overall exposure effects for the clustered and censored outcome using random effect Tobit regression models.

    Science.gov (United States)

    Wang, Wei; Griswold, Michael E

    2016-11-30

    The random effect Tobit model is a regression model that accommodates both left- and/or right-censoring and within-cluster dependence of the outcome variable. Regression coefficients of random effect Tobit models have conditional interpretations on a constructed latent dependent variable and do not provide inference of overall exposure effects on the original outcome scale. Marginalized random effects model (MREM) permits likelihood-based estimation of marginal mean parameters for the clustered data. For random effect Tobit models, we extend the MREM to marginalize over both the random effects and the normal space and boundary components of the censored response to estimate overall exposure effects at population level. We also extend the 'Average Predicted Value' method to estimate the model-predicted marginal means for each person under different exposure status in a designated reference group by integrating over the random effects and then use the calculated difference to assess the overall exposure effect. The maximum likelihood estimation is proposed utilizing a quasi-Newton optimization algorithm with Gauss-Hermite quadrature to approximate the integration of the random effects. We use these methods to carefully analyze two real datasets. Copyright © 2016 John Wiley & Sons, Ltd.

  12. Edge Principal Components and Squash Clustering: Using the Special Structure of Phylogenetic Placement Data for Sample Comparison

    Science.gov (United States)

    Matsen IV, Frederick A.; Evans, Steven N.

    2013-01-01

    Principal components analysis (PCA) and hierarchical clustering are two of the most heavily used techniques for analyzing the differences between nucleic acid sequence samples taken from a given environment. They have led to many insights regarding the structure of microbial communities. We have developed two new complementary methods that leverage how this microbial community data sits on a phylogenetic tree. Edge principal components analysis enables the detection of important differences between samples that contain closely related taxa. Each principal component axis is a collection of signed weights on the edges of the phylogenetic tree, and these weights are easily visualized by a suitable thickening and coloring of the edges. Squash clustering outputs a (rooted) clustering tree in which each internal node corresponds to an appropriate “average” of the original samples at the leaves below the node. Moreover, the length of an edge is a suitably defined distance between the averaged samples associated with the two incident nodes, rather than the less interpretable average of distances produced by UPGMA, the most widely used hierarchical clustering method in this context. We present these methods and illustrate their use with data from the human microbiome. PMID:23505415
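    The contrast drawn with UPGMA, distance between averaged samples versus average of pairwise distances, is easy to demonstrate. The sketch below uses plain Euclidean distance on generic sample vectors as a stand-in for the phylogenetic distance used by squash clustering:

```python
import numpy as np

def upgma_merge_dist(D, A, B):
    """UPGMA-style merge height: the average of the original pairwise
    distances between groups A and B (index lists into D)."""
    return float(np.mean([D[i, j] for i in A for j in B]))

def squash_merge_dist(X, A, B):
    """Squash-clustering-style merge height: the distance between the
    *averaged* samples of the two groups (rows of X), which is what makes
    the resulting edge lengths directly interpretable."""
    return float(np.linalg.norm(X[A].mean(axis=0) - X[B].mean(axis=0)))
```

    The two quantities generally differ: averaging first and then measuring the distance can only shrink it (a Jensen-type inequality for norms), which is the interpretability point the abstract makes.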

  13. Mismatch of Posttraumatic Stress Disorder (PTSD) Symptoms and DSM-IV Symptom Clusters in a Cancer Sample: Exploratory Factor Analysis of the PTSD Checklist-Civilian Version

    Science.gov (United States)

    Shelby, Rebecca A.; Golden-Kreutz, Deanna M.; Andersen, Barbara L.

    2007-01-01

    The Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition (DSM-IV; American Psychiatric Association, 1994a) conceptualization of posttraumatic stress disorder (PTSD) includes three symptom clusters: reexperiencing, avoidance/numbing, and arousal. The PTSD Checklist-Civilian Version (PCL-C) corresponds to the DSM-IV PTSD symptoms. In the current study, we conducted exploratory factor analysis (EFA) of the PCL-C with two aims: (a) to examine whether the PCL-C evidenced the three-factor solution implied by the DSM-IV symptom clusters, and (b) to identify a factor solution for the PCL-C in a cancer sample. Women (N = 148) with Stage II or III breast cancer completed the PCL-C after completion of cancer treatment. We extracted two-, three-, four-, and five-factor solutions using EFA. Our data did not support the DSM-IV PTSD symptom clusters. Instead, EFA identified a four-factor solution including reexperiencing, avoidance, numbing, and arousal factors. Four symptom items, which may be confounded with illness and cancer treatment-related symptoms, exhibited poor factor loadings. Using these symptom items in cancer samples may lead to overdiagnosis of PTSD and inflated rates of PTSD symptoms. PMID:16281232

  14. Clustering Methods with Qualitative Data: a Mixed-Methods Approach for Prevention Research with Small Samples.

    Science.gov (United States)

    Henry, David; Dymnicki, Allison B; Mohatt, Nathaniel; Allen, James; Kelly, James G

    2015-10-01

    Qualitative methods potentially add depth to prevention research but can produce large amounts of complex data even with small samples. Studies conducted with culturally distinct samples often produce voluminous qualitative data but may lack sufficient sample sizes for sophisticated quantitative analysis. Currently lacking in mixed-methods research are methods allowing for more fully integrating qualitative and quantitative analysis techniques. Cluster analysis can be applied to coded qualitative data to clarify the findings of prevention studies by aiding efforts to reveal such things as the motives of participants for their actions and the reasons behind counterintuitive findings. By clustering groups of participants with similar profiles of codes in a quantitative analysis, cluster analysis can serve as a key component in mixed-methods research. This article reports two studies. In the first study, we conduct simulations to test the accuracy of cluster assignment using three different clustering methods with binary data as produced when coding qualitative interviews. Results indicated that hierarchical clustering, K-means clustering, and latent class analysis produced similar levels of accuracy with binary data and that the accuracy of these methods did not decrease with samples as small as 50. Whereas the first study explores the feasibility of using common clustering methods with binary data, the second study provides a "real-world" example using data from a qualitative study of community leadership connected with a drug abuse prevention project. We discuss the implications of this approach for conducting prevention research, especially with small samples and culturally distinct communities.
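    The first study's setup, checking cluster-recovery accuracy on binary code profiles, can be sketched as follows. The toy data and the plain k-means with farthest-point initialisation below are assumptions for illustration, not the authors' simulation design:

```python
import numpy as np
from itertools import permutations

def kmeans_binary(X, k, iters=20):
    """Plain k-means on 0/1 code vectors with a deterministic
    farthest-point initialisation; returns cluster labels."""
    C = [X[0].astype(float)]
    for _ in range(k - 1):
        d = np.min([((X - c) ** 2).sum(1) for c in C], axis=0)
        C.append(X[d.argmax()].astype(float))
    C = np.stack(C)
    for _ in range(iters):
        lab = ((X[:, None, :] - C[None]) ** 2).sum(-1).argmin(1)
        for j in range(k):
            if (lab == j).any():
                C[j] = X[lab == j].mean(0)
    return lab

def accuracy(true, pred, k):
    """Cluster-assignment accuracy: best label match over permutations."""
    return max(np.mean(np.array([p[t] for t in true]) == pred)
               for p in permutations(range(k)))
```

    With two distinct code profiles and modest coding noise, even a sample of 50 participants is clustered essentially correctly, consistent with the paper's finding that accuracy holds up at small sample sizes.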

  15. Clustering Methods with Qualitative Data: A Mixed Methods Approach for Prevention Research with Small Samples

    Science.gov (United States)

    Henry, David; Dymnicki, Allison B.; Mohatt, Nathaniel; Allen, James; Kelly, James G.

    2016-01-01

    Qualitative methods potentially add depth to prevention research, but can produce large amounts of complex data even with small samples. Studies conducted with culturally distinct samples often produce voluminous qualitative data, but may lack sufficient sample sizes for sophisticated quantitative analysis. Currently lacking in mixed methods research are methods allowing for more fully integrating qualitative and quantitative analysis techniques. Cluster analysis can be applied to coded qualitative data to clarify the findings of prevention studies by aiding efforts to reveal such things as the motives of participants for their actions and the reasons behind counterintuitive findings. By clustering groups of participants with similar profiles of codes in a quantitative analysis, cluster analysis can serve as a key component in mixed methods research. This article reports two studies. In the first study, we conduct simulations to test the accuracy of cluster assignment using three different clustering methods with binary data as produced when coding qualitative interviews. Results indicated that hierarchical clustering, K-Means clustering, and latent class analysis produced similar levels of accuracy with binary data, and that the accuracy of these methods did not decrease with samples as small as 50. Whereas the first study explores the feasibility of using common clustering methods with binary data, the second study provides a “real-world” example using data from a qualitative study of community leadership connected with a drug abuse prevention project. We discuss the implications of this approach for conducting prevention research, especially with small samples and culturally distinct communities. PMID:25946969

  16. Chronic infections in hip arthroplasties: comparing risk of reinfection following one-stage and two-stage revision: a systematic review and meta-analysis

    Directory of Open Access Journals (Sweden)

    Lange J

    2012-03-01

    Full Text Available Jeppe Lange1,2, Anders Troelsen3, Reimar W Thomsen4, Kjeld Søballe1,5 1Lundbeck Foundation Centre for Fast-Track Hip and Knee Surgery, Aarhus C, 2Center for Planned Surgery, Silkeborg Regional Hospital, Silkeborg, 3Department of Orthopaedics, Hvidovre Hospital, Hvidovre, 4Department of Clinical Epidemiology, Aarhus University Hospital, Aalborg, 5Department of Orthopaedics, Aarhus University Hospital, Aarhus C, Denmark. Background: Two-stage revision is regarded by many as the best treatment of chronic infection in hip arthroplasties. Some international reports, however, have advocated one-stage revision. No systematic review or meta-analysis has ever compared the risk of reinfection following one-stage and two-stage revisions for chronic infection in hip arthroplasties. Methods: The review was performed in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA). Relevant studies were identified using PubMed and Embase. We assessed studies that included patients with a chronic infection of a hip arthroplasty treated with either one-stage or two-stage revision and with available data on occurrence of reinfections. We performed a meta-analysis estimating absolute risk of reinfection using a random-effects model. Results: We identified 36 studies eligible for inclusion. None were randomized controlled trials or comparative studies. The patients in these studies had received either one-stage revision (n = 375) or two-stage revision (n = 929). Reinfection occurred with an estimated absolute risk of 13.1% (95% confidence interval: 10.0%–17.1%) in the one-stage cohort and 10.4% (95% confidence interval: 8.5%–12.7%) in the two-stage cohort. The methodological quality of most included studies was considered low, with insufficient data to evaluate confounding factors. Conclusions: Our results may indicate three additional reinfections per 100 reimplanted patients when performing a one-stage versus two-stage revision. However, the
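
    Random-effects pooling of per-study effects, as used in such a meta-analysis, can be sketched with the DerSimonian–Laird estimator. The effect sizes and variances below are hypothetical, not the study's data.

```python
import math

def dersimonian_laird(effects, variances):
    """Random-effects pooling (DerSimonian-Laird) of per-study effects."""
    w = [1 / v for v in variances]
    fixed = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, effects))  # Cochran's Q
    df = len(effects) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)          # between-study variance estimate
    w_re = [1 / (v + tau2) for v in variances]
    pooled = sum(wi * e for wi, e in zip(w_re, effects)) / sum(w_re)
    se = math.sqrt(1 / sum(w_re))
    return pooled, se

# hypothetical logit-transformed reinfection risks from three studies
effects = [-1.9, -2.2, -1.6]
variances = [0.05, 0.08, 0.06]
pooled, se = dersimonian_laird(effects, variances)
```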

  17. Occurrence of Radio Minihalos in a Mass-limited Sample of Galaxy Clusters

    Energy Technology Data Exchange (ETDEWEB)

    Giacintucci, Simona; Clarke, Tracy E. [Naval Research Laboratory, 4555 Overlook Avenue SW, Code 7213, Washington, DC 20375 (United States); Markevitch, Maxim [NASA/Goddard Space Flight Center, Greenbelt, MD 20771 (United States); Cassano, Rossella; Venturi, Tiziana; Brunetti, Gianfranco, E-mail: simona.giacintucci@nrl.navy.mil [INAF—Istituto di Radioastronomia, via Gobetti 101, I-40129 Bologna (Italy)

    2017-06-01

    We investigate the occurrence of radio minihalos—diffuse radio sources of unknown origin observed in the cores of some galaxy clusters—in a statistical sample of 58 clusters drawn from the Planck Sunyaev–Zel’dovich cluster catalog using a mass cut (M_500 > 6 × 10^14 M_⊙). We supplement our statistical sample with a similarly sized nonstatistical sample mostly consisting of clusters in the ACCEPT X-ray catalog with suitable X-ray and radio data, which includes lower-mass clusters. Where necessary (for nine clusters), we reanalyzed the Very Large Array archival radio data to determine whether a minihalo is present. Our total sample includes all 28 currently known and recently discovered radio minihalos, including six candidates. We classify clusters as cool-core or non-cool-core according to the value of the specific entropy floor in the cluster center, rederived or newly derived from the Chandra X-ray density and temperature profiles where necessary (for 27 clusters). Contrary to the common wisdom that minihalos are rare, we find that almost all cool cores—at least 12 out of 15 (80%)—in our complete sample of massive clusters exhibit minihalos. The supplementary sample shows that the occurrence of minihalos may be lower in lower-mass cool-core clusters. No minihalos are found in non-cool cores or “warm cores.” These findings will help test theories of the origin of minihalos and provide information on the physical processes and energetics of the cluster cores.

  18. Design, rationale, and baseline demographics of SEARCH I: a prospective cluster-randomized study

    Directory of Open Access Journals (Sweden)

    Albers F

    2012-07-01

    Full Text Available Frank Albers,1 Asif Shaikh,2 Ahmar Iqbal3 1Medical Affairs Respiratory, 2Clinical Development and Medical Affairs, Field Based Medicine-Respiratory, Boehringer Ingelheim Pharmaceuticals, Inc, Ridgefield, CT, USA; 3Respiratory Medical Affairs, Pfizer Inc, New York, NY, USA. Abstract: Questionnaires are available to identify patients at risk for several chronic diseases, including COPD, but are infrequently utilized in primary care. COPD is often underdiagnosed, while at the same time the US Preventive Services Task Force recommends against spirometric screening for COPD in asymptomatic adults. Use of a symptom-based questionnaire, followed by a handheld spirometric device depending on the answers to the questionnaire, is a promising approach to identify patients at risk for COPD. Screening, Evaluating and Assessing Rate CHanges of diagnosing respiratory conditions in primary care 1 (SEARCH I) was a prospective cluster-randomized study in 168 US primary care practices evaluating the effect of the COPD-Population Screener (COPD-PS™) questionnaire. The effect of this questionnaire, alone or sequentially with the handheld copd-6™ device, was evaluated on new diagnoses of COPD and on respiratory diagnostic practice patterns (including referrals for pulmonary function testing, referrals to pulmonologists, new diagnoses of COPD, and new respiratory medication prescriptions). Participating practices entered a total of 9704 consecutive consenting subjects aged ≥40 years attending primary care clinics. Study arm results were compared for new COPD diagnosis rates between usual care and (1) COPD-PS plus copd-6 and (2) COPD-PS alone. A cluster-randomization design allowed comparison of the intervention effects at the practice level instead of individuals being the subjects of the intervention. Regional principal investigators controlled the flow of study information to sub-investigators at participating practices to reduce observation bias (Hawthorne effect). The

  19. SIMS analysis using a new novel sample stage

    International Nuclear Information System (INIS)

    Miwa, Shiro; Nomachi, Ichiro; Kitajima, Hideo

    2006-01-01

    We have developed a novel sample stage for Cameca IMS-series instruments that allows us to adjust the tilt of the sample holder and to vary the height of the sample surface from outside the vacuum chamber. A third function of the stage is the capability to cool samples to -150 °C using liquid nitrogen. Using this stage, we can measure line profiles of 10 mm in length without any variation in the secondary ion yields. By moving the sample surface toward the input lens, the primary ion beam remains well focused when the energy of the primary ions is reduced. Sample cooling is useful for samples such as organic materials that are easily damaged by primary ions or electrons

  20. SIMS analysis using a new novel sample stage

    Energy Technology Data Exchange (ETDEWEB)

    Miwa, Shiro [Materials Analysis Lab., Sony Corporation, 4-16-1 Okata, Atsugi 243-0021 (Japan)]. E-mail: Shiro.Miwa@jp.sony.com; Nomachi, Ichiro [Materials Analysis Lab., Sony Corporation, 4-16-1 Okata, Atsugi 243-0021 (Japan); Kitajima, Hideo [Nanotechnos Corp., 5-4-30 Nishihashimoto, Sagamihara 229-1131 (Japan)

    2006-07-30

    We have developed a novel sample stage for Cameca IMS-series instruments that allows us to adjust the tilt of the sample holder and to vary the height of the sample surface from outside the vacuum chamber. A third function of the stage is the capability to cool samples to -150 °C using liquid nitrogen. Using this stage, we can measure line profiles of 10 mm in length without any variation in the secondary ion yields. By moving the sample surface toward the input lens, the primary ion beam remains well focused when the energy of the primary ions is reduced. Sample cooling is useful for samples such as organic materials that are easily damaged by primary ions or electrons.

  1. Cluster-cluster correlations in the two-dimensional stationary Ising-model

    International Nuclear Information System (INIS)

    Klassmann, A.

    1997-01-01

    In numerical integration of the Cahn–Hilliard equation, which describes Ostwald ripening in a two-phase matrix, N. Masbaum showed that spatial correlations between clusters scale with respect to the mean cluster size (itself a function of time). T. B. Liverpool showed by Monte Carlo simulations for the Ising model that the analogous correlations have a similar form. Both demonstrated that immediately around each cluster there is a depletion zone, followed by something like a ring of clusters of the same size as the original one. More precisely, it has been shown that the distribution of clusters around a given cluster looks like a sine curve that decays exponentially with distance to a constant value

  2. One-stage versus two-stage exchange arthroplasty for infected total knee arthroplasty: a systematic review.

    Science.gov (United States)

    Nagra, Navraj S; Hamilton, Thomas W; Ganatra, Sameer; Murray, David W; Pandit, Hemant

    2016-10-01

    Infection complicating total knee arthroplasty (TKA) has serious implications. Traditionally, the debate on whether one- or two-stage exchange arthroplasty is the optimum management of infected TKA has favoured two-stage procedures; however, a paradigm shift in opinion is emerging. This study aimed to establish whether current evidence supports one-stage revision for managing infected TKA based on reinfection rates and functional outcomes post-surgery. MEDLINE/PubMed and CENTRAL databases were reviewed for studies that compared one- and two-stage exchange arthroplasty TKA in more than ten patients with a minimum 2-year follow-up. From an initial sample of 796, five cohort studies with a total of 231 patients (46 single-stage/185 two-stage; median patient age 66 years, range 61-71 years) met inclusion criteria. Overall, there were no significant differences in risk of reinfection following one- or two-stage exchange arthroplasty (OR -0.06, 95% confidence interval -0.13 to 0.01). Subgroup analysis revealed that in studies published since 2000, one-stage procedures have had a significantly lower reinfection rate. One study investigated functional outcomes and reported that one-stage surgery was associated with superior functional outcomes. Scarcity of data, inconsistent study designs, and disparities in surgical technique and antibiotic regimes limit the recommendations that can be made. Recent studies suggest one-stage exchange arthroplasty may provide superior outcomes, including lower reinfection rates and superior function, in select patients. Clinically, for some patients, one-stage exchange arthroplasty may represent optimum treatment; however, patient selection criteria and key components of surgical and post-operative antimicrobial management remain to be defined. Level of evidence: III.

  3. Quenched Large Deviations for Simple Random Walks on Percolation Clusters Including Long-Range Correlations

    Science.gov (United States)

    Berger, Noam; Mukherjee, Chiranjib; Okamura, Kazuki

    2018-03-01

    We prove a quenched large deviation principle (LDP) for a simple random walk on a supercritical percolation cluster (SRWPC) on Z^d (d ≥ 2). The models under interest include classical Bernoulli bond and site percolation as well as models that exhibit long range correlations, like the random cluster model, the random interlacement and the vacant set of random interlacements (for d ≥ 3) and the level sets of the Gaussian free field (d ≥ 3). Inspired by the methods developed by Kosygina et al. (Commun Pure Appl Math 59:1489-1521, 2006) for proving quenched LDP for elliptic diffusions with a random drift, and by Yilmaz (Commun Pure Appl Math 62(8):1033-1075, 2009) and Rosenbluth (Quenched large deviations for multidimensional random walks in a random environment: a variational formula. Ph.D. thesis, NYU, arXiv:0804.1444v1) for similar results regarding elliptic random walks in random environment, we take the point of view of the moving particle and prove a large deviation principle for the quenched distribution of the pair empirical measures of the environment Markov chain in the non-elliptic case of SRWPC. Via a contraction principle, this reduces easily to a quenched LDP for the distribution of the mean velocity of the random walk and both rate functions admit explicit variational formulas. The main difficulty in our set up lies in the inherent non-ellipticity as well as the lack of translation-invariance stemming from conditioning on the fact that the origin belongs to the infinite cluster. We develop a unifying approach for proving quenched large deviations for SRWPC based on exploiting coercivity properties of the relative entropies in the context of convex variational analysis, combined with input from ergodic theory and invoking geometric properties of the supercritical percolation cluster.

  4. Toward cost-efficient sampling methods

    Science.gov (United States)

    Luo, Peng; Li, Yongli; Wu, Chong; Zhang, Guijie

    2015-09-01

    The sampling method has received much attention in the field of complex networks in general and statistical physics in particular. This paper proposes two new sampling methods based on the idea that a small set of vertices with high node degree can carry most of the structural information of a complex network. The two proposed sampling methods are efficient at sampling high-degree nodes, so they remain useful even when the sampling rate is low, which makes them cost-efficient. The first new sampling method builds on the widely used stratified random sampling (SRS) method, and the second improves the well-known snowball sampling (SBS) method. To demonstrate the validity and accuracy of the two new sampling methods, we compare them with existing sampling methods in three commonly used simulated networks (scale-free, random, and small-world networks) and in two real networks. The experimental results illustrate that the two proposed sampling methods perform much better than the existing sampling methods at recovering the true network structure characteristics reflected by the clustering coefficient, Bonacich centrality, and average path length, especially when the sampling rate is low.
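
    The high-degree-first idea can be sketched as degree-weighted sampling without replacement; the graph and function below are hypothetical illustrations, not the paper's algorithms.

```python
import random

def degree_biased_sample(adj, n_sample, seed=0):
    """Sample nodes without replacement, with probability proportional
    to degree, as a stand-in for high-degree-first sampling."""
    rng = random.Random(seed)
    remaining = dict(adj)
    chosen = []
    while remaining and len(chosen) < n_sample:
        nodes = list(remaining)
        # tiny epsilon keeps choices() valid if isolated nodes remain
        weights = [len(remaining[v]) + 1e-9 for v in nodes]
        v = rng.choices(nodes, weights=weights)[0]
        chosen.append(v)
        del remaining[v]
    return chosen

# hypothetical star-like graph: hub 0 plus five leaves
adj = {0: [1, 2, 3, 4, 5], 1: [0], 2: [0], 3: [0], 4: [0], 5: [0]}
sample = degree_biased_sample(adj, 2)
```

    Over repeated runs, the hub is sampled far more often than any single leaf, which is the intended bias.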

  5. Estimating HIES Data through Ratio and Regression Methods for Different Sampling Designs

    Directory of Open Access Journals (Sweden)

    Faqir Muhammad

    2007-01-01

    Full Text Available In this study, a comparison has been made of different sampling designs using the HIES data of North West Frontier Province (NWFP) for 2001-02 and 1998-99, collected from the Federal Bureau of Statistics, Statistical Division, Government of Pakistan, Islamabad. The performance of the estimators has also been assessed using the bootstrap and the jackknife. A two-stage stratified random sample design is adopted by HIES. In the first stage, enumeration blocks and villages are treated as the first-stage Primary Sampling Units (PSU). The sample PSUs are selected with probability proportional to size. Secondary Sampling Units (SSU), i.e., households, are selected by systematic sampling with a random start. HIES used a single study variable. We have compared the HIES technique with some other designs, namely: stratified simple random sampling, stratified systematic sampling, stratified ranked set sampling, and stratified two-phase sampling. Ratio and regression methods were applied with two study variables: income (y) and household size (x). The jackknife and the bootstrap were used for replication-based variance estimation. Simple random sampling with sample sizes of 462 to 561 gave moderate variances both by jackknife and bootstrap. Applying systematic sampling, we obtained moderate variance with a sample size of 467. In the jackknife with systematic sampling, the variance of the regression estimator was greater than that of the ratio estimator for sample sizes of 467 to 631; at a sample size of 952, the variance of the ratio estimator became greater than that of the regression estimator. The most efficient design turned out to be ranked set sampling: ranked set sampling with jackknife and bootstrap gives minimum variance even with the smallest sample size (467). Two-phase sampling gave poor performance. The multi-stage sampling applied by HIES gave large variances, especially when used with a single study variable.
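
    As a minimal illustration of the ratio method with delete-one jackknife variance estimation (the y and x values below are hypothetical, not HIES data):

```python
def ratio_estimate(y, x, X_mean):
    """Ratio estimator of the mean of y using auxiliary variable x
    with known population mean X_mean: Yhat_R = (ybar / xbar) * X_mean."""
    ybar = sum(y) / len(y)
    xbar = sum(x) / len(x)
    return (ybar / xbar) * X_mean

def jackknife_variance(y, x, X_mean):
    """Delete-one jackknife variance of the ratio estimator."""
    n = len(y)
    thetas = [ratio_estimate(y[:i] + y[i+1:], x[:i] + x[i+1:], X_mean)
              for i in range(n)]
    tbar = sum(thetas) / n
    return (n - 1) / n * sum((t - tbar) ** 2 for t in thetas)

# hypothetical incomes y and household sizes x; X_mean assumed known
x = [2, 3, 4, 5, 6]
y = [20, 31, 39, 52, 60]
est = ratio_estimate(y, x, X_mean=4.0)
var = jackknife_variance(y, x, X_mean=4.0)
```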

  6. A COMPARISON OF TWO FUZZY CLUSTERING TECHNIQUES

    Directory of Open Access Journals (Sweden)

    Samarjit Das

    2013-10-01

    Full Text Available In fuzzy clustering, unlike hard clustering, depending on the membership value a single object may belong exactly to one cluster or partially to more than one cluster. Among the many fuzzy clustering techniques, Bezdek's fuzzy c-means and the Gustafson–Kessel clustering techniques are well known; they use Euclidean distance and Mahalanobis distance, respectively, as the measure of similarity. We have applied these two fuzzy clustering techniques to a dataset of individual differences consisting of fifty feature vectors of dimension three. Based on some validity measures, we have examined the performance of these two clustering techniques from three different aspects: first, by initializing the membership values of the feature vectors considering the values of the three features separately, one at a time; secondly, by changing the number of predefined clusters; and thirdly, by changing the size of the dataset.
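
    The Euclidean-distance variant (Bezdek's fuzzy c-means) can be sketched in a few lines; the data and deterministic initialization below are illustrative assumptions, not the paper's setup.

```python
def fcm(points, c, m=2.0, iters=100, eps=1e-9):
    """Minimal fuzzy c-means (Bezdek) with Euclidean distance.
    Returns the membership matrix u[k][i] for point k, cluster i."""
    n, d = len(points), len(points[0])
    # deterministic init: evenly spaced points as initial centers
    centers = [list(points[k * (n - 1) // (c - 1)]) for k in range(c)]
    u = [[0.0] * c for _ in range(n)]
    for _ in range(iters):
        # membership update: u_ki = 1 / sum_j (d_ki / d_kj)^(2/(m-1))
        for k, p in enumerate(points):
            dist = [max(eps, sum((a - b) ** 2 for a, b in zip(p, ctr)) ** 0.5)
                    for ctr in centers]
            for i in range(c):
                u[k][i] = 1.0 / sum((dist[i] / dj) ** (2.0 / (m - 1))
                                    for dj in dist)
        # center update: weighted mean with weights u^m
        for i in range(c):
            w = [u[k][i] ** m for k in range(n)]
            centers[i] = [sum(w[k] * points[k][j] for k in range(n)) / sum(w)
                          for j in range(d)]
    return u

# two well-separated hypothetical groups in the plane
pts = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1), (5.0, 5.0), (5.1, 5.0), (5.0, 5.1)]
u = fcm(pts, c=2)
```

    Each row of u sums to one, and points in the same tight group end up with high membership in the same cluster.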

  7. One-stage and two-stage penile buccal mucosa urethroplasty

    Directory of Open Access Journals (Sweden)

    G. Barbagli

    2016-03-01

    Full Text Available The paper provides the reader with a detailed description of current techniques of one-stage and two-stage penile buccal mucosa urethroplasty, together with the preoperative patient evaluation, paying attention to the use of diagnostic tools. The one-stage penile urethroplasty using a buccal mucosa graft with the application of glue is first presented and discussed. Two-stage penile urethroplasty is then reported: a detailed description of first-stage urethroplasty according to the Johanson technique, followed by second-stage urethroplasty using a buccal mucosa graft and glue. Finally, the postoperative course and follow-up are addressed.

  8. Teaching basic life support with an automated external defibrillator using the two-stage or the four-stage teaching technique.

    Science.gov (United States)

    Bjørnshave, Katrine; Krogh, Lise Q; Hansen, Svend B; Nebsbjerg, Mette A; Thim, Troels; Løfgren, Bo

    2018-02-01

    Laypersons often hesitate to perform basic life support (BLS) and use an automated external defibrillator (AED) because of a self-perceived lack of knowledge and skills. Training may reduce the barrier to intervene, and reduced training time and costs may allow training of more laypersons. The aim of this study was to compare BLS/AED skill acquisition and self-evaluated BLS/AED skills after instructor-led training with a two-stage versus a four-stage teaching technique. Laypersons were randomized to either two-stage or four-stage teaching technique courses. Immediately after training, the participants were tested in a simulated cardiac arrest scenario to assess their BLS/AED skills. Skills were assessed using the European Resuscitation Council BLS/AED assessment form. The primary endpoint was passing the test (17 of 17 skills adequately performed). A prespecified noninferiority margin of 20% was used. The two-stage teaching technique (n=72, pass rate 57%) was noninferior to the four-stage technique (n=70, pass rate 59%), with a difference in pass rates of -2% (95% confidence interval: -18 to 15%). Nor were there significant differences between the two-stage and four-stage groups in chest compression rate (114±12 vs. 115±14/min), chest compression depth (47±9 vs. 48±9 mm) or number of sufficient rescue breaths between compression cycles (1.7±0.5 vs. 1.6±0.7). In both groups, all participants believed that their training had improved their skills. Teaching laypersons BLS/AED using the two-stage teaching technique was thus noninferior to the four-stage teaching technique, although the pass rate was 2% lower (95% confidence interval: -18 to 15%) with the two-stage technique.
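
    The noninferiority comparison can be reproduced approximately with a Wald confidence interval for the difference in pass rates; this is a standard approximation, not necessarily the exact method the study used.

```python
import math

def noninferiority(p1, n1, p2, n2, margin=0.20, z=1.96):
    """Wald CI for the difference in pass rates (two-stage minus
    four-stage); noninferior if the lower bound stays above -margin."""
    diff = p1 - p2
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    lo, hi = diff - z * se, diff + z * se
    return diff, (lo, hi), lo > -margin

# pass rates from the abstract: 57% of 72 vs 59% of 70
diff, ci, ok = noninferiority(0.57, 72, 0.59, 70)
```

    The resulting interval is close to the reported -18% to 15%, and its lower bound clears the -20% margin, matching the noninferiority conclusion.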

  9. Effectiveness of the 'Healthy School and Drugs' prevention programme on adolescents' substance use: a randomized clustered trial

    NARCIS (Netherlands)

    Malmberg, M.; Kleinjan, M.; Overbeek, G.; Vermulst, A.; Monshouwer, K.; Lammers, J.; Vollebergh, W.A.M.; Engels, R.C.M.E.

    2014-01-01

    Aim: To evaluate the effectiveness of the Healthy School and Drugs programme on alcohol, tobacco and marijuana use among Dutch early adolescents. Design: Randomized clustered trial with two intervention conditions (i.e. e-learning and integral). Setting: General population of 11-15-year-old

  10. Effectiveness of the 'Healthy School and Drugs' prevention programme on adolescents' substance use : A randomized clustered trial

    NARCIS (Netherlands)

    Malmberg, Monique; Kleinjan, Marloes; Overbeek, Geertjan; Vermulst, Ad; Monshouwer, Karin; Lammers, Jeroen; Vollebergh, Wilma A M; Engels, Rutger C M E

    2014-01-01

    Aim: To evaluate the effectiveness of the Healthy School and Drugs programme on alcohol, tobacco and marijuana use among Dutch early adolescents. Design: Randomized clustered trial with two intervention conditions (i.e. e-learning and integral). Setting: General population of 11-15-year-old

  11. Effectiveness of the 'Healthy School and Drugs' prevention programme on adolescents' substance use: a randomized clustered trial

    NARCIS (Netherlands)

    Malmberg, M.; Kleinjan, M.; Overbeek, G.J.; Vermulst, A.A.; Monshouwer, K.; Lammers, J.; Vollebergh, W.A.M.; Engels, R.C.M.E.

    2014-01-01

    Aim To evaluate the effectiveness of the Healthy School and Drugs programme on alcohol, tobacco and marijuana use among Dutch early adolescents. Design Randomized clustered trial with two intervention conditions (i.e. e-learning and integral). Setting General population of 11-15-year-old adolescents

  12. Independent random sampling methods

    CERN Document Server

    Martino, Luca; Míguez, Joaquín

    2018-01-01

    This book systematically addresses the design and analysis of efficient techniques for independent random sampling. Both general-purpose approaches, which can be used to generate samples from arbitrary probability distributions, and tailored techniques, designed to efficiently address common real-world practical problems, are introduced and discussed in detail. In turn, the monograph presents fundamental results and methodologies in the field, elaborating and developing them into the latest techniques. The theory and methods are illustrated with a varied collection of examples, which are discussed in detail in the text and supplemented with ready-to-run computer code. The main problem addressed in the book is how to generate independent random samples from an arbitrary probability distribution with the weakest possible constraints or assumptions in a form suitable for practical implementation. The authors review the fundamental results and methods in the field, address the latest methods, and emphasize the li...
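
    A classic general-purpose technique in this family is rejection sampling: draw from a simple proposal and accept with probability proportional to the target density. The target, proposal, and bound below are chosen purely for illustration.

```python
import random

def rejection_sample(target_pdf, proposal_sample, proposal_pdf, M, n, seed=0):
    """Textbook rejection sampler: draw x from the proposal and accept
    with probability target_pdf(x) / (M * proposal_pdf(x)),
    where target_pdf <= M * proposal_pdf everywhere."""
    rng = random.Random(seed)
    out = []
    while len(out) < n:
        x = proposal_sample(rng)
        if rng.random() < target_pdf(x) / (M * proposal_pdf(x)):
            out.append(x)
    return out

# target: Beta(2,2) density 6x(1-x) on [0,1]; proposal: Uniform(0,1); M = 1.5
target = lambda x: 6.0 * x * (1.0 - x)
samples = rejection_sample(target, lambda r: r.random(), lambda x: 1.0,
                           M=1.5, n=2000)
```

    The accepted draws are independent samples from the target; their mean is close to the Beta(2,2) mean of 0.5.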

  13. Cluster-sample surveys and lot quality assurance sampling to evaluate yellow fever immunisation coverage following a national campaign, Bolivia, 2007.

    Science.gov (United States)

    Pezzoli, Lorenzo; Pineda, Silvia; Halkyer, Percy; Crespo, Gladys; Andrews, Nick; Ronveaux, Olivier

    2009-03-01

    To estimate the yellow fever (YF) vaccine coverage for the endemic and non-endemic areas of Bolivia and to determine whether selected districts had acceptable levels of coverage (>70%). We conducted two surveys of 600 individuals (25 x 12 clusters) to estimate coverage in the endemic and non-endemic areas. We assessed 11 districts using lot quality assurance sampling (LQAS). The lot (district) sample was 35 individuals with six as decision value (alpha error 6% if true coverage 70%; beta error 6% if true coverage 90%). To increase feasibility, we divided the lots into five clusters of seven individuals; to investigate the effect of clustering, we calculated alpha and beta by conducting simulations where each cluster's true coverage was sampled from a normal distribution with a mean of 70% or 90% and standard deviations of 5% or 10%. Estimated coverage was 84.3% (95% CI: 78.9-89.7) in endemic areas, 86.8% (82.5-91.0) in non-endemic and 86.0% (82.8-89.1) nationally. LQAS showed that four lots had unacceptable coverage levels. In six lots, results were inconsistent with the estimated administrative coverage. The simulations suggested that the effect of clustering the lots is unlikely to have significantly increased the risk of making incorrect accept/reject decisions. Estimated YF coverage was high. Discrepancies between administrative coverage and LQAS results may be due to incorrect population data. Even allowing for clustering in LQAS, the statistical errors would remain low. Catch-up campaigns are recommended in districts with unacceptable coverage.
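
    The stated error levels of this 35/6 LQAS design can be checked directly from binomial tail sums (a sketch of the standard calculation, not the authors' code):

```python
from math import comb

def binom_cdf(k, n, p):
    """P(X <= k) for X ~ Binomial(n, p), by exact summation."""
    return sum(comb(n, i) * p ** i * (1 - p) ** (n - i) for i in range(k + 1))

n, d = 35, 6  # lot size and decision value from the survey design
# accept a lot if at most d unvaccinated individuals are found
alpha = binom_cdf(d, n, 0.30)      # P(accept | true coverage only 70%)
beta = 1 - binom_cdf(d, n, 0.10)   # P(reject | true coverage is 90%)
```

    Both error probabilities come out near the 6% quoted in the abstract.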

  14. On the Coupling Time of the Heat-Bath Process for the Fortuin-Kasteleyn Random-Cluster Model

    Science.gov (United States)

    Collevecchio, Andrea; Elçi, Eren Metin; Garoni, Timothy M.; Weigel, Martin

    2018-01-01

    We consider the coupling from the past implementation of the random-cluster heat-bath process, and study its random running time, or coupling time. We focus on hypercubic lattices embedded on tori, in dimensions one to three, with cluster fugacity at least one. We make a number of conjectures regarding the asymptotic behaviour of the coupling time, motivated by rigorous results in one dimension and Monte Carlo simulations in dimensions two and three. Amongst our findings, we observe that, for generic parameter values, the distribution of the appropriately standardized coupling time converges to a Gumbel distribution, and that the standard deviation of the coupling time is asymptotic to an explicit universal constant multiple of the relaxation time. Perhaps surprisingly, we observe these results to hold both off criticality, where the coupling time closely mimics the coupon collector's problem, and also at the critical point, provided the cluster fugacity is below the value at which the transition becomes discontinuous. Finally, we consider analogous questions for the single-spin Ising heat-bath process.
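
    Off criticality, the coupling time is said to mimic the coupon collector's problem; the classic quantity itself is easy to simulate and compare with its mean n·H_n. This is an illustrative simulation of the coupon collector only, not of the random-cluster dynamics.

```python
import random

def coupon_collector_time(n, rng):
    """Number of uniform draws until all n coupons have been seen."""
    seen, t = set(), 0
    while len(seen) < n:
        seen.add(rng.randrange(n))
        t += 1
    return t

rng = random.Random(42)
n = 50
times = [coupon_collector_time(n, rng) for _ in range(400)]
mean_t = sum(times) / len(times)
expected = n * sum(1.0 / k for k in range(1, n + 1))  # n * H_n
```

    The centered, rescaled coupon-collector time is the textbook example of Gumbel convergence, which is the limiting distribution the authors observe for the coupling time.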

  15. A two-stage flow-based intrusion detection model for next-generation networks.

    Science.gov (United States)

    Umer, Muhammad Fahad; Sher, Muhammad; Bi, Yaxin

    2018-01-01

    The next-generation network provides state-of-the-art access-independent services over converged mobile and fixed networks. Security in the converged network environment is a major challenge. Traditional packet and protocol-based intrusion detection techniques cannot be used in next-generation networks due to slow throughput, low accuracy and their inability to inspect encrypted payload. An alternative solution for protection of next-generation networks is to use network flow records for detection of malicious activity in the network traffic. The network flow records are independent of access networks and user applications. In this paper, we propose a two-stage flow-based intrusion detection system for next-generation networks. The first stage uses an enhanced unsupervised one-class support vector machine which separates malicious flows from normal network traffic. The second stage uses a self-organizing map which automatically groups malicious flows into different alert clusters. We validated the proposed approach on two flow-based datasets and obtained promising results.
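
    The two-stage idea can be sketched with stand-in components: a simple centroid-plus-threshold detector in place of the one-class SVM, and a tiny two-node 1-D self-organizing map in place of the full SOM. All flow features, thresholds, and parameters below are hypothetical.

```python
import math

def train_stage1(normal_flows, margin=1.05):
    """Stage-1 stand-in for the one-class SVM: centroid of normal
    traffic plus a distance threshold with a small safety margin."""
    d = len(normal_flows[0])
    centroid = [sum(f[j] for f in normal_flows) / len(normal_flows)
                for j in range(d)]
    thresh = margin * max(math.dist(f, centroid) for f in normal_flows)
    return centroid, thresh

def stage2_som(flagged, iters=200, lr=0.5):
    """Stage-2 stand-in: a tiny two-node 1-D self-organizing map that
    groups the flagged flows into alert clusters."""
    w = [list(flagged[0]), list(flagged[-1])]
    for t in range(iters):
        f = flagged[t % len(flagged)]
        rate = lr * (1 - t / iters)  # decaying learning rate
        best = min(range(len(w)), key=lambda i: math.dist(f, w[i]))
        w[best] = [a + rate * (b - a) for a, b in zip(w[best], f)]
    return [min(range(len(w)), key=lambda i: math.dist(f, w[i]))
            for f in flagged]

normal = [(1.0 + 0.1 * i, 2.0) for i in range(20)]  # hypothetical flow features
centroid, thresh = train_stage1(normal)
traffic = normal + [(9.0, 9.0), (9.2, 9.1), (0.0, 20.0)]
flagged = [f for f in traffic if math.dist(f, centroid) > thresh]
labels = stage2_som(flagged)
```

    Stage 1 separates the three anomalous flows from normal traffic; stage 2 then groups the two similar anomalies into one alert cluster and the outlier into another.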

  16. A Note on the Effect of Data Clustering on the Multiple-Imputation Variance Estimator: A Theoretical Addendum to the Lewis et al. article in JOS 2014

    Directory of Open Access Journals (Sweden)

    He Yulei

    2016-03-01

    Full Text Available Multiple imputation is a popular approach to handling missing data. Although it was originally motivated by survey nonresponse problems, it has been readily applied to other data settings. However, its general behavior remains unclear when applied to survey data with complex sample designs, including clustering. Recently, Lewis et al. (2014) compared single- and multiple-imputation analyses for certain incomplete variables in the 2008 National Ambulatory Medical Care Survey, which has a nationally representative, multistage, clustered sampling design. Their results suggested that the increase in the variance estimate due to multiple imputation compared with single imputation largely disappears for estimates with large design effects. We complement their empirical research by providing some theoretical reasoning. We consider data sampled from an equally weighted, single-stage cluster design and characterize the process using a balanced, one-way normal random-effects model. Assuming that the missingness is completely at random, we derive analytic expressions for the within- and between-multiple-imputation variance estimators for the mean estimator, and thus conveniently reveal the impact of design effects on these variance estimators. We propose approximations for the fraction of missing information in clustered samples, extending previous results for simple random samples. We discuss some generalizations of this research and its practical implications for data release by statistical agencies.
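
    The within- and between-imputation variance components discussed here are combined via Rubin's rules; a minimal sketch with hypothetical per-imputation estimates:

```python
def rubin_combine(estimates, variances):
    """Rubin's rules: combine point estimates and within-imputation
    variances from m completed data sets."""
    m = len(estimates)
    qbar = sum(estimates) / m
    ubar = sum(variances) / m                              # within-imputation
    b = sum((q - qbar) ** 2 for q in estimates) / (m - 1)  # between-imputation
    t = ubar + (1 + 1 / m) * b                             # total variance
    return qbar, t

# hypothetical estimates and variances from m = 3 imputed data sets
qbar, t = rubin_combine([10.1, 9.8, 10.3], [0.40, 0.42, 0.38])
```

    The total variance t exceeds the average within-imputation variance by the (1 + 1/m)·B term, which is exactly the multiple-imputation variance increase whose behavior under clustering the article analyzes.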

  17. Experimental studies of two-stage centrifugal dust concentrator

    Science.gov (United States)

    Vechkanova, M. V.; Fadin, Yu M.; Ovsyannikov, Yu G.

    2018-03-01

    The article presents experimental results for a two-stage centrifugal dust concentrator, describes its design, and outlines the development of an engineering calculation method together with the laboratory investigations. For the experiments, the authors used quartz, ceramic dust and slag. The dispersion analysis of the dust particles was obtained by the sedimentation method. To build a mathematical model of the dust-collection process, a central composite rotatable design for a four-factor experiment was used. The sequence of experiments was carried out in accordance with a table of random numbers. Conclusions were drawn.

  18. The Influence of Social Network Characteristics on Peer Clustering in Smoking: A Two-Wave Panel Study of 19- and 23-Year-Old Swedes.

    Science.gov (United States)

    Miething, Alexander; Rostila, Mikael; Edling, Christofer; Rydgren, Jens

    2016-01-01

    The present study examines how the composition of social networks and perceived relationship content influence peer clustering in smoking, and how the association changes during the transition from late adolescence to early adulthood. The analysis was based on a Swedish two-wave survey sample comprising ego-centric network data. Respondents were 19 years old in the initial wave, and 23 when the follow-up survey was conducted. 17,227 ego-alter dyads were included in the analyses, which corresponds to an average response rate of 48.7 percent. Random effects logistic regression models were performed to calculate gender-specific average marginal effects of social network characteristics on smoking. The association of egos' and alters' smoking behavior was confirmed and found to be stronger in the female sample. For females, the associations decreased between age 19 and 23. Interactions between network characteristics and peer clustering in smoking showed that intense social interactions with smokers increase egos' smoking probability. The influence of network structures on peer clustering in smoking decreased during the transition from late adolescence to early adulthood. The study confirmed peer clustering in smoking and revealed that females' smoking behavior in particular is determined by social interactions. Female smokers' propensity to interact with other smokers was found to be associated with the quality of peer relationships, frequent social interactions, and network density. The influence of social networks on peer clustering in smoking decreased during the transition from late adolescence to early adulthood.

  19. Space density and clustering properties of a new sample of emission-line galaxies

    International Nuclear Information System (INIS)

    Wasilewski, A.J.

    1982-01-01

    A moderate-dispersion objective-prism survey for low-redshift emission-line galaxies has been carried out in an 825 sq. deg. region of sky with the Burrell Schmidt telescope of Case Western Reserve University. A 4° prism (300 Å/mm at Hβ) was used with the IIIa-J emulsion to show that a new sample of emission-line galaxies is available even in areas already searched with the excess uv-continuum technique. The new emission-line galaxies occur quite commonly in systems with peculiar morphology indicating gravitational interaction with a close companion or other disturbance. About 10 to 15% of the sample are Seyfert galaxies. It is suggested that tidal interactions involving matter infall play a significant role in the generation of an emission-line spectrum. The space density of the new galaxies is found to be similar to the space density of the Markarian galaxies. Like the Markarian sample, the galaxies in the present survey represent about 10% of all galaxies in the absolute magnitude range M_p = −16 to −22. The observations also indicate that current estimates of dwarf galaxy space densities may be too low. The clustering properties of the new galaxies have been investigated using two approaches: cluster contour maps and the spatial correlation function. These tests suggest that there is weak clustering and possibly superclustering within the sample itself and that the galaxies considered here are about as common in clusters of ordinary galaxies as in the field.

  20. The global kernel k-means algorithm for clustering in feature space.

    Science.gov (United States)

    Tzortzis, Grigorios F; Likas, Aristidis C

    2009-07-01

    Kernel k-means is an extension of the standard k-means clustering algorithm that identifies nonlinearly separable clusters. In order to overcome the cluster initialization problem associated with this method, we propose the global kernel k-means algorithm, a deterministic and incremental approach to kernel-based clustering. Our method adds one cluster at each stage, through a global search procedure consisting of several executions of kernel k-means from suitable initializations. This algorithm does not depend on cluster initialization, identifies nonlinearly separable clusters, and, due to its incremental nature and search procedure, locates near-optimal solutions avoiding poor local minima. Furthermore, two modifications are developed to reduce the computational cost that do not significantly affect the solution quality. The proposed methods are extended to handle weighted data points, which enables their application to graph partitioning. We experiment with several data sets and the proposed approach compares favorably to kernel k-means with random restarts.
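    A compact, pure-Python sketch of the incremental global search described above. An RBF kernel is assumed, and the paper's two cost-reducing modifications and the weighted extension are omitted:

```python
import math

def rbf_kernel(X, gamma=0.5):
    """Precompute the kernel matrix K[i][j] = exp(-gamma * ||x_i - x_j||^2)."""
    n = len(X)
    return [[math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(X[i], X[j])))
             for j in range(n)] for i in range(n)]

def _error(K, labels, k):
    """Total within-cluster distance in feature space (the k-means objective)."""
    n, total = len(K), 0.0
    for c in range(k):
        mem = [i for i in range(n) if labels[i] == c]
        if not mem:
            continue
        self_term = sum(K[j][l] for j in mem for l in mem) / len(mem) ** 2
        for i in mem:
            total += K[i][i] - 2 * sum(K[i][j] for j in mem) / len(mem) + self_term
    return total

def kernel_kmeans(K, labels, k, iters=50):
    """Standard kernel k-means, run from a given initial labelling."""
    n = len(K)
    labels = labels[:]
    for _ in range(iters):
        members = [[i for i in range(n) if labels[i] == c] for c in range(k)]
        self_terms = [sum(K[j][l] for j in m for l in m) / len(m) ** 2 if m else None
                      for m in members]
        new = []
        for i in range(n):
            # feature-space distance of point i to each non-empty cluster
            ds = [K[i][i] - 2 * sum(K[i][j] for j in m) / len(m) + s
                  for m, s in zip(members, self_terms) if m]
            cs = [c for c, m in enumerate(members) if m]
            new.append(cs[min(range(len(ds)), key=ds.__getitem__)])
        if new == labels:
            break
        labels = new
    return labels

def global_kernel_kmeans(K, k_max):
    """Add one cluster at a time; try every point as the seed of the new
    cluster and keep the restart with the lowest clustering error."""
    n = len(K)
    labels = [0] * n                 # k = 1: everything in one cluster
    for k in range(2, k_max + 1):
        best, best_err = None, float("inf")
        for seed in range(n):
            trial = labels[:]
            trial[seed] = k - 1      # new singleton cluster
            trial = kernel_kmeans(K, trial, k)
            err = _error(K, trial, k)
            if err < best_err:
                best, best_err = trial, err
        labels = best
    return labels
```

    The O(N) restarts per added cluster are what make the method deterministic; the paper's speed-ups address exactly this cost.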

  1. Cluster tails for critical power-law inhomogeneous random graphs

    NARCIS (Netherlands)

    van der Hofstad, R.; Kliem, S.; van Leeuwaarden, J.S.H.

    2018-01-01

    Recently, the scaling limit of cluster sizes for critical inhomogeneous random graphs of rank-1 type having finite variance but infinite third moment degrees was obtained in Bhamidi et al. (Ann Probab 40:2299–2361, 2012). It was proved that when the degrees obey a power law with exponent τ ∈ (3, 4)

  2. Optics of two-stage photovoltaic concentrators with dielectric second stages

    Science.gov (United States)

    Ning, Xiaohui; O'Gallagher, Joseph; Winston, Roland

    1987-04-01

    Two-stage photovoltaic concentrators with Fresnel lenses as primaries and dielectric totally internally reflecting nonimaging concentrators as secondaries are discussed. The general design principles of such two-stage systems are given. Their optical properties are studied and analyzed in detail using computer ray trace procedures. It is found that the two-stage concentrator offers not only a higher concentration or increased acceptance angle, but also a more uniform flux distribution on the photovoltaic cell than the point focusing Fresnel lens alone. Experimental measurements with a two-stage prototype module are presented and compared to the analytical predictions.
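    The benefit of a dielectric secondary can be seen from the thermodynamic (sine) limit on concentration. The one-liner below is the textbook nonimaging-optics bound, not a model of this particular module:

```python
import math

def ideal_concentration_3d(acceptance_half_angle_deg, n=1.0):
    """Sine limit on 3-D geometric concentration for acceptance half-angle
    theta: C_max = (n / sin(theta))^2, where n is the refractive index of
    the medium surrounding the absorber. A dielectric second stage
    (n ~ 1.5) raises this limit by a factor of n^2 over an air-filled one."""
    theta = math.radians(acceptance_half_angle_deg)
    return (n / math.sin(theta)) ** 2
```

    For example, a 30° acceptance half-angle allows at most 4x concentration in air but 9x behind a dielectric with n = 1.5, which is the leverage a totally internally reflecting secondary exploits.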

  4. Spatially explicit population estimates for black bears based on cluster sampling

    Science.gov (United States)

    Humm, J.; McCown, J. Walter; Scheick, B.K.; Clark, Joseph D.

    2017-01-01

    We estimated abundance and density of the 5 major black bear (Ursus americanus) subpopulations (i.e., Eglin, Apalachicola, Osceola, Ocala-St. Johns, Big Cypress) in Florida, USA with spatially explicit capture-mark-recapture (SCR) by extracting DNA from hair samples collected at barbed-wire hair sampling sites. We employed a clustered sampling configuration with sampling sites arranged in 3 × 3 clusters, with sites spaced 2 km apart within each cluster and cluster centers spaced 16 km apart (center to center). We surveyed all 5 subpopulations encompassing 38,960 km² during 2014 and 2015. Several landscape variables, most associated with forest cover, helped refine density estimates for the 5 subpopulations we sampled. Detection probabilities were affected by site-specific behavioral responses coupled with individual capture heterogeneity associated with sex. Model-averaged bear population estimates ranged from 120 (95% CI = 59–276) bears or a mean 0.025 bears/km² (95% CI = 0.011–0.44) for the Eglin subpopulation to 1,198 bears (95% CI = 949–1,537) or 0.127 bears/km² (95% CI = 0.101–0.163) for the Ocala-St. Johns subpopulation. The total population estimate for our 5 study areas was 3,916 bears (95% CI = 2,914–5,451). The clustered sampling method coupled with information on land cover was efficient and allowed us to estimate abundance across extensive areas that would not have been possible otherwise. Clustered sampling combined with spatially explicit capture-recapture methods has the potential to provide rigorous population estimates for a wide array of species that are extensive and heterogeneous in their distribution.
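    The clustered configuration is easy to reproduce; a sketch that lays out the 3 × 3 site grids (coordinates in km; a flat grid is assumed, ignoring map projection):

```python
def cluster_site_layout(cluster_centers, grid=3, spacing=2.0):
    """Generate hair-sampling site coordinates for a clustered design:
    a grid x grid block of sites, `spacing` km apart, centred on each
    cluster centre (the centres themselves would be laid out ~16 km apart)."""
    sites = []
    half = (grid - 1) / 2.0
    for cx, cy in cluster_centers:
        for i in range(grid):
            for j in range(grid):
                sites.append((cx + (i - half) * spacing,
                              cy + (j - half) * spacing))
    return sites
```

    The design trades a little spatial coverage inside each cluster for much wider coverage between clusters, which is what made surveying 38,960 km² feasible.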

  5. Efficacy of a workplace osteoporosis prevention intervention: a cluster randomized trial.

    Science.gov (United States)

    Tan, Ai May; LaMontagne, Anthony D; English, Dallas R; Howard, Peter

    2016-08-24

    Osteoporosis is a debilitating disease. Adequate calcium consumption and physical activity are the two major modifiable risk factors. This paper describes the major outcomes and efficacy of a workplace-based targeted behaviour change intervention to improve the dietary and physical activity behaviours of working women in sedentary occupations in Singapore. A cluster-randomized design was used, comparing the efficacy of a tailored intervention to standard care. Workplaces were the units of randomization and intervention. Sixteen workplaces were recruited from a pool of 97, and randomly assigned to intervention and control arms (eight workplaces in each). Women meeting specified inclusion criteria were then recruited to participate. Workplaces in the intervention arm received three participatory workshops and organization-wide educational activities. Workplaces in the control/standard care arm received print resources. Outcome measures were calcium intake (milligrams/day) and physical activity level (duration: minutes/week), measured at baseline, 4 weeks and 6 months post intervention. Adjusted cluster-level analyses were conducted comparing changes in intervention versus control groups, following intention-to-treat principles and CONSORT guidelines. Workplaces in the intervention group reported a significantly greater increase in calcium intake and duration of load-bearing moderate to vigorous physical activity (MVPA) compared with the standard care control group. Four weeks after intervention, the difference in adjusted mean calcium intake was 343.2 mg/day (95% CI = 337.4 to 349.0, p …). The workplace-based intervention substantially improved calcium intake and load-bearing moderate to vigorous physical activity 6 months after the intervention began. Australia New Zealand Clinical Trial Registry ACTRN12616000079448. Registered 25 January 2016 (retrospectively registered).

  6. Estimating Accuracy of Land-Cover Composition From Two-Stage Clustering Sampling

    Science.gov (United States)

    Land-cover maps are often used to compute land-cover composition (i.e., the proportion or percent of area covered by each class), for each unit in a spatial partition of the region mapped. We derive design-based estimators of mean deviation (MD), mean absolute deviation (MAD), ...

  7. Single-stage-to-orbit versus two-stage-to-orbit: A cost perspective

    Science.gov (United States)

    Hamaker, Joseph W.

    1996-03-01

    This paper considers the possible life-cycle costs of single-stage-to-orbit (SSTO) and two-stage-to-orbit (TSTO) reusable launch vehicles (RLV's). The analysis parametrically addresses the issue such that the preferred economic choice comes down to the relative complexity of the TSTO compared to the SSTO. The analysis defines the boundary complexity conditions at which the two configurations have equal life-cycle costs, and finally, makes a case for the economic preference of SSTO over TSTO.
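    The "boundary complexity condition" idea can be sketched numerically. All cost numbers below are illustrative placeholders, not figures from the paper; the life-cycle cost model is deliberately minimal (development + production + flights × operations):

```python
def lcc(dev, unit, flights, ops_per_flight):
    """Toy life-cycle cost: development + production + operations."""
    return dev + unit + flights * ops_per_flight

def breakeven_complexity(lcc_ssto, tsto_base_dev, tsto_base_unit,
                         flights, tsto_ops, lo=0.1, hi=5.0, tol=1e-6):
    """Find the TSTO 'complexity factor' at which the two vehicles'
    life-cycle costs are equal (simple bisection; the factor scales the
    TSTO development and production costs)."""
    def gap(x):
        return lcc(x * tsto_base_dev, x * tsto_base_unit,
                   flights, tsto_ops) - lcc_ssto
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if gap(mid) > 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2
```

    Below the breakeven factor the TSTO is the cheaper choice, above it the SSTO wins, which is the parametric argument structure the paper describes.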

  8. The dilute random field Ising model by finite cluster approximation

    International Nuclear Information System (INIS)

    Benyoussef, A.; Saber, M.

    1987-09-01

    Using the finite cluster approximation, phase diagrams of bond and site diluted three-dimensional simple cubic Ising models with a random field have been determined. The resulting phase diagrams have the same general features for both bond and site dilution. (author). 7 refs, 4 figs

  9. Effectiveness of a selective intervention program targeting personality risk factors for alcohol misuse among young adolescents: results of a cluster randomized controlled trial

    NARCIS (Netherlands)

    Lammers, J.; Goossens, F.; Conrod, P.; Engels, R.C.M.E.; Wiers, R.W.H.J.; Kleinjan, M.

    2015-01-01

    Aim The effectiveness of Preventure was tested on drinking behaviour of young adolescents in secondary education in the Netherlands. Design A cluster randomized controlled trial was carried out, with participants assigned randomly to a two-session coping skills intervention or a control

  10. ATCA observations of the MACS-Planck Radio Halo Cluster Project. II. Radio observations of an intermediate redshift cluster sample

    Science.gov (United States)

    Martinez Aviles, G.; Johnston-Hollitt, M.; Ferrari, C.; Venturi, T.; Democles, J.; Dallacasa, D.; Cassano, R.; Brunetti, G.; Giacintucci, S.; Pratt, G. W.; Arnaud, M.; Aghanim, N.; Brown, S.; Douspis, M.; Hurier, J.; Intema, H. T.; Langer, M.; Macario, G.; Pointecouteau, E.

    2018-04-01

    Aim. A fraction of galaxy clusters host diffuse radio sources whose origins are investigated through multi-wavelength studies of cluster samples. We investigate the presence of diffuse radio emission in a sample of seven galaxy clusters in the largely unexplored intermediate redshift range (0.3 http://cdsarc.u-strasbg.fr (http://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/611/A94

  11. Measuring larval nematode contamination on cattle pastures: Comparing two herbage sampling methods.

    Science.gov (United States)

    Verschave, S H; Levecke, B; Duchateau, L; Vercruysse, J; Charlier, J

    2015-06-15

    Assessing levels of pasture larval contamination is frequently used to study the population dynamics of the free-living stages of parasitic nematodes of livestock. Direct quantification of infective larvae (L3) on herbage is the most applied method to measure pasture larval contamination. However, herbage collection remains labour intensive and there is a lack of studies addressing the variation induced by the sampling method and the required sample size. The aim of this study was (1) to compare two different sampling methods in terms of pasture larval count results and time required to sample, (2) to assess the amount of variation in larval counts at the level of sample plot, pasture and season, respectively and (3) to calculate the required sample size to assess pasture larval contamination with a predefined precision using random plots across pasture. Eight young stock pastures of different commercial dairy herds were sampled in three consecutive seasons during the grazing season (spring, summer and autumn). On each pasture, herbage samples were collected through both a double-crossed W-transect with samples taken every 10 steps (method 1) and four randomly located plots of 0.16 m² with collection of all herbage within the plot (method 2). The average (± standard deviation (SD)) pasture larval contamination using sampling methods 1 and 2 was 325 (± 479) and 305 (± 444) L3/kg dry herbage (DH), respectively. Large discrepancies in pasture larval counts of the same pasture and season were often seen between methods, but no significant difference (P = 0.38) in larval counts between methods was found. Less time was required to collect samples with method 2. This difference in collection time between methods was most pronounced for pastures with a surface area larger than 1 ha. The variation in pasture larval counts from samples generated by random plot sampling was mainly due to the repeated measurements on the same pasture in the same season (residual variance
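    Given the large spread reported (e.g. 305 ± 444 L3/kg DH), the number of random plots needed for a target relative precision can be sketched with the standard sample-size formula for estimating a mean. This is our back-of-envelope helper, not the paper's calculation:

```python
import math

def plots_needed(mean, sd, rel_precision, z=1.96):
    """Standard formula for estimating a mean to within a given relative
    precision at ~95% confidence: n = (z * CV / rel_precision)^2, rounded up,
    where CV = sd / mean is the coefficient of variation."""
    cv = sd / mean
    return math.ceil((z * cv / rel_precision) ** 2)
```

    With a CV near 1.5, even 30% relative precision already demands on the order of ninety plots, which illustrates why the sample-size question in aim (3) matters.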

  12. Using pilot data to size a two-arm randomized trial to find a nearly optimal personalized treatment strategy.

    Science.gov (United States)

    Laber, Eric B; Zhao, Ying-Qi; Regh, Todd; Davidian, Marie; Tsiatis, Anastasios; Stanford, Joseph B; Zeng, Donglin; Song, Rui; Kosorok, Michael R

    2016-04-15

    A personalized treatment strategy formalizes evidence-based treatment selection by mapping patient information to a recommended treatment. Personalized treatment strategies can produce better patient outcomes while reducing cost and treatment burden. Thus, among clinical and intervention scientists, there is a growing interest in conducting randomized clinical trials when one of the primary aims is estimation of a personalized treatment strategy. However, at present, there are no appropriate sample size formulae to assist in the design of such a trial. Furthermore, because the sampling distribution of the estimated outcome under an estimated optimal treatment strategy can be highly sensitive to small perturbations in the underlying generative model, sample size calculations based on standard (uncorrected) asymptotic approximations or computer simulations may not be reliable. We offer a simple and robust method for powering a single stage, two-armed randomized clinical trial when the primary aim is estimating the optimal single stage personalized treatment strategy. The proposed method is based on inverting a plugin projection confidence interval and is thereby regular and robust to small perturbations of the underlying generative model. The proposed method requires elicitation of two clinically meaningful parameters from clinical scientists and uses data from a small pilot study to estimate nuisance parameters, which are not easily elicited. The method performs well in simulated experiments and is illustrated using data from a pilot study of time to conception and fertility awareness. Copyright © 2015 John Wiley & Sons, Ltd.

  13. Understanding the cluster randomised crossover design: a graphical illustration of the components of variation and a sample size tutorial.

    Science.gov (United States)

    Arnup, Sarah J; McKenzie, Joanne E; Hemming, Karla; Pilcher, David; Forbes, Andrew B

    2017-08-15

    In a cluster randomised crossover (CRXO) design, a sequence of interventions is assigned to a group, or 'cluster' of individuals. Each cluster receives each intervention in a separate period of time, forming 'cluster-periods'. Sample size calculations for CRXO trials need to account for both the cluster randomisation and crossover aspects of the design. Formulae are available for the two-period, two-intervention, cross-sectional CRXO design, however implementation of these formulae is known to be suboptimal. The aims of this tutorial are to illustrate the intuition behind the design; and provide guidance on performing sample size calculations. Graphical illustrations are used to describe the effect of the cluster randomisation and crossover aspects of the design on the correlation between individual responses in a CRXO trial. Sample size calculations for binary and continuous outcomes are illustrated using parameters estimated from the Australia and New Zealand Intensive Care Society - Adult Patient Database (ANZICS-APD) for patient mortality and length(s) of stay (LOS). The similarity between individual responses in a CRXO trial can be understood in terms of three components of variation: variation in cluster mean response; variation in the cluster-period mean response; and variation between individual responses within a cluster-period; or equivalently in terms of the correlation between individual responses in the same cluster-period (within-cluster within-period correlation, WPC), and between individual responses in the same cluster, but in different periods (within-cluster between-period correlation, BPC). The BPC lies between zero and the WPC. When the WPC and BPC are equal the precision gained by crossover aspect of the CRXO design equals the precision lost by cluster randomisation. When the BPC is zero there is no advantage in a CRXO over a parallel-group cluster randomised trial. 
Sample size calculations illustrate that small changes in the specification of
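    As a rough companion to the tutorial, the calculation can be sketched as the usual two-sample normal-approximation size inflated by a design effect. The form 1 + (m - 1) * WPC - m * BPC for the two-period cross-sectional design is an assumption here (verify it against the tutorial's formulae before use); m is the number of subjects per cluster-period:

```python
import math
from statistics import NormalDist

def crxo_sample_size(delta, sd, m, wpc, bpc, alpha=0.05, power=0.80):
    """Per-arm sample size for comparing two means in a two-period
    cross-sectional CRXO trial: individually randomised formula times an
    assumed design effect 1 + (m - 1)*WPC - m*BPC."""
    z = NormalDist().inv_cdf
    z_a = z(1 - alpha / 2)
    z_b = z(power)
    n_ind = 2 * ((z_a + z_b) * sd / delta) ** 2   # per arm, no clustering
    de = 1 + (m - 1) * wpc - m * bpc
    return math.ceil(n_ind * de)
```

    Note how the two correlations pull in opposite directions: the WPC term inflates the size (cluster randomisation) while the BPC term deflates it (crossover), matching the trade-off described above; with BPC = 0 the design effect reduces to that of a parallel-group cluster trial.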

  14. Dependence of the clustering properties of galaxies on stellar velocity dispersion in the Main galaxy sample of SDSS DR10

    Science.gov (United States)

    Deng, Xin-Fa; Song, Jun; Chen, Yi-Qing; Jiang, Peng; Ding, Ying-Ping

    2014-08-01

    Using two volume-limited Main galaxy samples of the Sloan Digital Sky Survey Data Release 10 (SDSS DR10), we investigate the dependence of the clustering properties of galaxies on stellar velocity dispersion by cluster analysis. It is found that in the luminous volume-limited Main galaxy sample, except at r = 1.2, richer and larger systems can be more easily formed in the large stellar velocity dispersion subsample, while in the faint volume-limited Main galaxy sample, at r ≥ 0.9, an opposite trend is observed. According to statistical analyses of the multiplicity functions, we conclude that in both volume-limited Main galaxy samples, small stellar velocity dispersion galaxies preferentially form isolated galaxies, close pairs and small groups, while large stellar velocity dispersion galaxies preferentially inhabit the dense groups and clusters. However, we note the difference between the two volume-limited Main galaxy samples: in the faint volume-limited Main galaxy sample, at r ≥ 0.9, the small stellar velocity dispersion subsample has a higher proportion of galaxies in superclusters (n ≥ 200) than the large stellar velocity dispersion subsample.

  15. Minimal spanning trees, filaments and galaxy clustering

    International Nuclear Information System (INIS)

    Barrow, J.D.; Sonoda, D.H.

    1985-01-01

    A graph theoretical technique for assessing intrinsic patterns in point data sets is described. A unique construction, the minimal spanning tree, can be associated with any point data set given all the inter-point separations. This construction enables the skeletal pattern of galaxy clustering to be singled out in quantitative fashion and differs from other statistics applied to these data sets. This technique is described and applied to two- and three-dimensional distributions of galaxies and also to comparable random samples and numerical simulations. The observed CfA and Zwicky data exhibit characteristic distributions of edge-lengths in their minimal spanning trees which are distinct from those found in random samples. (author)
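    The construction is simple to reproduce for small samples; a sketch using Prim's algorithm (our illustration of the technique, not the authors' code):

```python
import math

def mst_edge_lengths(points):
    """Build the minimal spanning tree of a point set (Prim's algorithm,
    O(n^2), fine for modest samples) and return the sorted edge lengths,
    whose distribution is the clustering statistic discussed above."""
    n = len(points)
    def d(i, j):
        return math.dist(points[i], points[j])
    in_tree = {0}
    best = {i: d(0, i) for i in range(1, n)}   # cheapest link into the tree
    edges = []
    while best:
        i = min(best, key=best.get)
        edges.append(best.pop(i))
        in_tree.add(i)
        for j in best:
            dj = d(i, j)
            if dj < best[j]:
                best[j] = dj
    return sorted(edges)
```

    Comparing this edge-length distribution for an observed catalogue against that of matched random samples is exactly the kind of test the abstract describes for the CfA and Zwicky data.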

  16. A versatile ultra high vacuum sample stage with six degrees of freedom

    Energy Technology Data Exchange (ETDEWEB)

    Ellis, A. W.; Tromp, R. M. [IBM T.J. Watson Research Center, 1101 Kitchawan Road, P.O. Box 218, Yorktown Heights, New York 10598 (United States)

    2013-07-15

    We describe the design and practical realization of a versatile sample stage with six degrees of freedom. The stage was designed for use in a Low Energy Electron Microscope, but its basic design features will be useful for numerous other applications. The degrees of freedom are X, Y, and Z, two tilts, and azimuth. All motions are actuated in an ultrahigh vacuum base pressure environment by piezoelectric transducers with integrated position sensors. The sample can be load-locked. During observation, the sample is held at a potential of −15 kV, at temperatures between room temperature and 1500 °C, and in background gas pressures up to 1 × 10⁻⁴ Torr.

  17. Implementation of client versus care-provider strategies to improve external cephalic version rates: a cluster randomized controlled trial

    NARCIS (Netherlands)

    Vlemmix, Floortje; Rosman, Ageeth N.; Rijnders, Marlies E.; Beuckens, Antje; Opmeer, Brent C.; Mol, Ben W. J.; Kok, Marjolein; Fleuren, Margot A. H.

    2015-01-01

    To determine the effectiveness of a client or care-provider strategy to improve the implementation of external cephalic version. Cluster randomized controlled trial. Twenty-five clusters; hospitals and their referring midwifery practices randomly selected in the Netherlands. Singleton breech

  19. Effectiveness of a multifaceted implementation strategy on physicians' referral behavior to an evidence-based psychosocial intervention in dementia: a cluster randomized controlled trial

    OpenAIRE

    Döpp, Carola ME; Graff, Maud JL; Teerenstra, Steven; Nijhuis-van der Sanden, Maria WG; Olde Rikkert, Marcel GM; Vernooij-Dassen, Myrra JFJ

    2013-01-01

    BACKGROUND: To evaluate the effectiveness of a multifaceted implementation strategy on physicians' referral rate to and knowledge on the community occupational therapy in dementia program (COTiD program). METHODS: A cluster randomized controlled trial with 28 experimental and 17 control clusters was conducted. Each cluster included a minimum of one physician, one manager, and two occupational therapists. In the control group physicians and managers received no interventions and occupational therap...

  20. A two-stage approach for multi-objective decision making with applications to system reliability optimization

    International Nuclear Information System (INIS)

    Li Zhaojun; Liao Haitao; Coit, David W.

    2009-01-01

    This paper proposes a two-stage approach for solving multi-objective system reliability optimization problems. In this approach, a Pareto optimal solution set is initially identified at the first stage by applying a multiple objective evolutionary algorithm (MOEA). Quite often there are a large number of Pareto optimal solutions, and it is difficult, if not impossible, to effectively choose the representative solutions for the overall problem. To overcome this challenge, an integrated multiple objective selection optimization (MOSO) method is utilized at the second stage. Specifically, a self-organizing map (SOM), with the capability of preserving the topology of the data, is applied first to classify those Pareto optimal solutions into several clusters with similar properties. Then, within each cluster, the data envelopment analysis (DEA) is performed, by comparing the relative efficiency of those solutions, to determine the final representative solutions for the overall problem. Through this sequential solution identification and pruning process, the final recommended solutions to the multi-objective system reliability optimization problem can be easily determined in a more systematic and meaningful way.

  1. Coarse Point Cloud Registration by Egi Matching of Voxel Clusters

    Science.gov (United States)

    Wang, Jinhu; Lindenbergh, Roderik; Shen, Yueqian; Menenti, Massimo

    2016-06-01

    Laser scanning samples the surface geometry of objects efficiently and records versatile information as point clouds. However, often more scans are required to fully cover a scene. Therefore, a registration step is required that transforms the different scans into a common coordinate system. The registration of point clouds is usually conducted in two steps, i.e. coarse registration followed by fine registration. In this study an automatic marker-free coarse registration method for pair-wise scans is presented. First the two input point clouds are re-sampled as voxels and dimensionality features of the voxels are determined by principal component analysis (PCA). Then voxel cells with the same dimensionality are clustered. Next, the Extended Gaussian Image (EGI) descriptor of those voxel clusters are constructed using significant eigenvectors of each voxel in the cluster. Correspondences between clusters in source and target data are obtained according to the similarity between their EGI descriptors. The random sampling consensus (RANSAC) algorithm is employed to remove outlying correspondences until a coarse alignment is obtained. If necessary, a fine registration is performed in a final step. This new method is illustrated on scan data sampling two indoor scenarios. The results of the tests are evaluated by computing the point to point distance between the two input point clouds. The presented two tests resulted in mean distances of 7.6 mm and 9.5 mm respectively, which are adequate for fine registration.
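    The first step of the pipeline, re-sampling the cloud on a voxel grid, can be sketched in a few lines. The PCA dimensionality features, EGI descriptors and RANSAC stages are not reproduced here:

```python
from collections import defaultdict

def voxelize(points, cell):
    """Re-sample a 3-D point cloud on a voxel grid of edge length `cell`,
    returning {voxel index: centroid of the points falling in that voxel}."""
    cells = defaultdict(list)
    for p in points:
        key = tuple(int(c // cell) for c in p)   # floor to voxel indices
        cells[key].append(p)
    return {k: tuple(sum(q[i] for q in pts) / len(pts) for i in range(3))
            for k, pts in cells.items()}
```

    Downstream, each voxel's local PCA eigenvalues would classify it as linear, planar or volumetric before clustering and EGI matching.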

  2. Comparisons of single-stage and two-stage approaches to genomic selection.

    Science.gov (United States)

    Schulz-Streeck, Torben; Ogutu, Joseph O; Piepho, Hans-Peter

    2013-01-01

    Genomic selection (GS) is a method for predicting breeding values of plants or animals using many molecular markers that is commonly implemented in two stages. In plant breeding the first stage usually involves computation of adjusted means for genotypes which are then used to predict genomic breeding values in the second stage. We compared two classical stage-wise approaches, which either ignore or approximate correlations among the means by a diagonal matrix, and a new method, to a single-stage analysis for GS using ridge regression best linear unbiased prediction (RR-BLUP). The new stage-wise method rotates (orthogonalizes) the adjusted means from the first stage before submitting them to the second stage. This makes the errors approximately independently and identically normally distributed, which is a prerequisite for many procedures that are potentially useful for GS such as machine learning methods (e.g. boosting) and regularized regression methods (e.g. lasso). This is illustrated in this paper using componentwise boosting. The componentwise boosting method minimizes squared error loss using least squares and iteratively and automatically selects markers that are most predictive of genomic breeding values. Results are compared with those of RR-BLUP using fivefold cross-validation. The new stage-wise approach with rotated means was slightly more similar to the single-stage analysis than the classical two-stage approaches based on non-rotated means for two unbalanced datasets. This suggests that rotation is a worthwhile pre-processing step in GS for the two-stage approaches for unbalanced datasets. Moreover, the predictive accuracy of stage-wise RR-BLUP was higher (5.0-6.1%) than that of componentwise boosting.
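    The "rotation" of stage-one means can be sketched as whitening with the Cholesky factor of their covariance; this is our illustration of the idea, and the paper's exact transformation may differ:

```python
def cholesky(V):
    """Lower-triangular L with L L^T = V (V symmetric positive definite)."""
    n = len(V)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                L[i][j] = (V[i][i] - s) ** 0.5
            else:
                L[i][j] = (V[i][j] - s) / L[j][j]
    return L

def rotate_means(y, V):
    """'Rotate' stage-one adjusted means so their errors are approximately
    i.i.d.: with V = L L^T the covariance of the means, solve L z = y by
    forward substitution; z then has (approximately) identity covariance."""
    L = cholesky(V)
    z = []
    for i in range(len(y)):
        z.append((y[i] - sum(L[i][k] * z[k] for k in range(i))) / L[i][i])
    return z
```

    After this transformation, second-stage procedures that assume i.i.d. errors (boosting, lasso, RR-BLUP) can be applied to the rotated means without the diagonal approximation the classical two-stage schemes rely on.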

  3. Improving Language Comprehension in Preschool Children with Language Difficulties: A Cluster Randomized Trial

    Science.gov (United States)

    Hagen, Åste M.; Melby-Lervåg, Monica; Lervåg, Arne

    2017-01-01

    Background: Children with language comprehension difficulties are at risk of educational and social problems, which in turn impede employment prospects in adulthood. However, few randomized trials have examined how such problems can be ameliorated during the preschool years. Methods: We conducted a cluster randomized trial in 148 preschool…

  4. Possible two-stage 87Sr evolution in the Stockdale Rhyolite

    International Nuclear Information System (INIS)

    Compston, W.; McDougall, I.; Wyborn, D.

    1982-01-01

    The Rb-Sr total-rock data for the Stockdale Rhyolite, of significance for the Palaeozoic time scale, are more scattered about a single-stage isochron than expected from experimental error. Two-stage 87Sr evolution for several of the samples is explored to explain this, as an alternative to variation in the initial 87Sr/86Sr which is customarily used in single-stage dating models. The deletion of certain samples having very high Rb/Sr removes most of the excess scatter and leads to an estimate of 430 ± 7 m.y. for the age of extrusion. There is a younger alignment of Rb-Sr data within each sampling site at 412 ± 7 m.y. We suggest that the Stockdale Rhyolite is at least 430 m.y. old, that its original range in Rb/Sr was smaller than now observed, and that it experienced a net loss in Sr during later hydrothermal alteration at ca. 412 m.y. (orig.)

  5. Cluster lot quality assurance sampling: effect of increasing the number of clusters on classification precision and operational feasibility.

    Science.gov (United States)

    Okayasu, Hiromasa; Brown, Alexandra E; Nzioki, Michael M; Gasasira, Alex N; Takane, Marina; Mkanda, Pascal; Wassilak, Steven G F; Sutter, Roland W

    2014-11-01

    To assess the quality of supplementary immunization activities (SIAs), the Global Polio Eradication Initiative (GPEI) has used cluster lot quality assurance sampling (C-LQAS) methods since 2009. However, since the inception of C-LQAS, questions have been raised about the optimal balance between operational feasibility and precision of classification of lots to identify areas with low SIA quality that require corrective programmatic action. To determine if an increased precision in classification would result in differential programmatic decision making, we conducted a pilot evaluation in 4 local government areas (LGAs) in Nigeria with an expanded LQAS sample size of 16 clusters (instead of the standard 6 clusters) of 10 subjects each. The results showed greater heterogeneity between clusters than the assumed standard deviation of 10%, ranging from 12% to 23%. Comparing the distribution of 4-outcome classifications obtained from all possible combinations of 6-cluster subsamples to the observed classification of the 16-cluster sample, we obtained an exact match in classification in 56% to 85% of instances. We concluded that the 6-cluster C-LQAS provides acceptable classification precision for programmatic action. Considering the greater resources required to implement an expanded C-LQAS, the improvement in precision was deemed insufficient to warrant the effort. Published by Oxford University Press on behalf of the Infectious Diseases Society of America 2014. This work is written by (a) US Government employee(s) and is in the public domain in the US.
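The subsample comparison described above can be sketched as follows. The per-cluster counts and the two-outcome decision rule are invented for illustration (the study used a four-outcome classification); the sketch simply enumerates all C(16, 6) = 8008 six-cluster subsamples and counts how often their classification matches that of the full 16-cluster sample:

```python
from itertools import combinations
from math import comb

# Hypothetical counts of unvaccinated children per cluster (10 children
# surveyed in each of 16 clusters); the numbers are invented.
unvaccinated = [0, 1, 0, 2, 1, 0, 3, 1, 0, 0, 2, 1, 0, 1, 2, 0]

def classify(clusters, max_mean=1.0):
    """Simplified two-outcome rule: accept the lot if the mean count of
    unvaccinated children per cluster is at or below a decision value
    (the study itself used a four-outcome classification)."""
    return "accept" if sum(clusters) / len(clusters) <= max_mean else "reject"

reference = classify(unvaccinated)

# Classify every possible 6-cluster subsample and measure how often it
# matches the classification based on all 16 clusters.
subsamples = list(combinations(unvaccinated, 6))
matches = sum(classify(sub) == reference for sub in subsamples)
match_rate = matches / len(subsamples)
```

The study's 56% to 85% exact-match range corresponds to this kind of `match_rate`, computed per local government area with the real four-outcome rule.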

  6. Hot Zone Identification: Analyzing Effects of Data Sampling on Spam Clustering

    Directory of Open Access Journals (Sweden)

    Rasib Khan

    2014-03-01

    Full Text Available Email is the most common and comparatively the most efficient means of exchanging information in today's world. However, given the widespread use of emails in all sectors, they have been the target of spammers since the beginning. Filtering spam emails has now led to critical actions such as forensic activities based on mining spam email. The data mine for spam emails at the University of Alabama at Birmingham is considered to be one of the most prominent resources for mining and identifying spam sources. It is a widely researched repository used by researchers from different global organizations. The usual process of mining the spam data involves going through every email in the data mine and clustering them based on their different attributes. However, given the size of the data mine, it takes an exceptionally long time to execute the clustering mechanism each time. In this paper, we have illustrated sampling as an efficient tool for data reduction, while preserving the information within the clusters, which would thus allow the spam forensic experts to quickly and effectively identify the ‘hot zone’ from the spam campaigns. We have provided detailed comparative analysis of the quality of the clusters after sampling, the overall distribution of clusters on the spam data, and timing measurements for our sampling approach. Additionally, we present different strategies which allowed us to optimize the sampling process using data-preprocessing and using the database engine's computational resources, and thus improving the performance of the clustering process.

  7. Two-Way Regularized Fuzzy Clustering of Multiple Correspondence Analysis.

    Science.gov (United States)

    Kim, Sunmee; Choi, Ji Yeh; Hwang, Heungsun

    2017-01-01

    Multiple correspondence analysis (MCA) is a useful tool for investigating the interrelationships among dummy-coded categorical variables. MCA has been combined with clustering methods to examine whether there exist heterogeneous subclusters of a population, which exhibit cluster-level heterogeneity. These combined approaches aim to classify either observations only (one-way clustering of MCA) or both observations and variable categories (two-way clustering of MCA). The latter approach is favored because its solutions are easier to interpret by providing explicitly which subgroup of observations is associated with which subset of variable categories. Nonetheless, the two-way approach has been built on hard classification that assumes observations and/or variable categories to belong to only one cluster. To relax this assumption, we propose two-way fuzzy clustering of MCA. Specifically, we combine MCA with fuzzy k-means simultaneously to classify a subgroup of observations and a subset of variable categories into a common cluster, while allowing both observations and variable categories to belong partially to multiple clusters. Importantly, we adopt regularized fuzzy k-means, thereby enabling us to decide the degree of fuzziness in cluster memberships automatically. We evaluate the performance of the proposed approach through the analysis of simulated and real data, in comparison with existing two-way clustering approaches.
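For readers unfamiliar with fuzzy memberships, a plain (unregularized) fuzzy k-means update loop looks like the following; the paper's method combines MCA with a regularized variant of this, so the sketch only illustrates how partial memberships arise:

```python
import numpy as np

def fuzzy_kmeans(X, init_centres, m=2.0, n_iter=50):
    """Plain fuzzy k-means (c-means) with fuzzifier m: alternate between
    updating soft memberships U and weighted cluster centres."""
    centres = np.asarray(init_centres, dtype=float)
    for _ in range(n_iter):
        # Membership update: U_ik proportional to d_ik^(-2/(m-1)).
        d = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2) + 1e-12
        U = d ** (-2.0 / (m - 1.0))
        U /= U.sum(axis=1, keepdims=True)
        # Centre update: weighted means with weights U^m.
        W = U ** m
        centres = (W.T @ X) / W.sum(axis=0)[:, None]
    return U, centres

# Two tight groups in 1-D; each point belongs partially to both clusters.
X = np.array([[0.0], [0.2], [10.0], [10.2]])
U, centres = fuzzy_kmeans(X, init_centres=X[[0, 2]])
```

Each row of `U` sums to 1, so every observation carries graded membership in every cluster rather than a hard assignment; the regularization in the paper controls how fuzzy these memberships are allowed to be.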

  8. Identification of Clusters of Foot Pain Location in a Community Sample.

    Science.gov (United States)

    Gill, Tiffany K; Menz, Hylton B; Landorf, Karl B; Arnold, John B; Taylor, Anne W; Hill, Catherine L

    2017-12-01

    To identify foot pain clusters according to pain location in a community-based sample of the general population. This study analyzed data from the North West Adelaide Health Study. Data were obtained between 2004 and 2006, using computer-assisted telephone interviewing, clinical assessment, and self-completed questionnaire. The location of foot pain was assessed using a diagram during the clinical assessment. Hierarchical cluster analysis was undertaken to identify foot pain location clusters, which were then compared in relation to demographics, comorbidities, and podiatry services utilization. There were 558 participants with foot pain (mean age 54.4 years, 57.5% female). Five clusters were identified: 1 with predominantly arch and ball pain (26.8%), 1 with rearfoot pain (20.9%), 1 with heel pain (13.3%), and 2 with predominantly forefoot, toe, and nail pain (28.3% and 10.7%). Each cluster was distinct in age, sex, and comorbidity profile. Of the two clusters with predominantly forefoot, toe, and nail pain, one of them had a higher proportion of men and those classified as obese, had diabetes mellitus, and used podiatry services (30%), while the other was comprised of a higher proportion of women who were overweight and reported less use of podiatry services (17.5%). Five clusters of foot pain according to pain location were identified, all with distinct age, sex, and comorbidity profiles. These findings may assist in the identification of individuals at risk for developing foot pain and in the development of targeted preventive strategies and treatments. © 2017, American College of Rheumatology.

  9. Hydration of Atmospheric Molecular Clusters: Systematic Configurational Sampling.

    Science.gov (United States)

    Kildgaard, Jens; Mikkelsen, Kurt V; Bilde, Merete; Elm, Jonas

    2018-05-09

    We present a new systematic configurational sampling algorithm for investigating the potential energy surface of hydrated atmospheric molecular clusters. The algorithm is based on creating a Fibonacci sphere around each atom in the cluster and adding water molecules at each point in 9 different orientations. To allow sampling of water molecules into existing hydrogen bonds, the cluster is displaced along the hydrogen bond and a water molecule is placed in between in three different orientations. Generated redundant structures are eliminated based on minimizing the root mean square distance (RMSD) between different conformers. Initially, the clusters are sampled using the semiempirical PM6 method and subsequently using density functional theory (M06-2X and ωB97X-D) with the 6-31++G(d,p) basis set. Applying the developed algorithm, we study the hydration of sulfuric acid with up to 15 water molecules. We find that the addition of the first four water molecules "saturates" the sulfuric acid molecule and is more thermodynamically favourable than the addition of water molecules 5-15. Using the large generated set of conformers, we assess the performance of approximate methods (ωB97X-D, M06-2X, PW91 and PW6B95-D3) in calculating the binding energies and assigning the global minimum conformation compared to high-level CCSD(T)-F12a/VDZ-F12 reference calculations. The tested DFT functionals systematically overestimate the binding energies compared to coupled cluster calculations, and we find that this deficiency can be corrected by a simple scaling factor.
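The Fibonacci-sphere construction the algorithm relies on can be sketched with the standard golden-angle lattice (the exact point counts and orientation rules used by the authors may differ):

```python
import numpy as np

def fibonacci_sphere(n_points, radius=1.0, center=(0.0, 0.0, 0.0)):
    """Near-uniform points on a sphere via the golden-angle (Fibonacci)
    lattice: points are evenly spaced in z and rotated by the golden
    angle in azimuth."""
    golden_angle = np.pi * (3.0 - np.sqrt(5.0))
    i = np.arange(n_points)
    z = 1.0 - 2.0 * (i + 0.5) / n_points      # evenly spaced heights
    r = np.sqrt(1.0 - z * z)                  # radius of each z-slice
    theta = golden_angle * i
    pts = np.stack([r * np.cos(theta), r * np.sin(theta), z], axis=1)
    return radius * pts + np.asarray(center)

# E.g. 50 candidate water-placement sites around an atom at the origin.
points = fibonacci_sphere(50, radius=2.5)
radii = np.linalg.norm(points, axis=1)
```

In the paper's scheme, a sphere like this is built around each atom and a water molecule is then attached at every point in several trial orientations before redundant structures are pruned by RMSD.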

  10. Comparative effectiveness of one-stage versus two-stage basilic vein transposition arteriovenous fistulas.

    Science.gov (United States)

    Ghaffarian, Amir A; Griffin, Claire L; Kraiss, Larry W; Sarfati, Mark R; Brooke, Benjamin S

    2018-02-01

    Basilic vein transposition (BVT) fistulas may be performed as either a one-stage or a two-stage operation, although there is debate as to which technique is superior. This study was designed to evaluate the comparative clinical efficacy and cost-effectiveness of one-stage vs two-stage BVT. We identified all patients at a single large academic hospital who had undergone creation of either a one-stage or two-stage BVT between January 2007 and January 2015. Data evaluated included patient demographics, comorbidities, medication use, reasons for abandonment, and interventions performed to maintain patency. Costs were derived from the literature, and effectiveness was expressed in quality-adjusted life-years (QALYs). We analyzed primary and secondary functional patency outcomes as well as survival during follow-up between one-stage and two-stage BVT procedures using multivariate Cox proportional hazards models and Kaplan-Meier analysis with log-rank tests. The incremental cost-effectiveness ratio was used to determine cost savings. We identified 131 patients in whom 57 (44%) one-stage and 74 (56%) two-stage BVT fistulas were created by 8 different vascular surgeons, each of whom performed both procedures during the study period. There was no significant difference in mean age, male gender, white race, diabetes, coronary disease, or medication profile between patients undergoing one- vs two-stage BVT. After fistula transposition, the median follow-up time was 8.3 months (interquartile range, 3-21 months). Primary patency rates at 12 months were 56% for one-stage BVT and 72% for two-stage BVT. Patients undergoing two-stage BVT also had significantly higher rates of secondary functional patency at 12 months (57% for one-stage vs 80% for two-stage) and 24 months (44% for one-stage vs 73% for two-stage) of follow-up (P < .001 using log-rank test). However, there was no significant difference

  11. Comparison of population-averaged and cluster-specific models for the analysis of cluster randomized trials with missing binary outcomes: a simulation study

    Directory of Open Access Journals (Sweden)

    Ma Jinhui

    2013-01-01

    Full Text Available Abstract Background The objective of this simulation study is to compare the accuracy and efficiency of population-averaged (i.e. generalized estimating equations (GEE)) and cluster-specific (i.e. random-effects logistic regression (RELR)) models for analyzing data from cluster randomized trials (CRTs) with missing binary responses. Methods In this simulation study, clustered responses were generated from a beta-binomial distribution. The number of clusters per trial arm, the number of subjects per cluster, the intra-cluster correlation coefficient, and the percentage of missing data were allowed to vary. Under the assumption of covariate-dependent missingness, missing outcomes were handled by complete case analysis, standard multiple imputation (MI) and within-cluster MI strategies. Data were analyzed using GEE and RELR. Performance of the methods was assessed using standardized bias, empirical standard error, root mean squared error (RMSE), and coverage probability. Results GEE performs well on all four measures, provided the downward bias of the standard error (when the number of clusters per arm is small) is adjusted appropriately, under the following scenarios: complete case analysis for CRTs with a small amount of missing data; standard MI for CRTs with variance inflation factor (VIF 50. RELR performs well only when a small amount of data was missing and complete case analysis was applied. Conclusion GEE performs well as long as appropriate missing data strategies are adopted based on the design of the CRT and the percentage of missing data. In contrast, RELR does not perform well when either the standard or the within-cluster MI strategy is applied prior to the analysis.
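The data-generating step of such a simulation can be sketched as follows; the parameter values are arbitrary. Drawing each cluster's success probability from a Beta(a, b) distribution induces an intra-cluster correlation of 1 / (a + b + 1):

```python
import numpy as np

def beta_binomial_clusters(n_clusters, cluster_size, mean_p, icc, rng):
    """Each cluster draws its success probability from Beta(a, b), then
    Bernoulli responses are drawn within the cluster; for Beta(a, b) the
    intra-cluster correlation is 1 / (a + b + 1)."""
    s = (1.0 - icc) / icc                    # a + b implied by the ICC
    a, b = mean_p * s, (1.0 - mean_p) * s
    p = rng.beta(a, b, size=n_clusters)
    return rng.binomial(1, p[:, None], size=(n_clusters, cluster_size))

rng = np.random.default_rng(7)
y = beta_binomial_clusters(n_clusters=20, cluster_size=30,
                           mean_p=0.4, icc=0.05, rng=rng)
```

Varying `n_clusters`, `cluster_size`, `icc`, and a missingness mechanism over such generated arrays reproduces the kind of factorial simulation design the abstract describes.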

  12. A clustering algorithm for sample data based on environmental pollution characteristics

    Science.gov (United States)

    Chen, Mei; Wang, Pengfei; Chen, Qiang; Wu, Jiadong; Chen, Xiaoyun

    2015-04-01

    Environmental pollution has become an issue of serious international concern in recent years. Among the receptor-oriented pollution models, CMB, PMF, UNMIX, and PCA are widely used as source apportionment models. To improve the accuracy of source apportionment and classify the sample data for these models, this study proposes an easy-to-use, high-dimensional EPC algorithm that not only organizes all of the sample data into different groups according to the similarities in pollution characteristics such as pollution sources and concentrations but also simultaneously detects outliers. The main clustering process consists of selecting the first unlabelled point as the cluster centre, then assigning each data point in the sample dataset to its most similar cluster centre according to both the user-defined threshold and the value of similarity function in each iteration, and finally modifying the clusters using a method similar to k-Means. The validity and accuracy of the algorithm are tested using both real and synthetic datasets, which makes the EPC algorithm practical and effective for appropriately classifying sample data for source apportionment models and helpful for better understanding and interpreting the sources of pollution.
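A loose sketch of a leader-style clustering pass of this kind, with a user-defined threshold and a k-means-like refinement, is shown below; the actual EPC algorithm and its similarity function differ in detail:

```python
import numpy as np

def leader_cluster(X, threshold):
    """One pass: the first unassigned point becomes a cluster centre;
    each point joins its nearest centre if within `threshold`, otherwise
    it starts a new cluster (singletons act as outlier candidates).
    A single k-means-style step then recomputes centres as cluster means."""
    centres = []
    labels = np.full(len(X), -1)
    for i, x in enumerate(X):
        if centres:
            d = np.linalg.norm(np.asarray(centres) - x, axis=1)
            j = int(np.argmin(d))
            if d[j] <= threshold:
                labels[i] = j
                continue
        centres.append(x.copy())
        labels[i] = len(centres) - 1
    centres = [X[labels == j].mean(axis=0) for j in range(len(centres))]
    return labels, np.asarray(centres)

# Toy pollution-concentration vectors: two dense groups plus one outlier.
X = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 4.9], [20.0, 20.0]])
labels, centres = leader_cluster(X, threshold=1.0)
```

The far point ends up in a singleton cluster, which is how a threshold-based pass can flag outliers at the same time as it groups samples with similar pollution characteristics.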

  13. Does poverty alleviation decrease depression symptoms in post-conflict settings? A cluster-randomized trial of microenterprise assistance in Northern Uganda.

    Science.gov (United States)

    Green, E P; Blattman, C; Jamison, J; Annan, J

    2016-01-01

    By 2009, two decades of war and widespread displacement left the majority of the population of Northern Uganda impoverished. This study used a cluster-randomized design to test the hypothesis that a poverty alleviation program would improve economic security and reduce symptoms of depression in a sample of mostly young women. Roughly 120 villages in Northern Uganda were invited to participate. Community committees were asked to identify the most vulnerable women (and some men) to participate. The implementing agency screened all proposed participants, and a total of 1800 were enrolled. Following a baseline survey, villages were randomized to a treatment or wait-list control group. Participants, implementers, and data collectors were not blinded to treatment status. Villages were randomized to the treatment group (60 villages with 896 participants) or the wait-list control group (60 villages with 904 participants) with an allocation ratio of 1:1. All clusters participated in the intervention and were included in the analysis. The intent-to-treat analysis included 860 treatment participants and 866 control participants (4.1% attrition). Sixteen months after the program, monthly cash earnings doubled from UGX 22 523 to 51 124, non-household and non-farm businesses doubled, and cash savings roughly quadrupled. There was no measurable effect on a locally derived measure of symptoms of depression. Despite finding large increases in business, income, and savings among the treatment group, we do not find support for an indirect effect of poverty alleviation on symptoms of depression.

  14. Design considerations for single-stage and two-stage pneumatic pellet injectors

    International Nuclear Information System (INIS)

    Gouge, M.J.; Combs, S.K.; Fisher, P.W.; Milora, S.L.

    1988-09-01

    Performance of single-stage pneumatic pellet injectors is compared with several models for one-dimensional, compressible fluid flow. Agreement is quite good for models that reflect actual breech chamber geometry and incorporate nonideal effects such as gas friction. Several methods of improving the performance of single-stage pneumatic pellet injectors in the near term are outlined. The design and performance of two-stage pneumatic pellet injectors are discussed, and initial data from the two-stage pneumatic pellet injector test facility at Oak Ridge National Laboratory are presented. Finally, a concept for a repeating two-stage pneumatic pellet injector is described. 27 refs., 8 figs., 3 tabs

  15. Genetic algorithm based two-mode clustering of metabolomics data

    NARCIS (Netherlands)

    Hageman, J.A.; van den Berg, R.A.; Westerhuis, J.A.; van der Werf, M.J.; Smilde, A.K.

    2008-01-01

    Metabolomics and other omics tools are generally characterized by large data sets with many variables obtained under different environmental conditions. Clustering methods and more specifically two-mode clustering methods are excellent tools for analyzing this type of data. Two-mode clustering

  16. Assessment of Random Assignment in Training and Test Sets using Generalized Cluster Analysis Technique

    Directory of Open Access Journals (Sweden)

    Sorana D. BOLBOACĂ

    2011-06-01

    Full Text Available Aim: The appropriateness of the random assignment of compounds to training and validation sets was assessed using the generalized cluster technique. Material and Method: A quantitative structure-activity relationship model using the Molecular Descriptors Family on Vertices was evaluated in terms of the assignment of carboquinone derivatives to training and test sets during leave-many-out analysis. Assignment of compounds was investigated using five variables: observed anticancer activity and four structure descriptors. Generalized cluster analysis with the K-means algorithm was applied in order to investigate whether or not the assignment of compounds was proper. The Euclidean distance and maximization of the initial distance, using cross-validation with a v-fold of 10, were applied. Results: All five variables included in the analysis proved to have a statistically significant contribution to the identification of clusters. Three clusters were identified, each of them containing carboquinone derivatives belonging to both the training and the test sets. The observed activity of the carboquinone derivatives proved to be normally distributed in every cluster. The presence of training and test sets in all clusters identified using generalized cluster analysis with the K-means algorithm, and the distribution of observed activity within clusters, sustain a proper assignment of compounds to training and test sets. Conclusion: Generalized cluster analysis using the K-means algorithm proved to be a valid method for assessing the random assignment of carboquinone derivatives to training and test sets.

  17. Ca II TRIPLET SPECTROSCOPY OF SMALL MAGELLANIC CLOUD RED GIANTS. I. ABUNDANCES AND VELOCITIES FOR A SAMPLE OF CLUSTERS

    International Nuclear Information System (INIS)

    Parisi, M. C.; Claria, J. J.; Grocholski, A. J.; Geisler, D.; Sarajedini, A.

    2009-01-01

    We have obtained near-infrared spectra covering the Ca II triplet lines for a large number of stars associated with 16 Small Magellanic Cloud (SMC) clusters using the VLT + FORS2. These data compose the largest available sample of SMC clusters with spectroscopically derived abundances and velocities. Our clusters span a wide range of ages and provide good areal coverage of the galaxy. Cluster members are selected using a combination of their positions relative to the cluster center as well as their location in the color-magnitude diagram, abundances, and radial velocities (RVs). We determine mean cluster velocities to typically 2.7 km s⁻¹ and metallicities to 0.05 dex (random errors), from an average of 6.4 members per cluster. By combining our clusters with previously published results, we compile a sample of 25 clusters on a homogeneous metallicity scale and with relatively small metallicity errors, and thereby investigate the metallicity distribution, metallicity gradient, and age-metallicity relation (AMR) of the SMC cluster system. For all 25 clusters in our expanded sample, the mean metallicity [Fe/H] = -0.96 with σ = 0.19. The metallicity distribution may possibly be bimodal, with peaks at ∼-0.9 dex and -1.15 dex. Similar to the Large Magellanic Cloud (LMC), the SMC cluster system gives no indication of a radial metallicity gradient. However, intermediate age SMC clusters are both significantly more metal-poor and have a larger metallicity spread than their LMC counterparts. Our AMR shows evidence for three phases: a very early (>11 Gyr) phase in which the metallicity reached ∼-1.2 dex, a long intermediate phase from ∼10 to 3 Gyr in which the metallicity only slightly increased, and a final phase from 3 to 1 Gyr ago in which the rate of enrichment was substantially faster. We find good overall agreement with the model of Pagel and Tautvaisiene, which assumes a burst of star formation at 4 Gyr. Finally, we find that the mean RV of the cluster system

  18. Handling missing data in cluster randomized trials: A demonstration of multiple imputation with PAN through SAS

    Directory of Open Access Journals (Sweden)

    Jiangxiu Zhou

    2014-09-01

    Full Text Available The purpose of this study is to demonstrate a way of dealing with missing data in cluster randomized trials by doing multiple imputation (MI) with the PAN package in R through SAS. The procedure for doing MI with PAN through SAS is demonstrated in detail so that researchers can use this procedure with their own data. An illustration of the technique with empirical data is also included. In this illustration the PAN results were compared with pairwise deletion and three types of MI: (1) normal model MI (NM-MI) ignoring the cluster structure; (2) NM-MI with dummy-coded cluster variables (fixed cluster structure); and (3) a hybrid NM-MI, which imputes half the time ignoring the cluster structure and the other half including the dummy-coded cluster variables. The empirical analysis showed that using PAN and the other strategies produced comparable parameter estimates. However, the dummy-coded MI overestimated the intraclass correlation, whereas MI ignoring the cluster structure and the hybrid MI underestimated the intraclass correlation. When compared with PAN, the p-value and standard error for the treatment effect were higher with dummy-coded MI, and lower with MI ignoring the cluster structure, the hybrid MI approach, and pairwise deletion. Previous studies have shown that NM-MI is not appropriate for handling missing data in cluster randomized trials. This approach, in addition to the pairwise deletion approach, leads to a biased intraclass correlation and faulty statistical conclusions. Imputation in cluster randomized trials should be performed with PAN. We have demonstrated an easy way of using PAN through SAS.

  19. Systematic adaptive cluster sampling for the assessment of rare tree species in Nepal

    NARCIS (Netherlands)

    Acharya, B.; Bhattarai, G.; Gier, de A.; Stein, A.

    2000-01-01

    Sampling to assess rare tree species poses methodological problems, because such species may cluster and many plots containing no such trees are to be expected. We used systematic adaptive cluster sampling (SACS) to sample three rare tree species in a forest area of about 40 ha in Nepal. We checked its applicability

  20. Two-Stage Series-Resonant Inverter

    Science.gov (United States)

    Stuart, Thomas A.

    1994-01-01

    Two-stage inverter includes variable-frequency, voltage-regulating first stage and fixed-frequency second stage. Lightweight circuit provides regulated power and is invulnerable to output short circuits. Does not require large capacitor across ac bus, like parallel resonant designs. Particularly suitable for use in ac-power-distribution system of aircraft.

  1. Numerical simulation of two-dimensional late-stage coarsening for nucleation and growth

    International Nuclear Information System (INIS)

    Akaiwa, N.; Meiron, D.I.

    1995-01-01

    Numerical simulations of two-dimensional late-stage coarsening for nucleation and growth or Ostwald ripening are performed at area fractions 0.05 to 0.4 using the monopole and dipole approximations of a boundary integral formulation for the steady state diffusion equation. The simulations are performed using two different initial spatial distributions. One is a random spatial distribution, and the other is a random spatial distribution with depletion zones around the particles. We characterize the spatial correlations of particles by the radial distribution function, the pair correlation functions, and the structure function. Although the initial spatial correlations are different, we find time-independent scaled correlation functions in the late stage of coarsening. An important feature of the late-stage spatial correlations is that depletion zones exist around particles. A log-log plot of the structure function shows that the slope at small wave numbers is close to 4 and is -3 at very large wave numbers for all area fractions. At large wave numbers we observe oscillations in the structure function. We also confirm the cubic growth law of the average particle radius. The rate constant of the cubic growth law and the particle size distribution functions are also determined. We find qualitatively good agreement between experiments and the present simulations. In addition, the present results agree well with simulation results using the Cahn-Hilliard equation

  2. Rule-of-thumb adjustment of sample sizes to accommodate dropouts in a two-stage analysis of repeated measurements.

    Science.gov (United States)

    Overall, John E; Tonidandel, Scott; Starbuck, Robert R

    2006-01-01

    Recent contributions to the statistical literature have provided elegant model-based solutions to the problem of estimating sample sizes for testing the significance of differences in mean rates of change across repeated measures in controlled longitudinal studies with differentially correlated error and missing data due to dropouts. However, the mathematical complexity and model specificity of these solutions make them generally inaccessible to most applied researchers who actually design and undertake treatment evaluation research in psychiatry. In contrast, this article relies on a simple two-stage analysis in which dropout-weighted slope coefficients fitted to the available repeated measurements for each subject separately serve as the dependent variable for a familiar ANCOVA test of significance for differences in mean rates of change. This article shows how a sample size that is estimated or calculated to provide desired power for testing that hypothesis without considering dropouts can be adjusted appropriately to take dropouts into account. Empirical results support the conclusion that, whatever reasonable level of power would be provided by a given sample size in the absence of dropouts, essentially the same power can be realized in the presence of dropouts simply by adding to the original dropout-free sample size the number of subjects who would be expected to drop from a sample of that original size under conditions of the proposed study.
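The rule of thumb itself is simple arithmetic; assuming, for illustration, 60 subjects per group and 20% expected dropout:

```python
def adjust_for_dropouts(n_per_group, dropout_rate):
    """Rule of thumb from the abstract: to the dropout-free sample size,
    add the number of subjects expected to drop from a sample of that
    original size."""
    return n_per_group + round(n_per_group * dropout_rate)

# 60 per group gives the desired power without dropouts; at 20%
# expected dropout, recruit 72 per group instead.
adjusted = adjust_for_dropouts(60, 0.20)
```

Note this differs from the common inflation n / (1 - d), which for the same inputs would give 75 per group; the abstract's empirical results support the simpler additive adjustment.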

  3. Sample size estimation and sampling techniques for selecting a representative sample

    Directory of Open Access Journals (Sweden)

    Aamir Omair

    2014-01-01

    Full Text Available Introduction: The purpose of this article is to provide a general understanding of the concepts of sampling as applied to health-related research. Sample Size Estimation: It is important to select a representative sample in quantitative research in order to be able to generalize the results to the target population. The sample should be of the required sample size and must be selected using an appropriate probability sampling technique. There are many hidden biases which can adversely affect the outcome of the study. Important factors to consider when estimating the sample size include the size of the study population, the confidence level, the expected proportion of the outcome variable (for categorical variables) or the standard deviation of the outcome variable (for numerical variables), and the required precision (margin of accuracy) of the study. The greater the precision required, the larger the required sample size. Sampling Techniques: The probability sampling techniques applied in health-related research include simple random sampling, systematic random sampling, stratified random sampling, cluster sampling, and multistage sampling. These are recommended over nonprobability sampling techniques because the results of the study can be generalized to the target population.
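For the categorical-outcome case, the standard formula is n = z²p(1-p)/d²; a minimal sketch (simple random sampling, no finite-population correction):

```python
import math

def sample_size_proportion(expected_p, margin, z=1.96):
    """n = z^2 * p * (1 - p) / d^2 for estimating a single proportion
    (95% confidence by default; no finite-population correction)."""
    return math.ceil(z ** 2 * expected_p * (1 - expected_p) / margin ** 2)

# Expected prevalence 30%, desired precision +/-5%, 95% confidence:
n = sample_size_proportion(0.30, 0.05)
```

Halving the margin (to +/-2.5%) roughly quadruples the required sample size, which is the precision trade-off the article describes.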

  4. Discriminative motif discovery via simulated evolution and random under-sampling.

    Directory of Open Access Journals (Sweden)

    Tao Song

    Full Text Available Conserved motifs in biological sequences are closely related to their structure and functions. Recently, discriminative motif discovery methods have attracted more and more attention. However, little attention has been devoted to the data imbalance problem, which is one of the main reasons affecting the performance of the discriminative models. In this article, a simulated evolution method is applied to solve the multi-class imbalance problem at the stage of data preprocessing, and at the stage of Hidden Markov Models (HMMs) training, a random under-sampling method is introduced for the imbalance between the positive and negative datasets. It is shown that, in the task of discovering targeting motifs of nine subcellular compartments, the motifs found by our method are more conserved than the methods without considering data imbalance problem and recover the most known targeting motifs from Minimotif Miner and InterPro. Meanwhile, we use the found motifs to predict protein subcellular localization and achieve higher prediction precision and recall for the minority classes.

  5. Discriminative motif discovery via simulated evolution and random under-sampling.

    Science.gov (United States)

    Song, Tao; Gu, Hong

    2014-01-01

    Conserved motifs in biological sequences are closely related to their structure and functions. Recently, discriminative motif discovery methods have attracted more and more attention. However, little attention has been devoted to the data imbalance problem, which is one of the main factors limiting the performance of discriminative models. In this article, a simulated evolution method is applied to solve the multi-class imbalance problem at the data preprocessing stage, and at the Hidden Markov Model (HMM) training stage, a random under-sampling method is introduced to address the imbalance between the positive and negative datasets. It is shown that, in the task of discovering targeting motifs of nine subcellular compartments, the motifs found by our method are more conserved than those found by methods that do not consider the data imbalance problem, and recover most of the known targeting motifs from Minimotif Miner and InterPro. Meanwhile, we use the found motifs to predict protein subcellular localization and achieve higher prediction precision and recall for the minority classes.
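Random under-sampling as used here balances the positive and negative training sets by discarding randomly chosen majority-class examples. The sketch below shows the generic balancing step only; the data and function are illustrative, not the authors' pipeline.

```python
import random

def undersample(examples, labels, seed=42):
    """Balance a binary dataset by randomly discarding majority-class items."""
    rng = random.Random(seed)
    pos = [i for i, y in enumerate(labels) if y == 1]
    neg = [i for i, y in enumerate(labels) if y == 0]
    majority, minority = (pos, neg) if len(pos) > len(neg) else (neg, pos)
    kept = rng.sample(majority, len(minority)) + minority
    rng.shuffle(kept)
    return [examples[i] for i in kept], [labels[i] for i in kept]

X = [f"seq{i}" for i in range(100)]
y = [1] * 10 + [0] * 90          # 10 positives vs. 90 negatives
Xb, yb = undersample(X, y)
print(sum(yb), len(yb))          # 10 20
```

Every minority-class example is retained, so the cost of balancing is paid entirely by the over-represented class.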

  6. A HIGH FIDELITY SAMPLE OF COLD FRONT CLUSTERS FROM THE CHANDRA ARCHIVE

    International Nuclear Information System (INIS)

    Owers, Matt S.; Nulsen, Paul E. J.; Markevitch, Maxim; Couch, Warrick J.

    2009-01-01

    This paper presents a sample of 'cold front' clusters selected from the Chandra archive. The clusters are selected based purely on the existence of surface brightness edges in their Chandra images which are modeled as density jumps. A combination of the derived density and temperature jumps across the fronts is used to select nine robust examples of cold front clusters: 1ES0657 - 558, Abell 1201, Abell 1758N, MS1455.0+2232, Abell 2069, Abell 2142, Abell 2163, RXJ1720.1+2638, and Abell 3667. This sample is the subject of an ongoing study aimed at relating cold fronts to cluster merger activity, and understanding how the merging environment affects the cluster constituents. Here, temperature maps are presented along with the Chandra X-ray images. A dichotomy is found in the sample in that there exists a subsample of cold front clusters which are clearly mergers based on their X-ray morphologies, and a second subsample of clusters which harbor cold fronts, but have surprisingly relaxed X-ray morphologies, and minimal evidence for merger activity at other wavelengths. For this second subsample, the existence of a cold front provides the sole evidence for merger activity at X-ray wavelengths. We discuss how cold fronts can provide additional information which may be used to constrain merger histories, and also the possibility of using cold fronts to distinguish major and minor mergers.

  7. Two-stage anaerobic digestion of cheese whey

    Energy Technology Data Exchange (ETDEWEB)

    Lo, K V; Liao, P H

    1986-01-01

    A two-stage digestion of cheese whey was studied using two anaerobic rotating biological contact reactors. The second-stage reactor receiving partially treated effluent from the first-stage reactor could be operated at a hydraulic retention time of one day. The results indicated that two-stage digestion is a feasible alternative for treating whey. 6 references.

  8. A Clustered Randomized Controlled Trial of the Positive Prevention PLUS Adolescent Pregnancy Prevention Program.

    Science.gov (United States)

    LaChausse, Robert G

    2016-09-01

    To determine the impact of Positive Prevention PLUS, a school-based adolescent pregnancy prevention program on delaying sexual intercourse, birth control use, and pregnancy. I randomly assigned a diverse sample of ninth grade students in 21 suburban public high schools in California into treatment (n = 2483) and control (n = 1784) groups that participated in a clustered randomized controlled trial. Between October 2013 and May 2014, participants completed baseline and 6-month follow-up surveys regarding sexual behavior and pregnancy. Participants in the treatment group were offered Positive Prevention PLUS, an 11-lesson adolescent pregnancy prevention program. The program had statistically significant impacts on delaying sexual intercourse and increasing the use of birth control. However, I detected no program effect on pregnancy rates at 6-month follow-up. The Positive Prevention PLUS program demonstrated positive impacts on adolescent sexual behavior. This suggests that programs that focus on having students practice risk reduction skills may delay sexual activity and increase birth control use.

  9. Random Sampling of Correlated Parameters – a Consistent Solution for Unfavourable Conditions

    Energy Technology Data Exchange (ETDEWEB)

    Žerovnik, G., E-mail: gasper.zerovnik@ijs.si [Jožef Stefan Institute, Jamova cesta 39, SI-1000 Ljubljana (Slovenia); Trkov, A. [Jožef Stefan Institute, Jamova cesta 39, SI-1000 Ljubljana (Slovenia); International Atomic Energy Agency, PO Box 100, A-1400 Vienna (Austria); Kodeli, I.A. [Jožef Stefan Institute, Jamova cesta 39, SI-1000 Ljubljana (Slovenia); Capote, R. [International Atomic Energy Agency, PO Box 100, A-1400 Vienna (Austria); Smith, D.L. [Argonne National Laboratory, 1710 Avenida del Mundo, Coronado, CA 92118-3073 (United States)

    2015-01-15

    Two methods for random sampling according to a multivariate lognormal distribution – the correlated sampling method and the method of transformation of correlation coefficients – are briefly presented. The methods are mathematically exact and enable consistent sampling of correlated inherently positive parameters with given information on the first two distribution moments. Furthermore, a weighted sampling method to accelerate the convergence of parameters with extremely large relative uncertainties is described. However, the method is efficient only for a limited number of correlated parameters.
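The moment-matching idea behind exact multivariate lognormal sampling can be sketched as follows: map the given means and covariances into log space, then draw correlated normals via a Cholesky factor. This is a generic reconstruction under stated assumptions, not the authors' code; the numeric inputs are illustrative.

```python
import numpy as np

def sample_lognormal(mean, cov, n, seed=0):
    """Draw n samples from a multivariate lognormal distribution with the
    given first two moments (mean vector and covariance matrix)."""
    mean = np.asarray(mean, float)
    cov = np.asarray(cov, float)
    # Log-space covariance: Sigma_ij = ln(1 + C_ij / (m_i * m_j))
    sigma = np.log1p(cov / np.outer(mean, mean))
    # Log-space mean: mu_i = ln(m_i) - Sigma_ii / 2 preserves the means exactly
    mu = np.log(mean) - 0.5 * np.diag(sigma)
    z = np.random.default_rng(seed).standard_normal((n, mean.size))
    return np.exp(mu + z @ np.linalg.cholesky(sigma).T)

s = sample_lognormal([1.0, 2.0], [[0.04, 0.01], [0.01, 0.09]], 200_000)
print((s > 0).all())                                        # True
print(np.allclose(s.mean(axis=0), [1.0, 2.0], rtol=0.02))   # True
```

Because the transformation is exact, the samples are inherently positive and reproduce the requested means and covariances up to Monte Carlo noise.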

  10. Sample size determinations for group-based randomized clinical trials with different levels of data hierarchy between experimental and control arms.

    Science.gov (United States)

    Heo, Moonseong; Litwin, Alain H; Blackstock, Oni; Kim, Namhee; Arnsten, Julia H

    2017-02-01

    We derived sample size formulae for detecting main effects in group-based randomized clinical trials with different levels of data hierarchy between experimental and control arms. Such designs are necessary when experimental interventions need to be administered to groups of subjects whereas control conditions need to be administered to individual subjects. This type of trial, often referred to as a partially nested or partially clustered design, has been implemented for management of chronic diseases such as diabetes and is beginning to emerge more commonly in wider clinical settings. Depending on the research setting, the level of hierarchy of data structure for the experimental arm can be three or two, whereas that for the control arm is two or one. Such different levels of data hierarchy assume correlation structures of outcomes that are different between arms, regardless of whether research settings require two or three level data structure for the experimental arm. Therefore, the different correlations should be taken into account for statistical modeling and for sample size determinations. To this end, we considered mixed-effects linear models with different correlation structures between experimental and control arms to theoretically derive and empirically validate the sample size formulae with simulation studies.

  11. On the robustness of two-stage estimators

    KAUST Repository

    Zhelonkin, Mikhail

    2012-04-01

    The aim of this note is to provide a general framework for the analysis of the robustness properties of a broad class of two-stage models. We derive the influence function, the change-of-variance function, and the asymptotic variance of a general two-stage M-estimator, and provide their interpretations. We illustrate our results in the case of the two-stage maximum likelihood estimator and the two-stage least squares estimator. © 2011.

  12. A spatial scan statistic for nonisotropic two-level risk cluster.

    Science.gov (United States)

    Li, Xiao-Zhou; Wang, Jin-Feng; Yang, Wei-Zhong; Li, Zhong-Jie; Lai, Sheng-Jie

    2012-01-30

    Spatial scan statistic methods are commonly used for geographical disease surveillance and cluster detection. The standard spatial scan statistic does not model any variability in the underlying risks of subregions belonging to a detected cluster. For a multilevel risk cluster, the isotonic spatial scan statistic could model a centralized high-risk kernel in the cluster. Because variations in disease risks are anisotropic owing to different social, economical, or transport factors, the real high-risk kernel will not necessarily take the central place in a whole cluster area. We propose a spatial scan statistic for a nonisotropic two-level risk cluster, which could be used to detect a whole cluster and a noncentralized high-risk kernel within the cluster simultaneously. The performance of the three methods was evaluated through an intensive simulation study. Our proposed nonisotropic two-level method showed better power and geographical precision with two-level risk cluster scenarios, especially for a noncentralized high-risk kernel. Our proposed method is illustrated using the hand-foot-mouth disease data in Pingdu City, Shandong, China in May 2009, compared with two other methods. In this practical study, the nonisotropic two-level method is the only way to precisely detect a high-risk area in a detected whole cluster. Copyright © 2011 John Wiley & Sons, Ltd.

  13. Cluster analysis and its application to healthcare claims data: a study of end-stage renal disease patients who initiated hemodialysis.

    Science.gov (United States)

    Liao, Minlei; Li, Yunfeng; Kianifard, Farid; Obi, Engels; Arcona, Stephen

    2016-03-02

    Cluster analysis (CA) is a frequently used applied statistical technique that helps to reveal hidden structures and "clusters" found in large data sets. However, this method has not been widely used in large healthcare claims databases where the distribution of expenditure data is commonly severely skewed. The purpose of this study was to identify cost change patterns of patients with end-stage renal disease (ESRD) who initiated hemodialysis (HD) by applying different clustering methods. A retrospective, cross-sectional, observational study was conducted using the Truven Health MarketScan® Research Databases. Patients aged ≥18 years with ≥2 ESRD diagnoses who initiated HD between 2008 and 2010 were included. The K-means CA method and hierarchical CA with various linkage methods were applied to all-cause costs within baseline (12-months pre-HD) and follow-up periods (12-months post-HD) to identify clusters. Demographic, clinical, and cost information was extracted from both periods, and then examined by cluster. A total of 18,380 patients were identified. Meaningful all-cause cost clusters were generated using K-means CA and hierarchical CA with either flexible beta or Ward's methods. Based on cluster sample sizes and change of cost patterns, the K-means CA method and 4 clusters were selected: Cluster 1: Average to High (n = 113); Cluster 2: Very High to High (n = 89); Cluster 3: Average to Average (n = 16,624); or Cluster 4: Increasing Costs, High at Both Points (n = 1554). Median cost changes in the 12-month pre-HD and post-HD periods increased from $185,070 to $884,605 for Cluster 1 (Average to High), decreased from $910,930 to $157,997 for Cluster 2 (Very High to High), were relatively stable and remained low from $15,168 to $13,026 for Cluster 3 (Average to Average), and increased from $57,909 to $193,140 for Cluster 4 (Increasing Costs, High at Both Points). Relatively stable costs after starting HD were associated with more stable scores
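The K-means step of such an analysis can be sketched with plain Lloyd's algorithm on (baseline, follow-up) cost pairs; the synthetic data, log scale, and cluster count below are illustrative, not the study's data.

```python
import numpy as np

def kmeans(X, k, iters=100, seed=0):
    """Plain Lloyd's algorithm: assign to nearest center, recompute means."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = d.argmin(axis=1)
        new = np.array([X[labels == j].mean(axis=0) if (labels == j).any()
                        else centers[j] for j in range(k)])
        if np.allclose(new, centers):
            break
        centers = new
    return labels, centers

# Illustrative (baseline, follow-up) cost pairs on a log10 scale (to tame
# the skew typical of claims data): a "stable low" group and an
# "increasing cost" group.
rng = np.random.default_rng(1)
low = rng.normal([4.2, 4.1], 0.1, size=(200, 2))
up = rng.normal([4.7, 5.3], 0.1, size=(50, 2))
labels, centers = kmeans(np.vstack([low, up]), k=2)
```

With two well-separated groups the assignment recovers the generating labels; real claims data would additionally need the cost transformation and cluster-count diagnostics the study describes.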

  14. Investigating causal associations between use of nicotine, alcohol, caffeine and cannabis: a two-sample bidirectional Mendelian randomization study.

    Science.gov (United States)

    Verweij, Karin J H; Treur, Jorien L; Vink, Jacqueline M

    2018-07-01

    Epidemiological studies consistently show co-occurrence of use of different addictive substances. Whether these associations are causal or due to overlapping underlying influences remains an important question in addiction research. Methodological advances have made it possible to use published genetic associations to infer causal relationships between phenotypes. In this exploratory study, we used Mendelian randomization (MR) to examine the causality of well-established associations between nicotine, alcohol, caffeine and cannabis use. Two-sample MR was employed to estimate bidirectional causal effects between four addictive substances: nicotine (smoking initiation and cigarettes smoked per day), caffeine (cups of coffee per day), alcohol (units per week) and cannabis (initiation). Based on existing genome-wide association results we selected genetic variants associated with the exposure measure as an instrument to estimate causal effects. Where possible we applied sensitivity analyses (MR-Egger and weighted median) more robust to horizontal pleiotropy. Most MR tests did not reveal causal associations. There was some weak evidence for a causal positive effect of genetically instrumented alcohol use on smoking initiation and of cigarettes per day on caffeine use, but these were not supported by the sensitivity analyses. There was also some suggestive evidence for a positive effect of alcohol use on caffeine use (only with MR-Egger) and smoking initiation on cannabis initiation (only with weighted median). None of the suggestive causal associations survived corrections for multiple testing. Two-sample Mendelian randomization analyses found little evidence for causal relationships between nicotine, alcohol, caffeine and cannabis use. © 2018 Society for the Study of Addiction.

  15. Nonlinear random resistor diode networks and fractal dimensions of directed percolation clusters.

    Science.gov (United States)

    Stenull, O; Janssen, H K

    2001-07-01

    We study nonlinear random resistor diode networks at the transition from the nonpercolating to the directed percolating phase. The resistor-like bonds and the diode-like bonds under forward bias voltage obey a generalized Ohm's law, V ~ I^r. Based on general grounds such as symmetries and relevance we develop a field theoretic model. We focus on the average two-port resistance, which is governed at the transition by the resistance exponent φ(r). By employing renormalization group methods we calculate φ(r) for arbitrary r to one-loop order. Then we address the fractal dimensions characterizing directed percolation clusters. By considering distinct values of the nonlinearity r, we determine the dimension of the red bonds, the chemical path, and the backbone to two-loop order.

  16. Sensitivity Analysis in Two-Stage DEA

    Directory of Open Access Journals (Sweden)

    Athena Forghani

    2015-07-01

    Full Text Available Data envelopment analysis (DEA) is a method for measuring the efficiency of peer decision-making units (DMUs), which use a set of inputs to produce a set of outputs. In some cases, DMUs have a two-stage structure, in which the first stage utilizes inputs to produce outputs that are used as the inputs of the second stage to produce final outputs. One important issue in two-stage DEA is the sensitivity of the results of an analysis to perturbations in the data. The current paper examines a combined model for two-stage DEA and applies sensitivity analysis to DMUs on the entire frontier. In fact, necessary and sufficient conditions for preserving a DMU's efficiency classification are developed when various data changes are applied to all DMUs.

  17. Sensitivity Analysis in Two-Stage DEA

    Directory of Open Access Journals (Sweden)

    Athena Forghani

    2015-12-01

    Full Text Available Data envelopment analysis (DEA) is a method for measuring the efficiency of peer decision-making units (DMUs), which use a set of inputs to produce a set of outputs. In some cases, DMUs have a two-stage structure, in which the first stage utilizes inputs to produce outputs that are used as the inputs of the second stage to produce final outputs. One important issue in two-stage DEA is the sensitivity of the results of an analysis to perturbations in the data. The current paper examines a combined model for two-stage DEA and applies sensitivity analysis to DMUs on the entire frontier. In fact, necessary and sufficient conditions for preserving a DMU's efficiency classification are developed when various data changes are applied to all DMUs.

  18. Dynamical evolution of clusters with two stellar groups

    Energy Technology Data Exchange (ETDEWEB)

    Angeletti, L; Giannone, P. (Rome Univ. (Italy))

    1977-08-01

    The generalization of the fluid-dynamical approach from one-component star clusters to clusters with several stellar groups (as far as the star masses are concerned) has been applied to the study of two-component clusters. Rather extreme values of stellar masses and masses of groups were chosen in order to emphasize the different dynamical evolutions and asymptotic behaviors. Escape of stars from clusters and the problem of equipartition of kinetic energy between the two star groups are discussed. Comparisons of the main features of the results with those obtained by other authors have shown good agreement. Some characteristic properties of the last computed models, with an age of 18×10⁹ yr, have been pointed out and discussed in relation to some observed features of galactic globular clusters.

  19. Two stage-type railgun accelerator

    International Nuclear Information System (INIS)

    Ogino, Mutsuo; Azuma, Kingo.

    1995-01-01

    The present invention provides a two-stage railgun accelerator capable of injecting a flying body (an ice pellet formed by solidifying a gaseous hydrogen isotope fuel) into the central portion of the plasma of a thermonuclear reactor at higher speed. Namely, the two-stage railgun accelerator accelerates the flying body, injected from an initial-stage accelerator into the region between the rails, by the Lorentz force generated when electric current is supplied to the two rails by way of a plasma armature. In this case, two sets of solenoid coils are disposed for compressing the plasma armature in the longitudinal direction of the rails, and both the first and the second sets of solenoid coils are supplied with electric current in advance. After the flying body passes, the armature, formed into a plasma by a gas laser disposed behind the flying body, is compressed in the longitudinal direction of the rails by the magnetic force of the first and second sets of solenoid coils, increasing the plasma density and, simultaneously, the current density. Then, the current in the first solenoid coil is turned OFF to accelerate the flying body in two stages by the compressed plasma armature. (I.S.)

  20. Improving the collection of knowledge, attitude and practice data with community surveys: a comparison of two second-stage sampling methods.

    Science.gov (United States)

    Davis, Rosemary H; Valadez, Joseph J

    2014-12-01

    Second-stage sampling techniques, including spatial segmentation, are widely used in community health surveys when reliable household sampling frames are not available. In India, an unresearched technique for household selection is used in eight states, which samples the house with the last marriage or birth as the starting point. Users question whether this last-birth or last-marriage (LBLM) approach introduces bias affecting survey results. We conducted two simultaneous population-based surveys. One used segmentation sampling; the other used LBLM. LBLM sampling required modification before assessment was possible and a more systematic approach was tested using last birth only. We compared coverage proportions produced by the two independent samples for six malaria indicators and demographic variables (education, wealth and caste). We then measured the level of agreement between the caste of the selected participant and the caste of the health worker making the selection. No significant difference between methods was found for the point estimates of six malaria indicators, education, caste or wealth of the survey participants (range of P: 0.06 to >0.99). A poor level of agreement occurred between the caste of the health worker used in household selection and the caste of the final participant, (Κ = 0.185), revealing little association between the two, and thereby indicating that caste was not a source of bias. Although LBLM was not testable, a systematic last-birth approach was tested. If documented concerns of last-birth sampling are addressed, this new method could offer an acceptable alternative to segmentation in India. However, inter-state caste variation could affect this result. Therefore, additional assessment of last birth is required before wider implementation is recommended. Published by Oxford University Press in association with The London School of Hygiene and Tropical Medicine © The Author 2013; all rights reserved.
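The level of agreement quoted above (Κ = 0.185) is Cohen's kappa, which corrects the observed agreement between two sets of categorical labels for the agreement expected by chance. A minimal sketch follows; the caste labels below are invented for illustration, not the survey's data.

```python
from collections import Counter

def cohens_kappa(a, b):
    """Cohen's kappa: observed agreement corrected for chance agreement."""
    n = len(a)
    observed = sum(x == y for x, y in zip(a, b)) / n
    ca, cb = Counter(a), Counter(b)
    # Chance agreement: sum over categories of p_a(c) * p_b(c)
    expected = sum(ca[c] * cb[c] for c in ca) / n ** 2
    return (observed - expected) / (1 - expected)

# Illustrative caste labels: health worker making the selection vs.
# the participant finally selected.
worker = ["A", "A", "B", "B", "C", "C", "A", "B"]
participant = ["A", "B", "B", "C", "C", "A", "B", "A"]
print(round(cohens_kappa(worker, participant), 3))  # 0.048
```

Values near 0, as in the survey, indicate agreement no better than chance, which is why the authors conclude caste was not a source of selection bias.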

  1. A cluster randomized controlled trial of a clinical pathway for hospital treatment of heart failure: study design and population

    Directory of Open Access Journals (Sweden)

    Gardini Andrea

    2007-11-01

    Full Text Available Abstract Background The hospital treatment of heart failure frequently does not follow published guidelines, potentially contributing to the high morbidity, mortality and economic cost of this disorder. Consequently the development of clinical pathways has the potential to reduce the current variability in care, enhance guideline adherence, and improve outcomes for patients. Despite enthusiasm and diffusion, the widespread acceptance of clinical pathways remains questionable because very little prospective controlled data has demonstrated their effectiveness. The Experimental Prospective Study on the Effectiveness and Efficiency of the Implementation of Clinical Pathways was designed in order to conduct a rigorous evaluation of clinical pathways in the hospital treatment of acute heart failure. The primary objective of the trial was to evaluate the effectiveness of the implementation of clinical pathways for hospital treatment of heart failure in Italian hospitals. Methods/design Two-arm, cluster-randomized trial. 14 community hospitals were randomized either to arm 1 (clinical pathway: appropriate use of practice guidelines and supplies of drugs and ancillary services, new organization and procedures, patient education, etc.) or to arm 2 (no intervention, usual care). A sample of 424 patients (212 in each group) provides 80% power at the 5% significance level (two-sided). The primary outcome measure is in-hospital mortality. We will also analyze the impact of the clinical pathways by comparing the length and appropriateness of the stay, the rate of unscheduled readmissions, customer satisfaction, and the costs of treating patients with the pathways and with current practice over the whole observation period. The quality of care will be assessed by monitoring the use of diagnostic and therapeutic procedures during the hospital stay and by measuring key quality indicators at discharge.
Discussion This paper examines the design of the evaluation of a complex

  2. The effectiveness of xylitol in a school-based cluster-randomized clinical trial.

    Science.gov (United States)

    Lee, Wonik; Spiekerman, Charles; Heima, Masahiro; Eggertsson, Hafsteinn; Ferretti, Gerald; Milgrom, Peter; Nelson, Suchitra

    2015-01-01

    The purpose of this double-blind, cluster-randomized clinical trial was to examine the effects of xylitol gummy bear snacks on dental caries progression in primary and permanent teeth of inner-city school children. A total of 562 children aged 5-6 years were recruited from five elementary schools in East Cleveland, Ohio. Children were randomized by classroom to receive xylitol (7.8 g/day) or placebo (inulin fiber 20 g/day) gummy bears. Gummy bears were given three times per day for the 9-month kindergarten year within a supervised school environment. Children in both groups also received oral health education, a toothbrush and fluoridated toothpaste, topical fluoride varnish treatment and dental sealants. The numbers of new decayed, missing, and filled surfaces for primary teeth (dmfs) and permanent teeth (DMFS) from baseline to the middle of 2nd grade (exit exam) were compared between the treatment (xylitol/placebo) groups using an optimally-weighted permutation test for cluster-randomized data. The mean new d(3-6)mfs at the exit exam was 5.0 ± 7.6 and 4.0 ± 6.5 for the xylitol and placebo group, respectively. Similarly, the mean new D(3-6)MFS was 0.38 ± 0.88 and 0.48 ± 1.39 for the xylitol and placebo group, respectively. The adjusted mean difference between the two groups was not statistically significant: new d(3-6)mfs, mean 0.4 (95% CI -0.25, 0.8); new D(3-6)MFS, mean 0.16 (95% CI -0.16, 0.43). Xylitol consumption did not have additional benefit beyond other preventive measures. Caries progression in the permanent teeth of both groups was minimal, suggesting that other simultaneous prevention modalities may have masked the possible beneficial effects of xylitol in this trial. © 2014 S. Karger AG, Basel.
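The trial's optimally-weighted permutation test is specialized, but its core idea, re-randomizing which clusters (classrooms) count as treated, can be sketched in an unweighted form. The cluster means below are illustrative, not trial data.

```python
import random

def cluster_permutation_p(cluster_means, treated_ids, n_perm=10_000, seed=0):
    """Two-sided Monte Carlo permutation p-value for a difference in
    cluster-level means, permuting the treatment label across clusters."""
    rng = random.Random(seed)
    ids = list(cluster_means)
    k = len(treated_ids)

    def diff(treated):
        t = [cluster_means[c] for c in ids if c in treated]
        u = [cluster_means[c] for c in ids if c not in treated]
        return sum(t) / len(t) - sum(u) / len(u)

    observed = abs(diff(set(treated_ids)))
    # Tiny tolerance guards against float rounding in the comparison.
    hits = sum(abs(diff(set(rng.sample(ids, k)))) >= observed - 1e-9
               for _ in range(n_perm))
    return hits / n_perm

# Illustrative classroom-level mean caries increments
# (8 classrooms, 4 randomized to treatment):
means = {f"class{i}": m for i, m in enumerate(
    [5.1, 4.8, 5.3, 4.9, 4.2, 3.9, 4.4, 4.0])}
p = cluster_permutation_p(means, ["class4", "class5", "class6", "class7"])
```

Here exactly two of the C(8,4) = 70 equally likely assignments are as extreme as the observed one, so the Monte Carlo p-value settles near 2/70 ≈ 0.029. Analyzing at the cluster level keeps the test valid despite within-classroom correlation.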

  3. Clustering of samples and elements based on multi-variable chemical data

    International Nuclear Information System (INIS)

    Op de Beeck, J.

    1984-01-01

    Clustering and classification are defined in the context of multivariable chemical analysis data. Classical multivariate techniques, commonly used to interpret such data, are shown to be based on probabilistic and geometrical principles which are not justified for analytical data, since in that case one assumes or expects a system of more or less systematically related objects (samples) as defined by measurements on more or less systematically interdependent variables (elements). For the specific analytical problem of a data set comprising a large number of trace elements determined in a large number of samples, a deterministic cluster analysis can be used to develop the underlying classification structure. Three main steps can be distinguished: diagnostic evaluation and preprocessing of the raw input data; computation of a symmetric matrix of pairwise standardized dissimilarity values between all possible pairs of samples and/or elements; and an ultrametric clustering strategy to produce the final classification as a dendrogram. The software packages designed to perform these tasks are discussed and final results are given. Conclusions are formulated concerning the dangers of using multivariate clustering and classification software packages as a black box.

  4. Possible two-stage ⁸⁷Sr evolution in the Stockdale Rhyolite

    Energy Technology Data Exchange (ETDEWEB)

    Compston, W.; McDougall, I. (Australian National Univ., Canberra. Research School of Earth Sciences); Wyborn, D. (Department of Minerals and Energy, Canberra (Australia). Bureau of Mineral Resources)

    1982-12-01

    The Rb-Sr total-rock data for the Stockdale Rhyolite, of significance for the Palaeozoic time scale, are more scattered about a single-stage isochron than expected from experimental error. Two-stage ⁸⁷Sr evolution for several of the samples is explored to explain this, as an alternative to the variation in initial ⁸⁷Sr/⁸⁶Sr which is customarily used in single-stage dating models. The deletion of certain samples having very high Rb/Sr removes most of the excess scatter and leads to an estimate of 430 ± 7 m.y. for the age of extrusion. There is a younger alignment of Rb-Sr data within each sampling site at 412 ± 7 m.y. We suggest that the Stockdale Rhyolite is at least 430 m.y. old, that its original range in Rb/Sr was smaller than now observed, and that it experienced a net loss of Sr during later hydrothermal alteration at ca. 412 m.y.

  5. Effects of regularly consuming dietary fibre rich soluble cocoa products on bowel habits in healthy subjects: a free-living, two-stage, randomized, crossover, single-blind intervention

    Directory of Open Access Journals (Sweden)

    Sarriá Beatriz

    2012-04-01

    Full Text Available Abstract Background Dietary fibre is both preventive and therapeutic for functional bowel diseases. Soluble cocoa products are good sources of dietary fibre and may be supplemented with this dietary component. This study assessed the effects of regularly consuming two soluble cocoa products (A and B) with different non-starch polysaccharide (NSP) levels (15.1 and 22.0% w/w, respectively) on bowel habits, using subjective intestinal function and symptom questionnaires, a daily diary and a faecal marker in healthy individuals. Methods A free-living, two-stage, randomized, crossover, single-blind intervention was carried out in 44 healthy men and women, 18-55 y old, who had not taken dietary supplements, laxatives, or antibiotics in the six months before the start of the study. In the four-week intervention stages, separated by a three-week wash-out stage, two servings of A and B, providing 2.26 vs. 6.60 g/day of NSP respectively, were taken. In each stage, volunteers' diet was recorded using a 72-h food intake report. Results Regular consumption of cocoa A and B increased fibre intake, although only cocoa B significantly increased fibre intake (p Conclusions Regular consumption of the cocoa products increases dietary fibre intake to recommended levels, and product B improves bowel habits. The use of both objective and subjective assessments to evaluate the effects of food on bowel habits is recommended.

  6. Two-stage implant systems.

    Science.gov (United States)

    Fritz, M E

    1999-06-01

    Since the advent of osseointegration approximately 20 years ago, there has been a great deal of scientific data developed on two-stage integrated implant systems. Although these implants were originally designed primarily for fixed prostheses in the mandibular arch, they have been used in partially dentate patients, in patients needing overdentures, and in single-tooth restorations. In addition, this implant system has been placed in extraction sites, in bone-grafted areas, and in maxillary sinus elevations. Often, the documentation of these procedures has lagged. In addition, most of the reports use survival criteria to describe results, often providing overly optimistic data. It can be said that the literature describes a true adhesion of the epithelium to the implant similar to adhesion to teeth, that two-stage implants appear to have direct contact somewhere between 50% and 70% of the implant surface, that the microbial flora of the two-stage implant system closely resembles that of the natural tooth, and that the microbiology of periodontitis appears to be closely related to peri-implantitis. In evaluations of the data from implant placement in all of the above-noted situations by means of meta-analysis, it appears that there is a strong case that two-stage dental implants are successful, usually showing a confidence interval of over 90%. It also appears that the mandibular implants are more successful than maxillary implants. Studies also show that overdenture therapy is valid, and that single-tooth implants and implants placed in partially dentate mouths have a success rate that is quite good, although not quite as high as in the fully edentulous dentition. It would also appear that the potential causes of failure in the two-stage dental implant systems are peri-implantitis, placement of implants in poor-quality bone, and improper loading of implants. There are now data addressing modifications of the implant surface to alter the percentage of

  7. Standardized Effect Size Measures for Mediation Analysis in Cluster-Randomized Trials

    Science.gov (United States)

    Stapleton, Laura M.; Pituch, Keenan A.; Dion, Eric

    2015-01-01

    This article presents 3 standardized effect size measures to use when sharing results of an analysis of mediation of treatment effects for cluster-randomized trials. The authors discuss 3 examples of mediation analysis (upper-level mediation, cross-level mediation, and cross-level mediation with a contextual effect) with demonstration of the…

  8. Two-step two-stage fission gas release model

    International Nuclear Information System (INIS)

    Kim, Yong-soo; Lee, Chan-bock

    2006-01-01

    Based on the recent theoretical model, a two-step two-stage model is developed which incorporates two-stage diffusion processes, grain lattice and grain boundary diffusion, coupled with a two-step burn-up factor for the low and high burn-up regimes. The FRAPCON-3 code and its in-pile data sets have been used for the benchmarking and validation of this model. Results reveal that its predictions are in better agreement with the experimental measurements than those of any model contained in the FRAPCON-3 code, such as ANS 5.4, modified ANS 5.4, and the Forsberg-Massih model, over the whole burn-up range up to 70,000 MWd/MTU. (author)

  9. Family-based clusters of cognitive test performance in familial schizophrenia

    Directory of Open Access Journals (Sweden)

    Partonen Timo

    2004-07-01

    Full Text Available Abstract Background Cognitive traits derived from neuropsychological test data are considered to be potential endophenotypes of schizophrenia. Previously, these traits have been found to form a valid basis for clustering samples of schizophrenia patients into homogeneous subgroups. We set out to identify such clusters, but unlike previous studies, we included both schizophrenia patients and family members in the cluster analysis. The aim of the study was to detect family clusters with similar cognitive test performance. Methods Test scores from 54 randomly selected families comprising at least two siblings with schizophrenia spectrum disorders and at least two unaffected family members were included in a complete-linkage cluster analysis with interactive data visualization. Results A well-performing, an impaired, and an intermediate family cluster emerged from the analysis. While the neuropsychological test scores differed significantly between the clusters, only minor differences were observed in the clinical variables. Conclusions The visually aided clustering algorithm was successful in identifying family clusters comprising both schizophrenia patients and their relatives. The present classification method may serve as a basis for selecting phenotypically more homogeneous groups of families in subsequent genetic analyses.
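    The complete-linkage procedure used in this record can be illustrated with a minimal, dependency-free sketch. The family score vectors and the choice of three clusters below are hypothetical stand-ins, and the study's interactive visualization step is omitted:

```python
import math

def complete_linkage(points, k):
    """Agglomerative clustering with complete (maximum) linkage.

    Starts with one cluster per profile and repeatedly merges the pair
    of clusters whose farthest members are closest, until k remain.
    """
    clusters = [[i] for i in range(len(points))]

    def cluster_dist(c1, c2):
        # complete linkage: distance between the two farthest members
        return max(math.dist(points[a], points[b]) for a in c1 for b in c2)

    while len(clusters) > k:
        _, i, j = min(
            (cluster_dist(clusters[i], clusters[j]), i, j)
            for i in range(len(clusters))
            for j in range(i + 1, len(clusters))
        )
        clusters[i] += clusters[j]
        del clusters[j]
    return clusters

# Hypothetical family profiles: (mean verbal z-score, mean memory z-score)
families = [(1.2, 1.0), (1.1, 0.9),      # well-performing
            (-1.0, -1.2), (-0.9, -1.1),  # impaired
            (0.1, 0.0), (0.0, 0.2)]      # intermediate
groups = complete_linkage(families, 3)
```

    On these toy profiles the three merged groups recover the well-performing, impaired, and intermediate families.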

  10. Identifying seizure clusters in patients with psychogenic nonepileptic seizures.

    Science.gov (United States)

    Baird, Grayson L; Harlow, Lisa L; Machan, Jason T; Thomas, Dave; LaFrance, W C

    2017-08-01

    The present study explored how seizure clusters may be defined for those with psychogenic nonepileptic seizures (PNES), a topic for which there is a paucity of literature. The sample was drawn from a multisite randomized clinical trial for PNES; seizure data are from participants' seizure diaries. Three possible cluster definitions were examined: 1) the common clinical definition, where ≥3 seizures in a day is considered a cluster; along with two novel statistical definitions, where ≥3 seizures in a day are considered a cluster if the observed number of seizures statistically exceeds what would be expected relative to a patient's 2) average seizure rate prior to the trial, or 3) observed seizure rate for the previous seven days. Prevalence of clusters was 62-68% depending on the cluster definition used, and occurrence rate of clusters was 6-19% depending on the cluster definition. Based on these data, clusters seem to be common in patients with PNES, and more research is needed to identify whether clusters are related to triggers and outcomes. Copyright © 2017 Elsevier Inc. All rights reserved.
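    The statistical definitions can be made concrete with a simple exceedance test. The sketch below assumes a Poisson model for daily seizure counts and a 0.05 significance threshold; neither assumption is stated in the abstract, so treat this as one plausible reading, not the authors' method:

```python
import math

def poisson_tail(count, rate):
    # P(X >= count) for X ~ Poisson(rate): one minus the lower tail
    lower = sum(math.exp(-rate) * rate**k / math.factorial(k)
                for k in range(count))
    return 1.0 - lower

def is_cluster_day(count, reference_rate, alpha=0.05):
    """A day is a statistical cluster if it has >= 3 seizures AND that
    count is improbable under the patient's reference daily rate
    (pre-trial average, or the mean of the previous seven days)."""
    return count >= 3 and poisson_tail(count, reference_rate) < alpha

# For a patient averaging 0.5 seizures/day, 5 in one day is a cluster...
assert is_cluster_day(5, 0.5)
# ...but for a patient averaging 3/day, 3 in a day is unremarkable.
assert not is_cluster_day(3, 3.0)
```

    The second example shows why the statistical definitions flag fewer days than the fixed clinical cutoff: a frequent-seizure patient routinely exceeds three per day.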

  11. HICOSMO - cosmology with a complete sample of galaxy clusters - I. Data analysis, sample selection and luminosity-mass scaling relation

    Science.gov (United States)

    Schellenberger, G.; Reiprich, T. H.

    2017-08-01

    The X-ray regime, where the most massive visible component of galaxy clusters, the intracluster medium, is visible, offers directly measured quantities, like the luminosity, and derived quantities, like the total mass, to characterize these objects. The aim of this project is to analyse a complete sample of galaxy clusters in detail and constrain cosmological parameters, like the matter density, Ωm, or the amplitude of initial density fluctuations, σ8. The purely X-ray flux-limited sample (HIFLUGCS) consists of the 64 X-ray brightest galaxy clusters, which are excellent targets to study the systematic effects that can bias results. We analysed in total 196 Chandra observations of the 64 HIFLUGCS clusters, with a total exposure time of 7.7 Ms. Here, we present our data analysis procedure (including an automated substructure detection and an energy band optimization for surface brightness profile analysis) that gives individually determined, robust total mass estimates. These masses are tested against dynamical and Planck Sunyaev-Zeldovich (SZ) derived masses of the same clusters, where good overall agreement is found with the dynamical masses. The Planck SZ masses seem to show a mass-dependent bias to our hydrostatic masses; possible biases in this mass-mass comparison are discussed including the Planck selection function. Furthermore, we show the results for the (0.1-2.4) keV luminosity versus mass scaling relation. The overall slope of the sample (1.34) is in agreement with expectations and values from literature. Splitting the sample into galaxy groups and clusters reveals, even after a selection bias correction, that galaxy groups exhibit a significantly steeper slope (1.88) compared to clusters (1.06).
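    A scaling-relation slope like the 1.34 quoted here comes from a straight-line fit in log-log space. A minimal sketch with ordinary least squares on hypothetical masses and luminosities (the real analysis additionally corrects for selection bias and measurement errors):

```python
import math

def powerlaw_fit(masses, lums):
    """OLS fit of log10 L = a + b * log10 M; returns (a, b).

    The slope b is the power-law index of L ∝ M^b.
    """
    xs = [math.log10(m) for m in masses]
    ys = [math.log10(l) for l in lums]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b

# Hypothetical clusters drawn exactly from L = 0.01 * M^1.34
masses = [1e13, 5e13, 1e14, 5e14, 1e15]
lums = [0.01 * m ** 1.34 for m in masses]
a, b = powerlaw_fit(masses, lums)
```

    Fitting groups and clusters separately, as the paper does, amounts to running this fit on two mass-selected subsamples and comparing the recovered slopes.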

  12. Two stages of economic development

    OpenAIRE

    Gong, Gang

    2016-01-01

    This study suggests that the development process of a less-developed country can be divided into two stages, which demonstrate significantly different properties in areas such as structural endowments, production modes, income distribution, and the forces that drive economic growth. The two stages of economic development have been indicated in the growth theory of macroeconomics and in the various "turning point" theories in development economics, including Lewis's dual economy theory, Kuznet...

  13. Effectiveness of a self-management program for dual sensory impaired seniors in aged care settings: study protocol for a cluster randomized controlled trial.

    Science.gov (United States)

    Roets-Merken, Lieve M; Graff, Maud J L; Zuidema, Sytse U; Hermsen, Pieter G J M; Teerenstra, Steven; Kempen, Gertrudis I J M; Vernooij-Dassen, Myrra J F J

    2013-10-07

    Five to 25 percent of residents in aged care settings have a combined hearing and visual sensory impairment. Usual care is generally restricted to single sensory impairment, neglecting the consequences of dual sensory impairment on social participation and autonomy. The aim of this study is to evaluate the effectiveness of a self-management program for seniors who acquired dual sensory impairment at old age. In a cluster randomized, single-blind controlled trial, with aged care settings as the unit of randomization, the effectiveness of a self-management program will be compared to usual care. A minimum of 14 and maximum of 20 settings will be randomized to either the intervention cluster or the control cluster, aiming to include a total of 132 seniors with dual sensory impairment. Each senior will be linked to a licensed practical nurse working at the setting. During a five to six month intervention period, nurses at the intervention clusters will be trained in a self-management program to support and empower seniors to use self-management strategies. In two separate diaries, nurses keep track of the interviews with the seniors and their reflections on their own learning process. Nurses of the control clusters offer care as usual. At senior level, the primary outcome is the social participation of the seniors measured using the Hearing Handicap Questionnaire and the Activity Card Sort, and secondary outcomes are mood, autonomy and quality of life. At nurse level, the outcome is job satisfaction. Effectiveness will be evaluated using linear mixed model analysis. The results of this study will provide evidence for the effectiveness of the Self-Management Program for seniors with dual sensory impairment living in aged care settings. The findings are expected to contribute to the knowledge on the program's potential to enhance social participation and autonomy of the seniors, as well as increasing the job satisfaction of the licensed practical nurses. 
Furthermore, an

  14. Mutation Clusters from Cancer Exome.

    Science.gov (United States)

    Kakushadze, Zura; Yu, Willie

    2017-08-15

    We apply our statistically deterministic machine learning/clustering algorithm *K-means (recently developed in https://ssrn.com/abstract=2908286) to 10,656 published exome samples for 32 cancer types. A majority of cancer types exhibit a mutation clustering structure. Our results are in-sample stable. They are also out-of-sample stable when applied to 1389 published genome samples across 14 cancer types. In contrast, we find in- and out-of-sample instabilities in cancer signatures extracted from exome samples via nonnegative matrix factorization (NMF), a computationally-costly and non-deterministic method. Extracting stable mutation structures from exome data could have important implications for speed and cost, which are critical for early-stage cancer diagnostics, such as novel blood-test methods currently in development.
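    The clustering step can be illustrated with plain Lloyd's k-means. Note the simplification: the paper's *K-means variant adds a statistically deterministic initialization that is not reproduced here; to keep this sketch deterministic the first k points seed the centers, and the 2-D points are hypothetical stand-ins for mutation-spectrum vectors:

```python
def kmeans(points, k, iters=100):
    """Plain Lloyd's k-means on 2-D points with deterministic init."""
    centers = list(points[:k])
    groups = []
    for _ in range(iters):
        # assignment step: each point joins its nearest center
        groups = [[] for _ in range(k)]
        for p in points:
            j = min(range(k),
                    key=lambda c: (p[0] - centers[c][0]) ** 2
                                  + (p[1] - centers[c][1]) ** 2)
            groups[j].append(p)
        # update step: move each center to its group's mean
        new = [(sum(p[0] for p in g) / len(g), sum(p[1] for p in g) / len(g))
               if g else c
               for g, c in zip(groups, centers)]
        if new == centers:
            break
        centers = new
    return centers, groups

# Two hypothetical, well-separated "mutation spectrum" blobs
pts = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10)]
centers, groups = kmeans(pts, 2)
```

    The in-/out-of-sample stability test described in the abstract corresponds to fitting centers on one sample and checking that a held-out sample assigns to the same structure.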

  15. Effects of pushing techniques during the second stage of labor: A randomized controlled trial.

    Science.gov (United States)

    Koyucu, Refika Genç; Demirci, Nurdan

    2017-10-01

    Spontaneous pushing is a method used in the management of the second stage of labor and suggested to be more physiological for the mother and infant. The present study aims to evaluate the effects of pushing techniques on the mother and newborn. This randomized prospective study was performed between June 2013 and March 2014 in a tertiary maternity clinic in Istanbul. 80 low-risk, nulliparous cases were randomized to pushing groups. The Valsalva pushing group was told to hold their breath while pushing. No visual or verbal instructions were given to the spontaneous pushing group, who were encouraged to push without interrupting their breathing. Demographic data, duration of the second stage, perineal laceration rates, fetal heart rate patterns, presence of meconium-stained amniotic fluid, newborn APGAR scores, POP-Q examination and Q-tip test results were evaluated in these cases. The second stage of labor was significantly longer with spontaneous pushing. The decrease in Hb levels in the Valsalva pushing group was higher than in the spontaneous pushing group. An increased urethral mobility was observed in the Valsalva pushing group. Although the duration of the second stage of labor was longer than with the Valsalva pushing technique, women were able to give birth without requiring any verbal or visual instruction, without exceeding the two-hour limit and without compromising fetal well-being or neonatal outcomes. Copyright © 2017. Published by Elsevier B.V.

  16. Simulating star clusters with the AMUSE software framework. I. Dependence of cluster lifetimes on model assumptions and cluster dissolution modes

    International Nuclear Information System (INIS)

    Whitehead, Alfred J.; McMillan, Stephen L. W.; Vesperini, Enrico; Portegies Zwart, Simon

    2013-01-01

    We perform a series of simulations of evolving star clusters using the Astrophysical Multipurpose Software Environment (AMUSE), a new community-based multi-physics simulation package, and compare our results to existing work. These simulations model a star cluster beginning with a King model distribution and a selection of power-law initial mass functions, and contain a tidal cutoff. They are evolved using collisional stellar dynamics and include mass loss due to stellar evolution. After confirming that the differences between AMUSE results and results from previous studies are understood, we explored the variation in cluster lifetimes due to the random realization noise introduced by transforming a King model to specific initial conditions. This random realization noise can affect the lifetime of a simulated star cluster by up to 30%. Two modes of star cluster dissolution were identified: a mass evolution curve that contains a runaway cluster dissolution with a sudden loss of mass, and a dissolution mode that does not contain this feature. We refer to these dissolution modes as 'dynamical' and 'relaxation' dominated, respectively. For Salpeter-like initial mass functions, we determined the boundary between these two modes in terms of the dynamical and relaxation timescales.

  17. FunSAV: predicting the functional effect of single amino acid variants using a two-stage random forest model.

    Directory of Open Access Journals (Sweden)

    Mingjun Wang

    Full Text Available Single amino acid variants (SAVs) are the most abundant form of known genetic variation associated with human disease. Successful prediction of the functional impact of SAVs from sequences can thus lead to an improved understanding of the underlying mechanisms of why a SAV may be associated with certain diseases. In this work, we constructed a high-quality structural dataset that contained 679 high-quality protein structures with 2,048 SAVs by collecting the human genetic variant data from multiple resources and dividing them into two categories, i.e., disease-associated and neutral variants. We built a two-stage random forest (RF) model, termed FunSAV, to predict the functional effect of SAVs by combining sequence, structure and residue-contact network features with other additional features that were not explored in previous studies. Importantly, a two-step feature selection procedure was proposed to select the most important and informative features that contribute to the prediction of disease association of SAVs. In cross-validation experiments on the benchmark dataset, FunSAV achieved a good prediction performance with an area under the curve (AUC) of 0.882, which is competitive with and in some cases better than other existing tools including SIFT, SNAP, Polyphen2, PANTHER, nsSNPAnalyzer and PhD-SNP. The source code of FunSAV and the datasets can be downloaded at http://sunflower.kuicr.kyoto-u.ac.jp/sjn/FunSAV.
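    The two-stage architecture, in which the first model's prediction is fed to a second model alongside the original features, can be sketched generically. The toy below substitutes decision stumps for the random forests (an assumption made to keep the sketch dependency-free), and the feature rows and labels are hypothetical:

```python
def train_stump(X, y):
    """Best single-feature threshold classifier (a stand-in for the
    random forests used by FunSAV)."""
    best = None
    for f in range(len(X[0])):
        for t in sorted({row[f] for row in X}):
            for flip in (False, True):
                pred = [(row[f] >= t) != flip for row in X]
                acc = sum(p == bool(lbl) for p, lbl in zip(pred, y)) / len(y)
                if best is None or acc > best[0]:
                    best = (acc, f, t, flip)
    _, f, t, flip = best
    return lambda row: int((row[f] >= t) != flip)

def train_two_stage(X, y):
    """Stage 1 learns from the raw features; stage 2 learns from the
    raw features plus stage 1's prediction, mirroring the paper's
    two-stage design."""
    stage1 = train_stump(X, y)
    X2 = [row + [stage1(row)] for row in X]
    stage2 = train_stump(X2, y)
    return lambda row: stage2(row + [stage1(row)])

# Hypothetical SAV feature rows: [conservation score, buried (0/1)]
X = [[0.1, 0], [0.2, 1], [0.8, 0], [0.9, 1]]
y = [0, 0, 1, 1]  # 1 = disease-associated
model = train_two_stage(X, y)
```

    The design choice is that stage 2 can correct systematic errors of stage 1 because it sees both the raw evidence and stage 1's verdict.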

  18. Placental cord drainage in the third stage of labor: Randomized clinical trial.

    Science.gov (United States)

    Vasconcelos, Fernanda Barros; Katz, Leila; Coutinho, Isabela; Lins, Vanessa Laranjeiras; de Amorim, Melania Maria

    2018-01-01

    An open randomized clinical trial was developed at Instituto de Medicina Integral Prof. Fernando Figueira (IMIP) in Recife and at Petronila Campos Municipal Hospital in São Lourenço da Mata, both in Pernambuco, northeastern Brazil, including 226 low-risk pregnant women bearing a single, full-term, live fetus after delayed cord clamping, 113 randomized to placental cord drainage and 113 to a control group not submitted to this procedure. Women incapable of understanding the study objectives and those who went on to have an instrumental or cesarean delivery were excluded. Duration of the third stage of labor did not differ between the two groups (14.2±12.9 versus 13.7±12.1 minutes (mean ± SD), p = 0.66). Likewise, there was no significant difference in mean blood loss (248±254 versus 208±187ml, p = 0.39) or in postpartum hematocrit levels (32.3±4.06 versus 32.8±4.25mg/dl, p = 0.21). Furthermore, no differences were found between the groups for any of the secondary outcomes (postpartum hemorrhage >500 or >1000ml, therapeutic use of oxytocin, third stage >30 or 60 minutes, digital evacuation of the uterus or curettage, symptoms of postpartum anemia and maternal satisfaction). Placental cord drainage had no effect in reducing duration or blood loss during the third stage of labor. ClinicalTrials.gov: www.clinicaltrial.gov, NCT01655576.

  19. The Quality Initiative in Rectal Cancer (QIRC) trial: study protocol of a cluster randomized controlled trial in surgery

    Directory of Open Access Journals (Sweden)

    Thabane Lehana

    2008-02-01

    Full Text Available Abstract Background Two unfortunate outcomes for patients treated surgically for rectal cancer are placement of a permanent colostomy and local tumor recurrence. Total mesorectal excision is a new technique for rectal cancer surgery that can lead to improved patient outcomes. We describe a cluster randomized controlled trial that is testing whether the above patient outcomes can be improved through a knowledge translation strategy called the Quality Initiative in Rectal Cancer (QIRC) strategy. The strategy is designed to optimize the use of total mesorectal excision techniques. Methods and Design Hospitals were randomized to the QIRC strategy (experimental group) versus a normal practice environment (control group). Participating hospitals, and the respective surgeon group operating in them, are from Ontario, Canada, and have an annual procedure volume for major rectal cancer resections of 15 or greater. Patients were eligible if they underwent major rectal surgery for a diagnosis of primary rectal cancer. The surgeon-directed QIRC interventions included a workshop, use of opinion leaders, operative demonstrations, a post-operative questionnaire, and audit and feedback. For an operative demonstration, participating surgeons invited a study team surgeon to assist them with a case of rectal cancer surgery. The intent was to demonstrate total mesorectal excision techniques. Control arm surgeons received no intervention. Sample size calculations were two-sided, considered the clustering of data at the hospital level, and were driven by requirements for the outcome local recurrence. To detect an improvement in local recurrence from 20% to 8% with confidence we required 16 hospitals and 672 patients – 8 hospitals and 336 patients in each arm. Outcomes data are collected via chart review for at least 30 months after surgery. Analyses will use an intention-to-treat principle and will consider the clustering of data.
Data collection will be complete by the end of

  20. Characterisation of the early stages of solute clustering in 1Ni-1.3Mn welds containing Cu

    Energy Technology Data Exchange (ETDEWEB)

    Hyde, J.M., E-mail: jonathan.hyde@materials.ox.ac.uk [Department of Materials, University of Oxford, Parks Road, Oxford OX1 3PH (United Kingdom); National Nuclear Laboratory Ltd, B168 Harwell, Didcot, Oxon OX11 0QJ (United Kingdom); Burke, M.G. [Bechtel Bettis Inc., 814 Pittsburgh-McKeesport Blvd, West Mifflin, Pittsburgh 15122-0079 (United States); Boothby, R.M.; English, C.A. [National Nuclear Laboratory Ltd, B168 Harwell, Didcot, Oxon OX11 0QJ (United Kingdom)

    2009-04-15

    Microstructural characterisation of neutron irradiated low alloy steels is important for developing a mechanistic understanding of irradiation embrittlement. This work is focused on the early stages of irradiation-induced clustering in a low Cu (0.03 wt%), high Ni (~1 wt%) weld. The weld was irradiated at a very high dose rate and then examined by atom probe (energy-compensated position-sensitive atom probe (ECOPoSAP) and local electrode atom probe (LEAP)), with supporting microstructural information obtained by small angle neutron scattering (SANS) and positron annihilation (PALA). It was demonstrated that extreme care must be taken in optimising the parameters used to characterise the extent of clustering. This is particularly important during the early stages of irradiation damage, when the clusters are poorly defined and significant compositional variations are present in what is traditionally described as the matrix. Analysis of the irradiated materials showed increasing clustering of Cu, Mn, Ni and Si with dose. In the low Cu steel the results showed that initially the irradiation damage results in clustering of Mn, Ni and Si, but at very high doses and very high dose rates, the redistribution of Si is significantly more advanced than that of Mn and Ni.

  1. Sample size calculations for cluster randomised crossover trials in Australian and New Zealand intensive care research.

    Science.gov (United States)

    Arnup, Sarah J; McKenzie, Joanne E; Pilcher, David; Bellomo, Rinaldo; Forbes, Andrew B

    2018-06-01

    The cluster randomised crossover (CRXO) design provides an opportunity to conduct randomised controlled trials to evaluate low risk interventions in the intensive care setting. Our aim is to provide a tutorial on how to perform a sample size calculation for a CRXO trial, focusing on the meaning of the elements required for the calculations, with application to intensive care trials. We use all-cause in-hospital mortality from the Australian and New Zealand Intensive Care Society Adult Patient Database clinical registry to illustrate the sample size calculations. We show sample size calculations for a two-intervention, two 12-month period, cross-sectional CRXO trial. We provide the formulae, and examples of their use, to determine the number of intensive care units required to detect a risk ratio (RR) with a designated level of power between two interventions for trials in which the elements required for sample size calculations remain constant across all ICUs (unstratified design); and in which there are distinct groups (strata) of ICUs that differ importantly in the elements required for sample size calculations (stratified design). The CRXO design markedly reduces the sample size requirement compared with the parallel-group, cluster randomised design for the example cases. The stratified design further reduces the sample size requirement compared with the unstratified design. The CRXO design enables the evaluation of routinely used interventions that can bring about small, but important, improvements in patient care in the intensive care setting.
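    The core of such a calculation is a design effect applied to the individually randomised sample size. The sketch below assumes the common two-period, cross-sectional form DE = 1 + (m − 1)ρ_w − mρ_b, with m patients per ICU-period, within-period correlation ρ_w and between-period correlation ρ_b; the exact formulae and notation in the tutorial may differ, and all numbers in the example are hypothetical:

```python
import math

def n_individual(p1, p2, z_alpha=1.96, z_beta=0.8416):
    """Per-arm sample size to detect proportions p1 vs p2 under
    individual randomisation (normal approximation, two-sided test,
    defaults give alpha = 0.05 and 80% power)."""
    pbar = (p1 + p2) / 2
    return ((z_alpha + z_beta) ** 2 * 2 * pbar * (1 - pbar)
            / (p1 - p2) ** 2)

def crxo_design_effect(m, rho_w, rho_b):
    # assumed two-period cross-sectional CRXO design effect
    return 1 + (m - 1) * rho_w - m * rho_b

def icus_required(p1, p2, m, rho_w, rho_b):
    """Each ICU contributes m patients per period to each arm."""
    n_arm = n_individual(p1, p2) * crxo_design_effect(m, rho_w, rho_b)
    return math.ceil(n_arm / m)

# Hypothetical: in-hospital mortality 10% vs 8.5%, 100 patients per
# ICU-period, rho_w = 0.05, rho_b = 0.03
de = crxo_design_effect(100, 0.05, 0.03)
parallel_de = 1 + (100 - 1) * 0.05  # parallel cluster design, for comparison
```

    Because the between-period correlation is subtracted, the CRXO design effect (2.95 here) is well below the parallel cluster-randomised design effect (5.95), which is the sample size reduction the abstract describes.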

  2. Two stages of Kondo effect and competition between RKKY and Kondo in Gd-based intermetallic compound

    International Nuclear Information System (INIS)

    Vaezzadeh, Mehdi; Yazdani, Ahmad; Vaezzadeh, Majid; Daneshmand, Gissoo; Kanzeghi, Ali

    2006-01-01

    The magnetic behavior of the Gd-based intermetallic compound Gd₂Al₁₋ₓAuₓ, in the form of powder and needles, is investigated. All the samples have an orthorhombic crystal structure. Only the compound with x=0.4 shows the Kondo effect (the other compounds behave normally). For the compound in the form of powder with x=0.4, the susceptibility measurement χ(T) shows two different stages. Moreover, for T>T_K2 a fall in the value of χ(T) is observable, which indicates a weak presence of a ferromagnetic phase. Regarding the two stages of the Kondo effect, we observe at the first (T_K1) an increase of χ(T) and in the second stage (T_K2) a new remarkable decrease of χ(T) (T_K1>T_K2). For the sample in the form of needles, the first stage is observable only under a high magnetic field. This first stage could correspond to a narrow resonance between the Kondo cloud and itinerant electrons. The second stage, which is remarkably visible for the sample in the form of powder, can be attributed to a complete polarization of the Kondo cloud. Observation of these two Kondo stages could be due to the weak presence of an RKKY contribution.

  3. Born series for (2 cluster) → (2 cluster) scattering of two, three, and four particle Schroedinger operators

    International Nuclear Information System (INIS)

    Hagedorn, G.A.

    1979-01-01

    We investigate elastic and inelastic (2 cluster)→(2 cluster) scattering for classes of two, three, and four body Schroedinger operators H = H₀ + ΣV_ij. Formulas are derived for those generalized eigenfunctions of H which correspond asymptotically in the past to two freely moving clusters. With these eigenfunctions, we establish a formula for the (2 cluster)→(2 cluster) T-matrix and prove the convergence of a Born series for the T-matrix at high energy. (orig.)
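    For orientation, the Born series whose high-energy convergence is proved here has the familiar textbook form below; this is the generic schematic, not the paper's multichannel expressions:

```latex
% Transition operator and its Born series (schematic)
T(z) = V + V\, G_0(z)\, T(z), \qquad G_0(z) = (z - H_0)^{-1}
\quad\Longrightarrow\quad
T(z) = V + V G_0(z) V + V G_0(z) V G_0(z) V + \cdots
```

    Iterating the first identity generates the series; convergence requires the iterated terms to shrink, which is what the high-energy condition supplies.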

  4. Effectiveness of a multi-level implementation strategy for ASD interventions: study protocol for two linked cluster randomized trials.

    Science.gov (United States)

    Brookman-Frazee, Lauren; Stahmer, Aubyn C

    2018-05-09

    The Centers for Disease Control (2018) estimates that 1 in 59 children has autism spectrum disorder (ASD), and the annual cost of ASD in the U.S. is estimated to be $236 billion. Evidence-based interventions have been developed and demonstrate effectiveness in improving child outcomes. However, research on generalizable methods to scale up these practices in the multiple service systems caring for these children has been limited and is critical to meet this growing public health need. This project includes two coordinated studies testing the effectiveness of the Translating Evidence-based Interventions (EBI) for ASD: Multi-Level Implementation Strategy (TEAMS) model. TEAMS focuses on improving implementation leadership, organizational climate, and provider attitudes and motivation in order to improve two key implementation outcomes, provider training completion and intervention fidelity, and subsequent child outcomes. The TEAMS Leadership Institute applies implementation leadership strategies, and TEAMS Individualized Provider Strategies for training applies motivational interviewing strategies, to facilitate provider and organizational behavior change. A cluster randomized implementation/effectiveness hybrid (type 3) trial with a dismantling design will be used to understand the effectiveness of TEAMS and the mechanisms of change across settings and participants. Study #1 will test the TEAMS model with AIM HI (An Individualized Mental Health Intervention for ASD) in publicly funded mental health services. Study #2 will test TEAMS with CPRT (Classroom Pivotal Response Teaching) in education settings. Thirty-seven mental health programs and 37 school districts will be randomized, stratified by county and study, to one of four groups (Standard Provider Training Only, Standard Provider Training + Leader Training, Enhanced Provider Training, Enhanced Provider Training + Leader Training) to test the effectiveness of combining standard, EBI-specific training with the two TEAMS

  5. Plasmon response in K, Na and Li clusters: systematics using the separable random-phase approximation with pseudo-Hamiltonians

    International Nuclear Information System (INIS)

    Kleinig, W.; Nesterenko, V.O.; Reinhard, P.-G.; Serra, Ll.

    1998-01-01

    The systematics of the plasmon response in spherical K, Na and Li clusters over a wide size range (8≤N≤440) is studied. We have considered two simplifying approximations whose validity has been established previously. First, a separable approach to the random-phase approximation is used. This involves an expansion of the residual interaction into a sum of separable terms until convergence is reached. Second, the electron-ion interaction is modelled using the pseudo-Hamiltonian jellium model (PHJM), which includes nonlocal effects by means of realistic atomic pseudo-Hamiltonians. In cases where nonlocal effects are negligible, the Structure Averaged Jellium Model (SAJM) has been used. Good agreement with available experimental data is achieved for K and Na (using the SAJM) and small Li clusters (invoking the PHJM). The trends for peak position and width are generally well reproduced, even up to details of the Landau fragmentation in several clusters. Less good agreement, however, is found for large Li clusters. This remains an open question.
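    The "separable" expansion referred to here replaces the residual two-body interaction by a finite sum of rank-one terms; the schematic form is shown below, with generic placeholder operators Q_k and strengths λ_k rather than the authors' actual choices:

```latex
% Schematic separable expansion of the residual interaction
V_{\mathrm{res}}(\mathbf{r},\mathbf{r}') \;\approx\;
  \sum_{k=1}^{K} \lambda_k \, Q_k(\mathbf{r}) \, Q_k(\mathbf{r}')
```

    Each rank-one term reduces the RPA equations to a small algebraic problem, and K is increased until the computed response converges, which is the procedure the abstract describes.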

  6. Accuracy of the One-Stage and Two-Stage Impression Techniques: A Comparative Analysis.

    Science.gov (United States)

    Jamshidy, Ladan; Mozaffari, Hamid Reza; Faraji, Payam; Sharifi, Roohollah

    2016-01-01

    Introduction. One of the main steps of impression is the selection and preparation of an appropriate tray. Hence, the present study aimed to analyze and compare the accuracy of the one- and two-stage impression techniques. Materials and Methods. A resin laboratory-made model of the first molar was prepared by a standard method for full crowns, with a processed preparation finish line of 1 mm depth and a convergence angle of 3-4°. Impressions were made 20 times with the one-stage technique and 20 times with the two-stage technique using an appropriate tray. To measure the marginal gap, the distance between the restoration margin and the preparation finish line of the plaster dies was determined vertically in the mid mesial, distal, buccal, and lingual (MDBL) regions by a stereomicroscope using a standard method. Results. The results of the independent test showed that the mean marginal gap obtained by the one-stage impression technique was higher than that of the two-stage technique. Further, there was no significant difference between the one- and two-stage impression techniques in the mid buccal region, but a significant difference was found between the two techniques in the MDL regions and overall. Conclusion. The findings of the present study indicated higher accuracy for the two-stage impression technique than for the one-stage technique.
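    The group comparison reported here is an independent two-sample test. A minimal sketch with a pooled-variance t statistic; the micrometre gap values are invented for illustration and are not the study's data:

```python
import math
from statistics import mean, variance

def pooled_t(a, b):
    """Independent two-sample t statistic with pooled variance."""
    na, nb = len(a), len(b)
    sp2 = ((na - 1) * variance(a) + (nb - 1) * variance(b)) / (na + nb - 2)
    return (mean(a) - mean(b)) / math.sqrt(sp2 * (1 / na + 1 / nb))

# Hypothetical marginal gaps (µm): one-stage wider than two-stage
one_stage = [78, 82, 80, 79, 81] * 4   # n = 20, mean 80
two_stage = [48, 52, 50, 49, 51] * 4   # n = 20, mean 50
t = pooled_t(one_stage, two_stage)
# |t| far exceeds the ~2.02 critical value for 38 degrees of freedom,
# so a difference of this size would be declared significant.
```

    The per-region comparisons in the abstract amount to running the same test separately on the mesial, distal, buccal, and lingual measurements.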

  7. Small Launch Vehicle Design Approaches: Clustered Cores Compared with Multi-Stage Inline Concepts

    Science.gov (United States)

    Waters, Eric D.; Beers, Benjamin; Esther, Elizabeth; Philips, Alan; Threet, Grady E., Jr.

    2013-01-01

    In an effort to better define small launch vehicle design options, two approaches from the small launch vehicle trade space were investigated. The primary focus was to evaluate a clustered common core design against a purpose-built inline vehicle. Both designs focused on liquid oxygen (LOX) and rocket propellant grade kerosene (RP-1) stages, with the terminal stage later evaluated as a LOX/methane (CH4) stage. A series of performance optimization runs were done in order to minimize gross liftoff weight (GLOW), including alternative thrust levels, delivery altitude for payload, vehicle length-to-diameter ratio, alternative engine feed systems, re-evaluation of mass growth allowances, passive versus active guidance systems, and rail and tower launch methods. Additionally, manufacturability, cost, and operations play a large role in the benefits and detriments of each design. Presented here is the Advanced Concepts Office's Earth to Orbit Launch Team methodology and a high-level discussion of the performance trades and trends of both small launch vehicle solutions, along with the design philosophies that shaped both concepts. Without decreeing that one approach is better than the other, this discussion is meant to educate the community at large and let the reader determine which architecture is truly the most economical, since each path has such a unique set of limitations and potential payoffs.

  8. Comparison of single-stage and temperature-phased two-stage anaerobic digestion of oily food waste

    International Nuclear Information System (INIS)

    Wu, Li-Jie; Kobayashi, Takuro; Li, Yu-You; Xu, Kai-Qin

    2015-01-01

    Highlights: • A single-stage and two two-stage anaerobic systems were synchronously operated. • Similar methane production of 0.44 L/g VS_added from oily food waste was achieved. • The first stage of the two-stage process became inefficient due to a serious pH drop. • Recycle favored hythane production in the two-stage digestion. • The conversion of unsaturated fatty acids was enhanced by recycle introduction. - Abstract: Anaerobic digestion is an effective technology to recover energy from oily food waste. A single-stage system and temperature-phased two-stage systems with and without recycle for anaerobic digestion of oily food waste were constructed to compare operation performances. The synchronous operation indicated a similar ability to produce methane in the three systems, with a methane yield of 0.44 L/g VS_added. The pH drop to less than 4.0 in the first stage of the two-stage system without recycle resulted in poor hydrolysis, and methane or hydrogen was not produced in this stage. Alkalinity supplied from the second stage of the two-stage system with recycle improved the pH in the first stage to 5.4. Consequently, 35.3% of the particulate COD in the influent was reduced in the first stage of the two-stage system with recycle according to a COD mass balance, and hydrogen was produced at a percentage of 31.7%. Similar solids and organic matter removals were achieved in the single-stage system and the two-stage system without recycle. More lipid degradation and conversion of long-chain fatty acids were achieved in the single-stage system. Recycling was proven effective in promoting the conversion of unsaturated long-chain fatty acids into saturated fatty acids in the two-stage system.

  9. GLACE: freezing the environment of line-emitting cluster galaxies

    Science.gov (United States)

    Pintos-Castro, I.; Sánchez-Portal, M.; Cepa, J.; Povi, M.; Santos, J.; Altieri, B.; Bongiovanni, A.; Ederoclite, A.; Oteo, I.; Pérez García, A.; Pérez-Martínez, R.; Polednikova, J.; Ramón-Pérez, M.

    2015-05-01

    GLACE is performing a survey of emission-line galaxies in clusters with the main aim of studying the effect of the environment on star formation activity. The innovation of this work is the use of tunable filters in scan mode to obtain low-resolution spectra of the desired emission lines. Although the survey is in its initial stage, we have analysed two line datasets in two different clusters: Hα in Cl0024 at z = 0.4 and [O II] in RXJ1257 at z = 0.9. The first is a well-known intermediate-redshift cluster that has been used to test the observational strategy. We reached the planned SFRs and were able to deblend the [N II] component, thus discriminating the AGN population from the star-forming galaxies. The spectral resolution also allows us to exploit the data for dynamical analysis. The second target is a recently discovered cluster, which we have studied through its FIR and [O II] emission. The [O II] observations reveal a fainter and less massive sample than the FIR emitters, showing two different populations of star-forming galaxies. The cluster emitters show no evident correlation between the SFR (or sSFR) and the environment. Nevertheless, we have found that both samples, FIR and [O II] emitters, are concentrated in areas of intermediate to high local density. Additionally, we explored the morphological properties of the cluster galaxies using the non-parametric galSVM code.

  10. Improving mental health among ultra-poor children: Two-year outcomes of a cluster-randomized trial in Burkina Faso.

    Science.gov (United States)

    Ismayilova, Leyla; Karimli, Leyla; Sanson, Jo; Gaveras, Eleni; Nanema, Rachel; Tô-Camier, Alexice; Chaffin, Josh

    2018-07-01

    There is limited evidence about interventions improving child mental health in francophone West Africa. Behavioral mental health interventions alone may have limited effects on children's emotional well-being in families living in abject poverty, especially in low-income countries. This study tests the effects of an economic intervention, alone and in combination with a family-focused component, on the mental health of children from ultra-poor households in rural Burkina Faso. The three-arm cluster randomized trial included children aged 10-15 years (N = 360) from twelve villages in the Nord region of Burkina Faso (ClinicalTrial.gov ID: NCT02415933). Villages were randomized (4 villages/120 households per arm) to the waitlist arm, the economic intervention utilizing the Graduation approach (Trickle Up/TU arm), or to the economic strengthening plus family coaching component (TU + arm). Intervention effects were tested using repeated-measures mixed-effects regressions that account for the clustered nature of the data. Children from the TU + arm showed a reduction in depressive symptoms at 12 months (medium effect size Cohen's d = -0.41, p = .001) and 24 months (d = -0.39, p = .025), compared to the control condition and the economic intervention alone (at 12 months d = -0.22, p = .020). Small effect size improvements in self-esteem were detected in the TU + group, compared to the control arm at 12 months (d = 0.21) and to the TU arm at 24 months (d = 0.21). Trauma symptoms significantly reduced in the TU + group at 12 months (Incidence Risk Ratio/IRR = 0.62, 95% CI = 0.41, 0.92, p = .042), compared to the control group. Integrating a psychosocial intervention involving all family members with economic empowerment strategies may be an innovative approach for improving emotional well-being among children living in extreme poverty. Copyright © 2018 Elsevier Ltd. All rights reserved.

  11. Cluster Randomized Trial of the uptake of a take-home Infant dose ...

    African Journals Online (AJOL)

    Objective: To test whether a single take home dose of infant nevirapine increased infant uptake without decreasing institutional deliveries. Design: Cluster randomized post-test only study with control group. Setting: Ten hospitals in urban areas of Coast, Rift Valley, and Western provinces, Kenya. Participants: Pregnant ...

  12. The implementation of two stages clustering (k-means clustering and adaptive neuro fuzzy inference system) for prediction of medicine need based on medical data

    Science.gov (United States)

    Husein, A. M.; Harahap, M.; Aisyah, S.; Purba, W.; Muhazir, A.

    2018-03-01

    Medication planning aims to obtain the right types and amounts of medicine according to need and to avoid stock-outs, based on patterns of disease. Planning still relies on the ability and experience of leadership; it takes a long time, requires skill, makes definite disease data difficult to obtain, demands good record keeping and reporting, and depends on the budget. As a result, planning often does not go well, leading to frequent shortages and surpluses of medicines. In this research, we propose the Adaptive Neuro Fuzzy Inference System (ANFIS) method to predict medication needs in 2016 and 2017 based on medical data from 2015 and 2016 from two hospitals. The analysis framework uses two approaches: the first implements ANFIS directly on a data source, while the second applies ANFIS after clustering the data with the K-Means algorithm. For both approaches, Root Mean Square Error (RMSE) values are calculated for training and testing. Testing showed that the proposed method achieves better prediction rates, based on quantitative and qualitative evaluation, than existing systems; moreover, applying the K-Means algorithm before ANFIS affects the duration of the training process and provides significantly better classification accuracy than ANFIS without clustering.
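
    The two-stage idea above — cluster the historical records first, then fit a predictor within each cluster — can be sketched as follows. Since neither an ANFIS implementation nor the hospital data accompany the abstract, this minimal sketch substitutes plain k-means (Lloyd's algorithm) followed by a per-cluster linear trend model on synthetic monthly demand; all data, names, and parameter choices here are hypothetical.

```python
import numpy as np

def kmeans(X, k, iters=100, seed=0):
    """Plain Lloyd's algorithm: returns (centroids, labels)."""
    rng = np.random.default_rng(seed)
    C = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None, :] - C[None, :, :]) ** 2).sum(-1), axis=1)
        newC = np.array([X[labels == j].mean(0) if np.any(labels == j) else C[j]
                         for j in range(k)])
        if np.allclose(newC, C):
            break
        C = newC
    return C, labels

def rmse(y, yhat):
    return float(np.sqrt(np.mean((y - yhat) ** 2)))

# Synthetic monthly demand for one medicine across 24 months (hypothetical data).
rng = np.random.default_rng(1)
months = np.arange(24, dtype=float)
demand = 100 + 2.5 * months + rng.normal(0, 5, 24)
X = np.column_stack([months, demand])

# Stage 1: group the months into k demand regimes.
_, labels = kmeans(X, k=3)

# Stage 2: fit a simple linear predictor inside each cluster, standing in
# for the per-cluster ANFIS model of the paper.
preds = np.empty_like(demand)
for j in np.unique(labels):
    m = labels == j
    if m.sum() >= 2:
        a, b = np.polyfit(months[m], demand[m], 1)
        preds[m] = a * months[m] + b
    else:
        preds[m] = demand[m]  # singleton cluster: trivial exact fit

print("clustered RMSE:", rmse(demand, preds))
print("global RMSE:   ", rmse(demand, np.polyval(np.polyfit(months, demand, 1), months)))
```

    Because each cluster's own fit minimizes its local squared error, the piecewise RMSE can never exceed that of a single global fit — the same mechanism by which pre-clustering helps the ANFIS models in the paper.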

  13. Fuzzy C-Means Clustering Model Data Mining For Recognizing Stock Data Sampling Pattern

    Directory of Open Access Journals (Sweden)

    Sylvia Jane Annatje Sumarauw

    2007-06-01

    Full Text Available Abstract The capital market has been beneficial to companies and investors. For investors, the capital market provides two economic advantages, namely dividend and capital gain, and a non-economic one, a voting share in the Shareholders General Meeting. But it can also penalize the share owners. In order to protect themselves from risk, investors should predict the prospects of their companies. As a consequence of dealing in an abstract commodity, share quality is determined by the validity of the company profile information. Information on stock value fluctuation from the Jakarta Stock Exchange can be a useful consideration and a good measurement for data analysis. In the context of protecting shareholders from risk, this research focuses on stock data sample categories, or stock data sample patterns, using the Fuzzy c-Means Clustering Model, which provides useful information for investors. The research analyses stock data such as Individual Index, Volume and Amount for the Property and Real Estate Emitter Group at the Jakarta Stock Exchange from January 1 till December 31 of 2004. The mining process follows the Cross Industry Standard Process model for Data Mining (CRISP-DM), a cycle with these steps: Business Understanding, Data Understanding, Data Preparation, Modelling, Evaluation and Deployment. At the modelling step, the Fuzzy c-Means Clustering Model is applied. The Fuzzy c-Means Clustering Model can analyze stock data in a big database with many complex variables, especially for finding the data sample pattern, and then build a Fuzzy Inference System that maps inputs to outputs based on Fuzzy Logic by recognizing the pattern. Keywords: Data Mining, Fuzzy c-Means Clustering Model, Pattern Recognition
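
    The core of fuzzy c-means is the alternating update of cluster centers and soft memberships. The sketch below follows the standard FCM update equations with fuzzifier m = 2 on synthetic two-regime (index, volume) data — the paper's Jakarta Stock Exchange series are not reproduced here, so all numbers are assumptions of this illustration.

```python
import numpy as np

def fuzzy_c_means(X, c=2, m=2.0, iters=100, tol=1e-6, seed=0):
    """Standard FCM: returns (centers, membership matrix U of shape (n, c))."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), c, replace=False)]  # init from random points
    for _ in range(iters):
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        d = np.fmax(d, 1e-12)                  # avoid division by zero
        inv = d ** (-2.0 / (m - 1.0))          # u_ik ∝ d_ik^(-2/(m-1))
        U = inv / inv.sum(axis=1, keepdims=True)
        Um = U ** m
        new_centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        if np.abs(new_centers - centers).max() < tol:
            centers = new_centers
            break
        centers = new_centers
    return centers, U

# Hypothetical daily (index, volume) pairs drawn from two trading regimes.
rng = np.random.default_rng(2)
X = np.vstack([rng.normal([100, 1.0], 0.5, (30, 2)),
               rng.normal([140, 3.0], 0.5, (30, 2))])
centers, U = fuzzy_c_means(X, c=2)
print("cluster centers:\n", centers)
```

    Unlike hard k-means, every observation retains a graded membership in each cluster, which is what lets FCM express borderline trading days instead of forcing them into one category.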

  14. Evaluation of immunization coverage by lot quality assurance sampling compared with 30-cluster sampling in a primary health centre in India.

    Science.gov (United States)

    Singh, J; Jain, D C; Sharma, R S; Verghese, T

    1996-01-01

    The immunization coverage of infants, children and women residing in a primary health centre (PHC) area in Rajasthan was evaluated both by lot quality assurance sampling (LQAS) and by the 30-cluster sampling method recommended by WHO's Expanded Programme on Immunization (EPI). The LQAS survey was used to classify 27 mutually exclusive subunits of the population, defined as residents in health subcentre areas, on the basis of acceptable or unacceptable levels of immunization coverage among infants and their mothers. The LQAS results from the 27 subcentres were also combined to obtain an overall estimate of coverage for the entire population of the primary health centre, and these results were compared with the EPI cluster survey results. The LQAS survey did not identify any subcentre with a level of immunization among infants high enough to be classified as acceptable; only three subcentres were classified as having acceptable levels of tetanus toxoid (TT) coverage among women. The estimated overall coverage in the PHC population from the combined LQAS results showed that a quarter of the infants were immunized appropriately for their ages and that 46% of their mothers had been adequately immunized with TT. Although the age groups and the periods of time during which the children were immunized differed for the LQAS and EPI survey populations, the characteristics of the mothers were largely similar. About 57% (95% CI, 46-67) of them were found to be fully immunized with TT by 30-cluster sampling, compared with 46% (95% CI, 41-51) by stratified random sampling. The difference was not statistically significant. The field work to collect LQAS data took about three times longer, and cost 60% more than the EPI survey. The apparently homogeneous and low level of immunization coverage in the 27 subcentres makes this an impractical situation in which to apply LQAS, and the results obtained were therefore not particularly useful. However, if LQAS had been applied by local
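
    The LQAS classification used above reduces to a simple decision rule: sample n individuals from a lot (here, a subcentre) and classify coverage as acceptable if the number of unvaccinated individuals does not exceed a decision value d. The sketch below computes the operating characteristic of a hypothetical (n = 19, d = 3) design; these design values are illustrative, not those of the Rajasthan survey.

```python
from math import comb

def prob_accept(n, d, coverage):
    """P(at most d unvaccinated in a sample of n) at a given true coverage."""
    p_unvacc = 1.0 - coverage
    return sum(comb(n, k) * p_unvacc**k * (1 - p_unvacc)**(n - k)
               for k in range(d + 1))

# Hypothetical LQAS design: sample 19 children per subcentre and accept
# ("coverage adequate") if at most 3 are unvaccinated.
n, d = 19, 3
for cov in (0.50, 0.70, 0.80, 0.90):
    print(f"true coverage {cov:.0%}: P(classified acceptable) = {prob_accept(n, d, cov):.3f}")
```

    The two error rates — accepting a genuinely low-coverage lot and rejecting a high-coverage one — are read directly off this curve, which is how n and d are chosen before fieldwork.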

  15. Assessing the feasibility of interrupting the transmission of soil-transmitted helminths through mass drug administration: The DeWorm3 cluster randomized trial protocol.

    Science.gov (United States)

    Ásbjörnsdóttir, Kristjana Hrönn; Ajjampur, Sitara S Rao; Anderson, Roy M; Bailey, Robin; Gardiner, Iain; Halliday, Katherine E; Ibikounle, Moudachirou; Kalua, Khumbo; Kang, Gagandeep; Littlewood, D Timothy J; Luty, Adrian J F; Means, Arianna Rubin; Oswald, William; Pullan, Rachel L; Sarkar, Rajiv; Schär, Fabian; Szpiro, Adam; Truscott, James E; Werkman, Marleen; Yard, Elodie; Walson, Judd L

    2018-01-01

    Current control strategies for soil-transmitted helminths (STH) emphasize morbidity control through mass drug administration (MDA) targeting preschool- and school-age children, women of childbearing age and adults in certain high-risk occupations such as agricultural laborers or miners. This strategy is effective at reducing morbidity in those treated but, without massive economic development, is unlikely to interrupt transmission. MDA will therefore need to continue indefinitely to maintain benefit. Mathematical models suggest that transmission interruption may be achievable through MDA alone, provided that all age groups are targeted with high coverage. The DeWorm3 Project will test the feasibility of interrupting STH transmission using biannual MDA targeting all age groups. Study sites (population ≥80,000) have been identified in Benin, Malawi and India. Each site will be divided into 40 clusters, to be randomized 1:1 to three years of twice-annual community-wide MDA or standard-of-care MDA, typically annual school-based deworming. Community-wide MDA will be delivered door-to-door, while standard-of-care MDA will be delivered according to national guidelines. The primary outcome is transmission interruption of the STH species present at each site, defined as weighted cluster-level prevalence ≤2% by quantitative polymerase chain reaction (qPCR), 24 months after the final round of MDA. Secondary outcomes include the endline prevalence of STH, overall and by species, and the endline prevalence of STH among children under five as an indicator of incident infections. Secondary analyses will identify cluster-level factors associated with transmission interruption. Prevalence will be assessed using qPCR of stool samples collected from a random sample of cluster residents at baseline, six months after the final round of MDA and 24 months post-MDA. A smaller number of individuals in each cluster will be followed with annual sampling to monitor trends in

  16. An inexact mixed risk-aversion two-stage stochastic programming model for water resources management under uncertainty.

    Science.gov (United States)

    Li, W; Wang, B; Xie, Y L; Huang, G H; Liu, L

    2015-02-01

    Uncertainties exist in water resources systems, and traditional two-stage stochastic programming is risk-neutral, comparing random outcomes (e.g., total benefit) to identify the best decisions. To address risk, a risk-aversion inexact two-stage stochastic programming model is developed for water resources management under uncertainty. The model is a hybrid of interval-parameter programming, the conditional value-at-risk measure, and a general two-stage stochastic programming framework. It extends traditional two-stage stochastic programming by enabling uncertainties presented as probability density functions and discrete intervals to be effectively incorporated within the optimization framework. It can not only provide information on the benefits of the allocation plan to decision makers but also measure the extreme expected loss through the second-stage penalty cost. The developed model was applied to a hypothetical case of water resources management. Results showed that the model could help managers generate feasible and balanced risk-aversion allocation plans and analyze the trade-offs between system stability and economy.
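
    The risk-aversion idea can be illustrated with a toy two-stage problem: choose a first-stage water allocation before demand is known, pay a second-stage penalty for unmet demand, and minimize expected cost plus a weighted CVaR term. This grid-search sketch uses hypothetical costs and demand scenarios and omits the interval-parameter machinery of the paper.

```python
import numpy as np

def second_stage_penalty(x, demand, penalty=5.0):
    """Recourse cost: pay a penalty per unit of unmet demand."""
    return penalty * np.maximum(demand - x, 0.0)

def cvar(losses, alpha=0.9):
    """Conditional value-at-risk: mean of the worst (1 - alpha) tail."""
    q = np.quantile(losses, alpha)
    return float(losses[losses >= q].mean())

rng = np.random.default_rng(3)
demand = rng.normal(100, 20, 1000)     # equally likely demand scenarios
first_stage_cost = 2.0                 # cost per unit allocated up front
lam = 0.5                              # weight on the CVaR (risk-aversion) term

best = None
for x in np.arange(60, 161, 1.0):      # candidate first-stage allocations
    losses = first_stage_cost * x + second_stage_penalty(x, demand)
    obj = losses.mean() + lam * cvar(losses)
    if best is None or obj < best[1]:
        best = (x, obj)

print(f"risk-averse allocation: {best[0]:.0f} units, objective {best[1]:.1f}")
```

    Setting lam = 0 recovers the risk-neutral plan; increasing it buys protection against the worst demand scenarios at the price of a higher expected cost, which is exactly the trade-off the paper's model exposes to decision makers.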

  17. On the prior probabilities for two-stage Bayesian estimates

    International Nuclear Information System (INIS)

    Kohut, P.

    1992-01-01

    The method of Bayesian inference is reexamined for its applicability and for the required underlying assumptions in obtaining and using prior probability estimates. Two different approaches are suggested to determine the first-stage priors in the two-stage Bayesian analysis which avoid certain assumptions required by other techniques. In the first scheme, the prior is obtained through a true frequency-based distribution generated at selected intervals utilizing actual sampling of the failure rate distributions. The population variability distribution is generated as the weighted average of the frequency distributions. The second method is based on a non-parametric Bayesian approach using the Maximum Entropy Principle. Specific features such as integral properties or selected parameters of prior distributions may be obtained with minimal assumptions. It is indicated how various quantiles may also be generated with a least-squares technique.
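
    The first scheme — a population variability distribution built as a weighted average of per-source frequency distributions — can be sketched numerically. Here each plant's failure-rate distribution is represented by a gamma density on a rate grid, with exposure-weighted averaging; the gamma shape offset and all failure records are assumptions of this illustration, not values from the paper.

```python
import numpy as np
from math import gamma as gamma_fn

def gamma_pdf(lam, shape, rate):
    """Gamma density evaluated on an array of failure rates."""
    return rate**shape * lam**(shape - 1) * np.exp(-rate * lam) / gamma_fn(shape)

# Hypothetical failure records from four plants: (failures, operating hours).
records = [(2, 1.0e4), (5, 2.0e4), (1, 0.8e4), (3, 1.5e4)]
weights = np.array([t for _, t in records], dtype=float)
weights /= weights.sum()               # exposure-weighted averaging

grid = np.linspace(1e-6, 1e-3, 2000)   # failure-rate grid (per hour)
# Per-plant frequency distribution: gamma with shape = failures + 0.5
# (a Jeffreys-style offset — an assumption of this sketch) and rate = exposure.
mix = np.zeros_like(grid)
for (k, t), w in zip(records, weights):
    mix += w * gamma_pdf(grid, k + 0.5, t)

dx = grid[1] - grid[0]
print("prior integrates to ~", (mix * dx).sum())
print("prior mean failure rate:", (grid * mix * dx).sum())
```

    The resulting mixture is broader than any single plant's distribution, which is the point: it is a first-stage prior expressing plant-to-plant variability, ready to be updated with a specific plant's data in the second stage.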

  18. Decomposition and (importance) sampling techniques for multi-stage stochastic linear programs

    Energy Technology Data Exchange (ETDEWEB)

    Infanger, G.

    1993-11-01

    The difficulty of solving large-scale multi-stage stochastic linear programs arises from the sheer number of scenarios associated with numerous stochastic parameters. The number of scenarios grows exponentially with the number of stages, and problems easily get out of hand even for very moderate numbers of stochastic parameters per stage. Our method combines dual (Benders) decomposition with Monte Carlo sampling techniques. We employ importance sampling to efficiently obtain accurate estimates of both expected future costs and the gradients and right-hand sides of cuts. The method enables us to solve practical large-scale problems with many stages and numerous stochastic parameters per stage. We discuss the theory of sharing and adjusting cuts between different scenarios in a stage. We derive probabilistic lower and upper bounds, where we use importance path sampling for the upper bound estimation. Initial numerical results turned out to be promising.
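
    The benefit of importance sampling for estimating expected future costs is easiest to see in a toy example where the recourse cost is driven by rare, expensive scenarios. This sketch (not Infanger's full Benders scheme) shifts the sampling distribution toward the costly region and reweights each draw by the likelihood ratio.

```python
import numpy as np

rng = np.random.default_rng(4)

def recourse_cost(z):
    """Toy second-stage cost: expensive only for rare large shocks."""
    return np.where(z > 2.5, 100.0 * (z - 2.5), 0.0)

N = 5000
# Naive Monte Carlo under the true N(0,1) scenario distribution.
z = rng.standard_normal(N)
naive = recourse_cost(z)

# Importance sampling: draw from N(2.5, 1), which over-samples the costly
# region, and reweight by the likelihood ratio p(z)/q(z).
mu = 2.5
zq = rng.standard_normal(N) + mu
lr = np.exp(-mu * zq + 0.5 * mu**2)    # N(0,1) density / N(mu,1) density
isamp = recourse_cost(zq) * lr

print(f"naive estimate : {naive.mean():.4f} (std err {naive.std(ddof=1)/np.sqrt(N):.4f})")
print(f"IS estimate    : {isamp.mean():.4f} (std err {isamp.std(ddof=1)/np.sqrt(N):.4f})")
```

    Both estimators are unbiased for the same expectation, but the importance-sampling standard error is an order of magnitude smaller here, which is why sampling-based cut estimation in Benders decomposition becomes practical.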

  19. Could the clinical interpretability of subgroups detected using clustering methods be improved by using a novel two-stage approach?

    DEFF Research Database (Denmark)

    Kent, Peter; Stochkendahl, Mette Jensen; Wulff Christensen, Henrik

    2015-01-01

    participation, psychological factors, biomarkers and imaging. However, such ‘whole person’ research may result in data-driven subgroups that are complex, difficult to interpret and challenging to recognise clinically. This paper describes a novel approach to applying statistical clustering techniques that may...... potential benefits but requires broad testing, in multiple patient samples, to determine its clinical value. The usefulness of the approach is likely to be context-specific, depending on the characteristics of the available data and the research question being asked of it....

  20. Nano-scale clusters formed in the early stage of phase decomposition of Al-Mg-Si alloys

    Energy Technology Data Exchange (ETDEWEB)

    Hirosawa, S.; Sato, T. [Dept. of Metallurgy and Ceramics Science, Tokyo Inst. of Tech. (Japan)

    2005-07-01

    The formation of nano-scale clusters (nanoclusters) prior to the precipitation of the strengthening β″ phase significantly influences the two-step aging behavior of Al-Mg-Si alloys. In this work, the existence of two kinds of nanoclusters has been verified in the early stage of phase decomposition by differential scanning calorimetry (DSC) and three-dimensional atom probe (3DAP). Pre-aging treatment at 373 K before natural aging was also found to preferentially form one of the two nanoclusters, resulting in a remarkable restoration of age-hardenability at paint-bake temperatures. Such microstructural control by means of optimized heat treatments, i.e. nanocluster assist processing (NCAP), possesses great potential for enabling Al-Mg-Si alloys to be used more widely as a body-sheet material for automobiles. (orig.)

  1. The effectiveness of non-pyrethroid insecticide-treated durable wall lining to control malaria in rural Tanzania: study protocol for a two-armed cluster randomized trial

    Directory of Open Access Journals (Sweden)

    George Mtove

    2016-07-01

    Full Text Available Abstract Background Despite considerable reductions in malaria achieved by scaling-up long-lasting insecticidal nets (LLINs) and indoor residual spraying (IRS), maintaining sustained community protection remains operationally challenging. Increasing insecticide resistance also threatens to jeopardize the future of both strategies. Non-pyrethroid insecticide-treated wall lining (ITWL) may represent an alternative or complementary control method and a potential tool to manage insecticide resistance. To date no study has demonstrated whether ITWL can reduce malaria transmission or provide additional protection beyond the current best practice of universal coverage (UC) of LLINs and prompt case management. Methods/design A two-arm cluster randomized controlled trial will be conducted in rural Tanzania to assess whether non-pyrethroid ITWL and UC of LLINs provide added protection against malaria infection in children, compared to UC of LLINs alone. Stratified randomization based on malaria prevalence will be used to select 22 village clusters per arm. All 44 clusters will receive LLINs and half will also have ITWL installed on interior house walls. Study children, aged 6 months to 11 years old, will be enrolled from each cluster and followed monthly to estimate cumulative incidence of malaria parasitaemia (primary endpoint), time to first malaria episode and prevalence of anaemia before and after intervention. The entomological inoculation rate will be estimated using indoor CDC light traps and outdoor tent traps followed by detection of Anopheles gambiae species, sporozoite infection, insecticide resistance and blood meal source. ITWL bioefficacy and durability will be monitored using WHO cone bioassays and household surveys, respectively. Social and cultural factors influencing community and household ITWL acceptability will be explored through focus-group discussions and in-depth interviews. Cost-effectiveness, compared between study arms, will be

  2. Home-based versus mobile clinic HIV testing and counseling in rural Lesotho: a cluster-randomized trial.

    Science.gov (United States)

    Labhardt, Niklaus Daniel; Motlomelo, Masetsibi; Cerutti, Bernard; Pfeiffer, Karolin; Kamele, Mashaete; Hobbins, Michael A; Ehmer, Jochen

    2014-12-01

    The success of HIV programs relies on widely accessible HIV testing and counseling (HTC) services at health facilities as well as in the community. Home-based HTC (HB-HTC) is a popular community-based approach to reach persons who do not test at health facilities. Data comparing HB-HTC to other community-based HTC approaches are very limited. This trial compares HB-HTC to mobile clinic HTC (MC-HTC). The trial was powered to test the hypothesis of higher HTC uptake in HB-HTC campaigns than in MC-HTC campaigns. Twelve clusters were randomly allocated to HB-HTC or MC-HTC. The six clusters in the HB-HTC group received 30 1-d multi-disease campaigns (five villages per cluster) that delivered services by going door-to-door, whereas the six clusters in the MC-HTC group received campaigns involving community gatherings in the 30 villages with subsequent service provision in mobile clinics. Time allocation and human resources were standardized and equal in both groups. All individuals accessing the campaigns with unknown HIV status or whose last HIV test was >12 wk ago and was negative were eligible. All outcomes were assessed at the individual level. Statistical analysis used multivariable logistic regression. Odds ratios and p-values were adjusted for gender, age, and cluster effect. Out of 3,197 participants from the 12 clusters, 2,563 (80.2%) were eligible (HB-HTC: 1,171; MC-HTC: 1,392). The results for the primary outcomes were as follows. Overall HTC uptake was higher in the HB-HTC group than in the MC-HTC group (92.5% versus 86.7%; adjusted odds ratio [aOR]: 2.06; 95% CI: 1.18-3.60; p = 0.011). Among adolescents and adults ≥12 y, HTC uptake did not differ significantly between the two groups; among younger children, however, uptake was higher in the HB-HTC group (versus 58.7% in the MC-HTC group; aOR: 4.91; 95% CI: 2.41-10.0). Individuals in the HB-HTC and in the MC-HTC arms, respectively, linked to HIV care within 1 mo after testing positive. Findings for secondary outcomes were as follows: HB-HTC reached more first-time testers

  3. RandomSpot: A web-based tool for systematic random sampling of virtual slides.

    Science.gov (United States)

    Wright, Alexander I; Grabsch, Heike I; Treanor, Darren E

    2015-01-01

    This paper describes work presented at the Nordic Symposium on Digital Pathology 2014, Linköping, Sweden. Systematic random sampling (SRS) is a stereological tool, which provides a framework to quickly build an accurate estimation of the distribution of objects or classes within an image, whilst minimizing the number of observations required. RandomSpot is a web-based tool for SRS in stereology, which systematically places equidistant points within a given region of interest on a virtual slide. Each point can then be visually inspected by a pathologist in order to generate an unbiased sample of the distribution of classes within the tissue. Further measurements can then be derived from the distribution, such as the ratio of tumor to stroma. RandomSpot replicates the fundamental principle of traditional light microscope grid-shaped graticules, with the added benefits associated with virtual slides, such as facilitated collaboration and automated navigation between points. Once the sample points have been added to the region(s) of interest, users can download the annotations and view them locally using their virtual slide viewing software. Since its introduction, RandomSpot has been used extensively for international collaborative projects, clinical trials and independent research projects. So far, the system has been used to generate over 21,000 sample sets, and has been used to generate data for use in multiple publications, identifying significant new prognostic markers in colorectal, upper gastro-intestinal and breast cancer. Data generated using RandomSpot also has significant value for training image analysis algorithms using sample point coordinates and pathologist classifications.
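
    The grid placement RandomSpot performs can be sketched as follows: a single random offset fixes an equidistant lattice over the region of interest, so every location has the same inclusion probability — the defining property of systematic random sampling. The function name and the ROI/step values below are illustrative, not RandomSpot's actual API.

```python
import numpy as np

def systematic_random_points(x0, y0, width, height, step, seed=None):
    """Equidistant grid over a rectangular ROI with one random offset,
    giving every location equal inclusion probability (the SRS principle)."""
    rng = np.random.default_rng(seed)
    ox, oy = rng.uniform(0, step, size=2)  # one random start for the whole grid
    xs = np.arange(x0 + ox, x0 + width, step)
    ys = np.arange(y0 + oy, y0 + height, step)
    return [(x, y) for y in ys for x in xs]

# Hypothetical ROI of 2000 x 1000 virtual-slide pixels, one point every 250 px.
pts = systematic_random_points(0, 0, 2000, 1000, 250, seed=5)
print(len(pts), "sample points; first:", pts[0])
```

    Only the offset is random; once it is drawn, the point pattern is fully determined, which is what makes the resulting class-ratio estimates (e.g. tumor to stroma) unbiased while keeping the number of observations small.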

  4. Accuracy of the One-Stage and Two-Stage Impression Techniques: A Comparative Analysis

    Directory of Open Access Journals (Sweden)

    Ladan Jamshidy

    2016-01-01

    Full Text Available Introduction. One of the main steps of impression making is the selection and preparation of an appropriate tray. Hence, the present study aimed to analyze and compare the accuracy of the one- and two-stage impression techniques. Materials and Methods. A laboratory-made resin model of the first molar was prepared by a standard method for full crowns, with a prepared finish line of 1 mm depth and a convergence angle of 3-4°. Impressions were made 20 times with the one-stage technique and 20 times with the two-stage technique using an appropriate tray. To measure the marginal gap, the distance between the restoration margin and the preparation finish line of the plaster dies was determined vertically at the mid mesial, distal, buccal, and lingual (MDBL) regions by a stereomicroscope using a standard method. Results. The results of the independent t-test showed that the mean marginal gap obtained by the one-stage impression technique was higher than that of the two-stage impression technique. Further, there was no significant difference between the one- and two-stage impression techniques in the mid-buccal region, but a significant difference was found between the two impression techniques in the MDL regions and overall. Conclusion. The findings of the present study indicate higher accuracy for the two-stage impression technique than for the one-stage technique.

  5. Potts Model with Invisible Colors: Random-Cluster Representation and Pirogov–Sinai Analysis

    NARCIS (Netherlands)

    Enter, Aernout C.D. van; Iacobelli, Giulio; Taati, Siamak

    We study a recently introduced variant of the ferromagnetic Potts model consisting of a ferromagnetic interaction among q “visible” colors along with the presence of r non-interacting “invisible” colors. We introduce a random-cluster representation for the model, for which we prove the existence of

  6. Two stages of Kondo effect and competition between RKKY and Kondo in Gd-based intermetallic compound

    Energy Technology Data Exchange (ETDEWEB)

    Vaezzadeh, Mehdi [Department of Physics, K.N.Toosi University of Technology, P.O. Box 15875-4416, Tehran (Iran, Islamic Republic of)]. E-mail: mehdi@kntu.ac.ir; Yazdani, Ahmad [Tarbiat Modares University, P.O. Box 14155-4838, Tehran (Iran, Islamic Republic of); Vaezzadeh, Majid [Department of Physics, K.N.Toosi University of Technology, P.O. Box 15875-4416, Tehran (Iran, Islamic Republic of); Daneshmand, Gissoo [Department of Physics, K.N.Toosi University of Technology, P.O. Box 15875-4416, Tehran (Iran, Islamic Republic of); Kanzeghi, Ali [Department of Physics, K.N.Toosi University of Technology, P.O. Box 15875-4416, Tehran (Iran, Islamic Republic of)

    2006-05-01

    The magnetic behavior of the Gd-based intermetallic compound Gd{sub 2}Al{sub (1-x)}Au{sub x}, in the form of powder and needles, is investigated. All the samples have an orthorhombic crystal structure. Only the compound with x=0.4 shows the Kondo effect (the other compounds behave normally). For the compound in powder form with x=0.4, the susceptibility measurement χ(T) shows two different stages. Moreover, for T>T{sub K2} a fall in the value of χ(T) is observable, which indicates a weak presence of a ferromagnetic phase. Regarding the two stages of the Kondo effect, we observe in the first (T{sub K1}) an increase of χ(T) and in the second stage (T{sub K2}) a new remarkable decrease of χ(T) (T{sub K1}>T{sub K2}). For the sample in the form of needles, the first stage is observable only under a high magnetic field. This first stage could correspond to a narrow resonance between the Kondo cloud and itinerant electrons. The second stage, which is remarkably visible for the sample in powder form, can be attributed to a complete polarization of the Kondo cloud. The observation of these two Kondo stages could be due to a weak presence of the RKKY contribution.

  7. Two-Stage Centrifugal Fan

    Science.gov (United States)

    Converse, David

    2011-01-01

    Fan designs are often constrained by envelope, rotational speed, weight, and power. Aerodynamic performance and motor electrical performance are heavily influenced by rotational speed. The fan used in this work is at a practical limit for rotational speed due to motor performance characteristics, and there is no more space available in the packaging for a larger fan, yet the pressure rise requirements keep growing. The ordinary way to accommodate a higher pressure rise (DP) is to spin faster or to enlarge the fan rotor diameter. The invention is to put two radially oriented stages on a single disk. Flow enters the first stage from the center; energy is imparted to the flow in the first-stage blades, the flow is redirected some amount opposite to the direction of rotation in the fixed stators, and more energy is imparted to the flow in the second-stage blades. Without increasing either rotational speed or disk diameter, it is believed that as much as 50 percent more DP can be achieved with this design than with an ordinary, single-stage centrifugal design. This invention is useful primarily for fans having relatively low flow rates with relatively high pressure rise requirements.

  8. Sensitivity Sampling Over Dynamic Geometric Data Streams with Applications to $k$-Clustering

    OpenAIRE

    Song, Zhao; Yang, Lin F.; Zhong, Peilin

    2018-01-01

    Sensitivity-based sampling is crucial for constructing nearly-optimal coresets for $k$-means / median clustering. In this paper, we provide a novel data structure that enables sensitivity sampling over a dynamic data stream, where points from a high-dimensional discrete Euclidean space can be either inserted or deleted. Based on this data structure, we provide a one-pass coreset construction for $k$-means clustering using space $\widetilde{O}(k\mathrm{poly}(d))$ over $d$-dimen...
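
    Sensitivity sampling itself (ignoring the streaming data structure, which is the paper's contribution) can be sketched offline: upper-bound each point's sensitivity from a rough clustering, sample points with probability proportional to that bound, and weight each sampled point by the inverse of its expected pick count. The sensitivity bound below is a crude illustrative choice, not the paper's construction.

```python
import numpy as np

def coreset(X, k, m, seed=0):
    """Sensitivity-style coreset: sample m points with probability proportional
    to a crude sensitivity bound, weighting each pick by 1/(m * p_i)."""
    rng = np.random.default_rng(seed)
    # Rough centers: k uniformly chosen points (a bicriteria solution is better).
    C = X[rng.choice(len(X), k, replace=False)]
    d2 = ((X[:, None, :] - C[None, :, :]) ** 2).sum(-1).min(axis=1)
    # Sensitivity upper bound: mix of cost share and uniform share.
    s = 0.5 * d2 / max(d2.sum(), 1e-12) + 0.5 / len(X)
    p = s / s.sum()
    idx = rng.choice(len(X), size=m, replace=True, p=p)
    weights = 1.0 / (m * p[idx])
    return X[idx], weights

rng = np.random.default_rng(6)
X = rng.normal(0, 1, (2000, 2))
S, w = coreset(X, k=3, m=200)

# The weighted coreset should roughly preserve the clustering cost of any
# candidate center set; here we check it against the data mean.
center = X.mean(0, keepdims=True)
full_cost = ((X - center) ** 2).sum()
core_cost = (w * ((S - center) ** 2).sum(-1)).sum()
print(f"full cost {full_cost:.0f}, coreset estimate {core_cost:.0f}")
```

    The inverse-probability weights make the coreset cost an unbiased estimator of the full cost, so downstream $k$-means can be run on 200 weighted points instead of 2000.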

  9. Two-stage single-volume exchange transfusion in severe hemolytic disease of the newborn.

    Science.gov (United States)

    Abbas, Wael; Attia, Nayera I; Hassanein, Sahar M A

    2012-07-01

    Evaluation of two-stage single-volume exchange transfusion (TSSV-ET) in decreasing the post-exchange rebound increase in serum bilirubin level, with a subsequent reduction of the need for repeated exchange transfusions. The study included 104 neonates with hyperbilirubinemia needing exchange transfusion. They were randomly assigned to two equal groups, each comprising 52 neonates. TSSV-ET was performed in 52 neonates and the traditional single-stage double-volume exchange transfusion (SSDV-ET) in the other 52. TSSV-ET significantly lowered the rebound serum bilirubin level (12.7 ± 1.1 mg/dL) compared to SSDV-ET (17.3 ± 1.7 mg/dL), p < 0.001. The need for repeated exchange transfusions was significantly lower in the TSSV-ET group (13.5%) compared to 32.7% in the SSDV-ET group, p < 0.05. No significant difference was found between the two groups as regards morbidity (11.5% and 9.6%, respectively) or mortality (1.9% for both groups). Two-stage single-volume exchange transfusion proved more effective in reducing the rebound serum bilirubin level post-exchange and in decreasing the need for repeated exchange transfusions.

  10. Symptom Clusters in People Living with HIV Attending Five Palliative Care Facilities in Two Sub-Saharan African Countries: A Hierarchical Cluster Analysis.

    Science.gov (United States)

    Moens, Katrien; Siegert, Richard J; Taylor, Steve; Namisango, Eve; Harding, Richard

    2015-01-01

Symptom research across conditions has historically focused on single symptoms, and the burden of multiple symptoms and their interactions has been relatively neglected, especially in people living with HIV. Symptom cluster studies are required to set priorities in treatment planning, and to lessen the total symptom burden. This study aimed to identify and compare symptom clusters among people living with HIV attending five palliative care facilities in two sub-Saharan African countries. Data from cross-sectional self-report of seven-day symptom prevalence on the 32-item Memorial Symptom Assessment Scale-Short Form were used. A hierarchical cluster analysis was conducted using Ward's method with squared Euclidean distance as the similarity measure to determine the clusters. Contingency tables, chi-squared tests and ANOVA were used to compare the clusters by patient-specific characteristics and distress scores. Among the sample (N=217) the mean age was 36.5 (SD 9.0), 73.2% were female, and 49.1% were on antiretroviral therapy (ART). The cluster analysis produced five symptom clusters identified as: 1) dermatological; 2) generalised anxiety and elimination; 3) social and image; 4) persistently present; and 5) a gastrointestinal-related symptom cluster. The patients in the first three symptom clusters reported the highest physical and psychological distress scores. Patient characteristics varied significantly across the five clusters by functional status (worst functional physical status in cluster one). Further symptom cluster research in people living with HIV with longitudinally collected symptom data, to test cluster stability and identify common symptom trajectories, is recommended.
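The clustering step described in this record is straightforward to reproduce in outline. A minimal sketch with SciPy, applying Ward's method to synthetic binary symptom indicators (illustrative stand-in data, not the study's MSAS-SF responses):

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

rng = np.random.default_rng(0)

# Illustrative stand-in: 32 symptoms described by binary indicators across
# 8 hypothetical patient subgroups (not the study's MSAS-SF data).
X = rng.integers(0, 2, size=(32, 8)).astype(float)

# Ward's method merges the pair of clusters that least increases the
# within-cluster sum of squares, i.e. it operates on squared Euclidean
# distances between observations.
Z = linkage(X, method="ward")

# Cut the dendrogram into five clusters, as in the study.
labels = fcluster(Z, t=5, criterion="maxclust")
print(np.bincount(labels)[1:])   # sizes of the five symptom clusters
```

The resulting `labels` array assigns each of the 32 symptoms to one of five clusters, which can then be cross-tabulated against patient characteristics as the abstract describes.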

  11. RRW: repeated random walks on genome-scale protein networks for local cluster discovery

    Directory of Open Access Journals (Sweden)

    Can Tolga

    2009-09-01

Full Text Available Abstract Background We propose an efficient and biologically sensitive algorithm based on repeated random walks (RRW) for discovering functional modules, e.g., complexes and pathways, within large-scale protein networks. Compared to existing cluster identification techniques, RRW implicitly makes use of network topology, edge weights, and long range interactions between proteins. Results We apply the proposed technique on a functional network of yeast genes and accurately identify statistically significant clusters of proteins. We validate the biological significance of the results using known complexes in the MIPS complex catalogue database and well-characterized biological processes. We find that 90% of the created clusters have the majority of their catalogued proteins belonging to the same MIPS complex, and about 80% have the majority of their proteins involved in the same biological process. We compare our method to various other clustering techniques, such as the Markov Clustering Algorithm (MCL), and find a significant improvement in the RRW clusters' precision and accuracy values. Conclusion RRW, which is a technique that exploits the topology of the network, is more precise and robust in finding local clusters. In addition, it has the added flexibility of being able to find multi-functional proteins by allowing overlapping clusters.

  12. Multi-stage pulsed laser deposition of aluminum nitride at different temperatures

    Energy Technology Data Exchange (ETDEWEB)

    Duta, L. [National Institute for Lasers, Plasma, and Radiation Physics, 409 Atomistilor Street, 077125 Magurele (Romania); Stan, G.E. [National Institute of Materials Physics, 105 bis Atomistilor Street, 077125 Magurele (Romania); Stroescu, H.; Gartner, M.; Anastasescu, M. [Institute of Physical Chemistry “Ilie Murgulescu”, Romanian Academy, 202 Splaiul Independentei, 060021 Bucharest (Romania); Fogarassy, Zs. [Research Institute for Technical Physics and Materials Science, Hungarian Academy of Sciences, Konkoly Thege Miklos u. 29-33, H-1121 Budapest (Hungary); Mihailescu, N. [National Institute for Lasers, Plasma, and Radiation Physics, 409 Atomistilor Street, 077125 Magurele (Romania); Szekeres, A., E-mail: szekeres@issp.bas.bg [Institute of Solid State Physics, Bulgarian Academy of Sciences, Tzarigradsko Chaussee 72, Sofia 1784 (Bulgaria); Bakalova, S. [Institute of Solid State Physics, Bulgarian Academy of Sciences, Tzarigradsko Chaussee 72, Sofia 1784 (Bulgaria); Mihailescu, I.N., E-mail: ion.mihailescu@inflpr.ro [National Institute for Lasers, Plasma, and Radiation Physics, 409 Atomistilor Street, 077125 Magurele (Romania)

    2016-06-30

Highlights: • Multi-stage pulsed laser deposition of aluminum nitride at different temperatures. • 800 °C seed film boosts the subsequent growth of crystalline structures at lower temperature. • Two-stage deposited AlN samples exhibit randomly oriented wurtzite structures. • Band gap energy values increase with deposition temperature. • Correlation was observed between single- and multi-stage AlN films. - Abstract: We report on multi-stage pulsed laser deposition of aluminum nitride (AlN) on Si (1 0 0) wafers, at different temperatures. The first stage of deposition was carried out at 800 °C, the optimum temperature for AlN crystallization. In the second stage, the deposition was conducted at lower temperatures (room temperature, 350 °C or 450 °C), in ambient nitrogen, at 0.1 Pa. The synthesized structures were analyzed by grazing incidence X-ray diffraction (GIXRD), transmission electron microscopy (TEM), atomic force microscopy and spectroscopic ellipsometry (SE). GIXRD measurements indicated that the two-stage deposited AlN samples exhibited a randomly oriented wurtzite structure with nanosized crystallites. The peaks were shifted to larger angles, indicative of smaller inter-planar distances. Remarkably, TEM images demonstrated that the high-temperature AlN “seed” layers (800 °C) promoted the growth of poly-crystalline AlN structures at lower deposition temperatures. When the deposition temperature was increased, the surface roughness of the samples exhibited values in the range of 0.4–2.3 nm. SE analyses showed structures which yield band gap values within the range of 4.0–5.7 eV. A correlation between the results of single- and multi-stage AlN depositions was observed.

  13. Structuring communication relationships for interprofessional teamwork (SCRIPT: a cluster randomized controlled trial

    Directory of Open Access Journals (Sweden)

    Kenaszchuk Chris

    2007-09-01

Full Text Available Abstract Background Despite a burgeoning interest in using interprofessional approaches to promote effective collaboration in health care, systematic reviews find scant evidence of benefit. This protocol describes the first cluster randomized controlled trial (RCT) to design and evaluate an intervention intended to improve interprofessional collaborative communication and patient-centred care. Objectives The objective is to evaluate the effects of a four-component, hospital-based staff communication protocol designed to promote collaborative communication between healthcare professionals and enhance patient-centred care. Methods The study is a multi-centre mixed-methods cluster randomized controlled trial involving twenty clinical teaching teams (CTTs) in general internal medicine (GIM) divisions of five Toronto tertiary-care hospitals. CTTs will be randomly assigned either to receive an intervention designed to improve interprofessional collaborative communication, or to continue usual communication practices. Non-participant naturalistic observation, shadowing, and semi-structured, qualitative interviews were conducted to explore existing patterns of interprofessional collaboration in the CTTs, and to support intervention development. Interviews and shadowing will continue during intervention delivery in order to document interactions between the intervention settings and adopters, and changes in interprofessional communication. The primary outcome is the rate of unplanned hospital readmission. Secondary outcomes are length of stay (LOS); adherence to evidence-based prescription drug therapy; patients' satisfaction with care; self-report surveys of CTT staff perceptions of interprofessional collaboration; and frequency of calls to paging devices. Outcomes will be compared on an intention-to-treat basis using adjustment methods appropriate for data from a cluster randomized design. Discussion Pre-intervention qualitative analysis revealed that a

  14. AGN Clustering in the BAT Sample

    Science.gov (United States)

    Powell, Meredith; Cappelluti, Nico; Urry, Meg; Koss, Michael; BASS Team

    2018-01-01

We characterize the environments of local growing supermassive black holes by measuring the clustering of AGN in the Swift-BAT Spectroscopic Survey (BASS). With 548 AGN in the redshift range 0.01-0.1, cross-correlated with 2MASS galaxies, we constrain the halo occupation distribution (HOD) of the full sample with unprecedented sensitivity, as well as in bins of obscuration with matched luminosity distributions. In doing so, we find that AGN tend to reside in galaxy groups, agreeing with previous studies of AGN throughout a large range of luminosity and redshift. We also find evidence that obscured AGN tend to reside in denser environments than unobscured AGN.

  15. The Effectiveness of Healthy Start Home Visit Program: Cluster Randomized Controlled Trial

    Science.gov (United States)

    Leung, Cynthia; Tsang, Sandra; Heung, Kitty

    2015-01-01

Purpose: The study reported the effectiveness of a home visit program for disadvantaged Chinese parents with preschool children, using a cluster randomized controlled trial design. Method: Participants included 191 parents and their children from 24 preschools, with 84 dyads (12 preschools) in the intervention group and 107 dyads (12 preschools) in…

  16. Effect of Reassuring Information About Musculoskeletal and Mental Health Complaints at the Workplace: A Cluster Randomized Trial of the atWork Intervention.

    Science.gov (United States)

    Johnsen, Tone Langjordet; Eriksen, Hege Randi; Baste, Valborg; Indahl, Aage; Odeen, Magnus; Tveito, Torill Helene

    2018-05-21

Purpose The purpose of this study was to investigate the possible difference between the Modified atWork intervention (MAW) and the Original atWork intervention (OAW) on sick leave and other health-related outcomes. atWork is a group intervention using the workplace as an arena for distribution of evidence-based knowledge about musculoskeletal and mental health complaints. Methods A cluster randomized controlled trial with 93 kindergartens, comprising a total of 1011 employees, was conducted. Kindergartens were stratified by county and size and randomly allocated to MAW (45 clusters, 324 respondents) or OAW (48 clusters, 313 respondents). The randomization and intervention allocation processes were concealed. There was no blinding to group allocation. Primary outcome was register data on sick leave at cluster level. Secondary outcomes were health complaints, job satisfaction, social support, coping, and beliefs about musculoskeletal and mental health complaints, measured at the individual level. Results The MAW group reduced sick leave by 5.7% during the intervention year, while the OAW group had a 7.5% increase. Overall, the changes were not statistically significant, and no difference was detected between groups, based on 45 and 47 kindergartens. Compared to the OAW group, the MAW group had a smaller reduction for two of the statements concerning faulty beliefs about back pain, but believed less in the hereditary nature of depression. Conclusions The MAW did not have a different effect on sick leave at cluster level compared to the OAW. Trial registration https://Clinicaltrials.gov/ : NCT02396797. Registered March 23rd, 2015.

  17. Long term effectiveness on prescribing of two multifaceted educational interventions: results of two large scale randomized cluster trials.

    Directory of Open Access Journals (Sweden)

    Nicola Magrini

Full Text Available INTRODUCTION: Information on the benefits and risks of drugs is a key element affecting doctors' prescribing decisions. Outreach visits promoting independent information have proved moderately effective in changing prescribing behaviours. OBJECTIVES: To test the short- and long-term effectiveness on general practitioners' prescribing of small-group meetings led by pharmacists. METHODS: Two open cluster randomised controlled trials (RCTs) were carried out in a large-scale NHS setting. Evidence-based materials prepared ad hoc were used, following either a therapeutic area approach (TEA, with information materials on osteoporosis or prostatic hyperplasia) or a single-drug-oriented approach (SIDRO, with information materials on me-too drugs of 2 different classes: barnidipine or prulifloxacin). In each study, all 115 Primary Care Groups in a Northern Italy area (2.2 million inhabitants, 1737 general practitioners) were randomised to educational small-group meetings, in which available evidence was provided together with drug utilization data and clinical scenarios. Main outcomes were changes in the six-month prescription of targeted drugs. Longer-term results (24 and 48 months) were also evaluated. RESULTS: In the TEA trial, one of the four primary outcomes showed a reduction (prescription of alfuzosin compared to tamsulosin and terazosin in benign prostatic hyperplasia: prescribing ratio -8.5%, p = 0.03). Another primary outcome (prescription of risedronate) showed a reduction at 24 and 48 months (-7.6%, p = 0.02; and -9.8%, p = 0.03), but not at six months (-5.1%, p = 0.36). In the SIDRO trial both primary outcomes showed a statistically significant reduction (prescription of barnidipine -9.8%, p = 0.02; prescription of prulifloxacin -11.1%, p = 0.04), which persisted or increased over time. INTERPRETATION: These two cluster RCTs showed the large-scale feasibility of a complex educational program in a NHS setting, and its potentially

  18. Relationship between damage clustering and mortality in systemic lupus erythematosus in early and late stages of the disease: cluster analyses in a large cohort from the Spanish Society of Rheumatology Lupus Registry.

    Science.gov (United States)

    Pego-Reigosa, José María; Lois-Iglesias, Ana; Rúa-Figueroa, Íñigo; Galindo, María; Calvo-Alén, Jaime; de Uña-Álvarez, Jacobo; Balboa-Barreiro, Vanessa; Ibáñez Ruan, Jesús; Olivé, Alejandro; Rodríguez-Gómez, Manuel; Fernández Nebro, Antonio; Andrés, Mariano; Erausquin, Celia; Tomero, Eva; Horcada Rubio, Loreto; Uriarte Isacelaya, Esther; Freire, Mercedes; Montilla, Carlos; Sánchez-Atrio, Ana I; Santos-Soler, Gregorio; Zea, Antonio; Díez, Elvira; Narváez, Javier; Blanco-Alonso, Ricardo; Silva-Fernández, Lucía; Ruiz-Lucea, María Esther; Fernández-Castro, Mónica; Hernández-Beriain, José Ángel; Gantes-Mora, Marian; Hernández-Cruz, Blanca; Pérez-Venegas, José; Pecondón-Español, Ángela; Marras Fernández-Cid, Carlos; Ibáñez-Barcelo, Mónica; Bonilla, Gema; Torrente-Segarra, Vicenç; Castellví, Iván; Alegre, Juan José; Calvet, Joan; Marenco de la Fuente, José Luis; Raya, Enrique; Vázquez-Rodríguez, Tomás Ramón; Quevedo-Vila, Víctor; Muñoz-Fernández, Santiago; Otón, Teresa; Rahman, Anisur; López-Longo, Francisco Javier

    2016-07-01

To identify patterns (clusters) of damage manifestations within a large cohort of SLE patients and evaluate the potential association of these clusters with a higher risk of mortality. This is a multicentre, descriptive, cross-sectional study of a cohort of 3656 SLE patients from the Spanish Society of Rheumatology Lupus Registry. Organ damage was ascertained using the Systemic Lupus International Collaborating Clinics Damage Index. Using cluster analysis, groups of patients with similar patterns of damage manifestations were identified. Then, overall clusters were compared as well as the subgroup of patients within every cluster with disease duration shorter than 5 years. Three damage clusters were identified. Cluster 1 (80.6% of patients) presented a lower proportion of individuals with damage (23.2% vs 100% in clusters 2 and 3). Cluster 2 (11.4% of patients) was characterized by musculoskeletal damage in all patients. Cluster 3 (8.0% of patients) was the only group with cardiovascular damage, and this was present in all patients. The overall mortality rate of patients in clusters 2 and 3 was higher than that in cluster 1. Both in early and late stages of the disease, there was a significant association of these clusters with an increased risk of mortality. Physicians should pay special attention to the early prevention of damage in these two systems. © The Author 2016. Published by Oxford University Press on behalf of the British Society for Rheumatology. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  19. A three-stage strategy for optimal price offering by a retailer based on clustering techniques

    International Nuclear Information System (INIS)

    Mahmoudi-Kohan, N.; Shayesteh, E.; Moghaddam, M. Parsa; Sheikh-El-Eslami, M.K.

    2010-01-01

    In this paper, an innovative strategy for optimal price offering to customers for maximizing the profit of a retailer is proposed. This strategy is based on load profile clustering techniques and includes three stages. For the purpose of clustering, an improved weighted fuzzy average K-means is proposed. Also, in this paper a new acceptance function for increasing the profit of the retailer is proposed. The new method is evaluated by implementation on a group of 300 customers of a 20 kV distribution network. (author)

  20. A three-stage strategy for optimal price offering by a retailer based on clustering techniques

    Energy Technology Data Exchange (ETDEWEB)

    Mahmoudi-Kohan, N.; Shayesteh, E. [Islamic Azad University (Garmsar Branch), Garmsar (Iran); Moghaddam, M. Parsa; Sheikh-El-Eslami, M.K. [Tarbiat Modares University, Tehran (Iran)

    2010-12-15

    In this paper, an innovative strategy for optimal price offering to customers for maximizing the profit of a retailer is proposed. This strategy is based on load profile clustering techniques and includes three stages. For the purpose of clustering, an improved weighted fuzzy average K-means is proposed. Also, in this paper a new acceptance function for increasing the profit of the retailer is proposed. The new method is evaluated by implementation on a group of 300 customers of a 20 kV distribution network. (author)
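As a baseline for the load-profile clustering step these two records describe, plain K-means on synthetic daily profiles looks like this. The paper's improved weighted fuzzy average K-means is not reproduced here; the data shapes and the fixed initialization are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic daily load profiles (24 hourly readings) for 300 customers:
# residential evening peak, commercial daytime peak, flat industrial load.
hours = np.arange(24)
shapes = [np.exp(-0.5 * ((hours - 19) / 2.5) ** 2),   # evening peak
          np.exp(-0.5 * ((hours - 13) / 3.0) ** 2),   # daytime peak
          np.full(24, 0.6)]                           # flat load
X = np.concatenate([s + rng.normal(0, 0.05, (100, 24)) for s in shapes])

def kmeans(X, C, iters=20):
    """Plain Lloyd iterations from the given initial centers C."""
    for _ in range(iters):
        d2 = ((X[:, None, :] - C[None, :, :]) ** 2).sum(-1)
        a = d2.argmin(1)
        C = np.stack([X[a == j].mean(0) for j in range(len(C))])
    return C, a

# Deliberately simple deterministic init: one profile of each customer type.
centers, labels = kmeans(X, X[[0, 100, 200]])
print(np.bincount(labels))   # customers per representative profile
```

Each resulting center is a representative load profile, the object a retailer's price-offering stage would work from; the papers replace this hard-assignment step with a weighted fuzzy average variant.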

  1. Analysis of Aspects of Innovation in a Brazilian Cluster

    Directory of Open Access Journals (Sweden)

    Adriana Valélia Saraceni

    2012-09-01

Full Text Available Innovation through clustering has become very important given the increased significance that interaction holds for concepts of innovation and the learning process. This study aims to identify what a case analysis of the innovation process in a cluster reveals about the learning process. The study is developed in two stages. In the first, we use a preliminary case study to examine a cluster innovation analysis and its Innovation Index, exploring a combined body of theory and practice. The second stage explores the concept of the learning process. Both stages allowed us to build a theoretical model for the development of the learning process in clusters. The main result of the model is a mechanism for implementing improvements in clusters when case studies are applied.

  2. Complementary feeding: a Global Network cluster randomized controlled trial

    Directory of Open Access Journals (Sweden)

    Pasha Omrana

    2011-01-01

Full Text Available Abstract Background Inadequate and inappropriate complementary feeding are major factors contributing to excess morbidity and mortality in young children in low resource settings. Animal source foods in particular are cited as essential to achieve micronutrient requirements. The efficacy of the recommendation for regular meat consumption, however, has not been systematically evaluated. Methods/Design A cluster randomized efficacy trial was designed to test the hypothesis that 12 months of daily intake of beef added as a complementary food would result in greater linear growth velocity than a micronutrient fortified equi-caloric rice-soy cereal supplement. The study is being conducted in 4 sites of the Global Network for Women's and Children's Health Research located in Guatemala, Pakistan, the Democratic Republic of the Congo (DRC) and Zambia in communities with toddler stunting rates of at least 20%. Five clusters per country were randomized to each of the food arms, with 30 infants in each cluster. The daily meat or cereal supplement was delivered to the home by community coordinators, starting when the infants were 6 months of age and continuing through 18 months. All participating mothers received nutrition education messages to enhance complementary feeding practices delivered by study coordinators and through posters at the local health center. Outcome measures, obtained at 6, 9, 12, and 18 months by a separate assessment team, included anthropometry; dietary variety and diversity scores; biomarkers of iron, zinc and Vitamin B12 status (18 months); neurocognitive development (12 and 18 months); and incidence of infectious morbidity throughout the trial. The trial was supervised by a trial steering committee, and an independent data monitoring committee provided oversight for the safety and conduct of the trial. Discussion Findings from this trial will test the efficacy of daily intake of meat commencing at age 6 months and, if beneficial, will

  3. Cluster Analysis of the Yale Global Tic Severity Scale (YGTSS): Symptom Dimensions and Clinical Correlates in an Outpatient Youth Sample

    OpenAIRE

    Kircanski, Katharina; Woods, Douglas W.; Chang, Susanna W.; Ricketts, Emily J.; Piacentini, John C.

    2010-01-01

Tic disorders are heterogeneous, with symptoms varying widely both within and across patients. Exploration of symptom clusters may aid in the identification of symptom dimensions of empirical and treatment import. This article presents the results of two studies investigating tic symptom clusters using a sample of 99 youth (M age = 10.7, 81% male, 77% Caucasian) diagnosed with a primary tic disorder (Tourette's disorder or chronic tic disorder), across two university-based outpatient clinics ...

  4. Cluster Analysis of the Yale Global Tic Severity Scale (YGTSS): Symptom Dimensions and Clinical Correlates in an Outpatient Youth Sample

    Science.gov (United States)

    Kircanski, Katharina; Woods, Douglas W.; Chang, Susanna W.; Ricketts, Emily J.; Piacentini, John C.

    2010-01-01

    Tic disorders are heterogeneous, with symptoms varying widely both within and across patients. Exploration of symptom clusters may aid in the identification of symptom dimensions of empirical and treatment import. This article presents the results of two studies investigating tic symptom clusters using a sample of 99 youth (M age = 10.7, 81% male,…

  5. Efficacy of a workplace osteoporosis prevention intervention: a cluster randomized trial

    Directory of Open Access Journals (Sweden)

    Ai May Tan

    2016-08-01

    Full Text Available Abstract Background Osteoporosis is a debilitating disease. Adequate calcium consumption and physical activity are the two major modifiable risk factors. This paper describes the major outcomes and efficacy of a workplace-based targeted behaviour change intervention to improve the dietary and physical activity behaviours of working women in sedentary occupations in Singapore. Methods A cluster-randomized design was used, comparing the efficacy of a tailored intervention to standard care. Workplaces were the units of randomization and intervention. Sixteen workplaces were recruited from a pool of 97, and randomly assigned to intervention and control arms (eight workplaces in each. Women meeting specified inclusion criteria were then recruited to participate. Workplaces in the intervention arm received three participatory workshops and organization-wide educational activities. Workplaces in the control/standard care arm received print resources. Outcome measures were calcium intake (milligrams/day and physical activity level (duration: minutes/week, measured at baseline, 4 weeks and 6 months post intervention. Adjusted cluster-level analyses were conducted comparing changes in intervention versus control groups, following intention-to-treat principles and CONSORT guidelines. Results Workplaces in the intervention group reported a significantly greater increase in calcium intake and duration of load-bearing moderate to vigorous physical activity (MVPA compared with the standard care control group. Four weeks after intervention, the difference in adjusted mean calcium intake was 343.2 mg/day (95 % CI = 337.4 to 349.0, p < .0005 and the difference in adjusted mean load-bearing MVPA was 55.6 min/week (95 % CI = 54.5 to 56.6, p < .0005. Six months post intervention, the mean differences attenuated slightly to 290.5 mg/day (95 % CI = 285.3 to 295.7, p < .0005 and 50.9 min/week (95 % CI =49.3 to 52.6, p < .0005

  6. Informing resource-poor populations and the delivery of entitled health and social services in rural India: a cluster randomized controlled trial.

    Science.gov (United States)

    Pandey, Priyanka; Sehgal, Ashwini R; Riboud, Michelle; Levine, David; Goyal, Madhav

    2007-10-24

A lack of awareness about entitled health and social services may contribute to poor delivery of such services in developing countries, especially among individuals of low socioeconomic status. To determine the impact of informing resource-poor rural populations about entitled services. Community-based, cluster randomized controlled trial conducted from May 2004 to May 2005 in 105 randomly selected village clusters in Uttar Pradesh state in India. Households (548 intervention and 497 control) were selected by a systematic sampling design, including both low-caste and mid- to high-caste households. Four to 6 public meetings were held in each intervention village cluster to disseminate information on entitled health services, entitled education services, and village governance requirements. No intervention took place in control village clusters. Visits by nurse midwife; prenatal examinations, tetanus vaccinations, and prenatal supplements received by pregnant women; vaccinations received by infants; excess school fees charged; occurrence of village council meetings; and development work in villages. At baseline, there were no significant differences in self-reported delivery of health and social services. After 1 year, intervention villagers reported better delivery of several services compared with control villagers: in a multivariate analysis, 30% more prenatal examinations (95% confidence interval [CI], 17%-43%). Informing resource-poor rural populations in India about entitled services enhanced the delivery of health and social services among both low- and mid- to high-caste households. Interventions that emphasize educating resource-poor populations about entitled services may improve the delivery of such services. clinicaltrials.gov Identifier: NCT00421291.

  7. Creel survey sampling designs for estimating effort in short-duration Chinook salmon fisheries

    Science.gov (United States)

    McCormick, Joshua L.; Quist, Michael C.; Schill, Daniel J.

    2013-01-01

    Chinook Salmon Oncorhynchus tshawytscha sport fisheries in the Columbia River basin are commonly monitored using roving creel survey designs and require precise, unbiased catch estimates. The objective of this study was to examine the relative bias and precision of total catch estimates using various sampling designs to estimate angling effort under the assumption that mean catch rate was known. We obtained information on angling populations based on direct visual observations of portions of Chinook Salmon fisheries in three Idaho river systems over a 23-d period. Based on the angling population, Monte Carlo simulations were used to evaluate the properties of effort and catch estimates for each sampling design. All sampling designs evaluated were relatively unbiased. Systematic random sampling (SYS) resulted in the most precise estimates. The SYS and simple random sampling designs had mean square error (MSE) estimates that were generally half of those observed with cluster sampling designs. The SYS design was more efficient (i.e., higher accuracy per unit cost) than a two-cluster design. Increasing the number of clusters available for sampling within a day decreased the MSE of estimates of daily angling effort, but the MSE of total catch estimates was variable depending on the fishery. The results of our simulations provide guidelines on the relative influence of sample sizes and sampling designs on parameters of interest in short-duration Chinook Salmon fisheries.
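The simulation logic, generating a fixed population of interval counts and comparing sampling designs by bias and mean square error over repeated draws, can be sketched as follows (hypothetical count data, not the study's fisheries observations):

```python
import numpy as np

rng = np.random.default_rng(3)

# Fixed population: angler counts for 48 half-hour intervals in one day,
# with a mid-day peak (hypothetical fishery, not the study's data).
t = np.arange(48)
counts = rng.poisson(5 * np.exp(-0.5 * ((t - 28) / 8.0) ** 2) + 1)
true_effort = counts.sum()

n, reps = 8, 5000            # intervals sampled per day; Monte Carlo reps
step = len(counts) // n

def estimate(idx):
    """Expand the mean sampled count to a daily effort estimate."""
    return counts[idx].mean() * len(counts)

srs, systematic, cluster = [], [], []
for _ in range(reps):
    # Simple random sampling of intervals.
    srs.append(estimate(rng.choice(len(counts), n, replace=False)))
    # Systematic sampling: random start, then every 6th interval.
    systematic.append(estimate(np.arange(rng.integers(step), len(counts), step)))
    # Cluster sampling: one random block of n consecutive intervals.
    s = rng.integers(len(counts) - n + 1)
    cluster.append(estimate(np.arange(s, s + n)))

for name, est in [("SRS", srs), ("systematic", systematic), ("cluster", cluster)]:
    e = np.asarray(est)
    print(name, "bias:", e.mean() - true_effort,
          "MSE:", ((e - true_effort) ** 2).mean())
```

Because angler counts trend smoothly over the day, systematic samples cover the peak evenly and yield the lowest MSE, while a single block of consecutive intervals either hits or misses the peak, mirroring the study's finding that systematic designs outperform cluster designs.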

  8. Applications of Cluster Analysis to the Creation of Perfectionism Profiles: A Comparison of two Clustering Approaches

    Directory of Open Access Journals (Sweden)

    Jocelyn H Bolin

    2014-04-01

Full Text Available Although traditional clustering methods (e.g., K-means) have been shown to be useful in the social sciences, it is often difficult for such methods to handle situations where clusters in the population overlap or are ambiguous. Fuzzy clustering, a method already recognized in many disciplines, provides a more flexible alternative to these traditional clustering methods. Fuzzy clustering differs from other traditional clustering methods in that it allows for a case to belong to multiple clusters simultaneously. Unfortunately, fuzzy clustering techniques remain relatively unused in the social and behavioral sciences. The purpose of this paper is to introduce fuzzy clustering to these audiences who are currently relatively unfamiliar with the technique. In order to demonstrate the advantages associated with this method, cluster solutions of a common perfectionism measure were created using both fuzzy clustering and K-means clustering, and the results compared. Results of these analyses reveal that different cluster solutions are found by the two methods, and the similarity between the different clustering solutions depends on the amount of cluster overlap allowed for in fuzzy clustering.

  9. Applications of cluster analysis to the creation of perfectionism profiles: a comparison of two clustering approaches.

    Science.gov (United States)

    Bolin, Jocelyn H; Edwards, Julianne M; Finch, W Holmes; Cassady, Jerrell C

    2014-01-01

    Although traditional clustering methods (e.g., K-means) have been shown to be useful in the social sciences it is often difficult for such methods to handle situations where clusters in the population overlap or are ambiguous. Fuzzy clustering, a method already recognized in many disciplines, provides a more flexible alternative to these traditional clustering methods. Fuzzy clustering differs from other traditional clustering methods in that it allows for a case to belong to multiple clusters simultaneously. Unfortunately, fuzzy clustering techniques remain relatively unused in the social and behavioral sciences. The purpose of this paper is to introduce fuzzy clustering to these audiences who are currently relatively unfamiliar with the technique. In order to demonstrate the advantages associated with this method, cluster solutions of a common perfectionism measure were created using both fuzzy clustering and K-means clustering, and the results compared. Results of these analyses reveal that different cluster solutions are found by the two methods, and the similarity between the different clustering solutions depends on the amount of cluster overlap allowed for in fuzzy clustering.
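The contrast these two records draw between hard and fuzzy assignments is easy to demonstrate: a point midway between two clusters gets a single K-means label but roughly equal fuzzy c-means memberships. A minimal numpy implementation of the standard fuzzy c-means updates (synthetic data and fuzzifier m = 2 are illustrative assumptions, not the perfectionism measure analyzed in the paper):

```python
import numpy as np

rng = np.random.default_rng(4)

# Two groups of 2-D scores, plus one point midway between them.
A = rng.normal([0.0, 0.0], 0.5, (50, 2))
B = rng.normal([3.0, 0.0], 0.5, (50, 2))
X = np.vstack([A, B, [[1.5, 0.0]]])

def fuzzy_c_means(X, c, m=2.0, iters=100, seed=0):
    """Standard fuzzy c-means with fuzzifier m (alternating updates)."""
    rng = np.random.default_rng(seed)
    U = rng.dirichlet(np.ones(c), size=len(X))     # random initial memberships
    for _ in range(iters):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(0)[:, None]  # membership-weighted means
        d = np.linalg.norm(X[:, None] - centers[None], axis=-1) + 1e-12
        inv = d ** (-2.0 / (m - 1.0))              # membership update rule
        U = inv / inv.sum(1, keepdims=True)
    return U, centers

U, centers = fuzzy_c_means(X, c=2)
# K-means would force the midway point into a single cluster; fuzzy c-means
# assigns it roughly equal membership in both.
print(U[-1])
```

Each row of `U` sums to one, so a case's memberships can be read as its degree of belonging to every cluster, which is exactly the flexibility the paper recommends for overlapping profiles.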

  10. Eliminating Survivor Bias in Two-stage Instrumental Variable Estimators.

    Science.gov (United States)

    Vansteelandt, Stijn; Walter, Stefan; Tchetgen Tchetgen, Eric

    2018-07-01

    Mendelian randomization studies commonly focus on elderly populations. This makes the instrumental variables analysis of such studies sensitive to survivor bias, a type of selection bias. A particular concern is that the instrumental variable conditions, even when valid for the source population, may be violated for the selective population of individuals who survive the onset of the study. This is potentially very damaging because Mendelian randomization studies are known to be sensitive to bias due to even minor violations of the instrumental variable conditions. Interestingly, the instrumental variable conditions continue to hold within certain risk sets of individuals who are still alive at a given age when the instrument and unmeasured confounders exert additive effects on the exposure, and moreover, the exposure and unmeasured confounders exert additive effects on the hazard of death. In this article, we will exploit this property to derive a two-stage instrumental variable estimator for the effect of exposure on mortality, which is insulated against the above described selection bias under these additivity assumptions.

  11. Clustering algorithm in initialization of multi-hop wireless sensor networks

    NARCIS (Netherlands)

    Guo, Peng; Tao, Jiang; Zhang, Kui; Chen, Hsiao-Hwa

    2009-01-01

    In most application scenarios of wireless sensor networks (WSN), sensor nodes are usually deployed randomly and do not have any knowledge about the network environment or even their IDs at the initial stage of their operations. In this paper, we address the clustering problems with a newly deployed

  12. Directional clustering in highest energy cosmic rays

    International Nuclear Information System (INIS)

    Goldberg, Haim; Weiler, Thomas J.

    2001-01-01

    An unexpected degree of small-scale clustering is observed in the highest-energy cosmic ray events. Some directional clustering can be expected due to purely statistical fluctuations for sources distributed randomly in the sky. This creates a background for events originating in clustered sources. We derive analytic formulas to estimate the probability of random cluster configurations, and use these formulas to study the strong potential of the HiRes, Auger, Telescope Array and EUSO-OWL-AirWatch facilities for deciding whether any observed clustering is most likely due to nonrandom sources. For a detailed comparison to data, our analytical approach cannot compete with Monte Carlo simulations, including experimental systematics. However, our derived formulas do offer two advantages: (i) easy assessment of the significance of any observed clustering, and most importantly, (ii) an explicit dependence of cluster probabilities on the chosen angular bin size.
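The dependence of chance-clustering probabilities on the chosen angular bin size can be illustrated with a generalized birthday-problem calculation over equal-probability angular bins (a simplified stand-in for the paper's analytic formulas; the event and bin counts below are hypothetical):

```python
import random

def p_all_singlets(n_events, n_bins):
    """Exact probability that n events land in n distinct equal-probability
    angular bins (the generalized birthday problem)."""
    p = 1.0
    for k in range(n_events):
        p *= (n_bins - k) / n_bins
    return p

def mc_p_doublet(n_events, n_bins, trials=20000, seed=0):
    """Monte Carlo probability of at least one doublet (two events in one bin)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        bins = [rng.randrange(n_bins) for _ in range(n_events)]
        if len(set(bins)) < n_events:
            hits += 1
    return hits / trials

n_events, n_bins = 27, 5000        # hypothetical event count and bin count
p_exact = 1.0 - p_all_singlets(n_events, n_bins)
print(p_exact, mc_p_doublet(n_events, n_bins))
```

Coarsening the bins (smaller `n_bins`) raises the probability of purely statistical doublets, which is exactly the bin-size dependence the abstract emphasizes.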

  13. Randomized branch sampling to estimate fruit production in Pecan trees cv. ‘Barton’

    Directory of Open Access Journals (Sweden)

    Filemom Manoel Mokochinski

    Full Text Available ABSTRACT: Sampling techniques to quantify the production of fruits are still very scarce and create a gap in crop development research. This study was conducted in a rural property in the county of Cachoeira do Sul - RS to estimate the efficiency of randomized branch sampling (RBS) in quantifying the production of pecan fruit at three different ages (5, 7 and 10 years). Two selection techniques were tested: the probability proportional to the diameter (PPD) and the uniform probability (UP) techniques, which were performed on nine trees, three from each age and randomly chosen. The RBS underestimated fruit production for all ages, and its main drawback was the high sampling error (125.17% for PPD and 111.04% for UP). The UP was regarded as more efficient than the PPD, though both techniques estimated similar production and similar experimental errors. In conclusion, branch sampling was inaccurate for this case study, and new studies are required to produce estimates with smaller sampling errors.
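Randomized branch sampling with probability proportional to diameter (PPD) or uniform probability (UP) can be sketched as a Horvitz-Thompson estimator over one random root-to-terminal path (the branch structure and fruit counts below are invented for illustration, not the study's trees):

```python
import random

# toy branch structure: every branch has a diameter; terminal branches carry
# a fruit count (all numbers hypothetical)
tree = {
    "trunk": {"diam": 10, "children": ["B1", "B2"]},
    "B1": {"diam": 6, "children": [], "fruit": 40},
    "B2": {"diam": 8, "children": ["B3", "B4"]},
    "B3": {"diam": 5, "children": [], "fruit": 25},
    "B4": {"diam": 4, "children": [], "fruit": 15},
}

def rbs_estimate(tree, root="trunk", by_diameter=True, rng=random):
    """Walk one random root-to-terminal path and return the Horvitz-Thompson
    estimate: terminal fruit count divided by the path's selection probability."""
    node, prob = root, 1.0
    while tree[node]["children"]:
        kids = tree[node]["children"]
        # PPD: probability proportional to diameter; UP: uniform probability
        w = [tree[k]["diam"] for k in kids] if by_diameter else [1.0] * len(kids)
        chosen = rng.choices(kids, weights=w)[0]
        prob *= w[kids.index(chosen)] / sum(w)
        node = chosen
    return tree[node]["fruit"] / prob

random.seed(0)
est = [rbs_estimate(tree) for _ in range(10000)]
print(sum(est) / len(est))   # unbiased in expectation for the true total of 80 fruits
```

The estimator is unbiased but, as the study observed, a single path can have high variance; in practice many paths are averaged.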

  14. Genetic diversity of two Tunisian sheep breeds using random ...

    African Journals Online (AJOL)

    Random amplified polymorphic DNA (RAPD) markers were used to study genetic diversity and population structure in six sheep populations belonging to two native Tunisian breeds (the Barbarine and the Western thin tail). A total of 96 samples were typed using eight RAPD primers. 62 bands were scored, of which 44 ...

  15. Protogalaxy interactions in newly formed clusters: Galaxy luminosities, colors, and intergalactic gas

    International Nuclear Information System (INIS)

    Silk, J.

    1978-01-01

    The role of protogalaxy interactions in galactic evolution is studied during the formation of galaxy clusters. In the early stages of the collapse, coalescent encounters of protogalaxies lead to the development of a galactic luminosity function. Once galaxies acquire appreciable random motions, mutual collisions between galaxies in rich clusters will trigger the collapse of interstellar clouds to form stars. This provides both a source for enriched intracluster gas and an interpretation of the correlation between luminosity and color for cluster elliptical galaxies. Other observational consequences that are considered include optical, X-ray, and diffuse nonthermal radio emission from newly formed clusters of galaxies

  16. A Two-Stage Framework for 3D Face Reconstruction from RGBD Images.

    Science.gov (United States)

    Wang, Kangkan; Wang, Xianwang; Pan, Zhigeng; Liu, Kai

    2014-08-01

    This paper proposes a new approach for 3D face reconstruction with RGBD images from an inexpensive commodity sensor. The challenges we face are: 1) substantial random noise and corruption are present in low-resolution depth maps; and 2) there is a high degree of variability in pose and face expression. We develop a novel two-stage algorithm that effectively maps low-quality depth maps to realistic face models. Each stage is targeted toward a certain type of noise. The first stage extracts sparse errors from depth patches through the data-driven local sparse coding, while the second stage smooths noise on the boundaries between patches and reconstructs the global shape by combining local shapes using our template-based surface refinement. Our approach does not require any markers or user interaction. We perform quantitative and qualitative evaluations on both synthetic and real test sets. Experimental results show that the proposed approach is able to produce high-resolution 3D face models with high accuracy, even if inputs are of low quality and have large variations in viewpoint and face expression.

  17. Evaluation of primary immunization coverage of infants under universal immunization programme in an urban area of Bangalore city using cluster sampling and lot quality assurance sampling techniques.

    Science.gov (United States)

    K, Punith; K, Lalitha; G, Suman; Bs, Pradeep; Kumar K, Jayanth

    2008-07-01

    Research question: Is the LQAS technique better than the cluster sampling technique in terms of resources needed to evaluate immunization coverage in an urban area? Objective: To assess and compare lot quality assurance sampling against cluster sampling in the evaluation of primary immunization coverage. Study design: Population-based cross-sectional study. Setting: Areas under Mathikere Urban Health Center. Subjects: Children aged 12 months to 23 months. Sample size: 220 in cluster sampling, 76 in lot quality assurance sampling. Statistical analysis: Percentages and proportions, chi-square test. Results: (1) Using cluster sampling, the percentages of completely immunized, partially immunized and unimmunized children were 84.09%, 14.09% and 1.82%, respectively; with lot quality assurance sampling, they were 92.11%, 6.58% and 1.31%, respectively. (2) Immunization coverage levels as evaluated by cluster sampling were not statistically different from the coverage values obtained by lot quality assurance sampling. Considering the time and resources required, lot quality assurance sampling was found to be the better technique for evaluating primary immunization coverage in an urban area.
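The LQAS decision rule underlying such comparisons can be sketched as a binomial calculation; the lot size, decision value and coverage thresholds below are the classic 19/5 textbook choice, not necessarily this study's design parameters:

```python
from math import comb

def lqas_errors(n, d, p_hi=0.85, p_lo=0.50):
    """For lot size n and decision value d (maximum unvaccinated children
    tolerated): alpha = P(reject a lot whose true coverage is p_hi),
    beta = P(accept a lot whose true coverage is only p_lo)."""
    def p_at_most_d_failures(p_cov):
        q = 1.0 - p_cov                     # a 'failure' is an unvaccinated child
        return sum(comb(n, k) * q**k * (1 - q)**(n - k) for k in range(d + 1))
    alpha = 1.0 - p_at_most_d_failures(p_hi)
    beta = p_at_most_d_failures(p_lo)
    return alpha, beta

# classic 19/5 LQAS design (hypothetical here; the study's parameters may differ)
alpha, beta = lqas_errors(n=19, d=5)
print(round(alpha, 3), round(beta, 3))
```

With only 19 children per lot, both misclassification risks stay below roughly 10%, which is why LQAS needs far fewer subjects (76 versus 220 above) than a cluster survey.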

  18. The Orientation of Gastric Biopsy Samples Improves the Inter-observer Agreement of the OLGA Staging System.

    Science.gov (United States)

    Cotruta, Bogdan; Gheorghe, Cristian; Iacob, Razvan; Dumbrava, Mona; Radu, Cristina; Bancila, Ion; Becheanu, Gabriel

    2017-12-01

    Evaluation of the severity and extension of gastric atrophy and intestinal metaplasia is recommended to identify subjects with a high risk for gastric cancer. The inter-observer agreement for the assessment of gastric atrophy is reported to be low. The aim of the study was to evaluate the inter-observer agreement for the assessment of severity and extension of gastric atrophy using oriented and unoriented gastric biopsy samples. Furthermore, the quality of biopsy specimens in oriented and unoriented samples was analyzed. A total of 35 subjects with dyspeptic symptoms referred for gastrointestinal endoscopy who agreed to enter the study were prospectively enrolled. The OLGA/OLGIM gastric biopsies protocol was used. From each subject two sets of biopsies were obtained (four from the antrum, two oriented and two unoriented; two from the gastric incisure, one oriented and one unoriented; four from the gastric body, two oriented and two unoriented). The orientation of the biopsy samples was completed using nitrocellulose filters (Endokit®, BioOptica, Milan, Italy). The samples were blindly examined by two experienced pathologists. Inter-observer agreement was evaluated using the kappa statistic for inter-rater agreement. The quality of histopathology specimens, taking into account the identification of the lamina propria, was analyzed in oriented vs. unoriented samples. The samples with detectable lamina propria mucosae were defined as good quality specimens. Categorical data were analyzed using the chi-square test and a two-sided p value <0.05 was considered statistically significant. A total of 350 biopsy samples were analyzed (175 oriented / 175 unoriented). The kappa index values for oriented/unoriented OLGA 0/I/II/III and IV stages were 0.62/0.13, 0.70/0.20, 0.61/0.06, 0.62/0.46, and 0.77/0.50, respectively. For OLGIM 0/I/II/III stages the kappa index values for oriented/unoriented samples were 0.83/0.83, 0.88/0.89, 0.70/0.88 and 0.83/1, respectively. No case of OLGIM IV
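Inter-observer agreement of the kind reported here is measured with Cohen's kappa; a minimal sketch with invented stage assignments (not the study's data):

```python
import numpy as np

def cohens_kappa(r1, r2, categories):
    """Cohen's kappa for two raters assigning the same items to categories."""
    n = len(r1)
    idx = {c: i for i, c in enumerate(categories)}
    table = np.zeros((len(categories), len(categories)))
    for a, b in zip(r1, r2):
        table[idx[a], idx[b]] += 1
    po = np.trace(table) / n                      # observed agreement
    pe = (table.sum(0) @ table.sum(1)) / n**2     # agreement expected by chance
    return (po - pe) / (1 - pe)

# hypothetical OLGA stage assignments by two pathologists (illustrative only)
p1 = [0, 0, 1, 1, 2, 2, 3, 3, 4, 0, 1, 2]
p2 = [0, 0, 1, 2, 2, 2, 3, 4, 4, 0, 1, 1]
print(round(cohens_kappa(p1, p2, [0, 1, 2, 3, 4]), 2))
```

Kappa discounts the agreement two raters would reach by chance alone, which is why it is preferred over raw percent agreement for staging systems like OLGA/OLGIM.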

  19. A new combined strategy to implement a community occupational therapy intervention: designing a cluster randomized controlled trial

    Directory of Open Access Journals (Sweden)

    Adang Eddy

    2011-03-01

    Full Text Available Abstract Background Even effective interventions for people with dementia and their caregivers require specific implementation efforts. A pilot study showed that the highly effective community occupational therapy in dementia (COTiD) program was not implemented optimally due to various barriers. To decrease these barriers and make implementation of the program more effective, a combined implementation (CI) strategy was developed. In our study we will compare the effectiveness of this CI strategy with the usual educational (ED) strategy. Methods In this cluster-randomized, single-blinded, controlled trial, each cluster consists of at least two occupational therapists, a manager, and a physician working at Dutch healthcare organizations that deliver community occupational therapy. Forty-five clusters, stratified by healthcare setting (nursing home, hospital, mental health service), have been allocated randomly to either the intervention group (CI strategy) or the control group (ED strategy). The study population consists of the professionals included in each cluster and community-dwelling people with dementia and their caregivers. The primary outcome measures are the use of community OT, the adherence of OTs to the COTiD program, and the cost-effectiveness of implementing the COTiD program in outpatient care. Secondary outcome measures are patient and caregiver outcomes and knowledge of managers, physicians and OTs about the COTiD program. Discussion Implementation research is fairly new in the field of occupational therapy, making this a unique study. This study does not only evaluate the effects of the CI strategy on professionals, but also the effects of professionals' degree of implementation on client and caregiver outcomes. Clinical trials registration NCT01117285
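Stratified cluster randomization of this kind can be sketched as follows (the cluster names are invented; only the 45-cluster, three-setting structure follows the protocol):

```python
import random

def stratified_cluster_randomization(clusters, strata, seed=42):
    """Allocate whole clusters 1:1 to the CI or ED strategy within each
    stratum (healthcare setting), as in a stratified cluster-randomized design."""
    rng = random.Random(seed)
    allocation = {}
    for stratum in sorted(set(strata.values())):
        members = sorted(c for c in clusters if strata[c] == stratum)
        rng.shuffle(members)
        half = len(members) // 2
        for c in members[:half]:
            allocation[c] = "CI"
        for c in members[half:]:
            allocation[c] = "ED"   # an odd stratum gives ED one extra cluster
    return allocation

# invented cluster names; 45 clusters across three settings as in the protocol
strata = {f"site{i:02d}": setting
          for i, setting in enumerate(["nursing home"] * 15 + ["hospital"] * 15
                                      + ["mental health service"] * 15)}
alloc = stratified_cluster_randomization(list(strata), strata)
print(sum(v == "CI" for v in alloc.values()), "of", len(alloc), "clusters on CI")
```

Randomizing whole clusters (rather than individual professionals) avoids contamination between the two implementation strategies within one organization.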

  20. Method for detecting clusters of possible uranium deposits

    International Nuclear Information System (INIS)

    Conover, W.J.; Bement, T.R.; Iman, R.L.

    1978-01-01

    When a two-dimensional map contains points that appear to be scattered somewhat at random, a question that often arises is whether groups of points that appear to cluster are merely exhibiting ordinary behavior, which one can expect with any random distribution of points, or whether the clusters are too pronounced to be attributable to chance alone. A method for detecting clusters along a straight line is applied to the two-dimensional map of ²¹⁴Bi anomalies observed as part of the National Uranium Resource Evaluation Program in the Lubbock, Texas, region. Some exact probabilities associated with this method are computed and compared with two approximate methods. The two methods for approximating probabilities work well in the cases examined and can be used when it is not feasible to obtain the exact probabilities
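A Monte Carlo version of the one-dimensional cluster test can serve as a cross-check on such exact and approximate probabilities (the point count and window width below are hypothetical, not the survey's values):

```python
import random

def max_points_in_window(points, w):
    """Largest number of points contained in any window of width w on the line."""
    pts = sorted(points)
    best, j = 0, 0
    for i in range(len(pts)):
        while pts[i] - pts[j] > w:
            j += 1
        best = max(best, i - j + 1)
    return best

def cluster_p_value(n, w, k_obs, trials=5000, seed=0):
    """Monte Carlo probability that n uniform points on [0, 1] show a cluster
    of at least k_obs points within some window of width w."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        pts = [rng.random() for _ in range(n)]
        if max_points_in_window(pts, w) >= k_obs:
            hits += 1
    return hits / trials

# e.g. 30 anomalies projected onto a line, 8 of them within 5% of its length
print(cluster_p_value(n=30, w=0.05, k_obs=8))
```

A small p-value suggests the observed run of points is too pronounced to be attributed to chance alone, which is the question the record poses.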

  1. A random spatial sampling method in a rural developing nation

    Science.gov (United States)

    Michelle C. Kondo; Kent D.W. Bream; Frances K. Barg; Charles C. Branas

    2014-01-01

    Nonrandom sampling of populations in developing nations has limitations and can inaccurately estimate health phenomena, especially among hard-to-reach populations such as rural residents. However, random sampling of rural populations in developing nations can be challenged by incomplete enumeration of the base population. We describe a stratified random sampling method...
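One common way to draw a random spatial sample once a boundary has been enumerated is rejection sampling inside the polygon (a sketch; the boundary coordinates are invented, and this is not necessarily the authors' exact procedure):

```python
import random

def point_in_polygon(x, y, poly):
    """Ray-casting point-in-polygon test; poly is a list of (x, y) vertices."""
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        if (y1 > y) != (y2 > y):                      # edge crosses the ray's height
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:                           # crossing lies to the right
                inside = not inside
    return inside

def random_points_in_polygon(poly, k, seed=0):
    """Rejection-sample k uniformly distributed points inside the boundary."""
    rng = random.Random(seed)
    xs, ys = zip(*poly)
    pts = []
    while len(pts) < k:
        x, y = rng.uniform(min(xs), max(xs)), rng.uniform(min(ys), max(ys))
        if point_in_polygon(x, y, poly):
            pts.append((x, y))
    return pts

# invented village boundary, e.g. traced with a handheld GPS (arbitrary units)
boundary = [(0, 0), (4, 0), (5, 3), (2, 5), (-1, 2)]
sample = random_points_in_polygon(boundary, k=30)
print(len(sample))
```

Each accepted point is uniform over the enclosed area, so households nearest each sampled point can be visited without the center-bias of spin-the-pen methods.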

  2. A Cluster-Randomized Trial of Restorative Practices: An Illustration to Spur High-Quality Research and Evaluation.

    Science.gov (United States)

    Acosta, Joie D; Chinman, Matthew; Ebener, Patricia; Phillips, Andrea; Xenakis, Lea; Malone, Patrick S

    2016-01-01

    Restorative Practices in schools lack rigorous evaluation studies. As an example of rigorous school-based research, this paper describes the first randomized control trial of restorative practices to date, the Study of Restorative Practices. It is a 5-year, cluster-randomized controlled trial (RCT) of the Restorative Practices Intervention (RPI) in 14 middle schools in Maine to assess whether RPI impacts both positive developmental outcomes and problem behaviors and whether the effects persist during the transition from middle to high school. The two-year RPI intervention began in the 2014-2015 school year. The study's rationale and theoretical concerns are discussed along with methodological concerns including teacher professional development. The theoretical rationale and description of the methods from this study may be useful to others conducting rigorous research and evaluation in this area.

  3. Two-Stage Performance Engineering of Container-based Virtualization

    Directory of Open Access Journals (Sweden)

    Zheng Li

    2018-02-01

    Full Text Available Cloud computing has become a compelling paradigm built on compute and storage virtualization technologies. The current virtualization solution in the Cloud widely relies on hypervisor-based technologies. Given the recent booming of the container ecosystem, container-based virtualization is starting to receive more attention as a promising alternative. Although container technologies are generally considered to be lightweight, no virtualization solution is ideally resource-free, and the corresponding performance overheads will lead to negative impacts on the quality of Cloud services. To facilitate understanding container technologies from the performance engineering perspective, we conducted a two-stage performance investigation into Docker containers as a concrete example. At the first stage, we used a physical machine with “just-enough” resources as a baseline to investigate the performance overhead of a standalone Docker container against a standalone virtual machine (VM). With findings contrary to the related work, our evaluation results show that the virtualization's performance overhead could vary not only on a feature-by-feature basis but also on a job-to-job basis. Moreover, the hypervisor-based technology does not come with higher performance overhead in every case. For example, Docker containers particularly exhibit lower QoS in terms of storage transaction speed. At the ongoing second stage, we employed a physical machine with “fair-enough” resources to implement a container-based MapReduce application and tried to optimize its performance. In fact, this machine was unable to support VM-based MapReduce clusters at the same scale. The performance tuning results show that the effects of different optimization strategies could largely be related to the data characteristics. For example, LZO compression can bring the most significant performance improvement when dealing with text data in our case.

  4. The random cluster model and a new integration identity

    International Nuclear Information System (INIS)

    Chen, L C; Wu, F Y

    2005-01-01

    We evaluate the free energy of the random cluster model at its critical point for 0 < q < 4, where the resulting expression depends on whether cos⁻¹(√q/2) is a rational multiple of π. As a by-product, our consideration leads to a closed-form evaluation of the integral 1/(4π²) ∫₀^{2π} dΘ ∫₀^{2π} dΦ ln[A + B + C − A cosΘ − B cosΦ − C cos(Θ+Φ)] = −ln(2S) + (2/π)[Ti₂(AS) + Ti₂(BS) + Ti₂(CS)], which arises in lattice statistics, where A, B, C ≥ 0 and S = 1/√(AB + BC + CA)
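The closed-form identity quoted in the abstract can be checked numerically, approximating the inverse tangent integral Ti₂ by its defining series and the double integral by a midpoint rule (shown here for the symmetric case A = B = C = 1):

```python
import numpy as np

def Ti2(x, terms=60):
    """Inverse tangent integral: Ti2(x) = sum_{k>=0} (-1)^k x^(2k+1) / (2k+1)^2."""
    k = np.arange(terms)
    return float(np.sum((-1.0) ** k * x ** (2 * k + 1) / (2 * k + 1) ** 2))

def lhs(A, B, C, n=800):
    """Midpoint-rule estimate of 1/(4 pi^2) times the double integral
    of ln[A+B+C - A cos(T) - B cos(P) - C cos(T+P)] over [0, 2 pi]^2."""
    t = (np.arange(n) + 0.5) * 2.0 * np.pi / n   # midpoints avoid the log singularity
    Th, Ph = np.meshgrid(t, t)
    f = np.log(A + B + C - A * np.cos(Th) - B * np.cos(Ph) - C * np.cos(Th + Ph))
    return float(f.mean())                        # grid average = integral / (4 pi^2)

A = B = C = 1.0
S = 1.0 / np.sqrt(A * B + B * C + C * A)
rhs = -np.log(2.0 * S) + (2.0 / np.pi) * (Ti2(A * S) + Ti2(B * S) + Ti2(C * S))
print(lhs(A, B, C), rhs)   # both approximately 0.922
```

The two sides agree to well within the quadrature error, consistent with the identity as stated in the abstract.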

  5. A Cluster-Randomized Trial of Restorative Practices: An Illustration to Spur High-Quality Research and Evaluation

    Science.gov (United States)

    Acosta, Joie D.; Chinman, Matthew; Ebener, Patricia; Phillips, Andrea; Xenakis, Lea; Malone, Patrick S.

    2016-01-01

    Restorative practices in schools lack rigorous evaluation studies. As an example of rigorous school-based research, this article describes the first randomized control trial of restorative practices to date, the Study of Restorative Practices. It is a 5-year, cluster-randomized controlled trial (RCT) of the Restorative Practices Intervention (RPI)…

  6. Unsupervised classification of multivariate geostatistical data: Two algorithms

    Science.gov (United States)

    Romary, Thomas; Ors, Fabien; Rivoirard, Jacques; Deraisme, Jacques

    2015-12-01

    With the increasing development of remote sensing platforms and the evolution of sampling facilities in the mining and oil industry, spatial datasets are becoming increasingly large, describing a growing number of variables and covering ever wider areas. Therefore, it is often necessary to split the domain of study to account for radically different behaviors of the natural phenomenon over the domain and to simplify the subsequent modeling step. The definition of these areas can be seen as a problem of unsupervised classification, or clustering, where we try to divide the domain into homogeneous domains with respect to the values taken by the variables at hand. The application of classical clustering methods, designed for independent observations, does not ensure the spatial coherence of the resulting classes. Image segmentation methods, based on, e.g., Markov random fields, are not adapted to irregularly sampled data. Other existing approaches, based on mixtures of Gaussian random functions estimated via the expectation-maximization algorithm, are limited to reasonable sample sizes and a small number of variables. In this work, we propose two algorithms based on adaptations of classical algorithms to multivariate geostatistical data. Both algorithms are model-free and can handle large volumes of multivariate, irregularly spaced data. The first proceeds by agglomerative hierarchical clustering, where spatial coherence is ensured by a proximity condition imposed for two clusters to merge. This proximity condition relies on a graph organizing the data in the coordinate space. The hierarchical algorithm can then be seen as a graph-partitioning algorithm. Following this interpretation, a spatial version of the spectral clustering algorithm is also proposed. The performance of both algorithms is assessed on toy examples and a mining dataset.
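The proximity-constrained agglomerative idea can be sketched as follows: merges are driven by attribute-space distance but are permitted only between clusters adjacent in a k-nearest-neighbour graph on the coordinates (synthetic data; a simplified sketch, not the authors' exact algorithm):

```python
import numpy as np

def spatially_constrained_clustering(coords, values, n_clusters, k=6):
    """Agglomerative clustering for irregularly sampled geostatistical data:
    at each step the two clusters closest in attribute space are merged,
    but only if they are adjacent in a k-nearest-neighbour graph built on
    the sample coordinates, keeping the resulting classes spatially coherent."""
    n = len(coords)
    d2 = ((coords[:, None, :] - coords[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)
    knn = np.argsort(d2, axis=1)[:, :k]
    neigh = {i: set(int(j) for j in knn[i]) for i in range(n)}
    for i in range(n):                       # symmetrize the adjacency graph
        for j in knn[i]:
            neigh[int(j)].add(i)
    members = {i: {i} for i in range(n)}
    cent = {i: values[i].astype(float) for i in range(n)}
    while len(members) > n_clusters:
        pairs = [(a, b) for a in members for b in neigh[a] if b > a]
        if not pairs:                        # graph exhausted; stop early
            break
        a, b = min(pairs, key=lambda ab: np.linalg.norm(cent[ab[0]] - cent[ab[1]]))
        na, nb = len(members[a]), len(members[b])
        cent[a] = (na * cent[a] + nb * cent[b]) / (na + nb)
        members[a] |= members.pop(b)
        neigh[a] |= neigh.pop(b)
        for s in neigh.values():             # redirect edges from b to a
            if b in s:
                s.discard(b)
                s.add(a)
        neigh[a].discard(a)
    labels = np.empty(n, dtype=int)
    for cid, pts in enumerate(members.values()):
        labels[list(pts)] = cid
    return labels

# synthetic example: two spatial zones with distinct attribute levels
rng = np.random.default_rng(0)
coords = rng.uniform(0, 10, (60, 2))
values = np.where(coords[:, 0] < 5, 0.0, 5.0)[:, None] + rng.normal(0, 0.3, (60, 1))
labels = spatially_constrained_clustering(coords, values, n_clusters=2)
```

Without the graph constraint, plain hierarchical clustering on `values` could return classes scattered arbitrarily over the domain; the constraint is what makes the partition usable for domaining.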

  7. The Infant Feeding Activity and Nutrition Trial (INFANT) an early intervention to prevent childhood obesity: Cluster-randomised controlled trial

    Directory of Open Access Journals (Sweden)

    Campbell Karen

    2008-03-01

    Full Text Available Abstract Background Multiple factors combine to support a compelling case for interventions that target the development of obesity-promoting behaviours (poor diet, low physical activity and high sedentary behaviour) from their inception. These factors include the rapidly increasing prevalence of fatness throughout childhood, the instigation of obesity-promoting behaviours in infancy, and the tracking of these behaviours from childhood through to adolescence and adulthood. The Infant Feeding Activity and Nutrition Trial (INFANT) aims to determine the effectiveness of an early childhood obesity prevention intervention delivered to first-time parents. The intervention, conducted with parents over the infant's first 18 months of life, will use existing social networks (first-time parents' groups) and an anticipatory guidance framework focusing on parenting skills which support the development of positive diet and physical activity behaviours, and reduced sedentary behaviours in infancy. Methods/Design This cluster-randomised controlled trial, with first-time parent groups as the unit of randomisation, will be conducted with a sample of 600 first-time parents and their newborn children who attend the first-time parents' group at Maternal and Child Health Centres. Using a two-stage sampling process, local government areas in Victoria, Australia will be randomly selected at the first stage. At the second stage, a proportional sample of first-time parent groups within selected local government areas will be randomly selected and invited to participate. Informed consent will be obtained and groups will then be randomly allocated to the intervention or control group. Discussion The early years hold promise as a time in which obesity prevention may be most effective. 
    To our knowledge, this will be the first randomised trial internationally to demonstrate whether an early health promotion program delivered to first-time parents in their existing social groups
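The two-stage sampling design described above can be sketched as follows (the sampling frame is invented; only the LGA-then-group structure follows the protocol):

```python
import random

def two_stage_sample(groups_by_lga, n_lgas, total_groups, seed=7):
    """Stage 1: simple random sample of local government areas (LGAs).
    Stage 2: within the selected LGAs, sample parent groups with quotas
    proportional to each LGA's share of the available groups."""
    rng = random.Random(seed)
    lgas = rng.sample(sorted(groups_by_lga), n_lgas)
    n_avail = sum(len(groups_by_lga[l]) for l in lgas)
    chosen = []
    for l in lgas:
        quota = round(total_groups * len(groups_by_lga[l]) / n_avail)
        quota = min(quota, len(groups_by_lga[l]))
        chosen += [(l, g) for g in rng.sample(groups_by_lga[l], quota)]
    return chosen

# invented sampling frame: 12 LGAs with 8-20 first-time parent groups each
frame = {f"LGA{i}": [f"group{i}_{j}" for j in range(8 + (5 * i) % 13)]
         for i in range(12)}
sel = two_stage_sample(frame, n_lgas=5, total_groups=30)
print(len(sel))
```

Proportional quotas at the second stage keep each selected group's overall inclusion probability roughly equal, which is the point of the two-stage design.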

  8. An optimal design of cluster spacing intervals for staged fracturing in horizontal shale gas wells based on the optimal SRVs

    Directory of Open Access Journals (Sweden)

    Lan Ren

    2017-09-01

    Full Text Available When horizontal well staged cluster fracturing is applied in shale gas reservoirs, the cluster spacing is essential to fracturing performance. If the cluster spacing is too small, the stimulated areas between major fractures will overlap and the efficiency of fracturing stimulation will decrease. If the cluster spacing is too large, the area between major fractures cannot be stimulated completely and the reservoir recovery extent will be adversely impacted. At present, cluster spacing design is mainly based on a static model that takes the potential reservoir stimulation area as the target, and there is no cluster spacing design method in accordance with the actual fracturing process that targets the dynamic stimulated reservoir volume (SRV). In this paper, a dynamic SRV calculation model for cluster fracture propagation was established by analyzing the coupling mechanisms among fracture propagation, fracturing fluid loss and stress. Then, the cluster spacing was optimized to reach the target of optimal SRVs. This model was applied for validation on site in the Jiaoshiba shale gasfield in the Fuling area of the Sichuan Basin. The key geological engineering parameters influencing the optimal cluster spacing intervals were analyzed. The reference charts for the optimal cluster spacing design were prepared based on the geological characteristics of the south and north blocks of the Jiaoshiba shale gasfield. It is concluded that the cluster spacing optimal design method proposed in this paper is of great significance in overcoming the blindness in current cluster perforation design and guiding the optimal design of volume fracturing in shale gas reservoirs. Keywords: Shale gas, Horizontal well, Staged fracturing, Cluster spacing, Reservoir, Stimulated reservoir volume (SRV), Mathematical model, Optimal method, Sichuan Basin, Jiaoshiba shale gasfield

  9. The impact of nurse-driven targeted HIV screening in 8 emergency departments: study protocol for the DICI-VIH cluster-randomized two-period crossover trial.

    Science.gov (United States)

    Leblanc, Judith; Rousseau, Alexandra; Hejblum, Gilles; Durand-Zaleski, Isabelle; de Truchis, Pierre; Lert, France; Costagliola, Dominique; Simon, Tabassome; Crémieux, Anne-Claude

    2016-02-01

    In 2010, to reduce late HIV diagnosis, the French national health agency endorsed non-targeted HIV screening in health care settings. Despite these recommendations, non-targeted screening has not been implemented and only physician-directed diagnostic testing is currently performed. A survey conducted in 2010 in 29 French Emergency Departments (EDs) showed that non-targeted nurse-driven screening was feasible though only a few new HIV diagnoses were identified, predominantly among high-risk groups. A strategy targeting high-risk groups combined with current practice could be shown to be feasible, more efficient and more cost-effective than current practice alone. DICI-VIH (acronym for nurse-driven targeted HIV screening) is a multicentre, cluster-randomized, two-period crossover trial. The primary objective is to compare the effectiveness of 2 strategies for diagnosing HIV among adult patients visiting EDs: nurse-driven targeted HIV screening combined with current practice (physician-directed diagnostic testing) versus current practice alone. Main secondary objectives are to compare access to specialist consultation and how early HIV diagnosis occurs in the course of the disease between the 2 groups, and to evaluate the implementation, acceptability and cost-effectiveness of nurse-driven targeted screening. The 2 strategies take place during 2 randomly assigned periods in 8 EDs of metropolitan Paris, where 42% of France's new HIV patients are diagnosed every year. All patients aged 18 to 64 not presenting secondary to HIV exposure are included. During the intervention period, patients are invited to fill in a 7-item questionnaire (country of birth, sexual partners and injection drug use) in order to select individuals who are offered a rapid test. If the rapid test is reactive, a follow-up visit with an infectious disease specialist is scheduled within 72 h. 
    Assuming 80% statistical power and a 5% type-1 error, with 1.04 and 3.38 new diagnoses per 10,000 patients in
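The truncated power statement can be unpacked with a standard two-proportion sample-size calculation using the quoted rates (normal approximation only; the trial's own calculation may additionally adjust for the cluster-crossover design):

```python
from math import sqrt

def n_per_group(p1, p2, alpha=0.05, power=0.80):
    """Approximate sample size per group for detecting a difference between
    two proportions (normal approximation, two-sided test)."""
    z_a, z_b = 1.959964, 0.841621   # z quantiles for alpha/2 = 0.025 and power = 0.80
    p_bar = (p1 + p2) / 2.0
    num = (z_a * sqrt(2.0 * p_bar * (1.0 - p_bar))
           + z_b * sqrt(p1 * (1.0 - p1) + p2 * (1.0 - p2))) ** 2
    return num / (p1 - p2) ** 2

# rates quoted above: 1.04 vs 3.38 new diagnoses per 10,000 included patients
print(round(n_per_group(1.04e-4, 3.38e-4)))
```

Detecting a difference between two rates this small requires tens of thousands of patients per arm, which is consistent with the ED-wide inclusion strategy the protocol describes.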

  10. Theory of boundary-free two-dimensional dust clusters

    International Nuclear Information System (INIS)

    Tsytovich, V.N.; Gousein-zade, N.G.; Morfill, G.E.

    2006-01-01

    It is shown theoretically that a stable two-dimensional (2D) grain cluster can exist in plasmas without external confinement if the shadow attraction of grains is taken into account. These are considered as boundary-free clusters. The equilibrium radius of the clusters is investigated numerically. It is found to decrease rapidly with an increase of the attraction coefficient and with an increase of the number of grains N in the cluster. Comparison of the energies of a one-shell cluster containing N grains with the energies of a cluster with N-1 grains in the shell and an additional grain in the center, as functions of the attraction coefficient, is used to find the magic numbers for new shell creation. It is demonstrated that a dissociation of the cluster into several smaller clusters requires less energy than a removal of one of the grains from the cluster. The computations were performed for the Debye screening and for the nonlinear screening models and show that the structure of the clusters is sensitive to the type of screening. Frequencies of all collective modes of the 2D boundary-free clusters are calculated up to N=7 grains in the cluster for the case where all grains form one shell cluster and for the case where N=6 grains form one shell cluster and one of the grains is located at the center of the cluster. The frequencies of the modes increase with a decrease of the cluster radius. Stable and unstable modes are investigated as a function of the attraction coefficient. The presence of instability indicates that this type of equilibrium cluster does not correspond to the minimum energy in all directions and will be converted into another stable configuration. 
    The universal magic number N_m of grains in a one-shell cluster, such that for N = N_m + 1 the modes of the shell start to be unstable and the cluster converts to the cluster with N_m grains in the shell and one grain in the center, is found for both the Yukawa screening and for the nonlinear screening

  11. Evaluation of primary immunization coverage of infants under universal immunization programme in an urban area of Bangalore city using cluster sampling and lot quality assurance sampling techniques

    Directory of Open Access Journals (Sweden)

    Punith K

    2008-01-01

    Full Text Available Research Question: Is LQAS technique better than cluster sampling technique in terms of resources to evaluate the immunization coverage in an urban area? Objective: To assess and compare the lot quality assurance sampling against cluster sampling in the evaluation of primary immunization coverage. Study Design: Population-based cross-sectional study. Study Setting: Areas under Mathikere Urban Health Center. Study Subjects: Children aged 12 months to 23 months. Sample Size: 220 in cluster sampling, 76 in lot quality assurance sampling. Statistical Analysis: Percentages and Proportions, Chi square Test. Results: (1) Using cluster sampling, the percentage of completely immunized, partially immunized and unimmunized children were 84.09%, 14.09% and 1.82%, respectively. With lot quality assurance sampling, it was 92.11%, 6.58% and 1.31%, respectively. (2) Immunization coverage levels as evaluated by cluster sampling technique were not statistically different from the coverage value as obtained by lot quality assurance sampling techniques. Considering the time and resources required, it was found that lot quality assurance sampling is a better technique in evaluating the primary immunization coverage in urban area.

  12. Exergaming and older adult cognition: a cluster randomized clinical trial.

    Science.gov (United States)

    Anderson-Hanley, Cay; Arciero, Paul J; Brickman, Adam M; Nimon, Joseph P; Okuma, Naoko; Westen, Sarah C; Merz, Molly E; Pence, Brandt D; Woods, Jeffrey A; Kramer, Arthur F; Zimmerman, Earl A

    2012-02-01

    Dementia cases may reach 100 million by 2050. Interventions are sought to curb or prevent cognitive decline. Exercise yields cognitive benefits, but few older adults exercise. Virtual reality-enhanced exercise or "exergames" may elicit greater participation. To test the following hypotheses: (1) stationary cycling with virtual reality tours ("cybercycle") will enhance executive function and clinical status more than traditional exercise; (2) exercise effort will explain improvement; and (3) brain-derived neurotrophic growth factor (BDNF) will increase. Multi-site cluster randomized clinical trial (RCT) of the impact of 3 months of cybercycling versus traditional exercise, on cognitive function in older adults. Data were collected in 2008-2010; analyses were conducted in 2010-2011. 102 older adults from eight retirement communities enrolled; 79 were randomized and 63 completed. A recumbent stationary ergometer was utilized; virtual reality tours and competitors were enabled on the cybercycle. Executive function (Color Trails Difference, Stroop C, Digits Backward); clinical status (mild cognitive impairment; MCI); exercise effort/fitness; and plasma BDNF. Intent-to-treat analyses, controlling for age, education, and cluster randomization, revealed a significant group × time interaction for composite executive function (p=0.002). Cybercycling yielded a medium effect over traditional exercise (d=0.50). Cybercyclists had a 23% relative risk reduction in clinical progression to MCI. Exercise effort and fitness were comparable, suggesting another underlying mechanism. A significant group × time interaction for BDNF (p=0.05) indicated enhanced neuroplasticity among cybercyclists. Cybercycling older adults achieved better cognitive function than traditional exercisers, for the same effort, suggesting that simultaneous cognitive and physical exercise has greater potential for preventing cognitive decline. This study is registered at Clinicaltrials.gov NCT01167400. 

  13. Reinforcing Sampling Distributions through a Randomization-Based Activity for Introducing ANOVA

    Science.gov (United States)

    Taylor, Laura; Doehler, Kirsten

    2015-01-01

    This paper examines the use of a randomization-based activity to introduce the ANOVA F-test to students. The two main goals of this activity are to successfully teach students to comprehend ANOVA F-tests and to increase student comprehension of sampling distributions. Four sections of students in an advanced introductory statistics course…

  14. Targeted HIV Screening in Eight Emergency Departments: The DICI-VIH Cluster-Randomized Two-Period Crossover Trial.

    Science.gov (United States)

    Leblanc, Judith; Hejblum, Gilles; Costagliola, Dominique; Durand-Zaleski, Isabelle; Lert, France; de Truchis, Pierre; Verbeke, Geert; Rousseau, Alexandra; Piquet, Hélène; Simon, François; Pateron, Dominique; Simon, Tabassome; Crémieux, Anne-Claude

    2017-10-30

    This study compares the effectiveness and cost-effectiveness of nurse-driven targeted HIV screening alongside physician-directed diagnostic testing (intervention strategy) with diagnostic testing alone (control strategy) in 8 emergency departments. In this cluster-randomized, 2-period, crossover trial, 18- to 64-year-old patients presenting for reasons other than potential exposure to HIV were included. The strategy applied first was randomly assigned. During both periods, diagnostic testing was prescribed by physicians following usual care. During the intervention periods, patients were asked to complete a self-administered questionnaire. According to their answers, the triage nurse suggested performing a rapid test to patients belonging to a high-risk group. The primary outcome was the proportion of new diagnoses among included patients, which serves as the measure of effectiveness. A secondary outcome was the intervention's incremental cost (health care system perspective) per additional diagnosis. During the intervention periods, 74,161 patients were included, 16,468 completed the questionnaire, 4,341 belonged to high-risk groups, and 2,818 were tested by nurses, yielding 13 new diagnoses. Combined with 9 diagnoses confirmed through 97 diagnostic tests, 22 new diagnoses were established. During the control periods, 74,166 patients were included, 92 were tested, and 6 received a new diagnosis. The proportion of new diagnoses among included patients was higher during the intervention than in the control periods (3.0 per 10,000 versus 0.8 per 10,000; difference 2.2 per 10,000, 95% CI 1.3 to 3.6; relative risk 3.7, 95% CI 1.4 to 9.8). The incremental cost was €1,324 per additional new diagnosis. The combined strategy of targeted screening and diagnostic testing was effective. Copyright © 2017 American College of Emergency Physicians. Published by Elsevier Inc. All rights reserved.
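
    The headline rates in this abstract follow directly from the raw counts it reports; a short check of the point estimates only (the confidence intervals require the trial's cluster-adjusted model, which is not reproduced here):

```python
# Recompute the point estimates reported in the DICI-VIH abstract
# from the raw counts it gives (not the cluster-adjusted CIs).
new_dx_intervention = 22      # new diagnoses, intervention periods
n_intervention = 74_161       # patients included, intervention periods
new_dx_control = 6            # new diagnoses, control periods
n_control = 74_166            # patients included, control periods

p_int = new_dx_intervention / n_intervention
p_ctl = new_dx_control / n_control

rate_int = p_int * 10_000     # per 10,000 included patients
rate_ctl = p_ctl * 10_000
relative_risk = p_int / p_ctl

print(f"{rate_int:.1f} vs {rate_ctl:.1f} per 10,000; RR = {relative_risk:.1f}")
# → 3.0 vs 0.8 per 10,000; RR = 3.7, matching the reported estimates
```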

  15. [Acupotomy and acupuncture in the treatment of avascular necrosis of femoral head at the early and middle stages:a clinical randomized controlled trial].

    Science.gov (United States)

    Wang, Zhanyou; Zhou, Xuelong; Xie, Lishuang; Liang, Dongyue; Wang, Ying; Zhang, Hong-An; Zheng, Jinghong

    2016-10-12

    To compare the difference in efficacy between acupotomy and acupuncture in the treatment of avascular necrosis of the femoral head at the early and middle stages. A randomized controlled prospective study method was adopted. Sixty cases of avascular necrosis of the femoral head at Ficat-Arlet stages Ⅰ to Ⅱ were randomized into an acupotomy group (32 cases) and an acupuncture group (28 cases) by a third party. In the acupotomy group, acupotomy was adopted for loosening and release at the treatment sites of the hip joint, once every two weeks, 3 times in total. In the acupuncture group, ashi points around the hip joint were selected and stimulated with warm acupuncture therapy, once every day, for 6 weeks. The Harris hip score was observed before and after treatment, and the efficacy was evaluated in the two groups. The Harris hip score was improved significantly after treatment in the two groups (both P<0.05) […] avascular necrosis of the femoral head at the early and middle stages.

  16. Pressure ulcer multidisciplinary teams via telemedicine: a pragmatic cluster randomized stepped wedge trial in long term care.

    Science.gov (United States)

    Stern, Anita; Mitsakakis, Nicholas; Paulden, Mike; Alibhai, Shabbir; Wong, Josephine; Tomlinson, George; Brooker, Ann-Sylvia; Krahn, Murray; Zwarenstein, Merrick

    2014-02-24

    The study was conducted to determine the clinical and cost effectiveness of enhanced multi-disciplinary teams (EMDTs) vs. 'usual care' for the treatment of pressure ulcers in long term care (LTC) facilities in Ontario, Canada. We conducted a multi-method study: a pragmatic cluster randomized stepped-wedge trial, ethnographic observation and in-depth interviews, and an economic evaluation. Long term care facilities (clusters) were randomly allocated to start dates of the intervention. An advanced practice nurse (APN) with expertise in skin and wound care visited intervention facilities to educate staff on pressure ulcer prevention and treatment, supported by an off-site hospital-based expert multi-disciplinary wound care team via email, telephone, or video link as needed. The primary outcome was the rate of reduction in pressure ulcer surface area (cm²/day), measured on before and after standard photographs by an assessor blinded to facility allocation. Secondary outcomes were time to healing, probability of healing, pressure ulcer incidence, pressure ulcer prevalence, wound pain, hospitalization, emergency department visits, utility, and cost. 12 of 15 eligible LTC facilities were randomly selected to participate and randomized to start date of the intervention following the stepped wedge design. 137 residents with a total of 259 pressure ulcers (stage 2 or greater) were recruited over the 17-month study period. No statistically significant differences were found between control and intervention periods on any of the primary or secondary outcomes. The economic evaluation demonstrated a mean reduction in direct care costs of $650 per resident compared to 'usual care'. The qualitative study suggested that onsite support by APN wound specialists was welcomed, and was responsible for reduced costs through discontinuation of expensive non-evidence-based treatments. Insufficient allocation of nursing home staff time to wound care may explain the lack of impact on healing
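
    In a stepped-wedge design such as this one, every cluster begins under control and crosses over to the intervention at a randomly assigned step, after which it stays on the intervention. A minimal allocation sketch (the facility names and the four-step layout are illustrative assumptions, not details from the trial):

```python
import random

def stepped_wedge_schedule(clusters, n_steps, seed=42):
    """Assign clusters to randomly ordered intervention start steps.

    Every cluster begins under control and, once crossed over at its
    assigned step, stays on the intervention. Illustrative sketch only.
    """
    rng = random.Random(seed)
    order = clusters[:]
    rng.shuffle(order)                       # random allocation to start dates
    # spread clusters as evenly as possible over the steps
    return {c: 1 + i % n_steps for i, c in enumerate(order)}

facilities = [f"LTC-{i:02d}" for i in range(1, 13)]   # 12 facilities, as in the trial
schedule = stepped_wedge_schedule(facilities, n_steps=4)
```

    With 12 clusters and 4 steps this yields 3 clusters crossing over at each step, which is the usual balanced layout.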

  17. 40 CFR 761.306 - Sampling 1 meter square surfaces by random selection of halves.

    Science.gov (United States)

    2010-07-01

    § 761.306 Sampling 1 meter square surfaces by random selection of halves. (a) Divide each 1 meter square portion where it is necessary to collect a surface wipe test sample into two equal (or as...
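
    The regulation text is truncated above, but the idea of "random selection of halves" — repeatedly split the square into two equal halves and keep one at random until the sub-area to be wiped is reached — can be sketched as follows. This is only an illustration; the binding procedure is the one specified in 40 CFR 761.306 itself:

```python
import random

def select_subarea_by_halving(x=0.0, y=0.0, w=1.0, h=1.0, n_halvings=3, rng=None):
    """Randomly select a sub-rectangle of a 1 m x 1 m square by repeatedly
    splitting the current rectangle into two equal halves (cutting the
    longer side) and keeping one half at random.

    Illustrative sketch of 'random selection of halves' only; the exact
    rules are those in 40 CFR 761.306.
    """
    rng = rng or random.Random()
    for _ in range(n_halvings):
        if w >= h:                    # cut the longer side in half
            w /= 2.0
            if rng.random() < 0.5:    # keep left or right half at random
                x += w
        else:
            h /= 2.0
            if rng.random() < 0.5:    # keep bottom or top half at random
                y += h
    return (x, y, w, h)

print(select_subarea_by_halving(rng=random.Random(1)))
```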

  18. Cluster chemical ionization for improved confidence level in sample identification by gas chromatography/mass spectrometry.

    Science.gov (United States)

    Fialkov, Alexander B; Amirav, Aviv

    2003-01-01

    Upon the supersonic expansion of helium mixed with vapor from an organic solvent (e.g. methanol), various clusters of the solvent with the sample molecules can be formed. As a result of 70 eV electron ionization of these clusters, cluster chemical ionization (cluster CI) mass spectra are obtained. These spectra are characterized by the combination of EI mass spectra of vibrationally cold molecules in the supersonic molecular beam (cold EI) with CI-like appearance of abundant protonated molecules, together with satellite peaks corresponding to protonated or non-protonated clusters of sample compounds with 1-3 solvent molecules. Like CI, cluster CI preferably occurs for polar compounds with high proton affinity. However, in contrast to conventional CI, for non-polar compounds or those with reduced proton affinity the cluster CI mass spectrum converges to that of cold EI. The appearance of a protonated molecule and its solvent cluster peaks, plus the lack of protonation and cluster satellites for prominent EI fragments, enable the unambiguous identification of the molecular ion. In turn, the insertion of the proper molecular ion into the NIST library search of the cold EI mass spectra eliminates those candidates with incorrect molecular mass and thus significantly increases the confidence level in sample identification. Furthermore, molecular mass identification is of prime importance for the analysis of unknown compounds that are absent in the library. Examples are given with emphasis on the cluster CI analysis of carbamate pesticides, high explosives and unknown samples, to demonstrate the usefulness of Supersonic GC/MS (GC/MS with supersonic molecular beam) in the analysis of these thermally labile compounds. Cluster CI is shown to be a practical ionization method, due to its ease-of-use and fast instrumental conversion between EI and cluster CI, which involves the opening of only one valve located at the make-up gas path. 
The ease-of-use of cluster CI is analogous

  19. Influence of Cu(NO3)2 initiation additive in two-stage mode conditions of coal pyrolytic decomposition

    Directory of Open Access Journals (Sweden)

    Larionov Kirill

    2017-01-01

    A two-stage process (pyrolysis and oxidation) of the pyrolytic decomposition of a brown coal sample with a Cu(NO3)2 additive was studied. The additive was introduced by capillary wetness impregnation at a 5% mass concentration. Sample reactivity was studied by thermogravimetric analysis with staged gaseous medium supply (argon and air) at a heating rate of 10 °C/min and intermediate isothermal soaking. Introducing the initiating additive was found to significantly reduce the volatile release temperature and accelerate thermal decomposition of the sample. Mass-spectral analysis results reveal that the significant difference in process characteristics is connected to the volatile matter release stage, which is initiated by nitrous oxide produced during copper nitrate decomposition.

  20. Effects of Nurse-Led Multifactorial Care to Prevent Disability in Community-Living Older People : Cluster Randomized Trial

    NARCIS (Netherlands)

    Suijker, Jacqueline J.; van Rijn, Marjon; Buurman, Bianca M.; ter Riet, Gerben; van Charante, Eric P. Moll; de Rooij, Sophia E.

    2016-01-01

    Background: To evaluate the effects of nurse-led multifactorial care to prevent disability in community-living older people. Methods: In a cluster randomized trial, 11 practices (n = 1,209 participants) were randomized to the intervention group, and 13 practices (n = 1,074 participants) were

  1. Two-stage free electron laser research

    Science.gov (United States)

    Segall, S. B.

    1984-10-01

    KMS Fusion, Inc. began studying the feasibility of two-stage free electron lasers for the Office of Naval Research in June 1980. At that time, the two-stage FEL was only a concept that had been proposed by Luis Elias. The range of parameters over which such a laser could be successfully operated, attainable power output, and constraints on laser operation were not known. The primary reason for supporting this research at that time was that it had the potential for producing short-wavelength radiation using a relatively low voltage electron beam. One advantage of a low-voltage two-stage FEL would be that shielding requirements would be greatly reduced compared with single-stage short-wavelength FELs. If the electron energy were kept below about 10 MeV, X-rays, generated by electrons striking the beam line wall, would not excite neutron resonances in atomic nuclei. These resonances cause the emission of neutrons with subsequent induced radioactivity. Therefore, above about 10 MeV, a meter or more of concrete shielding is required for the system, whereas below 10 MeV, a few millimeters of lead would be adequate.

  2. Coupling methods for multistage sampling

    OpenAIRE

    Chauvet, Guillaume

    2015-01-01

    Multistage sampling is commonly used for household surveys when there exists no sampling frame, or when the population is scattered over a wide area. Multistage sampling usually introduces a complex dependence in the selection of the final units, which makes asymptotic results quite difficult to prove. In this work, we consider multistage sampling with simple random sampling without replacement at the first stage, and with an arbitrary sampling design for further stages. We consider coupling ...
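
    The design described — simple random sampling without replacement (SRSWOR) of primary units at the first stage, then sampling within each selected unit — can be sketched in a few lines. SRSWOR is used at the second stage too for concreteness, although the paper allows an arbitrary design there; the frame below is invented for illustration:

```python
import random

def two_stage_sample(frame, n_clusters, n_per_cluster, seed=0):
    """Two-stage sample: SRSWOR of primary units (clusters) at stage one,
    then SRSWOR of secondary units within each selected cluster.
    `frame` maps cluster id -> list of units. Illustrative sketch only.
    """
    rng = random.Random(seed)
    chosen = rng.sample(sorted(frame), n_clusters)             # stage 1
    return {c: rng.sample(frame[c], min(n_per_cluster, len(frame[c])))
            for c in chosen}                                   # stage 2

# hypothetical frame: 8 clusters of 20 units each
frame = {c: [f"{c}-{i}" for i in range(20)] for c in "ABCDEFGH"}
sample = two_stage_sample(frame, n_clusters=3, n_per_cluster=5)
```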

  3. Clustering of low-valence particles: structure and kinetics.

    Science.gov (United States)

    Markova, Olga; Alberts, Jonathan; Munro, Edwin; Lenne, Pierre-François

    2014-08-01

    We compute the structure and kinetics of two systems of low-valence particles with three or six freely oriented bonds in two dimensions. The structure of clusters formed by trivalent particles is complex with loops and holes, while hexavalent particles self-organize into regular and compact structures. We identify the elementary structures which compose the clusters of trivalent particles. At initial stages of clustering, the clusters of trivalent particles grow with a power-law time dependence. Yet at longer times fusion and fission of clusters equilibrates and clusters form a heterogeneous phase with polydispersed sizes. These results emphasize the role of valence in the kinetics and stability of finite-size clusters.

  4. Dynamic connectivity algorithms for Monte Carlo simulations of the random-cluster model

    International Nuclear Information System (INIS)

    Elçi, Eren Metin; Weigel, Martin

    2014-01-01

    We review Sweeny's algorithm for Monte Carlo simulations of the random-cluster model. Straightforward implementations suffer from the problem of computational critical slowing down, where the computational effort per edge operation scales with a power of the system size. By using a tailored dynamic connectivity algorithm we are able to perform all operations with a poly-logarithmic computational effort. This approach is shown to be efficient in keeping online connectivity information and is of use for a number of applications also beyond cluster-update simulations, for instance in monitoring droplet shape transitions. As the handling of the relevant data structures is non-trivial, we provide a Python module with a full implementation for future reference.
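
    The simplest structure for keeping online connectivity information is union-find, which handles edge insertions and same-cluster queries but not the edge deletions that Sweeny dynamics also requires — which is precisely why the paper needs a fully dynamic, poly-logarithmic connectivity structure. A union-find sketch for the insert-only part:

```python
class UnionFind:
    """Incremental connectivity (union by size + path halving).

    Supports the edge-insertion and same-cluster queries used by
    cluster algorithms; it does NOT support edge deletion, which is
    why Sweeny-type dynamics needs a fully dynamic connectivity
    structure like the one described in the paper.
    """
    def __init__(self, n):
        self.parent = list(range(n))
        self.size = [1] * n

    def find(self, v):
        while self.parent[v] != v:
            self.parent[v] = self.parent[self.parent[v]]  # path halving
            v = self.parent[v]
        return v

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return False                  # already in the same cluster
        if self.size[ra] < self.size[rb]:
            ra, rb = rb, ra
        self.parent[rb] = ra              # attach smaller root to larger
        self.size[ra] += self.size[rb]
        return True

uf = UnionFind(6)
uf.union(0, 1); uf.union(1, 2)
print(uf.find(0) == uf.find(2))   # prints True: 0 and 2 share a cluster
```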

  5. Recruitment to online therapies for depression: pilot cluster randomized controlled trial.

    Science.gov (United States)

    Jones, Ray B; Goldsmith, Lesley; Hewson, Paul; Williams, Christopher J

    2013-03-05

    Raising awareness of online cognitive behavioral therapy (CBT) could benefit many people with depression, but we do not know how purchasing online advertising compares to placing free links from relevant local websites in increasing uptake. To pilot a cluster randomized controlled trial (RCT) comparing purchase of Google AdWords with placing free website links in raising awareness of online CBT resources for depression, in order to better understand research design issues. We compared two online interventions with a control without intervention. The pilot RCT had 4 arms, each with 4 British postcode areas: (A) geographically targeted AdWords, (B) adverts placed on local websites by contacting website owners and requesting links be added, (C) both interventions, (D) control. Participants were directed to our research project website linking to two freely available online CBT resource sites (Moodgym and Living Life To The Full (LLTTF)) and two other depression support sites. We used data from (1) AdWords, (2) Google Analytics for our project website and for LLTTF, and (3) the research project website. We compared two outcomes: (1) the number of people with depression accessing the research project website who then chose an onward link to one of the two CBT websites, and (2) the number registering with LLTTF. We documented costs, and explored intervention and assessment methods to make general recommendations to inform researchers aiming to use similar methodologies in future studies. Trying to place local website links appeared much less cost effective than AdWords and, although it may prove useful for service delivery, was not worth pursuing in the context of the current study design. Our AdWords intervention was effective in recruiting people to the project website, but our location targeting "leaked" and was not as geographically specific as claimed. The impact on online CBT was also diluted by offering participants other choices of destinations. Measuring the impact on LLTTF use was

  6. Recruitment to Online Therapies for Depression: Pilot Cluster Randomized Controlled Trial

    OpenAIRE

    Jones, Ray B; Goldsmith, Lesley; Hewson, Paul; Williams, Christopher J

    2013-01-01

    Background Raising awareness of online cognitive behavioral therapy (CBT) could benefit many people with depression, but we do not know how purchasing online advertising compares to placing free links from relevant local websites in increasing uptake. Objective To pilot a cluster randomized controlled trial (RCT) comparing purchase of Google AdWords with placing free website links in raising awareness of online CBT resources for depression in order to better understand research design issues....

  7. Making birthing safe for Pakistan women: a cluster randomized trial

    Directory of Open Access Journals (Sweden)

    Khan Muhammad

    2012-07-01

    Background: Two out of three neonatal deaths occur in just 10 countries, and Pakistan stands third among them. Maternal mortality is also high, with most deaths occurring during labor, birth, and the first few hours after birth. Enhanced access to and utilization of skilled delivery and emergency obstetric care is the demonstrated strategy for reducing maternal and neonatal mortality. This trial aims to compare reduction in neonatal mortality and utilization of available safe birthing and Emergency Obstetric and Neonatal Care services among pregnant mothers receiving 'structured birth planning' and/or 'transport facilitation' compared to routine care. Methods: A pragmatic cluster randomized trial, with qualitative and economic studies, will be conducted in Jhang, Chiniot and Khanewal districts of Punjab, Pakistan, from February 2011 to May 2013. At least 29,295 pregnancies will be registered in the three arms, seven clusters per arm: (1) structured birth planning and travel facilitation, (2) structured birth planning, and (3) control arm. The trial will be conducted through the Lady Health Worker program. Main outcomes are differences in neonatal mortality and service utilization, with maternal mortality as the secondary outcome. Cluster-level analysis will be done according to intention-to-treat. Discussion: A nationwide network of about 100,000 lady health workers is already involved in antenatal and postnatal care of pregnant women. They also act as "gatekeepers" for child birthing services. This gatekeeping role mainly includes counseling and referral for skilled birth attendance and travel arrangements for emergency obstetric care (if required). A review of current arrangements and practices shows that the care delivery process needs enhancement to include adequate information provision as well as informed "decision" making and planned "action" by the pregnant women. The proposed three-year research is to develop, through national

  8. Effectiveness of a group diabetes education programme in underserved communities in South Africa: pragmatic cluster randomized control trial.

    Science.gov (United States)

    Mash, Bob; Levitt, Naomi; Steyn, Krisela; Zwarenstein, Merrick; Rollnick, Stephen

    2012-12-24

    Diabetes is an important contributor to the burden of disease in South Africa, and prevalence rates as high as 33% have been recorded in Cape Town. Previous studies show that quality of care and health outcomes are poor. The development of an effective education programme should impact on self-care, lifestyle change and adherence to medication, and lead to better control of diabetes, fewer complications and better quality of life. Pragmatic cluster randomized controlled trial. Participants: Type 2 diabetic patients attending 45 public sector community health centres in Cape Town. Interventions: The intervention group will receive 4 sessions of group diabetes education delivered by a health promotion officer in a guiding style. The control group will receive usual care, which consists of ad hoc advice during consultations and occasional educational talks in the waiting room. To evaluate the effectiveness of the group diabetes education programme. Outcomes: diabetes self-care activities, 5% weight loss, 1% reduction in HbA1c, self-efficacy, locus of control, mean blood pressure, mean weight loss, mean waist circumference, mean HbA1c, mean total cholesterol, quality of life. Randomisation: computer-generated random numbers. Blinding: patients, health promoters and research assistants could not be blinded to the health centre's allocation. Numbers randomized: seventeen health centres (34 in total) will be randomly assigned to either control or intervention groups. A sample size of 1360 patients in 34 clusters of 40 patients will give a power of 80% to detect the primary outcomes with 5% precision. Altogether 720 patients were recruited in the intervention arm and 850 in the control arm, giving a total of 1570. The study will inform policy makers and managers of the district health system, particularly in low to middle income countries, whether this programme can be implemented more widely. Pan African Clinical Trial Registry PACTR201205000380384.

  9. Effects of Plyometric and Cluster Resistance Training on Explosive Power and Maximum Strength in Karate Players

    Directory of Open Access Journals (Sweden)

    Mohsen Aminaei

    2017-08-01

    The purpose of this study was to investigate the effects of plyometric and cluster resistance training on explosive power and maximum strength in karate players. Eighteen women karate players (age mean ± SD 18.22 ± 3.02 years, mean height 163 ± 0.63 cm, and mean body mass 53.25 ± 7.34 kg) were selected as volunteer samples. They were divided into two groups with respect to their recorded one-repetition-maximum squat exercise, (1) a plyometric training group (PT = 9) and (2) a cluster training group (CT = 9), and performed a 9-week resistance training protocol that included three stages: (1) general fitness (2 weeks), (2) strength (4 weeks) and (3) power (3 weeks). Each group performed strength and power training for 7 weeks in stages two and three with its own protocol. The subjects were evaluated three times, before stage one and after stages two and three, for maximum strength and power. Data were analyzed using two-way repeated-measures ANOVA at a significance level of P ≤ 0.05. The statistical analysis showed that training stage had a significant impact on all research variables. The maximum strength at the pre-test, post-strength test and post-power test was 29.05 ± 1.54, 32.89 ± 2.80 and 48.74 ± 4.33 W in the cluster group and 26.98 ± 1.54, 38.48 ± 2.80 and 49.82 ± 4.33 W in the plyometric group, respectively. The explosive power at the pre-test, post-strength test and post-power test was 359.32 ± 36.20, 427.91 ± 34.56 and 460.55 ± 36.80 W in the cluster group and 333.90 ± 36.20, 400.33 ± 34.56 and 465.20 ± 36.80 W in the plyometric group, respectively. However, there were no statistically significant differences in the research variables between the cluster and plyometric resistance training groups after 7 weeks. The results indicate that both cluster and plyometric training programs seem to improve physical fitness elements to the same levels.

  10. Merger types forming the Virgo cluster in recent gigayears

    Science.gov (United States)

    Olchanski, M.; Sorce, J. G.

    2018-06-01

    Context. As our closest cluster neighbor, the Virgo cluster of galaxies is intensely studied by observers to unravel the mysteries of galaxy evolution within clusters. At this stage, cosmological numerical simulations of the cluster are useful to efficiently test theories and calibrate models. However, it is not trivial to select the perfect simulacrum of the Virgo cluster to fairly compare in detail its observed and simulated galaxy populations, which are affected by the type and history of the cluster. Aims: Determining precisely the properties of Virgo for a later selection of simulated clusters becomes essential. It is still not clear how to access some of these properties, such as the past history of the Virgo cluster, from current observations. Therefore, directly producing effective simulacra of the Virgo cluster is inevitable. Methods: Efficient simulacra of the Virgo cluster can be obtained via simulations that resemble the local Universe down to the cluster scale. In such simulations, Virgo-like halos form in the proper local environment and permit assessing the most probable formation history of the cluster. Studies based on these simulations have already revealed that the Virgo cluster has had a quiet merging history over the last seven gigayears and that the cluster accretes matter along a preferential direction. Results: This paper reveals that, in addition, such Virgo halos have had on average only one merger larger than about a tenth of their mass at redshift zero within the last four gigayears. This second branch (as opposed to the main branch) formed in a given sub-region and merged recently (within the last gigayear). These properties are not shared with a set of random halos within the same mass range. Conclusions: This study extends the validity of the scheme used to produce the Virgo simulacra down to the largest sub-halos of the Virgo cluster. It opens up great prospects for detailed comparisons with observations, including substructures and

  11. A Two-Stage Queue Model to Optimize Layout of Urban Drainage System considering Extreme Rainstorms

    OpenAIRE

    He, Xinhua; Hu, Wenfa

    2017-01-01

    Extreme rainstorms are a main cause of urban floods when an urban drainage system cannot discharge stormwater successfully. This paper investigates the distribution features of rainstorms and the draining process of urban drainage systems, and uses a two-stage single-counter queue method M/M/1→M/D/1 to model an urban drainage system. The model emphasizes the randomness of extreme rainstorms, the fuzziness of the draining process, and the construction and operation cost of the drainage system. Its two objectives are total c...
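
    The two queueing stages named in the abstract have closed-form mean waits: for M/M/1, Wq = ρ/(μ − λ), and for M/D/1 the Pollaczek–Khinchine formula gives exactly half that, since deterministic service removes the service-time variance. A sketch with hypothetical arrival and drainage rates (not taken from the paper):

```python
def mm1_wq(lam, mu):
    """Mean waiting time in queue for an M/M/1 node (requires lam < mu)."""
    assert lam < mu, "queue must be stable"
    rho = lam / mu
    return rho / (mu - lam)

def md1_wq(lam, mu):
    """Mean waiting time in queue for an M/D/1 node (Pollaczek-Khinchine);
    deterministic service halves the queueing delay relative to M/M/1."""
    assert lam < mu, "queue must be stable"
    rho = lam / mu
    return rho / (2 * (mu - lam))

# hypothetical rates: storms arrive at 0.8/h, each stage drains at 1.0/h
lam, mu = 0.8, 1.0
print(mm1_wq(lam, mu), md1_wq(lam, mu))   # the M/D/1 wait is half the M/M/1 wait
```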

  12. One-stage vs two-stage cartilage repair: a current review

    Directory of Open Access Journals (Sweden)

    Daniel Meyerkort

    2010-10-01

    Daniel Meyerkort, David Wood, Ming-Hao Zheng; Center for Orthopaedic Research, School of Surgery and Pathology, University of Western Australia, Perth, Australia. Introduction: Articular cartilage has a poor capacity for regeneration if damaged. Various methods have been used to restore the articular surface, improve pain and function, and slow progression to osteoarthritis. Method: A PubMed review was performed on 18 March, 2010. Search terms included "autologous chondrocyte implantation (ACI)" and "microfracture" or "mosaicplasty". The aim of this review was to determine if 1-stage or 2-stage procedures for cartilage repair produced different functional outcomes. Results: The main procedures currently used are ACI and microfracture. Both first-generation ACI and microfracture result in clinical and functional improvement with no significant differences. A significant increase in functional outcome has been observed in second-generation procedures such as Hyalograft C, matrix-induced ACI, and ChondroCelect compared with microfracture. ACI results in a higher percentage of patients with clinical improvement than mosaicplasty; however, these results may take longer to achieve. Conclusion: Clinical and functional improvements have been demonstrated with ACI, microfracture, mosaicplasty, and synthetic cartilage constructs. Heterogeneous products and a lack of good-quality randomized controlled trials make product comparison difficult. Future developments involve scaffolds, gene therapy, growth factors, and stem cells to create a single-stage procedure that results in hyaline articular cartilage. Keywords: autologous chondrocyte implantation, microfracture, cartilage repair

  13. Does clinical equipoise apply to cluster randomized trials in health research?

    Science.gov (United States)

    2011-01-01

    This article is part of a series of papers examining ethical issues in cluster randomized trials (CRTs) in health research. In the introductory paper in this series, Weijer and colleagues set out six areas of inquiry that must be addressed if the cluster trial is to be set on a firm ethical foundation. This paper addresses the third of the questions posed, namely, does clinical equipoise apply to CRTs in health research? The ethical principle of beneficence is the moral obligation not to harm needlessly and, when possible, to promote the welfare of research subjects. Two related ethical problems have been discussed in the CRT literature. First, are control groups that receive only usual care unduly disadvantaged? Second, when accumulating data suggests the superiority of one intervention in a trial, is there an ethical obligation to act? In individually randomized trials involving patients, similar questions are addressed by the concept of clinical equipoise, that is, the ethical requirement that, at the start of a trial, there be a state of honest, professional disagreement in the community of expert practitioners as to the preferred treatment. Since CRTs may not involve physician-researchers and patient-subjects, the applicability of clinical equipoise to CRTs is uncertain. Here we argue that clinical equipoise may be usefully grounded in a trust relationship between the state and research subjects, and, as a result, clinical equipoise is applicable to CRTs. Clinical equipoise is used to argue that control groups receiving only usual care are not disadvantaged so long as the evidence supporting the experimental and control interventions is such that experts would disagree as to which is preferred. Further, while data accumulating during the course of a CRT may favor one intervention over another, clinical equipoise supports continuing the trial until the results are likely to be broadly convincing, often coinciding with the planned completion of the trial

  14. A stage is a stage is a stage: a direct comparison of two scoring systems.

    Science.gov (United States)

    Dawson, Theo L

    2003-09-01

    L. Kohlberg (1969) argued that his moral stages captured a developmental sequence specific to the moral domain. To explore that contention, the author compared stage assignments obtained with the Standard Issue Scoring System (A. Colby & L. Kohlberg, 1987a, 1987b) and those obtained with a generalized content-independent stage-scoring system called the Hierarchical Complexity Scoring System (T. L. Dawson, 2002a), on 637 moral judgment interviews (participants' ages ranged from 5 to 86 years). The correlation between stage scores produced with the 2 systems was .88. Although standard issue scoring and hierarchical complexity scoring often awarded different scores up to Kohlberg's Moral Stage 2/3, from his Moral Stage 3 onward, scores awarded with the two systems predominantly agreed. The author explores the implications for developmental research.

  15. Random vs. systematic sampling from administrative databases involving human subjects.

    Science.gov (United States)

    Hagino, C; Lo, R J

    1998-09-01

    Two sampling techniques, simple random sampling (SRS) and systematic sampling (SS), were compared to determine whether they yield similar and accurate distributions for the following four factors: age, gender, geographic location and years in practice. Any point estimate within 7 yr or 7 percentage points of its reference standard (SRS or the entire data set, i.e., the target population) was considered "acceptably similar" to the reference standard. The sampling frame was the entire membership database of the Canadian Chiropractic Association. The two sampling methods were tested using eight different sample sizes (n = 50, 100, 150, 200, 250, 300, 500, 800). From the profile summaries of the four known factors [gender, average age, number (%) of chiropractors in each province and years in practice], between- and within-method chi-squared tests and unpaired t tests were performed to determine whether any of the differences [descriptively greater than 7% or 7 yr] were also statistically significant. The strength of agreement between the provincial distributions was quantified by calculating the percent agreement for each (provincial pairwise comparison of methods). Any percent agreement less than 70% was judged unacceptable. Our assessments of the two sampling methods (SRS and SS) for the different sample sizes tested suggest that SRS and SS yielded acceptably similar results. Both methods started to yield "correct" sample profiles at approximately the same sample size (n > 200). SS is not only convenient, it can be recommended for sampling from large databases in which the data are listed without any inherent order biases other than alphabetical listing by surname.
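
    The two designs being compared are easy to state in code: SRS draws every size-n subset with equal probability, while SS takes a random start in the first sampling interval and then every k-th record. With a frame ordered only alphabetically (no cyclic order bias), both recover similar profiles, which is the paper's conclusion. The toy frame below is invented for illustration:

```python
import random

def simple_random_sample(frame, n, seed=0):
    """SRS: every subset of size n is equally likely."""
    return random.Random(seed).sample(frame, n)

def systematic_sample(frame, n, seed=0):
    """SS: random start in the first interval, then every k-th record."""
    k = len(frame) // n                       # sampling interval
    start = random.Random(seed).randrange(k)
    return [frame[start + i * k] for i in range(n)]

# toy frame: (id, years_in_practice) pairs, ordered by id (like a surname list)
frame = [(i, (i * 7) % 40) for i in range(1000)]
srs = simple_random_sample(frame, 200)
ss = systematic_sample(frame, 200)
mean = lambda rows: sum(y for _, y in rows) / len(rows)
print(round(mean(srs), 1), round(mean(ss), 1))  # both land near the frame mean
```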

  16. Pitch Correlogram Clustering for Fast Speaker Identification

    Directory of Open Access Journals (Sweden)

    Nitin Jhanwar

    2004-12-01

    Full Text Available Gaussian mixture models (GMMs) are commonly used in text-independent speaker identification systems. However, for large speaker databases, their high computational run-time limits their use in online or real-time speaker identification. Two-stage identification systems, in which the database is partitioned into clusters based on some proximity criterion and only a single cluster's GMMs are run in each test, have been suggested in the literature to speed up the identification process. However, most clustering algorithms used have shown limited success, apparently because the clustering and GMM feature spaces are derived from similar speech characteristics. This paper presents a new clustering approach based on the concept of a pitch correlogram, which captures frame-to-frame pitch variations of a speaker rather than short-time spectral characteristics such as cepstral coefficients or spectral slopes. The effectiveness of this two-stage identification process is demonstrated on the IVIE corpus of 110 speakers. The overall system achieves a run-time advantage of 500% as well as a 10% reduction of error in overall speaker identification.
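The two-stage pipeline described above (coarse speaker clustering, then GMM scoring within the matched cluster only) can be sketched as follows. This is an illustrative skeleton on synthetic data: KMeans on a generic pitch-derived vector stands in for the paper's correlogram clustering, and the feature dimensions and speaker counts are hypothetical.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Hypothetical data: per-speaker "pitch variation" vectors (stage 1) and
# per-speaker spectral frames (stage 2). A real system would use a pitch
# correlogram and cepstral features, respectively.
n_speakers, pitch_dim, spec_dim = 20, 8, 12
pitch_vecs = rng.normal(size=(n_speakers, pitch_dim))
frames = {s: rng.normal(loc=s * 0.1, size=(200, spec_dim))
          for s in range(n_speakers)}

# Stage 1: partition the speaker database into clusters on the pitch feature.
km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(pitch_vecs)

# Enrollment: one GMM per speaker, trained on that speaker's frames.
gmms = {s: GaussianMixture(n_components=2, random_state=0).fit(frames[s])
        for s in range(n_speakers)}

def identify(test_frames, test_pitch):
    """Two-stage ID: pick the cluster, then score only that cluster's GMMs."""
    cluster = km.predict(test_pitch[None, :])[0]
    candidates = [s for s in range(n_speakers) if km.labels_[s] == cluster]
    return max(candidates, key=lambda s: gmms[s].score(test_frames))
```

The speed-up comes from scoring only the candidate list rather than all enrolled GMMs; with four balanced clusters, roughly a quarter of the models are evaluated per test.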

  17. Non-random intrachromosomal distribution of radiation-induced chromatid aberrations in Vicia faba. [Aberration clustering

    Energy Technology Data Exchange (ETDEWEB)

    Schubert, I; Rieger, R [Akademie der Wissenschaften der DDR, Gatersleben. Zentralinst. fuer Genetik und Kulturpflanzenforschung

    1976-04-01

    A reconstructed karyotype of Vicia faba, with all chromosomes individually distinguishable, was treated with X-rays, fast neutrons, and ³H-uridine (³HU). The distribution within metaphase chromosomes of induced chromatid aberrations was non-random for all agents used. Aberration clustering, in part agent specific, occurred in chromosome segments containing heterochromatin as defined by the presence of G bands. The pattern of aberration clustering found after treatment with ³HU did not allow the recognition of chromosome regions active in transcription during treatment. Furthermore, it was impossible to obtain unambiguous indications of the presence of AT- and GC-base clusters from the patterns of ³HT- and ³HC-induced chromatid aberrations, respectively. Possible reasons underlying these observations are discussed.

  18. Effect of a two-stage nursing assessment and intervention - a randomized intervention study

    DEFF Research Database (Denmark)

    Rosted, Elizabeth Emilie; Poulsen, Ingrid; Hendriksen, Carsten

    % of geriatric patients have complex and often unresolved caring needs. The objective was to examine the effect of a two-stage nursing assessment and intervention to address the patients' uncompensated problems, given just after discharge from the ED and at one and six months after. Method: We conducted a prospective...... nursing assessment comprising a checklist of 10 physical, mental, medical and social items. The focus was on unresolved problems which require medical intervention, new or different home care services, or comprehensive geriatric assessment. Following this, the nurses made relevant referrals...... to the geriatric outpatient clinic, community health centre, primary physician or arrangements with next-of-kin. Findings: Primary endpoints will be presented as unplanned readmission to ED; admission to nursing home; and death. Secondary endpoints will be presented as physical function; depressive symptoms...

  19. An improved initialization center k-means clustering algorithm based on distance and density

    Science.gov (United States)

    Duan, Yanling; Liu, Qun; Xia, Shuyin

    2018-04-01

    To address the problem that the random initial cluster centers of the k-means algorithm leave the clustering results sensitive to outlier samples and unstable across repeated runs, a center initialization method based on larger distance and higher density is proposed. The reciprocal of the weighted average distance is used to represent sample density, and samples with both larger distance and higher density are selected as the initial cluster centers to optimize the clustering results. A clustering evaluation method based on distance and density is then designed to verify the feasibility and practicality of the algorithm. Experimental results on UCI data sets show that the algorithm has a certain stability and practicality.
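The initialization criterion described above can be sketched as follows. This is one plausible reading of the distance-and-density rule, not the paper's exact formulation: density is taken as the reciprocal of a point's mean distance to all other points, and each new center maximizes density times distance to the nearest already-chosen center.

```python
import numpy as np

def distance_density_init(X, k):
    """Pick k initial centers favoring high local density and mutual distance.

    Density is the reciprocal of each point's mean distance to all others
    (one reading of the paper's weighted-average-distance criterion). The
    densest point seeds the set; each further center is the point that
    maximizes density * (distance to the nearest chosen center).
    """
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    density = 1.0 / (d.mean(axis=1) + 1e-12)
    centers = [int(np.argmax(density))]
    for _ in range(k - 1):
        nearest = d[:, centers].min(axis=1)     # distance to closest chosen center
        score = density * nearest               # far from chosen centers, yet dense
        centers.append(int(np.argmax(score)))
    return X[centers]
```

Because outliers have low density, they score poorly despite being far from the chosen centers, which is the source of the claimed stability over purely random seeding.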

  20. Two-stage thermal/nonthermal waste treatment process

    International Nuclear Information System (INIS)

    Rosocha, L.A.; Anderson, G.K.; Coogan, J.J.; Kang, M.; Tennant, R.A.; Wantuck, P.J.

    1993-01-01

    An innovative waste treatment technology is being developed in Los Alamos to address the destruction of hazardous organic wastes. The technology described in this report uses two stages: a packed bed reactor (PBR) in the first stage to volatilize and/or combust liquid organics and a silent discharge plasma (SDP) reactor to remove entrained hazardous compounds in the off-gas to even lower levels. We have constructed pre-pilot-scale PBR-SDP apparatus and tested the two stages separately and in combined modes. These tests are described in the report

  1. Massive Star Clusters in Ongoing Galaxy Interactions: Clues to Cluster Formation

    Science.gov (United States)

    Keel, William C.; Borne, Kirk D.

    2003-09-01

    We present HST WFPC2 observations, supplemented by ground-based Hα data, of the star-cluster populations in two pairs of interacting galaxies selected for being in very different kinds of encounters seen at different stages. Dynamical information and n-body simulations provide the details of encounter geometry, mass ratio, and timing. In NGC 5752/4 we are seeing a weak encounter, well past closest approach, after about 2.5×10⁸ yr. The large spiral NGC 5754 has a normal population of disk clusters, while the fainter companion NGC 5752 exhibits a rich population of luminous clusters with a flatter luminosity function. The strong, ongoing encounter in NGC 6621/2, seen about 1.0×10⁸ yr past closest approach between roughly equal-mass galaxies, has produced an extensive population of luminous clusters, particularly young and luminous in a small region between the two nuclei. This region is dynamically interesting, with such a strong perturbation in the velocity field that the rotation curve reverses sign. From these results, in comparison with other strongly interacting systems discussed in the literature, cluster formation requires a threshold level of perturbation, with stage of the interaction a less important factor. The location of the most active star formation in NGC 6621/2 draws attention to a possible role for the Toomre stability threshold in shaping star formation in interacting galaxies. The rich cluster populations in NGC 5752 and NGC 6621 show that direct contact between gas-rich galaxy disks is not a requirement to form luminous clusters and that they can be triggered by processes happening within a single galaxy disk (albeit triggered by external perturbations). Based on observations with the NASA/ESA Hubble Space Telescope, obtained at the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-26555.

  2. A cluster-based randomized controlled trial promoting community participation in arsenic mitigation efforts in Bangladesh

    OpenAIRE

    George, Christine Marie; van Geen, Alexander; Slavkovich, Vesna; Singha, Ashit; Levy, Diane; Islam, Tariqul; Ahmed, Kazi Matin; Moon-Howard, Joyce; Tarozzi, Alessandro; Liu, Xinhua; Factor-Litvak, Pam; Graziano, Joseph

    2012-01-01

    Abstract Objective To reduce arsenic (As) exposure, we evaluated the effectiveness of training community members to perform water arsenic (WAs) testing and provide As education compared to sending representatives from outside communities to conduct these tasks. Methods We conducted a cluster based randomized controlled trial of 20 villages in Singair, Bangladesh. Fifty eligible respondents were randomly selected in each village. In 10 villages, a community member provided As education and WAs...

  3. Point-of-care cluster randomized trial in stroke secondary prevention using electronic health records

    NARCIS (Netherlands)

    Dregan, Alex; van Staa, Tjeerd P; McDermott, Lisa; McCann, Gerard; Ashworth, Mark; Charlton, Judith; Wolfe, Charles D A; Rudd, Anthony; Yardley, Lucy; Gulliford, Martin C

    BACKGROUND AND PURPOSE: The aim of this study was to evaluate whether the remote introduction of electronic decision support tools into family practices improves risk factor control after first stroke. This study also aimed to develop methods to implement cluster randomized trials in stroke using

  4. Planck/SDSS Cluster Mass and Gas Scaling Relations for a Volume-Complete redMaPPer Sample

    Science.gov (United States)

    Jimeno, Pablo; Diego, Jose M.; Broadhurst, Tom; De Martino, I.; Lazkoz, Ruth

    2018-04-01

    Using Planck satellite data, we construct Sunyaev-Zel'dovich (SZ) gas pressure profiles for a large, volume-complete sample of optically selected clusters. We have defined a sample of over 8,000 redMaPPer clusters from the Sloan Digital Sky Survey (SDSS), within the volume-complete redshift region 0.100 trend towards larger break radius with increasing cluster mass. Our SZ-based masses fall ~16% below the mass-richness relations from weak lensing, in a similar fashion to the "hydrostatic bias" associated with X-ray derived masses. Finally, we derive a tight Y500-M500 relation over a wide range of cluster mass, with a power-law slope equal to 1.70 ± 0.07, which agrees well with the independent slope obtained by the Planck team with an SZ-selected cluster sample, but extends to lower masses with higher precision.

  5. Pain management in cancer center inpatients: a cluster randomized trial to evaluate a systematic integrated approach—The Edinburgh Pain Assessment and Management Tool

    OpenAIRE

    Fallon, M; Walker, J; Colvin, L; Rodriguez, A; Murray, G; Sharpe, M

    2018-01-01

    Purpose Pain is suboptimally managed in patients with cancer. We aimed to compare the effect of a policy of adding a clinician-delivered bedside pain assessment and management tool (Edinburgh Pain Assessment and management Tool [EPAT]) to usual care (UC) versus UC alone on pain outcomes. Patients and Methods In a two-arm, parallel group, cluster randomized (1:1) trial, we observed pain outcomes in 19 cancer centers in the United Kingdom and then randomly assigned the centers to eithe...

  6. Structuring communication relationships for interprofessional teamwork (SCRIPT): a cluster randomized controlled trial

    OpenAIRE

    Zwarenstein, Merrick; Reeves, Scott; Russell, Ann; Kenaszchuk, Chris; Conn, Lesley Gotlib; Miller, Karen-Lee; Lingard, Lorelei; Thorpe, Kevin E

    2007-01-01

    Abstract Background Despite a burgeoning interest in using interprofessional approaches to promote effective collaboration in health care, systematic reviews find scant evidence of benefit. This protocol describes the first cluster randomized controlled trial (RCT) to design and evaluate an intervention intended to improve interprofessional collaborative communication and patient-centred care. Objectives The objective is to evaluate the effects of a four-component, hospital-based staff commun...

  7. The ellipticities of a sample of globular clusters in M31

    International Nuclear Information System (INIS)

    Lupton, R.H.

    1989-01-01

    Images for a sample of 18 globular clusters in M31 have been obtained. The mean ellipticity on the sky is 0.08 ± 0.02 in the range 7-14 pc (2-4 arcsec) and 0.12 ± 0.01 in the range 14-21 pc (4-6 arcsec), with corresponding true ellipticities of 0.12 and 0.18. The difference between the inner and outer parts is significant at the 99 percent level. The flattening of the inner parts is statistically indistinguishable from that of the Galactic globular clusters, while the outer parts are flatter than the Galactic clusters at the 99.8 percent confidence level. There is a significant anticorrelation of ellipticity with line strength; such a correlation may in retrospect also be seen in the Galactic globular cluster system. For the M31 data, this anticorrelation is stronger in the inner parts of the galaxy. 30 refs

  8. SUCCESS FACTORS IN GROWING SMBs: A STUDY OF TWO INDUSTRIES AT TWO STAGES OF DEVELOPMENT

    Directory of Open Access Journals (Sweden)

    Tor Jarl Trondsen

    2002-01-01

    Full Text Available The study attempts to identify factors for growing SMBs. An evolutionary phase approach has been used. The study also aims to find out if there are common and different denominators for newer and older firms that can affect their profitability. The study selects a sampling frame that isolates two groups of firms in two industries at two stages of development. A variety of organizational and structural data was collected and analyzed. Amongst the conclusions that may be drawn from the study are that it is not easy to find a common definition of success, it is important to stratify SMBs when studying them, an evolutionary stage approach helps to compare firms with roughly the same external and internal dynamics and each industry has its own set of success variables.The study has identified three success variables for older firms that reflect contemporary strategic thinking such as crafting a good strategy and changing it only incrementally, building core competencies and outsourcing the rest, and keeping up with innovation and honing competitive skills.

  9. Superfluid response of two-dimensional parahydrogen clusters in confinement

    Energy Technology Data Exchange (ETDEWEB)

    Idowu, Saheed; Boninsegni, Massimo [Department of Physics, University of Alberta, Edmonton, Alberta T6G 2E7 (Canada)

    2015-04-07

    We study by computer simulations the effect of confinement on the superfluid properties of small two-dimensional (2D) parahydrogen clusters. For clusters of fewer than twenty molecules, the superfluid response in the low temperature limit is found to remain comparable in magnitude to that of free clusters, within a rather wide range of depth and size of the confining well. The resilience of the superfluid response is attributable to the “supersolid” character of these clusters. We investigate the possibility of establishing a bulk 2D superfluid “cluster crystal” phase of p-H₂, in which a global superfluid response would arise from tunnelling of molecules across adjacent unit cells. The computed energetics suggests that for clusters of about ten molecules, such a phase may be thermodynamically stable against the formation of the equilibrium insulating crystal, for values of the cluster crystal lattice constant possibly allowing tunnelling across adjacent unit cells.

  10. B = 5 Skyrmion as a two-cluster system

    Science.gov (United States)

    Gudnason, Sven Bjarke; Halcrow, Chris

    2018-06-01

    The classical B = 5 Skyrmion can be approximated by a two-cluster system in which a B = 1 Skyrmion is attached to a core B = 4 Skyrmion. We quantize this system, allowing the B = 1 Skyrmion to freely orbit the core. The configuration space is 11 dimensional but simplifies significantly after factoring out the overall spin and isospin degrees of freedom. We exactly solve the free quantum problem and then include an interaction potential between the Skyrmions numerically. The resulting energy spectrum is compared to the corresponding nuclei—the helium-5/lithium-5 isodoublet. We find approximate parity doubling not seen in the experimental data. In addition, we fail to obtain the correct ground-state spin. The framework laid out for this two-cluster system can readily be modified for other clusters and in particular for other B = 4n + 1 nuclei, of which B = 5 is the simplest example.

  11. Propensity score to detect baseline imbalance in cluster randomized trials: the role of the c-statistic.

    Science.gov (United States)

    Leyrat, Clémence; Caille, Agnès; Foucher, Yohann; Giraudeau, Bruno

    2016-01-22

    Despite randomization, baseline imbalance and confounding bias may occur in cluster randomized trials (CRTs). Covariate imbalance may jeopardize the validity of statistical inferences if it occurs on prognostic factors. Thus, diagnosing such an imbalance is essential so that the statistical analysis can be adjusted if required. We developed a tool based on the c-statistic of the propensity score (PS) model to detect global baseline covariate imbalance in CRTs and assess the risk of confounding bias. We performed a simulation study to assess the performance of the proposed tool and applied this method to analyze the data from 2 published CRTs. The proposed method had good performance for large sample sizes (n = 500 per arm) and when the number of unbalanced covariates was not too small compared with the total number of baseline covariates (≥40% of covariates unbalanced). We also provide a strategy for preselection of the covariates to be included in the PS model to enhance imbalance detection. The proposed tool could be useful in deciding whether covariate adjustment is required before performing statistical analyses of CRTs.
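The diagnostic above reduces to a simple recipe: regress arm membership on the baseline covariates and read off the model's c-statistic (ROC AUC), which is near 0.5 under balance and drifts upward with imbalance. The sketch below illustrates this on simulated data; the sample size, covariate count, and induced shift are hypothetical, and the paper's simulation design is more elaborate.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Hypothetical CRT: 500 participants per arm, 5 baseline covariates,
# with one covariate imbalanced between arms.
n = 500
arm = np.repeat([0, 1], n)
X = rng.normal(size=(2 * n, 5))
X[arm == 1, 0] += 0.5          # induced baseline imbalance on covariate 0

# Propensity score model: regress arm membership on baseline covariates.
ps_model = LogisticRegression().fit(X, arm)
ps = ps_model.predict_proba(X)[:, 1]

# The c-statistic (ROC AUC) of this model: ~0.5 indicates global balance;
# values well above 0.5 flag baseline covariate imbalance between arms.
c_stat = roc_auc_score(arm, ps)
```

A threshold on `c_stat` (calibrated by simulation, as in the paper) would then trigger covariate adjustment in the outcome analysis.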

  12. Comparison of One-Stage Free Gracilis Muscle Flap With Two-Stage Method in Chronic Facial Palsy

    Directory of Open Access Journals (Sweden)

    J Ghaffari

    2007-08-01

    Full Text Available Background: Rehabilitation of facial paralysis is one of the greatest challenges faced by reconstructive surgeons today. The traditional treatment for patients with facial palsy is the two-stage free gracilis flap, which has a long latency period between the two stages of surgery. Methods: In this paper, we prospectively compared the results of the one-stage gracilis flap method with the two-stage technique. Results: Of 41 patients with facial palsy referred to Hazrat-e-Fatemeh Hospital, 31 were selected, of whom 22 underwent the two-stage and 9 the one-stage method. The two groups were identical with respect to age, sex, intensity of illness, duration, and chronicity of illness. Mean duration of follow-up was 37 months. There was no significant difference between the two groups regarding symmetry of the face in repose, smiling, whistling, and nasolabial folds. The frequency of complications was equal in both groups, as were postoperative surgeon and patient satisfaction. There was no significant difference between the mean excursion of the muscle flap in the one-stage (9.8 mm) and two-stage (8.9 mm) groups. The ratio of contraction of the affected side compared with the normal side was similar in both groups. The mean time to initial contraction of the muscle flap in the one-stage group (5.5 months) differed significantly (P=0.001) from that in the two-stage group (6.5 months). The study also revealed a highly significant difference (P=0.0001) between the mean waiting period from the first operation to the beginning of muscle contraction in the one-stage (5.5 months) and two-stage (17.1 months) groups. Conclusion: It seems that the results and complications of the two methods are the same, but the one-stage method requires less time for facial reanimation and is cost-effective because it saves time and decreases hospitalization costs.

  13. Effectiveness of individualized fall prevention program in geriatric rehabilitation hospital setting: a cluster randomized trial.

    Science.gov (United States)

    Aizen, Efraim; Lutsyk, Galina; Wainer, Lea; Carmeli, Sarit

    2015-10-01

    There is no conclusive evidence that hospital fall prevention programs can reduce the number of falls. We aimed to investigate the effect of a targeted individualized fall prevention program in a geriatric rehabilitation hospital. This was a two-stage cluster-controlled trial carried out in five geriatric rehabilitation wards. Participants were 752 patients with mean age 83.2 years. The intervention was a two-phase targeted fall prevention program comprising an assessment of each patient's risk with a risk assessment tool and individual management that included medical, behavioral, cognitive and environmental modifications. Patients at moderate risk additionally received orientation guidance and mobility restriction. Patients determined to be at high risk were additionally placed under permanent personal supervision. The outcome measure was falls during the hospital stay. In both stages of the trial, intervention and control wards were almost similar at baseline for individual patient characteristics. Overall, 37 falls occurred during the study. No significant difference was found in fall rates during follow-up between intervention and control wards: 1.306 falls per 1000 bed days in the intervention groups and 1.763-1.826 falls per 1000 bed days in the control groups. The adjusted hazard ratio for falls in the intervention groups was 1.36 (95% confidence interval 0.89-1.77; P = 0.08) in the first stage and 1.27 (95% confidence interval 0.92-1.67; P = 0.12) in the second stage. These results suggest that in a geriatric rehabilitation hospital, a targeted individualized fall prevention program is not effective in reducing falls.

  14. Statistical Methods and Tools for Hanford Staged Feed Tank Sampling

    Energy Technology Data Exchange (ETDEWEB)

    Fountain, Matthew S. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Brigantic, Robert T. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Peterson, Reid A. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2013-10-01

    This report summarizes work conducted by Pacific Northwest National Laboratory to technically evaluate the current approach to staged feed sampling of high-level waste (HLW) sludge to meet waste acceptance criteria (WAC) for transfer from tank farms to the Hanford Waste Treatment and Immobilization Plant (WTP). The current sampling and analysis approach is detailed in the document titled Initial Data Quality Objectives for WTP Feed Acceptance Criteria, 24590-WTP-RPT-MGT-11-014, Revision 0 (Arakali et al. 2011). The goal of this current work is to evaluate and provide recommendations to support a defensible, technical and statistical basis for the staged feed sampling approach that meets WAC data quality objectives (DQOs).

  15. Home-based versus mobile clinic HIV testing and counseling in rural Lesotho: a cluster-randomized trial.

    Directory of Open Access Journals (Sweden)

    Niklaus Daniel Labhardt

    2014-12-01

    Full Text Available The success of HIV programs relies on widely accessible HIV testing and counseling (HTC) services at health facilities as well as in the community. Home-based HTC (HB-HTC) is a popular community-based approach to reach persons who do not test at health facilities. Data comparing HB-HTC to other community-based HTC approaches are very limited. This trial compares HB-HTC to mobile clinic HTC (MC-HTC). The trial was powered to test the hypothesis of higher HTC uptake in HB-HTC campaigns than in MC-HTC campaigns. Twelve clusters were randomly allocated to HB-HTC or MC-HTC. The six clusters in the HB-HTC group received 30 1-day multi-disease campaigns (five villages per cluster) that delivered services by going door-to-door, whereas the six clusters in the MC-HTC group received campaigns involving community gatherings in the 30 villages with subsequent service provision in mobile clinics. Time allocation and human resources were standardized and equal in both groups. All individuals accessing the campaigns with unknown HIV status, or whose last HIV test was >12 wk ago and was negative, were eligible. All outcomes were assessed at the individual level. Statistical analysis used multivariable logistic regression. Odds ratios and p-values were adjusted for gender, age, and cluster effect. Of 3,197 participants from the 12 clusters, 2,563 (80.2%) were eligible (HB-HTC: 1,171; MC-HTC: 1,392). The results for the primary outcomes were as follows. Overall HTC uptake was higher in the HB-HTC group than in the MC-HTC group (92.5% versus 86.7%; adjusted odds ratio [aOR]: 2.06; 95% CI: 1.18-3.60; p = 0.011). Among adolescents and adults ≥12 y, HTC uptake did not differ significantly between the two groups; however, in children <12 y, HTC uptake was higher in the HB-HTC arm (87.5% versus 58.7%; aOR: 4.91; 95% CI: 2.41-10.0; p<0.001). Of those who took up HTC, 114 (4.9%) tested HIV-positive, 39 (3.6%) in the HB-HTC arm and 75 (6.2%) in the MC-HTC arm (aOR: 0.64; 95% CI

  16. A cluster randomized controlled trial testing the effectiveness of Houvast: A strengths-based intervention for homeless young adults

    NARCIS (Netherlands)

    Krabbenborg, M.A.M.; Boersma, S.N.; Veld, W.M. van der; Hulst, B. van; Vollebergh, W.A.M.; Wolf, J.R.L.M.

    2017-01-01

    Objective: To test the effectiveness of Houvast: a strengths-based intervention for homeless young adults. Method: A cluster randomized controlled trial was conducted with 10 Dutch shelter facilities randomly allocated to an intervention and a control group. Homeless young adults were interviewed

  17. A New Two-Stage Approach to Short Term Electrical Load Forecasting

    Directory of Open Access Journals (Sweden)

    Dragan Tasić

    2013-04-01

    Full Text Available In the deregulated energy market, the accuracy of load forecasting has a significant effect on the planning and operational decision making of utility companies. Electric load is a random non-stationary process influenced by a number of factors, which makes it difficult to model. To achieve better forecasting accuracy, a wide variety of models have been proposed. These models are based on different mathematical methods and offer different features. This paper presents a new two-stage approach for short-term electrical load forecasting based on least-squares support vector machines. With the aim of improving forecasting accuracy, one more feature was added to the model feature set: the next-day average load demand. As this feature is unknown one day ahead, in the first stage the next-day average load demand is forecast, and the result is then used in the second-stage model for next-day hourly load forecasting. The effectiveness of the presented model is shown on real data from the ISO New England electricity market. The obtained results confirm the validity and advantage of the proposed approach.
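The two-stage scheme above can be sketched on synthetic data. As a stand-in for the paper's least-squares SVM, the sketch uses kernel ridge regression (a closely related kernelized least-squares method); the data generator, hyperparameters, and window lengths are all hypothetical.

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge  # LS-SVM-like stand-in

rng = np.random.default_rng(0)

# Synthetic hourly loads: a weekly trend times a daily shape, plus noise.
days, hours = 60, np.arange(24)
daily_avg = 100 + 10 * np.sin(2 * np.pi * np.arange(days) / 7)
shape = 1 + 0.3 * np.sin(2 * np.pi * (hours - 6) / 24)
load = daily_avg[:, None] * shape[None, :] + rng.normal(0, 1, (days, 24))

# Stage 1: forecast tomorrow's average load from the last 7 daily averages.
hist = load.mean(axis=1)
X1 = np.array([hist[d - 7:d] for d in range(7, days)])
y1 = hist[7:days]
stage1 = KernelRidge(kernel="rbf", alpha=1.0, gamma=0.01).fit(X1[:-1], y1[:-1])
avg_hat = stage1.predict(X1[-1:])[0]          # next-day average forecast

# Stage 2: hourly model with (hour, daily average) as features; at forecast
# time the unknown daily average is replaced by the stage-1 prediction.
X2 = np.column_stack([np.tile(hours, days - 1),
                      np.repeat(hist[:days - 1], 24)])
y2 = load[:days - 1].ravel()
stage2 = KernelRidge(kernel="rbf", alpha=0.1, gamma=0.05).fit(X2, y2)
X_next = np.column_stack([hours, np.full(24, avg_hat)])
hourly_hat = stage2.predict(X_next)           # 24 hourly forecasts
```

The design choice mirrors the paper: the unavailable next-day average is itself forecast first, then fed to the hourly model as an extra feature.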

  18. Comparison of two modes of vitamin B12 supplementation on neuroconduction and cognitive function among older people living in Santiago, Chile: a cluster randomized controlled trial. a study protocol [ISRCTN 02694183

    Directory of Open Access Journals (Sweden)

    Brito Alex

    2011-09-01

    Full Text Available Abstract Background Older people have a high risk of vitamin B12 deficiency, which can lead to varying degrees of cognitive and neurological impairment. Cobalamin (CBL) deficiency may present as macrocytic anemia, subacute combined degeneration of the spinal cord, or neuropathy, but is often asymptomatic in older people. Less is known about subclinical vitamin B12 deficiency and concurrent neuroconduction and cognitive impairment. A Programme of Complementary Feeding for the Older Population (PACAM) in Chile delivers 2 complementary fortified foods that provide approximately 1.4 μg/day of vitamin B12 (the RDA for older adults is 2.4 μg/day). The aim of the present study is to assess whether supplementation with vitamin B12 will improve neuroconduction and cognitive function in older people who have biochemical evidence of vitamin B12 insufficiency in the absence of clinical deficiency. Methods We designed a cluster double-blind placebo-controlled trial involving community-dwelling people aged 70-79 living in Santiago, Chile. We randomized 15 clusters (health centers) involving 300 people (20 per cluster). Each cluster will be randomly assigned to one of three arms: (a) a 1 mg vitamin B12 pill taken daily and a routine PACAM food; (b) a placebo pill and the milk-PACAM food fortified to provide 1 mg of vitamin B12; (c) the routine PACAM food and a placebo pill. The study has been designed with an 18-month follow-up period. The primary outcomes, assessed at baseline and at 4, 9 and 18 months, will be serum levels of vitamin B12, neuroconduction and cognitive function.
Conclusions In view of the high prevalence of vitamin B12 deficiency in later life, the present study has potential public health interest because it will measure the impact of the existing program of complementary feeding compared with two options that provide higher vitamin B12 intakes and may contribute to preserving neurophysiologic and cognitive function and thus improve quality of life for older

  19. Classification of Autism Spectrum Disorder Using Random Support Vector Machine Cluster

    Directory of Open Access Journals (Sweden)

    Xia-an Bi

    2018-02-01

    Full Text Available Autism spectrum disorder (ASD) is a neurological developmental disorder mainly reflected in communication and language barriers and difficulties in social communication. Most studies have used machine learning methods to classify patients and normal controls, among which support vector machines (SVMs) are widely employed. But the classification accuracy of an SVM is usually low when a single SVM is used as the classifier. Thus, we used multiple SVMs to classify ASD patients and typical controls (TC). Resting-state functional magnetic resonance imaging (fMRI) data of 46 TC and 61 ASD patients were obtained from the Autism Brain Imaging Data Exchange (ABIDE) database. Only 84 of the 107 subjects were utilized in the experiments, because the translation or rotation of 7 TC and 16 ASD patients surpassed ±2 mm or ±2°. A random SVM cluster was then proposed to distinguish TC and ASD. The results show that this method has an excellent classification performance based on all the features. Furthermore, the accuracy based on the optimal feature set reached 96.15%. Abnormal brain regions could also be found, such as the inferior frontal gyrus (IFG; orbital and opercular parts), hippocampus, and precuneus. This indicates that the random SVM cluster method may be applicable to the auxiliary diagnosis of ASD.
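One plausible construction of a "random SVM cluster" is an ensemble of SVMs, each trained on a random subject subsample and a random feature subset, combined by majority vote; the paper's exact ensemble details may differ. The sketch below uses synthetic stand-in data (the subject counts match the abstract, but the features and group effect are invented).

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Hypothetical stand-in for fMRI-derived feature vectors: 84 subjects,
# 200 features, with a weak group difference on the first 20 features.
n_tc, n_asd, n_feat = 40, 44, 200
X = rng.normal(size=(n_tc + n_asd, n_feat))
y = np.array([0] * n_tc + [1] * n_asd)
X[y == 1, :20] += 0.6

def random_svm_cluster(X, y, n_svms=50, feat_frac=0.3, samp_frac=0.8):
    """Train many SVMs, each on a random subject subsample and a random
    feature subset; the cluster classifies by majority vote."""
    models = []
    for _ in range(n_svms):
        feats = rng.choice(X.shape[1], int(feat_frac * X.shape[1]), replace=False)
        rows = rng.choice(len(y), int(samp_frac * len(y)), replace=False)
        clf = SVC(kernel="linear").fit(X[np.ix_(rows, feats)], y[rows])
        models.append((feats, clf))
    return models

def predict(models, X):
    votes = np.array([clf.predict(X[:, feats]) for feats, clf in models])
    return (votes.mean(axis=0) > 0.5).astype(int)

models = random_svm_cluster(X, y)
acc = (predict(models, X) == y).mean()  # resubstitution accuracy of the ensemble
```

Averaging many weak, decorrelated SVMs is what lifts the accuracy above that of a single SVM; feature subsets that recur in well-performing members can also hint at informative (here, "abnormal") features.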

  20. A two-stage method for inverse medium scattering

    KAUST Repository

    Ito, Kazufumi

    2013-03-01

    We present a novel numerical method to the time-harmonic inverse medium scattering problem of recovering the refractive index from noisy near-field scattered data. The approach consists of two stages, one pruning step of detecting the scatterer support, and one resolution enhancing step with nonsmooth mixed regularization. The first step is strictly direct and of sampling type, and it faithfully detects the scatterer support. The second step is an innovative application of nonsmooth mixed regularization, and it accurately resolves the scatterer size as well as intensities. The nonsmooth model can be efficiently solved by a semi-smooth Newton-type method. Numerical results for two- and three-dimensional examples indicate that the new approach is accurate, computationally efficient, and robust with respect to data noise. © 2012 Elsevier Inc.

  1. An inexact two-stage stochastic robust programming for residential micro-grid management-based on random demand

    International Nuclear Information System (INIS)

    Ji, L.; Niu, D.X.; Huang, G.H.

    2014-01-01

    In this paper a stochastic robust optimization problem for residential micro-grid energy management is presented. Combined cooling, heating and power (CCHP) technology is introduced to satisfy the various energy demands. Two-stage programming is utilized to find the optimal installed-capacity investment and operation control of the CCHP system. Moreover, interval programming and robust stochastic optimization methods are exploited to obtain interval robust solutions under different robustness levels which are feasible for uncertain data. The obtained results can help micro-grid managers minimize the investment and operation cost with lower system-failure risk when facing a fluctuating energy market and uncertain technology parameters; the different robustness levels reflect the risk preference of the micro-grid manager. The proposed approach is applied to residential area energy management in North China. Detailed computational results under different robustness levels are presented and analyzed to inform investment decisions and operation strategies. - Highlights: • An inexact two-stage stochastic robust programming model for CCHP management. • Uncertainties in the energy market and technical parameters were considered. • Investment decisions, operation cost, and system safety were analyzed. • Uncertainties expressed as discrete intervals and probability distributions
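The two-stage structure (a here-and-now capacity investment, then scenario-dependent operation) can be illustrated with a toy deterministic-equivalent linear program. This is a generic sketch, not the paper's interval/robust CCHP model; the demand scenarios, probabilities, and costs are invented.

```python
import numpy as np
from scipy.optimize import linprog

# Illustrative two-stage stochastic program (deterministic equivalent):
# stage 1 picks installed capacity x; stage 2 picks operation y_s for each
# demand scenario s, with unmet demand u_s penalized.
demand = np.array([30.0, 50.0, 80.0])      # demand scenarios
prob = np.array([0.3, 0.5, 0.2])           # scenario probabilities
c_cap, c_op, c_short = 10.0, 2.0, 40.0     # capacity, operation, shortfall costs

S = len(demand)
# Decision vector z = [x, y_1..y_S, u_1..u_S], all nonnegative.
c = np.concatenate([[c_cap], prob * c_op, prob * c_short])

A_ub, b_ub = [], []
for s in range(S):
    meet = np.zeros(1 + 2 * S)             # y_s + u_s >= d_s
    meet[1 + s], meet[1 + S + s] = -1.0, -1.0
    A_ub.append(meet); b_ub.append(-demand[s])
    cap = np.zeros(1 + 2 * S)              # y_s <= x
    cap[0], cap[1 + s] = -1.0, 1.0
    A_ub.append(cap); b_ub.append(0.0)

res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub))
x_opt = res.x[0]                           # optimal installed capacity
```

The optimum trades capital cost against the expected shortfall penalty across scenarios; for these numbers the optimal capacity covers the medium-demand scenario but accepts shortfall in the rare high-demand one. The paper layers interval coefficients and robustness levels on top of this basic structure.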

  2. A long-term evaluation of the stage of change approach and compensable injury outcomes - a cluster-randomised trial.

    Science.gov (United States)

    Rothmore, Paul; Aylward, Paul; Gray, Jodi; Karnon, Jonathan

    2017-05-01

    This study investigated the long-term injury outcomes for workers in companies from a range of industries which had been randomly allocated to receive ergonomics interventions tailored according to the stage of change (SOC) approach or standard ergonomics advice. Differences in compensable injury outcomes between the groups were analysed using logistic regression models. Questionnaire results from face-to-face interviews to assess musculoskeletal pain and discomfort (MSPD), job satisfaction and other factors were also analysed. Although not significant at the 0.05 level, after adjusting for workgroup clustering, workers in receipt of tailored advice were 55% (OR = 0.45, 95% CI = 0.19-1.08) less likely to report a compensable injury than those in receipt of standard ergonomics advice. Workload, job satisfaction and MSPD were significantly correlated with injury outcomes. The observed outcomes support the potential value of the SOC approach, as well as highlighting the need to consider workload, job satisfaction and MSPD when planning injury prevention programmes. Practitioner Summary: This study investigated compensable injury outcomes for workers who had received ergonomics advice tailored according to the stage of change (SOC) approach compared with standard ergonomics advice. The results support the potential value of the SOC approach and highlight the need to consider workload, job satisfaction and musculoskeletal pain and discomfort when planning injury prevention interventions.

  3. Clustering, randomness, and regularity in cloud fields: 2. Cumulus cloud fields

    Science.gov (United States)

    Zhu, T.; Lee, J.; Weger, R. C.; Welch, R. M.

    1992-12-01

    During the last decade a major controversy has been brewing concerning the proper characterization of cumulus convection. The prevailing view has been that cumulus clouds form in clusters, in which cloud spacing is closer than that found for the overall cloud field and which maintain their identity over many cloud lifetimes. This "mutual protection hypothesis" of Randall and Huffman (1980) has been challenged by the "inhibition hypothesis" of Ramirez et al. (1990) which strongly suggests that the spatial distribution of cumuli must tend toward a regular distribution. A dilemma has resulted because observations have been reported to support both hypotheses. The present work reports a detailed analysis of cumulus cloud field spatial distributions based upon Landsat, Advanced Very High Resolution Radiometer, and Skylab data. Both nearest-neighbor and point-to-cloud cumulative distribution function statistics are investigated. The results show unequivocally that when both large and small clouds are included in the cloud field distribution, the cloud field always has a strong clustering signal. The strength of clustering is largest at cloud diameters of about 200-300 m, diminishing with increasing cloud diameter. In many cases, clusters of small clouds are found which are not closely associated with large clouds. As the small clouds are eliminated from consideration, the cloud field typically tends towards regularity. Thus it would appear that the "inhibition hypothesis" of Ramirez and Bras (1990) has been verified for the large clouds. However, these results are based upon the analysis of point processes. A more exact analysis also is made which takes into account the cloud size distributions. Since distinct clouds are by definition nonoverlapping, cloud size effects place a restriction upon the possible locations of clouds in the cloud field. The net effect of this analysis is that the large clouds appear to be randomly distributed, with only weak tendencies towards

  4. Fit 5 Kids TV reduction program for Latino preschoolers: A cluster randomized controlled trial

    Science.gov (United States)

    Reducing Latino preschoolers' TV viewing is needed to reduce their risk of obesity and other chronic diseases. This study's objective was to evaluate the Fit 5 Kids (F5K) TV reduction program's impact on Latino preschoolers' TV viewing. The study design was a cluster randomized controlled trial (RCT).

  5. Dynamic Load Balanced Clustering using Elitism based Random Immigrant Genetic Approach for Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    K. Mohaideen Pitchai

    2017-07-01

    Full Text Available Wireless Sensor Networks (WSNs) consist of a large number of small sensors with restricted energy. Prolonged network lifespan, scalability, node mobility and load balancing are important needs for several WSN applications. Clustering the sensor nodes is an efficient technique to reach these goals. WSNs have dynamic topologies because of factors like energy conservation and node movement, which leads to the Dynamic Load Balanced Clustering Problem (DLBCP). In this paper, an Elitism based Random Immigrant Genetic Approach (ERIGA) is proposed to solve the DLBCP, which adapts to topology dynamics. ERIGA uses dynamic Genetic Algorithm (GA) components for solving the DLBCP, and this dynamic GA enhances the performance of the load balanced clustering process. As a result, ERIGA elects suitable cluster heads, which balances the network load and increases the lifespan of the network.
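The core ERIGA idea — keep the best individuals (elitism) while injecting fresh random individuals (random immigrants) so the population can track a changing topology — can be sketched generically. The operator choices and parameter values below are illustrative assumptions, not the paper's exact algorithm:

```python
import random

def next_generation(pop, fitness, elite_frac=0.1, immigrant_frac=0.1,
                    mutate_rate=0.05, rng=random):
    """One generation of an elitism + random-immigrant GA (generic sketch).
    `pop` is a list of bit-string chromosomes (lists of 0/1)."""
    n, length = len(pop), len(pop[0])
    ranked = sorted(pop, key=fitness, reverse=True)
    n_elite = max(1, int(elite_frac * n))
    n_imm = int(immigrant_frac * n)
    new_pop = [chrom[:] for chrom in ranked[:n_elite]]        # elitism
    new_pop += [[rng.randint(0, 1) for _ in range(length)]    # random immigrants
                for _ in range(n_imm)]
    while len(new_pop) < n:                                   # crossover + mutation
        a, b = rng.sample(ranked[: n // 2], 2)                # parents from top half
        cut = rng.randrange(1, length)
        child = a[:cut] + b[cut:]
        child = [g ^ (rng.random() < mutate_rate) for g in child]
        new_pop.append(child)
    return new_pop
```

In the DLBCP setting the fitness function would score a chromosome's cluster-head assignment by load balance and residual energy; for the sketch any fitness, e.g. `sum`, works.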

  6. Modeling of correlated data with informative cluster sizes: An evaluation of joint modeling and within-cluster resampling approaches.

    Science.gov (United States)

    Zhang, Bo; Liu, Wei; Zhang, Zhiwei; Qu, Yanping; Chen, Zhen; Albert, Paul S

    2017-08-01

    Joint modeling and within-cluster resampling are two approaches used for analyzing correlated data with informative cluster sizes. Motivated by a developmental toxicity study, we examined the performance and validity of these two approaches in testing covariate effects in generalized linear mixed-effects models. We show that the joint modeling approach is robust to misspecification of the cluster size model in terms of Type I and Type II errors when the corresponding covariates are not included in the random effects structure; otherwise, statistical tests may be affected. We also evaluate the performance of the within-cluster resampling procedure and thoroughly investigate its validity for modeling correlated data with informative cluster sizes. We show that within-cluster resampling is a valid alternative to joint modeling for cluster-specific covariates, but it is invalid for time-dependent covariates. The two methods are applied to a developmental toxicity study that investigated the effect of exposure to diethylene glycol dimethyl ether.
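Within-cluster resampling can be sketched in a few lines: draw one observation per cluster (so every cluster contributes equally regardless of its size), compute the estimate, and average over many resamples. This is a generic illustration with hypothetical function names, not the authors' implementation, which fits a full regression model per resample and also combines variance estimates:

```python
import random
import statistics

def wcr_estimate(clusters, estimator, n_resamples=200, rng=random):
    """Within-cluster resampling (sketch): repeatedly draw one observation
    per cluster, apply `estimator` to each resampled data set, and average.
    `clusters` maps cluster id -> list of observed values."""
    estimates = []
    for _ in range(n_resamples):
        sample = [rng.choice(obs) for obs in clusters.values()]
        estimates.append(estimator(sample))
    return statistics.mean(estimates)
```

Because each resample takes exactly one observation per cluster, informative cluster sizes cannot bias the estimate through differential cluster weighting — which is the motivation for the procedure discussed above.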

  7. THE GEMINI/HST CLUSTER PROJECT: STRUCTURAL AND PHOTOMETRIC PROPERTIES OF GALAXIES IN THREE z = 0.28-0.89 CLUSTERS

    International Nuclear Information System (INIS)

    Chiboucas, Kristin; Joergensen, Inger; Barr, Jordi; Collobert, Maela; Davies, Roger; Flint, Kathleen

    2009-01-01

    We present the data processing and analysis techniques we are using to determine the structural and photometric properties of galaxies in our Gemini/HST Galaxy Cluster Project sample. The goal of this study is to understand cluster galaxy evolution in terms of scaling relations and structural properties of cluster galaxies at redshifts of 0.15 and above. We fit r^(1/4) law and Sersic function two-dimensional surface brightness profiles to each of the galaxies in our sample. Using simulated galaxies, we test how the assumed profile affects the derived parameters and how the uncertainties affect our Fundamental Plane results. We find that while fitting galaxies that have Sersic index n > 2 with r^(1/4) law profiles systematically overestimates the galaxy radius and flux, the combination of profile parameters that enters the Fundamental Plane has uncertainties that are small. Average systematic offsets and associated random uncertainties in magnitude and log r_e for n > 2 galaxies fitted with r^(1/4) law profiles are -0.1 ± 0.3 and 0.1 ± 0.2, respectively. The combination of effective radius and surface brightness, log r_e - β log <I>_e, that enters the Fundamental Plane produces offsets smaller than -0.02 ± 0.10. This systematic error is insignificant and independent of galaxy magnitude or size. A catalog of photometry and surface brightness profile parameters is presented for three of the clusters in our sample, RX J0142.0+2131, RX J0152.7-1357, and RX J1226.9+3332 at redshifts 0.28, 0.83, and 0.89, respectively.

  8. Generating a robust prediction model for stage I lung adenocarcinoma recurrence after surgical resection.

    Science.gov (United States)

    Wu, Yu-Chung; Wei, Nien-Chih; Hung, Jung-Jyh; Yeh, Yi-Chen; Su, Li-Jen; Hsu, Wen-Hu; Chou, Teh-Ying

    2017-10-03

    Lung cancer mortality remains high even after successful resection. Adjuvant treatment benefits stage II and III patients, but not stage I patients, and most studies fail to predict recurrence in stage I patients. Our study included 211 lung adenocarcinoma patients (stages I-IIIA; 81% stage I) who received curative resections at Taipei Veterans General Hospital between January 2001 and December 2012. We generated a prediction model using 153 samples, with validation using an additional 58 clinical outcome-blinded samples. Gene expression profiles were generated using formalin-fixed, paraffin-embedded tissue samples and microarrays. Data analysis was performed using a supervised clustering method. The prediction model generated from mixed-stage samples successfully separated patients at high vs. low risk of recurrence. The hazard ratio in the validation set (HR = 4.38) was similar to that in the training set (HR = 4.53), indicating a robust training process. Our prediction model successfully distinguished high- from low-risk stage IA and IB patients, with a difference in 5-year disease-free survival between high- and low-risk patients of 42% for stage IA and 45% for stage IB. In conclusion, we developed a robust prediction model for identifying lung adenocarcinoma patients at high risk for recurrence who may benefit from adjuvant therapy. The difference in disease-free survival between the high- and low-risk groups represents a more than two-fold improvement over earlier published results.

  9. Two-dimensional random arrays for real time volumetric imaging

    DEFF Research Database (Denmark)

    Davidsen, Richard E.; Jensen, Jørgen Arendt; Smith, Stephen W.

    1994-01-01

    Two-dimensional arrays are necessary for a variety of ultrasonic imaging techniques, including elevation focusing, 2-D phase aberration correction, and real time volumetric imaging. In order to reduce system cost and complexity, sparse 2-D arrays have been considered with element geometries selected ad hoc, by algorithm, or by random process. Two random sparse array geometries and a sparse array with a Mills cross receive pattern were simulated and compared to a fully sampled aperture with the same overall dimensions. The sparse arrays were designed to the constraints of the Duke University real time volumetric imaging system, which employs a wide transmit beam and receive mode parallel processing to increase image frame rate. Depth-of-field comparisons were made from simulated on-axis and off-axis beamplots at ranges from 30 to 160 mm for both coaxial and offset transmit and receive.
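A random sparse geometry of the kind compared above amounts to drawing a subset of active elements from the fully sampled 2-D grid. The minimal sketch below (hypothetical function name, arbitrary aperture size) shows only the selection step, not the beamplot simulation:

```python
import random

def random_sparse_array(n_x, n_y, n_active, rng=random):
    """Pick a random sparse subset of elements from a fully sampled
    n_x x n_y 2-D aperture (generic sketch). Returns sorted (i, j) indices."""
    all_elements = [(i, j) for i in range(n_x) for j in range(n_y)]
    return sorted(rng.sample(all_elements, n_active))
```

Each call yields one candidate geometry; in a study like the one above, several such draws would be simulated and their beamplots compared against the fully sampled aperture.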

  10. Effect of ammoniacal nitrogen on one-stage and two-stage anaerobic digestion of food waste

    International Nuclear Information System (INIS)

    Ariunbaatar, Javkhlan; Scotto Di Perta, Ester; Panico, Antonio; Frunzo, Luigi; Esposito, Giovanni; Lens, Piet N.L.; Pirozzi, Francesco

    2015-01-01

    Highlights: • Almost 100% of the biomethane potential of food waste was recovered during AD in a two-stage CSTR. • Recirculation of the liquid fraction of the digestate provided the necessary buffer in the AD reactors. • A higher OLR (0.9 gVS/L·d) led to higher accumulation of TAN, which caused more toxicity. • A two-stage reactor is more sensitive to elevated concentrations of ammonia. • The IC50 of TAN for the AD of food waste amounts to 3.8 g/L. - Abstract: This research compares the operation of one-stage and two-stage anaerobic continuously stirred tank reactor (CSTR) systems fed semi-continuously with food waste. The main purpose was to investigate the effects of ammoniacal nitrogen on the anaerobic digestion process. The two-stage system gave more reliable operation than the one-stage system due to: (i) a better pH self-adjusting capacity; (ii) a higher resistance to organic loading shocks; and (iii) a higher conversion rate of organic substrate to biomethane. A small amount of biohydrogen was also detected from the first stage of the two-stage reactor, making this system attractive for biohythane production. As the digestate contains ammoniacal nitrogen, re-circulating it provided the necessary alkalinity in the systems, thus preventing an eventual failure by volatile fatty acids (VFA) accumulation. However, re-circulation also resulted in ammonium accumulation, yielding a lower biomethane production. Based on the batch experimental results, the 50% inhibitory concentration of total ammoniacal nitrogen on the methanogenic activities was calculated as 3.8 g/L, corresponding to 146 mg/L free ammonia for the inoculum used in this research. The two-stage system was affected by the inhibition more than the one-stage system, as it requires less alkalinity and the physically separated methanogens are more sensitive to inhibitory factors, such as ammonium and propionic acid.

  11. Strengthening referral of sick children from the private health sector and its impact on referral uptake in Uganda: a cluster randomized controlled trial protocol

    Directory of Open Access Journals (Sweden)

    Esther Buregyeya

    2016-11-01

    Full Text Available Abstract Background Uganda's under-five mortality is high, currently estimated at 66/1000 live births. Poor referral of sick children that seek care from the private sector is one of the contributory factors. The proposed intervention aims to improve referral and uptake of referral advice for children that seek care from private facilities (registered drug shops/private clinics). Methods/Design A cluster randomized design will be applied to test the intervention in Mukono District, central Uganda. A sample of study clusters will implement the intervention, which will consist of three components: (i) raising awareness in the community: village health teams will discuss the importance of referral and encourage households to save money; (ii) training and supervision of providers in the private sector to diagnose, treat and refer sick children; (iii) regular meetings between the public and private providers (convened by the district health team) to discuss the referral system. Twenty clusters will be included in the study, randomized in the ratio of 1:1. A minimum of 319 sick children will be recruited per cluster, and the total number of sick children to be recruited from all clusters will be 8910, adjusting for a 10% loss to follow-up and possible withdrawal of private outlets. Discussion The immediate sustainable impact will be appropriate treatment of sick children. The intervention is likely to affect private sector practices since the scope of the services they provide will have expanded. The proposed study is also likely to have an impact on families as: (i) they may appreciate the importance of timely referral in child illness management; (ii) the cost savings related to reduced morbidity will be used by households to access other social services. The linkage between the private and public sectors will create a potential avenue for delivery of other public health interventions and improved working relations in the two sectors. Further, improved quality of

  12. Staging of gastric adenocarcinoma using two-phase spiral CT: correlation with pathologic staging

    International Nuclear Information System (INIS)

    Seo, Tae Seok; Lee, Dong Ho; Ko, Young Tae; Lim, Joo Won

    1998-01-01

    To correlate the preoperative staging of gastric adenocarcinoma using two-phase spiral CT with pathologic staging. One hundred and eighty patients with gastric cancers confirmed during surgery underwent two-phase spiral CT, and were evaluated retrospectively. CT scans were obtained in the prone position after ingestion of water. Scans were performed 35 and 80 seconds after the start of infusion of 120 mL of non-ionic contrast material at a rate of 3 mL/sec. Five mm collimation, 7 mm/sec table feed and 5 mm reconstruction interval were used. T- and N-stage were determined using spiral CT images, without knowledge of the pathologic results. Pathologic staging was later compared with CT staging. Pathologic T-stage was T1 in 70 cases (38.9%), T2 in 33 (18.3%), T3 in 73 (40.6%), and T4 in 4 (2.2%). Type-I or IIa elevated lesions accounted for 10 of 70 T1 cases (14.3%) and flat or depressed lesions (type IIb, IIc, or III) for 60 (85.7%). Pathologic N-stage was N0 in 85 cases (47.2%), N1 in 42 (23.3%), N2 in 31 (17.2%), and N3 in 22 (12.2%). The detection rate of early gastric cancer using two-phase spiral CT was 100.0% (10 of 10 cases) among elevated lesions and 78.3% (47 of 60 cases) among flat or depressed lesions. With regard to T-stage, there was good correlation between CT image and pathology in 86 of 180 cases (47.8%). Overstaging occurred in 23.3% (42 of 180 cases) and understaging in 28.9% (52 of 180 cases). With regard to N-stage, good correlation between CT image and pathology was noted in 94 of 180 cases (52.2%). The rate of understaging (31.7%, 57 of 180 cases) was higher than that of overstaging (16.1%, 29 of 180 cases) (p<0.001). The detection rate of early gastric cancer using two-phase spiral CT was 81.4%, and there was no significant difference in detectability between elevated and depressed lesions. Two-phase spiral CT for determining the T- and N-stage of gastric cancer was not effective; it was accurate in about 50% of cases, and understaging tended to occur.
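The over- and understaging percentages quoted above are simple tabulations of paired CT and pathologic stages. A short sketch (with hypothetical stage labels and function name) makes the bookkeeping explicit:

```python
def staging_agreement(pairs, order):
    """Compare image-based vs pathologic stage labels.
    `pairs` is a list of (ct_stage, path_stage); `order` ranks the stages.
    Returns (correct, overstaged, understaged) fractions."""
    rank = {s: i for i, s in enumerate(order)}
    n = len(pairs)
    over = sum(rank[ct] > rank[path] for ct, path in pairs) / n
    under = sum(rank[ct] < rank[path] for ct, path in pairs) / n
    return 1 - over - under, over, under
```

Applied to a full table of 180 paired stages, this reproduces summaries like "accurate in 47.8%, overstaged in 23.3%, understaged in 28.9%".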

  13. Mediastinal lymph node dissection versus mediastinal lymph node sampling for early stage non-small cell lung cancer: a systematic review and meta-analysis.

    Science.gov (United States)

    Huang, Xiongfeng; Wang, Jianmin; Chen, Qiao; Jiang, Jielin

    2014-01-01

    This systematic review and meta-analysis aimed to evaluate the overall survival, local recurrence, distant metastasis, and complications of mediastinal lymph node dissection (MLND) versus mediastinal lymph node sampling (MLNS) in stage I-IIIA non-small cell lung cancer (NSCLC) patients. A systematic search of published literature was conducted using the main databases (MEDLINE, PubMed, EMBASE, and Cochrane databases) to identify relevant randomized controlled trials that compared MLND vs. MLNS in NSCLC patients. The methodological quality of included randomized controlled trials was assessed according to the criteria from the Cochrane Handbook for Systematic Reviews of Interventions (Version 5.1.0). Meta-analysis was performed using The Cochrane Collaboration's Review Manager 5.3. The results of the meta-analysis were expressed as hazard ratios (HR) or risk ratios (RR), with their corresponding 95% confidence intervals (CI). We included results reported from six randomized controlled trials, with a total of 1,791 patients included in the primary meta-analysis. Compared to MLNS in NSCLC patients, MLND showed no statistically significant difference in overall survival (HR = 0.77, 95% CI 0.55 to 1.08; P = 0.13). In addition, the results indicated that the local recurrence rate (RR = 0.93, 95% CI 0.68 to 1.28; P = 0.67), distant metastasis rate (RR = 0.88, 95% CI 0.74 to 1.04; P = 0.15), and total complications rate (RR = 1.10, 95% CI 0.67 to 1.79; P = 0.72) were similar, with no significant difference between the two groups. Results for overall survival, local recurrence rate, and distant metastasis rate were similar between MLND and MLNS in early stage NSCLC patients. There was no evidence that MLND increased complications compared with MLNS. Whether or not MLND is superior to MLNS for stage II-IIIA remains to be determined.
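Pooled ratios like those above come, in principle, from standard fixed-effect inverse-variance pooling on the log scale, with each study's standard error recovered from its confidence interval. This is a generic sketch of that method, not Review Manager's code, and the function name is hypothetical:

```python
import math

def pooled_ratio(ratios_and_cis, z=1.96):
    """Fixed-effect inverse-variance pooling of ratio estimates (RR or HR).
    Input: list of (estimate, ci_low, ci_high) on the ratio scale; the SE of
    log(ratio) is recovered from the CI width. Returns (pooled, low, high)."""
    num = den = 0.0
    for est, lo, hi in ratios_and_cis:
        se = (math.log(hi) - math.log(lo)) / (2 * z)  # CI spans +/- z*SE
        w = 1.0 / se ** 2                             # inverse-variance weight
        num += w * math.log(est)
        den += w
    pooled = num / den
    se_pooled = math.sqrt(1.0 / den)
    return tuple(math.exp(v) for v in
                 (pooled, pooled - z * se_pooled, pooled + z * se_pooled))
```

A handy sanity check is the degenerate single-study case, which must return (approximately) the study's own estimate; adding a second identical study leaves the estimate unchanged but narrows the interval.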

  14. Frequency analysis of a two-stage planetary gearbox using two different methodologies

    Science.gov (United States)

    Feki, Nabih; Karray, Maha; Khabou, Mohamed Tawfik; Chaari, Fakher; Haddar, Mohamed

    2017-12-01

    This paper is focused on the characterization of the frequency content of vibration signals issued from a two-stage planetary gearbox. To achieve this goal, two different methodologies are adopted: the lumped-parameter modeling approach and the phenomenological modeling approach. The two methodologies aim to describe the complex vibrations generated by a two-stage planetary gearbox. The phenomenological model describes directly the vibrations as measured by a sensor fixed outside the fixed ring gear with respect to an inertial reference frame, while results from a lumped-parameter model are referenced with respect to a rotating frame and then transferred into an inertial reference frame. Two different case studies of the two-stage planetary gear are adopted to describe the vibration and the corresponding spectra using both models. Each case presents a specific geometry and a specific spectral structure.
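As background for the spectral structure discussed above, the characteristic frequencies of a single planetary stage with a fixed ring gear follow from textbook kinematics; the tooth counts below are illustrative assumptions, not those of the gearbox studied in the paper:

```python
def carrier_frequency(z_sun, z_ring, f_sun):
    """Carrier rotation frequency for sun-gear input with a fixed ring:
    f_carrier = f_sun * Z_sun / (Z_sun + Z_ring)."""
    return f_sun * z_sun / (z_sun + z_ring)

def mesh_frequency(z_ring, f_carrier):
    """Gear-mesh frequency of the stage: f_mesh = Z_ring * f_carrier,
    since the fixed ring sees Z_ring tooth passages per carrier turn."""
    return z_ring * f_carrier

f_c = mesh = None
f_c = carrier_frequency(z_sun=20, z_ring=80, f_sun=10.0)  # -> 2.0 Hz
mesh = mesh_frequency(z_ring=80, f_carrier=f_c)           # -> 160.0 Hz
```

In a two-stage gearbox the same relations apply per stage, with the first stage's carrier speed driving the second stage's sun; sidebands around each mesh frequency then carry the modulation effects that both the lumped-parameter and phenomenological models aim to describe.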

  15. Does routine psychosocial screening improve referral to psychosocial care providers and patient-radiotherapist communication? A cluster randomized controlled trial.

    Science.gov (United States)

    Braeken, Anna P B M; Lechner, Lilian; Eekers, Daniëlle B P; Houben, Ruud M A; van Gils, Francis C J M; Ambergen, Ton; Kempen, Gertrudis I J M

    2013-11-01

    This study tests whether using a screening instrument improves referral to psychosocial care providers (e.g. psychologists) and facilitates patient-radiotherapist communication. A cluster randomized controlled trial was used. Fourteen radiotherapists were randomly allocated to the experimental or control group and 568 of their patients received care in accordance with the group to which their radiotherapist was allocated. Patients in the experimental group were asked to complete a screening instrument before and at the end of the radiation treatment period. All patients were requested to complete questionnaires concerning patient-physician communication after the first consultation and concerning psychosocial care 3 and 12 months post-intervention. Patients who completed the screening instrument were referred to social workers at an earlier stage than patients who did not; the effect on patient-radiotherapist communication was less clear. Our results suggest that a simple screening procedure can be valuable for the timely treatment of psychosocial problems in patients. Future efforts should be directed at appropriate timing of screening and enhancing physicians' awareness regarding the importance of identifying, discussing and treating psychosocial problems in cancer patients. Psychosocial screening can be enhanced by effective radiotherapist-patient communication. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  16. Distribution of age at menopause in two Danish samples

    DEFF Research Database (Denmark)

    Boldsen, J L; Jeune, B

    1990-01-01

    We analyzed the distribution of reported age at natural menopause in two random samples of Danish women (n = 176 and n = 150) to determine the shape of the distribution and to disclose any possible trends in the distribution parameters. It was necessary to correct the frequencies of the reported ages for the effect of differing ages at reporting. The corrected distribution of age at menopause differs from the normal distribution in the same way in both samples. Both distributions could be described by a mixture of two normal distributions. It appears that most of the parameters of the normal distribution mixtures remain unchanged over a 50-year time lag. The position of the distribution, that is, the mean age at menopause, however, increases slightly but significantly.
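A two-component normal mixture like the one used above to describe age at menopause can be fitted with a short expectation-maximization (EM) loop. The sketch below uses only the standard library; the initialization and iteration count are ad hoc assumptions, not the authors' estimation procedure:

```python
import math

def norm_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def fit_two_normal_mixture(xs, n_iter=200):
    """EM for a two-component normal mixture (sketch).
    Returns (weight1, mu1, sigma1, mu2, sigma2)."""
    xs = sorted(xs)
    n = len(xs)
    # crude initialisation: means at the quartiles, common broad spread
    w, mu1, mu2 = 0.5, xs[n // 4], xs[3 * n // 4]
    s1 = s2 = (xs[-1] - xs[0]) / 4 or 1.0
    for _ in range(n_iter):
        # E-step: responsibility of component 1 for each point
        r = [w * norm_pdf(x, mu1, s1) /
             (w * norm_pdf(x, mu1, s1) + (1 - w) * norm_pdf(x, mu2, s2))
             for x in xs]
        # M-step: update weight, means, and standard deviations
        n1 = sum(r); n2 = n - n1
        w = n1 / n
        mu1 = sum(ri * x for ri, x in zip(r, xs)) / n1
        mu2 = sum((1 - ri) * x for ri, x in zip(r, xs)) / n2
        s1 = math.sqrt(sum(ri * (x - mu1) ** 2 for ri, x in zip(r, xs)) / n1) or 1e-6
        s2 = math.sqrt(sum((1 - ri) * (x - mu2) ** 2 for ri, x in zip(r, xs)) / n2) or 1e-6
    return w, mu1, s1, mu2, s2
```

For well-separated components the loop recovers the means and weights; production code should add convergence checks and guard against degenerate variances.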

  17. Engineering practice variation through provider agreement: a cluster-randomized feasibility trial.

    Science.gov (United States)

    McCarren, Madeline; Twedt, Elaine L; Mansuri, Faizmohamed M; Nelson, Philip R; Peek, Brian T

    2014-01-01

    Minimal-risk randomized trials that can be embedded in practice could facilitate learning health-care systems. A cluster-randomized design was proposed to compare treatment strategies by assigning clusters (eg, providers) to "favor" a particular drug, with providers retaining autonomy for specific patients. Patient informed consent might be waived, broadening inclusion. However, it is not known if providers will adhere to the assignment or whether institutional review boards will waive consent. We evaluated the feasibility of this trial design. Agreeable providers were randomized to "favor" either hydrochlorothiazide or chlorthalidone when starting patients on thiazide-type therapy for hypertension. The assignment applied when the provider had already decided to start a thiazide, and providers could deviate from the strategy as needed. Prescriptions were aggregated to produce a provider strategy-adherence rate. All four institutional review boards waived documentation of patient consent. Providers (n=18) followed their assigned strategy for most of their new thiazide prescriptions (n=138 patients). In the "favor hydrochlorothiazide" group, there was 99% adherence to that strategy. In the "favor chlorthalidone" group, chlorthalidone comprised 77% of new thiazide starts, up from 1% in the pre-study period. When the assigned strategy was followed, dosing in the recommended range was 48% for hydrochlorothiazide (25-50 mg/day) and 100% for chlorthalidone (12.5-25.0 mg/day). Providers were motivated to participate by a desire to contribute to a comparative effectiveness study. A study promotional mug, provider information letter, and interactions with the site investigator were identified as most helpful in reminding providers of their study drug strategy. Providers prescribed according to an assigned drug-choice strategy most of the time for the purpose of a comparative effectiveness study. This simple design could facilitate research participation and behavior change.

  18. Hospital recruitment for a pragmatic cluster-randomized clinical trial: Lessons learned from the COMPASS study.

    Science.gov (United States)

    Johnson, Anna M; Jones, Sara B; Duncan, Pamela W; Bushnell, Cheryl D; Coleman, Sylvia W; Mettam, Laurie H; Kucharska-Newton, Anna M; Sissine, Mysha E; Rosamond, Wayne D

    2018-01-26

    Pragmatic randomized clinical trials are essential to determine the effectiveness of interventions in "real-world" clinical practice. These trials frequently use a cluster-randomized methodology, with randomization at the site level. Despite policymakers' increased interest in supporting pragmatic randomized clinical trials, no studies to date have reported on the unique recruitment challenges faced by cluster-randomized pragmatic trials. We investigated key challenges and successful strategies for hospital recruitment in the Comprehensive Post-Acute Stroke Services (COMPASS) study. The COMPASS study is designed to compare the effectiveness of the COMPASS model versus usual care in improving functional outcomes, reducing the numbers of hospital readmissions, and reducing caregiver strain for patients discharged home after stroke or transient ischemic attack. This model integrates early supported discharge planning with transitional care management, including nurse-led follow-up phone calls after 2, 30, and 60 days and an in-person clinic visit at 7-14 days involving a functional assessment and neurological examination. We present descriptive statistics of the characteristics of successfully recruited hospitals compared with all eligible hospitals, reasons for non-participation, and effective recruitment strategies. We successfully recruited 41 (43%) of 95 eligible North Carolina hospitals. Leading, non-exclusive reasons for non-participation included: insufficient staff or financial resources (n = 33, 61%), lack of health system support (n = 16, 30%), and lack of support of individual decision-makers (n = 11, 20%). Successful recruitment strategies included: building and nurturing relationships, engaging team members and community partners with a diverse skill mix, identifying gatekeepers, finding mutually beneficial solutions, having a central institutional review board, sharing published pilot data, and integrating contracts and review board

  19. Further observations on comparison of immunization coverage by lot quality assurance sampling and 30 cluster sampling.

    Science.gov (United States)

    Singh, J; Jain, D C; Sharma, R S; Verghese, T

    1996-06-01

    Lot Quality Assurance Sampling (LQAS) and standard EPI methodology (30 cluster sampling) were used to evaluate immunization coverage in a Primary Health Center (PHC) where coverage levels were reported to be more than 85%. Of 27 sub-centers (lots) evaluated by LQAS, only 2 were accepted for child coverage, whereas none was accepted for tetanus toxoid (TT) coverage in mothers. LQAS data were combined to obtain an estimate of coverage in the entire population; 41% (95% CI 36-46) of infants were immunized appropriately for their ages, while 42% (95% CI 37-47) of their mothers had received a second/booster dose of TT. TT coverage in 149 contemporary mothers sampled in the EPI survey was also 42% (95% CI 31-52). Although results by the two sampling methods were consistent with each other, a big gap was evident between reported coverage (in children as well as mothers) and survey results. LQAS was found to be operationally feasible, but it cost 40% more and required 2.5 times more time than the EPI survey. LQAS, therefore, is not a good substitute for current EPI methodology to evaluate immunization coverage in a large administrative area. However, LQAS has potential as a method to monitor health programs on a routine basis in small population sub-units, especially in areas with high and heterogeneously distributed immunization coverage.
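The LQAS decision logic is simple enough to state in code: each lot is accepted or rejected by comparing the count of unvaccinated children in a small sample against a pre-set decision value, and the design's operating characteristics follow from the binomial distribution. The sample size and decision value used below (n = 19, d = 3) are common textbook choices shown as illustrative assumptions, not necessarily the values used in this study:

```python
from math import comb

def lqas_accept(sample_size, decision_value, n_unvaccinated):
    """Accept the lot (e.g., a sub-center) if the number of unvaccinated
    children in the sample does not exceed the decision value."""
    return n_unvaccinated <= decision_value

def prob_accept(sample_size, decision_value, true_coverage):
    """Probability of accepting a lot with the given true coverage:
    P(X <= d) for X ~ Binomial(n, 1 - coverage)."""
    p_unvac = 1.0 - true_coverage
    return sum(comb(sample_size, k) * p_unvac ** k
               * (1 - p_unvac) ** (sample_size - k)
               for k in range(decision_value + 1))
```

`prob_accept` traces the operating-characteristic curve of the design: lots with high true coverage are almost always accepted, while low-coverage lots almost never are, which is what allowed 25 of the 27 sub-centers above to be flagged.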

  20. Correcting Classifiers for Sample Selection Bias in Two-Phase Case-Control Studies

    Science.gov (United States)

    Theis, Fabian J.

    2017-01-01

    Epidemiological studies often utilize stratified data in which rare outcomes or exposures are artificially enriched. This design can increase precision in association tests but distorts predictions when applying classifiers to nonstratified data. Several methods correct for this so-called sample selection bias, but their performance remains unclear, especially for machine learning classifiers. With an emphasis on two-phase case-control studies, we aim to assess which corrections to perform in which setting and to obtain methods suitable for machine learning techniques, especially the random forest. We propose two new resampling-based methods to resemble the original data and covariance structure: stochastic inverse-probability oversampling and parametric inverse-probability bagging. We compare all techniques for the random forest and other classifiers, both theoretically and on simulated and real data. Empirical results show that the random forest profits from only the parametric inverse-probability bagging proposed by us. For other classifiers, correction is mostly advantageous, and methods perform uniformly. We discuss the consequences of inappropriate distribution assumptions and the reasons for the different behaviors of the random forest and other classifiers. In conclusion, we provide guidance for choosing correction methods when training classifiers on biased samples. For random forests, our method outperforms state-of-the-art procedures if distribution assumptions are roughly fulfilled. We provide our implementation in the R package sambia. PMID:29312464
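The inverse-probability idea behind both proposed corrections can be sketched as weighted resampling: each stratified observation is drawn with weight proportional to the inverse of its sampling (inclusion) probability, so the resample approximates the unstratified population. This generic sketch is not the sambia implementation; the stochastic and parametric variants in the paper additionally perturb or model the covariates:

```python
import random

def ip_resample(data, incl_prob, n_out=None, rng=random):
    """Inverse-probability resampling (generic sketch): draw rows with
    replacement, weighted by 1/p_i, where p_i is the probability that
    row i was included in the stratified sample."""
    weights = [1.0 / incl_prob(row) for row in data]
    n_out = n_out or len(data)
    return rng.choices(data, weights=weights, k=n_out)
```

Feeding many such resamples to an ensemble of trees gives the flavor of inverse-probability bagging: the class balance of each resample reflects the source population rather than the enriched case-control sample.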

  1. Prevalence and correlates of problematic smartphone use in a large random sample of Chinese undergraduates.

    Science.gov (United States)

    Long, Jiang; Liu, Tie-Qiao; Liao, Yan-Hui; Qi, Chang; He, Hao-Yu; Chen, Shu-Bao; Billieux, Joël

    2016-11-17

    Smartphones are becoming a daily necessity for most undergraduates in Mainland China. Because the present scenario of problematic smartphone use (PSU) is largely unexplored, in the current study we aimed to estimate the prevalence of PSU and to screen suitable predictors for PSU among Chinese undergraduates in the framework of the stress-coping theory. A sample of 1062 undergraduate smartphone users was recruited by means of the stratified cluster random sampling strategy between April and May 2015. The Problematic Cellular Phone Use Questionnaire was used to identify PSU. We evaluated five candidate risk factors for PSU by using logistic regression analysis while controlling for demographic characteristics and specific features of smartphone use. The prevalence of PSU among Chinese undergraduates was estimated to be 21.3%. The risk factors for PSU were majoring in the humanities, high monthly income from the family (≥1500 RMB), serious emotional symptoms, high perceived stress, and perfectionism-related factors (high doubts about actions, high parental expectations). PSU among undergraduates appears to be ubiquitous and thus constitutes a public health issue in Mainland China. Although further longitudinal studies are required to test whether PSU is a transient phenomenon or a chronic and progressive condition, our study successfully identified socio-demographic and psychological risk factors for PSU. These results, obtained from a random and thus representative sample of undergraduates, open up new avenues in terms of prevention and regulation policies.
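
    A stratified cluster random sampling frame of the kind used here can be sketched as follows. The strata, cluster names, and cluster counts below are hypothetical; the key feature is that whole clusters (e.g. intact classes) are selected at random within each stratum, and every member of a selected cluster enters the sample.

```python
import random

def stratified_cluster_sample(strata, clusters_per_stratum, rng):
    """From each stratum, randomly select whole clusters; every member of
    a selected cluster enters the sample (no within-cluster subsampling)."""
    sample = []
    for stratum_name, clusters in strata.items():
        chosen = rng.sample(list(clusters), clusters_per_stratum)
        for cluster_name in chosen:
            sample.extend(clusters[cluster_name])
    return sample

rng = random.Random(42)
# Hypothetical frame: strata are faculties, clusters are intact classes.
strata = {
    "humanities": {"class_a": ["s1", "s2", "s3"], "class_b": ["s4", "s5"],
                   "class_c": ["s6"]},
    "science":    {"class_d": ["s7", "s8"], "class_e": ["s9", "s10", "s11"],
                   "class_f": ["s12"]},
}
participants = stratified_cluster_sample(strata, clusters_per_stratum=2, rng=rng)
print(len(participants))
```

    Because whole clusters are taken, observations within a cluster are correlated, which analyses of such samples should account for (e.g. via design effects or cluster-robust errors).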

  2. Planck early results. VIII. The all-sky early Sunyaev-Zeldovich cluster sample

    DEFF Research Database (Denmark)

    Poutanen, T.; Natoli, P.; Polenta, G.

    2011-01-01

    We present the first all-sky sample of galaxy clusters detected blindly by the Planck satellite through the Sunyaev-Zeldovich (SZ) effect from its six highest frequencies. This early SZ (ESZ) sample comprises 189 candidates with high signal-to-noise ratios ranging from 6 to 29. Its ...

  3. Two-year impact of community-based health screening and parenting groups on child development in Zambia: Follow-up to a cluster-randomized controlled trial.

    Science.gov (United States)

    Rockers, Peter C; Zanolini, Arianna; Banda, Bowen; Chipili, Mwaba Moono; Hughes, Robert C; Hamer, Davidson H; Fink, Günther

    2018-04-01

    Early childhood interventions have potential to offset the negative impact of early adversity. We evaluated the impact of a community-based parenting group intervention on child development in Zambia. We conducted a non-masked cluster-randomized controlled trial in Southern Province, Zambia. Thirty clusters of villages were matched based on population density and distance from the nearest health center, and randomly assigned to intervention (15 clusters, 268 caregiver-child dyads) or control (15 clusters, 258 caregiver-child dyads). Caregivers were eligible if they had a child 6 to 12 months old at baseline. In intervention clusters, caregivers were visited twice per month during the first year of the study by child development agents (CDAs) and were invited to attend fortnightly parenting group meetings. Parenting groups selected "head mothers" from their communities who were trained by CDAs to facilitate meetings and deliver a diverse parenting curriculum. The parenting group intervention, originally designed to run for 1 year, was extended, and households were visited for a follow-up assessment at the end of year 2. The control group did not receive any intervention. Intention-to-treat analysis was performed for primary outcomes measured at the year 2 follow-up: stunting and 5 domains of neurocognitive development measured using the Bayley Scales of Infant and Toddler Development-Third Edition (BSID-III). In order to show Cohen's d estimates, BSID-III composite scores were converted to z-scores by standardizing within the study population. In all, 195/268 children (73%) in the intervention group and 182/258 children (71%) in the control group were assessed at endline after 2 years. The intervention significantly reduced stunting (56/195 versus 72/182; adjusted odds ratio 0.45, 95% CI 0.22 to 0.92; p = 0.028) and had a significant positive impact on language (β 0.14, 95% CI 0.01 to 0.27; p = 0.039). The intervention did not significantly impact cognition (β 0

  4. Two-year impact of community-based health screening and parenting groups on child development in Zambia: Follow-up to a cluster-randomized controlled trial.

    Directory of Open Access Journals (Sweden)

    Peter C Rockers

    2018-04-01

    Full Text Available Early childhood interventions have potential to offset the negative impact of early adversity. We evaluated the impact of a community-based parenting group intervention on child development in Zambia. We conducted a non-masked cluster-randomized controlled trial in Southern Province, Zambia. Thirty clusters of villages were matched based on population density and distance from the nearest health center, and randomly assigned to intervention (15 clusters, 268 caregiver-child dyads) or control (15 clusters, 258 caregiver-child dyads). Caregivers were eligible if they had a child 6 to 12 months old at baseline. In intervention clusters, caregivers were visited twice per month during the first year of the study by child development agents (CDAs) and were invited to attend fortnightly parenting group meetings. Parenting groups selected "head mothers" from their communities who were trained by CDAs to facilitate meetings and deliver a diverse parenting curriculum. The parenting group intervention, originally designed to run for 1 year, was extended, and households were visited for a follow-up assessment at the end of year 2. The control group did not receive any intervention. Intention-to-treat analysis was performed for primary outcomes measured at the year 2 follow-up: stunting and 5 domains of neurocognitive development measured using the Bayley Scales of Infant and Toddler Development-Third Edition (BSID-III). In order to show Cohen's d estimates, BSID-III composite scores were converted to z-scores by standardizing within the study population. In all, 195/268 children (73%) in the intervention group and 182/258 children (71%) in the control group were assessed at endline after 2 years. The intervention significantly reduced stunting (56/195 versus 72/182; adjusted odds ratio 0.45, 95% CI 0.22 to 0.92; p = 0.028) and had a significant positive impact on language (β 0.14, 95% CI 0.01 to 0.27; p = 0.039). The intervention did not significantly impact

  5. Using Cluster Bootstrapping to Analyze Nested Data with a Few Clusters

    Science.gov (United States)

    Huang, Francis L.

    2018-01-01

    Cluster randomized trials involving participants nested within intact treatment and control groups are commonly performed in various educational, psychological, and biomedical studies. However, recruiting and retaining intact groups present various practical, financial, and logistical challenges to evaluators and often, cluster randomized trials…
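
    The cluster bootstrap itself is straightforward to sketch: resample whole clusters with replacement, keep each cluster's observations together, recompute the statistic, and use the spread of the replicates as its standard error. A minimal illustration with hypothetical classroom data:

```python
import random
import statistics

def cluster_bootstrap_se(clusters, stat, n_boot, rng):
    """Nonparametric cluster bootstrap: resample whole clusters with
    replacement (keeping each cluster's observations intact), recompute
    the statistic on the pooled data, and return the standard deviation
    of the bootstrap replicates as its standard error."""
    reps = []
    for _ in range(n_boot):
        resampled = [rng.choice(clusters) for _ in clusters]
        pooled = [x for cluster in resampled for x in cluster]
        reps.append(stat(pooled))
    return statistics.stdev(reps)

rng = random.Random(0)
# Hypothetical data: six intact classrooms with within-cluster correlation.
clusters = [[5, 6, 5], [8, 9, 8], [4, 4, 5], [7, 7, 6], [9, 8, 9], [5, 5, 4]]
se = cluster_bootstrap_se(clusters, statistics.mean, n_boot=2000, rng=rng)
print(round(se, 3))
```

    With strongly clustered data like this, the cluster-bootstrap standard error is noticeably larger than the naive standard error that treats the 18 observations as independent.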

  6. SOME SYSTEMATIC SAMPLING STRATEGIES USING MULTIPLE RANDOM STARTS

    OpenAIRE

    Sampath Sundaram; Ammani Sivaraman

    2010-01-01

    In this paper an attempt is made to extend linear systematic sampling using multiple random starts due to Gautschi (1957) to various types of systematic sampling schemes available in the literature, namely (i) Balanced Systematic Sampling (BSS) of Sethi (1965) and (ii) Modified Systematic Sampling (MSS) of Singh, Jindal, and Garg (1968). Further, the proposed methods were compared with the Yates corrected estimator developed with reference to Gautschi's Linear systematic samplin...
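
    Linear systematic sampling with multiple random starts can be sketched as below. This is a generic version of the Gautschi-type scheme (not the BSS or MSS variants discussed in the paper), assuming the population size is a multiple of the sample size and the sample size a multiple of the number of starts:

```python
import random

def multi_start_systematic_sample(N, n, m, rng):
    """Linear systematic sampling with m random starts: each start
    contributes n/m units taken at a fixed interval k = N*m/n, giving
    m interleaved systematic series."""
    assert n % m == 0 and N % n == 0, "sketch assumes N = n*k, n divisible by m"
    per_start = n // m
    k = N * m // n                      # sampling interval for each series
    starts = rng.sample(range(k), m)    # m distinct random starts
    return sorted(s + j * k for s in starts for j in range(per_start))

rng = random.Random(7)
sample = multi_start_systematic_sample(N=100, n=10, m=2, rng=rng)
print(sample)   # 10 unit indices in 0..99, two interleaved series
```

    With a single start (m = 1) this reduces to ordinary linear systematic sampling; multiple starts make unbiased variance estimation possible, which is the motivation behind Gautschi's scheme.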

  7. Two phase sampling

    CERN Document Server

    Ahmad, Zahoor; Hanif, Muhammad

    2013-01-01

    The development of estimators of population parameters based on two-phase sampling schemes has seen a dramatic increase in the past decade. Various authors have developed estimators of population parameters using either one or two auxiliary variables. The present volume is a comprehensive collection of estimators available in single and two phase sampling. The book covers estimators which utilize information on single, two and multiple auxiliary variables of both quantitative and qualitative nature. Th...

  8. Statistical inference for extended or shortened phase II studies based on Simon's two-stage designs.

    Science.gov (United States)

    Zhao, Junjun; Yu, Menggang; Feng, Xi-Ping

    2015-06-07

    Simon's two-stage designs are popular choices for conducting phase II clinical trials, especially in oncology, to reduce the number of patients placed on ineffective experimental therapies. Recently, Koyama and Chen (2008) discussed how to conduct proper inference for such studies, because they found that inference procedures used with Simon's designs almost always ignore the actual sampling plan used. In particular, they proposed an inference method for studies in which the actual second-stage sample sizes differ from the planned ones. We consider an alternative inference method based on the likelihood ratio. In particular, we order permissible sample paths under Simon's two-stage designs using their corresponding conditional likelihoods. In this way, we can calculate p-values using the common definition: the probability of obtaining a test statistic value at least as extreme as that observed under the null hypothesis. In addition to providing inference for a couple of scenarios where Koyama and Chen's method can be difficult to apply, the estimate based on our method appears to have certain advantages in terms of inference properties in many numerical simulations. It generally led to smaller biases and narrower confidence intervals while maintaining similar coverage. We also illustrate the two methods in a real data setting. Inference procedures used with Simon's designs almost always ignore the actual sampling plan. Reported p-values, point estimates and confidence intervals for the response rate are not usually adjusted for the design's adaptiveness. Proper statistical inference procedures should be used.
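
    For context, the conventional exact p-value under Simon's design (the quantity the design-aware procedures start from, not the likelihood-ratio ordering proposed in the abstract) sums, over the stage-1 outcomes that allow continuation, the probability of reaching at least the observed total. A sketch with a hypothetical design:

```python
from math import comb

def binom_pmf(k, n, p):
    return comb(n, k) * p**k * (1 - p) ** (n - k)

def binom_sf(k, n, p):
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(binom_pmf(i, n, p) for i in range(k, n + 1))

def simon_stage2_pvalue(t, n1, r1, n2, p0):
    """Exact p-value for a trial that passed stage 1 (more than r1 of n1
    responses) and ended with t total responses: the probability, under
    the null rate p0, of continuing and reaching a total of at least t."""
    return sum(
        binom_pmf(x1, n1, p0) * binom_sf(max(t - x1, 0), n2, p0)
        for x1 in range(r1 + 1, n1 + 1)
    )

# Hypothetical design: 13 patients in stage 1, continue if >3 respond,
# 30 more patients in stage 2, null response rate 20%.
print(simon_stage2_pvalue(15, n1=13, r1=3, n2=30, p0=0.2))
```

    Note how the sum runs only over stage-1 paths that continue: ignoring this two-stage structure and using a single binomial test on 43 patients is exactly the mistake the abstract warns against.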

  9. Two Tier Cluster Based Data Aggregation (TTCDA) in Wireless Sensor Network

    DEFF Research Database (Denmark)

    Dnyaneshwar, Mantri; Prasad, Neeli R.; Prasad, Ramjee

    2012-01-01

    Wireless Sensor Networks (WSNs) are often used for monitoring and control applications, where sensor nodes collect data and send them to the sink. Most of the nodes consume their energy in transmitting data packets without aggregation to the sink, which may be located at a single- or multi-hop distance. The direct transmission of data packets to the sink from nodes in the network increases communication costs in terms of energy, average delay and network lifetime. In this context, data aggregation techniques minimize the communication cost with efficient bandwidth utilization by decreasing the packet count reaching the sink. Here, we propose the Two Tier Cluster based Data Aggregation (TTCDA) algorithm for randomly distributed nodes to minimize computation and communication cost. The TTCDA is energy and bandwidth efficient since it reduces the number of packets transmitted...

  10. Are most samples of animals systematically biased? Consistent individual trait differences bias samples despite random sampling.

    Science.gov (United States)

    Biro, Peter A

    2013-02-01

    Sampling animals from the wild for study is something nearly every biologist has done, but despite our best efforts to obtain random samples of animals, 'hidden' trait biases may still exist. For example, consistent behavioral traits can affect trappability/catchability, independent of obvious factors such as size and gender, and these traits are often correlated with other repeatable physiological and/or life history traits. If so, systematic sampling bias may exist for any of these traits. The extent to which this is a problem, of course, depends on the magnitude of bias, which is presently unknown because the underlying trait distributions in populations are usually unknown, or unknowable. Indeed, our present knowledge about sampling bias comes from samples (not complete population censuses), which can possess bias to begin with. I had the unique opportunity to create naturalized populations of fish by seeding each of four small fishless lakes with equal densities of slow-, intermediate-, and fast-growing fish. Using sampling methods that are not size-selective, I observed that fast-growing fish were up to two times as likely to be sampled as slower-growing fish. This indicates substantial and systematic bias with respect to an important life history trait (growth rate). If correlations between behavioral, physiological and life-history traits are as widespread as the literature suggests, then many animal samples may be systematically biased with respect to these traits (e.g., when collecting animals for laboratory use), and affect our inferences about population structure and abundance. I conclude with a discussion on ways to minimize sampling bias for particular physiological/behavioral/life-history types within animal populations.
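
    The core argument (that trait-correlated catchability biases "random" samples) is easy to reproduce in a toy simulation; the capture probabilities below are hypothetical but mirror the reported two-fold difference between fast and slow growers:

```python
import random

rng = random.Random(1)

# Hypothetical population: equal numbers of slow, intermediate and fast
# growers, with capture probability rising with growth rate (fast fish
# twice as catchable as slow fish, as in the field result).
population = ([("slow", 0.10)] * 1000
              + [("intermediate", 0.15)] * 1000
              + [("fast", 0.20)] * 1000)

caught = [label for label, p_catch in population if rng.random() < p_catch]
fast_share = caught.count("fast") / len(caught)
print("true fast share: 1/3, sampled fast share:", round(fast_share, 3))
```

    Even though every individual's capture is an independent random event, the sample over-represents fast growers (expected share roughly 0.20/0.45 ≈ 0.44 instead of 1/3), which is precisely the hidden bias the abstract describes.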

  11. Two-Stage Fuzzy Portfolio Selection Problem with Transaction Costs

    Directory of Open Access Journals (Sweden)

    Yanju Chen

    2015-01-01

    Full Text Available This paper studies a two-period portfolio selection problem. The problem is formulated as a two-stage fuzzy portfolio selection model with transaction costs, in which the future returns of risky security are characterized by possibility distributions. The objective of the proposed model is to achieve the maximum utility in terms of the expected value and variance of the final wealth. Given the first-stage decision vector and a realization of fuzzy return, the optimal value expression of the second-stage programming problem is derived. As a result, the proposed two-stage model is equivalent to a single-stage model, and the analytical optimal solution of the two-stage model is obtained, which helps us to discuss the properties of the optimal solution. Finally, some numerical experiments are performed to demonstrate the new modeling idea and the effectiveness. The computational results provided by the proposed model show that the more risk-averse investor will invest more wealth in the risk-free security. They also show that the optimal invested amount in risky security increases as the risk-free return decreases and the optimal utility increases as the risk-free return increases, whereas the optimal utility increases as the transaction costs decrease. In most instances the utilities provided by the proposed two-stage model are larger than those provided by the single-stage model.

  12. Wide-bandwidth bilateral control using two-stage actuator system

    International Nuclear Information System (INIS)

    Kokuryu, Saori; Izutsu, Masaki; Kamamichi, Norihiro; Ishikawa, Jun

    2015-01-01

    This paper proposes a two-stage actuator system that consists of a coarse actuator driven by a ball screw with an AC motor (the first stage) and a fine actuator driven by a voice coil motor (the second stage). The proposed two-stage actuator system is applied to build a wide-bandwidth bilateral control system without needing expensive high-performance actuators. In the proposed system, the first stage has a wide moving range with a narrow control bandwidth, and the second stage has a narrow moving range with a wide control bandwidth. By consolidating these two inexpensive actuators with different control bandwidths in a complementary manner, a wide-bandwidth bilateral control system can be constructed based on mechanical impedance control. To show the validity of the proposed method, a prototype of the two-stage actuator system was developed and its basic performance was evaluated by experiment. The experimental results showed that a light mechanical impedance, with a mass of 10 g and a damping coefficient of 2.5 N/(m/s), which is an important factor in establishing good transparency in bilateral control, was successfully achieved, and also that better force and position responses between master and slave are achieved by using the proposed two-stage actuator system compared with a narrow-bandwidth case using a single ball-screw system. (author)
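
    The complementary frequency separation underlying such two-stage actuation can be sketched with a first-order filter: the low-passed command drives the coarse stage and the residual drives the fine stage, so the two contributions always sum to the original command. This is a conceptual sketch, not the authors' controller; the time constant and signals are assumptions.

```python
import math

def split_command(command, dt, tau):
    """Split a sampled position command between the two stages: a
    first-order low-pass (time constant tau) feeds the coarse stage and
    the residual feeds the fine stage, so coarse + fine reproduces the
    command exactly at every sample."""
    alpha = dt / (tau + dt)
    coarse, fine = [], []
    state = command[0]
    for c in command:
        state += alpha * (c - state)   # discrete first-order low-pass
        coarse.append(state)
        fine.append(c - state)         # high-frequency residual
    return coarse, fine

# 1 Hz motion with a small 50 Hz tremor: the tremor (plus some residual
# low-frequency content) ends up in the fine-stage command.
dt = 0.001
t = [i * dt for i in range(2000)]
cmd = [math.sin(2 * math.pi * ti) + 0.05 * math.sin(2 * math.pi * 50 * ti)
       for ti in t]
coarse, fine = split_command(cmd, dt, tau=0.05)
print(max(abs(f) for f in fine))
```

    Because the split is exact by construction, the fine stage only ever needs the small, fast part of the motion, which is what lets a short-stroke voice coil complement a long-stroke ball screw.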

  13. HUBBLE SPACE TELESCOPE PROPER MOTION (HSTPROMO) CATALOGS OF GALACTIC GLOBULAR CLUSTERS. I. SAMPLE SELECTION, DATA REDUCTION, AND NGC 7078 RESULTS

    Energy Technology Data Exchange (ETDEWEB)

    Bellini, A.; Anderson, J.; Van der Marel, R. P.; Watkins, L. L. [Space Telescope Science Institute, 3700 San Martin Drive, Baltimore, MD 21218 (United States); King, I. R. [Department of Astronomy, University of Washington, Box 351580, Seattle, WA 98195 (United States); Bianchini, P. [Max Planck Institute for Astronomy, Königstuhl 17, D-69117 Heidelberg (Germany); Chanamé, J. [Instituto de Astrofísica, Pontificia Universidad Católica de Chile, Av. Vicuña Mackenna 4860, Macul 782-0436, Santiago (Chile); Chandar, R. [Department of Physics and Astronomy, The University of Toledo, 2801 West Bancroft Street, Toledo, OH 43606 (United States); Cool, A. M. [Department of Physics and Astronomy, San Francisco State University, 1600 Holloway Avenue, San Francisco, CA 94132 (United States); Ferraro, F. R.; Massari, D. [Dipartimento di Fisica e Astronomia, Università di Bologna, via Ranzani 1, I-40127 Bologna (Italy); Ford, H., E-mail: bellini@stsci.edu [Department of Physics and Astronomy, The Johns Hopkins University, 3400 North Charles Street, Baltimore, MD 21218 (United States)

    2014-12-20

    We present the first study of high-precision internal proper motions (PMs) in a large sample of globular clusters, based on Hubble Space Telescope (HST) data obtained over the past decade with the ACS/WFC, ACS/HRC, and WFC3/UVIS instruments. We determine PMs for over 1.3 million stars in the central regions of 22 clusters, with a median number of ∼60,000 stars per cluster. These PMs have the potential to significantly advance our understanding of the internal kinematics of globular clusters by extending past line-of-sight (LOS) velocity measurements to two- or three-dimensional velocities, lower stellar masses, and larger sample sizes. We describe the reduction pipeline that we developed to derive homogeneous PMs from the very heterogeneous archival data. We demonstrate the quality of the measurements through extensive Monte Carlo simulations. We also discuss the PM errors introduced by various systematic effects and the techniques that we have developed to correct or remove them to the extent possible. We provide in electronic form the catalog for NGC 7078 (M 15), which consists of 77,837 stars in the central 2.'4. We validate the catalog by comparison with existing PM measurements and LOS velocities and use it to study the dependence of the velocity dispersion on radius, stellar magnitude (or mass) along the main sequence, and direction in the plane of the sky (radial or tangential). Subsequent papers in this series will explore a range of applications in globular-cluster science and will also present the PM catalogs for the other sample clusters.

  14. COSMOLOGICAL CONSTRAINTS FROM GALAXY CLUSTERING AND THE MASS-TO-NUMBER RATIO OF GALAXY CLUSTERS

    International Nuclear Information System (INIS)

    Tinker, Jeremy L.; Blanton, Michael R.; Sheldon, Erin S.; Wechsler, Risa H.; Becker, Matthew R.; Rozo, Eduardo; Zu, Ying; Weinberg, David H.; Zehavi, Idit; Busha, Michael T.; Koester, Benjamin P.

    2012-01-01

    We place constraints on the average density (Ω_m) and clustering amplitude (σ_8) of matter using a combination of two measurements from the Sloan Digital Sky Survey: the galaxy two-point correlation function, w_p(r_p), and the mass-to-galaxy-number ratio within galaxy clusters, M/N, analogous to cluster M/L ratios. Our w_p(r_p) measurements are obtained from DR7, while the sample of clusters is the maxBCG sample, with cluster masses derived from weak gravitational lensing. We construct nonlinear galaxy bias models using the Halo Occupation Distribution (HOD) to fit both w_p(r_p) and M/N for different cosmological parameters. HOD models that match the same two-point clustering predict different numbers of galaxies in massive halos when Ω_m or σ_8 is varied, thereby breaking the degeneracy between cosmology and bias. We demonstrate that this technique yields constraints that are consistent and competitive with current results from cluster abundance studies, without the use of abundance information. Using w_p(r_p) and M/N alone, we find Ω_m^0.5 σ_8 = 0.465 ± 0.026, with individual constraints of Ω_m = 0.29 ± 0.03 and σ_8 = 0.85 ± 0.06. Combined with current cosmic microwave background data, these constraints are Ω_m = 0.290 ± 0.016 and σ_8 = 0.826 ± 0.020. All errors are 1σ. The systematic uncertainties to which the M/N technique is most sensitive are the amplitude of the bias function of dark matter halos and the possibility of redshift evolution between the SDSS Main sample and the maxBCG cluster sample. Our derived constraints are insensitive to the current level of uncertainties in the halo mass function and in the mass-richness relation of clusters and its scatter, making the M/N technique complementary to cluster abundances as a method for constraining cosmology with future galaxy surveys.

  15. COSMOLOGICAL CONSTRAINTS FROM GALAXY CLUSTERING AND THE MASS-TO-NUMBER RATIO OF GALAXY CLUSTERS

    Energy Technology Data Exchange (ETDEWEB)

    Tinker, Jeremy L.; Blanton, Michael R. [Center for Cosmology and Particle Physics, Department of Physics, New York University, New York, NY 10013 (United States); Sheldon, Erin S. [Brookhaven National Laboratory, Upton, NY 11973 (United States); Wechsler, Risa H. [Kavli Institute for Particle Astrophysics and Cosmology, Physics Department, and SLAC National Accelerator Laboratory, Stanford University, Stanford, CA 94305 (United States); Becker, Matthew R.; Rozo, Eduardo [Kavli Institute for Cosmological Physics, University of Chicago, Chicago, IL 60637 (United States); Zu, Ying; Weinberg, David H. [Department of Astronomy, Ohio State University, Columbus, OH 43210 (United States); Zehavi, Idit [Department of Astronomy and CERCA, Case Western Reserve University, Cleveland, OH 44106 (United States); Busha, Michael T. [Institute for Theoretical Physics, Department of Physics, University of Zurich, CH-8057 Zurich (Switzerland); Koester, Benjamin P. [Department of Astronomy and Astrophysics, University of Chicago, Chicago, IL 6037 (United States)

    2012-01-20

    We place constraints on the average density (Ω_m) and clustering amplitude (σ_8) of matter using a combination of two measurements from the Sloan Digital Sky Survey: the galaxy two-point correlation function, w_p(r_p), and the mass-to-galaxy-number ratio within galaxy clusters, M/N, analogous to cluster M/L ratios. Our w_p(r_p) measurements are obtained from DR7, while the sample of clusters is the maxBCG sample, with cluster masses derived from weak gravitational lensing. We construct nonlinear galaxy bias models using the Halo Occupation Distribution (HOD) to fit both w_p(r_p) and M/N for different cosmological parameters. HOD models that match the same two-point clustering predict different numbers of galaxies in massive halos when Ω_m or σ_8 is varied, thereby breaking the degeneracy between cosmology and bias. We demonstrate that this technique yields constraints that are consistent and competitive with current results from cluster abundance studies, without the use of abundance information. Using w_p(r_p) and M/N alone, we find Ω_m^0.5 σ_8 = 0.465 ± 0.026, with individual constraints of Ω_m = 0.29 ± 0.03 and σ_8 = 0.85 ± 0.06. Combined with current cosmic microwave background data, these constraints are Ω_m = 0.290 ± 0.016 and σ_8 = 0.826 ± 0.020. All errors are 1σ. The systematic uncertainties to which the M/N technique is most sensitive are the amplitude of the bias function of dark matter halos and the possibility of redshift evolution between the SDSS Main sample and the maxBCG cluster sample. Our derived constraints are insensitive to the current level of uncertainties in the halo mass function and in the mass-richness relation of clusters and its scatter, making the M/N technique complementary to cluster abundances as a method for constraining cosmology with future galaxy

  16. Two-stage precipitation of neptunium (IV) oxalate

    International Nuclear Information System (INIS)

    Luerkens, D.W.

    1983-07-01

    Neptunium (IV) oxalate was precipitated using a two-stage precipitation system. A series of precipitation experiments was used to identify the significant process variables affecting precipitate characteristics. Process variables tested were input concentrations, solubility conditions in the first stage precipitator, precipitation temperatures, and residence time in the first stage precipitator. A procedure has been demonstrated that produces neptunium (IV) oxalate particles that filter well and readily calcine to the oxide

  17. A Cluster Randomized Controlled Trial Testing the Effectiveness of Houvast: A Strengths-Based Intervention for Homeless Young Adults

    Science.gov (United States)

    Krabbenborg, Manon A. M.; Boersma, Sandra N.; van der Veld, William M.; van Hulst, Bente; Vollebergh, Wilma A. M.; Wolf, Judith R. L. M.

    2017-01-01

    Objective: To test the effectiveness of Houvast: a strengths-based intervention for homeless young adults. Method: A cluster randomized controlled trial was conducted with 10 Dutch shelter facilities randomly allocated to an intervention and a control group. Homeless young adults were interviewed when entering the facility and when care ended.…

  18. Annealing and cluster formation of defects in a cascade

    International Nuclear Information System (INIS)

    Martynenko, Yu.V.

    1975-01-01

    The behaviour of radiation defects after a dynamic cascade of atomic collisions caused by irradiation with neutrons or accelerated heavy ions is theoretically investigated. In the investigation, apart from the processes of recombination and cluster formation of vacancies and interstitial atoms, the diffusive ''spreading'' of point defects from the initial region is taken into account. Since interstitial atoms are more mobile, all the processes are divided into two stages: at the first stage only interstitial atoms diffuse and vacancies are stationary; at the second stage vacancies are mobile, and interstitial atoms are either ''spread'' over the whole volume or are united into stable clusters. The number of defects and clusters is calculated as a function of cascade energy, atomic number of the material, and temperature.

  19. Additive non-uniform random sampling in superimposed fiber Bragg grating strain gauge

    Science.gov (United States)

    Ma, Y. C.; Liu, H. Y.; Yan, S. B.; Yang, Y. H.; Yang, M. W.; Li, J. M.; Tang, J.

    2013-05-01

    This paper demonstrates an additive non-uniform random sampling and interrogation method for dynamic and/or static strain gauge using a reflection spectrum from two superimposed fiber Bragg gratings (FBGs). The superimposed FBGs are designed to generate non-equidistant space of a sensing pulse train in the time domain during dynamic strain gauge. By combining centroid finding with smooth filtering methods, both the interrogation speed and accuracy are improved. A 1.9 kHz dynamic strain is measured by generating an additive non-uniform randomly distributed 2 kHz optical sensing pulse train from a mean 500 Hz triangular periodically changing scanning frequency.
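
    The centroid-with-smoothing step for locating a Bragg peak can be sketched as follows. This is a generic illustration on a synthetic spectrum, not the authors' interrogation system; the wavelengths, peak width, and noise level are assumptions.

```python
import numpy as np

def smoothed_centroid(wavelengths, intensities, window=5):
    """Smooth the sampled spectrum with a moving average, then take the
    intensity-weighted centroid as the Bragg wavelength estimate."""
    kernel = np.ones(window) / window
    smooth = np.convolve(intensities, kernel, mode="same")
    return np.sum(wavelengths * smooth) / np.sum(smooth)

# Synthetic noisy FBG reflection peak centred at 1550.0 nm.
rng = np.random.default_rng(0)
wl = np.linspace(1549.0, 1551.0, 401)
spectrum = np.exp(-((wl - 1550.0) / 0.05) ** 2) + rng.normal(0, 0.01, wl.size)
print(round(smoothed_centroid(wl, spectrum), 3))
```

    Smoothing before the centroid suppresses detector noise that would otherwise pull the weighted mean off the peak, which is the accuracy gain the abstract attributes to combining centroid finding with filtering.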

  20. Cluster-cluster correlations and constraints on the correlation hierarchy

    Science.gov (United States)

    Hamilton, A. J. S.; Gott, J. R., III

    1988-01-01

    The hypothesis that galaxies cluster around clusters at least as strongly as they cluster around galaxies imposes constraints on the hierarchy of correlation amplitudes in hierarchical clustering models. The distributions which saturate these constraints are the Rayleigh-Levy random walk fractals proposed by Mandelbrot; for these fractal distributions cluster-cluster correlations are all identically equal to galaxy-galaxy correlations. If correlation amplitudes exceed the constraints, as is observed, then cluster-cluster correlations must exceed galaxy-galaxy correlations, as is observed.
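
    The Rayleigh-Levy random walk mentioned above is simple to generate: isotropic step directions with step lengths drawn from the power-law tail P(length > l) = (l/l0)^(-D). A two-dimensional sketch (the exponent and step scale are illustrative choices):

```python
import math
import random

def rayleigh_levy_walk(n_points, D=1.2, l0=1.0, rng=None):
    """Points placed along a Levy flight in the plane: isotropic step
    directions, step lengths drawn from P(length > l) = (l/l0)**(-D)
    for l >= l0 via inverse-CDF sampling."""
    rng = rng or random.Random()
    x, y = 0.0, 0.0
    points = [(x, y)]
    for _ in range(n_points - 1):
        u = 1.0 - rng.random()               # uniform in (0, 1]
        length = l0 * u ** (-1.0 / D)        # power-law step length
        theta = rng.uniform(0.0, 2.0 * math.pi)
        x += length * math.cos(theta)
        y += length * math.sin(theta)
        points.append((x, y))
    return points

pts = rayleigh_levy_walk(1000, D=1.2, rng=random.Random(3))
print(len(pts))
```

    The occasional very long steps separate dense clumps of points, producing the fractal clustering for which, as the abstract notes, cluster-cluster and galaxy-galaxy correlations coincide.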

  1. Effect of ammoniacal nitrogen on one-stage and two-stage anaerobic digestion of food waste

    Energy Technology Data Exchange (ETDEWEB)

    Ariunbaatar, Javkhlan, E-mail: jaka@unicas.it [Department of Civil and Mechanical Engineering, University of Cassino and Southern Lazio, Via Di Biasio 43, 03043 Cassino, FR (Italy); UNESCO-IHE Institute for Water Education, Westvest 7, 2611 AX Delft (Netherlands); Scotto Di Perta, Ester [Department of Civil, Architectural and Environmental Engineering, University of Naples Federico II, Via Claudio 21, 80125 Naples (Italy); Panico, Antonio [Telematic University PEGASO, Piazza Trieste e Trento, 48, 80132 Naples (Italy); Frunzo, Luigi [Department of Mathematics and Applications Renato Caccioppoli, University of Naples Federico II, Via Claudio, 21, 80125 Naples (Italy); Esposito, Giovanni [Department of Civil and Mechanical Engineering, University of Cassino and Southern Lazio, Via Di Biasio 43, 03043 Cassino, FR (Italy); Lens, Piet N.L. [UNESCO-IHE Institute for Water Education, Westvest 7, 2611 AX Delft (Netherlands); Pirozzi, Francesco [Department of Civil, Architectural and Environmental Engineering, University of Naples Federico II, Via Claudio 21, 80125 Naples (Italy)

    2015-04-15

    Highlights: • Almost 100% of the biomethane potential of food waste was recovered during AD in a two-stage CSTR. • Recirculation of the liquid fraction of the digestate provided the necessary buffer in the AD reactors. • A higher OLR (0.9 gVS/L·d) led to higher accumulation of TAN, which caused more toxicity. • A two-stage reactor is more sensitive to elevated concentrations of ammonia. • The IC50 of TAN for the AD of food waste amounts to 3.8 g/L. - Abstract: This research compares the operation of one-stage and two-stage anaerobic continuously stirred tank reactor (CSTR) systems fed semi-continuously with food waste. The main purpose was to investigate the effects of ammoniacal nitrogen on the anaerobic digestion process. The two-stage system gave more reliable operation compared to one-stage due to: (i) a better pH self-adjusting capacity; (ii) a higher resistance to organic loading shocks; and (iii) a higher conversion rate of organic substrate to biomethane. Also a small amount of biohydrogen was detected from the first stage of the two-stage reactor making this system attractive for biohythane production. As the digestate contains ammoniacal nitrogen, re-circulating it provided the necessary alkalinity in the systems, thus preventing an eventual failure by volatile fatty acids (VFA) accumulation. However, re-circulation also resulted in an ammonium accumulation, yielding a lower biomethane production. Based on the batch experimental results the 50% inhibitory concentration of total ammoniacal nitrogen on the methanogenic activities was calculated as 3.8 g/L, corresponding to 146 mg/L free ammonia for the inoculum used for this research. The two-stage system was affected by the inhibition more than the one-stage system, as it requires less alkalinity and the physically separated methanogens are more sensitive to inhibitory factors, such as ammonium and propionic acid.
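
    The TAN-to-free-ammonia conversion quoted in the abstract follows from the ammonia speciation equilibrium. A sketch using the commonly cited Emerson et al. (1975) temperature-dependent pKa; the pH value below is an assumption (the abstract does not state one), so the result only matches the quoted 146 mg/L in order of magnitude:

```python
def free_ammonia_mg_l(tan_g_l, pH, temp_c):
    """Free ammonia (NH3) concentration from total ammoniacal nitrogen,
    using the equilibrium fraction 1 / (1 + 10**(pKa - pH)) with the
    Emerson et al. (1975) temperature-dependent pKa."""
    temp_k = temp_c + 273.15
    pKa = 0.09018 + 2729.92 / temp_k
    fraction = 1.0 / (1.0 + 10 ** (pKa - pH))
    return tan_g_l * 1000.0 * fraction

# At mesophilic conditions (35 °C) and an assumed pH of 7.5, 3.8 g/L TAN
# gives free ammonia of the same order as the 146 mg/L quoted above.
print(round(free_ammonia_mg_l(3.8, 7.5, 35.0), 1))
```

    The strong pH dependence is why recirculating alkaline digestate, as described in the abstract, raises free-ammonia toxicity even at a fixed TAN level.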

  2. Relationship between symptom clusters and quality of life in patients at stages 2 to 4 chronic kidney disease in Korea.

    Science.gov (United States)

    Lee, Suk Jeong; Jeon, JaeHee

    2015-11-01

    This study was conducted to identify the relationship between symptom clusters and quality of life (QOL) in patients with stages 2 to 4 chronic kidney disease (CKD) in Korea. Using self-reported questionnaires, data were collected from 143 patients who underwent treatment for CKD at one hospital in Korea. The 17-item Patient Outcome Scale was used to measure symptoms, and the 36-item Short Form Health Survey Instrument Version 2 (SF-36v2) was used to measure the QOL. Data were analyzed using factor analysis to draw symptom clusters. Among five symptom clusters, the energy insufficiency and pain cluster was found to have the highest prevalence and greatest severity. The severity of symptom clusters showed negative correlations with both physical and mental component summary (PCS and MCS) scores. Elderly patients scored low on PCS, whereas younger patients in their 30s and 40s scored low on MCS. Negative correlations were found between symptom clusters and PCS as well as MCS. The severity of symptoms and QOL had stronger relationships with subjective perception of symptoms and psychological factors than with objective clinical indicators. As the effects of physical and psychological symptoms on the QOL in patients with stages 2 to 4 CKD were identified in this study, nurses should develop strategic nursing plans focused on symptom clusters and patients' subjective perception of symptoms rather than objective clinical indicators in order to improve the QOL in patients with CKD. Copyright © 2015 Elsevier Inc. All rights reserved.

  3. Two-stage dental implants inserted in a one-stage procedure : a prospective comparative clinical study

    NARCIS (Netherlands)

    Heijdenrijk, Kees

    2002-01-01

    The results of this study indicate that dental implants designed for a submerged implantation procedure can be used in a single-stage procedure and may be as predictable as one-stage implants. Although one-stage implant systems and two-stage.

  4. Medical Image Retrieval Based On the Parallelization of the Cluster Sampling Algorithm

    OpenAIRE

    Ali, Hesham Arafat; Attiya, Salah; El-henawy, Ibrahim

    2017-01-01

    In this paper we develop parallel cluster sampling algorithms and show that a multi-chain version is embarrassingly parallel and can be used efficiently for medical image retrieval among other applications.

  5. Reduction of self-perceived discomforts in critically ill patients in French intensive care units: study protocol for a cluster-randomized controlled trial.

    Science.gov (United States)

    Kalfon, Pierre; Mimoz, Olivier; Loundou, Anderson; Geantot, Marie-Agnès; Revel, Nathalie; Villard, Isabelle; Amour, Julien; Azoulay, Elie; Garrouste-Orgeas, Maïté; Martin, Claude; Sharshar, Tarek; Baumstarck, Karine; Auquier, Pascal

    2016-02-16

It is now well documented that critically ill patients are exposed to stressful conditions and experience discomforts from multiple sources. Improved identification of the discomforts of patients in intensive care units (ICUs) may have implications for managing their care, including consideration of ethical issues, and may assist clinicians in choosing the most appropriate interventions. The primary objective of this study was to assess the effectiveness of a multicomponent program of discomfort reduction in critically ill patients. The secondary objectives were to assess the sustainability of the impact of the program and the potential seasonality effect. We conducted a multicenter, cluster-randomized, controlled, single (patient)-blind study involving 34 French adult ICUs. The experimental intervention was a 6-month period during which the multicomponent program was implemented in the ICU and included the following steps: identification of discomforts, immediate feedback to the healthcare team, and implementation of targeted interventions. The control intervention was a 6-month period during which no such program was implemented. The primary endpoint was the monthly overall score of self-reported discomfort from the French questionnaire on discomforts in ICU patients (IPREA). The secondary endpoints were the scores of the discomfort items of IPREA. The sample size of 660 individuals provides 80% power to detect a 25% difference in the overall discomfort score of IPREA between the two groups (design effect: 2.9). The results of this cluster-randomized controlled study are expected to confirm that a multicomponent program of discomfort reduction may be a new strategy in the management of care for critically ill patients. ClinicalTrials.gov NCT02442934, registered 11 May 2015.

  6. Community-wide intervention and population-level physical activity: a 5-year cluster randomized trial

    Science.gov (United States)

    Kamada, Masamitsu; Kitayuguchi, Jun; Abe, Takafumi; Taguri, Masataka; Inoue, Shigeru; Ishikawa, Yoshiki; Bauman, Adrian; Lee, I-Min; Miyachi, Motohiko; Kawachi, Ichiro

    2018-01-01

Abstract Background: Evidence from a limited number of short-term trials indicates the difficulty in achieving population-level improvements in physical activity (PA) through community-wide interventions (CWIs). We sought to evaluate the effectiveness of a 5-year CWI for promoting PA in middle-aged and older adults using a cluster randomized design. Methods: We randomized 12 communities in Unnan, Japan, to either intervention (9) or control (3). Additionally, intervention communities were randomly allocated to three subgroups by the different PA types promoted. Randomly sampled residents aged 40–79 years responded to the baseline survey (n = 4414; 74%) and were followed at 1, 3 and 5 years (78–83% response rate). The intervention was a 5-year CWI using social marketing to promote PA. The primary outcome was a change in recommended levels of PA. Results: Compared with control communities, the proportion of adults achieving recommended levels of PA increased in intervention communities [adjusted change difference = 4.6 percentage points (95% confidence interval: 0.4, 8.8)]. The intervention was effective in promoting all types of recommended PA, i.e. aerobic (walking, 6.4%), flexibility (6.1%) and muscle-strengthening activities (5.7%). However, a bundled approach, which attempted to promote all of the above forms of PA simultaneously, was not effective (1.3–3.4%, P ≥ 0.138). Linear dose–response relationships between CWI awareness and changes in PA were observed (P ≤ 0.02). Pain intensity decreased in the shoulder (intervention and control) and lower back (intervention only), but there was little change difference in the musculoskeletal pain outcomes between the groups. Conclusions: The 5-year CWI using the focused social marketing strategy increased population-level PA. PMID:29228255

  7. Risk averse optimal operation of a virtual power plant using two stage stochastic programming

    International Nuclear Information System (INIS)

    Tajeddini, Mohammad Amin; Rahimi-Kian, Ashkan; Soroudi, Alireza

    2014-01-01

VPP (Virtual Power Plant) is defined as a cluster of energy conversion/storage units which are centrally operated in order to improve the technical and economic performance. This paper addresses the optimal operation of a VPP considering the risk factors affecting its daily operation profits. The optimal operation is modelled in both the day-ahead and balancing markets as a two-stage stochastic mixed integer linear program in order to maximize a GenCo's (generation company's) expected profit. Furthermore, the CVaR (Conditional Value at Risk) is used as a risk measure technique in order to control the risk of low-profit scenarios. The uncertain parameters, including the PV power output, wind power output and day-ahead market prices, are modelled through scenarios. The proposed model is successfully applied to a real case study to show its applicability and the results are presented and thoroughly discussed. - Highlights: • Virtual power plant modelling considering a set of energy generating and conversion units. • Uncertainty modelling using the two-stage stochastic programming technique. • Risk modelling using conditional value at risk. • Flexible operation of renewable energy resources. • Electricity price uncertainty in day ahead energy markets
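The CVaR criterion used to control low-profit scenarios can be illustrated on a discrete scenario set: CVaR at level alpha is the probability-weighted average profit of the worst (1 - alpha) tail. A minimal sketch; the scenario profits, probabilities and confidence level are invented for illustration and are not from the paper's case study:

```python
def cvar(profits, probs, alpha=0.95):
    """Conditional Value at Risk of discrete profit scenarios: the
    probability-weighted average profit over the worst (1 - alpha) tail.
    A lower CVaR indicates a riskier operating plan."""
    tail = 1.0 - alpha
    # Sort scenarios from worst (lowest profit) upward.
    order = sorted(zip(profits, probs))
    acc, weighted = 0.0, 0.0
    for p, pr in order:
        take = min(pr, tail - acc)  # clip the last scenario at the tail mass
        if take <= 0:
            break
        weighted += p * take
        acc += take
    return weighted / tail

# Five equiprobable daily-profit scenarios (illustrative numbers):
profits = [120.0, 80.0, 150.0, -40.0, 60.0]
probs = [0.2] * 5
print(cvar(profits, probs, alpha=0.8))  # average of the worst 20% tail
```

In the two-stage model, a term like this is appended to the expected-profit objective with a risk-aversion weight, so the optimizer trades average profit against the depth of the worst scenarios.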

  8. The Impact of Combined Music and Tai Chi on Depressive Symptoms Among Community-Dwelling Older Persons: A Cluster Randomized Controlled Trial.

    Science.gov (United States)

    Liao, S J; Tan, M P; Chong, M C; Chua, Y P

    2018-05-01

The effectiveness of pharmacological treatment may be limited in older persons. Several studies using Tai Chi or music therapy separately confirmed positive effects in the reduction of depressive symptoms. We conducted a cluster randomized controlled trial to evaluate the possible synergistic effect of combined music and Tai Chi on depressive symptoms. One hundred and seven older adults with mild to moderate depressive symptoms were recruited from Ya'an city. Fifty-five participants were cluster randomized to the combined music and Tai Chi group for three months, while the other fifty-two individuals were randomized to the control group, which entailed routine health education delivered monthly by community nurses. The primary outcome of depressive symptoms was measured with the Geriatric Depression Scale (GDS) at baseline and monthly for three months. At three-month follow-up, a statistically significant improvement in depressive symptoms was found in the intervention group compared with the control group (F(3,315) = 69.661, P < 0.001). Following adjustments for socio-demographic data, the true effect of the intervention on depressive symptoms remained significant (F = 41.725, P < 0.01, ηp² = 0.574). Combined music and Tai Chi reduced depressive symptoms among community-dwelling older persons. This represents an economically viable solution to the management of depression in highly populous developing nations.

  9. Efficient construction of two-dimensional cluster states with probabilistic quantum gates

    International Nuclear Information System (INIS)

    Chen Qing; Cheng Jianhua; Wang Kelin; Du Jiangfeng

    2006-01-01

    We propose an efficient scheme for constructing arbitrary two-dimensional (2D) cluster states using probabilistic entangling quantum gates. In our scheme, the 2D cluster state is constructed with starlike basic units generated from 1D cluster chains. By applying parallel operations, the process of generating 2D (or higher-dimensional) cluster states is significantly accelerated, which provides an efficient way to implement realistic one-way quantum computers

  10. Vitamin D status of older South Africans | Charlton | South African ...

    African Journals Online (AJOL)

Objective: To determine the vitamin D status of older 'coloured' South Africans who had not sustained a fracture. Design: Cross-sectional analytic study. Methods: A random sample of 200 non-institutionalised subjects in Cape Town aged ≥65 years was drawn using a two-stage cluster design. Trained fieldworkers ...
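The two-stage cluster design mentioned here (first draw a random sample of clusters, then draw subjects within each selected cluster) can be sketched as follows; the frame of enumeration areas and households is invented for illustration:

```python
import random

def two_stage_sample(clusters, n_clusters, n_per_cluster, seed=0):
    """First stage: simple random sample of clusters.
    Second stage: simple random sample of units within each chosen cluster."""
    rng = random.Random(seed)
    chosen = rng.sample(sorted(clusters), n_clusters)
    return {
        c: rng.sample(clusters[c], min(n_per_cluster, len(clusters[c])))
        for c in chosen
    }

# Illustrative frame: 10 enumeration areas ("EA0".."EA9") of 30 households each.
frame = {f"EA{i}": [f"EA{i}-hh{j}" for j in range(30)] for i in range(10)}
sample = two_stage_sample(frame, n_clusters=4, n_per_cluster=5)
print(sum(len(v) for v in sample.values()))  # 20 sampled households
```

With equal-probability selection at both stages, large clusters are under-represented; real surveys usually select clusters with probability proportional to size, which this sketch omits.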

  11. Color Gradients Within Globular Clusters: Restricted Numerical Simulation

    Directory of Open Access Journals (Sweden)

    Young-Jong Sohn

    1997-06-01

The results of a restricted numerical simulation of the color gradients within globular clusters are presented. The standard luminosity function of M3 and Salpeter's initial mass function were used to generate model clusters as a fundamental population. Color gradients within the sample clusters for both King and power-law cusp models of surface brightness distributions are discussed in the case of using the standard luminosity function. The dependence of color gradients on several parameters for the simulations with Salpeter's initial mass function, such as the slope of the initial mass function, cluster age, metallicity, the concentration parameter of the King model, and the slope of the power law, is also discussed. No significant radial color gradients appear in the sample clusters, which are regenerated by a random number generation technique with various parameters in both the King and power-law cusp models of surface brightness distributions. Dynamical mass segregation and the stellar evolution of horizontal branch stars and blue stragglers should be included in the general case of model simulations to reproduce the observed radial color gradients within globular clusters.
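Generating model clusters from Salpeter's initial mass function, as described above, amounts to drawing stellar masses from a power law dN/dm ∝ m^(-2.35), which can be done by inverse-transform sampling. A minimal sketch; the mass limits (0.1 to 100 solar masses) are common illustrative choices, not values taken from the paper:

```python
import random

ALPHA = 2.35  # Salpeter slope

def salpeter_mass(u, m_min=0.1, m_max=100.0, alpha=ALPHA):
    """Inverse-transform sample from dN/dm ~ m**(-alpha) on [m_min, m_max].
    u is a uniform(0, 1) random number; u = 0 maps to m_min, u = 1 to m_max."""
    a = 1.0 - alpha
    return (m_min**a + u * (m_max**a - m_min**a)) ** (1.0 / a)

rng = random.Random(42)
masses = [salpeter_mass(rng.random()) for _ in range(10000)]
# The IMF is bottom-heavy: the vast majority of drawn stars are low-mass.
print(min(masses), max(masses))
```

The same draw feeds a mass-to-light and mass-to-color relation to build the synthetic surface-brightness and color profiles.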

  12. Note on an Identity Between Two Unbiased Variance Estimators for the Grand Mean in a Simple Random Effects Model.

    Science.gov (United States)

    Levin, Bruce; Leu, Cheng-Shiun

    2013-01-01

    We demonstrate the algebraic equivalence of two unbiased variance estimators for the sample grand mean in a random sample of subjects from an infinite population where subjects provide repeated observations following a homoscedastic random effects model.
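For a balanced design, the identity can be checked numerically: the sample variance of the subject means divided by n coincides with the ANOVA-based estimator built from the between- and within-subject mean squares. A minimal sketch on simulated data; the variance components and sample sizes are illustrative assumptions:

```python
import random

def grand_mean_variance_estimators(data):
    """data: list of n subjects, each a list of k repeated observations
    (balanced design). Returns two unbiased estimators of Var(grand mean)."""
    n, k = len(data), len(data[0])
    subj_means = [sum(row) / k for row in data]
    grand = sum(subj_means) / n

    # Estimator 1: sample variance of the subject means, divided by n.
    s2_means = sum((m - grand) ** 2 for m in subj_means) / (n - 1)
    est1 = s2_means / n

    # Estimator 2: ANOVA route via between/within mean squares
    # (no truncation of the between component at zero, to keep the identity exact).
    msb = k * s2_means
    msw = sum((x - m) ** 2 for row, m in zip(data, subj_means) for x in row) / (n * (k - 1))
    sigma2_b = (msb - msw) / k
    est2 = (sigma2_b + msw / k) / n
    return est1, est2

rng = random.Random(7)
data = [[rng.gauss(b, 1.0) for _ in range(4)] for b in (rng.gauss(0, 2) for _ in range(8))]
e1, e2 = grand_mean_variance_estimators(data)
print(e1, e2)  # algebraically identical
```

The algebra behind the check: est2 = ((MSB - MSW)/k + MSW/k)/n = MSB/(kn) = s²_means/n = est1, so the two routes always agree in a balanced design.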

  13. Maxillofacial growth and speech outcome after one-stage or two-stage palatoplasty in unilateral cleft lip and palate. A systematic review.

    Science.gov (United States)

    Reddy, Rajgopal R; Gosla Reddy, Srinivas; Vaidhyanathan, Anitha; Bergé, Stefaan J; Kuijpers-Jagtman, Anne Marie

    2017-06-01

The number of surgical procedures to repair a cleft palate may play a role in the outcome for maxillofacial growth and speech. The aim of this systematic review was to investigate the relationship between the number of surgical procedures performed to repair the cleft palate and maxillofacial growth, speech and fistula formation in non-syndromic patients with unilateral cleft lip and palate. An electronic search was performed in the PubMed/old MEDLINE, the Cochrane Library, EMBASE, Scopus and CINAHL databases for publications between 1960 and December 2015. Publications before 1950 in journals of plastic and maxillofacial surgery were hand searched. Additional hand searches were performed on studies mentioned in the reference lists of relevant articles. Search terms included unilateral, cleft lip and/or palate and palatoplasty. Two reviewers assessed eligibility for inclusion, extracted data, applied quality indicators and graded the level of evidence. Twenty-six studies met the inclusion criteria. All were retrospective and non-randomized comparisons of one- and two-stage palatoplasty. The methodological quality of most of the studies was graded moderate to low. The outcomes concerned the comparison of one- and two-stage palatoplasty with respect to growth of the mandible, maxilla and cranial base, and to speech and fistula formation. Due to the lack of high-quality studies, there is no conclusive evidence of a relationship between one- or two-stage palatoplasty and facial growth, speech or fistula formation in patients with unilateral cleft lip and palate. Copyright © 2017 European Association for Cranio-Maxillo-Facial Surgery. Published by Elsevier Ltd. All rights reserved.

  14. Typical Periods for Two-Stage Synthesis by Time-Series Aggregation with Bounded Error in Objective Function

    Energy Technology Data Exchange (ETDEWEB)

    Bahl, Björn; Söhler, Theo; Hennen, Maike; Bardow, André, E-mail: andre.bardow@ltt.rwth-aachen.de [Institute of Technical Thermodynamics, RWTH Aachen University, Aachen (Germany)

    2018-01-08

Two-stage synthesis problems simultaneously consider here-and-now decisions (e.g., optimal investment) and wait-and-see decisions (e.g., optimal operation). The optimal synthesis of energy systems reveals such a two-stage character. The synthesis of energy systems involves multiple large time series such as energy demands and energy prices. Since problem size increases with the size of the time series, synthesis of energy systems leads to complex optimization problems. To reduce the problem size without losing solution quality, we propose a method for time-series aggregation to identify typical periods. Typical periods retain the chronology of time steps, which enables modeling of energy systems, e.g., with storage units or start-up cost. The aim of the proposed method is to obtain few typical periods with few time steps per period, while accurately representing the objective function of the full time series, e.g., cost. Thus, we determine the error of time-series aggregation as the cost difference between operating the optimal design for the aggregated time series and for the full time series. Thereby, we rigorously bound the maximum performance loss of the optimal energy system design. In an initial step, the proposed method identifies the best length of typical periods by autocorrelation analysis. Subsequently, an adaptive procedure determines aggregated typical periods employing the clustering algorithm k-medoids, which groups similar periods into clusters and selects one representative period per cluster. Moreover, the number of time steps per period is aggregated by a novel clustering algorithm maintaining the chronology of the time steps in the periods. The method is iteratively repeated until the error falls below a threshold value. A case study based on a real-world synthesis problem of an energy system shows that time-series aggregation from 8,760 time steps to 2 typical periods with 2 time steps each results in an error smaller than the optimality gap of
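The k-medoids step described above (group similar periods into clusters, keep one representative period per cluster) can be solved exactly for small inputs by exhaustive search. A minimal sketch; the period vectors and k are invented for illustration, and real aggregation of 8,760 time steps would use a scalable algorithm such as PAM instead:

```python
from itertools import combinations

def dist(p, q):
    """Euclidean distance between two periods (equal-length value vectors)."""
    return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5

def k_medoids_exact(periods, k):
    """Exhaustive k-medoids for small inputs: choose the k periods that
    minimize the total distance of every period to its nearest medoid.
    The medoids themselves are the representative 'typical periods'."""
    best, best_cost = None, float("inf")
    for medoids in combinations(range(len(periods)), k):
        cost = sum(min(dist(periods[i], periods[m]) for m in medoids)
                   for i in range(len(periods)))
        if cost < best_cost:
            best, best_cost = medoids, cost
    return best, best_cost

# Six daily demand profiles (illustrative), falling into two clear regimes:
periods = [(10, 12), (11, 13), (10, 13), (30, 33), (31, 32), (29, 31)]
medoids, cost = k_medoids_exact(periods, k=2)
print(medoids, round(cost, 2))
```

Unlike k-means centroids, the chosen representatives are actual observed periods, which is what lets the aggregated problem retain a realizable chronology of time steps.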

  15. Effectiveness of a group diabetes education programme in underserved communities in South Africa: pragmatic cluster randomized control trial

    Directory of Open Access Journals (Sweden)

    Mash Bob

    2012-12-01

Abstract Background: Diabetes is an important contributor to the burden of disease in South Africa, and prevalence rates as high as 33% have been recorded in Cape Town. Previous studies show that quality of care and health outcomes are poor. The development of an effective education programme should impact on self-care, lifestyle change and adherence to medication, and lead to better control of diabetes, fewer complications and better quality of life. Methods: Trial design: pragmatic cluster randomized controlled trial. Participants: type 2 diabetic patients attending 45 public sector community health centres in Cape Town. Interventions: the intervention group will receive 4 sessions of group diabetes education delivered by a health promotion officer in a guiding style; the control group will receive usual care, which consists of ad hoc advice during consultations and occasional educational talks in the waiting room. Objective: to evaluate the effectiveness of the group diabetes education programme. Outcomes: primary outcomes: diabetes self-care activities, 5% weight loss, 1% reduction in HbA1c; secondary outcomes: self-efficacy, locus of control, mean blood pressure, mean weight loss, mean waist circumference, mean HbA1c, mean total cholesterol, quality of life. Randomisation: computer-generated random numbers. Blinding: patients, health promoters and research assistants could not be blinded to the health centre's allocation. Numbers randomized: seventeen health centres (34 in total) will be randomly assigned to either the control or the intervention group. A sample size of 1360 patients in 34 clusters of 40 patients will give a power of 80% to detect the primary outcomes with 5% precision. Altogether 720 patients were recruited in the intervention arm and 850 in the control arm, giving a total of 1570.
Discussion: The study will inform policy makers and managers of the district health system, particularly in low to middle income countries, if this programme can

  16. Comparative assessment of single-stage and two-stage anaerobic digestion for the treatment of thin stillage.

    Science.gov (United States)

    Nasr, Noha; Elbeshbishy, Elsayed; Hafez, Hisham; Nakhla, George; El Naggar, M Hesham

    2012-05-01

A comparative evaluation of single-stage and two-stage anaerobic digestion processes for biomethane and biohydrogen production using thin stillage was performed to assess the impact of separating the acidogenic and methanogenic stages on anaerobic digestion. Thin stillage, the main by-product from ethanol production, was characterized by a high total chemical oxygen demand (TCOD) of 122 g/L and total volatile fatty acids (TVFAs) of 12 g/L. A maximum methane yield of 0.33 L CH(4)/gCOD(added) (STP) was achieved in the two-stage process, while the single-stage process achieved a maximum yield of only 0.26 L CH(4)/gCOD(added) (STP). The separation of the acidification stage increased the TVFAs to TCOD ratio from 10% in the raw thin stillage to 54% due to the conversion of carbohydrates into hydrogen and VFAs. Comparison of the two processes based on energy outcome revealed that an increase of 18.5% in the total energy yield was achieved using two-stage anaerobic digestion. Copyright © 2012 Elsevier Ltd. All rights reserved.

17. A cluster analytic study of the Wechsler Intelligence Scale for Children-IV in children referred for psychoeducational assessment due to persistent academic difficulties.

    Science.gov (United States)

    Hale, Corinne R; Casey, Joseph E; Ricciardi, Philip W R

    2014-02-01

Wechsler Intelligence Scale for Children-IV core subtest scores of 472 children were cluster analyzed to determine whether reliable and valid subgroups would emerge. Three subgroups were identified. Clusters were reliable across different stages of the analysis as well as across algorithms and samples. With respect to external validity, the Globally Low cluster differed from the other two clusters on the Wechsler Individual Achievement Test-II Word Reading, Numerical Operations, and Spelling subtests, whereas the latter two clusters did not differ from one another. The clusters derived have been identified in studies using previous WISC editions. Clusters characterized by poor performance on subtests historically associated with the VIQ (i.e., VCI + WMI) and PIQ (i.e., POI + PSI) did not emerge, nor did a cluster characterized by low scores on PRI subtests. Picture Concepts represented the highest subtest score in every cluster, failing to vary in a predictable manner with the other PRI subtests.

  18. Condensate from a two-stage gasifier

    DEFF Research Database (Denmark)

    Bentzen, Jens Dall; Henriksen, Ulrik Birk; Hindsgaul, Claus

    2000-01-01

Condensate, produced when gas from a downdraft biomass gasifier is cooled, contains organic compounds that inhibit nitrifiers. Treatment with activated carbon removes most of the organics and makes the condensate far less inhibitory. The condensate from an optimised two-stage gasifier is so clean that the organic compounds and the inhibition effect are very low even before treatment with activated carbon. The moderate inhibition effect relates to a high content of ammonia in the condensate. The nitrifiers become tolerant to the condensate after a few weeks of exposure. The levels of organic compounds and of inhibition are so low that condensate from the optimised two-stage gasifier can be led to the public sewer.

  19. Justification of a Monte Carlo Algorithm for the Diffusion-Growth Simulation of Helium Clusters in Materials

    International Nuclear Information System (INIS)

    Yu-Lu, Zhou; Ai-Hong, Deng; Qing, Hou; Jun, Wang

    2009-01-01

A theoretical analysis of a Monte Carlo (MC) method for the simulation of the diffusion-growth of helium clusters in materials is presented. This analysis is based on the assumption that the diffusion-growth process consists of a first stage, during which the clusters diffuse freely, and a second stage, in which coalescence occurs with a certain probability. Since the accuracy of the MC simulation results is sensitive to the coalescence probability, the MC calculations in the second stage are studied in detail. Firstly, the coalescence probability is analytically formulated for the one-dimensional diffusion-growth case. Thereafter, the one-dimensional results are employed to justify the MC simulation. The choice of time step and the random number generator used in the MC simulation are discussed
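The one-dimensional setting above (clusters diffusing freely, then coalescing on contact) can be illustrated with a small Monte Carlo estimate of the probability that two lattice random walkers meet within a fixed number of steps. This is a toy illustration with invented parameters, not the paper's algorithm:

```python
import random

def meet_probability(separation, steps, trials=20000, seed=0):
    """MC estimate of the probability that two independent 1D lattice random
    walkers, started `separation` sites apart (even separation), occupy the
    same site (coalesce) within `steps` steps. Only the relative coordinate
    matters: each step changes the gap by -2, 0 or +2."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        gap = separation
        for _ in range(steps):
            gap += rng.choice((-1, 1)) - rng.choice((-1, 1))
            if gap == 0:
                hits += 1
                break
    return hits / trials

print(meet_probability(separation=4, steps=50))
```

The estimate decreases with initial separation and grows with the time window, which is the qualitative behaviour the analytical one-dimensional coalescence probability must reproduce.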

  20. Evidence of two-stage melting of Wigner solids

    Science.gov (United States)

    Knighton, Talbot; Wu, Zhe; Huang, Jian; Serafin, Alessandro; Xia, J. S.; Pfeiffer, L. N.; West, K. W.

    2018-02-01

Ultralow carrier concentrations of two-dimensional holes down to p = 1 × 10⁹ cm⁻² are realized. Remarkable insulating states are found below a critical density of p_c = 4 × 10⁹ cm⁻², or r_s ≈ 40. Sensitive dc V-I measurement as a function of temperature and electric field reveals a two-stage phase transition supporting the melting of a Wigner solid as a two-stage first-order transition.

  1. Two-stage liquefaction of a Spanish subbituminous coal

    Energy Technology Data Exchange (ETDEWEB)

    Martinez, M.T.; Fernandez, I.; Benito, A.M.; Cebolla, V.; Miranda, J.L.; Oelert, H.H. (Instituto de Carboquimica, Zaragoza (Spain))

    1993-05-01

A Spanish subbituminous coal has been processed by two-stage liquefaction in a non-integrated process. The first-stage coal liquefaction was carried out in a continuous pilot plant in Germany at Clausthal Technical University at 400 °C, 20 MPa hydrogen pressure and with anthracene oil as solvent. The second-stage coal liquefaction was performed in continuous operation in a hydroprocessing unit at the Instituto de Carboquimica at 450 °C and 10 MPa hydrogen pressure, with two commercial catalysts: Harshaw HT-400E (Co-Mo/Al₂O₃) and HT-500E (Ni-Mo/Al₂O₃). The total conversion for the first-stage coal liquefaction was 75.41 wt% (coal d.a.f.), comprising 3.79 wt% gases, 2.58 wt% primary condensate and 69.04 wt% heavy liquids. The heteroatom removal in the second-stage liquefaction was 97-99 wt% of S, 85-87 wt% of N and 93-100 wt% of O. The hydroprocessed liquids have about 70% of compounds with boiling points below 350 °C, and meet the sulphur and nitrogen specifications for refinery feedstocks. Liquids from two-stage coal liquefaction have been distilled, and the naphtha, kerosene and diesel fractions obtained have been characterized. 39 refs., 3 figs., 8 tabs.

  2. Optimal production lot size and reorder point of a two-stage supply chain while random demand is sensitive with sales teams' initiatives

    Science.gov (United States)

    Sankar Sana, Shib

    2016-01-01

The paper develops a production-inventory model of a two-stage supply chain consisting of one manufacturer and one retailer to study the production lot size/order quantity, the reorder point and the sales teams' initiatives, where demand of the end customers depends on a random variable and on the sales teams' initiatives simultaneously. The manufacturer produces the retailer's order quantity in one lot, in which the procurement cost per unit quantity follows a realistic convex function of the production lot size. In the chain, the cost of the sales team's initiatives/promotion efforts and the wholesale price of the manufacturer are negotiated to the points at which their optimum profits come nearest to their target profits. This study suggests that the management of firms determine the optimal order quantity/production quantity, reorder point and sales teams' initiatives/promotional effort in order to achieve their maximum profits. An analytical method is applied to determine the optimal values of the decision variables. Finally, numerical examples with graphical presentation and a sensitivity analysis of the key parameters are presented to provide further insight into the model.
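The interplay between lot size and reorder point can be illustrated with the textbook approximate cost function for a (Q, r) policy under normal lead-time demand, minimized here by grid search. All parameters (demand, costs, penalty) are invented, and this standard cost approximation stands in for, rather than reproduces, the paper's profit model:

```python
import math

def normal_loss(z):
    """Standard normal loss function L(z) = phi(z) - z * (1 - Phi(z)),
    giving the expected overshoot of a standard normal above z."""
    phi = math.exp(-z * z / 2) / math.sqrt(2 * math.pi)
    Phi = 0.5 * (1 + math.erf(z / math.sqrt(2)))
    return phi - z * (1 - Phi)

def qr_cost(q, r, D=1200.0, K=100.0, h=2.0, p=25.0, mu=50.0, sigma=15.0):
    """Approximate annual cost of a (Q, r) policy with normal lead-time
    demand (mean mu, sd sigma): setup + holding + shortage penalty.
    All parameter values are illustrative assumptions."""
    shortage_per_cycle = sigma * normal_loss((r - mu) / sigma)
    return (K * D / q                         # ordering/setup cost
            + h * (q / 2.0 + r - mu)          # average on-hand inventory
            + p * (D / q) * shortage_per_cycle)  # expected backorder penalty

# Coarse grid search over the two decision variables.
best = min(((q, r) for q in range(50, 501, 10) for r in range(50, 121, 5)),
           key=lambda qr: qr_cost(*qr))
print(best, round(qr_cost(*best), 1))
```

With these numbers the search lands near the EOQ lot size (sqrt(2KD/h) ≈ 346) and a reorder point roughly two standard deviations above mean lead-time demand, matching the usual analytical conditions.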

  3. Clustering Approaches for Pragmatic Two-Layer IoT Architecture

    Directory of Open Access Journals (Sweden)

    J. Sathish Kumar

    2018-01-01

Connecting all devices through the Internet is now practical due to the Internet of Things. IoT assures numerous applications in the everyday life of common people, government bodies, business, and society as a whole. Collaboration among the devices in IoT to bring various applications into the real world is a challenging task. In this context, we introduce an application-based two-layer architectural framework for IoT which consists of a sensing layer and an IoT layer. For any real-time application, sensing devices play an important role. Both layers are required for accomplishing IoT-based applications. The success of any IoT-based application relies on efficient communication and utilization of the devices, and of the data acquired by the devices, at both layers. Grouping these devices helps to achieve this, and leads to the formation of clusters of devices at various levels. The clustering helps not only in collaboration but also in prolonging the overall network lifetime. In this paper, we propose two clustering algorithms, based on heuristics and on graphs, respectively. The proposed clustering approaches are evaluated on an IoT platform using standard parameters and compared with different approaches reported in the literature.

  4. Two-Stage Multi-Objective Collaborative Scheduling for Wind Farm and Battery Switch Station

    Directory of Open Access Journals (Sweden)

    Zhe Jiang

    2016-10-01

In order to deal with the uncertainties of wind power, a wind farm and an electric vehicle (EV) battery switch station (BSS) were proposed to work together as an integrated system. In this paper, the collaborative scheduling problems of such a system were studied. Considering the features of the integrated system, three indices are proposed: battery swapping demand curtailment of the BSS, wind curtailment of the wind farm, and generation schedule tracking of the integrated system. In addition, a two-stage multi-objective collaborative scheduling model was designed. In the first stage, a day-ahead model was built based on the theory of dependent chance programming. With the aim of maximizing the realization probabilities of these three operating indices, random fluctuations of wind power and battery switch demand were taken into account simultaneously. In order to explore the capability of the BSS as reserve, the readjustment process of the BSS within each hour was considered in this stage. In addition, the stored energy rather than the charging/discharging power of the BSS during each period was optimized, which provides a basis for further hour-ahead correction of the BSS. In the second stage, an hour-ahead model was established. In order to cope with the randomness of wind power and battery swapping demand, the proposed hour-ahead model utilized ultra-short-term prediction of the wind power and the battery switch demand to schedule the charging/discharging power of the BSS in a rolling manner. Finally, the effectiveness of the proposed models was validated by case studies. The simulation results indicated that the proposed model could realize complementarity between the wind farm and the BSS, reduce the dependence on the power grid, and facilitate the accommodation of wind power.

  5. COSMOS--improving the quality of life in nursing home patients: protocol for an effectiveness-implementation cluster randomized clinical hybrid trial.

    Science.gov (United States)

    Husebo, Bettina S; Flo, Elisabeth; Aarsland, Dag; Selbaek, Geir; Testad, Ingelin; Gulla, Christine; Aasmul, Irene; Ballard, Clive

    2015-09-15

Nursing home patients have complex mental and physical health problems, disabilities and social needs, combined with widespread prescription of psychotropic drugs. Preservation of their quality of life is an important goal. This can only be achieved within nursing homes that offer competent clinical conditions of treatment and care. COmmunication, Systematic assessment and treatment of pain, Medication review, Occupational therapy, Safety (COSMOS) is an effectiveness-implementation hybrid trial that combines the organization of activities with evidence-based interventions to improve staff competence and thereby the patients' quality of life, mental health and safety. The aim of this paper is to describe the development, content and implementation process of the COSMOS trial. COSMOS includes a 2-month pilot study with 128 participants distributed among nine Norwegian nursing homes, and a 4-month multicenter, cluster randomized effectiveness-implementation clinical hybrid trial with follow-up at month 9, including 571 patients from 67 nursing home units (one unit defined as one cluster). Clusters are randomized to the COSMOS intervention or current best practice (control group). The intervention group will receive a 2-day education program including written guidelines, repeated theoretical and practical training (credited education of caregivers, physicians and nursing home managers), case discussions and role play. A 1-day midway evaluation, information and interviews of nursing staff, and a telephone hotline all support the implementation process. Outcome measures include quality of life in late-stage dementia, neuropsychiatric symptoms, activities of daily living, pain, depression, sleep, medication, cost-utility analysis, hospital admission and mortality. Despite complex medical and psychosocial challenges, nursing home patients are often treated by staff possessing low level skills, lacking education and in facilities with a high staff turnover

  6. Herd Clustering: A synergistic data clustering approach using collective intelligence

    KAUST Repository

    Wong, Kachun; Peng, Chengbin; Li, Yue; Chan, Takming

    2014-01-01

    , this principle is used to develop a new clustering algorithm. Inspired by herd behavior, the clustering method is a synergistic approach using collective intelligence called Herd Clustering (HC). The novelty lies in its first stage, where data instances

  7. Evaluation of sampling schemes for in-service inspection of steam generator tubing

    International Nuclear Information System (INIS)

    Hanlen, R.C.

    1990-03-01

    This report is a follow-on of work initially sponsored by the US Nuclear Regulatory Commission (Bowen et al. 1989). The work presented here is funded by EPRI and is jointly sponsored by the Electric Power Research Institute (EPRI) and the US Nuclear Regulatory Commission (NRC). The goal of this research was to evaluate fourteen sampling schemes or plans. The main criterion used for evaluating plan performance was the effectiveness for sampling, detecting and plugging defective tubes. The performance criterion was evaluated across several choices of distributions of degraded/defective tubes, probability of detection (POD) curves and eddy-current sizing models. Conclusions from this study are dependent upon the tube defect distributions, sample size, and expansion rules considered. As degraded/defective tubes form ''clusters'' (i.e., maps 6A, 8A and 13A), the smaller sample sizes provide a capability of detecting and sizing defective tubes that approaches 100% inspection. When there is little or no clustering (i.e., maps 1A, 20 and 21), sample efficiency is approximately equal to the initial sample size taken. There is an indication (though not statistically significant) that the systematic sampling plans are better than the random sampling plans for equivalent initial sample size. There was no indication of an effect due to modifying the threshold value for the second stage expansion. The lack of an indication is likely due to the specific tube flaw sizes considered for the six tube maps. 1 ref., 11 figs., 19 tabs

  8. Randomized study of preoperative radiation and surgery or irradiation alone in the treatment of Stage IB and IIA carcinoma of the uterine cervix

    International Nuclear Information System (INIS)

    Perez, C.A.; Camel, H.M.; Kao, M.S.; Askin, F.

    1980-01-01

    A prospective randomized study in selected patients with Stage IB and IIA carcinoma of the uterine cervix was carried out. Patients were randomized to be treated with 1) irradiation alone, consisting of 1000 rad whole pelvis, an additional 4000 rad to the parametria with a step wedge midline block, and two intracavitary insertions for 7500 mgh; and 2) irradiation and surgery, consisting of 2000 rad whole pelvis irradiation and one intracavitary insertion for 5000 to 6000 mgh, followed two to six weeks later by a radical hysterectomy with pelvic lymphadenectomy. The five-year, tumor-free actuarial survival for Stage IB patients treated with radiation was 87% and with preoperative radiation and surgery 82%. In Stage IIA, the actuarial five-year survival NED was 57% for the irradiation alone group and 71% for the patients treated with preoperative radiation and radical hysterectomy. Major complications of therapy were slightly higher in the patients treated with radiation alone (9.4%, consisting of one recto-vaginal fistula and one vesico-vaginal fistula, and a combined recto-vesico-vaginal fistula in another patient). In the preoperative radiation group, only two ureteral strictures (4.1%) were noted. The present study shows no significant difference in therapeutic results or morbidity for invasive carcinoma of the uterine cervix Stage IB or IIA treated with irradiation alone or combined with a radical hysterectomy.

  9. Evidence for the direct ejection of clusters from non-metallic solids during laser vaporization

    International Nuclear Information System (INIS)

    Bloomfield, L.A.; Yang, Y.A.; Xia, P.; Junkin, A.L.

    1991-01-01

    This paper reports on the formation of molecular scale particles or clusters of alkali halides and semiconductors during laser vaporization of solids. By measuring the abundances of cluster ions produced in several different source configurations, the authors have determined that clusters are ejected directly from the source sample and do not need to grow from atomic or molecular vapor. Using samples of mixed alkali halide powders, the authors have found that unalloyed clusters are easily produced in a source that prevents growth from occurring after the clusters leave the sample surface. However, melting the sample or encouraging growth after vaporization leads to the production of alloyed cluster species. The sizes of the ejected clusters are initially random, but the population spectrum quickly becomes structured as hot, unstable-sized clusters decay into smaller particles. In carbon, large clusters with odd numbers of atoms decay almost immediately. The hot even clusters also decay, but much more slowly. The longest-lived clusters are the magic C50 and C60 fullerenes. The mass spectrum of large carbon clusters evolves in time from structureless, to only the even clusters, to primarily C50 and C60. If cluster growth is encouraged, the odd clusters reappear and the population spectrum again becomes relatively structureless.

  10. Sample size estimation to substantiate freedom from disease for clustered binary data with a specific risk profile

    DEFF Research Database (Denmark)

    Kostoulas, P.; Nielsen, Søren Saxmose; Browne, W. J.

    2013-01-01

    and power when applied to these groups. We propose the use of the variance partition coefficient (VPC), which measures the clustering of infection/disease for individuals with a common risk profile. Sample size estimates are obtained separately for those groups that exhibit markedly different heterogeneity..., thus optimizing resource allocation. A VPC-based predictive simulation method for sample size estimation to substantiate freedom from disease is presented. To illustrate the benefits of the proposed approach we give two examples with the analysis of data from a risk factor study on Mycobacterium avium...
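    For readers unfamiliar with the VPC mentioned above: under a logistic random-intercept model it has a closed form, VPC = σ_u² / (σ_u² + π²/3). A minimal sketch (illustrative only, not the authors' code; the function name is ours):

```python
import math

def vpc_logistic(sigma_u2):
    """Variance partition coefficient for clustered binary data under a
    logistic random-intercept model: the share of latent-scale variance
    that lies between clusters (i.e., the degree of clustering)."""
    return sigma_u2 / (sigma_u2 + math.pi ** 2 / 3)

# Groups with more between-cluster heterogeneity show a larger VPC:
for sigma_u2 in (0.5, 1.0, 2.0):
    print(f"sigma_u^2 = {sigma_u2}: VPC = {vpc_logistic(sigma_u2):.3f}")
```

    Groups with markedly different VPCs would then receive separate sample size calculations, as the abstract describes.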

  11. One-stage and two-stage penile buccal mucosa urethroplasty

    African Journals Online (AJOL)

    G. Barbagli

    2015-12-02

    Dec 2, 2015 ... there also seems to be a trend of decreasing urethritis and an increase of instrumentation and catheter-related strictures in these countries as well [4–6]. The repair of penile urethral strictures may require one- or two-stage urethroplasty [7–10]. Certainly, sexual function can be placed at risk by any surgery ...

  12. The added value of a mobile application of Community Case Management on referral, re-consultation and hospitalization rates of children aged under 5 years in two districts in Northern Malawi: study protocol for a pragmatic, stepped-wedge cluster-randomized controlled trial.

    Science.gov (United States)

    Hardy, Victoria; O'Connor, Yvonne; Heavin, Ciara; Mastellos, Nikolaos; Tran, Tammy; O'Donoghue, John; Fitzpatrick, Annette L; Ide, Nicole; Wu, Tsung-Shu Joseph; Chirambo, Griphin Baxter; Muula, Adamson S; Nyirenda, Moffat; Carlsson, Sven; Andersson, Bo; Thompson, Matthew

    2017-10-11

    There is evidence to suggest that frontline community health workers in Malawi are under-referring children to higher-level facilities. Integrating a digitized version of paper-based methods of Community Case Management (CCM) could strengthen delivery, increasing urgent referral rates and preventing unnecessary re-consultations and hospital admissions. This trial aims to evaluate the added value of the Supporting LIFE electronic Community Case Management Application (SL eCCM App) compared to paper-based CCM on urgent referral, re-consultation and hospitalization rates, in two districts in Northern Malawi. This is a pragmatic, stepped-wedge cluster-randomized trial assessing the added value of the SL eCCM App on urgent referral, re-consultation and hospitalization rates of children aged 2 months to under 5 years, within 7 days of the index visit. One hundred and two health surveillance assistants (HSAs) were stratified into six clusters based on geographical location, and clusters were randomized to the timing of crossover to the intervention using simple, computer-generated randomization. Training workshops were conducted prior to the control (paper-CCM) and intervention (paper-CCM + SL eCCM App) phases in assigned clusters. Neither participants nor study personnel were blinded to allocation. Outcome measures were determined by abstraction of clinical data from patient records 2 weeks after recruitment. A nested qualitative study explored perceptions of adherence to urgent referral recommendations, and a cost evaluation determined the financial and time-related costs to caregivers of subsequent health care utilization. The trial was conducted between July 2016 and February 2017. This is the first large-scale trial evaluating the value of adding a mobile application of CCM to the assessment of children aged under 5 years. The trial will generate evidence on the potential use of mobile health for CCM in Malawi, and more widely in other low- and middle-income countries.
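    The allocation described above (six clusters of HSAs, each crossing over once to the intervention at a randomized time) can be sketched with simple computer-generated randomization. The one-cluster-per-step layout and all names below are illustrative assumptions, not the trial's actual code:

```python
import random

def stepped_wedge_schedule(cluster_ids, n_periods, seed=0):
    """Assign clusters to crossover times in a stepped-wedge design.
    Each cluster starts in the control condition (0) and switches
    permanently to the intervention (1) at its randomized step."""
    ids = list(cluster_ids)
    rng = random.Random(seed)
    order = ids[:]
    rng.shuffle(order)
    # After a baseline period, one cluster crosses over at each step.
    crossover = {c: step + 1 for step, c in enumerate(order)}
    return {
        c: [1 if period >= crossover[c] else 0 for period in range(n_periods)]
        for c in ids
    }

# Six clusters, one baseline period plus six crossover steps:
for cluster, row in stepped_wedge_schedule(range(6), 7, seed=42).items():
    print(cluster, row)
```

    Every cluster eventually receives the intervention, so each serves as its own control across periods; only the crossover timing is randomized.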

  13. Additive non-uniform random sampling in superimposed fiber Bragg grating strain gauge

    International Nuclear Information System (INIS)

    Ma, Y C; Liu, H Y; Yan, S B; Li, J M; Tang, J; Yang, Y H; Yang, M W

    2013-01-01

    This paper demonstrates an additive non-uniform random sampling and interrogation method for dynamic and/or static strain gauge using a reflection spectrum from two superimposed fiber Bragg gratings (FBGs). The superimposed FBGs are designed to generate non-equidistant space of a sensing pulse train in the time domain during dynamic strain gauge. By combining centroid finding with smooth filtering methods, both the interrogation speed and accuracy are improved. A 1.9 kHz dynamic strain is measured by generating an additive non-uniform randomly distributed 2 kHz optical sensing pulse train from a mean 500 Hz triangular periodically changing scanning frequency. (paper)

  14. Cluster randomized trial in the general practice research database: 2. Secondary prevention after first stroke (eCRT study): study protocol for a randomized controlled trial

    Directory of Open Access Journals (Sweden)

    Dregan Alex

    2012-10-01

    Full Text Available Abstract Background The purpose of this research is to develop and evaluate methods for conducting pragmatic cluster randomized trials in a primary care electronic database. The proposal describes one application, in a less frequent chronic condition of public health importance, secondary prevention of stroke. A related protocol in antibiotic prescribing was reported previously. Methods/Design The study aims to implement a cluster randomized trial (CRT using the electronic patient records of the General Practice Research Database (GPRD as a sampling frame and data source. The specific objective of the trial is to evaluate the effectiveness of a computer-delivered intervention at enhancing the delivery of stroke secondary prevention in primary care. GPRD family practices will be allocated to the intervention or usual care. The intervention promotes the use of electronic prompts to support adherence with the recommendations of the UK Intercollegiate Stroke Working Party and NICE guidelines for the secondary prevention of stroke in primary care. Primary outcome measure will be the difference in systolic blood pressure between intervention and control trial arms at 12-month follow-up. Secondary outcomes will be differences in serum cholesterol, prescribing of antihypertensive drugs, statins, and antiplatelet therapy. The intervention will continue for 12 months. Information on the utilization of the decision-support tools will also be analyzed. Discussion The CRT will investigate the effectiveness of using a computer-delivered intervention to reduce the risk of stroke recurrence following a first stroke event. The study will provide methodological guidance on the implementation of CRTs in electronic databases in primary care. Trial registration Current Controlled Trials ISRCTN35701810

  15. X-Ray Temperatures, Luminosities, and Masses from XMM-Newton Follow-up of the First Shear-selected Galaxy Cluster Sample

    Energy Technology Data Exchange (ETDEWEB)

    Deshpande, Amruta J.; Hughes, John P. [Department of Physics and Astronomy, Rutgers the State University of New Jersey, 136 Frelinghuysen Road, Piscataway, NJ 08854 (United States); Wittman, David, E-mail: amrejd@physics.rutgers.edu, E-mail: jph@physics.rutgers.edu, E-mail: dwittman@physics.ucdavis.edu [Department of Physics, University of California, Davis, One Shields Avenue, Davis, CA 95616 (United States)

    2017-04-20

    We continue the study of the first sample of shear-selected clusters from the initial 8.6 square degrees of the Deep Lens Survey (DLS); a sample with well-defined selection criteria corresponding to the highest ranked shear peaks in the survey area. We aim to characterize the weak lensing selection by examining the sample’s X-ray properties. There are multiple X-ray clusters associated with nearly all the shear peaks: 14 X-ray clusters corresponding to seven DLS shear peaks. An additional three X-ray clusters cannot be definitively associated with shear peaks, mainly due to large positional offsets between the X-ray centroid and the shear peak. Here we report on the XMM-Newton properties of the 17 X-ray clusters. The X-ray clusters display a wide range of luminosities and temperatures; the L_X–T_X relation we determine for the shear-associated X-ray clusters is consistent with X-ray cluster samples selected without regard to dynamical state, while it is inconsistent with self-similarity. For a subset of the sample, we measure X-ray masses using temperature as a proxy, and compare to weak lensing masses determined by the DLS team. The resulting mass comparison is consistent with equality. The X-ray and weak lensing masses show considerable intrinsic scatter (∼48%), which is consistent with X-ray selected samples when their X-ray and weak lensing masses are independently determined.

  16. An Association between Bullying Behaviors and Alcohol Use among Middle School Students

    Science.gov (United States)

    Peleg-Oren, Neta; Cardenas, Gabriel A.; Comerford, Mary; Galea, Sandro

    2012-01-01

    Although a high prevalence of bullying behaviors among adolescents has been documented, little is known about the association between bullying behaviors and alcohol use among perpetrators or victims. This study used data from a representative two-stage cluster random sample of 44,532 middle school adolescents in Florida. We found a high…
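    A two-stage cluster random sample like the one used in the Florida study (schools as first-stage units, students within them at the second stage) can be sketched as follows; the frame and sizes are made up for illustration:

```python
import random

def two_stage_sample(frame, n_clusters, n_per_cluster, seed=0):
    """Stage 1: simple random sample of clusters (e.g. schools).
    Stage 2: simple random sample of members within each chosen cluster.
    `frame` maps a cluster id to the list of its members."""
    rng = random.Random(seed)
    chosen = rng.sample(sorted(frame), n_clusters)
    return {
        c: rng.sample(frame[c], min(n_per_cluster, len(frame[c])))
        for c in chosen
    }

# Hypothetical frame: 10 schools of 30 students each.
frame = {f"school{i}": [f"s{i}_{j}" for j in range(30)] for i in range(10)}
sample = two_stage_sample(frame, n_clusters=3, n_per_cluster=5, seed=7)
```

    Sampling whole clusters first keeps fieldwork cheap (fewer sites to visit) at the cost of a design effect from within-cluster correlation.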

  17. Preoperative staging of lung cancer with PET/CT: cost-effectiveness evaluation alongside a randomized controlled trial

    International Nuclear Information System (INIS)

    Soegaard, Rikke; Fischer, Barbara Malene B.; Mortensen, Jann; Hoejgaard, Liselotte; Lassen, Ulrik

    2011-01-01

    Positron emission tomography (PET)/CT has become a widely used technology for preoperative staging of non-small cell lung cancer (NSCLC). Two recent randomized controlled trials (RCT) have established its efficacy over conventional staging, but no studies have assessed its cost-effectiveness. The objective of this study was to assess the cost-effectiveness of PET/CT as an adjunct to conventional workup for preoperative staging of NSCLC. The study was conducted alongside an RCT in which 189 patients were allocated to conventional staging (n = 91) or conventional staging + PET/CT (n = 98) and followed for 1 year, after which the numbers of futile thoracotomies in each group were monitored. A full health care sector perspective was adopted for costing resource use. The outcome parameter was defined as the number needed to treat (NNT) - here number of PET/CT scans needed - to avoid one futile thoracotomy. All monetary estimates were inflated to 2010 EUR. The incremental cost of the PET/CT-based regimen was estimated at 3,927 EUR [95% confidence interval (CI) -3,331; 10,586] and the NNT at 4.92 (95% CI 3.00; 13.62). These resulted in an average incremental cost-effectiveness ratio of 19,314 EUR, which would be cost-effective at a probability of 0.90 given a willingness to pay of 50,000 EUR per avoided futile thoracotomy. When costs of comorbidity-related hospital services were excluded, the PET/CT regimen appeared dominant. Applying a full health care sector perspective, the cost-effectiveness of PET/CT for staging NSCLC seems to depend on the willingness to pay in order to avoid a futile thoracotomy. However, given that four outliers in terms of extreme comorbidity were all randomized to the PET/CT arm, there is uncertainty about the conclusion. When hospital costs of comorbidity were excluded, the PET/CT regimen was found to be both more accurate and cost saving. (orig.)

  18. STAR CLUSTER PROPERTIES IN TWO LEGUS GALAXIES COMPUTED WITH STOCHASTIC STELLAR POPULATION SYNTHESIS MODELS

    Energy Technology Data Exchange (ETDEWEB)

    Krumholz, Mark R. [Department of Astronomy and Astrophysics, University of California, Santa Cruz, CA 95064 (United States); Adamo, Angela [Department of Astronomy, Oskar Klein Centre, Stockholm University, SE-10691 Stockholm (Sweden); Fumagalli, Michele [Institute for Computational Cosmology and Centre for Extragalactic Astronomy, Department of Physics, Durham University, South Road, Durham DH1 3LE (United Kingdom); Wofford, Aida [Institut d’Astrophysique de Paris, 98bis Boulevard Arago, F-75014 Paris (France); Calzetti, Daniela; Grasha, Kathryn [Department of Astronomy, University of Massachusetts–Amherst, Amherst, MA (United States); Lee, Janice C.; Whitmore, Bradley C.; Bright, Stacey N.; Ubeda, Leonardo [Space Telescope Science Institute, Baltimore, MD (United States); Gouliermis, Dimitrios A. [Centre for Astronomy, Institute for Theoretical Astrophysics, University of Heidelberg, Heidelberg (Germany); Kim, Hwihyun [Korea Astronomy and Space Science Institute, Daejeon (Korea, Republic of); Nair, Preethi [Department of Physics and Astronomy, University of Alabama, Tuscaloosa, AL (United States); Ryon, Jenna E. [Department of Astronomy, University of Wisconsin–Madison, Madison, WI (United States); Smith, Linda J. [European Space Agency/Space Telescope Science Institute, Baltimore, MD (United States); Thilker, David [Department of Physics and Astronomy, The Johns Hopkins University, Baltimore, MD (United States); Zackrisson, Erik, E-mail: mkrumhol@ucsc.edu, E-mail: adamo@astro.su.se [Department of Physics and Astronomy, Uppsala University, Uppsala (Sweden)

    2015-10-20

    We investigate a novel Bayesian analysis method, based on the Stochastically Lighting Up Galaxies (slug) code, to derive the masses, ages, and extinctions of star clusters from integrated light photometry. Unlike many analysis methods, slug correctly accounts for incomplete initial mass function (IMF) sampling, and returns full posterior probability distributions rather than simply probability maxima. We apply our technique to 621 visually confirmed clusters in two nearby galaxies, NGC 628 and NGC 7793, that are part of the Legacy Extragalactic UV Survey (LEGUS). LEGUS provides Hubble Space Telescope photometry in the NUV, U, B, V, and I bands. We analyze the sensitivity of the derived cluster properties to choices of prior probability distribution, evolutionary tracks, IMF, metallicity, treatment of nebular emission, and extinction curve. We find that slug's results for individual clusters are insensitive to most of these choices, but that the posterior probability distributions we derive are often quite broad, and sometimes multi-peaked and quite sensitive to the choice of priors. In contrast, the properties of the cluster population as a whole are relatively robust against all of these choices. We also compare our results from slug to those derived with a conventional non-stochastic fitting code, Yggdrasil. We show that slug's stochastic models are generally a better fit to the observations than the deterministic ones used by Yggdrasil. However, the overall properties of the cluster populations recovered by both codes are qualitatively similar.

  19. Survey research with a random digit dial national mobile phone sample in Ghana: Methods and sample quality

    Science.gov (United States)

    Sefa, Eunice; Adimazoya, Edward Akolgo; Yartey, Emmanuel; Lenzi, Rachel; Tarpo, Cindy; Heward-Mills, Nii Lante; Lew, Katherine; Ampeh, Yvonne

    2018-01-01

    Introduction Generating a nationally representative sample in low and middle income countries typically requires resource-intensive household level sampling with door-to-door data collection. High mobile phone penetration rates in developing countries provide new opportunities for alternative sampling and data collection methods, but there is limited information about response rates and sample biases in coverage and nonresponse using these methods. We utilized data from an interactive voice response, random-digit dial, national mobile phone survey in Ghana to calculate standardized response rates and assess representativeness of the obtained sample. Materials and methods The survey methodology was piloted in two rounds of data collection. The final survey included 18 demographic, media exposure, and health behavior questions. Call outcomes and response rates were calculated according to the American Association of Public Opinion Research guidelines. Sample characteristics, productivity, and costs per interview were calculated. Representativeness was assessed by comparing data to the Ghana Demographic and Health Survey and the National Population and Housing Census. Results The survey was fielded during a 27-day period in February-March 2017. There were 9,469 completed interviews and 3,547 partial interviews. Response, cooperation, refusal, and contact rates were 31%, 81%, 7%, and 39% respectively. Twenty-three calls were dialed to produce an eligible contact: nonresponse was substantial due to the automated calling system and dialing of many unassigned or non-working numbers. Younger, urban, better educated, and male respondents were overrepresented in the sample. Conclusions The innovative mobile phone data collection methodology yielded a large sample in a relatively short period. Response rates were comparable to other surveys, although substantial coverage bias resulted from fewer women, rural, and older residents completing the mobile phone survey in comparison to household surveys.

  20. Survey research with a random digit dial national mobile phone sample in Ghana: Methods and sample quality.

    Science.gov (United States)

    L'Engle, Kelly; Sefa, Eunice; Adimazoya, Edward Akolgo; Yartey, Emmanuel; Lenzi, Rachel; Tarpo, Cindy; Heward-Mills, Nii Lante; Lew, Katherine; Ampeh, Yvonne

    2018-01-01

    Generating a nationally representative sample in low and middle income countries typically requires resource-intensive household level sampling with door-to-door data collection. High mobile phone penetration rates in developing countries provide new opportunities for alternative sampling and data collection methods, but there is limited information about response rates and sample biases in coverage and nonresponse using these methods. We utilized data from an interactive voice response, random-digit dial, national mobile phone survey in Ghana to calculate standardized response rates and assess representativeness of the obtained sample. The survey methodology was piloted in two rounds of data collection. The final survey included 18 demographic, media exposure, and health behavior questions. Call outcomes and response rates were calculated according to the American Association of Public Opinion Research guidelines. Sample characteristics, productivity, and costs per interview were calculated. Representativeness was assessed by comparing data to the Ghana Demographic and Health Survey and the National Population and Housing Census. The survey was fielded during a 27-day period in February-March 2017. There were 9,469 completed interviews and 3,547 partial interviews. Response, cooperation, refusal, and contact rates were 31%, 81%, 7%, and 39% respectively. Twenty-three calls were dialed to produce an eligible contact: nonresponse was substantial due to the automated calling system and dialing of many unassigned or non-working numbers. Younger, urban, better educated, and male respondents were overrepresented in the sample. The innovative mobile phone data collection methodology yielded a large sample in a relatively short period. Response rates were comparable to other surveys, although substantial coverage bias resulted from fewer women, rural, and older residents completing the mobile phone survey in comparison to household surveys. Random digit dialing of mobile

  1. Survey research with a random digit dial national mobile phone sample in Ghana: Methods and sample quality.

    Directory of Open Access Journals (Sweden)

    Kelly L'Engle

    Full Text Available Generating a nationally representative sample in low and middle income countries typically requires resource-intensive household level sampling with door-to-door data collection. High mobile phone penetration rates in developing countries provide new opportunities for alternative sampling and data collection methods, but there is limited information about response rates and sample biases in coverage and nonresponse using these methods. We utilized data from an interactive voice response, random-digit dial, national mobile phone survey in Ghana to calculate standardized response rates and assess representativeness of the obtained sample. The survey methodology was piloted in two rounds of data collection. The final survey included 18 demographic, media exposure, and health behavior questions. Call outcomes and response rates were calculated according to the American Association of Public Opinion Research guidelines. Sample characteristics, productivity, and costs per interview were calculated. Representativeness was assessed by comparing data to the Ghana Demographic and Health Survey and the National Population and Housing Census. The survey was fielded during a 27-day period in February-March 2017. There were 9,469 completed interviews and 3,547 partial interviews. Response, cooperation, refusal, and contact rates were 31%, 81%, 7%, and 39% respectively. Twenty-three calls were dialed to produce an eligible contact: nonresponse was substantial due to the automated calling system and dialing of many unassigned or non-working numbers. Younger, urban, better educated, and male respondents were overrepresented in the sample. The innovative mobile phone data collection methodology yielded a large sample in a relatively short period. Response rates were comparable to other surveys, although substantial coverage bias resulted from fewer women, rural, and older residents completing the mobile phone survey in comparison to household surveys. Random digit
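    The response, cooperation, refusal and contact rates quoted in the Ghana survey records above follow AAPOR definitions. A simplified sketch (the AAPOR standard distinguishes several numbered variants of each rate; `e`, the assumed eligibility rate among unknown-eligibility numbers, and the counts below are illustrative assumptions, not the survey's dispositions):

```python
def outcome_rates(complete, partial, refusal, noncontact, unknown, e=1.0):
    """Simplified AAPOR-style outcome rates for a phone survey.
    `unknown` counts cases of unknown eligibility, discounted by `e`."""
    eligible = complete + partial + refusal + noncontact + e * unknown
    response = complete / eligible                            # cf. AAPOR RR1
    contact = (complete + partial + refusal) / eligible       # cf. AAPOR CON1
    cooperation = complete / (complete + partial + refusal)   # cf. AAPOR COOP1
    refusal_rate = refusal / eligible                         # cf. AAPOR REF1
    return response, contact, cooperation, refusal_rate

# Illustrative counts only:
print(outcome_rates(310, 50, 70, 200, 370))
```

    Reporting which numbered AAPOR variant was used matters, because the treatment of partial interviews and unknown-eligibility numbers can shift the rates by several points.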

  2. Prevalence and awareness of diabetes in Guinea: findings from a ...

    African Journals Online (AJOL)

    Methods: A population-based cross-sectional survey was conducted on 1 100 adults (46.6% women) ... A multi-stage cluster sample design was applied to generate ... Organization's STEPwise approach to surveillance (STEPS) tools ... income-generating activities. ... cluster random sampling technique was used to recruit.

  3. THE CLUSTERING OF ALFALFA GALAXIES: DEPENDENCE ON H I MASS, RELATIONSHIP WITH OPTICAL SAMPLES, AND CLUES OF HOST HALO PROPERTIES

    Energy Technology Data Exchange (ETDEWEB)

    Papastergis, Emmanouil; Giovanelli, Riccardo; Haynes, Martha P.; Jones, Michael G. [Center for Radiophysics and Space Research, Space Sciences Building, Cornell University, Ithaca, NY 14853 (United States); Rodríguez-Puebla, Aldo, E-mail: papastergis@astro.cornell.edu, E-mail: riccardo@astro.cornell.edu, E-mail: haynes@astro.cornell.edu, E-mail: jonesmg@astro.cornell.edu, E-mail: apuebla@astro.unam.mx [Instituto de Astronomía, Universidad Nacional Autónoma de México, A. P. 70-264, 04510 México, D.F. (Mexico)

    2013-10-10

    We use a sample of ≈6000 galaxies detected by the Arecibo Legacy Fast ALFA (ALFALFA) 21 cm survey to measure the clustering properties of H I-selected galaxies. We find no convincing evidence for a dependence of clustering on galactic atomic hydrogen (H I) mass, over the range M_HI ≈ 10^8.5–10^10.5 M_☉. We show that previously reported results of weaker clustering for low H I mass galaxies are probably due to finite-volume effects. In addition, we compare the clustering of ALFALFA galaxies with optically selected samples drawn from the Sloan Digital Sky Survey (SDSS). We find that H I-selected galaxies cluster more weakly than even relatively optically faint galaxies, when no color selection is applied. Conversely, when SDSS galaxies are split based on their color, we find that the correlation function of blue optical galaxies is practically indistinguishable from that of H I-selected galaxies. At the same time, SDSS galaxies with red colors are found to cluster significantly more than H I-selected galaxies, a fact that is evident in both the projected as well as the full two-dimensional correlation function. A cross-correlation analysis further reveals that gas-rich galaxies 'avoid' being located within ≈3 Mpc of optical galaxies with red colors. Next, we consider the clustering properties of halo samples selected from the Bolshoi ΛCDM simulation. A comparison with the clustering of ALFALFA galaxies suggests that galactic H I mass is not tightly related to host halo mass and that a sizable fraction of subhalos do not host H I galaxies. Lastly, we find that we can recover fairly well the correlation function of H I galaxies by just excluding halos with low spin parameter. This finding lends support to the hypothesis that halo spin plays a key role in determining the gas content of galaxies.

  4. Implementation of client versus care-provider strategies to improve external cephalic version rates: a cluster randomized controlled trial.

    Science.gov (United States)

    Vlemmix, Floortje; Rosman, Ageeth N; Rijnders, Marlies E; Beuckens, Antje; Opmeer, Brent C; Mol, Ben W J; Kok, Marjolein; Fleuren, Margot A H

    2015-05-01

    To determine the effectiveness of a client or care-provider strategy to improve the implementation of external cephalic version. Cluster randomized controlled trial. Twenty-five clusters; hospitals and their referring midwifery practices randomly selected in the Netherlands. Singleton breech presentation from 32 weeks of gestation onwards. We randomized clusters to a client strategy (written information leaflets and decision aid), a care-provider strategy (1-day counseling course focused on knowledge and counseling skills), a combined client and care-provider strategy and care-as-usual strategy. We performed an intention-to-treat analysis. Rate of external cephalic version in various strategies. Secondary outcomes were the percentage of women counseled and opting for a version attempt. The overall implementation rate of external cephalic version was 72% (1169 of 1613 eligible clients) with a range between clusters of 8-95%. Neither the client strategy (OR 0.8, 95% CI 0.4-1.5) nor the care-provider strategy (OR 1.2, 95% CI 0.6-2.3) showed significant improvements. Results were comparable when we limited the analysis to those women who were actually offered intervention (OR 0.6, 95% CI 0.3-1.4 and OR 2.0, 95% CI 0.7-4.5). Neither a client nor a care-provider strategy improved the external cephalic version implementation rate for breech presentation, neither with regard to the number of version attempts offered nor the number of women accepting the procedure. © 2015 Nordic Federation of Societies of Obstetrics and Gynecology.

  5. Adaptive cluster sampling: An efficient method for assessing inconspicuous species

    Science.gov (United States)

    Andrea M. Silletti; Joan Walker

    2003-01-01

    Restorationists typically evaluate the success of a project by estimating the population sizes of species that have been planted or seeded. Because total census is rarely feasible, they must rely on sampling methods for population estimates. However, traditional random sampling designs may be inefficient for species that, for one reason or another, are challenging to...

  6. Headache cessation by an educational intervention in grammar schools: a cluster randomized trial.

    Science.gov (United States)

    Albers, L; Heinen, F; Landgraf, M; Straube, A; Blum, B; Filippopulos, F; Lehmann, S; Mansmann, U; Berger, U; Akboga, Y; von Kries, R

    2015-02-01

    Headache is a common health problem in adolescents. There are a number of risk factors for headache in adolescents that are amenable to intervention. The aim of the study was to assess the effectiveness of a low-level headache prevention programme in the classroom setting to prevent these risk factors. In all, 1674 students in 8th-10th grade at 12 grammar schools in greater Munich, Germany, were cluster randomized into intervention and control groups. A standardized 60-min prevention lesson focusing on preventable risk factors for headache (physical inactivity, coffee consumption, alcohol consumption and smoking) and providing instructions on stress management and neck and shoulder muscle relaxation exercises was given in a classroom setting. Seven months later, students were reassessed. The main outcome parameter was headache cessation. Logistic regression models with random effects for cluster and adjustment for baseline risk factors were calculated. Nine hundred students (intervention group N = 450, control group N = 450) with headache at baseline and complete data for headache and confounders were included in the analysis. Headache cessation was observed in 9.78% of the control group compared with 16.22% in the intervention group (number needed to treat = 16). Accounting for cluster effects and confounders, the odds of headache cessation in the intervention group were 1.77 times (95% confidence interval = [1.08; 2.90]) those in the control group. The effect was most pronounced in adolescents with tension-type headache: odds ratio = 2.11 (95% confidence interval = [1.15; 3.80]). Our study demonstrates the effectiveness of a one-time, classroom-based headache prevention programme. © 2014 EAN.

  7. Genesis of cluster associations of enterprises

    Directory of Open Access Journals (Sweden)

    Pulina Tetyana V.

    2013-03-01

    Full Text Available The goal of the article is to study the genesis of cluster associations of enterprises. It traces the genesis of cluster definitions, and shows and analyses the components that define the “cluster” concept. Researchers from many countries have offered a significant number of definitions of the term “cluster”, particularly in economics, but no single generally accepted definition exists to date, a consequence of the significant diversity of cluster structures. The article conducts a comparative analysis of classifications of cluster associations of enterprises and identifies advantages and shortcomings of the cluster approach, both from the position of an enterprise and from the position of regional economic administration. It marks out specific features of the life cycle of cluster associations of enterprises, which consists of a preparatory stage and a stage of commercialisation. The majority of studies consider only the preparatory stage; the stage of commercialisation, which comprises entering the market under a common brand, growth, maturity and crisis, is practically not considered. Taking into account that the main result of cluster activity is the synergetic effect of mutually beneficial co-operation, and that activity results help ensure the competitiveness of cluster enterprises and of regional and national economies, the author offers her own definition of a cluster.

  8. A Gas-Spring-Loaded X-Y-Z Stage System for X-ray Microdiffraction Sample Manipulation

    International Nuclear Information System (INIS)

    Shu Deming; Cai Zhonghou; Lai, Barry

    2007-01-01

    We have designed and constructed a gas-spring-loaded x-y-z stage system for x-ray microdiffraction sample manipulation at the Advanced Photon Source XOR 2-ID-D station. The stage system includes three DC-motor-driven linear stages and a gas-spring-based heavy preloading structure, which provides anti-gravity forces to ensure that the stage system maintains high positioning performance under variable goniometer orientation. Microdiffraction experiments with this new stage system showed significantly improved sample manipulation performance.

  9. Effect of an educational toolkit on quality of care: a pragmatic cluster randomized trial.

    Science.gov (United States)

    Shah, Baiju R; Bhattacharyya, Onil; Yu, Catherine H Y; Mamdani, Muhammad M; Parsons, Janet A; Straus, Sharon E; Zwarenstein, Merrick

    2014-02-01

    Printed educational materials for clinician education are one of the most commonly used approaches for quality improvement. The objective of this pragmatic cluster randomized trial was to evaluate the effectiveness of an educational toolkit focusing on cardiovascular disease screening and risk reduction in people with diabetes. All 933,789 people aged ≥40 years with diagnosed diabetes in Ontario, Canada were studied using population-level administrative databases, with additional clinical outcome data collected from a random sample of 1,592 high risk patients. Family practices were randomly assigned to receive the educational toolkit in June 2009 (intervention group) or May 2010 (control group). The primary outcome in the administrative data study, death or non-fatal myocardial infarction, occurred in 11,736 (2.5%) patients in the intervention group and 11,536 (2.5%) in the control group (p = 0.77). The primary outcome in the clinical data study, use of a statin, occurred in 700 (88.1%) patients in the intervention group and 725 (90.1%) in the control group (p = 0.26). Pre-specified secondary outcomes, including other clinical events, processes of care, and measures of risk factor control, were also not improved by the intervention. A limitation is the high baseline rate of statin prescribing in this population. The educational toolkit did not improve quality of care or cardiovascular outcomes in a population with diabetes. Despite being relatively easy and inexpensive to implement, printed educational materials were not effective. The study highlights the need for a rigorous and scientifically based approach to the development, dissemination, and evaluation of quality improvement interventions. http://www.ClinicalTrials.gov NCT01411865 and NCT01026688.

  10. Evaluation of a modified two-stage inferior alveolar nerve block technique: A preliminary investigation

    Directory of Open Access Journals (Sweden)

    Ashwin Rao

    2017-01-01

    Full Text Available Introduction: The two-stage technique of inferior alveolar nerve block (IANB) administration does not address the pain associated with “needle insertion” and “local anesthetic solution deposition” in the “first stage” of the injection. This study evaluated the reaction of children to “needle insertion” and “local anesthetic solution deposition” during the “first stage” of a “modified two-stage technique” and compared it with the “first phase” of the IANB administered with the standard one-stage technique. Materials and Methods: This was a parallel, single-blinded comparative study. A total of 34 children (between 6 and 10 years of age) were randomly divided into two groups to receive an IANB either through the modified two-stage technique (MTST) (Group A; 15 children) or the standard one-stage technique (SOST) (Group B; 19 children). The evaluation was done using the Face, Legs, Activity, Cry, Consolability (FLACC) scale, an objective scale based on the expressions of the child. The obtained data were analyzed using Fisher's exact test with P < 0.05 as the level of significance. Results: 73.7% of children in Group B indicated moderate pain during the “first phase” of SOST, whereas no children did so in the “first stage” of Group A. Group A had 33.3% of children who scored “0”, indicating relaxed/comfortable children, compared to 0% in Group B. In Group A, 66.7% of children scored between 1 and 3, indicating mild discomfort, compared to 26.3% in Group B. The difference in the scores between the two groups in each category (relaxed/comfortable, mild discomfort, moderate pain) was highly significant (P < 0.001). Conclusion: The reaction of children in Group A during “needle insertion” and “local anesthetic solution deposition” in the “first stage” of MTST was significantly lower than that of Group B during the “first phase” of the SOST.

  11. The clustering evolution of distant red galaxies in the GOODS-MUSIC sample

    Science.gov (United States)

    Grazian, A.; Fontana, A.; Moscardini, L.; Salimbeni, S.; Menci, N.; Giallongo, E.; de Santis, C.; Gallozzi, S.; Nonino, M.; Cristiani, S.; Vanzella, E.

    2006-07-01

    Aims. We study the clustering properties of Distant Red Galaxies (DRGs) to test whether they are the progenitors of local massive galaxies. Methods. We use the GOODS-MUSIC sample, a catalog of ~3000 Ks-selected galaxies based on VLT and HST observations of the GOODS-South field, with extended multi-wavelength coverage (from 0.3 to 8 μm) and accurate estimates of the photometric redshifts, to select 179 DRGs with J-Ks ≥ 1.3 in an area of 135 sq. arcmin. Results. We first show that the J-Ks ≥ 1.3 criterion selects a rather heterogeneous sample of galaxies, going from the targeted high-redshift luminous evolved systems, to a significant fraction of lower redshift (1mass, like groups or small galaxy clusters. Low-z DRGs, on the other hand, will likely evolve into slightly less massive field galaxies.

  12. BioCluster: Tool for Identification and Clustering of Enterobacteriaceae Based on Biochemical Data

    Directory of Open Access Journals (Sweden)

    Ahmed Abdullah

    2015-06-01

    Full Text Available Presumptive identification of different Enterobacteriaceae species is routinely achieved based on biochemical properties. Traditional practice includes manual comparison of each biochemical property of the unknown sample with known reference samples and inference of its identity based on the maximum similarity pattern with the known samples. This process is labor-intensive, time-consuming, error-prone, and subjective. Therefore, automation of sorting and similarity calculation would be advantageous. Here we present a MATLAB-based graphical user interface (GUI) tool named BioCluster. This tool was designed for automated clustering and identification of Enterobacteriaceae based on biochemical test results. In this tool, we used two types of algorithms, i.e., traditional hierarchical clustering (HC) and the Improved Hierarchical Clustering (IHC), a modified algorithm that was developed specifically for the clustering and identification of Enterobacteriaceae species. IHC takes into account the variability in the results of 1–47 biochemical tests within the Enterobacteriaceae family. The tool also provides different options to optimize the clustering in a user-friendly way. Using computer-generated synthetic data and some real data, we have demonstrated that BioCluster has high accuracy in clustering and identifying enterobacterial species based on biochemical test data. The tool can be freely downloaded at http://microbialgen.du.ac.bd/biocluster/.
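The manual comparison step that BioCluster automates can be sketched in a few lines: represent each isolate as a binary vector of biochemical test results and group isolates by profile similarity. The sketch below uses Jaccard distance with a greedy single-linkage-style grouping in plain NumPy; the isolate names, test vectors, and distance cutoff are all illustrative, and this is generic hierarchical-style clustering, not the paper's IHC algorithm.

```python
import numpy as np

# Hypothetical +/- results of 8 biochemical tests for four isolates.
profiles = {
    "isolate_A": [1, 1, 0, 0, 1, 0, 1, 0],
    "isolate_B": [1, 1, 0, 0, 1, 0, 1, 1],
    "isolate_C": [0, 0, 1, 1, 0, 1, 0, 1],
    "isolate_D": [0, 0, 1, 1, 0, 1, 0, 0],
}
names = list(profiles)
X = np.array([profiles[n] for n in names], dtype=bool)

def jaccard(a, b):
    """Jaccard distance between two presence/absence vectors."""
    union = np.logical_or(a, b).sum()
    return 1.0 - np.logical_and(a, b).sum() / union if union else 0.0

# Greedy grouping: an isolate joins the first existing cluster whose member
# lies within the cutoff; otherwise it starts a new cluster.
cutoff = 0.5
labels = {}
for i, name in enumerate(names):
    for member in labels:
        if jaccard(X[i], X[names.index(member)]) < cutoff:
            labels[name] = labels[member]
            break
    else:
        labels[name] = len(set(labels.values())) + 1

print(labels)  # A and B share one cluster; C and D share another
```

A production tool would build a full dendrogram (e.g. average linkage) and handle ambiguous test results, which is exactly the variability the paper's IHC variant is designed to absorb.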

  13. A combined community- and facility-based approach to improve pregnancy outcomes in low-resource settings: a Global Network cluster randomized trial.

    Science.gov (United States)

    Pasha, Omrana; McClure, Elizabeth M; Wright, Linda L; Saleem, Sarah; Goudar, Shivaprasad S; Chomba, Elwyn; Patel, Archana; Esamai, Fabian; Garces, Ana; Althabe, Fernando; Kodkany, Bhala; Mabeya, Hillary; Manasyan, Albert; Carlo, Waldemar A; Derman, Richard J; Hibberd, Patricia L; Liechty, Edward K; Krebs, Nancy; Hambidge, K Michael; Buekens, Pierre; Moore, Janet; Jobe, Alan H; Koso-Thomas, Marion; Wallace, Dennis D; Stalls, Suzanne; Goldenberg, Robert L

    2013-10-03

    Fetal and neonatal mortality rates in low-income countries are at least 10-fold greater than in high-income countries. These differences have been related to poor access to and poor quality of obstetric and neonatal care. This trial tested the hypothesis that teams of health care providers, administrators and local residents can address the problem of limited access to quality obstetric and neonatal care and lead to a reduction in perinatal mortality in intervention compared to control locations. In seven geographic areas in five low-income and one middle-income country, most with high perinatal mortality rates and substantial numbers of home deliveries, we performed a cluster randomized non-masked trial of a package of interventions that included community mobilization focusing on birth planning and hospital transport, community birth attendant training in problem recognition, and facility staff training in the management of obstetric and neonatal emergencies. The primary outcome was perinatal mortality at ≥28 weeks gestation or birth weight ≥1000 g. Despite extensive effort in all sites in each of the three intervention areas, no differences emerged in the primary or any secondary outcome between the intervention and control clusters. In both groups, the mean perinatal mortality was 40.1/1,000 births (P = 0.9996). Neither were there differences between the two groups in outcomes in the last six months of the project, in the year following intervention cessation, nor in the clusters that best implemented the intervention. This cluster randomized comprehensive, large-scale, multi-sector intervention did not result in detectable impact on the proposed outcomes. While this does not negate the importance of these interventions, we expect that achieving improvement in pregnancy outcomes in these settings will require substantially more obstetric and neonatal care infrastructure than was available at the sites during this trial, and without them provider training

  14. The correlation functions for the clustering of galaxies and Abell clusters

    International Nuclear Information System (INIS)

    Jones, B.J.T.; Jones, J.E.; Copenhagen Univ.

    1985-01-01

    The difference in amplitudes between the galaxy-galaxy correlation function and the correlation function between Abell clusters is a consequence of two facts. Firstly, most Abell clusters with z<0.08 lie in a relatively small volume of the sampled space, and secondly, the fraction of galaxies lying in Abell clusters differs considerably inside and outside of this volume. (The Abell clusters are confined to a smaller volume of space than are the galaxies.) We discuss the implications of this interpretation of the clustering correlation functions and present a simple model showing how such a situation may arise quite naturally in standard theories for galaxy formation. (orig.)

  15. Internal Cluster Validation on Earthquake Data in the Province of Bengkulu

    Science.gov (United States)

    Rini, D. S.; Novianti, P.; Fransiska, H.

    2018-04-01

    K-means is an algorithm for clustering n objects into k partitions, where k < n, based on their attributes. A deficiency of the algorithm is that the k initial points are chosen randomly before it is executed, so the resulting clustering can differ from run to run; if the random initialization is poor, the clustering is suboptimal. Cluster validation is a technique to determine the optimum number of clusters without prior information about the data. There are two types of cluster validation: internal cluster validation and external cluster validation. This study aims to examine and apply several internal cluster validation indices, including the Calinski-Harabasz (CH) index, Silhouette (S) index, Davies-Bouldin (DB) index, Dunn (D) index, and S-Dbw index, on earthquake data in the Bengkulu Province. The optimum cluster counts based on internal cluster validation are k = 2 for the CH, S, and S-Dbw indices, k = 6 for the DB index, and k = 15 for the D index. The optimum clustering (k = 6) based on the DB index gives good results for clustering earthquakes in the Bengkulu Province.
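The select-k-by-index procedure described above can be sketched end-to-end: run k-means for a range of k and keep the k that maximizes a validity index. The sketch below is a minimal NumPy implementation using a Calinski-Harabasz-style ratio on synthetic 2-D blobs standing in for the earthquake features; it is not the study's code, and restart counts and data are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
# Three well-separated blobs, so the index should peak at k = 3.
X = np.vstack([rng.normal(c, 0.3, size=(40, 2)) for c in [(0, 0), (4, 0), (2, 4)]])

def kmeans(X, k, iters=30, restarts=5):
    """Plain Lloyd's algorithm with random restarts; returns labels and centers."""
    best = None
    for _ in range(restarts):
        centers = X[rng.choice(len(X), k, replace=False)]
        for _ in range(iters):
            lab = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
            centers = np.array([X[lab == j].mean(0) if np.any(lab == j) else centers[j]
                                for j in range(k)])
        inertia = sum(((X[lab == j] - centers[j]) ** 2).sum() for j in range(k))
        if best is None or inertia < best[0]:
            best = (inertia, lab, centers)
    return best[1], best[2]

def calinski_harabasz(X, lab, centers):
    """Ratio of between-cluster to within-cluster dispersion (higher is better)."""
    k, n, mean = len(centers), len(X), X.mean(0)
    between = sum((lab == j).sum() * ((centers[j] - mean) ** 2).sum() for j in range(k))
    within = sum(((X[lab == j] - centers[j]) ** 2).sum() for j in range(k))
    return (between / (k - 1)) / (within / (n - k))

scores = {k: calinski_harabasz(X, *kmeans(X, k)) for k in range(2, 7)}
best_k = max(scores, key=scores.get)
print(best_k)  # 3 for these well-separated blobs
```

The other indices in the abstract (Silhouette, Davies-Bouldin, Dunn, S-Dbw) plug into the same loop in place of `calinski_harabasz`; as the study's divergent results (k = 2 vs. 6 vs. 15) show, different indices can favor different k on the same data.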

  16. Preventing knee injuries in adolescent female football players - design of a cluster randomized controlled trial [NCT00894595].

    Science.gov (United States)

    Hägglund, Martin; Waldén, Markus; Atroshi, Isam

    2009-06-23

    Knee injuries in football are common regardless of age, gender or playing level, but adolescent females seem to have the highest risk. The consequences after severe knee injury, for example anterior cruciate ligament (ACL) injury, are well-known, but less is known about knee injury prevention. We have designed a cluster randomized controlled trial (RCT) to evaluate the effect of a warm-up program aimed at preventing acute knee injury in adolescent female football. In this cluster randomized trial 516 teams (309 clusters) in eight regional football districts in Sweden with female players aged 13-17 years were randomized into an intervention group (260 teams) or a control group (256 teams). The teams in the intervention group were instructed to do a structured warm-up program at two training sessions per week throughout the 2009 competitive season (April to October) and those in the control group were informed to train and play as usual. Sixty-eight sports physical therapists are assigned to the clubs to assist both groups in data collection and to examine the players' acute knee injuries during the study period. Three different forms are used in the trial: (1) baseline player data form collected at the start of the trial, (2) computer-based registration form collected every month, on which one of the coaches/team leaders documents individual player exposure, and (3) injury report form on which the study therapists report acute knee injuries resulting in time loss from training or match play. The primary outcome is the incidence of ACL injury and the secondary outcomes are the incidence of any acute knee injury (except contusion) and incidence of severe knee injury (defined as injury resulting in absence of more than 4 weeks). Outcome measures are assessed after the end of the 2009 season. Prevention of knee injury is beneficial for players, clubs, insurance companies, and society. If the warm-up program is proven to be effective in reducing the incidence of knee injuries...

  17. Comparison of two-staged ORIF and limited internal fixation with external fixator for closed tibial plafond fractures.

    Science.gov (United States)

    Wang, Cheng; Li, Ying; Huang, Lei; Wang, Manyi

    2010-10-01

    To compare the results of two-staged open reduction and internal fixation (ORIF) and limited internal fixation with external fixator (LIFEF) for closed tibial plafond fractures. From January 2005 to June 2007, 56 patients with closed type B3 or C Pilon fractures were randomly allocated into groups I and II. Two-staged ORIF was performed in group I and LIFEF in group II. The outcome measures included bone union, nonunion, malunion, pin-tract infection, wound infection, osteomyelitis, ankle joint function, etc. These postoperative data were analyzed with the Statistical Package for Social Sciences (SPSS) 13.0. The incidence of superficial soft tissue infection (wound infection or pin-tract infection) in group I was lower than that in group II, while the groups showed no statistically significant differences in delayed union or arthritis symptoms. Both groups achieved similar ankle joint function. Logistic regression analysis indicated that smoking and fracture pattern were the two factors significantly influencing the final outcomes. In the treatment of closed tibial plafond fractures, both two-staged ORIF and LIFEF offer similar results. Patients undergoing LIFEF carry significantly greater radiation exposure and a higher superficial soft tissue infection rate (which usually occurs on the pin tract and does not affect the final outcomes).

  18. Late-stage pharmaceutical R&D and pricing policies under two-stage regulation.

    Science.gov (United States)

    Jobjörnsson, Sebastian; Forster, Martin; Pertile, Paolo; Burman, Carl-Fredrik

    2016-12-01

    We present a model combining the two regulatory stages relevant to the approval of a new health technology: the authorisation of its commercialisation and the insurer's decision about whether to reimburse its cost. We show that the degree of uncertainty concerning the true value of the insurer's maximum willingness to pay for a unit increase in effectiveness has a non-monotonic impact on the optimal price of the innovation, the firm's expected profit and the optimal sample size of the clinical trial. A key result is that there exists a range of values of the uncertainty parameter over which a reduction in uncertainty benefits the firm, the insurer and patients. We consider how different policy parameters may be used as incentive mechanisms, and the incentives to invest in R&D for marginal projects such as those targeting rare diseases. The model is calibrated using data on a new treatment for cystic fibrosis. Copyright © 2016 Elsevier B.V. All rights reserved.

  19. The Atacama Cosmology Telescope: Physical Properties and Purity of a Galaxy Cluster Sample Selected Via the Sunyaev-Zel'Dovich Effect

    Science.gov (United States)

    Menanteau, Felipe; Gonzalez, Jorge; Juin, Jean-Baptiste; Marriage, Tobias; Reese, Erik D.; Acquaviva, Viviana; Aguirre, Paula; Appel, John Willam; Baker, Andrew J.; Barrientos, L. Felipe; et al.

    2010-01-01

    We present optical and X-ray properties for the first confirmed galaxy cluster sample selected by the Sunyaev-Zel'dovich Effect from 148 GHz maps over 455 square degrees of sky made with the Atacama Cosmology Telescope. These maps, coupled with multi-band imaging on 4-meter-class optical telescopes, have yielded a sample of 23 galaxy clusters with redshifts between 0.118 and 1.066. Of these 23 clusters, 10 are newly discovered. The selection of this sample is approximately mass limited and essentially independent of redshift. We provide optical positions, images, redshifts and X-ray fluxes and luminosities for the full sample, and X-ray temperatures of an important subset. The mass limit of the full sample is around 8.0 × 10^14 M_☉, with a number distribution that peaks around a redshift of 0.4. For the 10 highest significance SZE-selected cluster candidates, all of which are optically confirmed, the mass threshold is 1 × 10^15 M_☉ and the redshift range is 0.167 to 1.066. Archival observations from Chandra, XMM-Newton, and ROSAT provide X-ray luminosities and temperatures that are broadly consistent with this mass threshold. Our optical follow-up procedure also allowed us to assess the purity of the ACT cluster sample. Eighty (one hundred) percent of the 148 GHz candidates with signal-to-noise ratios greater than 5.1 (5.7) are confirmed as massive clusters. The reported sample represents one of the largest SZE-selected samples of massive clusters over all redshifts within a cosmologically-significant survey volume, which will enable cosmological studies as well as future studies on the evolution, morphology, and stellar populations in the most massive clusters in the Universe.

  20. Effects of improved sanitation on diarrheal reduction for children under five in Idiofa, DR Congo: a cluster randomized trial.

    Science.gov (United States)

    Cha, Seungman; Lee, JaeEun; Seo, DongSik; Park, Byoung Mann; Mansiangi, Paul; Bernard, Kabore; Mulakub-Yazho, Guy Jerome Nkay; Famasulu, Honore Minka

    2017-09-19

    The lack of safe water and sanitation contributes to the high prevalence of diarrhea in many developing countries. This study describes the design of a cluster-randomized trial in Idiofa, the Democratic Republic of the Congo, seeking evidence of the impact of improved sanitation on diarrhea for children under four. Of the 276 quartiers, 18 quartiers were randomly allocated to the intervention or control arm. Seven hundred and twenty households were sampled and the youngest under-four child in each household was registered for this study. The primary endpoint of the study is diarrheal incidence, prevalence and duration in children under five. Material subsidies will be provided only to the households who complete pit digging plus superstructure and roof construction, regardless of their income level. This study employs a Sanitation Calendar so that the mother of each household can record the diarrheal episodes of her under-four child on a daily basis. The diary enables examination of the effect of the sanitation intervention on diarrhea duration and also resolves the limitation of the small number of clusters in the trial. In addition, the project will be monitored through the 'Sanitation Map', on which all households in the study area, including both the control and intervention arms, are registered. To avoid information bias or courtesy bias, photos will be taken of the latrine during the household visit, and a supervisor will determine well-equipped latrine uptake based on the photos. This reduces the possibility of recall bias and under- or over-estimation of diarrhea, which was the main limitation of previous studies. The study was approved by the Institutional Review Board of the School of Public Health, Kinshasa University (ESP/CE/040/15; April 13, 2015) and registered as an International Standard Randomized Controlled Trial (ISRCTN 10419317) on March 13, 2015.
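The cluster-allocation step described above (18 of 276 quartiers split evenly between arms) can be sketched as follows. This is an illustrative sketch, not the authors' actual allocation procedure; the quartier names and seed are hypothetical, and real trials typically use sealed, audited randomization lists.

```python
import random

rng = random.Random(42)  # fixed seed so the allocation is reproducible

# Hypothetical identifiers for the 276 quartiers.
quartiers = [f"quartier_{i:03d}" for i in range(1, 277)]

# Draw 18 quartiers, then split them evenly between the two arms.
selected = rng.sample(quartiers, 18)  # sample() returns them in random order
arms = {
    "intervention": selected[:9],
    "control": selected[9:],
}
print({arm: len(qs) for arm, qs in arms.items()})  # {'intervention': 9, 'control': 9}
```

Randomizing whole quartiers rather than households is what makes this a cluster design; the small number of clusters (18) is the limitation the diary-based outcome measurement is meant to offset.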