WorldWideScience

Sample records for two-stage cluster sampling

  1. Mixed effect regression analysis for a cluster-based two-stage outcome-auxiliary-dependent sampling design with a continuous outcome.

    Science.gov (United States)

    Xu, Wangli; Zhou, Haibo

    2012-09-01

    Two-stage design is a well-known cost-effective way for conducting biomedical studies when the exposure variable is expensive or difficult to measure. Recent research developments further allow one or both stages of the two-stage design to be outcome dependent, with a continuous outcome variable. This outcome-dependent sampling feature enables further efficiency gain in parameter estimation and overall cost reduction of the study (e.g. Wang, X. and Zhou, H., 2010. Design and inference for cancer biomarker study with an outcome and auxiliary-dependent subsampling. Biometrics 66, 502-511; Zhou, H., Song, R., Wu, Y. and Qin, J., 2011. Statistical inference for a two-stage outcome-dependent sampling design with a continuous outcome. Biometrics 67, 194-202). In this paper, we develop a semiparametric mixed effect regression model for data from a two-stage design where the second-stage data are sampled with an outcome-auxiliary-dependent sampling (OADS) scheme. Our method allows the cluster- or center-effects of the study subjects to be accounted for. We propose an estimated likelihood function to estimate the regression parameters. A simulation study indicates that greater study efficiency gains can be achieved under the proposed two-stage OADS design with center-effects than under alternative sampling schemes. We illustrate the proposed method by analyzing a dataset from the Collaborative Perinatal Project.

  2. Two-stage sampling for acceptance testing

    Energy Technology Data Exchange (ETDEWEB)

    Atwood, C.L.; Bryan, M.F.

    1992-09-01

    Sometimes a regulatory requirement or a quality-assurance procedure sets an allowed maximum on a confidence limit for a mean. If the sample mean of the measurements is below the allowed maximum, but the confidence limit is above it, a very widespread practice is to increase the sample size and recalculate the confidence bound. The confidence level of this two-stage procedure is rarely found correctly, but instead is typically taken to be the nominal confidence level, found as if the final sample size had been specified in advance. In typical settings, the correct nominal α should be between the desired P(Type I error) and half that value. This note gives tables for the correct α to use, some plots of power curves, and an example of correct two-stage sampling.
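
    A quick way to see why the nominal level is wrong is to simulate the naive procedure. The sketch below is a minimal Monte Carlo illustration, assuming normally distributed measurements, a one-sided upper t-based confidence bound, and example sizes n1 = n2 = 10; none of these specifics come from the report.

```python
import numpy as np
from scipy import stats

def naive_two_stage_accept_rate(mu, sigma, limit, n1, n2, alpha,
                                reps=50_000, seed=0):
    """Monte Carlo estimate of P(accept) under the naive two-stage practice:
    if the stage-1 mean is below `limit` but the upper confidence bound is
    not, add n2 observations and recompute the bound as if n1 + n2 had been
    planned in advance."""
    rng = np.random.default_rng(seed)
    accepted = 0
    for _ in range(reps):
        x = rng.normal(mu, sigma, n1)
        ucb = x.mean() + stats.t.ppf(1 - alpha, n1 - 1) * x.std(ddof=1) / np.sqrt(n1)
        if x.mean() <= limit < ucb:
            x = np.concatenate([x, rng.normal(mu, sigma, n2)])
            ucb = (x.mean() + stats.t.ppf(1 - alpha, len(x) - 1)
                   * x.std(ddof=1) / np.sqrt(len(x)))
        accepted += ucb <= limit
    return accepted / reps

# With the true mean sitting exactly at the allowed maximum, the acceptance
# rate is the realized Type I error; it comes out above the nominal alpha.
print(naive_two_stage_accept_rate(mu=1.0, sigma=0.2, limit=1.0,
                                  n1=10, n2=10, alpha=0.05))
```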

  4. A two-stage cluster sampling method using gridded population data, a GIS, and Google Earth™ imagery in a population-based mortality survey in Iraq

    Directory of Open Access Journals (Sweden)

    Galway LP

    2012-04-01

    Abstract. Background: Mortality estimates can measure and monitor the impacts of conflict on a population, guide humanitarian efforts, and help to better understand the public health impacts of conflict. Vital statistics registration and surveillance systems are rarely functional in conflict settings, so mortality is often estimated using retrospective population-based surveys. Results: We present a two-stage cluster sampling method for application in population-based mortality surveys. The sampling method utilizes gridded population data and a geographic information system (GIS) to select clusters in the first sampling stage, and Google Earth™ imagery and sampling grids to select households in the second sampling stage. The sampling method was implemented in a household mortality study in Iraq in 2011. Factors affecting feasibility and methodological quality are described. Conclusion: Sampling is a challenge in retrospective population-based mortality studies, and alternatives that improve on the conventional approaches are needed. The sampling strategy presented here was designed to generate a representative sample of the Iraqi population while reducing the potential for bias and considering the context-specific challenges of the study setting. This sampling strategy, or variations on it, is adaptable and should be considered and tested in other conflict settings.
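
    To make the two selection stages concrete, here is a minimal sketch of the logic described above: grid cells are drawn with probability proportional to their gridded population, then households are drawn at random within each selected cell. The cell counts and sample sizes are invented for illustration and are not those of the Iraq survey.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical gridded population: one population count per grid cell.
cell_pop = rng.integers(50, 5000, size=1000)

# Stage 1: draw 30 cluster cells with probability proportional to estimated
# population (the role played by the gridded population data and the GIS).
p = cell_pop / cell_pop.sum()
clusters = rng.choice(cell_pop.size, size=30, replace=False, p=p)

# Stage 2: within each selected cell, draw 10 households at random from a
# sampling grid overlaid on the imagery (here, simple household indices).
households = {int(c): rng.choice(cell_pop[c], size=10, replace=False)
              for c in clusters}
print(list(households.items())[0])
```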

  5. TWO-STAGE CHARACTER CLASSIFICATION: A COMBINED APPROACH OF CLUSTERING AND SUPPORT VECTOR CLASSIFIERS

    NARCIS (Netherlands)

    Vuurpijl, L.; Schomaker, L.

    2000-01-01

    This paper describes a two-stage classification method for (1) classification of isolated characters and (2) verification of the classification result. Character prototypes are generated using hierarchical clustering. For those prototypes known to sometimes produce wrong classification results, a

  6. A two-stage method for microcalcification cluster segmentation in mammography by deformable models

    Energy Technology Data Exchange (ETDEWEB)

    Arikidis, N.; Kazantzi, A.; Skiadopoulos, S.; Karahaliou, A.; Costaridou, L., E-mail: costarid@upatras.gr [Department of Medical Physics, School of Medicine, University of Patras, Patras 26504 (Greece); Vassiou, K. [Department of Anatomy, School of Medicine, University of Thessaly, Larissa 41500 (Greece)

    2015-10-15

    Purpose: Segmentation of microcalcification (MC) clusters in x-ray mammography is a difficult task for radiologists. Accurate segmentation is prerequisite for quantitative image analysis of MC clusters and subsequent feature extraction and classification in computer-aided diagnosis schemes. Methods: In this study, a two-stage semiautomated segmentation method of MC clusters is investigated. The first stage is targeted to accurate and time efficient segmentation of the majority of the particles of a MC cluster, by means of a level set method. The second stage is targeted to shape refinement of selected individual MCs, by means of an active contour model. Both methods are applied in the framework of a rich scale-space representation, provided by the wavelet transform at integer scales. Segmentation reliability of the proposed method in terms of inter- and intraobserver agreement was evaluated in a case sample of 80 MC clusters originating from the digital database for screening mammography, corresponding to 4 morphology types (punctate: 22, fine linear branching: 16, pleomorphic: 18, and amorphous: 24) of MC clusters, assessing radiologists’ segmentations quantitatively by two distance metrics (Hausdorff distance, HDIST_cluster; average of minimum distance, AMINDIST_cluster) and the area overlap measure (AOM_cluster). The effect of the proposed segmentation method on MC cluster characterization accuracy was evaluated in a case sample of 162 pleomorphic MC clusters (72 malignant and 90 benign). Ten MC cluster features, targeted to capture morphologic properties of individual MCs in a cluster (area, major length, perimeter, compactness, and spread), were extracted and a correlation-based feature selection method yielded a feature subset to feed in a support vector machine classifier. Classification performance of the MC cluster features was estimated by means of the area under receiver operating characteristic curve (Az ± Standard Error) utilizing

  7. Two-stage replication of previous genome-wide association studies of AS3MT-CNNM2-NT5C2 gene cluster region in a large schizophrenia case-control sample from Han Chinese population.

    Science.gov (United States)

    Guan, Fanglin; Zhang, Tianxiao; Li, Lu; Fu, Dongke; Lin, Huali; Chen, Gang; Chen, Teng

    2016-10-01

    Schizophrenia is a devastating psychiatric condition with high heritability. Replicating the specific genetic variants that increase susceptibility to schizophrenia in different populations is critical to better understand schizophrenia. CNNM2 and NT5C2 are genes recently identified as susceptibility genes for schizophrenia in Europeans, but the exact mechanism by which these genes confer risk for schizophrenia remains unknown. In this study, we examined the potential for genetic susceptibility to schizophrenia of a three-gene cluster region, AS3MT-CNNM2-NT5C2. We implemented a two-stage strategy to conduct association analyses of the targeted regions with schizophrenia. A total of 8218 individuals were recruited, and 45 pre-selected single nucleotide polymorphisms (SNPs) were genotyped. Both single-marker and haplotype-based analyses were conducted in addition to imputation analysis to increase the coverage of our genetic markers. Two SNPs, rs11191419 (OR=1.24, P=7.28×10⁻⁵) and rs11191514 (OR=1.24, P=0.0003), with significant independent effects were identified. These results were supported by the data from both the discovery and validation stages. Further haplotype and imputation analyses also validated these results, and bioinformatics analyses indicated that CALHM1, which is located approximately 630 kb away from CNNM2, might be a susceptibility gene for schizophrenia. Our results provide further support that AS3MT, CNNM2 and CALHM1 are involved in the etiology and pathogenesis of schizophrenia, suggesting these genes are potential targets of interest for the improvement of disease management and the development of novel pharmacological strategies.

  8. Two-Stage Sampling Procedures for Comparing Means When Population Distributions Are Non-Normal.

    Science.gov (United States)

    Luh, Wei-Ming; Olejnik, Stephen

    Two-stage sampling procedures for comparing two population means when variances are heterogeneous have been developed by D. G. Chapman (1950) and B. K. Ghosh (1975). Both procedures assume sampling from populations that are normally distributed. The present study reports on the effect that sampling from non-normal distributions has on Type I error…

  9. Two Stage Fully Differential Sample and Hold Circuit Using 0.18 µm Technology

    Directory of Open Access Journals (Sweden)

    Dharmendra Dongardiye

    2014-05-01

    This paper presents a well-established fully differential sample-and-hold circuit implemented in 180 nm CMOS technology. In this two-stage method, the first stage gives a very high gain and the second stage gives a large voltage swing. The design uses an improved fully differential two-stage operational amplifier with 76.7 dB gain; the opamp provides 149 MHz unity-gain bandwidth, a 78 degree phase margin, and a differential peak-to-peak output swing of more than 2.4 V. The sample-and-hold circuit meets the SNR specifications.

  10. A two-stage method to determine optimal product sampling considering dynamic potential market.

    Science.gov (United States)

    Hu, Zhineng; Lu, Wei; Han, Bing

    2015-01-01

    This paper develops an optimization model for the diffusion effects of free samples under dynamic changes in the potential market, based on the characteristics of an independent product, and presents a two-stage method to determine the sampling level. Impact analysis of the key factors shows that an increase in either the external or the internal coefficient has a negative influence on the sampling level. The rate of change of the potential market has no significant influence on the sampling level, whereas repeat purchasing has a positive one. Using logistic analysis and regression analysis, a global sensitivity analysis covers the interaction of all parameters, which yields a two-stage method to estimate the impact of the relevant parameters when the parameters are inaccurate and to construct a 95% confidence interval for the predicted sampling level. Finally, the paper provides the operational steps to improve the accuracy of the parameter estimation and an innovative way to estimate the sampling level.

  11. Two-stage scheduling algorithm based on priority table for clusters with inaccurate system parameters

    Institute of Scientific and Technical Information of China (English)

    LIU An-feng; CHEN Zhi-gang; XIONG Ce

    2006-01-01

    A new two-stage soft real-time scheduling algorithm based on a priority table was proposed for task dispatch and selection in cluster systems with inaccurate parameters. The inaccurate characteristics of the system were modeled through probability analysis. By taking into account multiple important system parameters, including task deadline, priority, session integrity and memory access locality, the algorithm is expected to achieve high quality of service. Extensive simulation results collected under different load conditions demonstrate that the algorithm can not only effectively overcome the inaccuracy of the system state, but also optimize the task rejection ratio, value realized ratio, differentiated service guaranteed ratio, and session integrity ensured ratio, with average improvements of 3.5%, 5.8%, 7.6% and 5.5%, respectively. Compared with many existing schemes that cannot deal with inaccurate system parameters, the proposed scheme can achieve the best system performance by carefully adjusting the scheduling probability. The algorithm is expected to be promising in systems with soft real-time scheduling requirements such as e-commerce applications.

  12. Precision and cost considerations for two-stage sampling in a panelized forest inventory design.

    Science.gov (United States)

    Westfall, James A; Lister, Andrew J; Scott, Charles T

    2016-01-01

    Due to the relatively high cost of measuring sample plots in forest inventories, considerable attention is given to sampling and plot designs during the forest inventory planning phase. A two-stage design can be efficient from a field work perspective as spatially proximate plots are grouped into work zones. A comparison between subsampling with units of unequal size (SUUS) and a simple random sample (SRS) design in a panelized framework assessed the statistical and economic implications of using the SUUS design for a case study in the Northeastern USA. The sampling errors for estimates of forest land area and biomass were approximately 1.5-2.2 times larger with SUUS prior to completion of the inventory cycle. Considerable sampling error reductions were realized by using the zones within a post-stratified sampling paradigm; however, post-stratification of plots in the SRS design always provided smaller sampling errors in comparison. Cost differences between the two designs indicated the SUUS design could reduce the field work expense by 2-7 %. The results also suggest the SUUS design may provide substantial economic advantage for tropical forest inventories, where remote areas, poor access, and lower wages are typically encountered.

  13. The role of the upper sample size limit in two-stage bioequivalence designs.

    Science.gov (United States)

    Karalis, Vangelis

    2013-11-01

    Two-stage designs (TSDs) are currently recommended by the regulatory authorities for bioequivalence (BE) assessment. The TSDs presented until now rely on an assumed geometric mean ratio (GMR) value of the BE metric in stage I in order to avoid inflation of type I error. In contrast, this work proposes a more realistic TSD design where sample re-estimation relies not only on the variability of stage I, but also on the observed GMR. In these cases, an upper sample size limit (UL) is introduced in order to prevent inflation of type I error. The aim of this study is to unveil the impact of UL on two TSD bioequivalence approaches which are based entirely on the interim results. Monte Carlo simulations were used to investigate several different scenarios of UL levels, within-subject variability, different starting number of subjects, and GMR. The use of UL leads to no inflation of type I error. As UL values increase, the % probability of declaring BE becomes higher. The starting sample size and the variability of the study affect type I error. Increased UL levels result in higher total sample sizes of the TSD which are more pronounced for highly variable drugs.

  14. Synthetic control charts with two-stage sampling for monitoring bivariate processes

    Directory of Open Access Journals (Sweden)

    Antonio F. B. Costa

    2007-04-01

    In this article, we consider the synthetic control chart with two-stage sampling (SyTS chart) to control bivariate processes. During the first stage, one item of the sample is inspected and two correlated quality characteristics (x, y) are measured. If the Hotelling statistic T1² for these individual observations of (x, y) is lower than a specified value UCL1, the sampling is interrupted. Otherwise, the sampling goes on to the second stage, where the remaining items are inspected and the Hotelling statistic T2² for the sample means of (x, y) is computed. When the statistic T2² is larger than a specified value UCL2, the sample is classified as nonconforming. According to the synthetic control chart procedure, the signal is based on the number of conforming samples between two neighboring nonconforming samples. The proposed chart detects process disturbances faster than the bivariate charts with variable sample size and is, from the practical viewpoint, more convenient to administer.
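
    The decision logic for one sampling cycle can be written compactly. The sketch below assumes a known in-control mean vector and covariance matrix and purely illustrative control limits; the synthetic signalling rule on the conforming-run length is reduced to a single comparison, so this is a schematic of the scheme rather than the paper's calibrated chart.

```python
import numpy as np

def syts_nonconforming(sample, mu0, sigma_inv, ucl1, ucl2):
    """Two-stage inspection of one sample: compute T1-squared on the first
    item; only if it exceeds UCL1, inspect the rest and compute T2-squared
    on the sample mean (here taken over all items for simplicity)."""
    d1 = sample[0] - mu0
    t1_sq = d1 @ sigma_inv @ d1
    if t1_sq <= ucl1:
        return False                    # sampling interrupted after stage 1
    dbar = sample.mean(axis=0) - mu0
    t2_sq = len(sample) * (dbar @ sigma_inv @ dbar)
    return t2_sq > ucl2                 # True: sample is nonconforming

def synthetic_signal(conforming_run, L):
    """Synthetic rule: signal when the number of conforming samples between
    two nonconforming ones is at most L (L is a design parameter)."""
    return conforming_run <= L

# Example with illustrative numbers (not the paper's design values).
rng = np.random.default_rng(0)
cov = np.array([[1.0, 0.5], [0.5, 1.0]])
sample = rng.multivariate_normal(np.zeros(2), cov, size=5)
print(syts_nonconforming(sample, np.zeros(2), np.linalg.inv(cov),
                         ucl1=6.0, ucl2=11.0))
```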

  15. Exact alpha-error determination for two-stage sampling strategies to substantiate freedom from disease.

    Science.gov (United States)

    Kopacka, I; Hofrichter, J; Fuchs, K

    2013-05-01

    Sampling strategies to substantiate freedom from disease are important when it comes to the trade of animals and animal products. When considering imperfect tests and finite populations, sample size calculation can, however, be a challenging task. The generalized hypergeometric formula developed by Cameron and Baldock (1998a) offers a framework that can elegantly be extended to multi-stage sampling strategies, which are widely used to account for disease clustering at herd-level. The achieved alpha-error of such surveys, however, typically depends on the realization of the sample and can differ from the pre-calculated value. In this paper, we introduce a new formula to evaluate the exact alpha-error induced by a specific sample. We further give a numerically viable approximation formula and analyze its properties using a data example of Brucella melitensis in the Austrian sheep population.
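
    For intuition, the single-stage version of the calculation can be written directly: the number of infected animals in the sample follows a hypergeometric law, and each sampled infected animal escapes detection with probability 1 - Se. The two-stage (herd-level) extension nests this computation; the numbers below are purely illustrative.

```python
from math import comb

def p_all_negative(N, d, n, se):
    """P(zero test-positives in a sample of n from a population of N that
    contains d infected animals), with sensitivity `se` and perfect
    specificity: sum over y, the number of infected animals in the sample."""
    total = 0.0
    for y in range(min(d, n) + 1):
        p_y = comb(d, y) * comb(N - d, n - y) / comb(N, n)
        total += p_y * (1 - se) ** y      # all y infected escape detection
    return total

# alpha-error of a single-stage survey: the chance that a population holding
# d = 20 infected animals nevertheless passes with no positives.
print(p_all_negative(N=1000, d=20, n=100, se=0.9))
```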

  16. A stratified two-stage sampling design for digital soil mapping in a Mediterranean basin

    Science.gov (United States)

    Blaschek, Michael; Duttmann, Rainer

    2015-04-01

    The quality of environmental modelling results often depends on reliable soil information. In order to obtain soil data in an efficient manner, several sampling strategies are at hand, depending on the level of prior knowledge and the overall objective of the planned survey. This study focuses on the collection of soil samples considering available continuous secondary information in an undulating, 16 km² river catchment near Ussana in southern Sardinia (Italy). A design-based, stratified, two-stage sampling design was applied, aiming at the spatial prediction of soil property values at individual locations. The stratification was based on quantiles of the density functions of two land-surface parameters derived from a digital elevation model: topographic wetness index and potential incoming solar radiation. Combined with four main geological units, the applied procedure led to 30 different classes in the given test site. Up to six polygons of each available class were selected randomly, excluding areas smaller than 1 ha to avoid incorrect location of the points in the field. Further exclusion rules were applied before polygon selection, masking out roads and buildings using a 20 m buffer. The selection procedure was repeated ten times, and the set of polygons with the best geographical spread was chosen. Finally, exact point locations were selected randomly from inside the chosen polygon features. A second selection based on the same stratification and following the same methodology (selecting one polygon instead of six) was made in order to create an appropriate validation set. Supplementary samples were obtained during a second survey focusing on polygons that had either not been considered during the first phase at all or were not adequately represented with respect to feature size. In total, both field campaigns produced an interpolation set of 156 samples and a validation set of 41 points. The selection of sample point locations has been done using

  17. TSCC: Two-Stage Combinatorial Clustering for virtual screening using protein-ligand interactions and physicochemical features

    Science.gov (United States)

    2010-01-01

    Background The increasing numbers of 3D compounds and protein complexes stored in databases contribute greatly to current advances in biotechnology, being employed in several pharmaceutical and industrial applications. However, screening and retrieving appropriate candidates as well as handling false positives presents a challenge for all post-screening analysis methods employed in retrieving therapeutic and industrial targets. Results Using the TSCC method, virtually screened compounds were clustered based on their protein-ligand interactions, followed by structure clustering employing physicochemical features, to retrieve the final compounds. Based on the protein-ligand interaction profile (first stage), docked compounds can be clustered into groups with distinct binding interactions. Structure clustering (second stage) grouped similar compounds obtained from the first stage into clusters of similar structures; the lowest energy compound from each cluster being selected as a final candidate. Conclusion By representing interactions at the atomic-level and including measures of interaction strength, better descriptions of protein-ligand interactions and a more specific analysis of virtual screening was achieved. The two-stage clustering approach enhanced our post-screening analysis resulting in accurate performances in clustering, mining and visualizing compound candidates, thus, improving virtual screening enrichment. PMID:21143810

  18. Evaluation of single and two-stage adaptive sampling designs for estimation of density and abundance of freshwater mussels in a large river

    Science.gov (United States)

    Smith, D.R.; Rogala, J.T.; Gray, B.R.; Zigler, S.J.; Newton, T.J.

    2011-01-01

    Reliable estimates of abundance are needed to assess consequences of proposed habitat restoration and enhancement projects on freshwater mussels in the Upper Mississippi River (UMR). Although there is general guidance on sampling techniques for population assessment of freshwater mussels, the actual performance of sampling designs can depend critically on the population density and spatial distribution at the project site. To evaluate various sampling designs, we simulated sampling of populations, which varied in density and degree of spatial clustering. Because of logistics and costs of large river sampling and spatial clustering of freshwater mussels, we focused on adaptive and non-adaptive versions of single and two-stage sampling. The candidate designs performed similarly in terms of precision (CV) and probability of species detection for fixed sample size. Both CV and species detection were determined largely by density, spatial distribution and sample size. However, designs did differ in the rate that occupied quadrats were encountered. Occupied units had a higher probability of selection using adaptive designs than conventional designs. We used two measures of cost: sample size (i.e. number of quadrats) and distance travelled between the quadrats. Adaptive and two-stage designs tended to reduce distance between sampling units, and thus performed better when distance travelled was considered. Based on the comparisons, we provide general recommendations on the sampling designs for the freshwater mussels in the UMR, and presumably other large rivers.

  19. Could the clinical interpretability of subgroups detected using clustering methods be improved by using a novel two-stage approach?

    DEFF Research Database (Denmark)

    Kent, Peter; Stochkendahl, Mette Jensen; Wulff Christensen, Henrik

    2015-01-01

    … is to use statistical clustering techniques, such as Cluster Analysis or Latent Class Analysis, to detect latent relationships between patient characteristics. Influential patient characteristics can come from diverse domains of health, such as pain, activity limitation, physical impairment, social role participation, psychological factors, biomarkers and imaging. However, such ‘whole person’ research may result in data-driven subgroups that are complex, difficult to interpret and challenging to recognise clinically. This paper describes a novel approach to applying statistical clustering techniques that may improve the clinical interpretability of derived subgroups and reduce sample size requirements. Methods: This approach involves clustering in two sequential stages. The first stage involves clustering within health domains and therefore requires creating as many clustering models as there are health

  20. Sampling strategies for estimating forest cover from remote sensing-based two-stage inventories

    Institute of Scientific and Technical Information of China (English)

    Piermaria Corona; Lorenzo Fattorini; Maria Chiara Pagliarella

    2015-01-01

    Background: Remote sensing-based inventories are essential in estimating forest cover in tropical and subtropical countries, where ground inventories cannot be performed periodically at a large scale owing to high costs and forest inaccessibility (e.g. REDD projects) and are mandatory for constructing historical records that can be used as forest cover baselines. Given the conditions of such inventories, the survey area is partitioned into a grid of imagery segments of pre-fixed size where the proportion of forest cover can be measured within segments using a combination of unsupervised (automated or semi-automated) classification of satellite imagery and manual (i.e. visual on-screen) enhancements. Because visual on-screen operations are time expensive procedures, manual classification can be performed only for a sample of imagery segments selected at a first stage, while forest cover within each selected segment is estimated at a second stage from a sample of pixels selected within the segment. Because forest cover data arising from unsupervised satellite imagery classification may be freely available (e.g. Landsat imagery) over the entire survey area (wall-to-wall data) and are likely to be good proxies of manually classified cover data (sample data), they can be adopted as suitable auxiliary information. Methods: The question is how to choose the sample areas where manual classification is carried out. We have investigated the efficiency of one-per-stratum stratified sampling for selecting segments and pixels, where to carry out manual classification and to determine the efficiency of the difference estimator for exploiting auxiliary information at the estimation level. The performance of this strategy is compared with simple random sampling without replacement. Results: Our results were obtained theoretically from three artificial populations constructed from the Landsat classification (forest/non forest) available at pixel level for a study area located in central Italy
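
    The estimation step that exploits the wall-to-wall auxiliary data is the classical difference estimator. Below is a minimal numeric sketch with invented segment-level values standing in for the unsupervised (auxiliary) and manual (sampled) cover proportions; it is meant only to show why a good proxy shrinks the variance.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical wall-to-wall auxiliary cover proportions x (known for every
# segment) and manual cover proportions y (knowable only by on-screen work).
M = 10_000
x = rng.uniform(0, 1, M)
y = np.clip(x + rng.normal(0, 0.05, M), 0, 1)   # proxy is a good one here

# First stage: a simple random sample of m segments is classified manually.
m = 200
s = rng.choice(M, size=m, replace=False)

# Difference estimator of mean forest cover: the known auxiliary mean plus
# the sample mean of the differences y - x; efficiency grows with corr(x, y).
estimate = x.mean() + (y[s] - x[s]).mean()
print(estimate, y.mean())    # estimate vs. the (here known) true mean
```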

  1. A two-stage Bayesian design with sample size reestimation and subgroup analysis for phase II binary response trials.

    Science.gov (United States)

    Zhong, Wei; Koopmeiners, Joseph S; Carlin, Bradley P

    2013-11-01

    Frequentist sample size determination for binary outcome data in a two-arm clinical trial requires initial guesses of the event probabilities for the two treatments. Misspecification of these event rates may lead to a poor estimate of the necessary sample size. In contrast, the Bayesian approach, which considers the treatment effect to be a random variable having some distribution, may offer a better, more flexible approach. The Bayesian sample size proposed by Whitehead et al. (2008) for exploratory studies on efficacy justifies the acceptable minimum sample size by a "conclusiveness" condition. In this work, we introduce a new two-stage Bayesian design with sample size reestimation at the interim stage. Our design inherits the properties of good interpretation and easy implementation from Whitehead et al. (2008), generalizes their method to a two-sample setting, and uses a fully Bayesian predictive approach to reduce an overly large initial sample size when necessary. Moreover, our design can be extended to allow patient-level covariates via logistic regression, now adjusting sample size within each subgroup based on interim analyses. We illustrate the benefits of our approach with a design in non-Hodgkin lymphoma with a simple binary covariate (patient gender), offering an initial step toward within-trial personalized medicine. Copyright © 2013 Elsevier Inc. All rights reserved.

  2. Two-stage sample-to-answer system based on nucleic acid amplification approach for detection of malaria parasites.

    Science.gov (United States)

    Liu, Qing; Nam, Jeonghun; Kim, Sangho; Lim, Chwee Teck; Park, Mi Kyoung; Shin, Yong

    2016-08-15

    Rapid, early, and accurate diagnosis of malaria is essential for effective disease management and surveillance, and can reduce morbidity and mortality associated with the disease. Although significant advances have been achieved for the diagnosis of malaria, these technologies are still far from ideal, being time consuming, complex and poorly sensitive, as well as requiring separate assays for sample processing and detection. Therefore, the development of a fast and sensitive method that can integrate sample processing with detection of malarial infection is desirable. Here, we report a two-stage sample-to-answer system based on a nucleic acid amplification approach for detection of malaria parasites. It combines the dimethyl adipimidate (DMA)/thin film sample processing (DTS) technique as a first stage and the Mach-Zehnder interferometer-isothermal solid-phase DNA amplification (MZI-IDA) sensing technique as a second stage. The system can extract DNA from malarial parasites using the DTS technique in a closed system, not only reducing sample loss and contamination, but also facilitating multiplexed malarial DNA detection using the fast and accurate MZI-IDA technique. Here, we demonstrated that this system can deliver results within 60 min (including sample processing, amplification and detection) with high sensitivity, making it well suited for the detection of malaria in low-resource settings.

  3. Application of composite estimation in studies of animal population production with two-stage repeated sample designs.

    Science.gov (United States)

    Farver, T B; Holt, D; Lehenbauer, T; Greenley, W M

    1997-05-01

    This paper reports results from two example data sets of a two-stage sampling design in which sampling panels of both farms and of animals within selected farms increases the efficiency of parameter estimation from measurements recorded over time. With such a design, not only are farms replaced from time to time, but the animals subsampled within retained farms are also subject to replacement. Three general categories of parameters estimated for the population (the set of animals belonging to the universe of farms of interest) were (1) the total at each measurement occasion; (2) the difference between means or totals on successive measurement occasions; (3) the total over a sequence of successive measurement periods. Whereas several responses at the farm level were highly correlated over time (ρ1), the corresponding animal responses were less correlated over time (ρ2), leading to only moderate gains in relative efficiency. Intraclass correlation values were too low in most cases to counteract the overall negative impact of ρ2. In general, sizeable gains in relative efficiency were observed for estimating change, confirming a previous result which showed this to be true provided that ρ1 was high (irrespective of ρ2).

  4. Randomization-Based Inference about Latent Variables from Complex Samples: The Case of Two-Stage Sampling

    Science.gov (United States)

    Li, Tiandong

    2012-01-01

    In large-scale assessments, such as the National Assessment of Educational Progress (NAEP), plausible values based on Multiple Imputations (MI) have been used to estimate population characteristics for latent constructs under complex sample designs. Mislevy (1991) derived a closed-form analytic solution for a fixed-effect model in creating…

  5. Description of two-stage processes in reactions of nucleon and cluster knock out by medium-energy protons on the basis of the effective t-matrix

    Energy Technology Data Exchange (ETDEWEB)

    Vdovin, A.I.; Golovin, A.V.; Loshchakov, I.I.

    1986-06-01

    We propose to calculate the amplitude for two-stage processes in reactions of nucleon and cluster knock-out from light nuclei by medium-energy protons as a coherent sum of the amplitudes of the quasielastic pp (or px) interaction and of inelastic proton scattering by the target nucleus with formation of excited states of the intermediate nucleus and their subsequent decay with emission of a nucleon or a particle x. The reaction matrix element is calculated in the t-matrix distorted-wave approximation. All functions entering into the matrix element are given analytically, which allows analytic calculation of the multidimensional integrals involved. Calculations of the proton energy spectra carried out with this model are in good agreement with experiment.

  6. Overlapping Community Detection Algorithm Based on Two-Stage Clustering

    Institute of Scientific and Technical Information of China (English)

    蒋盛益; 杨博泓; 李敏敏; 吴美玲; 王连喜

    2015-01-01

    Aiming at overlapping community detection in complex networks, an overlapping community detection algorithm based on two-stage clustering is proposed. Eigendecomposition is applied to the network adjacency matrix, the nodes are projected into a k-dimensional Euclidean space, and they are then clustered by hard and soft clustering algorithms to detect the overlapping community structure efficiently and adaptively. In the hard clustering stage, a one-pass clustering algorithm based on the minimum-distance principle is introduced to partition the nodes autonomously and to determine the number of communities and the cluster centres for the soft clustering stage. In the soft clustering stage, a fuzzy C-means algorithm with the fuzzy modularity as its objective function is introduced; through iterative optimization of the fuzzy modularity, a soft partition is realized and the overlapping community structures in the network are identified. Experiments on a number of real network datasets show that the proposed algorithm can mine overlapping community structure in complex networks with high efficiency and effectiveness.
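
    The two-stage structure can be sketched compactly. In the sketch below, k-means stands in for the one-pass minimum-distance algorithm of the hard stage, and plain fuzzy c-means memberships stand in for the fuzzy-modularity objective of the soft stage, so it illustrates the pipeline rather than the paper's exact algorithm.

```python
import numpy as np
from numpy.linalg import eigh
from sklearn.cluster import KMeans

def two_stage_overlap(adj, k, m=2.0, iters=50):
    """Spectral embedding, then hard clustering, then soft memberships."""
    # Project nodes into a k-dimensional Euclidean space via the top-k
    # eigenvectors of the adjacency matrix (eigh returns ascending order).
    _, vecs = eigh(adj.astype(float))
    emb = vecs[:, -k:]

    # Hard stage: fix the number of communities and the initial centres.
    centres = KMeans(n_clusters=k, n_init=10,
                     random_state=0).fit(emb).cluster_centers_

    # Soft stage: standard fuzzy c-means updates; u[i, c] is the degree of
    # membership of node i in community c, so rows with several large
    # entries mark overlapping nodes.
    for _ in range(iters):
        d = np.linalg.norm(emb[:, None, :] - centres[None, :, :], axis=2) + 1e-9
        u = d ** (-2.0 / (m - 1.0))
        u /= u.sum(axis=1, keepdims=True)
        w = u.T ** m
        centres = (w @ emb) / w.sum(axis=1, keepdims=True)
    return u
```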

  7. HADOOP-BASED TWO-STAGE PARALLEL FUZZY C-MEANS CLUSTERING ALGORITHM

    Institute of Scientific and Technical Information of China (English)

    胡吉朝; 黄红艳

    2016-01-01

    Aiming at the problems that, under the MapReduce mechanism, communication time occupies too high a proportion of the running time and the practical value of the algorithm is therefore limited, we put forward a Hadoop-based two-stage parallel fuzzy c-means clustering algorithm to deal with the classification of extremely large data sets. First, we improved the MPI communication management method under the MapReduce mechanism, using a membership-management-protocol mode to synchronise member management with the MapReduce reduce operation. Second, we replaced the global individual reduce operation with a reduce operation over groups of typical individuals, and defined a two-stage buffering algorithm. Finally, through the first-stage buffer we further reduced the data volume entering the second-stage MapReduce operation, lowering as far as possible the negative impact of big data on the algorithm. On this basis, we carried out simulations using an artificial big-data test set and the KDD CUP 99 intrusion test set. The experimental results show that the algorithm both meets the required clustering precision and effectively speeds up execution.

  8. An unsupervised two-stage clustering approach for forest structure classification based on X-band InSAR data - A case study in complex temperate forest stands

    Science.gov (United States)

    Abdullahi, Sahra; Schardt, Mathias; Pretzsch, Hans

    2017-05-01

    Forest structure at stand level plays a key role in sustainable forest management, since the biodiversity, productivity, growth and stability of the forest can be positively influenced by managing its structural diversity. In contrast to field-based measurements, remote sensing techniques offer a cost-efficient opportunity to collect area-wide information about forest stand structure with high spatial and temporal resolution. Especially Interferometric Synthetic Aperture Radar (InSAR), which facilitates worldwide acquisition of 3D information independent of weather conditions and illumination, is well suited to capturing forest stand structure. This study proposes an unsupervised two-stage clustering approach for forest structure classification based on height information derived from interferometric X-band SAR data, applied in the complex temperate forest stands of Traunstein forest (southern Germany). In particular, a four-dimensional input data set composed of first-order height statistics was non-linearly projected onto a two-dimensional Self-Organizing Map, spatially ordered according to similarity (based on the Euclidean distance) in the first stage, and classified using the k-means algorithm in the second stage. The study demonstrated that X-band InSAR data exhibits considerable capability for forest structure classification. Moreover, the unsupervised classification approach achieved meaningful and reasonable results, judged by comparison to aerial imagery and LiDAR data.
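
    A compact stand-in for this two-stage chain (SOM projection, then k-means on the prototypes) is sketched below. The tiny online SOM and every parameter value are illustrative choices, not those of the study.

```python
import numpy as np
from sklearn.cluster import KMeans

def som_then_kmeans(X, grid=(8, 8), epochs=20, k=4, seed=0):
    """Stage 1: order prototypes on a 2-D grid with a minimal online SOM.
    Stage 2: group the prototypes into k structure classes with k-means."""
    rng = np.random.default_rng(seed)
    gy, gx = grid
    units = np.array([(i, j) for i in range(gy) for j in range(gx)], float)
    W = rng.normal(size=(gy * gx, X.shape[1]))          # prototype vectors
    for t in range(epochs):
        sigma = max(0.5, (gy / 2) * (1 - t / epochs))   # shrinking radius
        lr = 0.5 * (1 - t / (epochs + 1))               # decaying step size
        for x in rng.permutation(X):
            bmu = np.argmin(((W - x) ** 2).sum(axis=1)) # best-matching unit
            h = np.exp(-((units - units[bmu]) ** 2).sum(axis=1)
                       / (2 * sigma ** 2))
            W += lr * h[:, None] * (x - W)
    unit_label = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(W)
    bmus = np.argmin(((X[:, None, :] - W[None]) ** 2).sum(-1), axis=1)
    return unit_label[bmus]           # one structure class per input sample

X = np.random.default_rng(1).normal(size=(500, 4))   # 4-D height statistics
print(np.bincount(som_then_kmeans(X)))               # samples per class
```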

  9. Estimation of infection prevalence and sensitivity in a stratified two-stage sampling design employing highly specific diagnostic tests when there is no gold standard.

    Science.gov (United States)

    Miller, Ezer; Huppert, Amit; Novikov, Ilya; Warburg, Alon; Hailu, Asrat; Abbasi, Ibrahim; Freedman, Laurence S

    2015-11-10

    In this work, we describe a two-stage sampling design to estimate the infection prevalence in a population. In the first stage, an imperfect diagnostic test was performed on a random sample of the population. In the second stage, a different imperfect test was performed in a stratified random sample of the first sample. To estimate infection prevalence, we assumed conditional independence between the diagnostic tests and develop method of moments estimators based on expectations of the proportions of people with positive and negative results on both tests that are functions of the tests' sensitivity, specificity, and the infection prevalence. A closed-form solution of the estimating equations was obtained assuming a specificity of 100% for both tests. We applied our method to estimate the infection prevalence of visceral leishmaniasis according to two quantitative polymerase chain reaction tests performed on blood samples taken from 4756 patients in northern Ethiopia. The sensitivities of the tests were also estimated, as well as the standard errors of all estimates, using a parametric bootstrap. We also examined the impact of departures from our assumptions of 100% specificity and conditional independence on the estimated prevalence. Copyright © 2015 John Wiley & Sons, Ltd.
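
    Under the stated simplifications (perfect specificity for both tests and conditional independence), the moment equations admit the closed form sketched below. The variable names are ours, and the stratification weights and bootstrap standard errors of the paper are omitted.

```python
def mom_prevalence(p_a, p_b_given_apos, p_b_given_aneg):
    """Method-of-moments solution with two 100%-specific tests.

    p_a:            observed P(test A positive) in the stage-1 sample
    p_b_given_apos: observed P(test B positive) among A-positives (stage 2)
    p_b_given_aneg: observed P(test B positive) among A-negatives (stage 2)

    With perfect specificity, every A-positive is a true positive, so the
    B-positive rate among them estimates Se_B directly; prevalence then
    follows from P(A+) = prev * Se_A and the B+ rate among A-negatives.
    """
    se_b = p_b_given_apos
    prev = (p_a * se_b + p_b_given_aneg * (1 - p_a)) / se_b
    se_a = p_a / prev
    return prev, se_a, se_b

# Illustrative proportions, not the Ethiopian study data.
print(mom_prevalence(p_a=0.12, p_b_given_apos=0.80, p_b_given_aneg=0.03))
```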

  10. The Efficiency Level in the Estimation of the Nigerian Population: A Comparison of One-Stage and Two-Stage Sampling Technique (A Case Study of the 2006 Census of Nigerians)

    Directory of Open Access Journals (Sweden)

    T.J. Akingbade

    2014-09-01

    This research work compares the one-stage sampling technique (Simple Random Sampling) and the two-stage sampling technique for estimating the population total of Nigeria, using the 2006 census result. A sample of twenty (20) states was selected out of a population of thirty-six (36) states at the Primary Sampling Unit (PSU), and one-third of each state selected at the PSU was sampled at the Secondary Sampling Unit (SSU) and analyzed. The result shows that, with the same sample size at the PSU, the one-stage sampling technique (Simple Random Sampling) is more efficient than the two-stage sampling technique and is hence recommended.
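
    The comparison can be reproduced in stylized form: treat each state as three zones, then estimate the national total either from an SRS of whole states or by subsampling one zone (one-third) per selected state. All numbers are invented; the sketch only shows how the second stage adds within-state variance.

```python
import numpy as np

rng = np.random.default_rng(1)

# 36 hypothetical states, each made of three unequal zones.
zones = rng.uniform(0.5, 1.5, size=(36, 3)) * 1e6
states = zones.sum(axis=1)
N, n = 36, 20

def one_stage(reps=20_000):
    """Expansion estimator from an SRS of whole states (full state counts)."""
    idx = np.array([rng.choice(N, n, replace=False) for _ in range(reps)])
    return (N / n) * states[idx].sum(axis=1)

def two_stage(reps=20_000):
    """Same first stage, but only one of three zones observed per state."""
    ests = np.empty(reps)
    for r in range(reps):
        s = rng.choice(N, n, replace=False)
        z = rng.integers(0, 3, size=n)
        ests[r] = (N / n) * (3 * zones[s, z]).sum()   # unbiased, but noisier
    return ests

print(np.var(one_stage()), np.var(two_stage()))   # two-stage variance larger
```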

  11. Extending cluster lot quality assurance sampling designs for surveillance programs.

    Science.gov (United States)

    Hund, Lauren; Pagano, Marcello

    2014-07-20

    Lot quality assurance sampling (LQAS) has a long history of applications in industrial quality control. LQAS is frequently used for rapid surveillance in global health settings, with areas classified as poor or acceptable performance on the basis of the binary classification of an indicator. Historically, LQAS surveys have relied on simple random samples from the population; however, implementing two-stage cluster designs for surveillance sampling is often more cost-effective than simple random sampling. By applying survey sampling results to the binary classification procedure, we develop a simple and flexible nonparametric procedure to incorporate clustering effects into the LQAS sample design to appropriately inflate the sample size, accommodating finite numbers of clusters in the population when relevant. We use this framework to then discuss principled selection of survey design parameters in longitudinal surveillance programs. We apply this framework to design surveys to detect rises in malnutrition prevalence in nutrition surveillance programs in Kenya and South Sudan, accounting for clustering within villages. By combining historical information with data from previous surveys, we design surveys to detect spikes in the childhood malnutrition rate.
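
    A common way to fold clustering into an LQAS sample-size calculation is a design-effect inflation. The helper below is a deliberate simplification for intuition (the paper's nonparametric procedure also accommodates finite numbers of clusters), with illustrative numbers.

```python
import math

def cluster_lqas_size(n_srs, units_per_cluster, icc):
    """Inflate an SRS-based LQAS sample size by the design effect
    deff = 1 + (b - 1) * icc; returns (total size, number of clusters)."""
    deff = 1 + (units_per_cluster - 1) * icc
    n = math.ceil(n_srs * deff)
    return n, math.ceil(n / units_per_cluster)

# e.g. the classic n = 19 LQAS rule, 5 children per village, ICC = 0.1
print(cluster_lqas_size(19, 5, 0.1))   # -> (27, 6)
```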

  12. THE LIBERATION OF ARSENOSUGARS FROM MATRIX COMPONENTS IN DIFFICULT TO EXTRACT SEAFOOD SAMPLES UTILIZING TMAOH/ACETIC ACID SEQUENTIALLY IN A TWO-STAGE EXTRACTION PROCESS

    Science.gov (United States)

    Sample extraction is one of the most important steps in arsenic speciation analysis of solid dietary samples. One of the problem areas in this analysis is the partial extraction of arsenicals from seafood samples. The partial extraction allows the toxicity of the extracted arse...

  13. Cluster analysis in kinetic modelling of the brain: A noninvasive alternative to arterial sampling

    DEFF Research Database (Denmark)

    Liptrot, Matthew George; Adams, K.H.; Martiny, L.

    2004-01-01

    In emission tomography, quantification of brain tracer uptake, metabolism or binding requires knowledge of the cerebral input function. Traditionally, this is achieved with arterial blood sampling. We propose a noninvasive alternative via the use of a blood vessel time-activity curve (TAC) extracted directly from dynamic positron emission tomography (PET) scans by cluster analysis. Five healthy subjects were injected with the 5HT2A-receptor ligand [18F]-altanserin and blood samples were subsequently taken from the radial artery and cubital vein. Eight regions-of-interest (ROI) TACs were extracted from the PET data set. Hierarchical K-means cluster analysis was performed on the PET time series to extract a cerebral vasculature ROI. The number of clusters was varied from K = 1 to 10 for the second of the two-stage method. Determination of the correct number of clusters was performed

  14. Macroscopic flame structure in a premixed-spray burner. 1st Report. formation and disappearance processes of droplet clusters and two-stage flame structure; Yokongo funmu kaen no kyoshiteki nensho kyodo. 1. Yuteki cluster no keisei shoshitsu katei to niju kaen kozo

    Energy Technology Data Exchange (ETDEWEB)

    Tsushima, S.; Saito, H.; Akamatsu, F.; Katsuki, M. [Osaka University, Osaka (Japan)

    2000-08-25

    In an attempt to elucidate the formation and disappearance processes of droplet clusters in spray flames, simultaneous measurements consisting of laser tomography and flame chemiluminescence detection were applied to a premixed-spray burner. This combination of measurements provides a time-series data set serving for better understanding of spray flames, which essentially contain inhomogeneity in space and time. It is revealed that referential flame propagation through a premixed-spray stream plays a significant role in creating droplet clusters and that droplet clusters formed in this manner evanesce from their outer boundaries. These observations confirm that the premixed-spray flame comprises both a premixed-mode flame in the upstream region and a diffusion-mode flame in the downstream region, i.e., the two-stage flame structure previously reported for spray flames stabilized in either counter or stagnation flows. (author)

  15. Global and Partial Errors in Stratified and Clustering Sampling

    OpenAIRE

    Giovanna Nicolini; Anna Lo Presti

    2005-01-01

    In this paper we split up the sampling error incurred in stratified and cluster sampling, called the global error and measured by the variance of the estimator, into many partial errors, each one referred to a single stratum or cluster. In particular, for cluster sampling, we study the empirical distribution of the homogeneity coefficient, which is very important for the settlement of the partial errors.

  16. Brightest cluster galaxies in the extended GMRT radio halo cluster sample. Radio properties and cluster dynamics

    Science.gov (United States)

    Kale, R.; Venturi, T.; Cassano, R.; Giacintucci, S.; Bardelli, S.; Dallacasa, D.; Zucca, E.

    2015-09-01

    Aims: First-ranked galaxies in clusters, usually referred to as brightest cluster galaxies (BCGs), show exceptional properties over the whole electromagnetic spectrum. They are the most massive elliptical galaxies and show the highest probability to be radio loud. Moreover, their special location at the centres of galaxy clusters raises the question of the role of the environment in shaping their radio properties. In the attempt to separate the effect of the galaxy mass and of the environment on their statistical radio properties, we investigate the possible dependence of the occurrence of radio loudness and of the fractional radio luminosity function on the dynamical state of the hosting cluster. Methods: We studied the radio properties of the BCGs in the Extended GMRT Radio Halo Survey (EGRHS), which consists of 65 clusters in the redshift range 0.2-0.4, with X-ray luminosity LX ≥ 5 × 10⁴⁴ erg s⁻¹, and quantitative information on their dynamical state from high-quality Chandra imaging. We obtained a statistical sample of 59 BCGs, which we divided into two classes, depending on whether the dynamical state of the host cluster was merging (M) or relaxed (R). Results: Of the 59 BCGs, 28 are radio loud and 31 are radio quiet. The radio-loud sources are favourably located in relaxed clusters (71%), while the reverse is true for the radio-quiet BCGs, which are mostly located in merging systems (81%). The fractional radio luminosity function for the BCGs in merging and relaxed clusters is different, and it is considerably higher for BCGs in relaxed clusters, where the total fraction of radio loudness reaches almost 90%, to be compared to the ~30% in merging clusters. For relaxed clusters, we found a positive correlation between the radio power of the BCGs and the strength of the cool core, consistent with previous studies on local samples. Conclusions: Our study suggests that the radio loudness of the BCGs strongly depends on the cluster dynamics; their fraction is

  17. Saliency Based Tracking Method for Abrupt Motions via Two-stage Sampling

    Institute of Scientific and Technical Information of China (English)

    江晓莲; 李翠华; 李雄宗

    2014-01-01

    In this paper, a saliency-based tracking method via two-stage sampling is proposed for abrupt motions. Firstly, visual saliency is introduced as prior knowledge into the Wang-Landau Monte Carlo (WLMC)-based tracking algorithm. By dividing the spatial space into disjoint sub-regions and assigning each sub-region a saliency value, prior knowledge of the promising regions is obtained; the saliency values of sub-regions are then integrated into the Markov chain Monte Carlo (MCMC) acceptance mechanism to guide effective state sampling. Secondly, considering that abrupt motion sequences contain both abrupt and smooth motions, a two-stage sampling model is brought into the algorithm. In the first stage, the model detects the motion type of the target. According to the result of the first stage, the model chooses either the saliency-based WLMC method to track abrupt motions or the double-chain MCMC method to track smooth motions of the target in the second stage. The algorithm efficiently addresses tracking of abrupt motions while smooth motions are also accurately tracked. Experimental results demonstrate that this approach outperforms state-of-the-art algorithms on abrupt motion sequences and public benchmark sequences in terms of accuracy and robustness.

  18. Brightest Cluster Galaxies in the Extended GMRT radio halo cluster sample. Radio properties and cluster dynamics

    CERN Document Server

    Kale, Ruta; Cassano, Rossella; Giacintucci, Simona; Bardelli, Sandro; Dallacasa, Daniele; Zucca, Elena

    2015-01-01

    Brightest Cluster Galaxies (BCGs) show exceptional properties over the whole electromagnetic spectrum. Their special location at the centres of galaxy clusters raises the question of the role of the environment on their radio properties. To decouple the effect of the galaxy mass and of the environment in their statistical radio properties, we investigate the possible dependence of the occurrence of radio loudness and of the fractional radio luminosity function on the dynamical state of the hosting cluster. We studied the radio properties of the BCGs in the Extended GMRT Radio Halo Survey (EGRHS). We obtained a statistical sample of 59 BCGs, which was divided into two classes, depending on the dynamical state of the host cluster, i.e. merging (M) and relaxed (R). Among the 59 BCGs, 28 are radio-loud, and 31 are radio-quiet. The radio-loud sources are favourably located in relaxed clusters (71%), while the reverse is true for the radio-quiet BCGs, which are mostly located in merging systems (81%). The fraction...

  19. The construction of two-stage tests

    NARCIS (Netherlands)

    Adema, Jos J.

    1988-01-01

    Although two-stage testing is not the most efficient form of adaptive testing, it has some advantages. In this paper, linear programming models are given for the construction of two-stage tests. In these models, practical constraints with respect to, among other things, test composition, administration…

  20. Estimating regression coefficients from clustered samples: Sampling errors and optimum sample allocation

    Science.gov (United States)

    Kalton, G.

    1983-01-01

    A number of surveys were conducted to study the relationship between the level of aircraft or traffic noise exposure experienced by people living in a particular area and their annoyance with it. These surveys generally employ a clustered sample design which affects the precision of the survey estimates. Regression analysis of annoyance on noise measures and other variables is often an important component of the survey analysis. Formulae are presented for estimating the standard errors of regression coefficients and ratio of regression coefficients that are applicable with a two- or three-stage clustered sample design. Using a simple cost function, they also determine the optimum allocation of the sample across the stages of the sample design for the estimation of a regression coefficient.
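
    For a two-stage clustered design, a first-order version of such formulae inflates the SRS standard error of a coefficient by the square root of the design effect. The helper below uses the generic Kish-style approximation deff = 1 + (b - 1)ρ as a stand-in for the paper's design-specific formulae; the numbers are invented.

```python
import math

def clustered_se(se_srs, cluster_size, rho):
    """SRS standard error inflated by sqrt(deff), deff = 1 + (b - 1) * rho,
    where b is the number of respondents interviewed per sampled area and
    rho the intraclass correlation of the regression residuals."""
    return se_srs * math.sqrt(1 + (cluster_size - 1) * rho)

# 30 respondents per area and rho = 0.05 for an annoyance regression:
print(clustered_se(se_srs=0.10, cluster_size=30, rho=0.05))   # ~0.157
```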

  1. On the metallicity of open clusters. III. Homogenised sample

    CERN Document Server

    Netopil, M; Heiter, U; Soubiran, C

    2016-01-01

    Open clusters are known as excellent tools for various topics in Galactic research. For example, they allow accurately tracing the chemical structure of the Galactic disc. However, the metallicity is known only for a rather low percentage of the open cluster population, and these values are based on a variety of methods and data. Therefore, a large and homogeneous sample is highly desirable. In the third part of our series we compile a large sample of homogenised open cluster metallicities using a wide variety of different sources. These data and a sample of Cepheids are used to investigate the radial metallicity gradient, age effects, and to test current models. We used photometric and spectroscopic data to derive cluster metallicities. The different sources were checked and tested for possible offsets and correlations. In total, metallicities for 172 open clusters were derived. We used the spectroscopic data of 100 objects for a study of the radial metallicity distribution and the age-metallicity relation. W...

  2. On the metallicity of open clusters. III. Homogenised sample

    Science.gov (United States)

    Netopil, M.; Paunzen, E.; Heiter, U.; Soubiran, C.

    2016-01-01

    Context. Open clusters are known as excellent tools for various topics in Galactic research. For example, they allow accurately tracing the chemical structure of the Galactic disc. However, the metallicity is known only for a rather low percentage of the open cluster population, and these values are based on a variety of methods and data. Therefore, a large and homogeneous sample is highly desirable. Aims: In the third part of our series we compile a large sample of homogenised open cluster metallicities using a wide variety of different sources. These data and a sample of Cepheids are used to investigate the radial metallicity gradient, age effects, and to test current models. Methods: We used photometric and spectroscopic data to derive cluster metallicities. The different sources were checked and tested for possible offsets and correlations. Results: In total, metallicities for 172 open clusters were derived. We used the spectroscopic data of 100 objects for a study of the radial metallicity distribution and the age-metallicity relation. We found a possible increase of metallicity with age, which, if confirmed, would provide observational evidence for radial migration. Although a statistical significance is given, more studies are certainly needed to exclude selection effects, for example. The comparison of open clusters and Cepheids with recent Galactic models agrees well in general. However, the models do not reproduce the flat gradient of the open clusters in the outer disc. Thus, the effect of radial migration is either underestimated in the models, or an additional mechanism is at work. Conclusions: Apart from the Cepheids, open clusters are the best tracers for metallicity over large Galactocentric distances in the Milky Way. For a sound statistical analysis, a sufficiently large and homogeneous sample of cluster metallicities is needed. Our compilation is currently by far the largest and provides the basis for several basic studies such as the statistical

  3. Choosing a Cluster Sampling Design for Lot Quality Assurance Sampling Surveys.

    Science.gov (United States)

    Hund, Lauren; Bedrick, Edward J; Pagano, Marcello

    2015-01-01

    Lot quality assurance sampling (LQAS) surveys are commonly used for monitoring and evaluation in resource-limited settings. Recently several methods have been proposed to combine LQAS with cluster sampling for more timely and cost-effective data collection. For some of these methods, the standard binomial model can be used for constructing decision rules as the clustering can be ignored. For other designs, considered here, clustering is accommodated in the design phase. In this paper, we compare these latter cluster LQAS methodologies and provide recommendations for choosing a cluster LQAS design. We compare technical differences in the three methods and determine situations in which the choice of method results in a substantively different design. We consider two different aspects of the methods: the distributional assumptions and the clustering parameterization. Further, we provide software tools for implementing each method and clarify misconceptions about these designs in the literature. We illustrate the differences in these methods using vaccination and nutrition cluster LQAS surveys as example designs. The cluster methods are not sensitive to the distributional assumptions but can result in substantially different designs (sample sizes) depending on the clustering parameterization. However, none of the clustering parameterizations used in the existing methods appears to be consistent with the observed data, and, consequently, choice between the cluster LQAS methods is not straightforward. Further research should attempt to characterize clustering patterns in specific applications and provide suggestions for best-practice cluster LQAS designs on a setting-specific basis.
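    The record does not give the designs themselves. As a hedged sketch of how the clustering parameterization drives the operating characteristics of a cluster LQAS decision rule, the following simulates lot acceptance under a beta-binomial clustering model; the acceptance threshold, ICC, and cluster counts are illustrative only, not taken from the paper.

```python
import numpy as np

def simulate_cluster_lqas(p, n_clusters, m, icc, threshold, n_sims=10000, seed=1):
    """Probability that a cluster LQAS survey 'accepts' the lot.

    Cluster-level prevalences are drawn from a beta distribution whose
    intracluster correlation is `icc`; the lot is accepted when the total
    number of positives across all sampled subjects is at most `threshold`.
    All parameter values are illustrative, not from the cited paper.
    """
    rng = np.random.default_rng(seed)
    # Beta(a, b) with mean p and ICC rho satisfies a + b = (1 - rho) / rho
    s = (1 - icc) / icc
    a, b = p * s, (1 - p) * s
    cluster_p = rng.beta(a, b, size=(n_sims, n_clusters))
    positives = rng.binomial(m, cluster_p).sum(axis=1)
    return np.mean(positives <= threshold)

# Hypothetical rule: accept if at most 12 of 5 x 10 sampled children are positive
for true_p in (0.10, 0.20, 0.30):
    accept = simulate_cluster_lqas(true_p, n_clusters=5, m=10,
                                   icc=0.05, threshold=12)
    print(f"true prevalence {true_p:.2f}: P(accept) = {accept:.3f}")
```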

  4. Declustering of clustered preferential sampling for histogram and semivariogram inference

    Science.gov (United States)

    Olea, R.A.

    2007-01-01

    Measurements of attributes obtained more as a consequence of business ventures than sampling design frequently result in samplings that are preferential both in location and value, typically in the form of clusters along the pay. Preferential sampling requires preprocessing for the purpose of properly inferring characteristics of the parent population, such as the cumulative distribution and the semivariogram. Consideration of the distance to the nearest neighbor allows preparation of resampled sets that produce comparable results to those from previously proposed methods. Clustered sampling of size 140, taken from an exhaustive sampling, is employed to illustrate this approach. © International Association for Mathematical Geology 2007.
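    As a loose illustration of nearest-neighbor-based declustering (not the paper's exact resampling procedure), the sketch below weights each observation by its nearest-neighbor distance, so that a preferentially sampled dense patch no longer dominates a declustered summary statistic; all coordinates and values are simulated.

```python
import numpy as np

def nn_decluster_weights(coords):
    """Declustering weights proportional to nearest-neighbour distance.

    Points in dense clusters get small weights and isolated points large
    ones, so a weighted histogram or semivariogram is less dominated by
    preferentially sampled areas. A generic sketch, not the exact
    resampling scheme of the cited paper.
    """
    coords = np.asarray(coords, dtype=float)
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)       # ignore self-distances
    nn_dist = d.min(axis=1)
    return nn_dist / nn_dist.sum()

rng = np.random.default_rng(2)
# Preferential sampling: a dense patch plus a sparse background
dense = rng.normal([0.3, 0.3], 0.02, size=(100, 2))
sparse = rng.uniform(0, 1, size=(40, 2))
coords = np.vstack([dense, sparse])
values = np.r_[rng.normal(2.0, 0.3, 100), rng.normal(0.0, 0.3, 40)]
w = nn_decluster_weights(coords)
print("naive mean:", values.mean(), "declustered mean:", np.average(values, weights=w))
```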

  5. Cosmology and Astrophysics from Relaxed Galaxy Clusters I: Sample Selection

    CERN Document Server

    Mantz, Adam B; Morris, R Glenn; Schmidt, Robert W; von der Linden, Anja; Urban, Ondrej

    2015-01-01

    This is the first in a series of papers studying the astrophysics and cosmology of massive, dynamically relaxed galaxy clusters. Here we present a new, automated method for identifying relaxed clusters based on their morphologies in X-ray imaging data. While broadly similar to others in the literature, the morphological quantities that we measure are specifically designed to provide a fair basis for comparison across a range of data quality and cluster redshifts, to be robust against missing data due to point-source masks and gaps between detectors, and to avoid strong assumptions about the cosmological background and cluster masses. Based on three morphological indicators - Symmetry, Peakiness and Alignment - we develop the SPA criterion for relaxation. This analysis was applied to a large sample of cluster observations from the Chandra and ROSAT archives. Of the 361 clusters which received the SPA treatment, 57 (16 per cent) were subsequently found to be relaxed according to our criterion. We compare our me...

  6. A two-stage rank test using density estimation

    NARCIS (Netherlands)

    Albers, Willem/Wim

    1995-01-01

    For the one-sample problem, a two-stage rank test is derived which realizes a required power against a given local alternative, for all sufficiently smooth underlying distributions. This is achieved using asymptotic expansions resulting in a precision of order m^-1, where m is the size of the first...

  7. Spatial Clustering from GALEX-SDSS samples: Star Formation History and large-scale clustering

    CERN Document Server

    Heinis, Sebastien; Szalay, A S; Arnouts, Stephane; Aragon-Calvo, Miguel A; Wyder, Ted K; Barlow, Tom A; Foster, Karl; Friedman, Peter G; Martin, D Christopher; Morrissey, Patrick; Neff, Susan G; Schiminovich, David; Seibert, Mark; Bianchi, Luciana; Donas, Jose; Heckman, Timothy M; Lee, Young-Wook; Madore, Barry F; Milliard, Bruno; Rich, R Michael; Yi, Sukyoung K

    2009-01-01

    We measure the projected spatial correlation function w_p(r_p) from a large sample combining GALEX ultraviolet imaging with the SDSS spectroscopic sample. We study the dependence of the clustering strength for samples selected on (NUV - r)_abs color, specific star formation rate (SSFR), and stellar mass. We find that there is a smooth transition in the clustering of galaxies as a function of this color from weak clustering among blue galaxies to stronger clustering for red galaxies. The clustering of galaxies within the "green valley" has an intermediate strength, and is consistent with that expected from galaxy groups. The results are robust to the correction for dust extinction. The comparison with simple analytical modeling suggests that the halo occupation number increases with older star formation epochs. When splitting according to SSFR, we find that the SSFR is a more sensitive tracer of environment than stellar mass.

  8. Two Stage Gear Tooth Dynamics Program

    Science.gov (United States)

    1989-08-01

    conditions and associated iteration procedure become more complex. This is due to both the increased number of components and to the time for a... solved for each stage in the two stage solution. There are (3 + number of planets) degrees of freedom for each stage plus two degrees of freedom... should be devised. It should be noted that this is not a minor task. In general, each stage plus an input or output shaft will have 2 times (4 + number

  9. High Frequency Cluster Radio Galaxies: Luminosity Functions and Implications for SZE Selected Cluster Samples

    Science.gov (United States)

    Gupta, N.; Saro, A.; Mohr, J. J.; Benson, B. A.; Bocquet, S.; Capasso, R.; Carlstrom, J. E.; Chiu, I.; Crawford, T. M.; de Haan, T.; Dietrich, J. P.; Gangkofner, C.; Holzapfel, W. L.; McDonald, M.; Rapetti, D.; Reichardt, C. L.

    2017-01-01

    We study the overdensity of point sources in the direction of X-ray-selected galaxy clusters from the Meta-Catalog of X-ray detected Clusters of galaxies (MCXC; ⟨z⟩ = 0.14) at South Pole Telescope (SPT) and Sydney University Molonglo Sky Survey (SUMSS) frequencies. Flux densities at 95, 150 and 220 GHz are extracted from the 2500 deg^2 SPT-SZ survey maps at the locations of SUMSS sources, producing a multi-frequency catalog of radio galaxies. In the direction of massive galaxy clusters, the radio galaxy flux densities at 95 and 150 GHz are biased low by the cluster Sunyaev-Zel'dovich Effect (SZE) signal, which is negative at these frequencies. We employ a cluster SZE model to remove the expected flux bias and then study these corrected source catalogs. We find that the high frequency radio galaxies are centrally concentrated within the clusters and that their luminosity functions (LFs) exhibit amplitudes that are characteristically an order of magnitude lower than the cluster LF at 843 MHz. We use the 150 GHz LF to estimate the impact of cluster radio galaxies on an SPT-SZ like survey. The radio galaxy flux typically produces a small bias on the SZE signal and has negligible impact on the observed scatter in the SZE mass-observable relation. If we assume there is no redshift evolution in the radio galaxy LF then 1.8 ± 0.7 percent of the clusters with detection significance ξ ≥ 4.5 would be lost from the sample. Allowing for redshift evolution of the form (1 + z)^2.5 increases the incompleteness to 5.6 ± 1.0 percent. Improved constraints on the evolution of the cluster radio galaxy LF require a larger cluster sample extending to higher redshift.

  10. Condensate from a two-stage gasifier

    DEFF Research Database (Denmark)

    Bentzen, Jens Dall; Henriksen, Ulrik Birk; Hindsgaul, Claus

    2000-01-01

    Condensate, produced when gas from a downdraft biomass gasifier is cooled, contains organic compounds that inhibit nitrifiers. Treatment with activated carbon removes most of the organics and makes the condensate far less inhibitory. The condensate from an optimised two-stage gasifier is so clean that the organic compounds and the inhibition effect are very low even before treatment with activated carbon. The moderate inhibition effect relates to a high content of ammonia in the condensate. The nitrifiers become tolerant to the condensate after a few weeks of exposure. The level of organic compounds...

  11. Two Stage Sibling Cycle Compressor/Expander.

    Science.gov (United States)

    1994-02-01

    PL-TR-94-1051, "Two Stage Sibling Cycle Compressor/Expander": final report prepared by Mitchell/Stirling Machines/Systems, Inc., Berkeley, CA, under contract... Among the cited references: L. Bauwens and M.P. Mitchell, "Regenerator Analysis: Validation of the MS*2 Stirling Cycle Code," Proc. XVIIIth International..., vol. 5, p. 424.

  12. Cluster Sampling Filters for Non-Gaussian Data Assimilation

    OpenAIRE

    2016-01-01

    This paper presents a fully non-Gaussian version of the Hamiltonian Monte Carlo (HMC) sampling filter. The Gaussian prior assumption in the original HMC filter is relaxed. Specifically, a clustering step is introduced after the forecast phase of the filter, and the prior density function is estimated by fitting a Gaussian Mixture Model (GMM) to the prior ensemble. Using the data likelihood function, the posterior density is then formulated as a mixture density, and is sampled using a HMC appr...

  13. Clustering of Galaxies and Groups in the NOG Sample

    CERN Document Server

    Giuricin, G; Girardi, M; Mezzetti, M; Marinoni, C

    2000-01-01

    We use the two-point correlation function in redshift space, $\xi(s)$, to study the clustering of the galaxies and groups of the Nearby Optical Galaxy (NOG) Sample, which is a nearly all-sky, complete, magnitude-limited sample of ~7000 bright and nearby optical galaxies. The correlation function of galaxies is well-described by a power-law, $\xi(s)= (s/s_0)^{-\gamma}$, with $\gamma\sim1.5$ and $s_0\sim 6.4 h^{-1}$ Mpc. We find evidence of morphological segregation between early- and late-type galaxies, with a gradual decrease in the strength of clustering from the S0 to the late-type spirals, on intermediate scales. Furthermore, luminous galaxies (with $M_B\leq -19.5 + 5 \log h$) are more clustered than dim galaxies. The groups show an excess of clustering with respect to galaxies. Groups with greater velocity dispersions, sizes, and masses are more clustered than those with lower values of these quantities.

  14. Recursive algorithm for the two-stage EFOP estimation method

    Institute of Scientific and Technical Information of China (English)

    LUO GuiMing; HUANG Jian

    2008-01-01

    A recursive algorithm for the two-stage empirical frequency-domain optimal parameter (EFOP) estimation method was proposed. The EFOP method is a novel system identification method for black-box models that combines time-domain estimation and frequency-domain estimation. It has improved anti-disturbance performance and can precisely identify models with fewer samples. The two-stage EFOP method based on the bootstrap technique is generally suitable for black-box models, but it is an iterative method and requires too much computation to work well online. A recursive algorithm is therefore proposed for disturbed stochastic systems. Some simulation examples are included to demonstrate the validity of the new method.

  15. Classification in two-stage screening.

    Science.gov (United States)

    Longford, Nicholas T

    2015-11-10

    Decision theory is applied to the problem of setting thresholds in medical screening when it is organised in two stages. The first stage involves a less expensive procedure that can be applied on a mass scale and classifies an individual as a negative or a likely positive. In the second stage, the likely positives are subjected to another test that classifies them as (definite) positives or negatives. The second-stage test is more accurate, but also more expensive and more involved, and so there are incentives to restrict its application. Robustness of the method with respect to the parameters, some of which have to be set by elicitation, is assessed by sensitivity analysis.

  16. Two stage gear tooth dynamics program

    Science.gov (United States)

    Boyd, Linda S.

    1989-01-01

    The epicyclic gear dynamics program was expanded to add the option of evaluating the tooth pair dynamics for two epicyclic gear stages with peripheral components. This was a practical extension to the program as multiple gear stages are often used for speed reduction, space, weight, and/or auxiliary units. The option was developed for either stage to be a basic planetary, star, single external-external mesh, or single external-internal mesh. The two stage system allows for modeling of the peripherals with an input mass and shaft, an output mass and shaft, and a connecting shaft. Execution of the initial test case indicated an instability in the solution with the tooth pair loads growing to excessive magnitudes. A procedure to trace the instability is recommended as well as a method of reducing the program's computation time by reducing the number of boundary condition iterations.

  17. Don't spin the pen: two alternative methods for second-stage sampling in urban cluster surveys

    Directory of Open Access Journals (Sweden)

    Rose Angela MC

    2007-06-01

    In two-stage cluster surveys, the traditional method used in second-stage sampling (in which the first household in a cluster is selected) is time-consuming and may result in biased estimates of the indicator of interest. Firstly, a random direction from the center of the cluster is selected, usually by spinning a pen. The houses along that direction are then counted out to the boundary of the cluster, and one is then selected at random to be the first household surveyed. This process favors households towards the center of the cluster, but it could easily be improved. During a recent meningitis vaccination coverage survey in Maradi, Niger, we compared this method of first household selection to two alternatives in urban zones: (1) using a superimposed grid on the map of the cluster area and randomly selecting an intersection; and (2) drawing the perimeter of the cluster area using a Global Positioning System (GPS) and randomly selecting one point within the perimeter. Although we only compared a limited number of clusters using each method, we found the sampling grid method to be the fastest and easiest for field survey teams, although it does require a map of the area. Selecting a random GPS point was also found to be a good method, once adequate training has been provided. Spinning the pen and counting households to the boundary was the most complicated and time-consuming. The two methods tested here represent simpler, quicker and potentially more robust alternatives to spinning the pen for cluster surveys in urban areas. However, in rural areas, these alternatives would favor initial household selection from lower density (or even potentially empty) areas. Bearing in mind these limitations, as well as available resources and feasibility, investigators should choose the most appropriate method for their particular survey context.
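    The random-GPS-point alternative can be sketched as rejection sampling inside the cluster perimeter: draw uniform points in the bounding box and keep the first one that falls inside the polygon. The perimeter coordinates below are hypothetical, and the point-in-polygon test is the standard ray-casting algorithm, not code from the survey itself.

```python
import random

def point_in_polygon(x, y, poly):
    """Ray-casting test: is (x, y) inside the polygon given as vertex list?"""
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        if (y1 > y) != (y2 > y):                     # edge crosses the ray's height
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def random_point_in_cluster(poly, rng=random):
    """Rejection-sample a uniform point inside the cluster perimeter,
    mimicking the random-GPS-point method; the polygon is hypothetical."""
    xs = [p[0] for p in poly]
    ys = [p[1] for p in poly]
    while True:
        x = rng.uniform(min(xs), max(xs))
        y = rng.uniform(min(ys), max(ys))
        if point_in_polygon(x, y, poly):
            return x, y

# Hypothetical GPS-traced perimeter (longitude, latitude pairs)
perimeter = [(13.49, 7.09), (13.51, 7.09), (13.52, 7.11),
             (13.50, 7.12), (13.48, 7.10)]
print(random_point_in_cluster(perimeter))
```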

  18. CLUSTER SAMPLING FOR DETERMINATION OF IMMUNIZATION COVERAGE: A LIMITATION

    Directory of Open Access Journals (Sweden)

    K. Nasseri

    1989-08-01

    Evidence from studies in Iran points to a possible bias in the application of the standard EPI cluster sampling procedure, which, when carried out in populations with highly variant birth rates, tends to over-represent the low birth rate (i.e. higher socio-economic) strata; if the entity under study shows a significant socio-economic gradient, then the estimates arrived at by this method might be highly biased. Some alternatives have been mentioned.

  19. Two-Stage Modelling Of Random Phenomena

    Science.gov (United States)

    Barańska, Anna

    2015-12-01

    The main objective of this publication was to present a two-stage algorithm for modelling random phenomena, based on multidimensional function modelling, using the examples of modelling the real estate market for the purpose of real estate valuation and of estimating model parameters of foundations' vertical displacements. The first stage of the presented algorithm includes a selection of a suitable form of the function model. In the classical algorithms, based on function modelling, the prediction of the dependent variable is its value obtained directly from the model. The better the model reflects a relationship between the independent variables and their effect on the dependent variable, the more reliable is the model value. In this paper, an algorithm has been proposed which comprises adjustment of the value obtained from the model with a random correction determined from the residuals of the model for those cases which, in a separate analysis, were considered to be the most similar to the object for which we want to model the dependent variable. The effect of the developed quantitative procedures for calculating the corrections, and of the qualitative methods for assessing similarity, on the final outcome of the prediction and its accuracy was examined by statistical methods, mainly using appropriate parametric tests of significance. The idea of the presented algorithm has been designed so as to approximate the value of the dependent variable of the studied phenomenon to its value in reality and, at the same time, to have it "smoothed out" by a well fitted modelling function.
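    A minimal sketch of the two-stage idea, assuming plain least squares for stage one and k-nearest-neighbor similarity for the stage-two residual correction (the paper's similarity assessment and correction rules are richer than plain k-NN); all data are hypothetical real-estate-style values.

```python
import numpy as np

def two_stage_predict(X_train, y_train, X_new, k=5):
    """Stage 1: value from a fitted linear model; stage 2: add a random
    correction equal to the mean residual of the k most similar training
    cases. A minimal sketch of the two-stage modelling idea only.
    """
    X_train = np.asarray(X_train, dtype=float)
    X_new = np.asarray(X_new, dtype=float)
    A = np.column_stack([np.ones(len(X_train)), X_train])
    beta, *_ = np.linalg.lstsq(A, y_train, rcond=None)
    resid = y_train - A @ beta                    # stage-two raw material
    preds = []
    for x in X_new:
        base = beta[0] + x @ beta[1:]             # stage-one model value
        dist = np.linalg.norm(X_train - x, axis=1)
        nearest = np.argsort(dist)[:k]            # most similar cases
        preds.append(base + resid[nearest].mean())
    return np.array(preds)

# Hypothetical data: price vs floor area and distance to the city centre
rng = np.random.default_rng(3)
X = rng.uniform([40, 0.5], [120, 10], size=(200, 2))
y = 2000 * X[:, 0] - 3000 * X[:, 1] + rng.normal(0, 5000, 200)
print(two_stage_predict(X, y, X[:3]))
```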

  20. Clustered nested sampling: efficient Bayesian inference for cosmology

    CERN Document Server

    Shaw, R; Hobson, M P

    2007-01-01

    Bayesian model selection provides the cosmologist with an exacting tool to distinguish between competing models based purely on the data, via the Bayesian evidence. Previous methods to calculate this quantity either lacked general applicability or were computationally demanding. However, nested sampling (Skilling 2004), which was recently applied successfully to cosmology by Mukherjee et al. 2006, overcomes both of these impediments. Their implementation restricts the parameter space sampled, and thus improves the efficiency, using a decreasing ellipsoidal bound in the $n$-dimensional parameter space centred on the maximum likelihood point. However, if the likelihood function contains any multi-modality, then the ellipse is prevented from constraining the sampling region efficiently. In this paper we introduce a method of clustered ellipsoidal nested sampling which can form multiple ellipses around each individual peak in the likelihood. In addition we have implemented a method for determining the expectation...

  1. On Two-stage Seamless Adaptive Design in Clinical Trials

    Directory of Open Access Journals (Sweden)

    Shein-Chung Chow

    2008-12-01

    In recent years, the use of adaptive design methods in clinical research and development based on accrued data has become very popular because of its efficiency and flexibility in modifying trial and/or statistical procedures of ongoing clinical trials. One of the most commonly considered adaptive designs is probably a two-stage seamless adaptive trial design that combines two separate studies into one single study. In many cases, study endpoints considered in a two-stage seamless adaptive design may be similar but different (e.g. a biomarker versus a regular clinical endpoint, or the same study endpoint with different treatment durations). In this case, it is important to determine how the data collected from both stages should be combined for the final analysis. It is also of interest to know how the sample size calculation/allocation should be done for achieving the study objectives originally set for the two stages (separate studies). In this article, formulas for sample size calculation/allocation are derived for cases in which the study endpoints are continuous, discrete (e.g. binary responses), or time-to-event data, assuming that there is a well-established relationship between the study endpoints at different stages, and that the study objectives at different stages are the same. In cases in which the study objectives at different stages are different (e.g. dose finding at the first stage and efficacy confirmation at the second stage) and when there is a shift in patient population caused by protocol amendments, the derived test statistics and formulas for sample size calculation and allocation are necessarily modified for controlling the overall type I error at the prespecified level.

  2. A Two Stage Classification Approach for Handwritten Devanagari Characters

    CERN Document Server

    Arora, Sandhya; Nasipuri, Mita; Malik, Latesh

    2010-01-01

    The paper presents a two-stage classification approach for handwritten Devanagari characters. The first stage uses structural properties like the shirorekha and the spine of a character, and the second stage exploits some intersection features of characters, which are fed to a feedforward neural network. A simple histogram-based method does not work for finding the shirorekha and vertical bar (spine) in handwritten Devanagari characters, so we designed a differential-distance-based technique to find a near-straight line for the shirorekha and spine. This approach has been tested on 50000 samples and we obtained an 89.12% success rate.

  3. Sampling Within k-Means Algorithm to Cluster Large Datasets

    Energy Technology Data Exchange (ETDEWEB)

    Bejarano, Jeremy [Brigham Young University; Bose, Koushiki [Brown University; Brannan, Tyler [North Carolina State University; Thomas, Anita [Illinois Institute of Technology; Adragni, Kofi [University of Maryland; Neerchal, Nagaraj [University of Maryland; Ostrouchov, George [ORNL

    2011-08-01

    Due to current data collection technology, our ability to gather data has surpassed our ability to analyze it. In particular, k-means, one of the simplest and fastest clustering algorithms, is ill-equipped to handle extremely large datasets on even the most powerful machines. Our new algorithm uses a sample from a dataset to decrease runtime by reducing the amount of data analyzed. We perform a simulation study to compare our sampling-based k-means to the standard k-means algorithm by analyzing both the speed and accuracy of the two methods. Results show that our algorithm is significantly more efficient than the existing algorithm with comparable accuracy. Further work on this project might include a more comprehensive study, both on more varied test datasets and on real weather datasets. This is especially important considering that this preliminary study was performed on rather tame datasets. Future studies should also analyze the performance of the algorithm for varied values of k. Lastly, this paper showed that the algorithm was accurate for relatively low sample sizes. We would like to analyze this further to see how accurate the algorithm is for even lower sample sizes. We could find the lowest sample sizes, by manipulating width and confidence level, for which the algorithm would be acceptably accurate. In order for our algorithm to be a success, it needs to meet two benchmarks: match the accuracy of the standard k-means algorithm and significantly reduce runtime. Both goals are accomplished for all six datasets analyzed. However, on datasets of three and four dimensions, as the data becomes more difficult to cluster, both algorithms fail to obtain the correct classifications on some trials. Nevertheless, our algorithm consistently matches the performance of the standard algorithm while becoming remarkably more efficient with time. Therefore, we conclude that analysts can use our algorithm, expecting accurate results in considerably less time.
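    A minimal sketch of the sampling idea, assuming scikit-learn's KMeans and a fixed sampling fraction (the authors' sample-size rule based on width and confidence level is not reproduced here): fit the centroids on a random subsample, then label the full dataset by nearest fitted centroid.

```python
import numpy as np
from sklearn.cluster import KMeans

def sample_kmeans(X, k, sample_frac=0.1, seed=0):
    """Fit k-means on a random subsample, then label the full dataset by
    nearest fitted centroid. A generic sketch of the sampling approach;
    the cited algorithm's sample-size rule may differ.
    """
    rng = np.random.default_rng(seed)
    n = len(X)
    idx = rng.choice(n, size=max(k, int(sample_frac * n)), replace=False)
    km = KMeans(n_clusters=k, n_init=10, random_state=seed).fit(X[idx])
    return km.cluster_centers_, km.predict(X)

# Three well-separated blobs; a 10% sample recovers the structure
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(c, 0.5, size=(10000, 2))
               for c in ((0, 0), (5, 5), (0, 5))])
centers, labels = sample_kmeans(X, k=3)
print(np.round(centers, 2))
```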

  4. Calculating sample sizes for cluster randomized trials: we can keep it simple and efficient !

    NARCIS (Netherlands)

    van Breukelen, Gerard J.P.; Candel, Math J.J.M.

    2012-01-01

    Objective: Simple guidelines for efficient sample sizes in cluster randomized trials with unknown intraclass correlation and varying cluster sizes. Methods: A simple equation is given for the optimal number of clusters and sample size per cluster. Here, optimal means maximizing power for a given

  5. Composite likelihood and two-stage estimation in family studies

    DEFF Research Database (Denmark)

    Andersen, Elisabeth Anne Wreford

    2002-01-01

    Composite likelihood; Two-stage estimation; Family studies; Copula; Optimal weights; All possible pairs

  6. On the robustness of two-stage estimators

    KAUST Repository

    Zhelonkin, Mikhail

    2012-04-01

    The aim of this note is to provide a general framework for the analysis of the robustness properties of a broad class of two-stage models. We derive the influence function, the change-of-variance function, and the asymptotic variance of a general two-stage M-estimator, and provide their interpretations. We illustrate our results in the case of the two-stage maximum likelihood estimator and the two-stage least squares estimator. © 2011.

  7. Merging and Clustering of the Swift BAT AGN Sample

    CERN Document Server

    Koss, Michael; Veilleux, Sylvain; Winter, Lisa; 10.1088/2041-8205/716/2/L125

    2010-01-01

    We discuss the merger rate, close galaxy environment, and clustering on scales up to a Mpc of the SWIFT BAT hard X-ray sample of nearby (z<0.05), moderate-luminosity active galactic nuclei (AGN). We find a higher incidence of galaxies with signs of disruption compared to a matched control sample (18% versus 1%) and of close pairs within 30 kpc (24% versus 1%). We also find a larger fraction with companions compared to normal galaxies and optical emission line selected AGN at scales up to 250 kpc. We hypothesize that these merging AGN may not be identified using optical emission line diagnostics because of optical extinction and dilution by star formation. In support of this hypothesis, in merging systems we find a higher hard X-ray to [OIII] flux ratio, as well as emission line diagnostics characteristic of composite or star-forming galaxies, and a larger IRAS 60 um to stellar mass ratio.

  8. Sample size for cluster randomized trials: effect of coefficient of variation of cluster size and analysis method.

    Science.gov (United States)

    Eldridge, Sandra M; Ashby, Deborah; Kerry, Sally

    2006-10-01

    Cluster randomized trials are increasingly popular. In many of these trials, cluster sizes are unequal. This can affect trial power, but standard sample size formulae for these trials ignore this. Previous studies addressing this issue have mostly focused on continuous outcomes or methods that are sometimes difficult to use in practice. We show how a simple formula can be used to judge the possible effect of unequal cluster sizes for various types of analyses and both continuous and binary outcomes. We explore the practical estimation of the coefficient of variation of cluster size required in this formula and demonstrate the formula's performance for a hypothetical but typical trial randomizing UK general practices. The simple formula provides a good estimate of sample size requirements for trials analysed using cluster-level analyses weighting by cluster size and a conservative estimate for other types of analyses. For trials randomizing UK general practices the coefficient of variation of cluster size depends on variation in practice list size, variation in incidence or prevalence of the medical condition under examination, and practice and patient recruitment strategies, and for many trials is expected to be approximately 0.65. Individual-level analyses can be noticeably more efficient than some cluster-level analyses in this context. When the coefficient of variation is <0.23, the effect of adjustment for variable cluster size on sample size is negligible. Most trials randomizing UK general practices and many other cluster randomized trials should account for variable cluster size in their sample size calculations.
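    The "simple formula" is commonly quoted as a design effect DEFF = 1 + ((cv^2 + 1)·m − 1)·ρ, where m is the mean cluster size, cv the coefficient of variation of cluster size, and ρ the intracluster correlation; readers should confirm the exact form and its assumptions in the paper. A small sketch under that assumption:

```python
import math

def design_effect(mean_cluster_size, icc, cv=0.0):
    """Design effect for variable cluster sizes:
    DEFF = 1 + ((cv^2 + 1) * m - 1) * icc.
    The commonly quoted form of the adjustment discussed in the abstract;
    consult the paper for its exact assumptions.
    """
    m = mean_cluster_size
    return 1 + ((cv ** 2 + 1) * m - 1) * icc

def clusters_needed(n_individuals, mean_cluster_size, icc, cv=0.0):
    """Clusters per arm needed to match an individually randomized trial
    requiring `n_individuals` per arm (illustrative inputs)."""
    deff = design_effect(mean_cluster_size, icc, cv)
    return math.ceil(n_individuals * deff / mean_cluster_size)

# With ICC = 0.05 and mean practice size 20, unequal sizes (cv = 0.65)
# cost a few extra clusters; below cv ~ 0.23 the effect is negligible.
for cv in (0.0, 0.23, 0.65):
    print(f"cv={cv}: DEFF={design_effect(20, 0.05, cv):.2f}, "
          f"clusters/arm={clusters_needed(300, 20, 0.05, cv)}")
```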

  9. Two-stage designs for cross-over bioequivalence trials.

    Science.gov (United States)

    Kieser, Meinhard; Rauch, Geraldine

    2015-07-20

    The topic of applying two-stage designs in the field of bioequivalence studies has recently gained attention in the literature and in regulatory guidelines. While there exists some methodological research on the application of group sequential designs in bioequivalence studies, implementation of adaptive approaches has focused up to now on superiority and non-inferiority trials. Especially, no comparison of the features and performance characteristics of these designs has been performed, and therefore, the question of which design to employ in this setting remains open. In this paper, we discuss and compare 'classical' group sequential designs and three types of adaptive designs that offer the option of mid-course sample size recalculation. A comprehensive simulation study demonstrates that group sequential designs can be identified, which show power characteristics that are similar to those of the adaptive designs but require a lower average sample size. The methods are illustrated with a real bioequivalence study example.

  10. The Representative XMM-Newton Cluster Structure Survey (REXCESS) of an X-ray Luminosity Selected Galaxy Cluster Sample

    CERN Document Server

    Böhringer, H; Pratt, G W; Arnaud, M; Ponman, T J; Croston, J H; Borgani, S; Bower, R G; Briel, U G; Collins, C A; Donahue, M; Forman, W R; Finoguenov, A; Geller, M J; Guzzo, L; Henry, J P; Kneissl, R; Mohr, J J; Matsushita, K; Mullis, C R; Ohashi, T; Pedersen, K; Pierini, D; Quintana, H; Raychaudhuri, S; Reiprich, T H; Romer, A K; Rosati, P; Sabirli, K; Temple, R F; Viana, P T P; Vikhlinin, A; Voit, G M; Zhang, Y Y

    2007-01-01

    The largest uncertainty for cosmological studies using clusters of galaxies is introduced by our limited knowledge of the statistics of galaxy cluster structure, and of the scaling relations between observables and cluster mass. To improve on this situation we have started an XMM-Newton Large Programme for the in-depth study of a representative sample of 33 galaxy clusters, selected in the redshift range z=0.055 to 0.183 from the REFLEX Cluster Survey, having X-ray luminosities above 0.4 × 10^44 h_70^-2 erg s^-1 in the 0.1 - 2.4 keV band. This paper introduces the sample, compiles properties of the clusters, and provides detailed information on the sample selection function. We describe the selection of a nearby galaxy cluster sample that makes optimal use of the XMM-Newton field-of-view, and provides nearly homogeneous X-ray luminosity coverage for the full range from poor clusters to the most massive objects in the Universe. For the clusters in the sample, X-ray fluxes are derived and compared to the previo...

  11. A two-stage method for inverse medium scattering

    KAUST Repository

    Ito, Kazufumi

    2013-03-01

    We present a novel numerical method to the time-harmonic inverse medium scattering problem of recovering the refractive index from noisy near-field scattered data. The approach consists of two stages, one pruning step of detecting the scatterer support, and one resolution enhancing step with nonsmooth mixed regularization. The first step is strictly direct and of sampling type, and it faithfully detects the scatterer support. The second step is an innovative application of nonsmooth mixed regularization, and it accurately resolves the scatterer size as well as intensities. The nonsmooth model can be efficiently solved by a semi-smooth Newton-type method. Numerical results for two- and three-dimensional examples indicate that the new approach is accurate, computationally efficient, and robust with respect to data noise. © 2012 Elsevier Inc.

  12. Empirical power and sample size calculations for cluster-randomized and cluster-randomized crossover studies.

    Science.gov (United States)

    Reich, Nicholas G; Myers, Jessica A; Obeng, Daniel; Milstone, Aaron M; Perl, Trish M

    2012-01-01

    In recent years, the number of studies using a cluster-randomized design has grown dramatically. In addition, the cluster-randomized crossover design has been touted as a methodological advance that can increase efficiency of cluster-randomized studies in certain situations. While the cluster-randomized crossover trial has become a popular tool, standards of design, analysis, reporting and implementation have not been established for this emergent design. We address one particular aspect of cluster-randomized and cluster-randomized crossover trial design: estimating statistical power. We present a general framework for estimating power via simulation in cluster-randomized studies with or without one or more crossover periods. We have implemented this framework in the clusterPower software package for R, freely available online from the Comprehensive R Archive Network. Our simulation framework is easy to implement and users may customize the methods used for data analysis. We give four examples of using the software in practice. The clusterPower package could play an important role in the design of future cluster-randomized and cluster-randomized crossover studies. This work is the first to establish a universal method for calculating power for both cluster-randomized and cluster-randomized clinical trials. More research is needed to develop standardized and recommended methodology for cluster-randomized crossover studies.
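    clusterPower is an R package; as a language-neutral illustration of the simulation framework it implements, the Python sketch below estimates power for a simple two-arm parallel cluster-randomized trial analysed by a t-test on cluster means. All parameter values are illustrative, and the package itself supports far richer designs (crossover periods, binary outcomes, custom analyses).

```python
import numpy as np
from scipy import stats

def crt_power(n_clusters_per_arm, m, effect, icc, sigma2=1.0,
              alpha=0.05, n_sims=2000, seed=0):
    """Monte Carlo power for a two-arm cluster-randomized trial with a
    normal outcome, analysed by a t-test on cluster means. A plain-Python
    analogue of the simulation approach, not the clusterPower API.
    """
    rng = np.random.default_rng(seed)
    sb2, sw2 = icc * sigma2, (1 - icc) * sigma2  # between/within variance
    hits = 0
    for _ in range(n_sims):
        means = []
        for arm_effect in (0.0, effect):
            b = rng.normal(arm_effect, np.sqrt(sb2), n_clusters_per_arm)
            y = b[:, None] + rng.normal(0, np.sqrt(sw2),
                                        (n_clusters_per_arm, m))
            means.append(y.mean(axis=1))
        _, p = stats.ttest_ind(means[0], means[1])
        hits += p < alpha
    return hits / n_sims

print(crt_power(n_clusters_per_arm=15, m=20, effect=0.3, icc=0.05))
```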

  13. Dynamical Analyses of Galaxy Clusters With Large Redshift Samples

    Science.gov (United States)

    Mohr, J. J.; Richstone, D. O.; Wegner, G.

    1998-12-01

    We construct equilibrium models of galaxy orbits in five nearby galaxy clusters to study the distribution of binding mass, the nature of galaxy orbits and the kinematic differences between cluster populations of emission-line and non emission-line galaxies. We avail ourselves of 1718 galaxy redshifts (and 1203 cluster member redshifts) in this Jeans analysis; most of these redshifts are new, coming from multifiber spectroscopic runs on the MDM 2.4m with the Decaspec and queue observing on WIYN with Hydra. In addition to the spectroscopic data we have V and R band CCD mosaics (obtained with the MDM 1.3m) of the Abell region in each of these clusters. Our scientific goals include: (i) a quantitative estimate of the range of binding masses M500 consistent with the optical and X-ray data, (ii) an estimate of the typical galaxy orbital anisotropies required to make the galaxy data consistent with the NFW expectation for the cluster potential, (iii) a better understanding of the systematics inherent in the process of rescaling and "stacking" galaxy cluster observations, (iv) a reexamination of the recent CNOC results implying that emission-line (blue) galaxies are an equilibrium population with a more extended radial distribution than their non emission-line (red) galaxy counterparts and (v) a measure of the galaxy contribution to the cluster mass of baryons.

  14. Tigers on trails: occupancy modeling for cluster sampling.

    Science.gov (United States)

    Hines, J E; Nichols, J D; Royle, J A; MacKenzie, D I; Gopalaswamy, A M; Kumar, N Samba; Karanth, K U

    2010-07-01

    ... estimation in conservation monitoring. More generally, this work represents a contribution to the topic of cluster sampling for situations in which there is a need for specific modeling (e.g., reflecting dependence) for the distribution of the variable(s) of interest among subunits.

  15. Gas loading system for LANL two-stage gas guns

    Science.gov (United States)

    Gibson, Lee; Bartram, Brian; Dattelbaum, Dana; Lang, John; Morris, John

    2015-06-01

    A novel gas loading system was designed for the specific application of remotely loading high purity gases into targets for gas-gun driven plate impact experiments. The high purity gases are loaded into well-defined target configurations to obtain Hugoniot states in the gas phase at greater than ambient pressures. The small volume of the gas samples is challenging, as slight changes in the ambient temperature result in measurable pressure changes. Therefore, the ability to load a gas gun target and continually monitor the sample pressure prior to firing provides the most stable and reliable target fielding approach. We present the design and evaluation of a gas loading system built for the LANL 50 mm bore two-stage light gas gun. Targets for the gun are made of 6061 Al or OFHC Cu, and assembled to form a gas containment cell with a volume of approximately 1.38 cc. The compatibility of materials was a major consideration in the design of the system, particularly for its use with corrosive gases. Piping and valves are stainless steel with wetted seals made from Kalrez and Teflon. Preliminary testing was completed to ensure proper flow rate and that the proper safety controls were in place. The system has been used to successfully load Ar, Kr, Xe, and anhydrous ammonia with purities of up to 99.999 percent. The design of the system and example data from the plate impact experiments will be shown. LA-UR-15-20521

  16. Effect of Silica Fume on two-stage Concrete Strength

    Science.gov (United States)

    Abdelgader, H. S.; El-Baden, A. S.

    2015-11-01

    Two-stage concrete (TSC) is an innovative concrete that does not require vibration for placing and compaction. TSC is a simple concept; it is made using the same basic constituents as traditional concrete: cement, coarse aggregate, sand and water, as well as mineral and chemical admixtures. As its name suggests, it is produced through a two-stage process. Firstly, washed coarse aggregate is placed into the formwork in-situ. Later, a specifically designed self-compacting grout is introduced into the form from the lowest point under gravity pressure to fill the voids, cementing the aggregate into a monolith. The hardened concrete is dense, homogeneous and has in general improved engineering properties and durability. This paper presents the results from a research effort to study the effect of silica fume (SF) and superplasticizer admixtures (SP) on compressive and tensile strength of TSC using various combinations of water to cement ratio (w/c) and cement to sand ratio (c/s). Thirty-six concrete mixes with different grout constituents were tested. From each mix, twenty-four standard cylinder samples of size (150mm×300mm) of concrete containing crushed aggregate were produced. The tested samples were made from combinations of w/c equal to: 0.45, 0.55 and 0.85, and three c/s of values: 0.5, 1 and 1.5. Silica fume was added at a dosage of 6% of weight of cement, while superplasticizer was added at a dosage of 2% of cement weight. Results indicated that both tensile and compressive strength of TSC can be statistically derived as a function of w/c and c/s with good correlation coefficients. The basic principle of traditional concrete, which says that an increase in water/cement ratio will lead to a reduction in compressive strength, was shown to hold true for the TSC specimens tested. Using a combination of both silica fume and superplasticizers caused a significant increase in strength relative to control mixes.

  17. Treatment of cadmium dust with two-stage leaching process

    Institute of Scientific and Technical Information of China (English)

    2000-01-01

    The treatment of cadmium dust with a two-stage leaching process was investigated to replace the existing sulphation roast-leaching processes. The process parameters in the first stage leaching were basically similar to the neutral leaching in zinc hydrometallurgy. The effects of process parameters in the second stage leaching on the extraction of zinc and cadmium were mainly studied. The experimental results indicated that zinc and cadmium could be efficiently recovered from the cadmium dust by the two-stage leaching process. The extraction percentages of zinc and cadmium in two-stage leaching reached 95% and 88% respectively under the optimum conditions. The total extraction percentage of Zn and Cd reached 94%.

  18. High magnetostriction parameters for low-temperature sintered cobalt ferrite obtained by two-stage sintering

    Energy Technology Data Exchange (ETDEWEB)

    Khaja Mohaideen, K.; Joy, P.A., E-mail: pa.joy@ncl.res.in

    2014-12-15

    From the studies on the magnetostriction characteristics of two-stage sintered polycrystalline CoFe{sub 2}O{sub 4} made from nanocrystalline powders, it is found that two-stage sintering at low temperatures is very effective for enhancing the density and for attaining higher magnetostriction coefficient. Magnetostriction coefficient and strain derivative are further enhanced by magnetic field annealing and relatively larger enhancement in the magnetostriction parameters is obtained for the samples sintered at lower temperatures, after magnetic annealing, despite the fact that samples sintered at higher temperatures show larger magnetostriction coefficients before annealing. A high magnetostriction coefficient of ∼380 ppm is obtained after field annealing for the sample sintered at 1100 °C, below a magnetic field of 400 kA/m, which is the highest value so far reported at low magnetic fields for sintered polycrystalline cobalt ferrite. - Highlights: • Effect of two-stage sintering on the magnetostriction characteristics of CoFe{sub 2}O{sub 4} is studied. • Two-stage sintering is very effective for enhancing the density and the magnetostriction parameters. • Higher magnetostriction for samples sintered at low temperatures and after magnetic field annealing. • Highest reported magnetostriction of 380 ppm at low fields after two-stage, low-temperature sintering.

  19. LOGISTICS SCHEDULING: ANALYSIS OF TWO-STAGE PROBLEMS

    Institute of Scientific and Technical Information of China (English)

    Yung-Chia CHANG; Chung-Yee LEE

    2003-01-01

    This paper studies the coordination effects between stages for scheduling problems where decision-making is a two-stage process. Two stages are considered as one system. The system can be a supply chain that links two stages, one stage representing a manufacturer; and the other, a distributor.It also can represent a single manufacturer, while each stage represents a different department responsible for a part of operations. A problem that jointly considers both stages in order to achieve ideal overall system performance is defined as a system problem. In practice, at times, it might not be feasible for the two stages to make coordinated decisions due to (i) the lack of channels that allow decision makers at the two stages to cooperate, and/or (ii) the optimal solution to the system problem is too difficult (or costly) to achieve.Two practical approaches are applied to solve a variant of two-stage logistic scheduling problems. The Forward Approach is defined as a solution procedure by which the first stage of the system problem is solved first, followed by the second stage. Similarly, the Backward Approach is defined as a solution procedure by which the second stage of the system problem is solved prior to solving the first stage. In each approach, two stages are solved sequentially and the solution generated is treated as a heuristic solution with respect to the corresponding system problem. When decision makers at two stages make decisions locally without considering consequences to the entire system,ineffectiveness may result - even when each stage optimally solves its own problem. The trade-off between the time complexity and the solution quality is the main concern. This paper provides the worst-case performance analysis for each approach.

  20. Residential Two-Stage Gas Furnaces - Do They Save Energy?

    Energy Technology Data Exchange (ETDEWEB)

    Lekov, Alex; Franco, Victor; Lutz, James

    2006-05-12

    Residential two-stage gas furnaces account for almost a quarter of the total number of models listed in the March 2005 GAMA directory of equipment certified for sale in the United States. Two-stage furnaces are expanding their presence in the market mostly because they meet consumer expectations for improved comfort. Currently, the U.S. Department of Energy (DOE) test procedure serves as the method for reporting furnace total fuel and electricity consumption under laboratory conditions. In 2006, the American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) proposed an update to its test procedure which corrects some of the discrepancies found in the DOE test procedure and provides an improved methodology for calculating the energy consumption of two-stage furnaces. The objectives of this paper are to explore the differences in the methods for calculating two-stage residential gas furnace energy consumption in the DOE test procedure and in the 2006 ASHRAE test procedure, and to compare test results to research results from field tests. Overall, the DOE test procedure shows a reduction in the total site energy consumption of about 3 percent for two-stage compared to single-stage furnaces at the same efficiency level. In contrast, the 2006 ASHRAE test procedure shows almost no difference in the total site energy consumption. The 2006 ASHRAE test procedure appears to provide a better methodology for calculating the energy consumption of two-stage furnaces. The results indicate that, although two-stage technology by itself does not save site energy, the combination of two-stage furnaces with BPM motors provides electricity savings, which are confirmed by field studies.

  1. Two-stage local M-estimation of additive models

    Institute of Scientific and Technical Information of China (English)

    JIANG JianCheng; LI JianTao

    2008-01-01

    This paper studies local M-estimation of the nonparametric components of additive models. A two-stage local M-estimation procedure is proposed for estimating the additive components and their derivatives. Under very mild conditions, the proposed estimators of each additive component and its derivative are jointly asymptotically normal and share the same asymptotic distributions as they would be if the other components were known. The established asymptotic results also hold for two particular local M-estimations: the local least squares and least absolute deviation estimations. However, for general two-stage local M-estimation with continuous and nonlinear ψ-functions, its implementation is time-consuming. To reduce the computational burden, one-step approximations to the two-stage local M-estimators are developed. The one-step estimators are shown to achieve the same efficiency as the fully iterative two-stage local M-estimators, which makes the two-stage local M-estimation more feasible in practice. The proposed estimators inherit the advantages and at the same time overcome the disadvantages of the local least-squares based smoothers. In addition, the practical implementation of the proposed estimation is considered in detail. Simulations demonstrate the merits of the two-stage local M-estimation, and a real example illustrates the performance of the methodology.

  2. Two-stage local M-estimation of additive models

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    This paper studies local M-estimation of the nonparametric components of additive models. A two-stage local M-estimation procedure is proposed for estimating the additive components and their derivatives. Under very mild conditions, the proposed estimators of each additive component and its derivative are jointly asymptotically normal and share the same asymptotic distributions as they would be if the other components were known. The established asymptotic results also hold for two particular local M-estimations: the local least squares and least absolute deviation estimations. However, for general two-stage local M-estimation with continuous and nonlinear ψ-functions, its implementation is time-consuming. To reduce the computational burden, one-step approximations to the two-stage local M-estimators are developed. The one-step estimators are shown to achieve the same efficiency as the fully iterative two-stage local M-estimators, which makes the two-stage local M-estimation more feasible in practice. The proposed estimators inherit the advantages and at the same time overcome the disadvantages of the local least-squares based smoothers. In addition, the practical implementation of the proposed estimation is considered in detail. Simulations demonstrate the merits of the two-stage local M-estimation, and a real example illustrates the performance of the methodology.

  3. Sample size calculations for 3-level cluster randomized trials

    NARCIS (Netherlands)

    Teerenstra, S.; Moerbeek, M.; Achterberg, T. van; Pelzer, B.J.; Borm, G.F.

    2008-01-01

    BACKGROUND: The first applications of cluster randomized trials with three instead of two levels are beginning to appear in health research, for instance, in trials where different strategies to implement best-practice guidelines are compared. In such trials, the strategy is implemented in health

  4. Sample size calculations for 3-level cluster randomized trials

    NARCIS (Netherlands)

    Teerenstra, S.; Moerbeek, M.; Achterberg, T. van; Pelzer, B.J.; Borm, G.F.

    2008-01-01

    Background The first applications of cluster randomized trials with three instead of two levels are beginning to appear in health research, for instance, in trials where different strategies to implement best-practice guidelines are compared. In such trials, the strategy is implemented in health

  5. Sample size calculations for 3-level cluster randomized trials

    NARCIS (Netherlands)

    Teerenstra, S.; Moerbeek, M.; Achterberg, T. van; Pelzer, B.J.; Borm, G.F.

    2008-01-01

    BACKGROUND: The first applications of cluster randomized trials with three instead of two levels are beginning to appear in health research, for instance, in trials where different strategies to implement best-practice guidelines are compared. In such trials, the strategy is implemented in health ca

  6. Sample size calculations for 3-level cluster randomized trials

    NARCIS (Netherlands)

    Teerenstra, S.; Moerbeek, M.; Achterberg, T. van; Pelzer, B.J.; Borm, G.F.

    2008-01-01

    Background The first applications of cluster randomized trials with three instead of two levels are beginning to appear in health research, for instance, in trials where different strategies to implement best-practice guidelines are compared. In such trials, the strategy is implemented in health car

  7. Variation in rank abundance replicate samples and impact of clustering

    NARCIS (Netherlands)

    Neuteboom, J.H.; Struik, P.C.

    2005-01-01

    Calculating a single-sample rank abundance curve by using the negative-binomial distribution provides a way to investigate the variability within rank abundance replicate samples and yields a measure of the degree of heterogeneity of the sampled community. The calculation of the single-sample rank a

  8. High Frequency Cluster Radio Galaxies: Luminosity Functions and Implications for SZE Selected Cluster Samples

    CERN Document Server

    Gupta, N; Mohr, J J; Benson, B A; Bocquet, S; Carlstrom, J E; Capasso, R; Chiu, I; Crawford, T M; de Haan, T; Dietrich, J P; Gangkofner, C; Holzapfel, W L; McDonald, M; Rapetti, D; Reichardt, C L

    2016-01-01

    We study the overdensity of point sources in the direction of X-ray-selected galaxy clusters from the Meta-Catalog of X-ray detected Clusters of galaxies (MCXC; $\\langle z \\rangle = 0.14$) at South Pole Telescope (SPT) and Sydney University Molonglo Sky Survey (SUMSS) frequencies. Flux densities at 95, 150 and 220 GHz are extracted from the 2500 deg$^2$ SPT-SZ survey maps at the locations of SUMSS sources, producing a multi-frequency catalog of radio galaxies. In the direction of massive galaxy clusters, the radio galaxy flux densities at 95 and 150 GHz are biased low by the cluster Sunyaev-Zel'dovich Effect (SZE) signal, which is negative at these frequencies. We employ a cluster SZE model to remove the expected flux bias and then study these corrected source catalogs. We find that the high frequency radio galaxies are centrally concentrated within the clusters and that their luminosity functions (LFs) exhibit amplitudes that are characteristically an order of magnitude lower than the cluster LF at 843 MHz. ...

  9. The CSS and The Two-Staged Methods for Parameter Estimation in SARFIMA Models

    Directory of Open Access Journals (Sweden)

    Erol Egrioglu

    2011-01-01

    Seasonal Autoregressive Fractionally Integrated Moving Average (SARFIMA) models are used in the analysis of seasonal long-memory-dependent time series. Two methods, conditional sum of squares (CSS) and the two-staged method introduced by Hosking (1984), are proposed to estimate the parameters of SARFIMA models. However, no simulation study has been conducted in the literature. Therefore, it is not known how these methods behave under different parameter settings and sample sizes in SARFIMA models. The aim of this study is to show the behavior of these methods by a simulation study. According to the results of the simulation, advantages and disadvantages of both methods under different parameter settings and sample sizes are discussed by comparing the root mean square error (RMSE) obtained by the CSS and two-staged methods. As a result of the comparison, it is seen that the CSS method produces better results than those obtained from the two-staged method.

  10. Accurate recapture identification for genetic mark-recapture studies with error-tolerant likelihood-based match calling and sample clustering.

    Science.gov (United States)

    Sethi, Suresh A; Linden, Daniel; Wenburg, John; Lewis, Cara; Lemons, Patrick; Fuller, Angela; Hare, Matthew P

    2016-12-01

    Error-tolerant likelihood-based match calling presents a promising technique to accurately identify recapture events in genetic mark-recapture studies by combining probabilities of latent genotypes and probabilities of observed genotypes, which may contain genotyping errors. Combined with clustering algorithms to group samples into sets of recaptures based upon pairwise match calls, these tools can be used to reconstruct accurate capture histories for mark-recapture modelling. Here, we assess the performance of a recently introduced error-tolerant likelihood-based match-calling model and sample clustering algorithm for genetic mark-recapture studies. We assessed both biallelic (i.e. single nucleotide polymorphisms; SNP) and multiallelic (i.e. microsatellite; MSAT) markers using a combination of simulation analyses and case study data on Pacific walrus (Odobenus rosmarus divergens) and fishers (Pekania pennanti). A novel two-stage clustering approach is demonstrated for genetic mark-recapture applications. First, repeat captures within a sampling occasion are identified. Subsequently, recaptures across sampling occasions are identified. The likelihood-based matching protocol performed well in simulation trials, demonstrating utility for use in a wide range of genetic mark-recapture studies. Moderately sized SNP (64+) and MSAT (10-15) panels produced accurate match calls for recaptures and accurate non-match calls for samples from closely related individuals in the face of low to moderate genotyping error. Furthermore, matching performance remained stable or increased as the number of genetic markers increased, genotyping error notwithstanding.

  11. Scaling up the DBSCAN Algorithm for Clustering Large Spatial Databases Based on Sampling Technique

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    Clustering, in data mining, is a useful technique for discovering interesting data distributions and patterns in the underlying data, and has many application fields, such as statistical data analysis, pattern recognition, image processing, etc. We combine a sampling technique with the DBSCAN algorithm to cluster large spatial databases, and two sampling-based DBSCAN (SDBSCAN) algorithms are developed. One algorithm introduces sampling inside DBSCAN, and the other uses a sampling procedure outside DBSCAN. Experimental results demonstrate that our algorithms are effective and efficient in clustering large-scale spatial databases.
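
    A minimal sketch of the "sampling outside DBSCAN" variant (Python with scikit-learn; the rule used here for labelling unsampled points, nearest sampled core point within eps, is one simple choice rather than the paper's algorithm):

        import numpy as np
        from sklearn.cluster import DBSCAN
        from sklearn.datasets import make_blobs
        from sklearn.neighbors import NearestNeighbors

        EPS, MIN_SAMPLES, SAMPLE_FRAC = 0.5, 5, 0.2
        X, _ = make_blobs(n_samples=20000, centers=5, cluster_std=0.4,
                          random_state=0)

        # Stage 1: run DBSCAN on a random sample only.
        rng = np.random.default_rng(0)
        idx = rng.choice(len(X), size=int(SAMPLE_FRAC * len(X)), replace=False)
        db = DBSCAN(eps=EPS, min_samples=MIN_SAMPLES).fit(X[idx])

        # Stage 2: label every point by its nearest sampled core point,
        # treating points farther than eps from any core point as noise.
        core = X[idx][db.core_sample_indices_]
        core_labels = db.labels_[db.core_sample_indices_]
        dist, nn = NearestNeighbors(n_neighbors=1).fit(core).kneighbors(X)
        labels = np.where(dist[:, 0] <= EPS, core_labels[nn[:, 0]], -1)
        print(np.unique(labels, return_counts=True))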

  12. Cluster sampling with referral to improve the efficiency of estimating unmet needs among pregnant and postpartum women after disasters.

    Science.gov (United States)

    Horney, Jennifer; Zotti, Marianne E; Williams, Amy; Hsia, Jason

    2012-01-01

    Women of reproductive age, in particular women who are pregnant or fewer than 6 months postpartum, are uniquely vulnerable to the effects of natural disasters, which may create stressors for caregivers, limit access to prenatal/postpartum care, or interrupt contraception. Traditional approaches (e.g., newborn records, community surveys) to survey women of reproductive age about unmet needs may not be practical after disasters. Finding pregnant or postpartum women is especially challenging because fewer than 5% of women of reproductive age are pregnant or postpartum at any time. From 2009 to 2011, we conducted three pilots of a sampling strategy that aimed to increase the proportion of pregnant and postpartum women of reproductive age who were included in postdisaster reproductive health assessments in Johnston County, North Carolina, after tornadoes, Cobb/Douglas Counties, Georgia, after flooding, and Bertie County, North Carolina, after hurricane-related flooding. Using this method, the percentage of pregnant and postpartum women interviewed in each pilot increased from 0.06% to 21%, 8% to 19%, and 9% to 17%, respectively. Two-stage cluster sampling with referral can be used to increase the proportion of pregnant and postpartum women included in a postdisaster assessment. This strategy may be a promising way to assess unmet needs of pregnant and postpartum women in disaster-affected communities.
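
    The underlying design, a first-stage sample of clusters followed by a second-stage sample of households, with estimates weighted by inverse selection probabilities, can be sketched as a simulation (Python; all population figures are invented for illustration and the referral step is omitted):

        import numpy as np

        rng = np.random.default_rng(1)

        # Invented population: 100 clusters x 400 households; ~5% of
        # households contain a pregnant/postpartum woman, and ~30% of
        # those women have an unmet need.
        N_CLUSTERS, N_HH = 100, 400
        eligible = rng.random((N_CLUSTERS, N_HH)) < 0.05
        unmet = eligible & (rng.random((N_CLUSTERS, N_HH)) < 0.30)

        # Two-stage design: simple random sample of clusters, then a
        # simple random sample of households within each chosen cluster.
        n_cl, n_hh = 30, 40
        w = (N_CLUSTERS / n_cl) * (N_HH / n_hh)  # equal design weight
        num = den = 0.0
        for c in rng.choice(N_CLUSTERS, size=n_cl, replace=False):
            hh = rng.choice(N_HH, size=n_hh, replace=False)
            num += w * unmet[c, hh].sum()
            den += w * eligible[c, hh].sum()

        print("estimated unmet-need share:", round(num / den, 2))
        print("true unmet-need share:", round(unmet.sum() / eligible.sum(), 2))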

  13. Clustering Methods with Qualitative Data: a Mixed-Methods Approach for Prevention Research with Small Samples.

    Science.gov (United States)

    Henry, David; Dymnicki, Allison B; Mohatt, Nathaniel; Allen, James; Kelly, James G

    2015-10-01

    Qualitative methods potentially add depth to prevention research but can produce large amounts of complex data even with small samples. Studies conducted with culturally distinct samples often produce voluminous qualitative data but may lack sufficient sample sizes for sophisticated quantitative analysis. Currently lacking in mixed-methods research are methods allowing for more fully integrating qualitative and quantitative analysis techniques. Cluster analysis can be applied to coded qualitative data to clarify the findings of prevention studies by aiding efforts to reveal such things as the motives of participants for their actions and the reasons behind counterintuitive findings. By clustering groups of participants with similar profiles of codes in a quantitative analysis, cluster analysis can serve as a key component in mixed-methods research. This article reports two studies. In the first study, we conduct simulations to test the accuracy of cluster assignment using three different clustering methods with binary data as produced when coding qualitative interviews. Results indicated that hierarchical clustering, K-means clustering, and latent class analysis produced similar levels of accuracy with binary data and that the accuracy of these methods did not decrease with samples as small as 50. Whereas the first study explores the feasibility of using common clustering methods with binary data, the second study provides a "real-world" example using data from a qualitative study of community leadership connected with a drug abuse prevention project. We discuss the implications of this approach for conducting prevention research, especially with small samples and culturally distinct communities.
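
    The first study's simulation can be miniaturized as follows (Python with scikit-learn; latent class analysis is omitted since scikit-learn does not provide it, and agreement is scored with the adjusted Rand index rather than raw assignment accuracy):

        import numpy as np
        from sklearn.cluster import AgglomerativeClustering, KMeans
        from sklearn.metrics import adjusted_rand_score

        rng = np.random.default_rng(0)

        def simulate_codes(n_per_group, n_codes=20, flip=0.15):
            # Two latent profiles of binary codes observed with coding noise.
            profiles = rng.integers(0, 2, size=(2, n_codes))
            y = np.repeat([0, 1], n_per_group)
            X = profiles[y]
            noise = (rng.random(X.shape) < flip).astype(int)
            return np.abs(X - noise), y

        X, y = simulate_codes(n_per_group=25)  # n = 50, as in the study
        for name, model in [
            ("hierarchical", AgglomerativeClustering(n_clusters=2)),
            ("k-means", KMeans(n_clusters=2, n_init=10, random_state=0)),
        ]:
            labels = model.fit_predict(X)
            print(name, "ARI:", round(adjusted_rand_score(y, labels), 3))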

  15. STARS: A Two-Stage High-Gain Harmonic Generation FEL Demonstrator

    Energy Technology Data Exchange (ETDEWEB)

    M. Abo-Bakr; W. Anders; J. Bahrdt; P. Budz; K.B. Buerkmann-Gehrlein; O. Dressler; H.A. Duerr; V. Duerr; W. Eberhardt; S. Eisebitt; J. Feikes; R. Follath; A. Gaupp; R. Goergen; K. Goldammer; S.C. Hessler; K. Holldack; E. Jaeschke; Thorsten Kamps; S. Klauke; J. Knobloch; O. Kugeler; B.C. Kuske; P. Kuske; A. Meseck; R. Mitzner; R. Mueller; M. Neeb; A. Neumann; K. Ott; D. Pfluckhahn; T. Quast; M. Scheer; Th. Schroeter; M. Schuster; F. Senf; G. Wuestefeld; D. Kramer; Frank Marhauser

    2007-08-01

    BESSY is proposing a demonstration facility, called STARS, for a two-stage high-gain harmonic generation free electron laser (HGHG FEL). STARS is planned for lasing in the wavelength range 40 to 70 nm, requiring a beam energy of 325 MeV. The facility consists of a normal conducting gun, three superconducting TESLA-type acceleration modules modified for CW operation, a single-stage bunch compressor and finally a two-stage HGHG cascaded FEL. This paper describes the facility layout and the rationale behind the operation parameters.

  16. Dynamic Modelling of the Two-stage Gasification Process

    DEFF Research Database (Denmark)

    Gøbel, Benny; Henriksen, Ulrik B.; Houbak, Niels

    1999-01-01

    A two-stage gasification pilot plant was designed and built as a co-operative project between the Technical University of Denmark and the company REKA. A dynamic, mathematical model of the two-stage pilot plant was developed to serve as a tool for optimising the process and the operating conditions of the gasification plant. The model consists of modules corresponding to the different elements in the plant. The modules are coupled together through mass and heat conservation. Results from the model are compared with experimental data obtained during steady and unsteady operation of the pilot plant. A good...

  17. Adaptive cluster sampling: An efficient method for assessing inconspicuous species

    Science.gov (United States)

    Andrea M. Silletti; Joan Walker

    2003-01-01

    Restorationists typically evaluate the success of a project by estimating the population sizes of species that have been planted or seeded. Because a total census is rarely feasible, they must rely on sampling methods for population estimates. However, traditional random sampling designs may be inefficient for species that, for one reason or another, are challenging to...

  18. Analysing star cluster populations with stochastic models: the HST/WFC3 sample of clusters in M83

    CERN Document Server

    Fouesneau, Morgan; Chandar, Rupali; Whitmore, Bradley C

    2012-01-01

    The majority of clusters in the Universe have masses well below 10^5 Msun. Hence their integrated fluxes and colors can be affected by the random presence of a few bright stars introduced by stochastic sampling of the stellar mass function. Specific methods are being developed to extend the analysis of cluster SEDs into the low-mass regime. In this paper, we apply such a method to observations of star clusters in the nearby spiral galaxy M83. We reassess ages and masses of a sample of 1242 objects for which UBVI and Halpha fluxes were obtained from the HST/WFC3 images. Synthetic clusters with known properties are used to characterize the limitations of the method. The ensemble of color predictions of the discrete cluster models is in good agreement with the distribution of observed colors. We emphasize the important role of the Halpha data in the assessment of the fraction of young objects, particularly in breaking the age-extinction degeneracy that hampers an analysis based on UBVI only. We find the mass distri...

  19. Two-Stage Fuzzy Portfolio Selection Problem with Transaction Costs

    Directory of Open Access Journals (Sweden)

    Yanju Chen

    2015-01-01

    This paper studies a two-period portfolio selection problem. The problem is formulated as a two-stage fuzzy portfolio selection model with transaction costs, in which the future returns of the risky security are characterized by possibility distributions. The objective of the proposed model is to achieve the maximum utility in terms of the expected value and variance of the final wealth. Given the first-stage decision vector and a realization of the fuzzy return, the optimal value expression of the second-stage programming problem is derived. As a result, the proposed two-stage model is equivalent to a single-stage model, and the analytical optimal solution of the two-stage model is obtained, which helps us to discuss the properties of the optimal solution. Finally, some numerical experiments are performed to demonstrate the new modeling idea and its effectiveness. The computational results provided by the proposed model show that the more risk-averse investor will invest more wealth in the risk-free security. They also show that the optimal invested amount in the risky security increases as the risk-free return decreases, and the optimal utility increases as the risk-free return increases, whereas the optimal utility increases as the transaction costs decrease. In most instances the utilities provided by the proposed two-stage model are larger than those provided by the single-stage model.

  20. Efficient Two-Stage Group Testing Algorithms for DNA Screening

    CERN Document Server

    Huber, Michael

    2011-01-01

    Group testing algorithms are very useful tools for DNA library screening. Building on recent work by Levenshtein (2003) and Tonchev (2008), we construct in this paper new infinite classes of combinatorial structures, the existence of which is essential for attaining the minimum number of individual tests at the second stage of a two-stage disjunctive testing algorithm.
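
    The economics of two-stage testing are easiest to see in the classic Dorfman scheme, in which stage two simply retests every member of each positive pool; the combinatorial designs constructed in the paper serve to optimize that second stage and are not attempted here. A minimal sketch (Python; prevalence and pool sizes are illustrative):

        import numpy as np

        rng = np.random.default_rng(0)

        def dorfman_tests(status, pool_size):
            # Stage 1: one test per pool; stage 2: retest every member
            # of each positive pool individually.
            n_tests = 0
            for start in range(0, len(status), pool_size):
                pool = status[start:start + pool_size]
                n_tests += 1
                if pool.any():
                    n_tests += len(pool)
            return n_tests

        status = rng.random(10000) < 0.01  # 1% of items are positive
        for k in (5, 10, 20):
            print(f"pool size {k:2d}: {dorfman_tests(status, k)} tests "
                  f"(vs {len(status)} individual tests)")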

  1. High Performance Gasification with the Two-Stage Gasifier

    DEFF Research Database (Denmark)

    Gøbel, Benny; Hindsgaul, Claus; Henriksen, Ulrik Birk

    2002-01-01

    Based on more than 15 years of research and practical experience, the Technical University of Denmark (DTU) and COWI Consulting Engineers and Planners AS present the two-stage gasification process, a concept for high-efficiency gasification of biomass producing negligible amounts of tars. In the two-stage gasification concept, the pyrolysis and the gasification processes are physically separated. The volatiles from the pyrolysis are partially oxidized, and the hot gases are used as gasification medium to gasify the char. Hot gases from the gasifier and a combustion unit can be used for drying... a cold gas efficiency exceeding 90% is obtained. In the original design of the two-stage gasification process, the pyrolysis unit consists of a screw conveyor with external heating, and the char unit is a fixed bed gasifier. This design is well proven during more than 1000 hours of testing with various...

  2. FREE GRAFT TWO-STAGE URETHROPLASTY FOR HYPOSPADIAS REPAIR

    Institute of Scientific and Technical Information of China (English)

    Zhong-jin Yue; Ling-jun Zuo; Jia-ji Wang; Gan-ping Zhong; Jian-ming Duan; Zhi-ping Wang; Da-shan Qin

    2005-01-01

    Objective: To evaluate the effectiveness of free graft transplantation two-stage urethroplasty for hypospadias repair. Methods: Fifty-eight cases with different types of hypospadias, including 10 subcoronal, 36 penile shaft, 9 scrotal, and 3 perineal, were treated with free full-thickness skin graft or (and) buccal mucosal graft transplantation two-stage urethroplasty. Of the 58 cases, 45 were new cases and 13 had a history of previous failed surgeries. The operative procedure included two stages: the first stage is to correct the penile curvature (chordee), prepare the transplanting bed, harvest and prepare the full-thickness skin graft or buccal mucosal graft, and perform the graft transplantation; the second stage is to complete the urethroplasty and glanuloplasty. Results: After the first-stage operation, 56 of 58 cases (96.6%) were successful, with grafts healing well; the other 2 foreskin grafts became gangrenous. After the second-stage operation on the 56 cases, 5 cases failed with the newly formed urethras opening due to infection, 8 cases had fistulas, and 43 (76.8%) healed well. Conclusions: Free graft transplantation two-stage urethroplasty for hypospadias repair is an effective treatment with broad indications, a comparatively high success rate, few complications, and good cosmetic results, and it is indicated for various types of hypospadias repair.

  3. Composite likelihood and two-stage estimation in family studies

    DEFF Research Database (Denmark)

    Andersen, Elisabeth Anne Wreford

    2004-01-01

    In this paper register based family studies provide the motivation for linking a two-stage estimation procedure in copula models for multivariate failure time data with a composite likelihood approach. The asymptotic properties of the estimators in both parametric and semi-parametric models are d...

  4. The construction of customized two-stage tests

    NARCIS (Netherlands)

    Adema, Jos J.

    1990-01-01

    In this paper mixed integer linear programming models for customizing two-stage tests are given. Model constraints are imposed with respect to test composition, administration time, inter-item dependencies, and other practical considerations. It is not difficult to modify the models to make them use

  5. HICOSMO - cosmology with a complete sample of galaxy clusters - I. Data analysis, sample selection and luminosity-mass scaling relation

    Science.gov (United States)

    Schellenberger, G.; Reiprich, T. H.

    2017-08-01

    The X-ray regime, where the most massive visible component of galaxy clusters, the intracluster medium, is visible, offers directly measured quantities, like the luminosity, and derived quantities, like the total mass, to characterize these objects. The aim of this project is to analyse a complete sample of galaxy clusters in detail and constrain cosmological parameters, like the matter density, Ωm, or the amplitude of initial density fluctuations, σ8. The purely X-ray flux-limited sample (HIFLUGCS) consists of the 64 X-ray brightest galaxy clusters, which are excellent targets to study the systematic effects, that can bias results. We analysed in total 196 Chandra observations of the 64 HIFLUGCS clusters, with a total exposure time of 7.7 Ms. Here, we present our data analysis procedure (including an automated substructure detection and an energy band optimization for surface brightness profile analysis) that gives individually determined, robust total mass estimates. These masses are tested against dynamical and Planck Sunyaev-Zeldovich (SZ) derived masses of the same clusters, where good overall agreement is found with the dynamical masses. The Planck SZ masses seem to show a mass-dependent bias to our hydrostatic masses; possible biases in this mass-mass comparison are discussed including the Planck selection function. Furthermore, we show the results for the (0.1-2.4) keV luminosity versus mass scaling relation. The overall slope of the sample (1.34) is in agreement with expectations and values from literature. Splitting the sample into galaxy groups and clusters reveals, even after a selection bias correction, that galaxy groups exhibit a significantly steeper slope (1.88) compared to clusters (1.06).

  6. Cored Cottonwood Tree Sample Cluster Polygons at Sand Creek Massacre National Historic Site, Colorado

    Data.gov (United States)

    National Park Service, Department of the Interior — A vector polygon dataset representing the location of sample clusters of cored trees at Sand Creek Massacre NHS as part of a University of Colorado research study.

  7. Low X-Ray Luminosity Galaxy Clusters: Main goals, sample selection, photometric and spectroscopic observations

    CERN Document Server

    Castellón, J L Nilo; Lambas, D García; Valotto, Carlos; O'Mill, A L; Cuevas, H; Carrasco, E R; Ramírez, A; Astudillo, J M; Ramos, F; Jaque, M; Ulloa, N; Órdenes, Y

    2016-01-01

    We present the study of nineteen low X-ray luminosity galaxy clusters (L$_X \\sim$ 0.5--45 $\\times$ $10^{43}$ erg s$^{-1}$), selected from the ROSAT Position Sensitive Proportional Counters (PSPC) Pointed Observations (Vikhlinin et al. 1998) and the revised version of Mullis et al. (2003) in the redshift range of 0.16 to 0.7. This is the introductory paper of a series presenting the sample selection, photometric and spectroscopic observations and data reduction. Photometric data in different passbands were taken for eight galaxy clusters at Las Campanas Observatory; three clusters at Cerro Tololo Interamerican Observatory; and eight clusters at the Gemini Observatory. Spectroscopic data were collected for only four galaxy clusters using Gemini telescopes. With the photometry, the galaxies were defined based on the star-galaxy separation taking into account photometric parameters. For each galaxy cluster, the catalogues contain the PSF and aperture magnitudes of galaxies within the 90\\% completeness limit. They...

  8. Spectroscopy of PTCDA attached to rare gas samples: clusters vs. bulk matrices. I. Absorption spectroscopy

    CERN Document Server

    Dvorak, M; Knoblauch, T; Bünermann, O; Rydlo, A; Minniberger, S; Harbich, W; Stienkemeier, F

    2012-01-01

    The interaction between PTCDA (3,4,9,10-perylene-tetracarboxylic-dianhydride) and rare gas or para-hydrogen samples is studied by means of laser-induced fluorescence excitation spectroscopy. The comparison between spectra of PTCDA embedded in a neon matrix and spectra of PTCDA attached to large neon clusters shows that these large organic molecules reside on the surface of the clusters when doped by the pick-up technique. PTCDA molecules can adopt different conformations when attached to argon, neon and para-hydrogen clusters, which implies that the surface of such clusters has a well-defined structure and does not have liquid or fluxional properties. Moreover, a precise analysis of the doping process of these clusters reveals that the mobility of large molecules on the cluster surface is quenched, preventing agglomeration and complex formation.

  9. Sample size calculation in cost-effectiveness cluster randomized trials: optimal and maximin approaches.

    Science.gov (United States)

    Manju, Md Abu; Candel, Math J J M; Berger, Martijn P F

    2014-07-10

    In this paper, the optimal sample sizes at the cluster and person levels for each of two treatment arms are obtained for cluster randomized trials where the cost-effectiveness of treatments on a continuous scale is studied. The optimal sample sizes maximize the efficiency or power for a given budget or minimize the budget for a given efficiency or power. Optimal sample sizes require information on the intra-cluster correlations (ICCs) for effects and costs, the correlations between costs and effects at individual and cluster levels, the ratio of the variance of effects translated into costs to the variance of the costs (the variance ratio), sampling and measuring costs, and the budget. When planning a study, information on the model parameters is usually not available. To overcome this local optimality problem, the current paper also presents maximin sample sizes. The maximin sample sizes turn out to be rather robust against misspecifying the correlation between costs and effects at the cluster and individual levels but may lose much efficiency when misspecifying the variance ratio. The robustness of the maximin sample sizes against misspecifying the ICCs depends on the variance ratio. The maximin sample sizes are robust under misspecification of the ICC for costs for realistic values of the variance ratio greater than one but not robust under misspecification of the ICC for effects. Finally, we show how to calculate optimal or maximin sample sizes that yield sufficient power for a test on the cost-effectiveness of an intervention.
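
    The flavour of these calculations is captured by the textbook budget-constrained optimum for a two-level cluster randomized trial with a single effect outcome; the paper's cost-effectiveness and maximin extensions are not reproduced here. A minimal sketch (Python; the costs, budget, and ICC values are invented):

        import math

        def optimal_design(budget, cost_cluster, cost_subject, icc):
            # Budget-constrained optimum for a two-level trial: the
            # cluster size n* minimizes the variance of the treatment
            # effect, and the budget then fixes the number of clusters.
            n = math.sqrt(cost_cluster * (1 - icc) / (cost_subject * icc))
            n = max(1, round(n))
            k = int(budget // (cost_cluster + n * cost_subject))
            return k, n

        for icc in (0.01, 0.05, 0.20):
            k, n = optimal_design(budget=100_000, cost_cluster=500,
                                  cost_subject=25, icc=icc)
            print(f"ICC={icc:.2f}: {k} clusters of about {n} persons each")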

  10. Cluster analysis of infrared spectra of rabbit cortical bone samples during maturation and growth.

    Science.gov (United States)

    Kobrina, Yevgeniya; Turunen, Mikael J; Saarakkala, Simo; Jurvelin, Jukka S; Hauta-Kasari, Markku; Isaksson, Hanna

    2010-12-01

    Bone consists of an organic and an inorganic matrix. During development, bone undergoes changes in its composition and structure. In this study we apply three different cluster analysis algorithms [K-means (KM), fuzzy C-means (FCM) and hierarchical clustering (HCA)], and discriminant analysis (DA) on infrared spectroscopic data from developing cortical bone with the aim of comparing their ability to correctly classify the samples into different age groups. Cortical bone samples from the mid-diaphysis of the humerus of New Zealand white rabbits from three different maturation stages (newborn (NB), immature (11 days-1 month old), mature (3-6 months old)) were used. Three clusters were obtained by KM, FCM and HCA methods on different spectral regions (amide I, phosphate and carbonate). The newborn samples were well separated (71-100% correct classifications) from the other age groups by all bone components. The mature samples (3-6 months old) were well separated (100%) from those of other age groups by the carbonate spectral region, while by the phosphate and amide I regions some samples were assigned to another group (43-71% correct classifications). The greatest variance in the results for all algorithms was observed in the amide I region. In general, FCM clustering performed better than the other methods, and the overall error was lower. The discriminant analysis results showed that by combining the clustering results from all three spectral regions, the ability to predict the correct age group for all samples increased (from 29-86% to 77-91%). This study is the first to compare several clustering methods on infrared spectra of bone. Fuzzy C-means clustering performed best, and its ability to study the degrees of membership of samples to each cluster might be beneficial in future studies of medical diagnostics.
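
    Of the three algorithms, fuzzy C-means is the least likely to be available in a standard library, so a minimal from-scratch version is sketched below (Python; the synthetic "spectra" are invented stand-ins for measured absorbance profiles, not the study's data):

        import numpy as np

        def fuzzy_c_means(X, n_clusters, m=2.0, n_iter=100, seed=0):
            # Minimal fuzzy C-means: alternate membership and centroid
            # updates for fuzzifier m over a fixed number of iterations.
            rng = np.random.default_rng(seed)
            U = rng.random((len(X), n_clusters))
            U /= U.sum(axis=1, keepdims=True)
            for _ in range(n_iter):
                W = U ** m
                centers = (W.T @ X) / W.sum(axis=0)[:, None]
                d = np.linalg.norm(X[:, None, :] - centers[None, :, :],
                                   axis=2) + 1e-12
                inv = d ** (-2.0 / (m - 1.0))
                U = inv / inv.sum(axis=1, keepdims=True)
            return U, centers

        # Synthetic "spectra": three noisy Gaussian bands standing in for
        # the absorbance profiles of the three age groups.
        rng = np.random.default_rng(1)
        grid = np.linspace(0, 1, 50)
        bands = [np.exp(-((grid - mu) ** 2) / 0.01) for mu in (0.3, 0.5, 0.7)]
        X = np.vstack([b + 0.05 * rng.standard_normal(50)
                       for b in bands for _ in range(20)])

        U, _ = fuzzy_c_means(X, n_clusters=3)
        print("hard labels:", U.argmax(axis=1))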

  11. Square Kilometre Array station configuration using two-stage beamforming

    CERN Document Server

    Jiwani, Aziz; Razavi-Ghods, Nima; Hall, Peter J; Padhi, Shantanu; de Vaate, Jan Geralt bij

    2012-01-01

    The lowest frequency band (70 - 450 MHz) of the Square Kilometre Array will consist of sparse aperture arrays grouped into geographically-localised patches, or stations. Signals from thousands of antennas in each station will be beamformed to produce station beams which form the inputs for the central correlator. Two-stage beamforming within stations can reduce SKA-low signal processing load and costs, but has not been previously explored for the irregular station layouts now favoured in radio astronomy arrays. This paper illustrates the effects of two-stage beamforming on sidelobes and effective area, for two representative station layouts (regular and irregular gridded tile on an irregular station). The performance is compared with a single-stage, irregular station. The inner sidelobe levels do not change significantly between layouts, but the more distant sidelobes are affected by the tile layouts; regular tile creates diffuse, but regular, grating lobes. With very sparse arrays, the station effective area...
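
    The hierarchical idea, beamform each tile on its antenna offsets and then beamform the station on the tile phase centres, can be sketched for a one-dimensional narrowband toy array (Python; the geometry, wavelength, and pointing are invented and make no attempt to reproduce the paper's layouts):

        import numpy as np

        LAM = 3.0                                # observing wavelength [m]
        rng = np.random.default_rng(0)

        tile_centres = np.linspace(-40, 40, 16)  # 16 tiles along one axis
        offsets = [rng.uniform(-2, 2, 16) for _ in tile_centres]

        def phase(x, theta):
            # Geometric phase of a plane wave from zenith angle theta at
            # position x along the array axis.
            return np.exp(2j * np.pi * x * np.sin(theta) / LAM)

        theta0 = np.deg2rad(20)                  # station pointing direction
        thetas = np.deg2rad(np.linspace(-90, 90, 1441))

        beam = np.empty_like(thetas)
        for k, th in enumerate(thetas):
            station = 0j
            for centre, off in zip(tile_centres, offsets):
                # Stage 1: each tile beamforms its antennas using weights
                # computed from antenna offsets within the tile.
                tile = np.sum(phase(off, th) * np.conj(phase(off, theta0)))
                # Stage 2: the station combines tile outputs, steering on
                # the tile phase centres.
                station += phase(centre, th) * np.conj(phase(centre, theta0)) * tile
            beam[k] = abs(station)

        print("beam peaks at", round(float(np.degrees(thetas[beam.argmax()])), 1), "deg")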

  12. Two stage sorption type cryogenic refrigerator including heat regeneration system

    Science.gov (United States)

    Jones, Jack A.; Wen, Liang-Chi; Bard, Steven

    1989-01-01

    A lower stage chemisorption refrigeration system physically and functionally coupled to an upper stage physical adsorption refrigeration system is disclosed. Waste heat generated by the lower stage cycle is regenerated to fuel the upper stage cycle thereby greatly improving the energy efficiency of a two-stage sorption refrigerator. The two stages are joined by disposing a first pressurization chamber providing a high pressure flow of a first refrigerant for the lower stage refrigeration cycle within a second pressurization chamber providing a high pressure flow of a second refrigerant for the upper stage refrigeration cycle. The first pressurization chamber is separated from the second pressurization chamber by a gas-gap thermal switch which at times is filled with a thermoconductive fluid to allow conduction of heat from the first pressurization chamber to the second pressurization chamber.

  13. Two-stage approach to full Chinese parsing

    Institute of Scientific and Technical Information of China (English)

    Cao Hailong; Zhao Tiejun; Yang Muyun; Li Sheng

    2005-01-01

    Natural language parsing is a task of great importance and extreme difficulty. In this paper, we present a full Chinese parsing system based on a two-stage approach. Rather than identifying all phrases by a uniform model, we utilize a divide-and-conquer strategy. We propose an effective and fast method based on a Markov model to identify the base phrases. Then we make the first attempt to extend one of the best English parsing models, i.e. the head-driven model, to recognize Chinese complex phrases. Our two-stage approach is superior to the uniform approach in two aspects. First, it creates synergy between the Markov model and the head-driven model. Second, it reduces the complexity of full Chinese parsing and makes the parsing system space and time efficient. We evaluate our approach in PARSEVAL measures on the open test set; the parsing system performs at 87.53% precision and 87.95% recall.

  14. Income and Poverty across SMSAs: A Two-Stage Analysis

    OpenAIRE

    1993-01-01

    Two popular explanations of urban poverty are the "welfare-disincentive" and "urban-deindustrialization" theories. Using cross-sectional Census data, we develop a two-stage model to predict an SMSA's median family income and poverty rate. The model allows the city's welfare level and industrial structure to affect its median family income and poverty rate directly. It also allows welfare and industrial structure to affect income and poverty indirectly, through their effects on family structure...

  15. A Two-stage Polynomial Method for Spectrum Emissivity Modeling

    OpenAIRE

    Qiu, Qirong; Liu, Shi; Teng, Jing; Yan, Yong

    2015-01-01

    Spectral emissivity is key to temperature measurement by radiation methods, but it is not easy to determine in a combustion environment because of the interrelated influence of temperature and wavelength on the radiation. In multi-wavelength radiation thermometry, knowing the spectral emissivity of the material is a prerequisite. However, in many circumstances such a property is a complex function of temperature and wavelength, and reliable models are yet to be sought. In this study, a two-stage...

  16. On the occurrence of Radio Halos in galaxy clusters - Insight from a mass-selected sample

    CERN Document Server

    Cuciti, V; Brunetti, G; Dallacasa, D; Kale, R; Ettori, S; Venturi, T

    2015-01-01

    Giant radio halos (RH) are diffuse Mpc-scale synchrotron sources detected in a fraction of massive and merging galaxy clusters. An unbiased study of the statistical properties of RHs is crucial to constrain their origin and evolution. We aim at investigating the occurrence of RHs and its dependence on the cluster mass in an SZ-selected sample of galaxy clusters, which is as close as possible to being a mass-selected sample. Moreover, we analyse the connection between RHs and merging clusters. We select from the Planck SZ catalogue (Planck Collaboration XXIX 2014) clusters with $M\geq 6\times10^{14} M_\odot$ at z=0.08-0.33 and we search for the presence of RHs using the NVSS for z<0.2 and the GMRT RH survey (GRHS, Venturi et al. 2007, 2008) and its extension (EGRHS, Kale et al. 2013, 2015) for 0.2 < z < 0.33. ... the clusters' dynamical status. We confirm that RH clusters are merging systems while the majority of clusters without RH are relaxed, thus supp...

  17. Measuring the Learning from Two-Stage Collaborative Group Exams

    CERN Document Server

    Ives, Joss

    2014-01-01

    A two-stage collaborative exam is one in which students first complete the exam individually, and then complete the same or similar exam in collaborative groups immediately afterward. To quantify the learning effect from the group component of these two-stage exams in an introductory Physics course, a randomized crossover design was used where each student participated in both the treatment and control groups. For each of the two two-stage collaborative group midterm exams, questions were designed to form matched near-transfer pairs with questions on an end-of-term diagnostic which was used as a learning test. For learning test questions paired with questions from the first midterm, which took place six to seven weeks before the learning test, an analysis using a mixed-effects logistic regression found no significant differences in learning-test performance between the control and treatment group. For learning test questions paired with questions from the second midterm, which took place one to two weeks prio...

  18. Hot Zone Identification: Analyzing Effects of Data Sampling on Spam Clustering

    Directory of Open Access Journals (Sweden)

    Rasib Khan

    2014-03-01

    Email is the most common and comparatively the most efficient means of exchanging information in today's world. However, given the widespread use of emails in all sectors, they have been the target of spammers since the beginning. Filtering spam emails has now led to critical actions such as forensic activities based on mining spam email. The data mine for spam emails at the University of Alabama at Birmingham is considered to be one of the most prominent resources for mining and identifying spam sources. It is a widely researched repository used by researchers from different global organizations. The usual process of mining the spam data involves going through every email in the data mine and clustering them based on their different attributes. However, given the size of the data mine, it takes an exceptionally long time to execute the clustering mechanism each time. In this paper, we have illustrated sampling as an efficient tool for data reduction, while preserving the information within the clusters, which would thus allow the spam forensic experts to quickly and effectively identify the ‘hot zone’ from the spam campaigns. We have provided detailed comparative analysis of the quality of the clusters after sampling, the overall distribution of clusters on the spam data, and timing measurements for our sampling approach. Additionally, we present different strategies which allowed us to optimize the sampling process using data-preprocessing and using the database engine's computational resources, and thus improving the performance of the clustering process.

  19. Forty-five-degree two-stage venous cannula: advantages over standard two-stage venous cannulation.

    Science.gov (United States)

    Lawrence, D R; Desai, J B

    1997-01-01

    We present a 45-degree two-stage venous cannula that confers advantage to the surgeon using cardiopulmonary bypass. This cannula exits the mediastinum under the transverse bar of the sternal retractor, leaving the rostral end of the sternal incision free of apparatus. It allows for lifting of the heart with minimal effect on venous return and does not interfere with the radially laid out sutures of an aortic valve replacement using an interrupted suture technique.

  20. Cluster analysis in kinetic modelling of the brain: A noninvasive alternative to arterial sampling

    DEFF Research Database (Denmark)

    Liptrot, Matthew George; Adams, K.H.; Martiny, L.

    2004-01-01

    [Time-activity curves (TACs) were] extracted directly from dynamic positron emission tomography (PET) scans by cluster analysis. Five healthy subjects were injected with the 5HT2A-receptor ligand [18F]-altanserin and blood samples were subsequently taken from the radial artery and cubital vein. Eight regions-of-interest (ROI) TACs were... [and the clusters were evaluated] by the 'within-variance' measure and by 3D visual inspection of the homogeneity of the determined clusters. The cluster-determined input curve was then used in Logan plot analysis and compared with the arterial and venous blood samples, and additionally with one of the currently used alternatives to arterial... [This] acts as a proof-of-principle that the use of cluster analysis on a PET data set could obviate the requirement for arterial cannulation when determining the input function for kinetic modelling of ligand binding, and that this may be a superior approach as compared to the other noninvasive alternatives...

  1. Planck early results. VIII. The all-sky early Sunyaev-Zeldovich cluster sample

    DEFF Research Database (Denmark)

    Bucher, M.; Delabrouille, J.; Giraud-Héraud, Y.;

    2011-01-01

    We present the first all-sky sample of galaxy clusters detected blindly by the Planck satellite through the Sunyaev-Zeldovich (SZ) effect from its six highest frequencies. This early SZ (ESZ) sample is comprised of 189 candidates, which have a high signal-to-noise ratio ranging from 6 to 29. Its ...

  2. Planck Early Results: The all-sky Early Sunyaev-Zeldovich cluster sample

    CERN Document Server

    Ade, P A R; Arnaud, M; Ashdown, M; Aumont, J; Baccigalupi, C; Balbi, A; Banday, A J; Barreiro, R B; Bartelmann, M; Bartlett, J G; Battaner, E; Battye, R; Benabed, K; Benoît, A; Bernard, J -P; Bersanelli, M; Bhatia, R; Bock, J J; Bonaldi, A; Bond, J R; Borrill, J; Bouchet, F R; Brown, M L; Bucher, M; Burigana, C; Cabella, P; Cantalupo, C M; Cardoso, J -F; Carvalho, P; Catalano, A; Cayón, L; Challinor, A; Chamballu, A; Chary, R -R; Chiang, L -Y; Chiang, C; Chon, G; Christensen, P R; Churazov, E; Clements, D L; Colafrancesco, S; Colombi, S; Couchot, F; Coulais, A; Crill, B P; Cuttaia, F; Da Silva, A; Dahle, H; Danese, L; Davis, R J; de Bernardis, P; de Gasperis, G; de Rosa, A; de Zotti, G; Delabrouille, J; Delouis, J -M; Désert, F -X; Dickinson, C; Diego, J M; Dolag, K; Dole, H; Donzelli, S; Doré, O; Dörl, U; Douspis, M; Dupac, X; Efstathiou, G; Eisenhardt, P; Enßlin, T A; Feroz, F; Finelli, F; Flores, I; Forni, O; Fosalba, P; Frailis, M; Franceschi, E; Fromenteau, S; Galeotta, S; Ganga, K; Génova-Santos, R T; Giard, M; Giardino, G; Giraud-Héraud, Y; González-Nuevo, J; González-Riestra, R; Górski, K M; Grainge, K J B; Gratton, S; Gregorio, A; Gruppuso, A; Harrison, D; Heinämäki, P; Henrot-Versillé, S; Hernández-Monteagudo, C; Herranz, D; Hildebrandt, S R; Hivon, E; Hobson, M; Holmes, W A; Hovest, W; Hoyland, R J; Huffenberger, K M; Hurier, G; Hurley-Walker, N; Jaffe, A H; Jones, W C; Juvela, M; Keihänen, E; Keskitalo, R; Kisner, T S; Kneissl, R; Knox, L; Kurki-Suonio, H; Lagache, G; Lamarre, J -M; Lasenby, A; Laureijs, R J; Lawrence, C R; Jeune, M Le; Leach, S; Leonardi, R; Li, C; Liddle, A; Lilje, P B; Linden-Vørnle, M; López-Caniego, M; Lubin, P M; Macías-Pérez, J F; MacTavish, C J; Maffei, B; Maino, D; Mandolesi, N; Mann, R; Maris, M; Marleau, F; Martínez-González, E; Masi, S; Matarrese, S; Matthai, F; Mazzotta, P; Mei, S; Meinhold, P R; Melchiorri, A; Melin, J -B; Mendes, L; Mennella, A; Mitra, S; Miville-Deschênes, M -A; Moneti, A; Montier, L; Morgante, G; Mortlock, D; Munshi, D; Murphy, A; Naselsky, P; Nati, F; Natoli, P; Netterfield, C B; Nørgaard-Nielsen, H U; Noviello, F; Novikov, D; Novikov, I; Olamie, M; Osborne, S; Pajot, F; Pasian, F; Patanchon, G; Pearson, T J; Perdereau, O; Perotto, L; Perrotta, F; Piacentini, F; Piat, M; Pierpaoli, E; Piffaretti, R; Plaszczynski, S; Pointecouteau, E; Polenta, G; Ponthieu, N; Poutanen, T; Pratt, G W; Prézeau, G; Prunet, S; Puget, J -L; Rachen, J P; Reach, W T; Rebolo, R; Reinecke, M; Renault, C; Ricciardi, S; Riller, T; Ristorcelli, I; Rocha, G; Rosset, C; Rubiño-Martín, J A; Rusholme, B; Saar, E; Sandri, M; Santos, D; Saunders, R D E; Savini, G; Schaefer, B M; Scott, D; Seiffert, M D; Shellard, P; Smoot, G F; Stanford, A; Starck, J -L; Stivoli, F; Stolyarov, V; Stompor, R; Sudiwala, R; Sunyaev, R; Sutton, D; Sygnet, J -F; Taburet, N; Tauber, J A; Terenzi, L; Toffolatti, L; Tomasi, M; Torre, J -P; Tristram, M; Tuovinen, J; Valenziano, L; Vibert, L; Vielva, P; Villa, F; Vittorio, N; Wade, L A; Wandelt, B D; Weller, J; White, S D M; White, M; Yvon, D; Zacchei, A; Zonca, A

    2011-01-01

    We present the first all-sky sample of galaxy clusters detected blindly by the Planck satellite through the Sunyaev-Zeldovich (SZ) effect from its six highest frequencies. This Early SZ (ESZ) sample of 189 candidates comprises clusters with high signal-to-noise ratios, from 6 to 29. Its high reliability (purity above 95%) is further ensured by an extensive validation process based on Planck-internal quality assessments and external cross-identification and follow-up observations. Planck provides the first measured SZ signal for about 80% of the 169 previously known ESZ clusters. Planck further releases 30 new cluster candidates, among which 20 are within the ESZ signal-to-noise selection criterion. Eleven of these 20 ESZ candidates are confirmed using XMM-Newton snapshot observations as new clusters, most of them with disturbed morphologies and low luminosities. The ESZ clusters are mostly at moderate redshifts (86% with z below 0.3) and span over a decade in mass, up to the rarest and most massive clusters with masses above 10^15 M...

  3. Space Velocities of Southern Globular Clusters. V. A Low Galactic Latitude Sample

    CERN Document Server

    Casetti-Dinescu, D I; Herrera, D; Van Altena, W F; López, C E; Castillo, D J

    2007-01-01

    We have measured the absolute proper motions of globular clusters NGC 2808, 3201, 4372, 4833, 5927 and 5986. The proper motions are on the Hipparcos system and they are the first determinations ever made for these low Galactic latitude clusters. The proper motion uncertainties range from 0.3 to 0.5 mas/yr. The inferred orbits indicate that 1) the single metal rich cluster in our sample, NGC 5927, dynamically belongs to the thick disk, 2) the remaining metal poor clusters have rather low-energy orbits of high eccentricity; among these, there appear to be two "pairs" of dynamically associated clusters, 3) the most energetic cluster in our sample, NGC 3201 is on a highly retrograde orbit -- which had already been surmised from its radial velocity alone -- with an apocentric distance of 22 kpc, and 4) none of the metal poor clusters appear to be associated with the recently detected SDSS streams, or with the Monoceros structure. These are the first results of the Southern Proper-Motion Program (SPM) where the sec...

  4. Two stage treatment of dairy effluent using immobilized Chlorella pyrenoidosa.

    Science.gov (United States)

    Yadavalli, Rajasri; Heggers, Goutham Rao Venkata Naga

    2013-12-19

    Dairy effluents contain a high organic load, and the unscrupulous discharge of these effluents into aquatic bodies is a matter of serious concern as it deteriorates their water quality. Whilst physico-chemical treatment is the common mode of treatment, immobilized microalgae can be potentially employed to treat the high organic content, which offers numerous benefits along with waste water treatment. A novel low-cost two-stage treatment was employed for the complete treatment of dairy effluent. The first stage consists of treating the dairy effluent in a photobioreactor (1 L) using immobilized Chlorella pyrenoidosa, while the second stage involves a two-column sand bed filtration technique. Whilst NH4+-N was completely removed, a 98% removal of PO43--P was achieved within 96 h of the two-stage purification process. The filtrate was tested for toxicity, and no mortality was observed in the zebra fish used as a model at the end of the 96 h bioassay. Moreover, a significant decrease in biological oxygen demand and chemical oxygen demand was achieved by this novel method. The separated biomass was also tested as a biofertilizer on rice seeds, and a 30% increase in root and shoot length was observed after the addition of the biomass to the rice plants. We conclude that the two-stage treatment of dairy effluent is highly effective in the removal of BOD and COD besides nutrients like nitrates and phosphates. The treatment also allows the treated waste water to be discharged safely into receiving water bodies, since it is nontoxic to aquatic life. Further, the algal biomass separated after the first stage of treatment was highly capable of increasing the growth of rice plants because of the nitrogen-fixing ability of the green alga, and offers great potential as a biofertilizer.

  5. Two-stage series array SQUID amplifier for space applications

    Science.gov (United States)

    Tuttle, J. G.; DiPirro, M. J.; Shirron, P. J.; Welty, R. P.; Radparvar, M.

    We present test results for a two-stage integrated SQUID amplifier which uses a series array of d.c. SQUIDs to amplify the signal from a single input SQUID. The device was developed by Welty and Martinis at NIST and recent versions have been manufactured by HYPRES, Inc. Shielding and filtering techniques were employed during the testing to minimize the external noise. Energy resolution of 300 ħ was demonstrated using a d.c. excitation at frequencies above 1 kHz, and better than 500 ħ resolution was typical down to 300 Hz.

  6. Two-Stage Aggregate Formation via Streams in Myxobacteria

    Science.gov (United States)

    Alber, Mark; Kiskowski, Maria; Jiang, Yi

    2005-03-01

    In response to adverse conditions, myxobacteria form aggregates which develop into fruiting bodies. We model myxobacteria aggregation with a lattice cell model based entirely on short range (non-chemotactic) cell-cell interactions. Local rules result in a two-stage process of aggregation mediated by transient streams. Aggregates resemble those observed in experiment and are stable against even very large perturbations. Noise in individual cell behavior increases the effects of streams and result in larger, more stable aggregates. Phys. Rev. Lett. 93: 068301 (2004).

  7. Straw Gasification in a Two-Stage Gasifier

    DEFF Research Database (Denmark)

    Bentzen, Jens Dall; Hindsgaul, Claus; Henriksen, Ulrik Birk

    2002-01-01

    Additive-prepared straw pellets were gasified in the 100 kW two-stage gasifier at The Department of Mechanical Engineering of the Technical University of Denmark (DTU). The fixed bed temperature range was 800-1000°C. In order to avoid bed sintering, as observed earlier with straw gasification... residues were examined after the test. No agglomeration or sintering was observed in the ash residues. The tar content was measured both by the solid phase amino adsorption (SPA) method and by cold trapping (Petersen method). Both showed low tar contents (~42 mg/Nm3 without gas cleaning). The particle content...

  8. Two-Stage Fan I: Aerodynamic and Mechanical Design

    Science.gov (United States)

    Messenger, H. E.; Kennedy, E. E.

    1972-01-01

    A two-stage, highly-loaded fan was designed to deliver an overall pressure ratio of 2.8 with an adiabatic efficiency of 83.9 percent. At the first rotor inlet, design flow per unit annulus area is 42 lbm/sec/sq ft (205 kg/sec/sq m), hub/tip ratio is 0.4 with a tip diameter of 31 inches (0.787 m), and design tip speed is 1450 ft/sec (441.96 m/sec). Other features include use of multiple-circular-arc airfoils, resettable stators, and split casings over the rotor tip sections for casing treatment tests.

  9. Two-Stage Eagle Strategy with Differential Evolution

    CERN Document Server

    Yang, Xin-She

    2012-01-01

    Efficiency of an optimization process is largely determined by the search algorithm and its fundamental characteristics. In a given optimization, a single type of algorithm is used in most applications. In this paper, we will investigate the Eagle Strategy recently developed for global optimization, which uses a two-stage strategy by combining two different algorithms to improve the overall search efficiency. We will discuss this strategy with differential evolution and then evaluate their performance by solving real-world optimization problems such as pressure vessel and speed reducer design. Results suggest that we can reduce the computing effort by a factor of up to 10 in many applications.
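
    A minimal sketch of the two-stage idea (Python with SciPy; uniform random search stands in for the Lévy-flight exploration of the published Eagle Strategy, and the Ackley function stands in for the engineering design problems):

        import numpy as np
        from scipy.optimize import differential_evolution

        def ackley(x):
            # Multimodal test function standing in for the design problems
            # (pressure vessel, speed reducer) used in the paper.
            x = np.asarray(x, dtype=float)
            return (-20 * np.exp(-0.2 * np.sqrt(np.mean(x ** 2)))
                    - np.exp(np.mean(np.cos(2 * np.pi * x))) + 20 + np.e)

        rng = np.random.default_rng(0)
        bounds = [(-30.0, 30.0)] * 5

        # Stage 1: coarse global exploration (uniform random search here).
        candidates = rng.uniform(-30, 30, size=(200, 5))
        incumbent = min(candidates, key=ackley)

        # Stage 2: intensive local search with differential evolution in
        # a shrunken region around the stage-1 incumbent.
        local = [(max(lo, b - 3), min(hi, b + 3))
                 for (lo, hi), b in zip(bounds, incumbent)]
        result = differential_evolution(ackley, local, seed=0, tol=1e-8)
        print("stage 1:", round(float(ackley(incumbent)), 4),
              "-> stage 2:", round(float(result.fun), 4))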

  10. See Change: the Supernova Sample from the Supernova Cosmology Project High Redshift Cluster Supernova Survey

    Science.gov (United States)

    Hayden, Brian; Perlmutter, Saul; Boone, Kyle; Nordin, Jakob; Rubin, David; Lidman, Chris; Deustua, Susana E.; Fruchter, Andrew S.; Aldering, Greg Scott; Brodwin, Mark; Cunha, Carlos E.; Eisenhardt, Peter R.; Gonzalez, Anthony H.; Jee, James; Hildebrandt, Hendrik; Hoekstra, Henk; Santos, Joana; Stanford, S. Adam; Stern, Daniel; Fassbender, Rene; Richard, Johan; Rosati, Piero; Wechsler, Risa H.; Muzzin, Adam; Willis, Jon; Boehringer, Hans; Gladders, Michael; Goobar, Ariel; Amanullah, Rahman; Hook, Isobel; Huterer, Dragan; Huang, Xiaosheng; Kim, Alex G.; Kowalski, Marek; Linder, Eric; Pain, Reynald; Saunders, Clare; Suzuki, Nao; Barbary, Kyle H.; Rykoff, Eli S.; Meyers, Joshua; Spadafora, Anthony L.; Sofiatti, Caroline; Wilson, Gillian; Rozo, Eduardo; Hilton, Matt; Ruiz-Lapuente, Pilar; Luther, Kyle; Yen, Mike; Fagrelius, Parker; Dixon, Samantha; Williams, Steven

    2017-01-01

    The Supernova Cosmology Project has finished executing a large (174 orbits, cycles 22-23) Hubble Space Telescope program, which has measured ~30 type Ia Supernovae above z~1 in the highest-redshift, most massive galaxy clusters known to date. Our SN Ia sample closely matches our pre-survey predictions; this sample will improve the constraint by a factor of 3 on the Dark Energy equation of state above z~1, allowing an unprecedented probe of Dark Energy time variation. When combined with the improved cluster mass calibration from gravitational lensing provided by the deep WFC3-IR observations of the clusters, See Change will triple the Dark Energy Task Force Figure of Merit. With the primary observing campaign completed, we present the preliminary supernova sample and our path forward to the supernova cosmology results. We also compare the number of SNe Ia discovered in each cluster with our pre-survey expectations based on cluster mass and SFR estimates. Our extensive HST and ground-based campaign has already produced unique results; we have confirmed several of the highest redshift cluster members known to date, confirmed the redshift of one of the most massive galaxy clusters at z~1.2 expected across the entire sky, and characterized one of the most extreme starburst environments yet known in a z~1.7 cluster. We have also discovered a lensed SN Ia at z=2.22 magnified by a factor of ~2.7, which is the highest spectroscopic redshift SN Ia currently known.

  11. The Properties of X-ray Cold Fronts in a Statistical Sample of Simulated Galaxy Clusters

    CERN Document Server

    Hallman, Eric J; Jeltema, Tesla E; Smith, Britton D; O'Shea, Brian W; Burns, Jack O; Norman, Michael L

    2010-01-01

    We examine the incidence of cold fronts in a large sample of galaxy clusters extracted from a (512h^-1 Mpc) hydrodynamic/N-body cosmological simulation with adiabatic gas physics computed with the Enzo adaptive mesh refinement code. This simulation contains a sample of roughly 4000 galaxy clusters with M > 10^14 M_sun at z=0. For each simulated galaxy cluster, we have created mock 0.3-8.0 keV X-ray observations and spectroscopic-like temperature maps. We have searched these maps with a new automated algorithm to identify the presence of cold fronts in projection. Using a threshold of a minimum of 10 cold front pixels in our images, corresponding to a total comoving length L_cf > 156h^-1 kpc, we find that roughly 10-12% of all projections in a mass-limited sample would be classified as cold front clusters. Interestingly, the fraction of clusters with extended cold front features in our synthetic maps of a mass-limited sample trends only weakly with redshift out to z=1.0. However, when using different selection...

  12. Two-stage perceptual learning to break visual crowding.

    Science.gov (United States)

    Zhu, Ziyun; Fan, Zhenzhi; Fang, Fang

    2016-01-01

    When a target is presented with nearby flankers in the peripheral visual field, it becomes harder to identify, which is referred to as crowding. Crowding sets a fundamental limit of object recognition in peripheral vision, preventing us from fully appreciating cluttered visual scenes. We trained adult human subjects on a crowded orientation discrimination task and investigated whether crowding could be completely eliminated by training. We discovered a two-stage learning process with this training task. In the early stage, when the target and flankers were separated beyond a certain distance, subjects acquired a relatively general ability to break crowding, as evidenced by the fact that the breaking of crowding could transfer to another crowded orientation, even a crowded motion stimulus, although the transfer to the opposite visual hemi-field was weak. In the late stage, like many classical perceptual learning effects, subjects' performance gradually improved and showed specificity to the trained orientation. We also found that, when the target and flankers were spaced too finely, training could only reduce, rather than completely eliminate, the crowding effect. This two-stage learning process illustrates a learning strategy for our brain to deal with the notoriously difficult problem of identifying peripheral objects in clutter. The brain first learned to solve the "easy and general" part of the problem (i.e., improving the processing resolution and segmenting the target and flankers) and then tackle the "difficult and specific" part (i.e., refining the representation of the target).

  13. Runway Operations Planning: A Two-Stage Heuristic Algorithm

    Science.gov (United States)

    Anagnostakis, Ioannis; Clarke, John-Paul

    2003-01-01

    The airport runway is a scarce resource that must be shared by different runway operations (arrivals, departures and runway crossings). Given the possible sequences of runway events, careful Runway Operations Planning (ROP) is required if runway utilization is to be maximized. From the perspective of departures, ROP solutions are aircraft departure schedules developed by optimally allocating runway time for departures given the time required for arrivals and crossings. In addition to the obvious objective of maximizing throughput, other objectives, such as guaranteeing fairness and minimizing environmental impact, can also be incorporated into the ROP solution subject to constraints introduced by Air Traffic Control (ATC) procedures. This paper introduces a two-stage heuristic algorithm for solving the Runway Operations Planning (ROP) problem. In the first stage, sequences of departure class slots and runway crossing slots are generated and ranked based on departure runway throughput under stochastic conditions. In the second stage, the departure class slots are populated with specific flights from the pool of available aircraft, by solving an integer program with a Branch & Bound algorithm implementation. Preliminary results from this implementation of the two-stage algorithm on real-world traffic data are presented.
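
    A toy version of the two stages (Python; the separation values, class queue, and flight identifiers are invented, and a greedy fill stands in for the Branch & Bound integer program of the second stage):

        import itertools

        # Illustrative separations (seconds) required between consecutive
        # departures, by leading/trailing weight class.
        SEP = {("H", "H"): 90, ("H", "M"): 120, ("H", "L"): 150,
               ("M", "H"): 60, ("M", "M"): 90, ("M", "L"): 120,
               ("L", "H"): 60, ("L", "M"): 60, ("L", "L"): 90}

        queue = ["H", "M", "M", "L", "L", "L"]  # departure classes to plan
        flights = {"H": ["BA12"], "M": ["AF33", "LH20"],
                   "L": ["EZ1", "EZ2", "EZ9"]}

        def runway_time(seq):
            # Total runway occupancy implied by a class sequence.
            return sum(SEP[a, b] for a, b in zip(seq, seq[1:]))

        # Stage 1: generate and rank candidate class-slot sequences.
        best_seq = min(set(itertools.permutations(queue)), key=runway_time)

        # Stage 2: populate the class slots with specific flights (a
        # greedy fill in place of the Branch & Bound integer program).
        pools = {k: list(v) for k, v in flights.items()}
        schedule = [pools[c].pop(0) for c in best_seq]
        print(best_seq, "->", schedule, f"({runway_time(best_seq)} s)")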

  14. Two-Stage Heuristic Algorithm for Aircraft Recovery Problem

    Directory of Open Access Journals (Sweden)

    Cheng Zhang

    2017-01-01

    This study focuses on the aircraft recovery problem (ARP). In real-life operations, disruptions always cause schedule failures and make airlines suffer great losses. Therefore, the main objective of the aircraft recovery problem is to minimize the total recovery cost and solve the problem within reasonable runtimes. An aircraft recovery model (ARM) is proposed herein to formulate the ARP, using feasible lines of flights as the basic variables in the model. We define a feasible line of flights (LOF) as a sequence of flights flown by an aircraft within one day. The number of LOFs grows exponentially with the number of flights. Hence, a two-stage heuristic is proposed to reduce the problem scale. The algorithm integrates a heuristic scoring procedure with an aggregated aircraft recovery model (AARM) to preselect LOFs. The approach is tested on five real-life test scenarios. The computational results show that the proposed model provides a good formulation of the problem and can be solved within reasonable runtimes with the proposed methodology. The two-stage heuristic significantly reduces the number of LOFs after each stage and finally reduces the number of variables and constraints in the aircraft recovery model.

  15. The XXL Survey. II. The bright cluster sample: catalogue and luminosity function

    Science.gov (United States)

    Pacaud, F.; Clerc, N.; Giles, P. A.; Adami, C.; Sadibekova, T.; Pierre, M.; Maughan, B. J.; Lieu, M.; Le Fèvre, J. P.; Alis, S.; Altieri, B.; Ardila, F.; Baldry, I.; Benoist, C.; Birkinshaw, M.; Chiappetti, L.; Démoclès, J.; Eckert, D.; Evrard, A. E.; Faccioli, L.; Gastaldello, F.; Guennou, L.; Horellou, C.; Iovino, A.; Koulouridis, E.; Le Brun, V.; Lidman, C.; Liske, J.; Maurogordato, S.; Menanteau, F.; Owers, M.; Poggianti, B.; Pomarède, D.; Pompei, E.; Ponman, T. J.; Rapetti, D.; Reiprich, T. H.; Smith, G. P.; Tuffs, R.; Valageas, P.; Valtchanov, I.; Willis, J. P.; Ziparo, F.

    2016-06-01

    Context. The XXL Survey is the largest survey carried out by the XMM-Newton satellite and covers a total area of 50 square degrees distributed over two fields. It primarily aims at investigating the large-scale structures of the Universe using the distribution of galaxy clusters and active galactic nuclei as tracers of the matter distribution. The survey will ultimately uncover several hundreds of galaxy clusters out to a redshift of ~2 at a sensitivity of ~10-14 erg s-1 cm-2 in the [0.5-2] keV band. Aims: This article presents the XXL bright cluster sample, a subsample of 100 galaxy clusters selected from the full XXL catalogue by setting a lower limit of 3 × 10-14 erg s-1 cm-2 on the source flux within a 1' aperture. Methods: The selection function was estimated using a mixture of Monte Carlo simulations and analytical recipes that closely reproduce the source selection process. An extensive spectroscopic follow-up provided redshifts for 97 of the 100 clusters. We derived accurate X-ray parameters for all the sources. Scaling relations were self-consistently derived from the same sample in other publications of the series. On this basis, we study the number density, luminosity function, and spatial distribution of the sample. Results: The bright cluster sample consists of systems with masses between M500 = 7 × 1013 and 3 × 1014 M⊙, mostly located between z = 0.1 and 0.5. The observed sky density of clusters is slightly below the predictions from the WMAP9 model, and significantly below the prediction from the Planck 2015 cosmology. In general, within the current uncertainties of the cluster mass calibration, models with higher values of σ8 and/or ΩM appear more difficult to accommodate. We provide tight constraints on the cluster differential luminosity function and find no hint of evolution out to z ~ 1. We also find strong evidence for the presence of large-scale structures in the XXL bright cluster sample and identify five new superclusters. Based on

  16. EDisCS -- the ESO Distant Cluster Survey -- Sample Definition and Optical Photometry

    CERN Document Server

    White, S D M; Simard, L; Rudnick, G; De Lucia, G; Aragón-Salamanca, A; Bender, R; Best, P; Bremer, M; Charlot, S; Dalcanton, J; Dantel, M; Desai, V; Fort, B; Halliday, C; Jablonka, P; Kauffmann, G; Mellier, Y; Milvang-Jensen, B; Pellò, R; Poggianti, B M; Poirier, S; Rottgering, H; Saglia, R; Schneider, P; Zaritsky, D

    2005-01-01

    We present the ESO Distant Cluster Survey (EDisCS), a survey of 20 fields containing distant galaxy clusters with redshifts ranging from 0.4 to almost 1.0. Candidate clusters were chosen from among the brightest objects identified in the Las Campanas Distant Cluster Survey, half with estimated redshift z_est~0.5 and half with z_est~0.8. They were confirmed by identifying red sequences in moderately deep two-colour data from VLT/FORS2. For confirmed candidates we have assembled deep three-band optical photometry using VLT/FORS2, deep near-infrared photometry in one or two bands using NTT/SOFI, deep optical spectroscopy using VLT/FORS2, wide-field imaging in two or three bands using the ESO Wide Field Imager, and HST/ACS mosaic images for 10 of the most distant clusters. This first paper presents our sample and the VLT photometry we have obtained. We present images, colour-magnitude diagrams and richness estimates for our clusters, as well as giving redshifts and positions for the brightest cluster members. Subs...

  17. A Novel Two-Stage Illumination Estimation Framework for Expression Recognition

    Directory of Open Access Journals (Sweden)

    Zheng Zhang

    2014-01-01

    Full Text Available One of the critical issues for facial expression recognition is to eliminate the negative effect caused by variant poses and illuminations. In this paper a two-stage illumination estimation framework is proposed based on three-dimensional representative faces and clustering, which can estimate illumination directions under a series of poses. First, 256 training 3D face models are adaptively categorized into a number of facial structure types by k-means clustering, grouping people with similar facial appearance into clusters. Then the representative face of each cluster is generated to represent the facial appearance type of that cluster. Our training set is obtained by rotating all representative faces to a certain pose, illuminating them with a series of different illumination conditions, and then projecting them into two-dimensional images. Finally the saltire-over-cross feature is selected to train a group of SVM classifiers, and satisfactory performance is achieved when estimating a number of test sets, including images generated from 64 3D face models kept for testing, the CAS-PEAL face database, the CMU PIE database, and a small test set created by ourselves. Compared with other related works, our method is subject independent and has lower computational complexity O(C×N), without requiring 3D facial reconstruction.
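
    A minimal sketch of the clustering and classification stages with scikit-learn, assuming precomputed feature vectors; the rendering of representative faces under poses and illuminations is stubbed with random placeholders.

      import numpy as np
      from sklearn.cluster import KMeans
      from sklearn.svm import SVC

      rng = np.random.default_rng(0)
      faces = rng.normal(size=(256, 50))        # 256 training 3D face models as feature vectors

      km = KMeans(n_clusters=8, n_init=10, random_state=0).fit(faces)
      representatives = km.cluster_centers_     # one "representative face" per cluster

      # Placeholder for rendering the representatives under poses/illuminations
      # and extracting 2D features; an SVM then predicts the illumination direction.
      X_train = rng.normal(size=(800, 50))      # rendered 2D image features (stub)
      y_train = rng.integers(0, 16, size=800)   # illumination-direction labels (stub)
      clf = SVC(kernel="rbf").fit(X_train, y_train)
      print(clf.predict(X_train[:3]))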

  18. A KAT-7 view of a low-mass sample of galaxy clusters

    CERN Document Server

    Bernardi, G; Cassano, R; Dallacasa, D; Brunetti, G; Cuciti, V; Johnston-Hollitt, M; Oozeer, N; Smirnov, O M

    2016-01-01

    Radio observations over the last two decades have provided evidence that diffuse synchrotron emission in the form of megaparsec-scale radio halos in galaxy clusters is likely tracing regions of the intracluster medium where relativistic particles are accelerated during cluster mergers. In this paper we present results of a survey of 14 galaxy clusters carried out with the 7-element Karoo Array Telescope at 1.86 GHz, aimed at extending the current studies of radio halo occurrence to systems with lower masses (M$_{\rm 500} > 4\times10^{14}$ M$_\odot$). We found upper limits at the $0.6 - 1.9 \times 10^{24}$ W Hz$^{-1}$ level for $\sim 50\%$ of the sample, confirming that bright radio halos in less massive galaxy clusters are statistically rare.

  19. Two-Stage Part-Based Pedestrian Detection

    DEFF Research Database (Denmark)

    Møgelmose, Andreas; Prioletti, Antonio; Trivedi, Mohan M.

    2012-01-01

    Detecting pedestrians is still a challenging task for automotive vision systems due to the extreme variability of targets, lighting conditions, occlusions, and high-speed vehicle motion. A lot of research has been focused on this problem in the last 10 years, and detectors based on classifiers have gained a special place among the different approaches presented. This work presents a state-of-the-art pedestrian detection system based on a two-stage classifier. Candidates are extracted with a Haar cascade classifier trained with the DaimlerDB dataset and then validated through a part-based HOG classifier. The system is evaluated in terms of several metrics, such as detection rate, false positives per hour, and frame rate. The novelty of this system lies in the combination of the HOG part-based approach, tracking based on specific optimized features, and porting to a real prototype.
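
    A minimal OpenCV sketch of such a two-stage detector; the cascade file name and the previously trained SVM are assumptions for illustration, not artefacts of the cited work.

      import cv2

      cascade = cv2.CascadeClassifier("pedestrian_cascade.xml")  # hypothetical model file
      hog = cv2.HOGDescriptor()                                  # default 64x128 window

      def detect(frame, svm):
          """Stage 1: cascade candidates; stage 2: HOG+SVM validation."""
          gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
          candidates = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=3)
          confirmed = []
          for (x, y, w, h) in candidates:
              roi = cv2.resize(gray[y:y + h, x:x + w], (64, 128))
              feature = hog.compute(roi).reshape(1, -1)
              if svm.predict(feature)[0] == 1:
                  confirmed.append((x, y, w, h))
          return confirmed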

  20. Laparoscopic management of a two-staged gall bladder torsion

    Institute of Scientific and Technical Information of China (English)

    2015-01-01

    Gall bladder torsion (GBT) is a relatively uncommon entity and rarely diagnosed preoperatively. A constant factor in all occurrences of GBT is a freely mobile gall bladder due to congenital or acquired anomalies. GBT is commonly observed in elderly white females. We report a 77-year-old Caucasian lady who was originally diagnosed with gall bladder perforation but was eventually found to have a two-staged torsion of the gall bladder with twisting of Riedel's lobe (a tongue-like projection of liver segment 4A). This combination, to the best of our knowledge, has not been reported in the literature. We performed laparoscopic cholecystectomy and she had an uneventful postoperative period. GBT may create a diagnostic dilemma in the context of acute cholecystitis. Timely diagnosis and intervention are necessary, with extra care while operating as the anatomy is generally distorted. The fundus-first approach can be useful due to the altered anatomy in the region of Calot's triangle. Laparoscopic cholecystectomy has the benefit of early recovery.

  1. Lightweight Concrete Produced Using a Two-Stage Casting Process

    Directory of Open Access Journals (Sweden)

    Jin Young Yoon

    2015-03-01

    Full Text Available The type of lightweight aggregate and its volume fraction in a mix determine the density of lightweight concrete. Minimizing the density obviously requires a higher volume fraction, but this usually causes aggregate segregation in a conventional mixing process. This paper proposes a two-stage casting process to produce lightweight concrete. The process involves placing lightweight aggregates in a frame and then filling the remaining interstitial voids with cementitious grout. This casting process yields the lowest possible density of lightweight concrete, which consequently has low compressive strength. Irregularly shaped aggregates compensate for this weakness in strength, while round-shaped aggregates provide a strength of 20 MPa. Therefore, the proposed casting process can be applied to manufacturing non-structural elements and structural composites requiring a very low density and a strength of at most 20 MPa.

  2. TWO-STAGE OCCLUDED OBJECT RECOGNITION METHOD FOR MICROASSEMBLY

    Institute of Scientific and Technical Information of China (English)

    WANG Huaming; ZHU Jianying

    2007-01-01

    A two-stage object recognition algorithm robust to occlusion is presented for microassembly. Coarse localization determines whether the template is present in the image and approximately where it is; fine localization gives its accurate position. In coarse localization, a local feature that is invariant to translation, rotation and occlusion is used to form signatures. By comparing the signature of the template with that of the image, an approximate transformation parameter from template to image is obtained, which is used as the initial parameter value for fine localization. An objective function of the transformation parameter is constructed in fine localization and minimized to achieve sub-pixel localization accuracy. The occluded pixels are not taken into account in the objective function, so the localization accuracy is not influenced by the occlusion.
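
    A minimal sketch of the fine-localization step, restricted to pure translation for brevity (the paper's transformation also includes rotation); occluded pixels are excluded via a visibility mask.

      import numpy as np
      from scipy.ndimage import shift
      from scipy.optimize import minimize

      def objective(p, template, image, visible_mask):
          """Masked sum of squared differences at translation p = (dy, dx)."""
          moved = shift(image, (-p[0], -p[1]), order=1)  # image assumed larger than template
          window = moved[:template.shape[0], :template.shape[1]]
          return float(np.sum(((template - window) * visible_mask) ** 2))

      def fine_localize(template, image, visible_mask, p0):
          # p0 is the approximate translation from coarse (signature) matching
          res = minimize(objective, p0, args=(template, image, visible_mask),
                         method="Nelder-Mead")
          return res.x                                   # sub-pixel translation estimate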

  3. The hybrid two stage anticlockwise cycle for ecological energy conversion

    Directory of Open Access Journals (Sweden)

    Cyklis Piotr

    2016-01-01

    Full Text Available The anticlockwise cycle is commonly used for refrigeration, air conditioning and heat pump applications. The refrigerant in the compression cycle is applied within the temperature limits of the triple point and the critical point. New refrigerants such as 1234yf or 1234ze have many disadvantages; therefore, the application of natural refrigerants is favourable. Carbon dioxide and water can be applied only in a hybrid two-stage cycle. The possibilities of this solution are shown for refrigeration applications, together with some experimental results of the adsorption-compression two-stage cycle powered by solar collectors. The adsorption system serves as the high-temperature cycle; the low-temperature cycle is the compression stage with carbon dioxide as the working fluid. This allows a relatively high COP to be achieved for the low-temperature cycle and for the whole system.

  4. Two Stage Assessment of Thermal Hazard in An Underground Mine

    Science.gov (United States)

    Drenda, Jan; Sułkowski, Józef; Pach, Grzegorz; Różański, Zenon; Wrona, Paweł

    2016-06-01

    The results of research into the application of selected thermal indices of men's work and climate indices in a two-stage assessment of climatic work conditions in underground mines are presented in this article. The difference between these two kinds of indices was pointed out during the project entitled "The recruiting requirements for miners working in hot underground mine environments", coordinated by the Institute of Mining Technologies at the Silesian University of Technology as part of the Polish strategic project "Improvement of safety in mines" financed by the National Centre of Research and Development. Climate indices are based only on the physical parameters of air and their measurements. Thermal indices include additional factors that are strictly connected with the work, e.g. the thermal resistance of clothing or the kind of work. Special emphasis has been put on two indices: the substitute Silesian temperature (TS), which is a climate index, and the thermal discomfort index (δ), which belongs to the thermal indices group. The possibility of a two-stage application of these indices has been considered (preliminary and detailed estimation). The examples show that by applying thermal indices (detailed estimation) it is possible to avoid the use of additional technical solutions which, according to the climate index alone, would be necessary to reduce the thermal hazard in particular workplaces. The threshold limit value for TS has been set on this basis: below TS = 24°C it is not necessary to perform the detailed estimation.
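
    A minimal sketch of the two-stage screening logic; the TS threshold of 24°C comes from the text, while delta_index() is a dummy placeholder because the discomfort-index formula is not given in this abstract.

      def delta_index(workload_w, clothing_clo):
          """Placeholder for the thermal discomfort index delta (formula assumed)."""
          return 0.01 * workload_w + 0.2 * clothing_clo

      def assess_workplace(ts_celsius, workload_w, clothing_clo):
          if ts_celsius < 24.0:            # stage 1: preliminary estimation (climate index)
              return "no detailed estimation required"
          delta = delta_index(workload_w, clothing_clo)   # stage 2: detailed (thermal index)
          return "acceptable" if delta <= 1.0 else "reduce thermal hazard"

      print(assess_workplace(22.0, 250, 0.8))
      print(assess_workplace(26.5, 250, 0.8))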

  5. Chandra Cluster Cosmology Project. II. Samples and X-Ray Data Reduction

    DEFF Research Database (Denmark)

    Vikhlinin, A.; Burenin, R. A.; Ebeling, H.;

    2009-01-01

    We discuss the measurements of the galaxy cluster mass functions at z ≈ 0.05 and z ≈ 0.5 using high-quality Chandra observations of samples derived from the ROSAT PSPC All-Sky and 400 deg^2 surveys. We provide a full reference for the data analysis procedures, present updated calibration of relati...

  6. A clustering algorithm for sample data based on environmental pollution characteristics

    Science.gov (United States)

    Chen, Mei; Wang, Pengfei; Chen, Qiang; Wu, Jiadong; Chen, Xiaoyun

    2015-04-01

    Environmental pollution has become an issue of serious international concern in recent years. Among the receptor-oriented pollution models, CMB, PMF, UNMIX, and PCA are widely used as source apportionment models. To improve the accuracy of source apportionment and classify the sample data for these models, this study proposes an easy-to-use, high-dimensional EPC algorithm that not only organizes all of the sample data into different groups according to similarities in pollution characteristics, such as pollution sources and concentrations, but also simultaneously detects outliers. The main clustering process consists of selecting the first unlabelled point as a cluster centre, then assigning each data point in the sample dataset to its most similar cluster centre according to both a user-defined threshold and the value of the similarity function in each iteration, and finally modifying the clusters using a method similar to k-means. The validity and accuracy of the algorithm are tested using both real and synthetic datasets, which makes the EPC algorithm practical and effective for appropriately classifying sample data for source apportionment models and helpful for better understanding and interpreting the sources of pollution.
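
    A minimal sketch of the described clustering loop, with Euclidean distance standing in for the paper's similarity function; singleton clusters can afterwards be flagged as outliers.

      import numpy as np

      def epc_like(X, threshold):
          labels = np.empty(len(X), dtype=int)
          centres = []
          for i, x in enumerate(X):
              if centres:
                  d = np.linalg.norm(np.asarray(centres) - x, axis=1)
                  j = int(np.argmin(d))
                  if d[j] <= threshold:     # assign to the most similar centre
                      labels[i] = j
                      continue
              centres.append(x.copy())      # unmatched point starts a new cluster
              labels[i] = len(centres) - 1
          for j in range(len(centres)):     # k-means-like refinement of the centres
              members = X[labels == j]
              if len(members):
                  centres[j] = members.mean(axis=0)
          return labels, np.asarray(centres)

      labels, centres = epc_like(np.random.default_rng(0).normal(size=(100, 5)), 2.5)
      print(len(centres), np.bincount(labels))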

  7. Optical and X-ray profiles in the REXCESS sample of galaxy clusters

    CERN Document Server

    Holland, John G; Chon, Gayoung; Pierini, Daniele

    2015-01-01

    The structure of galaxy clusters, dominated by dark matter, is traced by member galaxies in the optical and by the hot intra-cluster medium (ICM) in X-rays. We compare the radial distribution of these components and determine the mass-to-light ratio vs. system mass relation. We use 14 clusters from the REXCESS sample, which is representative of clusters detected in X-ray surveys. Photometric observations with the Wide Field Imager on the 2.2m MPG/ESO telescope are used to determine the number density profiles of the galaxy distribution out to $r_{200}$. These are compared to electron density profiles of the ICM obtained using XMM-Newton, and to dark matter profiles inferred from scaling relations and an NFW model. While red sequence galaxies trace the total matter profile, the blue galaxy distribution is much shallower. We see a deficit of faint galaxies in the central regions of massive and regular clusters, and strong suppression of bright and faint blue galaxies in the centres of cool-core clusters, attributable to ram pre...

  8. The duty cycle of radio-mode feedback in complete samples of clusters

    CERN Document Server

    Bîrzan, L; Nulsen, P E J; McNamara, B R; Röttgering, H J A; Wise, M W; Mittal, R

    2012-01-01

    The Chandra X-ray Observatory has revealed X-ray bubbles in the intracluster medium (ICM) of many nearby cooling flow clusters. The bubbles trace feedback that is thought to couple the central active galactic nucleus (AGN) to the ICM, helping to stabilize cooling flows and govern the evolution of massive galaxies. However, the prevalence and duty cycle of such AGN outbursts are not well understood. To this end, we study how cooling is balanced by bubble heating for complete samples of clusters (the brightest 55 clusters of galaxies, hereafter B55, and the HIghest X-ray FLUx Galaxy Cluster Sample, HIFLUGCS). We find that the radio luminosity of the central galaxy exceeds 2.5 x 10^30 erg s^-1 Hz^-1 only in cooling flow clusters. This result implies a connection between the central radio source and the ICM, as expected if AGN feedback is operating. Additionally, we find a duty cycle for radio-mode feedback, the fraction of time that a system possesses bubbles inflated by its central radio source, of > 69 per cent...

  9. Evaluation of a Two-Stage Approach in Trans-Ethnic Meta-Analysis in Genome-Wide Association Studies.

    Science.gov (United States)

    Hong, Jaeyoung; Lunetta, Kathryn L; Cupples, L Adrienne; Dupuis, Josée; Liu, Ching-Ti

    2016-05-01

    Meta-analysis of genome-wide association studies (GWAS) has achieved great success in detecting loci underlying human diseases. Incorporating GWAS results from diverse ethnic populations into a meta-analysis, however, remains challenging because of possible heterogeneity across studies. Conventional fixed-effects (FE) or random-effects (RE) methods may not be most suitable for aggregating multiethnic GWAS results, because of the violation of the homogeneous effect assumption across studies (FE) or low power to detect signals (RE). Three recently proposed methods, the modified RE (RE-HE) model, the binary-effects (BE) model and a Bayesian approach (Meta-analysis of Transethnic Association, MANTRA), show increased power over the FE and RE methods while incorporating heterogeneity of effects when meta-analyzing trans-ethnic GWAS results. We propose a two-stage approach to account for heterogeneity in trans-ethnic meta-analysis, in which studies are clustered by cohort-specific ancestry information prior to meta-analysis. We compare this to a no-prior-clustering (crude) approach, evaluating the type I error and power of the two strategies in an extensive simulation study to investigate whether the two-stage approach offers any improvement over the crude approach. We find that both the two-stage approach and the crude approach provide well-controlled type I error for all five methods (FE, RE, RE-HE, BE, MANTRA). However, the two-stage approach shows increased power for BE and RE-HE, and similar power for MANTRA and FE, compared to the corresponding crude approach, especially when there is heterogeneity across the multiethnic GWAS results. These results suggest that prior clustering in the two-stage approach can be an effective and efficient intermediate step in meta-analysis to account for multiethnic heterogeneity.
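
    A minimal sketch of the two-stage idea using a standard inverse-variance fixed-effects combination within each ancestry cluster (toy numbers; the RE-HE, BE and MANTRA methods are not reproduced here).

      import numpy as np

      def fixed_effects(betas, ses):
          """Inverse-variance weighted fixed-effects meta-analysis."""
          w = 1.0 / np.asarray(ses) ** 2
          beta = np.sum(w * np.asarray(betas)) / np.sum(w)
          return beta, np.sqrt(1.0 / np.sum(w))

      studies = [  # (ancestry cluster, effect estimate, standard error) - toy values
          ("EUR", 0.12, 0.03), ("EUR", 0.10, 0.04),
          ("EAS", 0.05, 0.05), ("EAS", 0.07, 0.06),
      ]
      for ancestry in sorted({s[0] for s in studies}):   # stage 1: cluster by ancestry
          b = [s[1] for s in studies if s[0] == ancestry]
          se = [s[2] for s in studies if s[0] == ancestry]
          print(ancestry, fixed_effects(b, se))          # stage 2: meta-analyse per cluster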

  10. HICOSMO: cosmology with a complete sample of galaxy clusters - II. Cosmological results

    Science.gov (United States)

    Schellenberger, G.; Reiprich, T. H.

    2017-10-01

    The X-ray bright, hot gas in the potential well of a galaxy cluster enables systematic X-ray studies of samples of galaxy clusters to constrain cosmological parameters. HIFLUGCS consists of the 64 X-ray brightest galaxy clusters in the Universe, building up a local sample. Here, we utilize this sample to determine, for the first time, individual hydrostatic mass estimates for all its clusters and, by making use of the completeness of the sample, we quantify constraints on the two interesting cosmological parameters, Ωm and σ8. We apply our total hydrostatic and gas mass estimates from the X-ray analysis to a Bayesian cosmological likelihood analysis and leave several parameters free to be constrained. We find Ωm = 0.30 ± 0.01 and σ8 = 0.79 ± 0.03 (statistical uncertainties, 68 per cent credibility level) using our default analysis strategy combining both a mass function analysis and the gas mass fraction results. The main sources of bias that we correct here are (1) the influence of galaxy groups (incompleteness in parent samples and differing behaviour of the LX-M relation), (2) the hydrostatic mass bias, (3) the extrapolation of the total mass (comparing various methods), (4) the theoretical halo mass function and (5) other physical effects (non-negligible neutrino mass). We find that galaxy groups introduce a strong bias, since their number density seems to be overpredicted by the halo mass function. On the other hand, incorporating baryonic effects does not result in a significant change in the constraints. The total (uncorrected) systematic uncertainties (∼20 per cent) clearly dominate the statistical uncertainties on cosmological parameters for our sample.

  11. A study of high-redshift AGN feedback in SZ cluster samples

    Science.gov (United States)

    Bîrzan, L.; Rafferty, D. A.; Brüggen, M.; Intema, H. T.

    2017-10-01

    We present a study of active galactic nucleus (AGN) feedback at higher redshifts (0.3 < z < 1.2) using samples of clusters from the South Pole Telescope and Atacama Cosmology Telescope surveys. In contrast to studies of nearby systems, we do not find a separation between cooling flow (CF) clusters and non-CF clusters based on the radio luminosity of the central radio source (cRS). This lack of separation may be due to the increased incidence of galaxy-galaxy mergers at higher redshift that trigger AGN activity. In support of this scenario, we find evidence for evolution in the radio-luminosity function of the cRS: while the lower luminosity sources do not evolve much, the higher luminosity sources show a strong increase in the frequency of their occurrence at higher redshifts. We interpret this evolution as an increase in high-excitation radio galaxies (HERGs) in massive clusters at z > 0.6, implying a transition from HERG-mode accretion to lower power low-excitation radio galaxy (LERG)-mode accretion at intermediate redshifts. Additionally, we use local radio-to-jet power scaling relations to estimate feedback power and find that half of the CF systems in our sample probably have enough heating to balance cooling. However, we postulate that the local relations are likely not well suited to predict feedback power in high-luminosity HERGs, as they are derived from samples composed mainly of lower luminosity LERGs.

  12. Noncausal two-stage image filtration at presence of observations with anomalous errors

    Directory of Open Access Journals (Sweden)

    S. V. Vishnevyy

    2013-04-01

    Full Text Available Introduction. For the filtration of images that contain regions with anomalous errors, it is necessary to develop adaptive algorithms that can detect such regions and apply a filter with appropriate parameters to suppress the anomalous noise. Development of an adaptive algorithm for noncausal two-stage image filtration in the presence of observations with anomalous errors. An adaptive algorithm for noncausal two-stage filtration is developed. In the first stage, an adaptive one-dimensional algorithm for causal filtration is applied independently along the rows and columns of the image. In the second stage, the data obtained are combined and a posteriori estimates are calculated. Results of experimental investigations. The developed adaptive algorithm for noncausal image filtration in the presence of observations with anomalous errors is investigated on a model sample by means of statistical modelling on a PC. The image is modelled as a realization of a Gaussian-Markov random field and corrupted with uncorrelated Gaussian noise; regions of the image with anomalous errors are corrupted with uncorrelated Gaussian noise of higher power than the normal noise on the rest of the image. Conclusions. The adaptive algorithm for noncausal two-stage filtration is analysed and the accuracy of the computed estimates is characterized. The first and second stages of the developed adaptive algorithm are compared, and the adaptive algorithm is compared with the known uniform two-stage algorithm of image filtration. According to the obtained results, the uniform algorithm does not suppress anomalous noise, whereas the adaptive algorithm shows good results.
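
    A minimal sketch of the two-stage structure, with a simple exponential filter standing in for the paper's adaptive causal filter and a plain average standing in for the a posteriori fusion.

      import numpy as np

      def causal_1d(x, alpha=0.5):
          """Simple causal recursive smoother (placeholder for the adaptive filter)."""
          y = np.empty_like(x, dtype=float)
          y[0] = x[0]
          for k in range(1, len(x)):
              y[k] = alpha * x[k] + (1 - alpha) * y[k - 1]
          return y

      def two_stage(img):
          rows = np.apply_along_axis(causal_1d, 1, img)   # stage 1a: along rows
          cols = np.apply_along_axis(causal_1d, 0, img)   # stage 1b: along columns
          return 0.5 * (rows + cols)                      # stage 2: combine the estimates

      noisy = np.random.default_rng(0).normal(size=(64, 64))
      print(two_stage(noisy).shape)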

  13. Using Dynamic Quantum Clustering to Analyze Hierarchically Heterogeneous Samples on the Nanoscale

    Energy Technology Data Exchange (ETDEWEB)

    Hume, Allison; /Princeton U. /SLAC

    2012-09-07

    Dynamic Quantum Clustering (DQC) is an unsupervised, highly visual data mining technique. DQC was tested as an analysis method for X-ray Absorption Near Edge Structure (XANES) data from the Transmission X-ray Microscopy (TXM) group. The TXM group images hierarchically heterogeneous materials with nanoscale resolution and a large field of view. XANES data consist of an energy spectrum for each pixel of an image. It was determined that DQC successfully identifies structure in data of this type without prior knowledge of the components in the sample. Clusters and sub-clusters clearly reflected features of the spectra that identified chemical component, chemical environment, and density in the image. DQC can also be used in conjunction with established data analysis techniques, which do require knowledge of the components present.

  14. Characterization of component interactions in two-stage axial turbine

    Directory of Open Access Journals (Sweden)

    Adel Ghenaiet

    2016-08-01

    Full Text Available This study concerns the characterization of both the steady and unsteady flows and the analysis of stator/rotor interactions of a two-stage axial turbine. The predicted aerodynamic performances show noticeable differences when simulating the turbine stages simultaneously or separately. By considering the multi-blade per row and the scaling technique, the computational fluid dynamics (CFD) produced better results concerning the effect of pitchwise positions between vanes and blades. The recorded pressure fluctuations exhibit a high unsteadiness characterized by a space-time periodicity described by a double Fourier decomposition. The fast Fourier transform (FFT) analysis of the static pressure fluctuations recorded at different interfaces reveals the existence of principal harmonics and their multiples, where each lobed structure of the pressure wave corresponds to the vane/blade count. The potential effect is seen to propagate both upstream and downstream of each blade row and becomes accentuated at low mass flow rates. Between vanes and blades, the potential effect is seen to dominate almost the entire blade span, while downstream of the blades this effect dominates from hub to mid-span. Near the shroud the prevailing effect is rather linked to the blade tip flow structure.
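
    A minimal sketch of extracting such harmonics from a recorded pressure signal with an FFT; the sampling rate and blade-passing frequency are illustrative, not values from the study.

      import numpy as np

      fs, bpf = 50_000.0, 4_200.0                 # sampling rate and blade-passing frequency (assumed)
      t = np.arange(0, 0.1, 1 / fs)
      p = np.sin(2 * np.pi * bpf * t) + 0.3 * np.sin(2 * np.pi * 2 * bpf * t)

      spec = np.abs(np.fft.rfft(p)) / len(p)
      freqs = np.fft.rfftfreq(len(p), 1 / fs)
      peaks = freqs[np.argsort(spec)[-2:]]        # the two dominant harmonics
      print(sorted(peaks))                        # ~[4200.0, 8400.0], i.e. BPF and 2xBPF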

  15. A continuous two stage solar coal gasification system

    Science.gov (United States)

    Mathur, V. K.; Breault, R. W.; Lakshmanan, S.; Manasse, F. K.; Venkataramanan, V.

    The characteristics of a two-stage fluidized-bed hybrid coal gasification system to produce syngas from coal, lignite, and peat are described. Heat for devolatilization at 823 K is supplied by recirculating gas heated by a solar receiver/coal heater. A second-stage gasifier maintained at 1227 K serves to crack the remaining tar and light oil to yield a product free from tar and other condensables, and sulfur can be removed by hot clean-up processes. CO2 is minimized because the coal is not burned with oxygen, and the product gas contains 50% H2. The bench-scale reactors consist of a stage I unit 0.1 m in diameter, which is fed coal 200 microns in size, and a stage II reactor with an inner diameter of 0.36 m, which serves to gasify the char from stage I. A solar power source of 10 kWt is required for the bench model and will be obtained from a central receiver with quartz or heat-pipe configurations for heat transfer.

  17. Two stages kinetics of municipal solid waste inoculation composting processes

    Institute of Scientific and Technical Information of China (English)

    XI Bei-dou; HUANG Guo-he; QIN Xiao-sheng; LIU Hong-liang

    2004-01-01

    In order to understand the key mechanisms of the composting processes, the municipal solid waste (MSW) composting processes were divided into two stages, and the characteristics of typical experimental scenarios were analyzed from the viewpoint of microbial kinetics. Through experimentation with an advanced composting reactor under controlled composting conditions, several equations were worked out to simulate the degradation rate of the substrate. The equations showed that the degradation rate was controlled by the concentration of microbes in the first stage. The substrate degradation rates of the inoculation Runs A, B and C and of the Control composting system were 13.61 g/(kg·h), 13.08 g/(kg·h), 15.67 g/(kg·h), and 10.5 g/(kg·h), respectively; the value for Run C is around 1.5 times that of the Control system. The decomposition rate of the second stage is controlled by the concentration of substrate. Although the organic matter decomposition rates were similar for all Runs, inoculation could reduce the values of the half-velocity coefficient and make the composting stabilize more efficiently. In particular, for Run C the decomposition rate is high in the first stage and low in the second stage. The results indicated that the inoculation was efficient for the composting processes.
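
    A minimal sketch of the two-stage rate structure; the constants are illustrative, chosen only so that the stage-1 rate k1*X matches the reported order of magnitude of ~13 g/(kg·h).

      def degrade(S0, X, k1=0.013, mu_max=2.0, Ks=50.0, t_switch=48.0, t_end=240.0, dt=1.0):
          """Euler integration of the two-stage degradation rate law."""
          S, t, history = S0, 0.0, []
          while t < t_end:
              if t < t_switch:
                  rate = k1 * X                    # stage 1: limited by microbe concentration
              else:
                  rate = mu_max * S / (Ks + S)     # stage 2: substrate-limited (half-velocity Ks)
              S = max(S - rate * dt, 0.0)
              history.append((t, S))
              t += dt
          return history

      print(degrade(S0=800.0, X=1000.0)[-1])       # remaining substrate after 240 h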

  18. Detecting Sunyaev-Zel'dovich clusters with PLANCK: III. Properties of the expected SZ-cluster sample

    CERN Document Server

    Schaefer, B M; Schaefer, Bjoern Malte; Bartelmann, Matthias

    2006-01-01

    The PLANCK mission is the most sensitive all-sky submillimetric mission currently being planned and prepared. Special emphasis is given to the observation of clusters of galaxies by their thermal Sunyaev-Zel'dovich (SZ) effect. In this work, the results of a simulation are presented that combines all-sky maps of the thermal and kinetic SZ-effects with cosmic microwave background (CMB) fluctuations, Galactic foregrounds (synchrotron emission, thermal emission from dust, free-free emission and rotational transitions of carbon monoxide molecules) and sub-millimetric emission from planets and asteroids of the Solar System. Observational issues, such as PLANCK's beam shapes, frequency response and spatially non-uniform instrumental noise, have been incorporated. Matched and scale-adaptive multi-frequency filtering schemes have been extended to spherical coordinates and are now applied to the data sets in order to isolate and amplify the weak thermal SZ-signal. The properties of the resulting SZ-cluster sample are cha...

  19. Identification of clusters of foot pain location in a community sample.

    Science.gov (United States)

    Gill, Tiffany K; Menz, Hylton B; Landorf, Karl B; Arnold, John B; Taylor, Anne W; Hill, Catherine L

    2017-02-23

    To identify foot pain clusters according to pain location in a community-based sample of the general population, this study analysed data from the North West Adelaide Health Study. Data were obtained between 2004 and 2006 using computer-assisted telephone interviewing, clinical assessment and a self-completed questionnaire. The location of foot pain was assessed using a diagram during the clinical assessment. Hierarchical cluster analysis was undertaken to identify foot pain location clusters, which were then compared in relation to demographics, comorbidities and podiatry utilisation. There were 558 participants with foot pain (mean age 54.4 years, 57.5% female). Five clusters were identified: one with predominantly arch and ball pain (26.8%); one with hindfoot pain (20.9%); another with heel pain (13.3%); and two with predominantly forefoot, toe and nail pain (28.3% and 10.7%). Each cluster was distinct in age, sex and comorbidity profile. Of the two clusters with predominantly forefoot, toe and nail pain, one had a higher proportion of males and of those classified as obese, with diabetes and using podiatry services (30%), while the other comprised a higher proportion of females who were overweight and had a lower use of podiatry services (17.5%). Five clusters of foot pain according to pain location were identified, all with distinct age, sex and comorbidity profiles. These findings may assist in identifying individuals at risk of developing foot pain and in developing targeted preventative strategies and treatments.
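
    A minimal sketch of the clustering step on binary pain-location indicators (random stand-in data; the study's variables and linkage choice may differ).

      import numpy as np
      from scipy.cluster.hierarchy import linkage, fcluster

      rng = np.random.default_rng(1)
      sites = ["arch", "ball", "hindfoot", "heel", "forefoot", "toe", "nail"]
      X = rng.integers(0, 2, size=(558, len(sites))).astype(float)  # participants x sites

      Z = linkage(X, method="ward")
      labels = fcluster(Z, t=5, criterion="maxclust")   # cut the tree into five clusters
      print(np.bincount(labels)[1:])                    # cluster sizes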

  20. The XXL Survey III. Luminosity-temperature relation of the Bright Cluster Sample

    CERN Document Server

    Giles, P A; Pacaud, F; Lieu, M; Clerc, N; Pierre, M; Adami, C; Chiappetti, L; Démoclés, J; Ettori, S; Févre, J P Le; Ponman, T; Sadibekova, T; Smith, G P; Willis, J P; Ziparo, F

    2015-01-01

    The XXL Survey is the largest homogeneous survey carried out with XMM-Newton. Covering an area of 50 deg$^{2}$, the survey contains several hundred galaxy clusters out to a redshift of $\approx$2 above an X-ray flux limit of $\sim$5$\times10^{-15}$ erg cm$^{-2}$ s$^{-1}$. This paper belongs to the first series of XXL papers focusing on the bright cluster sample. We investigate the luminosity-temperature (LT) relation for the brightest clusters detected in the XXL Survey, taking the selection biases fully into account, and place constraints on the form of the LT relation and its evolution. We have selected the 100 brightest clusters in the XXL Survey based on their measured X-ray flux and analysed them to determine their luminosities and temperatures in order to evaluate the LT relation. We used three methods to fit the LT relation, two of which provide a prescription to fully take into account the selection effects of the survey. We measure the evolution of the LT relation ...

  1. The XXL Survey. XIII. Baryon content of the bright cluster sample

    CERN Document Server

    Eckert, D; Coupon, J; Gastaldello, F; Pierre, M; Melin, J -B; Brun, A M C Le; McCarthy, I G; Adami, C; Chiappetti, L; Faccioli, L; Giles, P; Lavoie, S; Lefevre, J P; Lieu, M; Mantz, A; Maughan, B; McGee, S; Pacaud, F; Paltani, S; Sadibekova, T; Smith, G P; Ziparo, F

    2015-01-01

    Traditionally, galaxy clusters have been expected to retain all the material accreted since their formation epoch. For this reason, their matter content should be representative of the Universe as a whole, and thus their baryon fraction should be close to the Universal baryon fraction. We make use of the sample of the 100 brightest galaxy clusters discovered in the XXL Survey to investigate the fraction of baryons in the form of hot gas and stars in the cluster population. We measure the gas masses of the detected halos and use a mass--temperature relation directly calibrated using weak-lensing measurements for a subset of XXL clusters to estimate the halo mass. We find that the weak-lensing calibrated gas fraction of XXL-100-GC clusters is substantially lower than was found in previous studies using hydrostatic masses. Our best-fit relation between gas fraction and mass reads $f_{\\rm gas,500}=0.055_{-0.006}^{+0.007}\\left(M_{\\rm 500}/10^{14}M_\\odot\\right)^{0.21_{-0.10}^{+0.11}}$. The baryon budget of galaxy c...

  2. Cluster analysis of passive air sampling data based on the relative composition of persistent organic pollutants.

    Science.gov (United States)

    Liu, Xiande; Wania, Frank

    2014-03-01

    The development of passive air samplers has allowed the measurement of time-integrated concentrations of persistent organic pollutants (POPs) within spatial networks on a variety of scales. Cluster analysis of POP composition may enhance the interpretation of such spatial data. Several methodological aspects of the application of cluster analysis are discussed, including the influence of a dominant pollutant, the role of PAS duplication, and comparison of regional studies. Relying on data from six regional studies in North and South America, Africa, and Asia, we illustrate here how cluster analysis can be used to extract information and gain insights into POP sources and atmospheric transport contributions. Cluster analysis allows classification of PAS samples into those with significant local source contributions and those that represent regional fingerprints. Local emissions, atmospheric transport, and seasonal cycles are identified as being among the major factors determining the variation in POP composition at many sites. By complementing cluster analysis with meteorological data such as air mass back-trajectories, terrain, as well as geographical and socio-economic aspects, a comprehensive picture of the atmospheric contamination of a region by POPs emerges.
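
    A minimal sketch of clustering samplers on relative POP composition (toy data; the actual congener lists and distance choices vary between studies).

      import numpy as np
      from scipy.cluster.hierarchy import linkage, fcluster
      from scipy.spatial.distance import pdist

      rng = np.random.default_rng(2)
      conc = rng.lognormal(size=(40, 12))               # 40 samplers x 12 POP congeners
      profile = conc / conc.sum(axis=1, keepdims=True)  # relative composition per sampler

      D = pdist(profile, metric="correlation")          # compositional dissimilarity
      labels = fcluster(linkage(D, method="average"), t=4, criterion="maxclust")
      print(labels)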

  3. Surface Brightness Profiles for a sample of LMC, SMC and Fornax galaxy Globular Clusters

    CERN Document Server

    Noyola, Eva

    2007-01-01

    We use Hubble Space Telescope archival images to measure central surface brightness profiles of globular clusters around satellite galaxies of the Milky Way. We report results for 21 clusters around the LMC, 5 around the SMC, and 4 around the Fornax dwarf galaxy. The profiles are obtained using a recently developed technique based on measuring integrated light, which is tested on an extensive simulated dataset. Our results show that for 70% of the sample, the central photometric points of our profiles are brighter than previous measurements using star counts with deviations as large as 2 mag/arcsec^2. About 40% of the objects have central profiles deviating from a flat central core, with central logarithmic slopes continuously distributed between -0.2 and -1.2. These results are compared with those found for a sample of Galactic clusters using the same method. We confirm the known correlation in which younger clusters tend to have smaller core radii, and we find that they also have brighter central surface br...

  5. H0 from an orientation-unbiased sample of SZ and X-ray clusters

    CERN Document Server

    Jones, M E; Grainge, K; Grainger, W F; Kneissl, R; Pooley, G G; Saunders, R; Miyoshi, S J; Tsuruta, T; Yamashita, K; Tawara, Y; Furuzawa, A; Harada, A; Hatsukade, I; Jones, Michael E.; Edge, Alastair C.; Grainge, Keith; Grainger, William F.; Kneissl, Ruediger; Saunders, Richard; Miyoshi, Shigeru J.; Tsuruta, Taisuke; Yamashita, Koujun; Tawara, Yuzuru; Furuzawa, Akihiro; Harada, Akihiro; Hatsukade, Isamu

    2001-01-01

    We have observed the Sunyaev-Zel'dovich effect in a sample of five moderate-redshift clusters with the Ryle Telescope, and used them in conjunction with X-ray imaging and spectral data from ROSAT and ASCA to measure the Hubble constant. This sample was chosen with a strict X-ray flux limit using both the BCS and NORAS cluster catalogues to be well above the surface-brightness limit of the ROSAT All-Sky Survey, and hence to be unbiased with respect to the orientation of the clusters. This controls the major potential systematic effect in the SZ/X-ray method of measuring H0. Taking the weighted geometric mean of the results and including the main sources of random error, namely the noise in the SZ measurement, the uncertainty in the X-ray temperatures and the unknown ellipticity of the clusters, we find H0 = 59 +8/-7 km/s/Mpc assuming a standard CDM model with Omega_M = 1.0, Omega_Lambda = 0.0, or H0 = 65 +8/-7 km/s/Mpc if Omega_M = 0.3, Omega_Lambda = 0.7.

  6. Sample size estimation to substantiate freedom from disease for clustered binary data with a specific risk profile

    DEFF Research Database (Denmark)

    Kostoulas, P.; Nielsen, Søren Saxmose; Browne, W. J.;

    2013-01-01

    SUMMARY Disease cases are often clustered within herds or generally in groups that share common characteristics. Sample size formulae must adjust for the within-cluster correlation of the primary sampling units. Traditionally, the intra-cluster correlation coefficient (ICC), which is an average measure of clustering, has been used. We propose the use of the variance partition coefficient (VPC), which measures the clustering of infection/disease for individuals with a common risk profile. Sample size estimates are obtained separately for those groups that exhibit markedly different heterogeneity, which improves the validity of the estimates and the power when applied to these groups.
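
    For flavour, a sketch of the general principle: a standard freedom-from-disease sample size inflated by a design effect that grows with within-cluster correlation. This is not the paper's VPC-based formula, which is not given in the abstract.

      import math

      def n_freedom(design_prev=0.05, sensitivity=0.9, confidence=0.95):
          """Units to test so that all-negative results give `confidence`."""
          return math.ceil(math.log(1 - confidence) / math.log(1 - design_prev * sensitivity))

      def n_clustered(icc, cluster_size, **kw):
          deff = 1 + (cluster_size - 1) * icc    # design effect for cluster sampling
          return math.ceil(n_freedom(**kw) * deff)

      print(n_freedom())                         # simple random sampling: 66
      print(n_clustered(icc=0.2, cluster_size=30))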

  7. PERFORMANCE STUDY OF A TWO STAGE SOLAR ADSORPTION REFRIGERATION SYSTEM

    Directory of Open Access Journals (Sweden)

    BAIJU. V

    2011-07-01

    Full Text Available The present study deals with the experimentally investigated performance of a two-stage solar adsorption refrigeration system with an activated carbon-methanol pair. Such a system was fabricated and tested under the conditions of the National Institute of Technology Calicut, Kerala, India. The system consists of a parabolic solar concentrator, two water tanks, two adsorbent beds, a condenser, an expansion device, an evaporator and an accumulator. In this particular system the second water tank acts as a sensible heat storage device so that the system can also be used during the night. The system has been designed for heating 50 litres of water from 25°C to 90°C as well as cooling 10 litres of water from 30°C to 10°C within one hour. Performance parameters such as the specific cooling power (SCP), coefficient of performance (COP), solar COP and exergetic efficiency are studied, as is the dependence of the exergetic efficiency and cycle COP on the driving heat source temperature. The optimum heat source temperature for this system is determined to be 72.4°C. The results show that the system performs better during the night than during the day: the mean cycle COP is 0.196 during the day and 0.335 at night, and the mean SCP values during the day and at night are 47.83 and 68.2, respectively. The experimental results also demonstrate that the refrigerator has a cooling capacity of 47 to 78 W during the day and 57.6 to 104.4 W at night.

  8. The Atacama Cosmology Telescope Sunyaev-Zel'dovich Equatorial Galaxy Cluster Sample

    Science.gov (United States)

    Menanteau, Felipe; Cosmology Telescope, Atacama

    2012-05-01

    We have reached the era where microwave surveys such as the Atacama Cosmology Telescope (ACT), the South Pole Telescope (SPT) and Planck are reporting the first samples of massive galaxy clusters through the Sunyaev-Zel'dovich (SZ) effect. Here I will introduce a new mass-selected and redshift-independent sample of optically-confirmed galaxy clusters detected by ACT over approximately 300 square-degrees along the celestial equator overlapping the deep optical u,g,r,i and z imaging from SDSS Stripe 82. This work was supported by the U.S. National Science Foundation through awards AST- 0408698 for the ACT project and PHY-0355328, AST-0707731, and PIRE-0507768 (award number OISE-0530095).

  9. Generalized Yule-Walker and two-stage identification algorithms for dual-rate systems

    Institute of Scientific and Technical Information of China (English)

    Feng DING

    2006-01-01

    In this paper, two approaches are developed for directly identifying single-rate models of dual-rate stochastic systems in which the input updating frequency is an integer multiple of the output sampling frequency. The first is the generalized Yule-Walker algorithm and the second is a two-stage algorithm based on the correlation technique. The basic idea is to directly identify the parameters of underlying single-rate models instead of the lifted models of dual-rate systems from the dual-rate input-output data, assuming that the measurement data are stationary and ergodic. An example is given.
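
    For flavour, a sketch of classic single-rate Yule-Walker AR estimation from sample autocorrelations; the paper's generalized dual-rate algorithm is more involved and is not reproduced here.

      import numpy as np
      from scipy.linalg import solve_toeplitz

      def yule_walker(y, order):
          y = y - y.mean()
          r = np.array([y[:len(y) - k] @ y[k:] for k in range(order + 1)]) / len(y)
          return solve_toeplitz(r[:order], r[1:order + 1])   # solve R a = r

      rng = np.random.default_rng(3)
      y = np.zeros(2000)
      for k in range(2, 2000):       # simulate AR(2): y_t = 1.5 y_{t-1} - 0.7 y_{t-2} + e_t
          y[k] = 1.5 * y[k - 1] - 0.7 * y[k - 2] + rng.normal()
      print(yule_walker(y, 2))       # approximately [1.5, -0.7]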

  10. FORMATION OF HIGHLY RESISTANT CARBIDE AND BORIDE COATINGS BY A TWO-STAGE DEPOSITION METHOD

    Directory of Open Access Journals (Sweden)

    W. I. Sawich

    2011-01-01

    Full Text Available A study was made of the aspects of forming highly resistant coatings in the surface zone of tool steels and solid carbide inserts by a two-stage method. At the first stage of the method, pure Ta or Nb coatings were electrodeposited on samples of tool steel and solid carbide inserts in a molten salt bath containing Ta and Nb fluorides. At the second stage, the electrodeposited Ta (Nb) coating was subjected to carburizing or boriding to form carbide (TaC, NbC) or boride (TaB, NbB) cladding layers.

  11. Clinical evaluation of nonsyndromic dental anomalies in Dravidian population: A cluster sample analysis

    OpenAIRE

    Yamunadevi, Andamuthu; Selvamani, M.; Vinitha, V.; Srivandhana, R.; Balakrithiga, M.; Prabhu, S; Ganapathy, N

    2015-01-01

    Aim: To record the prevalence rate of dental anomalies in a Dravidian population and analyze the percentage of individual anomalies in the population. Methodology: A cluster sample analysis was done, in which 244 subjects studying in a dental institution were all included and analyzed for the occurrence of dental anomalies by clinical examination, excluding third molars from the analysis. Results: 31.55% of the study subjects had dental anomalies, and shape anomalies were the most prevalent (22.1%), followed b...

  12. Multiwavelength Mass Comparisons of the z~0.3 CNOC Cluster Sample

    Science.gov (United States)

    Hicks, A. K.; Ellingson, E.; Hoekstra, H.; Yee, H. K. C.

    2006-11-01

    Results are presented from a detailed analysis of optical and X-ray observations of moderate-redshift galaxy clusters from the Canadian Network for Observational Cosmology (CNOC) subsample of the EMSS. The combination of extensive optical and deep X-ray observations of these clusters makes them ideal candidates for multiwavelength mass comparison studies. X-ray surface brightness profiles of 14 clusters with 0.17 < z < 0.55 are analysed, and spectra within R2500 provide temperature, abundance, and luminosity information. Under the assumptions of hydrostatic equilibrium and spherical symmetry, we derive gas and total masses within R2500 and R200. We find an average gas mass fraction of fgas(R200) = 0.092 ± 0.004 h70^(-3/2), resulting in Ωm = 0.42 ± 0.02 (formal error). We also derive dynamical masses for these clusters to R200. We find no systematic bias between X-ray and dynamical methods across the sample, with an average Mdyn/MX = 0.97 ± 0.05. We also compare X-ray masses to weak-lensing mass estimates for a subset of our sample, resulting in a weighted average Mlens/MX of 0.99 ± 0.07. We investigate X-ray scaling relationships and find power-law slopes that are slightly steeper than the predictions of self-similar models, with an E(z)^(-1) LX-TX slope of 2.4 ± 0.2 and an E(z) M2500-TX slope of 1.7 ± 0.1. Relationships between red-sequence optical richness (Bgc,red) and global cluster X-ray properties (TX, LX, and M2500) are also examined and fitted.

  13. Proton transfer pathways in an aspartate-water cluster sampled by a network of discrete states

    Science.gov (United States)

    Reidelbach, Marco; Betz, Fridtjof; Mäusle, Raquel Maya; Imhof, Petra

    2016-08-01

    Proton transfer reactions are complex transitions due to the size and flexibility of the hydrogen-bonded networks along which the protons may "hop". The combination of molecular-dynamics-based sampling of water positions and orientations with direct sampling of proton positions is an efficient way to capture the interplay of these degrees of freedom in a transition network. The energetically most favourable pathway in the proton transfer network computed for an aspartate-water cluster shows the pre-orientation of water molecules and aspartate side chains to be a prerequisite for the subsequent concerted proton transfer to the product state.
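
    A minimal sketch of the pathway-search idea on a transition network: discrete protonation states as nodes, transitions weighted by barrier energies, and the favoured pathway found as the minimum-weight path (the states and weights below are toy values, not from the paper).

      import networkx as nx

      G = nx.Graph()
      G.add_weighted_edges_from([                 # (state, state, barrier) - illustrative
          ("Asp-H...W1", "Asp(-)...W1-H", 4.0),
          ("Asp(-)...W1-H", "Asp(-)...W2-H", 2.5),
          ("Asp-H...W1", "Asp(-)...W2-H", 8.0),   # direct hop, higher barrier
          ("Asp(-)...W2-H", "product", 3.0),
      ])
      path = nx.shortest_path(G, "Asp-H...W1", "product", weight="weight")
      print(path)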

  14. Late-type dwarf galaxies in the Virgo cluster; 1, the samples

    CERN Document Server

    Almoznino, E; Almoznino, Elchanan; Brosch, Noah

    1996-01-01

    We selected complete samples of late-type dwarf galaxies in the Virgo cluster with HI information. The galaxies were observed at the Wise Observatory using several broad-band and H\alpha bandpasses. UV measurements were carried out with the IUE Observatory from VILSPA, and use was made of images from the FAUST shuttle-borne UV telescope. We describe our observations in detail, paying particular attention to the determination of measurement errors, and present the observational results together with published data and far-infrared information from IRAS. The sample will be analyzed in subsequent papers in order to study star formation mechanisms.

  15. Chemical Abundances in a Sample of Red Giants in the Open Cluster NGC 2420 from APOGEE

    CERN Document Server

    Souto, Diogo; Smith, Verne; Prieto, Carlos Allende; Pinsonneault, Marc; Zamora, Olga; García-Hernández, D Anibal; Mészáros, Szabolcs; Bovy, Jo; Pérez, Ana Elia García; Anders, Friedrich; Bizyaev, Dmitry; Carrera, Ricardo; Frinchaboy, Peter; Holtzman, Jon; Ivans, Inese; Majewski, Steve; Shetrone, Matthew; Sobeck, Jennifer; Pan, Kaike; Tang, Baitian; Villanova, Sandro; Geisler, Douglas

    2016-01-01

    NGC 2420 is a $\sim$2 Gyr-old, well-populated open cluster that lies about 2 kpc beyond the solar circle, in the general direction of the Galactic anti-center. Most previous abundance studies have found this cluster to be mildly metal-poor, but with a large scatter in the derived metallicities. Detailed chemical abundance distributions are derived for 12 red-giant members of NGC 2420 via a manual abundance analysis of high-resolution (R = 22,500) near-infrared ($\lambda$1.5-1.7$\mu$m) spectra obtained from the Apache Point Observatory Galactic Evolution Experiment (APOGEE) survey. The sample analyzed contains 6 stars that are identified as members of the first-ascent red giant branch (RGB), as well as 6 members of the red clump (RC). We find small scatter in the star-to-star abundances in NGC 2420, with a mean cluster abundance of [Fe/H] = -0.16 $\pm$ 0.04 for the 12 red giants. The internal abundance dispersion for all elements (C, N, O, Na, Mg, Al, Si, K, Ca, Ti, V, Cr, Mn, Co and Ni...

  16. The Luminosity Function of the NoSOCS Galaxy Cluster Sample

    CERN Document Server

    De Filippis, E; Longo, G; La Barbera, F; de Carvalho, R R; Gal, R

    2011-01-01

    We present the analysis of the luminosity function of a large sample of galaxy clusters from the Northern Sky Optical Cluster Survey, using latest data from the Sloan Digital Sky Survey. Our global luminosity function (down to M_r<= -16) does not show the presence of an upturn at faint magnitudes, while we do observe a strong dependence of its shape on both richness and cluster-centric radius, with a brightening of M^* and an increase of the dwarf to giant ratio with richness, indicating that more massive systems are more efficient in creating/retaining a population of dwarf satellites. This is observed both within physical (0.5 R_200) and fixed (0.5 Mpc) apertures, suggesting that the trend is either due to a global effect, operating at all scales, or to a local one but operating on even smaller scales. We further observe a decrease of the relative number of dwarf galaxies towards the cluster center; this is most probably due to tidal collisions or collisional disruption of the dwarfs since merging proces...

  17. An Efficient Technique for Network Traffic Summarization using Multiview Clustering and Statistical Sampling

    Directory of Open Access Journals (Sweden)

    Mohiuddin Ahmed

    2015-07-01

    Full Text Available There is significant interest in the data mining and network management communities in efficiently analysing the huge amounts of network traffic generated even in small networks. Summarization is a primary data mining task for generating a concise yet informative summary of given data, and creating such a summary from network traffic data is a research challenge. Existing clustering-based summarization techniques lack the ability to create a summary suitable for further data mining tasks such as anomaly detection, and they require the summary size as an external input. Additionally, for complex and high-dimensional network traffic datasets, there is often no single clustering solution that explains the structure of the given data. In this paper, we investigate the use of multiview clustering to create a meaningful summary, built from original data instances of the network traffic data, in an efficient manner. We develop a mathematically sound approach to selecting the summary size using a sampling technique. We compare our proposed approach with regular clustering-based summarization (incorporating the summary size calculation method) and with a random approach. We validate our proposed approach using a benchmark network traffic dataset and state-of-the-art summary evaluation metrics.
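
    A minimal sketch of the sample-size idea: choose a summary size from a standard statistical sampling formula with finite-population correction, then draw that many representatives (the paper's exact derivation and its multiview clustering step are not reproduced).

      import math, random

      def summary_size(N, z=1.96, margin=0.05, p=0.5):
          n0 = (z ** 2) * p * (1 - p) / margin ** 2
          return math.ceil(n0 / (1 + (n0 - 1) / N))   # finite-population correction

      records = list(range(100_000))                  # stand-in for traffic records
      k = summary_size(len(records))
      summary = random.sample(records, k)             # in practice, draw per cluster
      print(k, len(summary))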

  18. Molecular dynamics computer simulations of sputtering of benzene sample by large mixed Lennard-Jones clusters

    Energy Technology Data Exchange (ETDEWEB)

    Rzeznik, L., E-mail: rzeznik@lippmann.lu [University of Information Technology and Management, Sucharskiego 2, 35-225 Rzeszów (Poland); Postawa, Z. [Institute of Physics, Jagiellonian University, Reymonta 4, 30-059 Kraków (Poland)

    2014-05-01

    Molecular dynamics computer simulations have been used to probe the role of the projectile composition on the emission efficiency and the sample damage. A benzene crystal was bombarded by 15 keV large heterogeneous noble gas clusters containing 2953 atoms. The projectiles used in this study are two-component clusters composed of Ne, Ar, and Kr atoms directed at 0° and 60° relative to the surface normal. It has been found that for normal incidence the total sputtering yield decreases with the projectile mass, whereas for 60° impact angle the yield increases with this quantity. For both 0° and 60° impact angles the observed sputtering yield for heterogeneous clusters cannot be calculated as a sum of sputtering yields obtained for homogeneous projectiles multiplied by the concentration of each component in the multi-component cluster. The difference in deposition scenarios of the primary kinetic energy is shown to be responsible for the observed behavior of the total sputtering yield.

  19. A Legacy Magellanic Clouds Star Clusters Sample for the Calibration of Stellar Evolution Models

    Science.gov (United States)

    Fouesneau, Morgan

    2014-10-01

    Stellar evolution models are fundamental to all studies in astrophysics. These models are the foundations of the interpretation of colors and luminosities of stars necessary to address problems ranging from galaxy formation to determining the habitable zone of planets and interstellar medium properties. For decades the standard calibration of these models relied on a handful of star clusters. However, large uncertainties remain in the fundamental parameters underlying stellar evolution models. The project we propose is two-fold. First we propose to generate a new high-quality reference dataset of the resolved stars in 121 Magellanic Cloud clusters, selected from 18 past programs to efficiently sample a large grid of stellar evolution models. Our team will measure the photometry of individual stars in those clusters and characterize individual completeness and photometric uncertainties. Second, we will migrate the calibration of stellar evolution into a fully probabilistic framework that will not only reflect the state of the art, but will also be published with fully characterized uncertainties, based on the entire reference data set rather than a few select clusters. We have entered an era dominated by large surveys (e.g. SDSS, PanSTARRS, Gaia, LSST) where the variations between families of stellar models are greater than the nominal precision of the instruments. Our proposed program will provide a library needed for a convergence in the stellar models and our understanding of stellar evolution.

  20. Fuzzy C-Means Clustering Model Data Mining For Recognizing Stock Data Sampling Pattern

    Directory of Open Access Journals (Sweden)

    Sylvia Jane Annatje Sumarauw

    2007-06-01

    Full Text Available Abstract The capital market has been beneficial to companies and investors. For investors, the capital market provides two economic advantages, namely dividends and capital gains, and a non-economic one, namely a voting share in the Shareholders' General Meeting. But it can also penalize share owners. In order to protect themselves from this risk, investors should predict the prospects of their companies. As a consequence of dealing in an abstract commodity, share quality is determined by the validity of the company profile information. Any information on stock value fluctuation from the Jakarta Stock Exchange can be a useful consideration and a good measurement for data analysis. In the context of protecting shareholders from risk, this research focuses on stock data sample categories, or stock data sample patterns, by using the Fuzzy c-Means Clustering Model, which provides useful information for investors. The research analyses stock data such as Individual Index, Volume and Amount for the Property and Real Estate Emitter Group at the Jakarta Stock Exchange from January 1 till December 31 of 2004. The mining process follows the Cross Industry Standard Process model for Data Mining (CRISP-DM) in the form of a circle with these steps: Business Understanding, Data Understanding, Data Preparation, Modelling, Evaluation and Deployment. At the modelling step, the Fuzzy c-Means Clustering Model is applied. Data Mining with the Fuzzy c-Means Clustering Model can analyze stock data in a big database with many complex variables, especially for finding the data sample pattern, and then build a Fuzzy Inference System that maps inputs to outputs based on Fuzzy Logic by recognising the pattern. Keywords: Data Mining, Fuzzy c-Means Clustering Model, Pattern Recognition
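
    Fuzzy c-Means itself is a standard algorithm, so a minimal sketch may help readers unfamiliar with it. The NumPy implementation below shows only the classic membership/center updates; the three variables (index, volume, amount) are stand-ins, and nothing here reproduces the paper's CRISP-DM pipeline.

```python
# Minimal Fuzzy c-Means sketch (illustration only, not the paper's code).
import numpy as np

def fuzzy_c_means(X, c=3, m=2.0, iters=100, seed=0):
    """Return (centers, U): cluster centers and the fuzzy membership matrix."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)           # memberships sum to 1 per sample
    for _ in range(iters):
        W = U ** m                               # fuzzified memberships
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U = 1.0 / (d ** (2 / (m - 1)))           # standard FCM membership update
        U /= U.sum(axis=1, keepdims=True)
    return centers, U

X = np.random.default_rng(1).normal(size=(300, 3))  # e.g. index, volume, amount
centers, U = fuzzy_c_means(X, c=3)
print(centers.shape, U.argmax(axis=1)[:10])         # hard labels via max membership
```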

  1. New Survey Questions and Estimators for Network Clustering with Respondent-Driven Sampling Data

    CERN Document Server

    Verdery, Ashton M; Siripong, Nalyn; Abdesselam, Kahina; Bauldry, Shawn

    2016-01-01

    Respondent-driven sampling (RDS) is a popular method for sampling hard-to-survey populations that leverages social network connections through peer recruitment. While RDS is most frequently applied to estimate the prevalence of infections and risk behaviors of interest to public health, like HIV/AIDS or condom use, it is rarely used to draw inferences about the structural properties of social networks among such populations because it does not typically collect the necessary data. Drawing on recent advances in computer science, we introduce a set of data collection instruments and RDS estimators for network clustering, an important topological property that has been linked to a network's potential for diffusion of information, disease, and health behaviors. We use simulations to explore how these estimators, originally developed for random walk samples of computer networks, perform when applied to RDS samples with characteristics encountered in realistic field settings that depart from random walks. In partic...
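
    The paper's RDS estimators are not reproduced in this record. As a sketch of the target quantity only, the following computes the average local clustering coefficient from a node sample using networkx; the uniform random sample is a crude stand-in for an RDS sample and ignores RDS weighting.

```python
# Illustration of the target quantity only (not the paper's RDS estimators):
# average local clustering coefficient, estimated from a sample of nodes.
import random
import networkx as nx

G = nx.watts_strogatz_graph(n=2000, k=10, p=0.05, seed=42)  # toy social network

sample = random.Random(0).sample(list(G.nodes), 200)  # stand-in for an RDS sample
local = nx.clustering(G, nodes=sample)                # per-node clustering
print(sum(local.values()) / len(local))               # naive sample mean
print(nx.average_clustering(G))                       # full-network value
```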

  2. Right Axillary Sweating After Left Thoracoscopic Sympathectomy in Two-Stage Surgery

    Directory of Open Access Journals (Sweden)

    Berkant Ozpolat

    2013-06-01

    Full Text Available One-stage bilateral or two-stage unilateral video-assisted thoracoscopic sympathectomy can be performed in the treatment of primary focal hyperhidrosis. Here we present a case of compensatory sweating of the contralateral side after a two-stage operation.

  3. The Two-stage Constrained Equal Awards and Losses Rules for Multi-Issue Allocation Situation

    NARCIS (Netherlands)

    Lorenzo-Freire, S.; Casas-Mendez, B.; Hendrickx, R.L.P.

    2005-01-01

    This paper considers two-stage solutions for multi-issue allocation situations. Characterisations are provided for the two-stage constrained equal awards and constrained equal losses rules, based on the properties of composition and path independence.

  4. On the errors on Omega(0): Monte Carlo simulations of the EMSS cluster sample

    DEFF Research Database (Denmark)

    Oukbir, J.; Arnaud, M.

    2001-01-01

    We perform Monte Carlo simulations of synthetic EMSS cluster samples to quantify the systematic errors and the statistical uncertainties on the estimate of Omega_0 derived from fits to the cluster number density evolution and to the X-ray temperature distribution up to z=0.83. We identify the scatter around the relation between cluster X-ray luminosity and temperature as a source of systematic error, of the order of Delta_syst Omega_0 = 0.09, if not properly taken into account in the modelling. After correcting for this bias, our best Omega_0 is 0.66. The uncertainties on the shape and normalization of the power spectrum of matter fluctuations imply relatively large uncertainties on this estimate of Omega_0, of the order of Delta_stat Omega_0 = 0.1 at the 1 sigma level. On the other hand, the statistical uncertainties due to the finite size of the high-redshift sample are a factor of two smaller...

  5. Simultaneous alignment and clustering of peptide data using a Gibbs sampling approach

    DEFF Research Database (Denmark)

    Andreatta, Massimo; Lund, Ole; Nielsen, Morten

    2013-01-01

    The analysis of such peptide datasets, however, is a complex task, especially when the data contain multiple receptor binding motifs and/or the motifs are found at different locations within distinct peptides. Results: The algorithm presented in this article, based on Gibbs sampling, identifies multiple specificities in peptide data by performing two essential tasks simultaneously: alignment and clustering of peptide data. We apply the method to de-convolute binding motifs in a panel of peptide datasets with different degrees of complexity, spanning from the simplest case of pre-aligned fixed-length peptides to cases of unaligned peptide datasets of variable length. Example applications described in this article include mixtures of binders to different MHC class I and class II alleles, distinct classes of ligands for SH3 domains and sub-specificities of the HLA-A*02:01 molecule. Availability: The Gibbs clustering method...
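
    To make the "clustering by Gibbs sampling" idea concrete, here is a toy collapsed Gibbs sampler that clusters pre-aligned, fixed-length peptides by position-specific residue frequencies. It is a drastic simplification (no alignment or offset moves) and is not the authors' implementation; peptides and parameters are invented.

```python
# Toy Gibbs sampler for clustering pre-aligned, fixed-length peptides
# (a drastic simplification of the paper's method: no alignment moves).
import numpy as np

AA = "ACDEFGHIKLMNPQRSTVWY"
IDX = {a: i for i, a in enumerate(AA)}

def gibbs_cluster(peptides, k=2, sweeps=200, alpha=1.0, seed=0):
    rng = np.random.default_rng(seed)
    X = np.array([[IDX[a] for a in p] for p in peptides])   # n x L residue ids
    n, L = X.shape
    z = rng.integers(k, size=n)                             # cluster labels
    counts = np.full((k, L, 20), alpha)                     # Dirichlet pseudocounts
    for i in range(n):
        counts[z[i], np.arange(L), X[i]] += 1
    for _ in range(sweeps):
        for i in range(n):
            counts[z[i], np.arange(L), X[i]] -= 1           # remove peptide i
            probs = counts[:, np.arange(L), X[i]]           # k x L counts
            logp = np.log(probs / counts.sum(axis=2)).sum(axis=1)
            p = np.exp(logp - logp.max()); p /= p.sum()
            z[i] = rng.choice(k, p=p)                       # resample its label
            counts[z[i], np.arange(L), X[i]] += 1
    return z

peps = ["ALDKWEKIR", "ALDKWEKIR", "GILGFVFTL", "GILGFVFTM", "ALAKWEKIR"]
print(gibbs_cluster(peps, k=2))
```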

  6. Two-Stage Exams Improve Student Learning in an Introductory Geology Course: Logistics, Attendance, and Grades

    Science.gov (United States)

    Knierim, Katherine; Turner, Henry; Davis, Ralph K.

    2015-01-01

    Two-stage exams--where students complete part one of an exam closed book and independently and part two is completed open book and independently (two-stage independent, or TS-I) or collaboratively (two-stage collaborative, or TS-C)--provide a means to include collaborative learning in summative assessments. Collaborative learning has been shown to…

  7. The Clustering Evolution of Distant Red Galaxies in the GOODS-MUSIC Sample

    CERN Document Server

    Grazian, A; De Santis, C; Fontana, A; Gallozzi, S; Giallongo, E; Menci, N; Moscardini, L; Nonino, M; Salimbeni, S; Vanzella, E

    2006-01-01

    We use the GOODS-MUSIC sample, a catalog of ~3000 Ks-selected galaxies based on VLT and HST observations of the GOODS-South field with extended multi-wavelength coverage (from 0.3 to 8 micron) and accurate estimates of the photometric redshifts, to select 179 DRGs with J-Ks>1.3 in an area of 135 sq. arcmin. We first show that the J-Ks>1.3 criterion selects a rather heterogeneous sample of galaxies, going from the targeted high-redshift luminous evolved systems to a significant fraction of lower redshift (1 < z < 2) galaxies [...] clustered than higher-z DRGs. With the aid of extreme and simplified theoretical models of clustering evolution we show that it is unlikely that the two samples are drawn from the same population observed at two different stages of evolution. High-z DRGs likely represent the progenitors of the more massive and more luminous galaxies in the local Universe and might mark the regions that will later evolve into structu...

  8. Two Stage Secure Dynamic Load Balancing Architecture for SIP Server Clusters

    Directory of Open Access Journals (Sweden)

    G. Vennila

    2014-08-01

    Full Text Available Session Initiation Protocol (SIP) is a signaling protocol that emerged with the aim of enhancing IP network capabilities in terms of complex service provision. SIP server scalability with load balancing is of great concern due to the dramatic increase in demand for SIP services. Load balancing of session methods (request/response) together with security measures optimizes the SIP server to regulate network traffic in Voice over Internet Protocol (VoIP). Establishing a honeywall prior to the load balancer significantly reduces SIP traffic and drops inbound malicious load. In this paper, we propose the Active Least Call in SIP Server (ALC_Server) algorithm, which fulfills objectives such as congestion avoidance, improved response times, throughput, resource utilization, reduced server faults, scalability and protection of SIP calls from DoS attacks. On a test bed, the proposed two-tier architecture demonstrates that the ALC_Server method dynamically controls overload and provides robust security and uniform load distribution for SIP servers.
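
    The ALC_Server algorithm is not specified in this record; the sketch below only illustrates the generic "least active calls" dispatch policy that such balancers build on. Class and server names are invented for the example.

```python
# Hedged sketch of a "least active calls" dispatch policy (the general idea
# behind ALC-style balancing; the paper's actual algorithm is not reproduced).
import heapq

class LeastCallBalancer:
    def __init__(self, servers):
        # heap of (active_call_count, server_name)
        self.heap = [(0, s) for s in servers]
        heapq.heapify(self.heap)

    def route_call(self):
        calls, server = heapq.heappop(self.heap)
        heapq.heappush(self.heap, (calls + 1, server))  # call now active
        return server

    def end_call(self, server):
        # linear rescan is fine for a handful of servers
        self.heap = [(c - 1 if s == server else c, s) for c, s in self.heap]
        heapq.heapify(self.heap)

lb = LeastCallBalancer(["sip1", "sip2", "sip3"])
print([lb.route_call() for _ in range(5)])  # spreads INVITEs across servers
```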

  9. Galaxy cluster X-ray luminosity scaling relations from a representative local sample (REXCESS)

    CERN Document Server

    Pratt, G W; Arnaud, M; Böhringer, H

    2008-01-01

    (Abridged) We examine the X-ray luminosity scaling relations of 31 nearby galaxy clusters from the Representative XMM-Newton Cluster Structure Survey (REXCESS). The objects are selected in X-ray luminosity only, optimally sampling the cluster luminosity function; temperatures range from 2 to 9 keV and there is no bias toward any particular morphological type. Pertinent values are extracted in an aperture corresponding to R_500, estimated using the tight correlation between Y_X and total mass. The data exhibit power law relations between bolometric X-ray luminosity and temperature, Y_X and total mass, all with slopes that are significantly steeper than self-similar expectations. We examine the causes for the steepening, finding that the primary driver appears to be a systematic variation of the gas content with mass. Scatter about the relations is dominated in all cases by the presence of cool cores. The logarithmic scatter about the raw X-ray luminosity-temperature relation is approximately 30%, and that abou...

  10. National Longitudinal Study of the High School Class of 1972. Sample Design Efficiency Study: Effects of Stratification, Clustering, and Unequal Weighting on the Variances of NLS Statistics.

    Science.gov (United States)

    National Center for Education Statistics (DHEW), Washington, DC.

    A complex two-stage sample selection process was used in designing the National Longitudinal Study of the High School Class of 1972. The first-stage sampling frame used in the selection of schools was stratified by the following seven variables: public vs. private control, geographic region, grade 12 enrollment, proximity to institutions of higher…
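
    Since two-stage cluster sampling is the thread connecting these records, a generic sketch may be useful: stage one draws schools within strata, stage two draws students within each selected school. This illustrates the design in general, not the actual NLS-72 selection procedure; the frame and sizes are toy values.

```python
# Generic two-stage cluster sampling sketch (an illustration of the design,
# not the actual NLS-72 procedure): stage 1 draws schools within strata,
# stage 2 draws students within each selected school.
import random

def two_stage_sample(strata, schools_per_stratum, students_per_school, seed=0):
    rng = random.Random(seed)
    out = []
    for stratum, schools in strata.items():
        for school in rng.sample(sorted(schools), schools_per_stratum):
            pupils = schools[school]
            take = rng.sample(pupils, min(students_per_school, len(pupils)))
            out.extend((stratum, school, s) for s in take)
    return out

strata = {  # toy frame: stratum -> {school -> student ids}
    "public/NE": {f"sch{i}": list(range(i * 100, i * 100 + 40)) for i in range(5)},
    "private/S": {f"sch{i}": list(range(i * 100, i * 100 + 30)) for i in range(5, 9)},
}
print(len(two_stage_sample(strata, schools_per_stratum=2, students_per_school=10)))
```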

  11. Cluster Analysis of the Yale Global Tic Severity Scale (YGTSS): Symptom Dimensions and Clinical Correlates in an Outpatient Youth Sample

    Science.gov (United States)

    Kircanski, Katharina; Woods, Douglas W.; Chang, Susanna W.; Ricketts, Emily J.; Piacentini, John C.

    2010-01-01

    Tic disorders are heterogeneous, with symptoms varying widely both within and across patients. Exploration of symptom clusters may aid in the identification of symptom dimensions of empirical and treatment import. This article presents the results of two studies investigating tic symptom clusters using a sample of 99 youth (M age = 10.7, 81% male,…

  13. Aerobic and two-stage anaerobic-aerobic sludge digestion with pure oxygen and air aeration.

    Science.gov (United States)

    Zupancic, Gregor D; Ros, Milenko

    2008-01-01

    The degradability of excess activated sludge from a wastewater treatment plant was studied. The objective was to establish the degree of degradation using either air or pure oxygen at different temperatures. Sludge treated with pure oxygen was degraded at temperatures from 22 degrees C to 50 degrees C, while samples treated with air were degraded between 32 degrees C and 65 degrees C. Using air, sludge was efficiently degraded at 37 degrees C and at 50-55 degrees C. With oxygen, sludge was most effectively degraded at 38 degrees C or at 25-30 degrees C. Two-stage anaerobic-aerobic processes were also studied. The first, anaerobic stage was always operated at 5 days HRT, and the second stage involved aeration with pure oxygen at an HRT of between 5 and 10 days. Under these conditions, there was 53.5% VSS removal and 55.4% COD degradation at 15 days total HRT (5 days anaerobic, 10 days aerobic). Sludge digested with pure oxygen at 25 degrees C in a batch reactor converted 48% of the sludge's total Kjeldahl nitrogen to nitrate. Adding an aerobic stage with pure-oxygen aeration to the anaerobic digestion enhances ammonium nitrogen removal: in a two-stage anaerobic-aerobic sludge digestion process with an aerobic-stage HRT of 8 days, ammonium nitrogen removal was 85%.

  14. CA II TRIPLET SPECTROSCOPY OF SMALL MAGELLANIC CLOUD RED GIANTS. III. ABUNDANCES AND VELOCITIES FOR A SAMPLE OF 14 CLUSTERS

    Energy Technology Data Exchange (ETDEWEB)

    Parisi, M. C.; Clariá, J. J.; Marcionni, N. [Observatorio Astronómico, Universidad Nacional de Córdoba, Laprida 854, Córdoba, CP 5000 (Argentina); Geisler, D.; Villanova, S. [Departamento de Astronomía, Universidad de Concepción Casilla 160-C, Concepción (Chile); Sarajedini, A. [Department of Astronomy, University of Florida P.O. Box 112055, Gainesville, FL 32611 (United States); Grocholski, A. J., E-mail: celeste@oac.uncor.edu, E-mail: claria@oac.uncor.edu, E-mail: nmarcionni@oac.uncor.edu, E-mail: dgeisler@astro-udec.cl, E-mail: svillanova@astro-udec.cl, E-mail: ata@astro.ufl.edu, E-mail: grocholski@phys.lsu.edu [Department of Physics and Astronomy, Louisiana State University 202 Nicholson Hall, Tower Drive, Baton Rouge, LA 70803-4001 (United States)

    2015-05-15

    We obtained spectra of red giants in 15 Small Magellanic Cloud (SMC) clusters in the region of the Ca II lines with FORS2 on the Very Large Telescope. We determined the mean metallicity and radial velocity with mean errors of 0.05 dex and 2.6 km/s, respectively, from a mean of 6.5 members per cluster. One cluster (B113) was too young for a reliable metallicity determination and was excluded from the sample. We combined the sample studied here with 15 clusters previously studied by us using the same technique, and with 7 clusters whose metallicities determined by other authors are on a scale similar to ours. This compilation of 36 clusters is the largest SMC cluster sample currently available with accurate and homogeneously determined metallicities. We found a high probability that the metallicity distribution is bimodal, with potential peaks at −1.1 and −0.8 dex. Our data show no strong evidence of a metallicity gradient in the SMC clusters, somewhat at odds with recent evidence from Ca II triplet spectra of a large sample of field stars. This may be revealing possible differences in the chemical history of clusters and field stars. Our clusters show a significant dispersion of metallicities, whatever age is considered, which could be reflecting the lack of a unique age–metallicity relation in this galaxy. None of the chemical evolution models currently available in the literature satisfactorily represents the global chemical enrichment processes of SMC clusters.

  15. The Massive and Distant Clusters of WISE Survey (MaDCoWS): Stellar mass fractions in a sample of infrared-selected galaxy clusters at z~1

    Science.gov (United States)

    Decker, Bandon; Brodwin, Mark

    2017-01-01

    Galaxy clusters are the largest gravitationally bound objects in the universe. In addition to being interesting objects in their own right, they are excellent laboratories in which to study galaxy evolution, and the properties and abundance of galaxy clusters provide important tests for cosmology. The Massive and Distant Clusters of WISE Survey (MaDCoWS) is a high-redshift (z~1) survey that selects galaxy clusters in the infrared over nearly the full extragalactic sky using the Wide-field Infrared Survey Explorer (WISE) AllWISE data release. We have measured Sunyaev-Zel'dovich (SZ) masses for twelve of the MaDCoWS clusters lying in the range 0.9 < z [...] with the Combined Array for Research in Millimeter-wave Astronomy (CARMA) and used follow-up Spitzer/IRAC rest-frame near-infrared observations to measure the stellar mass of these clusters. With these data, we have measured the stellar mass fraction, f_star, and its relation to total mass for a sample of infrared-selected clusters at z~1. We repeated our analysis of the stellar mass fraction on a sample of SZ-selected clusters from the South Pole Telescope (SPT)-SZ survey that lie in a comparable range of mass and redshift to our MaDCoWS clusters, in order to compare the selection methods. We found no significant difference in the trend of stellar mass fraction with total mass between the infrared and SZ selections. Comparing to similar measurements in the local Universe, we find no evidence of strong evolution in the trend over the last 8 Gyr.

  16. Cluster analysis for the systematic grouping of genuine cocoa butter and cocoa butter equivalent samples based on triglyceride patterns.

    Science.gov (United States)

    Buchgraber, Manuela; Ulberth, Franz; Anklam, Elke

    2004-06-16

    The triglyceride profiles of cocoa butters (CBs) from different geographical origins, varieties and growing seasons, and of a number of cocoa butter equivalents (CBEs), were determined by capillary gas-liquid chromatography. Hierarchical cluster analysis was applied to the five main triglycerides of the samples to test its ability to find natural groupings among (a) CBs of various provenance and (b) CBE samples of different types. The samples were clustered using Ward's method, and the similarity values of the linkages were represented by dendrograms. The five triglycerides contained adequate information to obtain a meaningful sample differentiation. This information can be used to assess the purity and the origin of the CB sample examined.
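
    The analysis described is standard enough to sketch: Ward's-method hierarchical clustering on five triglyceride fractions, with cluster membership read off the tree. The profiles below are synthetic placeholders, not the paper's chromatographic data.

```python
# Illustrative sketch of the analysis: Ward's hierarchical clustering on five
# triglyceride fractions (synthetic placeholder profiles, in percent).
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster, dendrogram

rng = np.random.default_rng(0)
cb  = rng.normal([16, 38, 27, 14, 5], 0.8, size=(10, 5))  # fake CB profiles
cbe = rng.normal([22, 30, 30, 11, 7], 0.8, size=(6, 5))   # fake CBE profiles
X = np.vstack([cb, cbe])

Z = linkage(X, method="ward")                  # Ward's minimum-variance linkage
print(fcluster(Z, t=2, criterion="maxclust"))  # CBs and CBEs should separate
dn = dendrogram(Z, no_plot=True)               # linkage similarities, as a tree
```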

  17. Clustering Information of Non-Sampled Area in Small Area Estimation of Poverty Indicators

    Science.gov (United States)

    Sundara, V. Y.; Kurnia, A.; Sadik, K.

    2017-03-01

    Empirical Bayes (EB) is one of the indirect estimation methods used to estimate parameters in small areas. Molina and Rao used this method to estimate nonlinear small area parameters based on a nested error model. Problems occur when the method is used to estimate parameters of non-sampled areas, since the estimate is then solely based on a synthetic model that ignores the area effects. This paper proposes an approach that clusters area effects using auxiliary variables, by assuming that there are similarities among particular areas. A simulation study is presented to demonstrate the proposed approach. All estimates were evaluated based on relative bias and relative root mean squared error. The simulation results showed that the proposed approach can improve the ability of the model to estimate non-sampled areas. The proposed model was applied to estimate poverty indicators at the sub-district level in the regency and city of Bogor, West Java, Indonesia. In this case study, the relative root mean squared error of prediction for empirical Bayes with cluster information is smaller than that of the synthetic model.
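
    A hedged sketch of the clustering idea: group areas by their auxiliary-variable values, then let a non-sampled area borrow the mean estimated random effect of sampled areas in its cluster, instead of the synthetic-model value of zero. The use of k-means and all variable names are assumptions for illustration, not the paper's exact procedure.

```python
# Hedged sketch: non-sampled areas inherit the mean estimated area effect of
# sampled areas in the same auxiliary-variable cluster (illustrative names).
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
aux = rng.normal(size=(50, 3))                 # area-level auxiliary variables
v_hat = rng.normal(0, 0.5, size=50)            # estimated area effects (sampled)
sampled = np.zeros(50, dtype=bool); sampled[:30] = True

km = KMeans(n_clusters=5, n_init=10, random_state=0).fit(aux)
effects = np.zeros(50)
for c in range(5):
    in_c = km.labels_ == c
    donors = in_c & sampled
    # non-sampled areas borrow the mean effect of sampled areas in their cluster
    effects[in_c & ~sampled] = v_hat[donors].mean() if donors.any() else 0.0
effects[sampled] = v_hat[sampled]
print(effects[~sampled][:5])
```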

  18. Ca II Triplet Spectroscopy of Small Magellanic Cloud Red Giants. III. Abundances and Velocities for a Sample of 14 Clusters

    CERN Document Server

    Parisi, M C; Clariá, J J; Villanova, S; Marcionni, N; Sarajedini, A; Grocholski, A J

    2015-01-01

    We obtained spectra of red giants in 15 Small Magellanic Cloud (SMC) clusters in the region of the CaII lines with FORS2 on the Very Large Telescope (VLT). We determined the mean metallicity and radial velocity with mean errors of 0.05 dex and 2.6 km/s, respectively, from a mean of 6.5 members per cluster. One cluster (B113) was too young for a reliable metallicity determination and was excluded from the sample. We combined the sample studied here with 15 clusters previously studied by us using the same technique, and with 7 clusters whose metallicities determined by other authors are on a scale similar to ours. This compilation of 36 clusters is the largest SMC cluster sample currently available with accurate and homogeneously determined metallicities. We found a high probability that the metallicity distribution is bimodal, with potential peaks at -1.1 and -0.8 dex. Our data show no strong evidence of a metallicity gradient in the SMC clusters, somewhat at odds with recent evidence from CaT spectra of a lar...

  19. Concepts of relative sample outlier (RSO) and weighted sample similarity (WSS) for improving performance of clustering genes: co-function and co-regulation.

    Science.gov (United States)

    Bhattacharya, Anindya; Chowdhury, Nirmalya; De, Rajat K

    2015-01-01

    The performance of clustering algorithms is largely dependent on the selected similarity measure. Efficiency in handling outliers is a major contributor to the success of a similarity measure: the better a similarity measure captures similarity between genes in the presence of outliers, the better the clustering algorithm will perform in forming biologically relevant groups of genes. In the present article, we discuss the problem of handling outliers with different existing similarity measures and introduce the concept of the Relative Sample Outlier (RSO). We formulate a new similarity, called Weighted Sample Similarity (WSS), incorporate it into Euclidean distance and the Pearson correlation coefficient, and then use these measures in various clustering and biclustering algorithms to group different gene expression profiles. Our results suggest that WSS improves the performance, in terms of finding biologically relevant groups of genes, of all the considered clustering algorithms.
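
    The exact RSO/WSS formulas are not given in this record. The sketch below captures only the general idea of downweighting outlier-like samples and then using a weighted Euclidean distance between gene expression profiles; the weighting scheme shown is an invented robust-z heuristic, not the authors'.

```python
# Sketch of the general idea only: downweight samples that deviate from the
# per-gene consensus, then use a weighted Euclidean distance between profiles.
import numpy as np

def sample_weights(E, c=3.0):
    """E: genes x samples. Smaller weight for outlier-like samples."""
    med = np.median(E, axis=1, keepdims=True)               # per-gene consensus
    mad = np.median(np.abs(E - med), axis=1, keepdims=True) + 1e-9
    zbar = (np.abs(E - med) / mad).mean(axis=0)             # deviation per sample
    return 1.0 / (1.0 + (zbar / c) ** 2)                    # invented heuristic

def weighted_euclidean(x, y, w):
    return np.sqrt(np.sum(w * (x - y) ** 2))

E = np.random.default_rng(0).normal(size=(100, 12))         # genes x samples
E[:, 5] += 8.0                                              # sample 5: an outlier
w = sample_weights(E)
print(w.round(2))                                           # low weight at index 5
print(weighted_euclidean(E[0], E[1], w))
```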

  20. Ca II Triplet Spectroscopy of Small Magellanic Cloud Red Giants. IV. Abundances for a Large Sample of Field Stars and Comparison with the Cluster Sample

    Science.gov (United States)

    Parisi, M. C.; Geisler, D.; Carraro, G.; Clariá, J. J.; Villanova, S.; Gramajo, L. V.; Sarajedini, A.; Grocholski, A. J.

    2016-09-01

    This paper represents a major step forward in the systematic and homogeneous study of Small Magellanic Cloud (SMC) star clusters and field stars carried out by applying the calcium triplet technique. We present in this work the radial velocity and metallicity of approximately 400 red giant stars in 15 SMC fields, with typical errors of about 7 km/s and 0.16 dex, respectively. We added to this information our previously determined metallicity values for 29 clusters and approximately 350 field stars using the identical techniques. Using this enlarged sample, we analyze the metallicity distribution and gradient in this galaxy. We also compare the chemical properties of the clusters and of their surrounding fields. We find a number of surprising results. While the clusters, taken as a whole, show no strong evidence for a metallicity gradient (MG), the field stars exhibit a clear negative gradient in the inner region of the SMC, consistent with the recent results of Dobbie et al. For distances to the center of the galaxy less than 4°, field stars show a considerably smaller metallicity dispersion than that of the clusters. However, in the external SMC regions, clusters and field stars exhibit similar metallicity dispersions. Moreover, in the inner region of the SMC, clusters appear to be concentrated in two groups: one more metal-poor and another more metal-rich than field stars. Individually considered, neither cluster group presents an MG. Most surprisingly, the MG for both stellar populations (clusters and field stars) appears to reverse sign in the outer regions of the SMC. The difference between the cluster metallicity and the mean metallicity of the surrounding field stars turns out to be a strong function of the cluster metallicity. These results could be indicating different chemical evolution histories for these two SMC stellar populations. They could also indicate variations in the chemical behavior of the SMC in its internal and external regions.

  1. Comparing cluster-level dynamic treatment regimens using sequential, multiple assignment, randomized trials: Regression estimation and sample size considerations.

    Science.gov (United States)

    NeCamp, Timothy; Kilbourne, Amy; Almirall, Daniel

    2017-08-01

    Cluster-level dynamic treatment regimens can be used to guide sequential treatment decision-making at the cluster level in order to improve outcomes at the individual or patient-level. In a cluster-level dynamic treatment regimen, the treatment is potentially adapted and re-adapted over time based on changes in the cluster that could be impacted by prior intervention, including aggregate measures of the individuals or patients that compose it. Cluster-randomized sequential multiple assignment randomized trials can be used to answer multiple open questions preventing scientists from developing high-quality cluster-level dynamic treatment regimens. In a cluster-randomized sequential multiple assignment randomized trial, sequential randomizations occur at the cluster level and outcomes are observed at the individual level. This manuscript makes two contributions to the design and analysis of cluster-randomized sequential multiple assignment randomized trials. First, a weighted least squares regression approach is proposed for comparing the mean of a patient-level outcome between the cluster-level dynamic treatment regimens embedded in a sequential multiple assignment randomized trial. The regression approach facilitates the use of baseline covariates which is often critical in the analysis of cluster-level trials. Second, sample size calculators are derived for two common cluster-randomized sequential multiple assignment randomized trial designs for use when the primary aim is a between-dynamic treatment regimen comparison of the mean of a continuous patient-level outcome. The methods are motivated by the Adaptive Implementation of Effective Programs Trial which is, to our knowledge, the first-ever cluster-randomized sequential multiple assignment randomized trial in psychiatry.
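
    As a toy version of the weighted regression idea (not the paper's estimator), the following simulates cluster-level two-stage treatments, weights observations by the inverse probability of their treatment sequence, and fits a weighted least squares mean model with cluster-robust standard errors via statsmodels. All names and values are illustrative.

```python
# Toy version of the weighted-regression idea (illustrative names throughout):
# clusters get stage-1 and stage-2 treatments; observations are weighted by
# the inverse probability of the observed treatment sequence, and a WLS mean
# model is fit with cluster-robust standard errors.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n_clusters, m = 40, 25                         # clusters, patients per cluster
a1 = rng.choice([-1, 1], n_clusters)           # stage-1 cluster-level treatment
a2 = rng.choice([-1, 1], n_clusters)           # stage-2 cluster-level treatment
w = np.full(n_clusters, 1 / (0.5 * 0.5))       # P(a1)=P(a2)=1/2 -> weight 4
# (here every cluster is re-randomized, so weights are constant; in a real
# SMART they differ by response status)

cluster_effect = rng.normal(0, 0.5, n_clusters)
y = np.concatenate([rng.normal(0.3 * a1[j] + 0.2 * a2[j] + cluster_effect[j], 1, m)
                    for j in range(n_clusters)])
X = sm.add_constant(np.column_stack([np.repeat(a1, m), np.repeat(a2, m)]))
groups = np.repeat(np.arange(n_clusters), m)
fit = sm.WLS(y, X, weights=np.repeat(w, m)).fit(
    cov_type="cluster", cov_kwds={"groups": groups})
print(fit.params)                              # mean model for regimen contrasts
```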

  2. Loss Function Based Ranking in Two-Stage, Hierarchical Models

    Science.gov (United States)

    Lin, Rongheng; Louis, Thomas A.; Paddock, Susan M.; Ridgeway, Greg

    2009-01-01

    Performance evaluations of health services providers burgeon. Similarly, analyzing spatially related health information, ranking teachers and schools, and identifying differentially expressed genes are increasing in prevalence and importance. Goals include valid and efficient ranking of units for profiling and league tables, identification of excellent and poor performers, identification of the most differentially expressed genes, and determining “exceedances” (how many and which unit-specific true parameters exceed a threshold). These data and inferential goals require a hierarchical Bayesian model that accounts for nesting relations and identifies both population values and random effects for unit-specific parameters. Furthermore, the Bayesian approach coupled with optimizing a loss function provides a framework for computing non-standard inferences such as ranks and histograms. Estimated ranks that minimize Squared Error Loss (SEL) between the true and estimated ranks have been investigated. The posterior mean ranks minimize SEL and are “general purpose,” relevant to a broad spectrum of ranking goals. However, other loss functions, and optimizing ranks that are tuned to application-specific goals, require identification and evaluation. For example, when the goal is to identify the relatively good (e.g., in the upper 10%) or relatively poor performers, a loss function that penalizes classification errors produces estimates that minimize the error rate. We construct loss functions that address this and other goals, developing a unified framework that facilitates generating candidate estimates, comparing approaches and producing data-analytic performance summaries. We compare performance for a fully parametric hierarchical model with Gaussian sampling distribution under Gaussian and mixture-of-Gaussians prior distributions. We illustrate the approaches via analysis of standardized mortality ratio data from the United States Renal Data System. Results show that SEL
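
    The SEL-optimal ranks mentioned above are easy to illustrate: rank units within each posterior draw and average. The sketch below assumes an array of MCMC draws; the data are synthetic.

```python
# Sketch: posterior-mean ranks minimize squared error loss (SEL) on ranks.
# Assumes an array of posterior draws with shape (n_draws, n_units).
import numpy as np
from scipy.stats import rankdata

draws = np.random.default_rng(0).normal(
    loc=np.linspace(0, 1, 8), scale=0.4, size=(4000, 8))  # toy posteriors
ranks_per_draw = rankdata(draws, axis=1)    # rank units within each draw
posterior_mean_ranks = ranks_per_draw.mean(axis=0)
print(posterior_mean_ranks.round(2))        # SEL-optimal (possibly non-integer)
print(rankdata(posterior_mean_ranks))       # integer ranks for a league table
```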

  3. Contextual Classification of Point Clouds Using a Two-Stage CRF

    Science.gov (United States)

    Niemeyer, J.; Rottensteiner, F.; Soergel, U.; Heipke, C.

    2015-03-01

    In this investigation, we address the task of airborne LiDAR point cloud labelling for urban areas by presenting a contextual classification methodology based on a Conditional Random Field (CRF). A two-stage CRF is set up: in a first step, a point-based CRF is applied. The resulting labellings are then used to generate a segmentation of the classified points using a Conditional Euclidean Clustering algorithm. This algorithm combines neighbouring points with the same object label into one segment. The second step comprises the classification of these segments, again with a CRF. As the number of the segments is much smaller than the number of points, it is computationally feasible to integrate long range interactions into this framework. Additionally, two different types of interactions are introduced: one for the local neighbourhood and another one operating on a coarser scale. This paper presents the entire processing chain. We show preliminary results achieved using the Vaihingen LiDAR dataset from the ISPRS Benchmark on Urban Classification and 3D Reconstruction, which consists of three test areas characterised by different and challenging conditions. The utilised classification features are described, and the advantages and remaining problems of our approach are discussed. We also compare our results to those generated by a point-based classification and show that a slight improvement is obtained with this first implementation.
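
    The segmentation step between the two CRF stages can be sketched as label-aware connected components: merge neighbouring points that share a class label. The code below is a simple stand-in for Conditional Euclidean Clustering, not the authors' implementation; the radius and data are invented.

```python
# Sketch of the inter-stage segmentation: merge neighbouring points that share
# a class label (a simple stand-in for Conditional Euclidean Clustering).
import numpy as np
from scipy.spatial import cKDTree
from scipy.sparse import coo_matrix
from scipy.sparse.csgraph import connected_components

def label_aware_segments(pts, labels, radius=1.0):
    pairs = cKDTree(pts).query_pairs(r=radius, output_type="ndarray")
    same = labels[pairs[:, 0]] == labels[pairs[:, 1]]   # keep same-label edges
    i, j = pairs[same, 0], pairs[same, 1]
    n = len(pts)
    adj = coo_matrix((np.ones(len(i)), (i, j)), shape=(n, n))
    _, seg = connected_components(adj, directed=False)
    return seg                                           # segment id per point

pts = np.random.default_rng(0).uniform(0, 10, size=(500, 3))
labels = (pts[:, 2] > 5).astype(int)                     # fake point-wise classes
print(len(np.unique(label_aware_segments(pts, labels, radius=0.8))))
```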

  4. Tumor producing fibroblast growth factor 23 localized by two-staged venous sampling.

    NARCIS (Netherlands)

    Boekel, G.A.J van; Ruinemans-Koerts, J.; Joosten, F.; Dijkhuizen, P.; Sorge, A van; Boer, H de

    2008-01-01

    BACKGROUND: Tumor-induced osteomalacia is a rare paraneoplastic syndrome characterized by hypophosphatemia, renal phosphate wasting, suppressed 1,25-dihydroxyvitamin D production, and osteomalacia. It is caused by a usually benign mesenchymal tumor producing fibroblast growth factor 23 (FGF-23). Sur

  5. Globular clusters and supermassive black holes in galaxies: further analysis and a larger sample

    CERN Document Server

    Harris, Gretchen L H; Harris, William E

    2013-01-01

    We explore several correlations between various large-scale galaxy properties, particularly total globular cluster population (N_GC), the central black hole mass (M_BH), velocity dispersion (nominally sigma_e), and bulge mass (M_dyn). Our data sample of 49 galaxies, for which both N_GC and M_BH are known, is larger than that used in previous discussions of these two parameters, and we employ the same sample to explore all pairs of correlations. Further, within this galaxy sample we investigate the scatter in each quantity, with emphasis on the range of published values for sigma_e and effective radius (R_e). We find that these two quantities in particular are difficult to measure consistently and caution that precise intercomparison of galaxy properties involving R_e and sigma_e is particularly difficult. Using both chi^2 and Monte Carlo Markov Chain (MCMC) fitting techniques, we show that quoted observational uncertainties for all parameters are too small to represent the true scatter in the data. We find that th...

  6. Two-stage re-estimation adaptive design: a simulation study

    Directory of Open Access Journals (Sweden)

    Francesca Galli

    2013-10-01

    Full Text Available Background: Adaptive clinical trial design has been proposed as a promising new approach to improve the drug discovery process. Among the many options available, adaptive sample size re-estimation is of great interest, mainly because of its ability to avoid a large ‘up-front’ commitment of resources. In this simulation study, we investigate the statistical properties of two-stage sample size re-estimation designs in terms of type I error control, study power and sample size, in comparison with the fixed-sample study. Methods: We simulated a balanced two-arm trial aimed at comparing two means of normally distributed data, using the inverse normal method to combine the results of each stage, and considering scenarios jointly defined by the following factors: the sample size re-estimation method, the information fraction, the type of group sequential boundaries and the use of futility stopping. Calculations were performed using the statistical software SAS™ (version 9.2). Results: Under the null hypothesis, every type of adaptive design considered maintained the pre-specified type I error rate, but futility stopping was required to avoid an unwanted increase in sample size. When deviating from the null hypothesis, the gain in power usually achieved with the adaptive design and its performance in terms of sample size were influenced by the specific design options considered. Conclusions: We show that adaptive designs incorporating futility stopping, a sufficiently high information fraction (50-70%) and the conditional power method for sample size re-estimation have good statistical properties, which include a gain in power when trial results are less favourable than anticipated.
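
    The inverse normal method named in the record is a standard formula and can be shown directly. With pre-fixed weights w1, w2 satisfying w1^2 + w2^2 = 1 (here tied to the information fraction), the stage-wise one-sided p-values are pooled as below; this is the generic method, not the paper's simulation code.

```python
# Generic inverse normal combination of two stage-wise one-sided p-values.
import numpy as np
from scipy.stats import norm

def inverse_normal_combination(p1, p2, info_fraction=0.5):
    w1 = np.sqrt(info_fraction)
    w2 = np.sqrt(1 - info_fraction)
    z = w1 * norm.isf(p1) + w2 * norm.isf(p2)   # weighted inverse-normal z
    return z, norm.sf(z)                        # combined z and p-value

z, p = inverse_normal_combination(p1=0.08, p2=0.03, info_fraction=0.5)
print(round(z, 3), round(p, 4))
```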

  7. Mineral chemistry of the Tissint meteorite: Indications of two-stage crystallization in a closed system

    Science.gov (United States)

    Liu, Yang; Baziotis, Ioannis P.; Asimow, Paul D.; Bodnar, Robert J.; Taylor, Lawrence A.

    2016-12-01

    The Tissint meteorite is a geochemically depleted, olivine-phyric shergottite. Olivine megacrysts contain 300-600 μm cores with uniform Mg# (~80 ± 1) followed by concentric zones of Fe-enrichment toward the rims. We applied a number of tests to distinguish the relationship of these megacrysts to the host rock. Major and trace element compositions of the Mg-rich cores in olivine are in equilibrium with the bulk rock, within uncertainty, and rare earth element abundances of melt inclusions in Mg-rich olivines reported in the literature are similar to those of the bulk rock. Moreover, the P Kα intensity maps of two large olivine grains show no resorption between the uniform core and the rim. Taken together, these lines of evidence suggest the olivine megacrysts are phenocrysts. Among depleted olivine-phyric shergottites, Tissint is the first that acts mostly as a closed system, with the olivine megacrysts being phenocrysts. The texture and mineral chemistry of Tissint indicate a crystallization sequence of olivine (Mg# 80 ± 1) → olivine (Mg# 76) + chromite → olivine (Mg# 74) + Ti-chromite → olivine (Mg# 74-63) + pyroxene (Mg# 76-65) + Cr-ulvöspinel → olivine (Mg# 63-35) + pyroxene (Mg# 65-60) + plagioclase, followed by late-stage ilmenite and phosphate. The crystallization of the Tissint meteorite likely occurred in two stages: the uniform olivine cores likely crystallized under equilibrium conditions, followed by a fractional crystallization sequence that formed the rest of the rock. The two-stage crystallization without crystal settling is simulated using MELTS and the Tissint bulk composition, and can broadly reproduce the crystallization sequence and mineral chemistry measured in the Tissint samples. The transition between equilibrium and fractional crystallization is associated with a dramatic increase in cooling rate and might have been driven by an acceleration in the ascent rate or by an encounter with a steep thermal gradient in the Martian crust.

  8. AREA DETERMINATION OF DIABETIC FOOT ULCER IMAGES USING A CASCADED TWO-STAGE SVM BASED CLASSIFICATION.

    Science.gov (United States)

    Wang, Lei; Pedersen, Peder; Agu, Emmanuel; Strong, Diane; Tulu, Bengisu

    2016-11-23

    It is standard practice for clinicians and nurses to primarily assess patients' wounds via visual examination. This subjective method can be inaccurate in wound assessment and also represents a significant clinical workload. Hence, computer-based systems, especially implemented on mobile devices, can provide automatic, quantitative wound assessment and can thus be valuable for accurately monitoring wound healing status. Out of all wound assessment parameters, the measurement of the wound area is the most suitable for automated analysis. Most of the current wound boundary determination methods only process the image of the wound area along with a small amount of surrounding healthy skin. In this paper, we present a novel approach that uses Support Vector Machine (SVM) to determine the wound boundary on a foot ulcer image captured with an image capture box, which provides controlled lighting, angle and range conditions. The Simple Linear Iterative Clustering (SLIC) method is applied for effective super-pixel segmentation. A cascaded two-stage classifier is trained as follows: in the first stage a set of k binary SVM classifiers are trained and applied to different subsets of the entire training images dataset, and a set of incorrectly classified instances are collected. In the second stage, another binary SVM classifier is trained on the incorrectly classified set. We extracted various color and texture descriptors from super-pixels that are used as input for each stage in the classifier training. Specifically, we apply the color and Bag-of-Word (BoW) representation of local Dense SIFT features (DSIFT) as the descriptor for ruling out irrelevant regions (first stage), and apply color and wavelet based features as descriptors for distinguishing healthy tissue from wound regions (second stage). Finally, the detected wound boundary is refined by applying a Conditional Random Field (CRF) image processing technique. We have implemented the wound classification on a Nexus
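
    The cascade structure (but not the paper's DSIFT/wavelet features) can be sketched briefly: stage-one SVMs are trained on one subset and applied to another, their misclassified instances are pooled, and a stage-two SVM is trained on that hard set. The data below are random placeholders.

```python
# Structural sketch of the cascaded two-stage SVM (random placeholder data;
# the paper's color/DSIFT-BoW and wavelet descriptors are not reproduced).
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(600, 20))                 # toy super-pixel descriptors
y = rng.integers(0, 2, 600)                    # toy wound / non-wound labels

# Stage 1: k SVMs trained on one subset, applied to another; pool the errors.
folds = np.array_split(rng.permutation(600), 3)
hard = []
for k in range(3):
    tr, te = folds[k], folds[(k + 1) % 3]
    clf = SVC(kernel="rbf").fit(X[tr], y[tr])
    hard.append(te[clf.predict(X[te]) != y[te]])
hard = np.concatenate(hard)

# Stage 2: a dedicated SVM trained on the pooled hard instances.
stage2 = SVC(kernel="rbf").fit(X[hard], y[hard])
print(len(hard), "hard instances routed to stage 2")
```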

  9. Cluster Analysis of the Klein Sexual Orientation Grid in Clinical and Nonclinical Samples: When Bisexuality Is Not Bisexuality.

    Science.gov (United States)

    Weinrich, James D; Klein, Fritz; McCutchan, J Allen; Grant, Igor

    2014-01-01

    We used a cluster analysis to empirically address whether sexual orientation is a continuum or can usefully be divided into categories such as heterosexual, homosexual, and bisexual using scores on the Klein Sexual Orientation Grid (KSOG) in three samples: groups of men and women recruited through bisexual groups and the Internet (Main Study men; Main Study women), and men recruited for a clinical study of HIV and the nervous system (HIV Study men). A five-cluster classification was chosen for the Main Study men (n = 212), a four-cluster classification for the Main Study women (n = 120), and a five-cluster classification for the HIV Study men (n = 620). We calculated means and standard deviations of these 14 clusters on the 21 variables composing the KSOG. Generally, the KSOG's overtly erotic items (Sexual Fantasies, Sexual Behavior, and Sexual Attraction), as well as the Self Identification items, tended to be more uniform within groups than the more social items were (Emotional Preference, Socialize with, and Lifestyle). The result is a set of objectively identified subgroups of bisexual men and women along with characterizations of the extent to which their KSOG scores describe and differentiate them. The Bisexual group identified by the cluster analysis of the HIV sample was distinctly different from any of the bisexual groups identified by the clustering process in the Main Sample. Simply put, the HIV sample's bisexuality is not like bisexuality in general, and attempts to generalize (even cautiously) from this clinical Bisexual group to a larger population would be doomed to failure. This underscores the importance of recruiting non-clinical samples if one wants insight into the nature of bisexuality in the population at large. Although the importance of non-clinical sampling in studies of sexual orientation has been widely and justly asserted, it has rarely been demonstrated by direct comparisons of the type conducted in the present study.

  10. On Simon's two-stage design for single-arm phase IIA cancer clinical trials under beta-binomial distribution.

    Science.gov (United States)

    Liu, Junfeng; Lin, Yong; Shih, Weichung Joe

    2010-05-10

    Simon's two-stage design (Control. Clin. Trials 1989; 10:1-10) has been broadly applied to single-arm phase IIA cancer clinical trials in order to minimize either the expected or the maximum sample size under the null hypothesis of drug inefficacy, i.e. when the pre-specified amount of improvement in response rate (RR) is not expected to be observed. This paper studies a realistic scenario where the standard and experimental treatment RRs follow two continuous distributions (e.g. beta distributions) rather than taking two single values. The binomial probabilities in Simon's design are replaced by prior predictive beta-binomial probabilities, which are ratios of two beta functions, and the domain-restricted RRs involve incomplete beta functions to induce the null hypothesis acceptance probability. We illustrate that the beta-binomial mixture model based two-stage design retains certain desirable properties for hypothesis testing purposes. However, numerical results show that such designs may not exist under certain hypothesis and error rate (type I and II) setups within a maximal sample size of approximately 130. Furthermore, we give theoretical conditions for asymptotic two-stage design non-existence (as the sample size goes to infinity) in order to improve the efficiency of the design search and to avoid needless searching.
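
    For orientation, the classical binomial version of Simon's two-stage operating characteristics is easy to compute; the paper replaces these binomial terms with beta-binomial predictive probabilities. The design parameters below are an arbitrary example, not a recommended design.

```python
# Classical binomial operating characteristics of a Simon two-stage design:
# stop after stage 1 if responses <= r1 out of n1; accept H0 overall if total
# responses <= r out of n. (The paper swaps these binomial terms for
# beta-binomial predictive probabilities.)
from scipy.stats import binom

def early_stop_and_accept(p, n1, r1, n, r):
    """Return P(stop at stage 1) and P(accept H0) at true response rate p."""
    pet = binom.cdf(r1, n1, p)                       # <= r1 responses: stop
    accept = pet + sum(binom.pmf(x, n1, p) * binom.cdf(r - x, n - n1, p)
                       for x in range(r1 + 1, min(n1, r) + 1))
    return pet, accept

pet, accept = early_stop_and_accept(p=0.05, n1=12, r1=0, n=37, r=3)
print(round(pet, 3), round(1 - accept, 3))  # P(early stop), type I error at p0
```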

  11. Preemptive scheduling in a two-stage supply chain to minimize the makespan

    NARCIS (Netherlands)

    Pei, Jun; Fan, Wenjuan; Pardalos, Panos M.; Liu, Xinbao; Goldengorin, Boris; Yang, Shanlin

    2015-01-01

    This paper deals with the problem of preemptive scheduling in a two-stage supply chain framework. The supply chain environment contains two stages: production and transportation. In the production stage jobs are processed on a manufacturer's bounded serial batching machine, preemptions are allowed,

  12. Two-stage removal of nitrate from groundwater using biological and chemical treatments.

    Science.gov (United States)

    Ayyasamy, Pudukadu Munusamy; Shanthi, Kuppusamy; Lakshmanaperumalsamy, Perumalsamy; Lee, Soon-Jae; Choi, Nag-Choul; Kim, Dong-Ju

    2007-08-01

    In this study, we attempted to treat groundwater contaminated with nitrate using a two-stage removal system: biological treatment using the nitrate-degrading bacterium Pseudomonas sp. RS-7, followed by chemical treatment using a coagulant. For the biological stage, the effect of carbon sources on nitrate removal was first investigated using mineral salt medium (MSM) containing 500 mg l(-1) nitrate in order to select the most effective carbon source. Among the three carbon sources tested, namely glucose, starch and cellulose, starch at 1% was found to be the most effective. Starch was therefore used as the carbon source for the remainder of the biological treatment, in which nitrate removal was carried out on MSM solution and groundwater samples containing 500 mg l(-1) and 460 mg l(-1) nitrate, respectively. About 86% and 89% of the nitrate was removed from the MSM solution and groundwater samples, respectively, at 72 h. Chemical coagulants such as alum, lime and poly aluminium chloride were then tested for the removal of the nitrate remaining in the samples. Among the coagulants, lime at 150 mg l(-1) exhibited the highest nitrate removal efficiency, with complete removal from the MSM solutions. Thus, a combined system of biological and chemical treatments was found to be more effective for the complete removal of nitrate from groundwater.

  13. Cluster Sampling Bias in Government-Sponsored Evaluations: A Correlational Study of Employment and Welfare Pilots in England.

    Science.gov (United States)

    Vaganay, Arnaud

    2016-01-01

    For pilot or experimental employment programme results to apply beyond their test bed, researchers must select 'clusters' (i.e. the job centres delivering the new intervention) that are reasonably representative of the whole territory. More specifically, this requirement must account for conditions that could artificially inflate the effect of a programme, such as the fluidity of the local labour market or the performance of the local job centre. Failure to achieve representativeness results in Cluster Sampling Bias (CSB). This paper makes three contributions to the literature. Theoretically, it approaches the notion of CSB as a human behaviour. It offers a comprehensive theory, whereby researchers with limited resources and conflicting priorities tend to oversample 'effect-enhancing' clusters when piloting a new intervention. Methodologically, it advocates for a 'narrow and deep' scope, as opposed to the 'wide and shallow' scope, which has prevailed so far. The PILOT-2 dataset was developed to test this idea. Empirically, it provides evidence on the prevalence of CSB. In conditions similar to the PILOT-2 case study, investigators (1) do not sample clusters with a view to maximise generalisability; (2) do not oversample 'effect-enhancing' clusters; (3) consistently oversample some clusters, including those with higher-than-average client caseloads; and (4) report their sampling decisions in an inconsistent and generally poor manner. In conclusion, although CSB is prevalent, it is still unclear whether it is intentional and meant to mislead stakeholders about the expected effect of the intervention or due to higher-level constraints or other considerations.

  14. Cluster information of non-sampled area in small area estimation of poverty indicators using Empirical Bayes

    Science.gov (United States)

    Sundara, Vinny Yuliani; Sadik, Kusman; Kurnia, Anang

    2017-03-01

    A survey is a data collection method in which individual units are sampled from a population. However, national surveys provide only limited information, which results in low precision at the small area level. Moreover, when an area is not selected as a sampling unit, direct estimation cannot be made at all. Small area estimation methods are therefore required to solve this problem. One model-based estimation method is empirical Bayes, which has been widely used to estimate parameters in small areas, even in non-sampled areas. Yet problems occur when this method is used to estimate parameters of non-sampled areas, since the estimate is then solely based on a synthetic model that ignores the area effects. This paper proposes an approach that clusters area effects using auxiliary variables, by assuming that there are similarities among particular areas. Direct estimates for several sub-districts in the regency and city of Bogor are zero because no households under the poverty line appear in the samples selected from these sub-districts; empirical Bayes methods are used to obtain non-zero estimates. The empirical Bayes methods on FGT poverty measures, both Molina & Rao's and the version with cluster information, give the same estimates in the sub-districts selected as samples, but different estimates in non-sampled sub-districts. The empirical Bayes method with cluster information has the smaller coefficient of variation and is thus better than the empirical Bayes method without cluster information for non-sampled sub-districts in the regency and city of Bogor.

  15. A gas-loading system for LANL two-stage gas guns

    Energy Technology Data Exchange (ETDEWEB)

    Gibson, Lloyd Lee [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Bartram, Brian Douglas [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Dattelbaum, Dana Mcgraw [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Lang, John Michael [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Morris, John Scott [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2015-09-01

    A novel gas loading system was designed for the specific application of remotely loading high purity gases into targets for gas-gun driven plate impact experiments. The high purity gases are loaded into well-defined target configurations to obtain Hugoniot states in the gas phase at greater than ambient pressures. The small volume of the gas samples is challenging, as slight changes in the ambient temperature result in measurable pressure changes. Therefore, the ability to load a gas gun target and continually monitor the sample pressure prior to firing provides the most stable and reliable target fielding approach. We present the design and evaluation of a gas loading system built for the LANL 50 mm bore two-stage light gas gun. Targets for the gun are made of 6061 Al or OFHC Cu, and assembled to form a gas containment cell with a volume of approximately 1.38 cc. The compatibility of materials was a major consideration in the design of the system, particularly for its use with corrosive gases. Piping and valves are stainless steel with wetted seals made from Kalrez® and Teflon®. Preliminary testing was completed to ensure proper flow rates and that the proper safety controls were in place. The system has been used to successfully load Ar, Kr, Xe, and anhydrous ammonia with purities of up to 99.999 percent. The design of the system and example data from the plate impact experiments will be shown.

  17. Two stages of isotopic exchanges experienced by the Ertaibei granite pluton, northern Xinjiang, China

    Institute of Scientific and Technical Information of China (English)

    刘伟

    2000-01-01

    18O/16O and D/H of coexisting feldspar, quartz, and biotite separates of twenty samples collected from the Ertaibei granite pluton, northern Xinjiang, China are determined. It is shown that the Ertaibei pluton experienced two stages of isotopic exchanges. The second stage of 18O/16O and D/H exchanges with meteoric water brought about a marked decrease in the δ18O values of feldspar and biotite from the second group of samples. The D/H of biotite exhibits a higher sensitivity to the meteoric water alteration than its 18O/16O. However, the first stage of 18O/16O exchange with the 18O-rich aqueous fluid derived from the dehydration within the deep crust caused the Δ18O Quartz-Feldspar reversal. It is inferred that the dehydration-melting may have been an important mechanism for anatexis. It is shown that the deep fluid encircled the Ertaibei pluton like an envelope which serves as an effective screen to the surface waters.

  18. A gas-loading system for LANL two-stage gas guns

    Science.gov (United States)

    Gibson, L. L.; Bartram, B. D.; Dattelbaum, D. M.; Lang, J. M.; Morris, J. S.

    2017-01-01

    A novel gas loading system was designed for the specific application of remotely loading high purity gases into targets for gas-gun driven plate impact experiments. The high purity gases are loaded into well-defined target configurations to obtain Hugoniot states in the gas phase at greater than ambient pressures. The small volume of the gas samples is challenging, as slight changes in the ambient temperature result in measurable pressure changes. Therefore, the ability to load a gas gun target and continually monitor the sample pressure prior to firing provides the most stable and reliable target fielding approach. We present the design and evaluation of a gas loading system built for the LANL 50 mm bore two-stage light gas gun. Targets for the gun are made of 6061 Al or OFHC Cu, and assembled to form a gas containment cell with a volume of approximately 1.38 cc. The compatibility of materials was a major consideration in the design of the system, particularly for its use with corrosive gases. Piping and valves are stainless steel with wetted seals made from Kalrez® and Teflon®. Preliminary testing was completed to ensure proper flow rates and that the proper safety controls were in place. The system has been used to successfully load Ar, Kr, Xe, and anhydrous ammonia with purities of up to 99.999 percent. The design of the system and example data from the plate impact experiments will be shown.

  19. The clustering of ALFALFA galaxies: dependence on HI mass, relationship to optical samples & clues on host halo properties

    CERN Document Server

    Papastergis, Emmanouil; Haynes, Martha P; Rodríguez-Puebla, Aldo; Jones, Michael G

    2013-01-01

    We use a sample of ~6000 galaxies detected by the Arecibo Legacy Fast ALFA (ALFALFA) 21cm survey, to measure the clustering properties of HI-selected galaxies. We find no convincing evidence for a dependence of clustering on the galactic atomic hydrogen (HI) mass, over the range M_HI ~ 10^{8.5} - 10^{10.5} M_sun. We show that previously reported results of weaker clustering for low-HI mass galaxies are probably due to finite-volume effects. In addition, we compare the clustering of ALFALFA galaxies with optically selected samples drawn from the Sloan Digital Sky Survey (SDSS). We find that HI-selected galaxies cluster more weakly than even relatively optically faint galaxies, when no color selection is applied. Conversely, when SDSS galaxies are split based on their color, we find that the correlation function of blue optical galaxies is practically indistinguishable from that of HI-selected galaxies. At the same time, SDSS galaxies with red colors are found to cluster significantly more than HI-selected gala...

  20. Statistics and implications of substructure detected in a representative sample of X-ray clusters

    CERN Document Server

    Chon, Gayoung; Smith, Graham

    2012-01-01

    We present a morphological study of 35 X-ray luminous galaxy clusters at 0.15 < z < 0.3, drawn from the Local Cluster Substructure Survey (LoCuSS), for which deep XMM-Newton observations are available. We characterise the structure of the X-ray surface brightness distribution of each cluster by measuring both their power ratios and centroid shift, and thus rank the clusters by the degree of substructure. These complementary probes give a consistent description of the cluster morphologies with some well understood exceptions. We find a remarkably tight correlation of regular morphology with the occurrence of cool cores in clusters. We also compare our measurements of X-ray morphology with measurements of the luminosity gap statistics and ellipticity of the brightest cluster galaxy (BCG). We check how our new X-ray morphological analysis maps onto cluster scaling relations, finding that (i) clusters with relatively undisturbed X-ray morphologies are on average more luminous at fixed X-ray...

  1. Two-stage crystallization of charged colloids under low supersaturation conditions.

    Science.gov (United States)

    Kratzer, Kai; Arnold, Axel

    2015-03-21

    We report simulations on the homogeneous liquid-fcc nucleation of charged colloids for both low and high contact energy values. As a precursor for crystal formation, we observe increased local order at the position where the crystal will form, but no correlations with the local density. Thus, the nucleation is driven by order fluctuations rather than density fluctuations. Our results also show that the transition involves two stages in both cases, first a transition of liquid → bcc, followed by a bcc → hcp/fcc transition. Both transitions have to overcome free energy barriers, so that a spherical bcc-like cluster is formed first, in which the final fcc structure is nucleated mainly at the surface of the crystallite. This means that the second stage bcc-fcc phase transition is a heterogeneous nucleation in the partially grown solid phase, even though we start from a homogeneous bulk liquid. The height of the bcc → hcp/fcc free energy barrier strongly depends on the contact energies of the colloids. For low contact energy this barrier is low, so that the bcc → hcp/fcc transition occurs spontaneously. For the higher contact energy, the second barrier is too high to be crossed spontaneously by the colloidal system. However, it was possible to ratchet the system over the second barrier and to transform the bcc nuclei into the stable hcp/fcc phase. The transitions are dominated by the first liquid-bcc transition and can be described by classical nucleation theory using an effective surface tension.
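
    The closing statement invokes classical nucleation theory (CNT) with an effective surface tension. For reference, these are the standard CNT expressions fitted in such an analysis, in generic textbook notation (the symbols and forms below are not values or derivations from the paper):

    ```latex
    % CNT free energy of a spherical nucleus of radius r, with effective surface
    % tension \gamma_{eff}, chemical-potential gain per particle |\Delta\mu|, and
    % solid number density \rho_s. Maximising over r gives the barrier height and
    % critical radius with which a liquid->solid transition is described.
    \Delta G(r) = -\tfrac{4}{3}\pi r^{3}\,\rho_s\,|\Delta\mu| + 4\pi r^{2}\,\gamma_{\mathrm{eff}},
    \qquad
    \Delta G^{*} = \frac{16\pi\,\gamma_{\mathrm{eff}}^{3}}{3\,(\rho_s\,|\Delta\mu|)^{2}},
    \qquad
    r^{*} = \frac{2\,\gamma_{\mathrm{eff}}}{\rho_s\,|\Delta\mu|}
    ```

    The strong dependence of the barrier on the contact energy reported above enters through \(\gamma_{\mathrm{eff}}\): the cubic dependence of \(\Delta G^{*}\) on the surface tension means modest changes in contact energy can switch the bcc → hcp/fcc step from spontaneous to kinetically blocked.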

  2. EMPCA and Cluster Analysis of Quasar Spectra: Sample Preparation and Validation

    Science.gov (United States)

    Wagner, Cassidy; Leighly, Karen; Macinnis, Francis; Marrs, Adam; Richards, Gordon T.

    2017-01-01

    All quasars are fundamentally similar, powered by accretion of matter onto a supermassive black hole. However, patterns of differences can be identified through the emission lines. Quasar broad absorption lines have been postulated to be responsible for feedback in galaxy evolution. Principal component analysis (PCA) quantifies trends in emission lines of quasars that can be used to predict and reconstruct the underlying continuum in broad absorption line quasars. Richards et al. 2011 hypothesized that emission-line variance across the rest-UV spectrum is correlated with C IV blueshift and equivalent width. We fit their composite spectra, constructed based on these properties, to identify trends for the purpose of creating simulated spectra to test the weighted Expectation Maximization PCA (EMPCA; Bailey 2012) and cluster analysis method discussed in the adjacent poster by Marrs et al. More than 800 SDSS spectra from Allen et al. 2011, with a redshift range of z = 2.2 - 2.3, were selected for analysis, particularly spectra with high signal-to-noise ratios, without broad absorption lines, and without numerous narrow absorption lines. Interstellar and intergalactic absorption lines add variance that contaminates the principal components. To remove these lines, we smoothed the spectra using a Fourier transform and a low-pass filter. We then used a line-finding and -removal program to remove or flag narrow absorption lines. From the principal components that resulted from the PCA analysis we were able to reconstruct the continua of a small sample of BAL QSOs.
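
    The smoothing step described above (Fourier transform plus low-pass filter) can be sketched in a few lines; the fraction of low-frequency modes retained is an assumed tuning parameter, not a value from the poster:

    ```python
    import numpy as np

    def fourier_lowpass(flux: np.ndarray, keep_fraction: float = 0.05) -> np.ndarray:
        """Smooth a 1-D spectrum by zeroing high-frequency Fourier modes.

        Generic sketch of the low-pass smoothing described above; the
        `keep_fraction` of retained modes is an illustrative assumption.
        """
        ft = np.fft.rfft(flux)
        cutoff = max(1, int(keep_fraction * ft.size))
        ft[cutoff:] = 0.0   # discard high-frequency structure (narrow lines, noise)
        return np.fft.irfft(ft, n=flux.size)

    # usage: smooth = fourier_lowpass(spectrum_flux); residual narrow absorption
    # lines can then be flagged where (spectrum_flux - smooth) dips strongly.
    ```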

  3. DEVELOPMENT OF COLD CLIMATE HEAT PUMP USING TWO-STAGE COMPRESSION

    Energy Technology Data Exchange (ETDEWEB)

    Shen, Bo [ORNL; Rice, C Keith [ORNL; Abdelaziz, Omar [ORNL; Shrestha, Som S [ORNL

    2015-01-01

    This paper uses a well-regarded, hardware-based heat pump system model to investigate a two-stage economizing cycle for cold climate heat pump applications. The two-stage compression cycle has two variable-speed compressors. The high-stage compressor was modelled using a compressor map, and the low-stage compressor was experimentally studied using calorimeter testing. A single-stage heat pump system was modelled as the baseline. The system performance predictions are compared between the two-stage and single-stage systems. Special considerations for designing a cold climate heat pump are addressed at both the system and component levels.

  5. Modeling of inter-sample variation in flow cytometric data with the joint clustering and matching procedure.

    Science.gov (United States)

    Lee, Sharon X; McLachlan, Geoffrey J; Pyne, Saumyadipta

    2016-01-01

    We present an algorithm for modeling flow cytometry data in the presence of large inter-sample variation. Large-scale cytometry datasets often exhibit some within-class variation due to technical effects such as instrumental differences and variations in data acquisition, as well as subtle biological heterogeneity within the class of samples. Failure to account for such variations in the model may lead to inaccurate matching of populations across a batch of samples and poor performance in classification of unlabeled samples. In this paper, we describe the Joint Clustering and Matching (JCM) procedure for simultaneous segmentation and alignment of cell populations across multiple samples. Under the JCM framework, a multivariate mixture distribution is used to model the distribution of the expressions of a fixed set of markers for each cell in a sample, such that the components in the mixture model may correspond to the various populations of cells, which have similar expressions of markers (that is, clusters), in the composition of the sample. For each class of samples, an overall class template is formed by the adoption of random-effects terms to model the inter-sample variation within a class. The construction of a parametric template for each class allows for direct quantification of the differences between the template and each sample, and also between each pair of samples, both within or between classes. The classification of a new unclassified sample is then undertaken by assigning the unclassified sample to the class that minimizes the distance between its fitted mixture density and each class density as provided by the class templates. For illustration, we use a symmetric form of the Kullback-Leibler divergence as a distance measure between two densities, but other distance measures can also be applied. We show and demonstrate on four real datasets how the JCM procedure can be used to carry out the tasks of automated clustering and alignment of cell populations.
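
    As a concrete illustration of the final classification step, the symmetric Kullback-Leibler distance has a closed form for single Gaussians. The sketch below simplifies each class template to one multivariate Gaussian (JCM itself uses mixtures, for which the divergence has no closed form and must be approximated numerically); all names are illustrative:

    ```python
    import numpy as np

    def kl_gauss(m0, S0, m1, S1):
        """KL divergence KL(N0 || N1) between multivariate Gaussians (closed form)."""
        d = m0.size
        S1_inv = np.linalg.inv(S1)
        diff = m1 - m0
        return 0.5 * (np.trace(S1_inv @ S0) + diff @ S1_inv @ diff - d
                      + np.log(np.linalg.det(S1) / np.linalg.det(S0)))

    def symmetric_kl(m0, S0, m1, S1):
        """Symmetrised divergence used as a distance between two densities."""
        return 0.5 * (kl_gauss(m0, S0, m1, S1) + kl_gauss(m1, S1, m0, S0))

    def classify(sample_mean, sample_cov, templates):
        """templates: dict class_name -> (mean, cov). Returns the nearest class,
        i.e. the one minimising the symmetric KL distance to the fitted sample."""
        return min(templates,
                   key=lambda c: symmetric_kl(sample_mean, sample_cov, *templates[c]))
    ```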

  6. Chandra measurements of a complete sample of X-ray luminous galaxy clusters: the luminosity-mass relation

    Science.gov (United States)

    Giles, P. A.; Maughan, B. J.; Dahle, H.; Bonamente, M.; Landry, D.; Jones, C.; Joy, M.; Murray, S. S.; van der Pyl, N.

    2017-02-01

    We present the results of work involving a statistically complete sample of 34 galaxy clusters, in the redshift range 0.15 ≤ z ≤ 0.3, observed with Chandra. We investigate the luminosity-mass (LM) relation for the cluster sample, with the masses obtained via a full hydrostatic mass analysis. We utilize a method to fully account for selection biases when modelling the LM relation, and find that the LM relation is significantly different from the relation modelled when selection effects are not accounted for. We find that the luminosity of our clusters is 2.2 ± 0.4 times higher (when accounting for selection effects) than the population average for a given mass, and their masses are 30 per cent lower than the population average for a given luminosity. Equivalently, using the LM relation measured from this sample without correcting for selection biases would lead to the average mass of a cluster of a given luminosity being underestimated by 40 per cent. Comparing the hydrostatic masses to mass estimates determined from the YX parameter, we find that they are entirely consistent, irrespective of the dynamical state of the cluster.
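
    The direction of the bias reported above (selected clusters look over-luminous at fixed mass) is easy to reproduce in a toy Monte Carlo; the slope, normalisation, scatter, and luminosity cut below are illustrative assumptions, not the paper's values:

    ```python
    import numpy as np
    rng = np.random.default_rng(0)

    # Toy demonstration of luminosity-selection (Malmquist-type) bias: a
    # power-law L-M relation with lognormal intrinsic scatter, observed only
    # above a luminosity limit. All numbers are illustrative.
    n = 20000
    logM = rng.uniform(14.0, 15.0, n)                # log10 cluster mass
    logL_true = 1.5 * (logM - 14.5) + 44.5           # assumed underlying relation
    logL_obs = logL_true + rng.normal(0.0, 0.2, n)   # intrinsic scatter (dex)

    cut = logL_obs > 44.6                            # luminosity-limited selection
    bias = np.mean(logL_obs[cut] - logL_true[cut])
    print(f"mean offset of selected clusters above the true relation: {bias:.2f} dex")
    # Low-mass clusters only make the cut when they scatter high, so a naive fit
    # to the selected sample sits above the underlying relation, qualitatively
    # matching the selection effect the paper corrects for.
    ```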

  7. Psychopathology, social adjustment and personality correlates of schizotypy clusters in a large nonclinical sample.

    Science.gov (United States)

    Barrantes-Vidal, Neus; Lewandowski, Kathryn E; Kwapil, Thomas R

    2010-09-01

    Correlational methods, unlike cluster analyses, cannot take into account the possibility that individuals score highly on more than one symptom dimension simultaneously. This may account for some of the inconsistency found in the literature of correlates of schizotypy dimensions. This study explored the clustering of positive and negative schizotypy dimensions in nonclinical subjects and whether schizotypy clusters have meaningful patterns of adjustment in terms of psychopathology, social functioning, and personality. Positive and negative schizotypy dimensional scores were derived from the Chapman Psychosis-Proneness Scales for 6137 college students and submitted to cluster analysis. Of these, 780 completed the NEO-PI-R and Social Adjustment Scale-self report version, and a further 430 were interviewed for schizophrenia-spectrum, mood, and substance use psychopathology. Four clusters were obtained: low (nonschizotypic), high positive, high negative, and mixed (high positive and negative) schizotypy. The positive schizotypy cluster presented high rates of psychotic-like experiences, schizotypal and paranoid symptoms, had affective and substance abuse pathology, and was open to experience and extraverted. The negative schizotypy cluster had high rates of negative and schizoid symptoms, impaired social adjustment, high conscientiousness and low agreeableness. The mixed cluster was the most deviant on almost all aspects. Our cluster solution is consistent with the limited cluster analytic studies reported in schizotypy and schizophrenia, indicating that meaningful profiles of schizotypy features can be detected in nonclinical populations. The clusters identified displayed a distinct and meaningful pattern of correlates in different domains, thus providing construct validity to the schizotypy types defined. (c) 2010 Elsevier B.V. All rights reserved.

  8. A Necessary Condition about the Optimum Partition on a Finite Set of Samples and Its Application to Clustering Analysis

    Institute of Scientific and Technical Information of China (English)

    叶世伟; 史忠植

    1995-01-01

    This paper presents another necessary condition about the optimum partition on a finite set of samples. From this condition, a corresponding generalized sequential hard k-means (GSHKM) clustering algorithm is built, and many well-known clustering algorithms are found to be included in it. Under some assumptions the well-known MacQueen's SHKM (Sequential Hard K-Means) algorithm, FSCL (Frequency Sensitive Competitive Learning) algorithm and RPCL (Rival Penalized Competitive Learning) algorithm are derived. It is shown that FSCL in fact still belongs to the family of GSHKM clustering algorithms and is more suitable for producing means of a K-partition of sample data, which is illustrated by numerical experiment. Meanwhile, some improvements on these algorithms are also given.
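
    For orientation, the MacQueen SHKM special case mentioned above can be sketched as follows; FSCL and RPCL differ only in how the winner is chosen or how rivals are penalised. This is a generic textbook formulation, not the paper's GSHKM derivation:

    ```python
    import numpy as np

    def sequential_hard_kmeans(X, k, seed=0):
        """MacQueen-style sequential hard k-means: samples are presented one at
        a time, the winning centre moves toward the sample with learning rate
        1/n_wins, so each centre tracks the running mean of the points it wins.
        """
        rng = np.random.default_rng(seed)
        centres = X[rng.choice(len(X), k, replace=False)].astype(float)
        wins = np.ones(k)                # each centre starts as one seed point
        for x in X:
            j = int(np.argmin(((centres - x) ** 2).sum(axis=1)))  # hard winner
            wins[j] += 1
            centres[j] += (x - centres[j]) / wins[j]   # incremental mean update
        return centres

    # usage: centres = sequential_hard_kmeans(X, k=3)
    ```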

  9. The ESO Distant Cluster Sample: galaxy evolution and environment out to z=1

    CERN Document Server

    Poggianti, Bianca M; Bamford, Steven; Barazza, Fabio; Best, Philip; Clowe, Douglas; Dalcanton, Julianne; De Lucia, Gabriella; Desai, Vandana; Finn, Rose; Halliday, Claire; Jablonka, Pascale; Johnson, Olivia; Milvang-Jensen, Bo; Moustakas, John; Noll, Stefan; Nowak, Nina; Pello, Roser; Poirier, Sebastien; Rudnick, Gregory; Saglia, Roberto; Sanchez-Blazquez, Patricia; Simard, Luc; Varela, Jesus; von der Linden, Anja; Whiley, Ian; White, Simon D M; Zaritsky, Dennis

    2009-01-01

    The ESO Distant Cluster Survey (EDisCS, P.I. Simon D.M. White, LP 166.A-0162) is an ESO large programme aimed at studying clusters and cluster galaxies at z=0.4-1. How different is the evolution of the star formation activity in clusters, in groups and in the field? Does it depend on cluster mass and/or the local galaxy density? How relevant are starburst and post-starburst galaxies in the different environments? Is there an evolution in the galaxies' structures, and if so, is this related to the changes in their star formation activity? These are some of the main questions that have been investigated using the EDisCS dataset.

  10. JOINT ANALYSIS OF CLUSTER OBSERVATIONS. II. CHANDRA/XMM-NEWTON X-RAY AND WEAK LENSING SCALING RELATIONS FOR A SAMPLE OF 50 RICH CLUSTERS OF GALAXIES

    Energy Technology Data Exchange (ETDEWEB)

    Mahdavi, Andisheh [Department of Physics and Astronomy, San Francisco State University, San Francisco, CA 94131 (United States); Hoekstra, Henk [Leiden Observatory, Leiden University, Niels Bohrweg 2, NL-2333 CA Leiden (Netherlands); Babul, Arif; Bildfell, Chris [Department of Physics and Astronomy, University of Victoria, Victoria, BC V8W 3P6 (Canada); Jeltema, Tesla [Santa Cruz Institute for Particle Physics, UC Santa Cruz, 1156 High Street, Santa Cruz, CA 95064 (United States); Henry, J. Patrick [Institute for Astronomy, 2680 Woodlawn Drive, Honolulu, HI 96822 (United States)

    2013-04-20

    We present a study of multiwavelength X-ray and weak lensing scaling relations for a sample of 50 clusters of galaxies. Our analysis combines Chandra and XMM-Newton data using an energy-dependent cross-calibration. After considering a number of scaling relations, we find that gas mass is the most robust estimator of weak lensing mass, yielding 15% {+-} 6% intrinsic scatter at r{sub 500}{sup WL} (the pseudo-pressure Y{sub X} yields a consistent scatter of 22% {+-} 5%). The scatter does not change when measured within a fixed physical radius of 1 Mpc. Clusters with small brightest cluster galaxy (BCG) to X-ray peak offsets constitute a very regular population whose members have the same gas mass fractions and whose even smaller (<10%) deviations from regularity can be ascribed to line of sight geometrical effects alone. Cool-core clusters, while a somewhat different population, also show the same (<10%) scatter in the gas mass-lensing mass relation. There is a good correlation and a hint of bimodality in the plane defined by BCG offset and central entropy (or central cooling time). The pseudo-pressure Y{sub X} does not discriminate between the more relaxed and less relaxed populations, making it perhaps the more even-handed mass proxy for surveys. Overall, hydrostatic masses underestimate weak lensing masses by 10% on the average at r{sub 500}{sup WL}; but cool-core clusters are consistent with no bias, while non-cool-core clusters have a large and constant 15%-20% bias between r{sub 2500}{sup WL} and r{sub 500}{sup WL}, in agreement with N-body simulations incorporating unthermalized gas. For non-cool-core clusters, the bias correlates well with BCG ellipticity. We also examine centroid shift variance and power ratios to quantify substructure; these quantities do not correlate with residuals in the scaling relations. Individual clusters have for the most part forgotten the source of their departures from self-similarity.

  11. Numerical simulation of a step-piston type series two-stage pulse tube refrigerator

    Science.gov (United States)

    Zhu, Shaowei; Nogawa, Masafumi; Inoue, Tatsuo

    2007-09-01

    A two-stage pulse tube refrigerator has a great advantage in that there are no moving parts at low temperatures. The problem is low theoretical efficiency. In an ordinary two-stage pulse tube refrigerator, the expansion work of the first stage pulse tube is rather large, but is changed to heat. The theoretical efficiency is lower than that of a Stirling refrigerator. A series two-stage pulse tube refrigerator was introduced for solving this problem. The hot end of the regenerator of the second stage is connected to the hot end of the first stage pulse tube. The expansion work in the first stage pulse tube is part of the input work of the second stage, therefore the efficiency is increased. In a simulation result for a step-piston type two-stage series pulse tube refrigerator, the efficiency is increased by 13.8%.

  12. Theory and calculation of two-stage voltage stabilizer on zener diodes

    Directory of Open Access Journals (Sweden)

    G. S. Veksler

    1966-12-01

    Full Text Available A two-stage stabilizer is compared with a one-stage design. Formulas are derived that make an engineering calculation possible, and a worked example of the calculation is given.
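
    The record does not reproduce Veksler's formulas, but the first-order reasoning behind cascading two zener stages can be sketched with the standard small-signal model, in which each shunt stage passes a fraction r_z/(R_s + r_z) of the input-voltage variation. All component values below are assumed, illustrative numbers:

    ```python
    # First-order estimate of the benefit of cascading two zener shunt stages.
    # Standard small-signal model (not the paper's exact formulas): each stage
    # attenuates input-voltage variation by the divider formed by its series
    # resistor R_s and the zener's dynamic resistance r_z.

    def stage_ripple_gain(r_series: float, r_zener: float) -> float:
        """Fraction of input-voltage variation passed by one zener shunt stage."""
        return r_zener / (r_series + r_zener)

    r_s1, r_z1 = 330.0, 10.0   # assumed ohms, stage 1
    r_s2, r_z2 = 330.0, 10.0   # assumed ohms, stage 2
    one_stage = stage_ripple_gain(r_s1, r_z1)
    two_stage = one_stage * stage_ripple_gain(r_s2, r_z2)
    print(f"one stage passes {one_stage:.1%}; two stages pass {two_stage:.2%}")
    # ~2.9% vs ~0.09%: the multiplied (roughly squared) attenuation is the
    # basic motivation for a two-stage stabilizer.
    ```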

  13. Two-stage fungal pre-treatment for improved biogas production from sisal leaf decortication residues

    National Research Council Canada - National Science Library

    Muthangya, Mutemi; Mshandete, Anthony Manoni; Kivaisi, Amelia Kajumulo

    2009-01-01

    .... Pre-treatment of the residue prior to its anaerobic digestion (AD) was investigated using a two-stage pre-treatment approach with two fungal strains, CCHT-1 and Trichoderma reesei in succession in anaerobic batch bioreactors...

  14. Experiment research on two-stage dry-fed entrained flow coal gasifier

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    The process flow and the main devices of a new two-stage dry-fed coal gasification pilot plant with a throughput of 36 t/d are introduced in this paper. For comparison with traditional one-stage gasifiers, the influences of the coal feed ratio between the two stages on the performance of the gasifier are studied in detail by a series of experiments. The results reveal that the two-stage gasification decreases the temperature of the syngas at the outlet of the gasifier, simplifies the gasification process, and reduces the size of the syngas cooler. Moreover, the cold gas efficiency of the gasifier can be improved by using the two-stage gasification. In our experiments, the efficiency is about 3%-6% higher than that of existing one-stage gasifiers.

  15. A Two-Stage Waste Gasification Reactor for Mars In-Situ Resource Utilization Project

    Data.gov (United States)

    National Aeronautics and Space Administration — We propose to design, build, and test a two-stage waste processing reactor for space applications. Our proposed technology converts waste from space missions into...

  16. A Two-Stage Bayesian Network Method for 3D Human Pose Estimation from Monocular Image Sequences

    Directory of Open Access Journals (Sweden)

    Wang Yuan-Kai

    2010-01-01

    Full Text Available This paper proposes a novel human motion capture method that locates human body joint positions and reconstructs the human pose in 3D space from monocular images. We propose a two-stage framework including 2D and 3D probabilistic graphical models which can solve the occlusion problem for the estimation of human joint positions. The 2D and 3D models adopt a directed acyclic structure to avoid error propagation of inference. Image observations corresponding to shape and appearance features of humans are considered as evidence for the inference of 2D joint positions in the 2D model. Both the 2D and 3D models utilize the Expectation Maximization algorithm to learn prior distributions of the models. An annealed Gibbs sampling method is proposed for the two-stage method to infer the maximum a posteriori distributions of joint positions. The annealing process can efficiently explore the modes of distributions and find solutions in high-dimensional space. Experiments are conducted on the HumanEva dataset with image sequences of walking motion, which has challenges of occlusion and loss of image observations. Experimental results show that the proposed two-stage approach can efficiently estimate more accurate human poses.
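
    A minimal sketch of annealed Gibbs sampling on a toy two-variable target may help fix ideas; the bivariate-Gaussian "posterior" and the geometric temperature schedule are illustrative assumptions, far simpler than the paper's pose models:

    ```python
    import numpy as np
    rng = np.random.default_rng(1)

    # Annealed Gibbs sampling on a toy correlated-Gaussian target. Tempering
    # the target p^(1/T) inflates the conditional variances by T, so early
    # high-temperature sweeps roam widely; as T -> 1 the chain settles into
    # the dominant mode, which is the point of the annealing schedule.
    rho = 0.95                                 # coupling between the two variables
    x = y = 0.0
    schedule = np.geomspace(10.0, 1.0, 200)    # temperature T: high -> 1
    for T in schedule:
        # exact full conditionals of the tempered bivariate standard normal
        x = rng.normal(rho * y, np.sqrt(T * (1 - rho**2)))
        y = rng.normal(rho * x, np.sqrt(T * (1 - rho**2)))
    print(f"final draw near the target mode: x={x:.2f}, y={y:.2f}")
    ```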

  17. Edge principal components and squash clustering: using the special structure of phylogenetic placement data for sample comparison.

    Directory of Open Access Journals (Sweden)

    Frederick A Matsen

    Full Text Available Principal components analysis (PCA) and hierarchical clustering are two of the most heavily used techniques for analyzing the differences between nucleic acid sequence samples taken from a given environment. They have led to many insights regarding the structure of microbial communities. We have developed two new complementary methods that leverage how this microbial community data sits on a phylogenetic tree. Edge principal components analysis enables the detection of important differences between samples that contain closely related taxa. Each principal component axis is a collection of signed weights on the edges of the phylogenetic tree, and these weights are easily visualized by a suitable thickening and coloring of the edges. Squash clustering outputs a (rooted) clustering tree in which each internal node corresponds to an appropriate "average" of the original samples at the leaves below the node. Moreover, the length of an edge is a suitably defined distance between the averaged samples associated with the two incident nodes, rather than the less interpretable average of distances produced by UPGMA, the most widely used hierarchical clustering method in this context. We present these methods and illustrate their use with data from the human microbiome.

  18. The X-ray luminosity-temperature relation of a complete sample of low-mass galaxy clusters

    DEFF Research Database (Denmark)

    Zou, S.; Maughan, B. J.; Giles, P. A.

    2016-01-01

    We present Chandra observations of 23 galaxy groups and low-mass galaxy clusters at redshifts z > 0.03. The sample is a statistically complete flux-limited subset of the 400 deg2 survey. We investigated the scaling relation between X-ray luminosity (L) and temperatur...

  19. A new multi-motor drive system based on two-stage direct power converter

    OpenAIRE

    Kumar, Dinesh

    2011-01-01

    The two-stage AC to AC direct power converter is an alternative matrix converter topology, which offers the benefits of sinusoidal input currents and output voltages, bidirectional power flow and controllable input power factor. The absence of any energy storage devices, such as electrolytic capacitors, has increased the potential lifetime of the converter. In this research work, a new multi-motor drive system based on a two-stage direct power converter has been proposed, with two motors c...

  20. Maximally efficient two-stage screening: Determining intellectual disability in Taiwanese military conscripts

    Directory of Open Access Journals (Sweden)

    Chia-Chang Chien

    2009-01-01

    Full Text Available Chia-Chang Chien, Shu-Fen Huang, For-Wey Lung; Department of Psychiatry, Kaohsiung Armed Forces General Hospital, Kaohsiung, Taiwan; Graduate Institute of Behavioral Sciences, Kaohsiung Medical University, Kaohsiung, Taiwan; Department of Psychiatry, National Defense Medical Center, Taipei, Taiwan; Calo Psychiatric Center, Pingtung County, Taiwan. Objective: The purpose of this study was to apply a two-stage screening method for the large-scale intelligence screening of military conscripts. Methods: We collected 99 conscripted soldiers whose educational levels were senior high school or lower as the participants. Every participant was required to take the Wisconsin Card Sorting Test (WCST) and the Wechsler Adult Intelligence Scale-Revised (WAIS-R) assessments. Results: Logistic regression analysis showed that the conceptual level responses (CLR) index of the WCST was the most significant index for determining intellectual disability (ID; FIQ ≤ 84). We used the receiver operating characteristic curve to determine the optimum cut-off point of CLR. The optimal single cut-off point of CLR was 66; the two cut-off points were 49 and 66. Comparing the two-stage window screening with the two-stage positive screening, the area under the curve and the positive predictive value increased. Moreover, the cost of the two-stage window screening decreased by 59%. Conclusion: The two-stage window screening is more accurate and economical than the two-stage positive screening. Our results provide an example for the use of two-stage screening and the possibility of the WCST to replace the WAIS-R in large-scale screenings for ID in the future. Keywords: intellectual disability, intelligence screening, two-stage positive screening, Wisconsin Card Sorting Test, Wechsler Adult Intelligence Scale-Revised
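
    Using the reported cut-offs, the two decision rules being compared can be sketched as below. The direction of the rule (lower CLR indicating higher risk, ID defined as FIQ ≤ 84) is inferred from the abstract, and the WAIS-R call is a placeholder for the costly second-stage test:

    ```python
    # Sketch of the two screening rules compared above, with the reported CLR
    # cut-offs (49 and 66). `wais_r_fiq` stands in for administering the full
    # WAIS-R; "positive" means flagged as intellectual disability (FIQ <= 84).

    def two_stage_positive(clr: int, wais_r_fiq) -> bool:
        # single cut-off: every stage-one positive (CLR < 66) gets a WAIS-R
        return clr < 66 and wais_r_fiq() <= 84

    def two_stage_window(clr: int, wais_r_fiq) -> bool:
        # window screening: clear negatives (CLR >= 66) and clear positives
        # (CLR < 49) are decided at stage one; only the 49-65 window buys a
        # WAIS-R test, which is where the reported 59% cost saving comes from.
        if clr >= 66:
            return False
        if clr < 49:
            return True
        return wais_r_fiq() <= 84

    print(two_stage_window(57, lambda: 80))   # in-window case -> WAIS-R -> positive
    ```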

  1. Two-Stage Conversion of Land and Marine Biomass for Biogas and Biohydrogen Production

    OpenAIRE

    Nkemka, Valentine

    2012-01-01

    The replacement of fossil fuels by renewable fuels such as biogas and biohydrogen will require efficient and economically competitive process technologies together with new kinds of biomass. A two-stage system for biogas production has several advantages over the widely used one-stage continuous stirred tank reactor (CSTR). However, it has not yet been widely implemented on a large scale. Biohydrogen can be produced in the anaerobic two-stage system. It is considered to be a useful fuel for t...

  2. Hydrogen production from cellulose in a two-stage process combining fermentation and electrohydrogenesis

    KAUST Repository

    Lalaurette, Elodie

    2009-08-01

    A two-stage dark-fermentation and electrohydrogenesis process was used to convert recalcitrant lignocellulosic materials into hydrogen gas at high yields and rates. Fermentation using Clostridium thermocellum produced 1.67 mol H2/mol-glucose at a rate of 0.25 L H2/L-d with a corn stover lignocellulose feed, and 1.64 mol H2/mol-glucose and 1.65 L H2/L-d with a cellobiose feed. The lignocellulose and cellobiose fermentation effluent consisted primarily of acetic, lactic, succinic, and formic acids and ethanol. An additional 800 ± 290 mL H2/g-COD was produced from a synthetic effluent with a wastewater inoculum (fermentation effluent inoculum; FEI) by electrohydrogenesis using microbial electrolysis cells (MECs). Hydrogen yields were increased to 980 ± 110 mL H2/g-COD with the synthetic effluent by combining in the inoculum samples from multiple microbial fuel cells (MFCs) each pre-acclimated to a single substrate (single substrate inocula; SSI). Hydrogen yields and production rates with SSI and the actual fermentation effluents were 980 ± 110 mL/g-COD and 1.11 ± 0.13 L/L-d (synthetic); 900 ± 140 mL/g-COD and 0.96 ± 0.16 L/L-d (cellobiose); and 750 ± 180 mL/g-COD and 1.00 ± 0.19 L/L-d (lignocellulose). A maximum hydrogen production rate of 1.11 ± 0.13 L H2/L reactor/d was produced with synthetic effluent. Energy efficiencies based on electricity needed for the MEC using SSI were 270 ± 20% for the synthetic effluent, 230 ± 50% for lignocellulose effluent and 220 ± 30% for the cellobiose effluent. COD removals were ∼90% for the synthetic effluents, and 70-85% based on VFA removal (65% COD removal) with the cellobiose and lignocellulose effluent. The overall hydrogen yield was 9.95 mol-H2/mol-glucose for the cellobiose. These results show that pre-acclimation of MFCs to single substrates improves performance with a complex mixture of substrates, and that high hydrogen yields and gas production rates can be achieved using a two-stage fermentation and MEC

  3. The impact of alcohol marketing on youth drinking behaviour: a two-stage cohort study.

    Science.gov (United States)

    Gordon, Ross; MacKintosh, Anne Marie; Moodie, Crawford

    2010-01-01

    To examine whether awareness of, and involvement with alcohol marketing at age 13 is predictive of initiation of drinking, frequency of drinking and units of alcohol consumed at age 15. A two-stage cohort study, involving a questionnaire survey, combining interview and self-completion, was administered in respondents' homes. Respondents were drawn from secondary schools in three adjoining local authority areas in the West of Scotland, UK. From a baseline sample of 920 teenagers (aged 12-14, mean age 13), in 2006, a cohort of 552 was followed up 2 years later (aged 14-16, mean age 15). Data were gathered on multiple forms of alcohol marketing and measures of drinking initiation, frequency and consumption. At follow-up, logistic regression demonstrated that, after controlling for confounding variables, involvement with alcohol marketing at baseline was predictive of both uptake of drinking and increased frequency of drinking. Awareness of marketing at baseline was also associated with an increased frequency of drinking at follow-up. Our findings demonstrate an association between involvement with, and awareness of, alcohol marketing and drinking uptake or increased drinking frequency, and we consider whether the current regulatory environment affords youth sufficient protection from alcohol marketing.

  4. SUCCESS FACTORS IN GROWING SMBs: A STUDY OF TWO INDUSTRIES AT TWO STAGES OF DEVELOPMENT

    Directory of Open Access Journals (Sweden)

    Tor Jarl Trondsen

    2002-01-01

    Full Text Available The study attempts to identify factors for growing SMBs. An evolutionary phase approach has been used. The study also aims to find out if there are common and different denominators for newer and older firms that can affect their profitability. The study selects a sampling frame that isolates two groups of firms in two industries at two stages of development. A variety of organizational and structural data was collected and analyzed. Amongst the conclusions that may be drawn from the study are that it is not easy to find a common definition of success, it is important to stratify SMBs when studying them, an evolutionary stage approach helps to compare firms with roughly the same external and internal dynamics and each industry has its own set of success variables.The study has identified three success variables for older firms that reflect contemporary strategic thinking such as crafting a good strategy and changing it only incrementally, building core competencies and outsourcing the rest, and keeping up with innovation and honing competitive skills.

  5. A Two-Stage Compression Method for the Fault Detection of Roller Bearings

    Directory of Open Access Journals (Sweden)

    Huaqing Wang

    2016-01-01

    Full Text Available Data acquisition for roller bearing condition monitoring is carried out based on the Shannon sampling theorem, resulting in massive amounts of redundant information, which leads to a big-data problem that increases the difficulty of roller bearing fault diagnosis. To overcome this shortcoming, a two-stage compressed fault detection strategy is proposed in this study. First, a sliding window is utilized to divide the original signals into several segments and a selected symptom parameter is employed to represent each segment, through which a symptom parameter wave can be obtained and the raw vibration signals are compressed to a certain level with the faulty information remaining. Second, a fault detection scheme based on compressed sensing is applied to extract the fault features, which can compress the symptom parameter wave thoroughly with a random matrix called the measurement matrix. The experimental results validate the effectiveness of the proposed method and a comparison of the three selected symptom parameters is also presented in this paper.
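
    The second (compressed-sensing) stage reduces the symptom-parameter wave with a random measurement matrix. A minimal sketch, with illustrative sizes and a Gaussian matrix standing in for the paper's unspecified choice:

    ```python
    import numpy as np
    rng = np.random.default_rng(0)

    # Compressed acquisition y = Phi @ x of a sparse symptom-parameter wave.
    # Sizes and the Gaussian measurement matrix are illustrative assumptions.
    n = 512                    # length of the symptom-parameter wave
    m = 64                     # number of compressed measurements (m << n)
    x = np.zeros(n)
    x[[37, 180, 401]] = [1.5, -2.0, 0.8]     # toy sparse fault signature

    Phi = rng.normal(0.0, 1.0 / np.sqrt(m), size=(m, n))   # measurement matrix
    y = Phi @ x                # compression: 512 values -> 64 values
    print(y.shape)             # (64,)
    # Recovery of x from y would use a sparse solver (e.g. orthogonal matching
    # pursuit); detection can also operate on y directly, since random Gaussian
    # projections approximately preserve distances between signals.
    ```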

  6. The Brera Multi-scale Wavelet HRI Cluster Survey: I Selection of the Sample and Number Counts

    CERN Document Server

    Moretti, A; Campana, S; Lazzati, D; Panzera, M R; Tagliaferri, G; Arena, S; Braglia, F; Dell'Antonio, I; Longhetti, M

    2004-01-01

    We describe the construction of the Brera Multi-scale Wavelet (BMW) HRI Cluster Survey, a deep sample of serendipitous X-ray selected clusters of galaxies based on the ROSAT HRI archive. This is the first cluster catalog exploiting the high angular resolution of this instrument. Cluster candidates are selected on the basis of their X-ray extension only, a parameter which is well measured by the BMW wavelet detection algorithm. The survey includes 154 candidates over a total solid angle of ~160 deg^2 at 10^{-12} erg s^{-1} cm^{-2} and ~80 deg^2 at 1.8x10^{-13} erg s^{-1} cm^{-2}. At the same time, a fairly good sky coverage in the faintest flux bins (3-5x10^{-14} erg s^{-1} cm^{-2}) gives this survey the capability to detect a few clusters with z ~ 1-1.2, depending on evolution. We present the results of extensive Monte Carlo simulations, providing a complete statistical characterization of the survey selection function and contamination level. We also present a new estimate of the surface density of clusters ...

  7. An evaluation of a two-stage spiral processing ultrafine bituminous coal

    Energy Technology Data Exchange (ETDEWEB)

    Matthew D. Benusa; Mark S. Klima [Penn State University, University Park, PA (United States). Energy and Mineral Engineering

    2008-10-15

    Testing was conducted to evaluate the performance of a multistage Multotec SX7 spiral concentrator treating ultrafine bituminous coal. This spiral mimics a two-stage separation in that the refuse is removed after four turns, and the clean coal and middlings are repulped (without water addition) and then separated in the final three turns. Feed samples were collected from the spiral circuit of a coal cleaning plant located in southwestern Pennsylvania. The samples consisted of undeslimed cyclone feed (nominal -0.15 mm) and deslimed spiral feed (nominal 0.15 x 0.053 mm). Testing was carried out to investigate the effects of slurry flow rate and solids concentration on spiral performance. Detailed size and ash analyses were performed on the spiral feed and product samples. For selected tests, float-sink and sulfur analyses were performed. In nearly all cases, ash reduction occurred down to approximately 0.025 mm, with some sulfur reduction occurring even in the -0.025 mm interval. The separation of the +0.025 mm material was not significantly affected by the presence of the -0.025 mm material when treating the undeslimed feed. The -0.025 mm material split in approximately the same ratio as the slurry, and the majority of the water traveled to the clean coal stream. This split ultimately increased the overall clean coal ash value. A statistical analysis determined that both flow rate and solids concentration affected the clean coal ash value and yield, though the flow rate had a greater effect on the separation. 23 refs.

  8. Alcohol consumption and metabolic syndrome among Shanghai adults: A randomized multistage stratified cluster sampling investigation

    Institute of Scientific and Technical Information of China (English)

    Jian-Gao Fan; Xiao-Bu Cai; Lui Li; Xing-Jian Li; Fei Dai; Jun Zhu

    2008-01-01

    AIM: To examine the relations of alcohol consumption to the prevalence of metabolic syndrome in Shanghai adults. METHODS: We performed a cross-sectional analysis of data from the randomized multistage stratified cluster sampling of Shanghai adults, who were evaluated for alcohol consumption and each component of metabolic syndrome, using the adapted U.S. National Cholesterol Education Program criteria. Current alcohol consumption was defined as alcohol drinking more than once per month. RESULTS: The study population consisted of 3953 participants (1524 men) with a mean age of 54.3 ± 12.1 years. Among them, 448 subjects (11.3%) were current alcohol drinkers, including 405 males and 43 females. After adjustment for age and sex, the prevalence of current alcohol drinking and metabolic syndrome in the general population of Shanghai was 13.0% and 15.3%, respectively. Compared with nondrinkers, the prevalence of hypertriglyceridemia and hypertension was higher while the prevalence of abdominal obesity, low serum high-density-lipoprotein cholesterol (HDL-C) and diabetes mellitus was lower in subjects who consumed alcohol twice or more per month, with a trend toward reducing the prevalence of metabolic syndrome. Among the current alcohol drinkers, systolic blood pressure, HDL-C, fasting plasma glucose, and prevalence of hypertriglyceridemia tended to increase with increased alcohol consumption. However, low-density-lipoprotein cholesterol concentration and the prevalence of abdominal obesity, low serum HDL-C and metabolic syndrome showed the tendency to decrease. Moreover, these statistically significant differences were independent of gender and age. CONCLUSION: Current alcohol consumption is associated with a lower prevalence of metabolic syndrome irrespective of alcohol intake (g/d), and has a favorable influence on HDL-C, waist circumference, and possible diabetes mellitus. However, alcohol intake increases the likelihood of hypertension, hypertriglyceridemia and hyperglycemia.

  9. Cluster analysis and food group consumption in a national sample of Australian girls.

    Science.gov (United States)

    Grieger, J A; Scott, J; Cobiac, L

    2012-02-01

    Food preferences develop early in life and track into later life. There is limited information on food consumption and dietary patterns in Australian girls. The present study aimed to: (i) determine the frequency of food groups consumed over 1 day; (ii) identify dietary clusters based on food group consumption; and (iii) compare dietary intakes and activity variables between clusters. A cross-sectional analysis of 9-16-year-old girls (n=1114) from the 2007 Australian National Children's Nutrition and Physical Activity Survey was performed. Over the whole day, 30% of all girls consumed carbonated sugar drinks, 46% consumed take-away food, 56% consumed fruit, 70% consumed at least one vegetable, and 19% and 30% consumed white and/or red meat, respectively. K-means cluster analysis derived four clusters. Approximately one-third of girls were identified in a Meat and vegetable cluster; these girls had the highest intakes of red meat and vegetables, and tended to have higher intakes of fruit, whole grain breads, low fat yoghurt, and lower intakes of take-away foods and soft drinks. They also had the highest intakes of protein, fibre and micronutrients; and tended to perform more physical activity, compared to girls in the remaining clusters. Girls identified in the Meat and vegetable cluster, on average, consumed more lean red meat, vegetables, fruits, and low-fat dairy products, and had a higher intake of many nutrients. The high percentage of girls not identified in this cluster suggests the need to inform them on how to make healthy, nutrient dense food choices, and why they require increased nutrient intakes at this time. © 2011 The Authors. Journal of Human Nutrition and Dietetics © 2011 The British Dietetic Association Ltd.

  10. Photochemistry with fast sample renewal using cluster beams: formation of rare-gas halides in charge-transfer reactions in NF3-doped rare-gas clusters

    Science.gov (United States)

    Moussavizadeh, L.; von Haeften, K.; Museur, L.; Kanaev, A. V.; Castex, M. C.; von Pietrowski, R.; Möller, T.

    1999-05-01

    Charge transfer reactions in free clusters are observed in a photoluminescence study on doped rare-gas clusters (Rg clusters, Rg=Ar, Kr and Xe). Following photoexcitation into the first absorption bands of Rg clusters, fluorescence from free RgF* excimers ejected from the clusters and from Rg 2F* excimers localized in the interior of the clusters is observed. The results show that the reaction dynamics in clusters differs considerably from that in the gas and solid phase.

  11. Non-thermal emission and dynamical state of massive galaxy clusters from CLASH sample

    CERN Document Server

    Pandey-Pommier, M; Combes, F; Edge, A; Guiderdoni, B; Narasimha, D; Bagchi, J; Jacob, J

    2016-01-01

    Massive galaxy clusters are the most violent large-scale structures undergoing merger events in the Universe. Based upon their morphological properties in X-rays, they are classified as un-relaxed and relaxed clusters, and a fraction of them host different types of non-thermal radio emitting components, viz., haloes, mini-haloes, relics and phoenixes, within their Intra Cluster Medium (ICM). The radio haloes show steep (alpha = -1.2) and ultra steep (alpha < -1.5) spectral properties at low radio frequencies, giving important insights on the merger (pre or post) state of the cluster. Ultra steep spectrum radio halo emissions are rare and expected to be the dominating population to be discovered via LOFAR and SKA in the future. Further, the distribution of matter (morphological information) and the alignment of the hot X-ray emitting gas from the ICM with the total mass (dark + baryonic matter) and the brightest cluster galaxy (BCG) are generally used to study the dynamical state of the cluster. We present here a mult...

  12. The X-ray luminosity-temperature relation of a complete sample of low-mass galaxy clusters

    CERN Document Server

    Zou, Siwei; Giles, P A; Vikhlinin, A; Pacaud, F; Burenin, R; Hornstrup, A

    2016-01-01

    We present \\Chandra\\ observations of 23 galaxy groups and low-mass galaxy clusters at $0.03sample is a statistically complete flux-limited subset of the 400 deg$^2$ survey. We investigated the scaling relation between X-ray luminosity ($L$) and temperature ($T$), taking selection biases fully into account. The logarithmic slope of the bolometric \\LT\\ relation was found to be $3.29\\pm0.33$, consistent with values typically found for samples of more massive clusters. In combination with other recent studies of the \\LT\\ relation we show that there is no evidence for the slope, normalisation, or scatter of the \\LT\\ relation of galaxy groups being different than that of massive clusters. The exception to this is that in the special case of the most relaxed systems, the slope of the core-excised \\LT\\ relation appears to steepen from the self-similar value found for massive clusters to a steeper slope for the lower mass sample studied here. Thanks to our rigorou...

  13. THE CLUSTERING OF ALFALFA GALAXIES: DEPENDENCE ON H I MASS, RELATIONSHIP WITH OPTICAL SAMPLES, AND CLUES OF HOST HALO PROPERTIES

    Energy Technology Data Exchange (ETDEWEB)

    Papastergis, Emmanouil; Giovanelli, Riccardo; Haynes, Martha P.; Jones, Michael G. [Center for Radiophysics and Space Research, Space Sciences Building, Cornell University, Ithaca, NY 14853 (United States); Rodríguez-Puebla, Aldo, E-mail: papastergis@astro.cornell.edu, E-mail: riccardo@astro.cornell.edu, E-mail: haynes@astro.cornell.edu, E-mail: jonesmg@astro.cornell.edu, E-mail: apuebla@astro.unam.mx [Instituto de Astronomía, Universidad Nacional Autónoma de México, A. P. 70-264, 04510 México, D.F. (Mexico)

    2013-10-10

    We use a sample of ≈6000 galaxies detected by the Arecibo Legacy Fast ALFA (ALFALFA) 21 cm survey to measure the clustering properties of H I-selected galaxies. We find no convincing evidence for a dependence of clustering on galactic atomic hydrogen (H I) mass, over the range M{sub H{sub I}} ≈ 10{sup 8.5}-10{sup 10.5} M{sub ☉}. We show that previously reported results of weaker clustering for low H I mass galaxies are probably due to finite-volume effects. In addition, we compare the clustering of ALFALFA galaxies with optically selected samples drawn from the Sloan Digital Sky Survey (SDSS). We find that H I-selected galaxies cluster more weakly than even relatively optically faint galaxies, when no color selection is applied. Conversely, when SDSS galaxies are split based on their color, we find that the correlation function of blue optical galaxies is practically indistinguishable from that of H I-selected galaxies. At the same time, SDSS galaxies with red colors are found to cluster significantly more than H I-selected galaxies, a fact that is evident in both the projected as well as the full two-dimensional correlation function. A cross-correlation analysis further reveals that gas-rich galaxies 'avoid' being located within ≈3 Mpc of optical galaxies with red colors. Next, we consider the clustering properties of halo samples selected from the Bolshoi ΛCDM simulation. A comparison with the clustering of ALFALFA galaxies suggests that galactic H I mass is not tightly related to host halo mass and that a sizable fraction of subhalos do not host H I galaxies. Lastly, we find that we can recover fairly well the correlation function of H I galaxies by just excluding halos with low spin parameter. This finding lends support to the hypothesis that halo spin plays a key role in determining the gas content of galaxies.
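
    Clustering measurements of this kind rest on the two-point correlation function. Below is a textbook sketch of the standard Landy-Szalay estimator with brute-force pair counting; it is suitable only for small toy catalogues and is not the paper's actual pipeline:

    ```python
    import numpy as np

    def landy_szalay(data, rand, edges):
        """Landy-Szalay estimator xi(r) = (DD - 2DR + RR) / RR for 3-D positions.

        `data` and `rand` are (N, 3) arrays of galaxy and random positions;
        `edges` are separation-bin edges. Pair counts are normalised by the
        number of pairs, as required by the estimator.
        """
        def pair_counts(a, b, same):
            d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
            if same:   # count each unordered pair once, excluding self-pairs
                d = d[np.triu_indices(len(a), k=1)]
            return np.histogram(d.ravel(), bins=edges)[0]

        nd, nr = len(data), len(rand)
        dd = pair_counts(data, data, True) / (nd * (nd - 1) / 2)
        rr = pair_counts(rand, rand, True) / (nr * (nr - 1) / 2)
        dr = pair_counts(data, rand, False) / (nd * nr)
        return (dd - 2 * dr + rr) / rr

    # usage: xi = landy_szalay(data_xyz, random_xyz, np.linspace(0.5, 20, 11))
    ```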

  14. Susceptibility to Exercise-Induced Muscle Damage: a Cluster Analysis with a Large Sample.

    Science.gov (United States)

    Damas, F; Nosaka, K; Libardi, C A; Chen, T C; Ugrinowitsch, C

    2016-07-01

    We investigated the responses of indirect markers of exercise-induced muscle damage (EIMD) among a large number of young men (N=286) stratified in clusters based on the largest decrease in maximal voluntary contraction torque (MVC) after an unaccustomed maximal eccentric exercise bout of the elbow flexors. Changes in MVC, muscle soreness (SOR), creatine kinase (CK) activity, range of motion (ROM) and upper-arm circumference (CIR) before and for several days after exercise were compared between 3 clusters established based on MVC decrease (low, moderate, and high responders; LR, MR and HR). Participants were allocated to LR (n=61), MR (n=152) and HR (n=73) clusters, which depicted significantly different cluster centers of 82%, 61% and 42% of baseline MVC, respectively. Once stratified by MVC decrease, all muscle damage markers were significantly different between clusters following the same pattern: small changes for LR, larger changes for MR, and the largest changes for HR. Stratification of individuals based on the magnitude of MVC decrease post-exercise greatly increases the precision in estimating changes in EIMD by proxy markers such as SOR, CK activity, ROM and CIR. This indicates that the most commonly used markers are valid and MVC orchestrates their responses, consolidating the role of MVC as the best EIMD indirect marker.

  15. A volume-limited sample of X-ray galaxy groups and clusters - I. Radial entropy and cooling time profiles

    CERN Document Server

    Panagoulia, Electra; Sanders, Jeremy

    2013-01-01

    We present the first results of our study of a sample of 101 X-ray galaxy groups and clusters, which is volume-limited in each of three X-ray luminosity bins. The aim of this work is to study the properties of the innermost ICM in the cores of our groups and clusters, and to determine the effect of non-gravitational processes, such as active galactic nucleus (AGN) feedback, on the ICM. The entropy of the ICM is of special interest, as it bears the imprint of the thermal history of a cluster, and it also determines a cluster's global properties. Entropy profiles can therefore be used to examine any deviations from cluster self-similarity, as well as the effects of feedback on the ICM. We find that the entropy profiles are well-fitted by a simple power-law model of the form $K(r) = \alpha\,(r/100\,\mathrm{kpc})^{\beta}$, where $\alpha$ and $\beta$ are constants. We do not find evidence for the existence of an "entropy floor", i.e. our entropy profiles do not flatten out at small radii, as suggested by some previ...
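
    Fitting the quoted power-law model to a deprojected entropy profile is straightforward; the sketch below uses placeholder data, not values from the paper:

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    # Least-squares fit of the power-law entropy model quoted above,
    # K(r) = alpha * (r / 100 kpc)^beta. Data arrays are toy placeholders.
    def entropy_model(r_kpc, alpha, beta):
        return alpha * (r_kpc / 100.0) ** beta

    r = np.array([10., 20., 40., 80., 160.])    # radius, kpc (toy)
    K = np.array([17., 29., 52., 95., 170.])    # entropy, keV cm^2 (toy)
    (alpha, beta), _ = curve_fit(entropy_model, r, K, p0=(100.0, 1.0))
    print(f"alpha = {alpha:.1f} keV cm^2 at 100 kpc, beta = {beta:.2f}")
    # No flattening (floor) term is included, matching the finding above that
    # the profiles show no "entropy floor" at small radii.
    ```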

  16. Analysis of Turbulence Datasets using a Database Cluster: Requirements, Design, and Sample Applications

    Science.gov (United States)

    Meneveau, Charles

    2007-11-01

    The massive datasets now generated by Direct Numerical Simulations (DNS) of turbulent flows create serious new challenges. During a simulation, DNS provides only a few time steps at any instant, owing to storage limitations within the computational cluster. Therefore, traditional numerical experiments done during the simulation examine each time slice only a few times before discarding it. Conversely, if a few large datasets from high-resolution simulations are stored, they are practically inaccessible to most in the turbulence research community, who lack the cyber resources to handle the massive amounts of data. Even those who can compute at that scale must run simulations again forward in time in order to answer new questions about the dynamics, duplicating computational effort. The result is that most turbulence datasets are vastly underutilized and not available as they should be for creative experimentation. In this presentation, we discuss the desired features and requirements of a turbulence database that will enable its widest access to the research community. The guiding principle of large databases is ``move the program to the data'' (Szalay et al. ``Designing and mining multi-terabyte Astronomy archives: the Sloan Digital Sky Survey,'' in ACM SIGMOD, 2000). However, in the case of turbulence research, the questions and analysis techniques are highly specific to the client and vary widely from one client to another. This poses particularly hard challenges in the design of database analysis tools. We propose a minimal set of such tools that are of general utility across various applications. And, we describe a new approach based on a Web services interface that allows a client to access the data in a user-friendly fashion while allowing maximum flexibility to execute desired analysis tasks. Sample applications will be discussed. This work is performed by the interdisciplinary ITR group, consisting of the author and Yi Li(1), Eric Perlman(2), Minping Wan(1

  17. Method of oxygen-enriched two-stage underground coal gasification

    Institute of Scientific and Technical Information of China (English)

    Liu Hongtao; Chen Feng; Pan Xia; Yao Kai; Liu Shuqin

    2011-01-01

    Two-stage underground coal gasification was studied to improve the caloric value of the syngas and to extend gas production times. A model test using the oxygen-enriched two-stage coal gasification method was carried out. The composition of the gas produced, the time ratio of the two stages, and the role of the temperature field were analysed. The results show that oxygen-enriched two-stage gasification shortens the time of the first stage and prolongs the time of the second stage. Feed oxygen concentrations of 30%, 35%, 40%, 45%, 60%, or 80% gave time ratios (first stage to second stage) of 1:0.12, 1:0.21, 1:0.51, 1:0.64, 1:0.90, and 1:4.0, respectively. Cooling rates of the temperature field after steam injection decreased with time from about 19.1-27.4 ℃/min to 2.3-6.8 ℃/min, but this rate increased with increasing oxygen concentrations in the first stage. The caloric value of the syngas improves with increased oxygen concentration in the first stage. Injection of 80% oxygen-enriched air gave gas with the highest caloric value and also gave the longest production time. The caloric value of the gas obtained from the oxygen-enriched two-stage gasification method lies in the range from 5.31 MJ/Nm3 to 10.54 MJ/Nm3.

  18. 13 K thermally coupled two-stage Stirling-type pulse tube refrigerator

    Institute of Scientific and Technical Information of China (English)

    TANG Ke; CHEN Guobang; THUMMES Günter

    2005-01-01

    Stirling-type pulse tube refrigerators have attracted academic and commercial interest in recent years due to their more compact configuration and higher efficiency than those of G-M type pulse tube refrigerators. In order to achieve a no-load cooling temperature below 20 K, a thermally coupled two-stage Stirling-type pulse tube refrigerator has been built. The thermally coupled arrangement was expected to minimize the interference between the two stages and to simplify the adjustment and optimization of the phase shifters. A no-load cooling temperature of 14.97 K has been realized with the two-stage cooler driven by one linear compressor of 200 W electric input. When the two stages are driven by two compressors respectively, with total electric input of 400 W, the prototype has attained a no-load cooling temperature of 12.96 K, which is the lowest temperature ever reported with two-stage Stirling-type pulse tube refrigerators.

  19. Accuracy of the One-Stage and Two-Stage Impression Techniques: A Comparative Analysis

    Directory of Open Access Journals (Sweden)

    Ladan Jamshidy

    2016-01-01

    Full Text Available Introduction. One of the main steps of impression making is the selection and preparation of an appropriate tray. Hence, the present study aimed to analyze and compare the accuracy of one- and two-stage impression techniques. Materials and Methods. A resin laboratory-made model, representing the first molar, was prepared by a standard method for full crowns, with a processed preparation finish line of 1 mm depth and a convergence angle of 3-4°. Impressions were made 20 times with the one-stage technique and 20 times with the two-stage technique using an appropriate tray. To measure the marginal gap, the distance between the restoration margin and the preparation finish line of plaster dies was vertically determined in mid mesial, distal, buccal, and lingual (MDBL) regions by a stereomicroscope using a standard method. Results. The results of the independent t-test showed that the mean value of the marginal gap obtained by the one-stage impression technique was higher than that of the two-stage impression technique. Further, there was no significant difference between the one- and two-stage impression techniques in the mid-buccal region, but a significant difference was reported between the two impression techniques in the MDL regions and in general. Conclusion. The findings of the present study indicated higher accuracy for the two-stage impression technique than for the one-stage impression technique.

  20. The X-ray luminosity-temperature relation of a complete sample of low-mass galaxy clusters

    Science.gov (United States)

    Zou, S.; Maughan, B. J.; Giles, P. A.; Vikhlinin, A.; Pacaud, F.; Burenin, R.; Hornstrup, A.

    2016-11-01

    We present Chandra observations of 23 galaxy groups and low-mass galaxy clusters at redshifts z > 0.03. The sample is a statistically complete flux-limited subset of the 400 deg2 survey. We investigated the scaling relation between X-ray luminosity (L) and temperature (T), taking selection biases fully into account. The logarithmic slope of the bolometric L-T relation was found to be 3.29 ± 0.33, consistent with values typically found for samples of more massive clusters. In combination with other recent studies of the L-T relation, we show that there is no evidence for the slope, normalization, or scatter of the L-T relation of galaxy groups being different than that of massive clusters. The exception to this is that in the special case of the most relaxed systems, the slope of the core-excised L-T relation appears to steepen from the self-similar value found for massive clusters to a steeper slope for the lower mass sample studied here. Thanks to our rigorous treatment of selection biases, these measurements provide a robust reference against which to compare predictions of models of the impact of feedback on the X-ray properties of galaxy groups.

  1. Design and construction of the X-2 two-stage free piston driven expansion tube

    Science.gov (United States)

    Doolan, Con

    1995-01-01

    This report outlines the design and construction of the X-2 two-stage free piston driven expansion tube. The project has completed its construction phase and the facility has been installed in the new impulsive research laboratory, where commissioning is about to take place. The X-2 uses a unique two-stage driver design which allows a more compact and lower overall cost free piston compressor. The new facility has been constructed in order to examine the performance envelope of the two-stage driver and how well it couples to sub-orbital and super-orbital expansion tubes. Data obtained from these experiments will be used for the design of a much larger facility, X-3, utilizing the same free piston driver concept.

  2. Analysis of performance and optimum configuration of two-stage semiconductor thermoelectric module

    Institute of Scientific and Technical Information of China (English)

    Li Kai-Zhen; Liang Rui-Sheng; Wei Zheng-Jun

    2008-01-01

    In this paper, theoretical analysis and simulation calculations were conducted for a basic two-stage semiconductor thermoelectric module, which contains one thermocouple in the second stage and several thermocouples in the first stage. The study focused on the configuration of the two-stage semiconductor thermoelectric cooler, especially investigating the influences of parameters such as the current I1 of the first stage, the area A1 of every thermocouple and the number n of thermocouples in the first stage on the cooling performance of the module. The results indicate that adjusting the current I1 of the first stage, the area A1 of the thermocouples and the number n of thermocouples in the first stage can improve the cooling performance of the module. These results can be used to optimize the configuration of the two-stage semiconductor thermoelectric module and provide guidance for the design and application of thermoelectric coolers.
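
    The study's parameters connect to the generic textbook relations for a thermoelectric stage; the forms below are standard single-stage expressions in an assumed notation, not the paper's exact model:

    ```latex
    % Cold-side heat pumped (Q_c) and electrical power (P) for a stage of n
    % thermocouples, each with Seebeck coefficient \alpha, electrical resistance
    % R and thermal conductance K, carrying current I between junction
    % temperatures T_c and T_h:
    Q_c = n\left(\alpha I T_c - \tfrac{1}{2} I^{2} R - K\,(T_h - T_c)\right),
    \qquad
    P = n\left(\alpha I\,(T_h - T_c) + I^{2} R\right)
    % In a two-stage cascade, the stage on the hot side must pump the cold
    % stage's heat load plus that stage's electrical power,
    % Q_{c,\mathrm{hot\ stage}} = Q_{c,\mathrm{cold\ stage}} + P_{\mathrm{cold\ stage}},
    % which is why the current, couple area and couple count of the larger
    % stage enter the optimisation of overall cooling performance.
    ```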

  3. Effects of earthworm casts and zeolite on the two-stage composting of green waste.

    Science.gov (United States)

    Zhang, Lu; Sun, Xiangyang

    2015-05-01

    Because it helps protect the environment and encourages economic development, composting has become a viable method for organic waste disposal. The objective of this study was to investigate the effects of earthworm casts (EWCs) (at 0.0%, 0.30%, and 0.60%) and zeolite (clinoptilolite, CL) (at 0%, 15%, and 25%) on the two-stage composting of green waste. The combination of EWCs and CL improved the conditions of the composting process and the quality of the compost products in terms of the thermophilic phase, humification, nitrification, microbial numbers and enzyme activities, the degradation of cellulose and hemicellulose, and the physico-chemical characteristics and nutrient contents of the final composts. The compost matured in only 21 days with the optimized two-stage composting method rather than in the 90-270 days required for traditional composting. The optimal two-stage composting and the best quality compost were obtained with 0.30% EWCs and 25% CL.

  4. Two-Stage Revision Anterior Cruciate Ligament Reconstruction: Bone Grafting Technique Using an Allograft Bone Matrix.

    Science.gov (United States)

    Chahla, Jorge; Dean, Chase S; Cram, Tyler R; Civitarese, David; O'Brien, Luke; Moulton, Samuel G; LaPrade, Robert F

    2016-02-01

    Outcomes of primary anterior cruciate ligament (ACL) reconstruction have been reported to be far superior to those of revision reconstruction. However, as the incidence of ACL reconstruction is rapidly increasing, so is the number of failures. The subsequent need for revision ACL reconstruction is estimated to occur in up to 13,000 patients each year in the United States. Revision ACL reconstruction can be performed in one or two stages. A two-stage approach is recommended in cases of improper placement of the original tunnels or in cases of unacceptable tunnel enlargement. The aim of this study was to describe the technique for allograft ACL tunnel bone grafting in patients requiring a two-stage revision ACL reconstruction.

  5. A two-stage subsurface vertical flow constructed wetland for high-rate nitrogen removal.

    Science.gov (United States)

    Langergraber, Guenter; Leroch, Klaus; Pressl, Alexander; Rohrhofer, Roland; Haberl, Raimund

    2008-01-01

    By using a two-stage constructed wetland (CW) system operated with an organic load of 40 g COD/(m²·d) (2 m² per person equivalent), average nitrogen removal efficiencies of about 50% and average nitrogen elimination rates of 980 g N/(m²·yr) could be achieved. Two vertical flow beds with intermittent loading have been operated in series. The first stage uses sand with a grain size of 2-3.2 mm for the main layer and has a drainage layer that is impounded; the second stage uses sand with a grain size of 0.06-4 mm and a drainage layer with free drainage. The high nitrogen removal can be achieved without recirculation; it is thus possible to operate the two-stage CW system without energy input. The paper shows performance data for the two-stage CW system regarding removal of organic matter and nitrogen for the two-year operating period of the system. Additionally, its efficiency is compared with the efficiency of a single-stage vertical flow CW system designed and operated according to the Austrian design standards with 4 m² per person equivalent. The comparison shows that a higher effluent quality could be reached with the two-stage system although the two-stage CW system is operated with double the organic load or half the specific surface area requirement, respectively. Another advantage is that the specific investment costs of the two-stage CW system amount to 1,200 EUR per person (without mechanical pre-treatment), only about 60% of the specific investment costs of the single-stage CW system.

  6. Clustering properties of a type-selected volume-limited sample of galaxies in the CFHTLS

    CERN Document Server

    McCracken, H J; Mellier, Y; Bertin, E; Guzzo, L; Arnouts, S; Le Fèvre, O; Zamorani, G

    2007-01-01

    (abridged) We present an investigation of the clustering of i'AB < 24.5 galaxies in the redshift interval 0.2 < z < 1.2 in the CFHTLS. ... 2. Redder galaxies have clustering amplitudes between two and three times higher than bluer ones. 3. For bright red and blue galaxies, the clustering amplitude is invariant with redshift. 4. At z ~ 0.5, less luminous galaxies have higher clustering amplitudes, of around 6 h-1 Mpc. 5. The relative bias between galaxies with red and blue rest-frame colours increases gradually towards fainter absolute magnitud...

  7. Ultraviolet tails and trails in cluster galaxies: A sample of candidate gaseous stripping events in Coma

    CERN Document Server

    Smith, Russell J; Hammer, Derek; Hornschemeier, Ann E; Carter, David; Hudson, Michael J; Marzke, Ronald O; Mouhcine, Mustapha; Eftekharzadeh, Sareh; James, Phil; Khosroshahi, Habib; Kourkchi, Ehsan; Karick, Arna

    2010-01-01

    We have used new deep observations of the Coma cluster from GALEX to identify 13 star-forming galaxies with asymmetric morphologies in the ultraviolet. Aided by optical broad-band and H-alpha imaging, we interpret the asymmetric features as being due to star formation within gas stripped from the galaxies by interaction with the cluster environment. The selected objects display a range of structures from broad fan-shaped systems of filaments and knots (`jellyfish') to narrower and smoother tails extending up to 100 kpc in length. Some of the features have been discussed previously in the literature, while others are newly identified here. As an ensemble, the candidate stripping events are located closer to the cluster centre than other star-forming galaxies; their radial distribution is similar to that of all cluster members, dominated by passive galaxies. The fraction of blue galaxies which are undergoing stripping falls from 40% in the central 500 kpc, to less than 5% beyond 1 Mpc. We find that tails pointi...

  8. Methane production from sweet sorghum residues via a two-stage process

    Energy Technology Data Exchange (ETDEWEB)

    Stamatelatou, K.; Dravillas, K.; Lyberatos, G. [University of Patras (Greece). Department of Chemical Engineering, Laboratory of Biochemical Engineering and Environmental Technology

    2003-07-01

    The start-up of a two-stage reactor configuration for the anaerobic digestion of sweet sorghum residues was evaluated. The sweet sorghum residues were a waste stream originating from the alcoholic fermentation of sweet sorghum and the subsequent distillation step. This waste stream contained a high concentration of solid matter (9% TS) and thus could be characterized as a semi-solid, not easily biodegradable wastewater with high COD (115 g/l). The application of the proposed two-stage configuration (consisting of one thermophilic hydrolyser and one mesophilic methaniser) achieved a methane production of 16 l/l wastewater at a hydraulic retention time of 19 d. (author)

  9. One-stage and two-stage penile buccal mucosa urethroplasty

    Directory of Open Access Journals (Sweden)

    G. Barbagli

    2016-03-01

    Full Text Available The paper provides the reader with a detailed description of current techniques of one-stage and two-stage penile buccal mucosa urethroplasty, together with the preoperative patient evaluation, paying attention to the use of diagnostic tools. The one-stage penile urethroplasty using a buccal mucosa graft with the application of glue is first shown and discussed. Two-stage penile urethroplasty is then reported: a detailed description of first-stage urethroplasty according to the Johanson technique is given, followed by second-stage urethroplasty using a buccal mucosa graft and glue. Finally, the postoperative course and follow-up are addressed.

  10. Development of a linear compressor for two-stage pulse tube cryocoolers

    Institute of Scientific and Technical Information of China (English)

    Peng-da YAN; Wei-li GAO; Guo-bang CHEN

    2009-01-01

    A valveless linear compressor was built to drive a self-made two-stage pulse tube cryocooler. With a designed maximum swept volume of 60 cm³, the compressor can provide the cryocooler with a pressure-volume (PV) power of 400 W. Preliminary measurements of the compressor indicated that both an efficiency of 35%-55% and a pressure ratio of 1.3-1.4 could be obtained. The two-stage pulse tube cryocooler driven by this compressor achieved a lowest temperature of 14.2 K.

  11. Terephthalic acid wastewater treatment by using two-stage aerobic process

    Institute of Scientific and Technical Information of China (English)

    1999-01-01

    Based on comparative tests of anoxic and aerobic processes, a two-stage aerobic process with a biological selector was chosen to treat terephthalic acid (PTA) wastewater. By adopting the two-stage aerobic process, the CODCr in PTA wastewater could be reduced from 4000-6000 mg/L to below 100 mg/L; the COD loading in the first aerobic tank could reach 7.0-8.0 kg CODCr/(m³·d) and that of the second stage was from 0.2 to 0.4 kg CODCr/(m³·d). Further research on the kinetics of substrate degradation was carried out.

  12. First Law Analysis of a Two-stage Ejector-vapor Compression Refrigeration Cycle working with R404A

    National Research Council Canada - National Science Library

    Feiza Memet; Daniela-Elena Mitu

    2011-01-01

    The traditional two-stage vapor compression refrigeration cycle might be replaced by a two-stage ejector-vapor compression refrigeration cycle if the aim is to decrease irreversibility during expansion...

  13. Chandra X-ray Observations of the 0.6 < z < 1.1 Red-Sequence Cluster Survey Sample

    CERN Document Server

    Hicks, Amalia K; Bautz, Mark; Cain, Benjamin; Gilbank, David; Gladders, M D; Hoekstra, Henk; Yee, Howard; Garmire, Gordon

    2007-01-01

    We present the results of Chandra observations of 13 optically-selected clusters with 0.6 < z < 1.1, though 3 were not observed long enough to support detailed analysis. Surface brightness profiles are fit to beta-models. Integrated spectra are extracted within R(2500), and Tx and Lx information is obtained. We derive gas and total masses within R(2500) and R(500). Cosmologically corrected scaling relations are investigated, and we find the RCS clusters to be consistent with self-similar scaling expectations. However, discrepancies exist between the RCS sample and lower-z X-ray selected samples for relationships involving Lx, with the higher-z RCS clusters having lower Lx for a given Tx. In addition, we find that gas mass fractions within R(2500) for the high-z RCS sample are lower than expected by a factor of ~2. This suggests that the central entropy of these high-z objects has been elevated by processes such as pre-heating, mergers, and/or AGN outbursts, that their gas is still infalling, or that they contain compar...

  14. Sunyaev-Zel'dovich-Measured Pressure Profiles from the Bolocam X-ray/SZ Galaxy Cluster Sample

    CERN Document Server

    Sayers, Jack; Mantz, Adam; Golwala, Sunil R; Ameglio, Silvia; Downes, Tom P; Koch, Patrick M; Lin, Kai-Yang; Maughan, Ben J; Molnar, Sandor M; Moustakas, Leonidas; Mroczkowski, Tony; Pierpaoli, Elena; Shitanishi, Jennifer A; Siegel, Seth; Umetsu, Keiichi; Van der Pyl, Nina

    2012-01-01

    We describe Sunyaev-Zel'dovich (SZ) effect measurements and analysis of the intracluster medium (ICM) pressure profiles of a set of 45 massive galaxy clusters imaged using Bolocam at the Caltech Submillimeter Observatory. We have used masses determined from Chandra X-ray observations to scale each cluster's profile by the overdensity radius R500 and the mass-and-redshift-dependent normalization factor P500. We deproject the average pressure profile of our sample into 13 logarithmically spaced radial bins between 0.07R500 and 3.5R500. We find that a generalized Navarro, Frenk, and White (gNFW) profile describes our data with sufficient goodness-of-fit and best-fit parameters (C500, alpha, beta, gamma, P0 = 1.18, 0.86, 3.67, 0.67, 4.29). We also use the X-ray data to define cool-core and disturbed subsamples of clusters, and we constrain the average pressure profiles of each of these subsamples. We find that given the precision of our data the average pressure profiles of disturbed and cool-core clusters are co...

  15. Optical Emission Line Nebulae in Galaxy Cluster Cores 1: The Morphological, Kinematic and Spectral Properties of the Sample

    CERN Document Server

    Hamer, S L; Swinbank, A M; Wilman, R J; Combes, F; Salomé, P; Fabian, A C; Crawford, C S; Russell, H R; Hlavacek-Larrondo, J; McNamara, B; Bremer, M N

    2016-01-01

    We present an Integral Field Unit survey of 73 galaxy clusters and groups with the VIsible Multi Object Spectrograph (VIMOS) on VLT. We exploit the data to determine the H$\\alpha$ gas dynamics on kpc-scales to study the feedback processes occurring within the dense cluster cores. We determine the kinematic state of the ionised gas and show that the majority of systems ($\\sim$ 2/3) have relatively ordered velocity fields on kpc scales that are similar to the kinematics of rotating discs and are decoupled from the stellar kinematics of the Brightest Cluster Galaxy. The majority of the H$\\alpha$ flux ($>$ 50%) is typically associated with these ordered kinematics and most systems show relatively simple morphologies suggesting they have not been disturbed by a recent merger or interaction. Approximately 20% of the sample (13/73) have disturbed morphologies which can typically be attributed to AGN activity disrupting the gas. Only one system shows any evidence of an interaction with another cluster member. A spect...

  16. Binary Frequencies in a Sample of Globular Clusters. I. Methodology and Initial Results

    CERN Document Server

    Ji, Jun

    2013-01-01

    Binary stars are thought to be a controlling factor in globular cluster evolution, since they can heat the environmental stars by converting their binding energy to kinetic energy during dynamical interactions. Through such interactions, the binaries determine the time until core collapse. To test predictions of this model, we have determined binary fractions for 35 clusters. Here we present our methodology with a representative globular cluster, NGC 4590. We use HST archival ACS data in the F606W and F814W bands and apply PSF-fitting photometry to obtain high quality color-magnitude diagrams. We formulate the star superposition effect as a Poisson probability distribution function, with parameters optimized through Monte-Carlo simulations. A model-independent binary fraction of (6.2 ± 0.3)% is obtained by counting stars that extend to the red side of the residual color distribution after accounting for the photometric errors and the star superposition effect. A model-dependent binary fraction is obtained by c...
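
    The star-superposition effect mentioned above can be sketched as a Poisson process: if chance alignments within the blend area occur at rate lambda, the fraction of stars with at least one superposed neighbour (which can mimic binaries) is 1 - exp(-lambda). A toy Monte Carlo with assumed numbers, not the paper's:

        import numpy as np

        rng = np.random.default_rng(1)
        density = 0.05            # stars per pixel^2 (assumed)
        blend_area = 4.0          # pixels^2 within which two stars merge (assumed)
        lam = density * blend_area

        n_stars = 100_000
        k = rng.poisson(lam, n_stars)         # superposed neighbours per star
        frac_blended = np.mean(k >= 1)
        print(f"analytic blend fraction:  {1 - np.exp(-lam):.4f}")
        print(f"simulated blend fraction: {frac_blended:.4f}")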

  17. Overcoming the bottlenecks of anaerobic digestion of olive mill solid waste by two-stage fermentation.

    Science.gov (United States)

    Stoyanova, Elitza; Lundaa, Tserennyam; Bochmann, Günther; Fuchs, Werner

    2017-02-01

    Two-stage anaerobic digestion (AD) of two-phase olive mill solid waste (OMSW) was applied to reduce the inhibiting factors by optimizing the acidification stage. Single-stage AD and co-fermentation with chicken manure were conducted simultaneously for direct comparison. Degradation of the polyphenols up to 61% was observed during the methanogenic stage. Although the concentration of phenolic substances remained high, the two-stage fermentation stayed stable at an OLR of 1.5 kg VS/(m³·d). The buffer capacity of the system was twice as high as in the one-stage fermentation, without additives. The two-stage AD was a combined process - a thermophilic first stage and a mesophilic second stage - which proved the most profitable option for AD of OMSW: the hydraulic retention time (HRT) was reduced from 230 to 150 days, and start-up was three times faster than in the single-stage and co-fermentation runs. The optimal HRT and incubation temperature for the first stage were determined to be four days and 55°C. The stability of the two-stage AD was further examined by co-digestion of OMSW with chicken manure as a nitrogen-rich co-substrate, making these viable options for waste disposal with concomitant energy recovery.

  18. The Design, Construction and Operation of a 75 kW Two-Stage Gasifier

    DEFF Research Database (Denmark)

    Henriksen, Ulrik Birk; Ahrenfeldt, Jesper; Jensen, Torben Kvist

    2003-01-01

    The Two-Stage Gasifier was operated for several weeks (465 hours) and of these 190 hours continuously. The gasifier is operated automatically unattended day and night, and only small adjustments of the feeding rate were necessary once or twice a day. The operation was successful, and the output a...... of the reactor had to be constructed in some other material....

  19. Treatment of corn ethanol distillery wastewater using two-stage anaerobic digestion.

    Science.gov (United States)

    Ráduly, B; Gyenge, L; Szilveszter, Sz; Kedves, A; Crognale, S

    In this study the mesophilic two-stage anaerobic digestion (AD) of corn bioethanol distillery wastewater is investigated in laboratory-scale reactors. Two-stage AD technology separates the different sub-processes of the AD in two distinct reactors, enabling the use of optimal conditions for the different microbial consortia involved in the different process phases, and thus allowing for higher applicable organic loading rates (OLRs), shorter hydraulic retention times (HRTs) and better conversion rates of the organic matter, as well as higher methane content of the produced biogas. In our experiments the reactors have been operated in semi-continuous phase-separated mode. A specific methane production of 1,092 mL/(L·d) has been reached at an OLR of 6.5 g TCOD/(L·d) (TCOD: total chemical oxygen demand) and a total HRT of 21 days (5.7 days in the first-stage, and 15.3 days in the second-stage reactor). Nonetheless, although the methane concentration in the second-stage reactor was very high (78.9%), the two-stage AD outperformed the reference single-stage AD (conducted at the same reactor loading rate and retention time) by only a small margin in terms of volumetric methane production rate. This makes it questionable whether the higher methane content of the biogas counterbalances the added complexity of the two-stage digestion.

  20. A two-stage ethanol-based biodiesel production in a packed bed reactor

    DEFF Research Database (Denmark)

    Xu, Yuan; Nordblad, Mathias; Woodley, John

    2012-01-01

    A two-stage enzymatic process for producing fatty acid ethyl ester (FAEE) in a packed bed reactor is reported. The process uses an experimental immobilized lipase (NS 88001) and Novozym 435 to catalyze transesterification (first stage) and esterification (second stage), respectively. Both stages...

  1. Two-Stage MAS Technique for Analysis of DRA Elements and Arrays on Finite Ground Planes

    DEFF Research Database (Denmark)

    Larsen, Niels Vesterdal; Breinbjerg, Olav

    2007-01-01

    A two-stage Method of Auxiliary Sources (MAS) technique is proposed for analysis of dielectric resonator antenna (DRA) elements and arrays on finite ground planes (FGPs). The problem is solved by first analysing the DRA on an infinite ground plane (IGP) and then using this solution to model the FGP...... problem....

  2. Use a Log Splitter to Demonstrate Two-Stage Hydraulic Pump

    Science.gov (United States)

    Dell, Timothy W.

    2012-01-01

    The two-stage hydraulic pump is commonly used in many high school and college courses to demonstrate hydraulic systems. Unfortunately, many textbooks do not provide a good explanation of how the technology works. Another challenge that instructors run into with teaching hydraulic systems is the cost of procuring an expensive real-world machine…

  3. Some design aspects of a two-stage rail-to-rail CMOS op amp

    NARCIS (Netherlands)

    Gierkink, S.L.J.; Holzmann, Peter J.; Wiegerink, R.J.; Wassenaar, R.F.

    1999-01-01

    A two-stage low-voltage CMOS op amp with rail-to-rail input and output voltage ranges is presented. The circuit uses complementary differential input pairs to achieve the rail-to-rail common-mode input voltage range. The differential pairs operate in strong inversion, and the constant transconductance...

  4. Capacity Analysis of Two-Stage Production lines with Many Products

    NARCIS (Netherlands)

    M.B.M. de Koster (René)

    1987-01-01

    We consider two-stage production lines with an intermediate buffer. A buffer is needed when fluctuations occur. For single-product production lines, fluctuations in capacity availability may be caused by random processing times, failures and random repair times. For multi-product production...

  5. Kinetics analysis of two-stage austenitization in supermartensitic stainless steel

    DEFF Research Database (Denmark)

    Nießen, Frank; Villa, Matteo; Hald, John

    2017-01-01

    The martensite-to-austenite transformation in X4CrNiMo16-5-1 supermartensitic stainless steel was followed in-situ during isochronal heating at 2, 6 and 18 K min−1 applying energy-dispersive synchrotron X-ray diffraction at the BESSY II facility. Austenitization occurred in two stages, separated...

  6. An intracooling system for a novel two-stage sliding-vane air compressor

    Science.gov (United States)

    Murgia, Stefano; Valenti, Gianluca; Costanzo, Ida; Colletta, Daniele; Contaldi, Giulio

    2017-08-01

    Lube-oil injection is used in positive-displacement compressors and, among them, in sliding-vane machines to guarantee the correct lubrication of the moving parts and as a seal to prevent air leakage. Furthermore, lube-oil injection allows the lubricant to be exploited as a thermal ballast with a great thermal capacity to minimize the temperature increase during compression. This study presents the design of a two-stage sliding-vane rotary compressor in which the air cooling is performed by high-pressure cold oil injection into a connection duct between the two stages. The heat exchange between the atomized oil jet and the air results in a decrease of the air temperature before the second stage, improving the overall system efficiency. This cooling system is named here intracooling, as opposed to intercooling. The oil injection is realized via pressure-swirl nozzles, both within the compressors and inside the intracooling duct. The design of the two-stage sliding-vane compressor is accomplished by way of a lumped parameter model. The model predicts an input power reduction as large as 10% for intercooled and intracooled two-stage compressors, the latter being slightly better, with respect to a conventional single-stage compressor for compressed air applications. An experimental campaign is conducted on a first prototype that comprises the low-pressure compressor and the intracooling duct, indicating that a significant temperature reduction is achieved in the duct.
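
    The benefit of cooling between (or within) stages can be previewed with the textbook ideal-gas result, where the optimal intermediate pressure for perfect interstage cooling is sqrt(p1*p2). The sketch below is a toy stand-in for the paper's lumped-parameter model; all values are assumed:

        import numpy as np

        # Ideal-gas comparison of single-stage vs two-stage compression with
        # the inter-stage flow cooled back to the inlet temperature.
        k, R, T1 = 1.4, 287.0, 293.15      # air
        p1, p2 = 1.0e5, 8.0e5              # Pa (assumed duty)

        def isentropic_work(p_in, p_out, T_in):
            """Specific compression work, J/kg."""
            return k/(k-1) * R * T_in * ((p_out/p_in)**((k-1)/k) - 1.0)

        w_single = isentropic_work(p1, p2, T1)
        p_mid = np.sqrt(p1 * p2)           # classic optimum for perfect cooling
        w_two = isentropic_work(p1, p_mid, T1) + isentropic_work(p_mid, p2, T1)
        print(f"single stage: {w_single/1e3:.1f} kJ/kg")
        print(f"two stage:    {w_two/1e3:.1f} kJ/kg "
              f"({100*(1 - w_two/w_single):.1f}% less input work)")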

  7. Development of a heavy-duty diesel engine with two-stage turbocharging

    NARCIS (Netherlands)

    Sturm, L.; Kruithof, J.

    2001-01-01

    A mean value model was developed using the Matrixx/SystemBuild simulation tool for designing real-time control algorithms for the two-stage engine. All desired characteristics are achieved, apart from a lower A/F ratio at lower engine speeds and the turbocharger matching calculations. The CANbus is used to...

  8. Two-stage, dilute sulfuric acid hydrolysis of wood : an investigation of fundamentals

    Science.gov (United States)

    John F. Harris; Andrew J. Baker; Anthony H. Conner; Thomas W. Jeffries; James L. Minor; Roger C. Pettersen; Ralph W. Scott; Edward L Springer; Theodore H. Wegner; John I. Zerbe

    1985-01-01

    This paper presents a fundamental analysis of the processing steps in the production of methanol from southern red oak (Quercus falcata Michx.) by two-stage dilute sulfuric acid hydrolysis. Data for hemicellulose and cellulose hydrolysis are correlated using models. This information is used to develop and evaluate a process design.

  9. Two-stage data envelopment analysis technique for evaluating internal supply chain efficiency

    Directory of Open Access Journals (Sweden)

    Nisakorn Somsuk

    2014-12-01

    Full Text Available A two-stage data envelopment analysis (DEA), which uses mathematical linear programming techniques, is applied to evaluate the efficiency of a system composed of two relational sub-processes, in which the outputs from the first sub-process (the intermediate outputs of the system) are the inputs for the second sub-process. The relative efficiencies of the system and its sub-processes can be measured by applying the two-stage DEA. According to the literature review on supply chain management, this technique can be used as a tool for evaluating the efficiency of a supply chain composed of two relational sub-processes. The technique can help to determine the inefficient sub-processes. Once the efficiency of an inefficient sub-process is improved, the aggregate efficiency of the supply chain improves as well. This paper aims to present a procedure for evaluating the efficiency of the supply chain by using the two-stage DEA under the assumption of constant returns to scale, illustrated with an example of internal supply chain efficiency measurement of insurance companies. Moreover, the authors also present some observations on the application of this technique.
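
    A minimal sketch of the underlying linear programs (input-oriented, constant returns to scale, envelopment form), run once per sub-process so that the first stage's outputs serve as the second stage's inputs; the three-DMU data set below is invented for illustration:

        import numpy as np
        from scipy.optimize import linprog

        # CCR efficiency of DMU j0:
        #   min theta  s.t.  X @ lam <= theta * x0,  Y @ lam >= y0,  lam >= 0
        def dea_efficiency(X, Y, j0):
            n = X.shape[1]                        # number of DMUs
            c = np.r_[1.0, np.zeros(n)]           # minimise theta
            A_in = np.hstack([-X[:, [j0]], X])    # X@lam - theta*x0 <= 0
            A_out = np.hstack([np.zeros((Y.shape[0], 1)), -Y])  # -Y@lam <= -y0
            res = linprog(c,
                          A_ub=np.vstack([A_in, A_out]),
                          b_ub=np.r_[np.zeros(X.shape[0]), -Y[:, j0]],
                          bounds=[(None, None)] + [(0, None)] * n)
            return res.x[0]

        X1 = np.array([[2., 3., 4.], [1., 2., 1.]])   # stage-1 inputs (2 x 3 DMUs)
        Z  = np.array([[1., 2., 1.5]])                # intermediate outputs/inputs
        Y2 = np.array([[3., 5., 4.]])                 # stage-2 final outputs
        for j in range(3):
            print(f"DMU {j}: stage1={dea_efficiency(X1, Z, j):.3f}, "
                  f"stage2={dea_efficiency(Z, Y2, j):.3f}")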

  10. Two-stage estimation in copula models used in family studies

    DEFF Research Database (Denmark)

    Andersen, Elisabeth Anne Wreford

    2005-01-01

    In this paper register based family studies provide the motivation for studying a two-stage estimation procedure in copula models for multivariate failure time data. The asymptotic properties of the estimators in both parametric and semi-parametric models are derived, generalising the approach by...

  11. Innovative two-stage anaerobic process for effective codigestion of cheese whey and cattle manure.

    Science.gov (United States)

    Bertin, Lorenzo; Grilli, Selene; Spagni, Alessandro; Fava, Fabio

    2013-01-01

    The valorisation of agroindustrial waste through anaerobic digestion represents a significant opportunity for refuse treatment and renewable energy production. This study aimed to improve the codigestion of cheese whey (CW) and cattle manure (CM) by an innovative two-stage process, based on concentric acidogenic and methanogenic phases, designed for enhancing performance and reducing footprint. The optimum CW to CM ratio was evaluated under batch conditions. Thereafter, codigestion was implemented under continuous-flow conditions comparing one- and two-stage processes. The results demonstrated that the addition of CM in codigestion with CW greatly improved the anaerobic process. The highest methane yield was obtained co-treating the two substrates at equal ratio by using the innovative two-stage process. The proposed system reached a maximum value of 258 mL(CH4) g(VS)(-1), which was more than twice the value obtained by the one-stage process and 10% higher than the value obtained by the two-stage one.

  12. Extraoral implants for orbit rehabilitation: a comparison between one-stage and two-stage surgeries.

    Science.gov (United States)

    de Mello, M C L M P; Guedes, R; de Oliveira, J A P; Pecorari, V A; Abrahão, M; Dib, L L

    2014-03-01

    The aim of the study was to compare the osseointegration success rate and time for delivery of the prosthesis among cases treated by two-stage or one-stage surgery for orbit rehabilitation between 2003 and 2011. Forty-five patients were included, 31 males and 14 females; 22 patients had two-stage surgery and 23 patients had one-stage surgery. A total 138 implants were installed, 42 (30.4%) on previously irradiated bone. The implant survival rate was 96.4%, with a success rate of 99.0% among non-irradiated patients and 90.5% among irradiated patients. Two-stage patients received 74 implants with a survival rate of 94.6% (four implants lost); one-stage surgery patients received 64 implants with a survival rate of 98.4% (one implant lost). The median time interval between implant fixation and delivery of the prosthesis for the two-stage group was 9.6 months and for the one-stage group was 4.0 months (P < 0.001). The one-stage technique proved to be reliable and was associated with few risks and complications; the rate of successful osseointegration was similar to those reported in the literature. The one-stage technique should be considered a viable procedure that shortens the time to final rehabilitation and facilitates appropriate patient follow-up treatment.

  13. Validation of Continuous CHP Operation of a Two-Stage Biomass Gasifier

    DEFF Research Database (Denmark)

    Ahrenfeldt, Jesper; Henriksen, Ulrik Birk; Jensen, Torben Kvist

    2006-01-01

    The Viking gasification plant at the Technical University of Denmark was built to demonstrate a continuous combined heat and power operation of a two-stage gasifier fueled with wood chips. The nominal input of the gasifier is 75 kW thermal. To validate the continuous operation of the plant, a 9-d...

  14. High rate treatment of terephthalic acid production wastewater in a two-stage anaerobic bioreactor

    NARCIS (Netherlands)

    Kleerebezem, R.; Beckers, J.; Pol, L.W.H.; Lettinga, G.

    2005-01-01

    The feasibility of anaerobic treatment of wastewater generated during purified terephthalic acid (PTA) production was studied in a two-stage upflow anaerobic sludge blanket (UASB) reactor system. The artificial influent of the system contained the main organic substrates of PTA wastewater: acetate, be...

  15. Thermal design of two-stage evaporative cooler based on thermal comfort criterion

    Science.gov (United States)

    Gilani, Neda; Poshtiri, Amin Haghighi

    2017-04-01

    Performance of two-stage evaporative coolers at various outdoor air conditions was numerically studied, and its geometric and physical characteristics were obtained based on thermal comfort criteria. For this purpose, a mathematical model was developed based on conservation equations of mass, momentum and energy to determine heat and mass transfer characteristics of the system. The results showed that two-stage indirect/direct cooler can provide the thermal comfort condition when outdoor air temperature and relative humidity are located in the range of 34-54 °C and 10-60 %, respectively. Moreover, as relative humidity of the ambient air rises, two-stage evaporative cooler with the smaller direct and larger indirect cooler will be needed. In building with high cooling demand, thermal comfort may be achieved at a greater air change per hour number, and thus an expensive two-stage evaporative cooler with a higher electricity consumption would be required. Finally, a design guideline was proposed to determine the size of required plate heat exchangers at various operating conditions.
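
    The indirect-then-direct arrangement can be sketched with a simple effectiveness model: the indirect stage cools at constant humidity ratio towards the inlet wet-bulb temperature, and the direct stage then cools towards the (lower) wet-bulb of the pre-cooled air. The psychrometric relations below are standard approximations; the effectiveness values and inlet state are assumed, not the paper's optimised design:

        import numpy as np

        P = 101325.0                                  # Pa

        def p_ws(T):                                  # saturation pressure (Magnus), Pa
            return 610.94 * np.exp(17.625 * T / (T + 243.04))

        def hum_ratio(T, rh):                         # kg water / kg dry air
            pv = rh / 100.0 * p_ws(T)
            return 0.622 * pv / (P - pv)

        def wet_bulb(T, W):                           # ASHRAE-style relation, bisection
            lo, hi = -20.0, T
            for _ in range(60):
                Twb = 0.5 * (lo + hi)
                Ws = hum_ratio(Twb, 100.0)
                W_calc = (((2501 - 2.326*Twb) * Ws - 1.006 * (T - Twb))
                          / (2501 + 1.86*T - 4.186*Twb))
                lo, hi = (Twb, hi) if W_calc < W else (lo, Twb)
            return Twb

        T1, rh1 = 40.0, 30.0                          # outdoor air (assumed)
        eps_ind, eps_dir = 0.6, 0.9                   # stage effectivenesses (assumed)

        W1 = hum_ratio(T1, rh1)
        Twb1 = wet_bulb(T1, W1)
        T2 = T1 - eps_ind * (T1 - Twb1)               # indirect stage: W constant
        Twb2 = wet_bulb(T2, W1)
        T3 = T2 - eps_dir * (T2 - Twb2)               # direct stage: towards Twb2
        print(f"Twb,in = {Twb1:.1f} °C, after indirect = {T2:.1f} °C, "
              f"supply = {T3:.1f} °C")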

  16. ADM1-based modeling of methane production from acidified sweet sorghum extract in a two-stage process

    DEFF Research Database (Denmark)

    Antonopoulou, Georgia; Gavala, Hariklia N.; Skiadas, Ioannis

    2012-01-01

    The present study focused on the application of the Anaerobic Digestion Model 1 on the methane production from acidified sorghum extract generated from a hydrogen-producing bioreactor in a two-stage anaerobic process. The kinetic parameters for hydrogen and volatile fatty acids consumption were...

  18. A Two-Stage Exercise on the Binomial Distribution Using Minitab.

    Science.gov (United States)

    Shibli, M. Abdullah

    1990-01-01

    Describes a two-stage experiment that was designed to explain binomial distribution to undergraduate statistics students. A manual coin flipping exercise is explained as the first stage; a computerized simulation using MINITAB software is presented as stage two; and output from the MINITAB exercises is included. (two references) (LRW)

  19. The rearrangement process in a two-stage broadcast switching network

    DEFF Research Database (Denmark)

    Jacobsen, Søren B.

    1988-01-01

    The rearrangement process in the two-stage broadcast switching network presented by F.K. Hwang and G.W. Richards (ibid., vol.COM-33, no.10, p.1025-1035, Oct. 1985) is considered. By defining a certain function it is possible to calculate an upper bound on the number of connections to be moved...

  20. Two-stage laparoscopic resection of colon cancer and metastatic liver tumour

    Directory of Open Access Journals (Sweden)

    Yukio Iwashita

    2012-01-01

    Full Text Available We report herein the case of a 70-year-old woman in whom colon cancer and a synchronous metastatic liver tumour were successfully resected laparoscopically. The tumours were treated in two stages. Both post-operative courses were uneventful, and there has been no recurrence during the 8 months since the second procedure.

  1. Two-stage laparoscopic resection of colon cancer and metastatic liver tumour

    Directory of Open Access Journals (Sweden)

    Iwashita Yukio

    2005-01-01

    Full Text Available We report herein the case of a 70-year-old woman in whom colon cancer and a synchronous metastatic liver tumour were successfully resected laparoscopically. The tumours were treated in two stages. Both postoperative courses were uneventful, and there has been no recurrence during the 8 months since the second procedure.

  2. Two-stage bargaining with coverage extension in a dual labour market

    DEFF Research Database (Denmark)

    Roberts, Mark A.; Stæhr, Karsten; Tranæs, Torben

    2000-01-01

    This paper studies coverage extension in a simple general equilibrium model with a dual labour market. The union sector is characterized by two-stage bargaining whereas the firms set wages in the non-union sector. In this model firms and unions of the union sector have a commonality of interest...

  3. Genetic modifiers of Hb E/β0 thalassemia identified by a two-stage genome-wide association study

    Directory of Open Access Journals (Sweden)

    Winichagoon Pranee

    2010-03-01

    Full Text Available Abstract. Background: Patients with Hb E/β0 thalassemia display remarkable variability in disease severity. To identify genetic modifiers influencing disease severity, we conducted a two-stage genome scan in groups of 207 mild and 305 severe unrelated patients from Thailand with Hb E/β0 thalassemia and normal α-globin genes. Methods: First, we estimated and compared the allele frequencies of approximately 110,000 gene-based single nucleotide polymorphisms (SNPs) in pooled DNAs from the different severity groups. The 756 SNPs that showed reproducible allelic differences at P ... were then individually genotyped in the second stage. Results: After adjustment for age, gender and geographic region, logistic regression models showed 50 SNPs significantly associated with disease severity (P ...); the strongest association reached P = 2.6 × 10-13. Seven SNPs in two distinct LD blocks within a region centromeric to the β-globin gene cluster that contains many olfactory receptor genes were also associated with disease severity; rs3886223 had the strongest association (OR = 3.03, P = 3.7 × 10-11). Several previously unreported SNPs were also significantly associated with disease severity. Conclusions: These results suggest that there may be an additional regulatory region centromeric to the β-globin gene cluster that affects disease severity by modulating fetal hemoglobin expression.
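
    The two-stage logic (a cheap pooled allele-frequency screen, then individual genotyping with covariate adjustment) can be caricatured as follows. All data are simulated; because the stage-2 genotypes are re-drawn at random, the confirmation stage here is null by construction and serves only to show the mechanics:

        import numpy as np
        from scipy.stats import chi2_contingency
        import statsmodels.api as sm

        rng = np.random.default_rng(2)
        n_snps, n_mild, n_severe = 200, 207, 305

        # Stage 1: compare pooled allele counts per SNP between groups.
        keep = []
        for s in range(n_snps):
            p_mild, p_sev = 0.3, 0.3 + (0.12 if s < 5 else 0.0)  # 5 true signals
            a_mild = rng.binomial(2 * n_mild, p_mild)
            a_sev = rng.binomial(2 * n_severe, p_sev)
            table = [[a_mild, 2*n_mild - a_mild], [a_sev, 2*n_severe - a_sev]]
            if chi2_contingency(table)[1] < 0.01:
                keep.append(s)

        # Stage 2: individual genotypes, logistic model with covariates.
        y = np.r_[np.zeros(n_mild), np.ones(n_severe)]
        for s in keep:
            g = rng.binomial(2, 0.3, n_mild + n_severe)   # genotype 0/1/2
            age = rng.normal(30, 10, n_mild + n_severe)
            sex = rng.integers(0, 2, n_mild + n_severe)
            X = sm.add_constant(np.column_stack([g, age, sex]))
            fit = sm.Logit(y, X).fit(disp=0)
            print(f"SNP {s}: OR={np.exp(fit.params[1]):.2f}, "
                  f"p={fit.pvalues[1]:.3g}")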

  4. A two-stage short-term traffic flow prediction method based on AVL and AKNN techniques

    Institute of Scientific and Technical Information of China (English)

    孟梦; 王博彬; 邵春福; 李慧轩; 黃育兆

    2015-01-01

    Short-term traffic flow prediction is one of the essential issues in intelligent transportation systems (ITS). A new two-stage traffic flow prediction method named the AKNN-AVL method is presented, which combines an advanced k-nearest neighbor (AKNN) method and a balanced binary tree (AVL) data structure to improve the prediction accuracy. The AKNN method uses pattern recognition twice in the searching process, considering the previous sequences of traffic flow to forecast the future traffic state. A clustering method and the balanced binary tree technique are introduced to build the case database and reduce the searching time. To illustrate the effects of these developments, the prediction accuracies of the AKNN-AVL method, the k-nearest neighbor (KNN) method and the auto-regressive moving average (ARMA) method are compared. These methods are calibrated and evaluated with real-time data from a freeway traffic detector near the North 3rd Ring Road in Beijing under both normal and incident traffic conditions. The comparisons show that the AKNN-AVL method with the optimal neighbor and pattern size outperforms both the KNN method and the ARMA method under both normal and incident traffic conditions. In addition, combining the clustering method and the balanced binary tree technique with the prediction method increases the searching speed and allows rapid response to case database fluctuations.
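
    A bare-bones distance-weighted KNN forecaster conveys the core of this family of methods (plain linear search over historical windows; the AVL-tree indexing and adaptive pattern sizing of AKNN-AVL are not reproduced here, and the traffic series is synthetic):

        import numpy as np

        def knn_forecast(history, pattern, k=5):
            m = len(pattern)
            # All historical windows of length m, and the value following each.
            windows = np.lib.stride_tricks.sliding_window_view(history[:-1], m)
            nxt = history[m:]
            d = np.linalg.norm(windows - pattern, axis=1)
            idx = np.argsort(d)[:k]
            w = 1.0 / (d[idx] + 1e-9)              # distance-weighted average
            return np.sum(w * nxt[idx]) / np.sum(w)

        rng = np.random.default_rng(3)
        t = np.arange(2000)
        flow = 600 + 300*np.sin(2*np.pi*t/288) + rng.normal(0, 20, t.size)
        pred = knn_forecast(flow[:-1], flow[-6:-1])
        print(f"forecast {pred:.0f} vs actual {flow[-1]:.0f} veh/interval")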

  5. The Bracka two-stage repair for severe proximal hypospadias: A single center experience

    Directory of Open Access Journals (Sweden)

    Rakesh S Joshi

    2015-01-01

    Full Text Available Background: Surgical correction of severe proximal hypospadias represents a significant surgical challenge, and single-stage corrections are often associated with complications and reoperations. The Bracka two-stage repair is an attractive alternative surgical procedure with superior, reliable, and reproducible results. Purpose: To study the feasibility and applicability of the Bracka two-stage repair for severe proximal hypospadias and to analyze the outcomes and complications of this surgical technique. Materials and Methods: This prospective study was conducted from January 2011 to December 2013. Bracka two-stage repair was performed using inner preputial skin as a free graft in subjects with proximal hypospadias in whom a severe degree of chordee and/or a poor urethral plate was present. Only primary cases were included in this study. All subjects received three doses of intramuscular testosterone 3 weeks apart before the first stage. The second stage was performed 6 months after the first stage. Follow-up ranged from 6 months to 24 months. Results: A total of 43 patients were operated on with the Bracka repair, of whom 30 completed the two-stage repair. The mean age of the patients was 4 years and 8 months. We achieved 100% graft uptake and no revision was required. Three patients developed a fistula, while two had meatal stenosis. Glans dehiscence, urethral stricture and residual chordee were not found during follow-up, and satisfactory cosmetic results with a good urinary stream were achieved in all cases. Conclusion: The Bracka two-stage repair is a safe and reliable approach in select patients in whom it is impractical to maintain the axial integrity of the urethral plate, so that a full-circumference urethral reconstruction becomes necessary. It gives good results in terms of restoration of normal function with minimal complications.

  6. Optimisation of two-stage screw expanders for waste heat recovery applications

    Science.gov (United States)

    Read, M. G.; Smith, I. K.; Stosic, N.

    2015-08-01

    It has previously been shown that the use of two-phase screw expanders in power generation cycles can achieve an increase in the utilisation of available energy from a low temperature heat source when compared with more conventional single-phase turbines. However, screw expander efficiencies are more sensitive to expansion volume ratio than turbines, and this increases as the expander inlet vapour dryness fraction decreases. For single-stage screw machines with low inlet dryness, this can lead to under-expansion of the working fluid and low isentropic efficiency for the expansion process. The performance of the cycle can potentially be improved by using a two-stage expander, consisting of a low pressure machine and a smaller high pressure machine connected in series. By expanding the working fluid over two stages, the built-in volume ratios of the two machines can be selected to provide a better match with the overall expansion process, thereby increasing efficiency for particular inlet and discharge conditions. The mass flow rate through both stages must however be matched, and the compromise between increasing efficiency and maximising power output must also be considered. This research uses a rigorous thermodynamic screw machine model to compare the performance of single and two-stage expanders over a range of operating conditions. The model allows optimisation of the required intermediate pressure in the two-stage expander, along with the rotational speed and built-in volume ratio of both screw machine stages. The results allow the two-stage machine to be fully specified in order to achieve maximum efficiency for a required power output.
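
    The volume-ratio matching argument can be illustrated with an isentropic ideal-gas toy model: splitting one large expansion across two stages turns a single large built-in volume ratio into two modest ones. The fluid properties and pressures below are assumed, and this is only a stand-in for the paper's rigorous screw-machine model:

        import numpy as np

        k = 1.10                        # isentropic exponent of a wet fluid (assumed)
        p_in, p_out = 10.0e5, 1.0e5     # Pa (assumed duty)

        def builtin_volume_ratio(p_hi, p_lo):
            # Volume ratio for full isentropic expansion from p_hi to p_lo.
            return (p_hi / p_lo) ** (1.0 / k)

        v_single = builtin_volume_ratio(p_in, p_out)
        for p_mid in [2.0e5, 3.16e5, 5.0e5]:
            v1 = builtin_volume_ratio(p_in, p_mid)    # high-pressure stage
            v2 = builtin_volume_ratio(p_mid, p_out)   # low-pressure stage
            print(f"p_mid = {p_mid/1e5:.2f} bar: v1 = {v1:.2f}, v2 = {v2:.2f} "
                  f"(single stage would need {v_single:.2f})")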

  7. A two-stage procedure for determining unsaturated hydraulic characteristics using a syringe pump and outflow observations

    DEFF Research Database (Denmark)

    Wildenschild, Dorthe; Jensen, Karsten Høgh; Hollenbeck, Karl-Josef;

    1997-01-01

    A fast two-stage methodology for determining unsaturated flow characteristics is presented. The procedure builds on direct measurement of the retention characteristic using a syringe pump technique, combined with inverse estimation of the hydraulic conductivity characteristic based on one-step outflow experiments. The direct measurements are obtained with a commercial syringe pump, which continuously withdraws fluid from a soil sample at a very low and accurate flow rate, thus providing the water content in the soil sample. The retention curve is then established by simultaneously monitoring... The one-step outflow data and the independently measured retention data are included in the objective function of a traditional least-squares minimization routine, providing unique estimates of the unsaturated hydraulic characteristics by means of numerical inversion of Richards' equation. As opposed to what is often...
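
    The first (direct) stage amounts to fitting a retention curve to the syringe-pump data. A sketch using the common van Genuchten parameterisation on synthetic (h, theta) pairs; the parameterisation is an assumption here, and the paper's second stage, the numerical inversion of Richards' equation for the conductivity, is not shown:

        import numpy as np
        from scipy.optimize import least_squares

        def vg_theta(h, theta_r, theta_s, alpha, n):
            # van Genuchten water retention curve, theta(h).
            m = 1.0 - 1.0 / n
            return theta_r + (theta_s - theta_r) / (1.0 + (alpha * np.abs(h))**n)**m

        rng = np.random.default_rng(4)
        h = np.logspace(0, 3, 25)                     # suction, cm
        true = (0.05, 0.40, 0.03, 1.8)                # "unknown" soil (synthetic)
        theta = vg_theta(h, *true) + rng.normal(0, 0.005, h.size)

        res = least_squares(lambda p: vg_theta(h, *p) - theta,
                            x0=[0.1, 0.35, 0.01, 1.5],
                            bounds=([0, 0.2, 1e-4, 1.01], [0.2, 0.6, 1.0, 10.0]))
        print("theta_r, theta_s, alpha, n =", np.round(res.x, 3))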

  8. A two-stage adaptive stochastic collocation method on nested sparse grids for multiphase flow in randomly heterogeneous porous media

    Science.gov (United States)

    Liao, Qinzhuo; Zhang, Dongxiao; Tchelepi, Hamdi

    2017-02-01

    A new computational method is proposed for efficient uncertainty quantification of multiphase flow in porous media with stochastic permeability. For pressure estimation, it combines the dimension-adaptive stochastic collocation method on Smolyak sparse grids and the Kronrod-Patterson-Hermite nested quadrature formulas. For saturation estimation, an additional stage is developed, in which the pressure and velocity samples are first generated by the sparse grid interpolation and then substituted into the transport equation to solve for the saturation samples, to address the low regularity problem of the saturation. Numerical examples are presented for multiphase flow with stochastic permeability fields to demonstrate accuracy and efficiency of the proposed two-stage adaptive stochastic collocation method on nested sparse grids.

  9. Understanding the cluster randomised crossover design: a graphical illustration of the components of variation and a sample size tutorial.

    Science.gov (United States)

    Arnup, Sarah J; McKenzie, Joanne E; Hemming, Karla; Pilcher, David; Forbes, Andrew B

    2017-08-15

    In a cluster randomised crossover (CRXO) design, a sequence of interventions is assigned to a group, or 'cluster', of individuals. Each cluster receives each intervention in a separate period of time, forming 'cluster-periods'. Sample size calculations for CRXO trials need to account for both the cluster randomisation and crossover aspects of the design. Formulae are available for the two-period, two-intervention, cross-sectional CRXO design; however, implementation of these formulae is known to be suboptimal. The aims of this tutorial are to illustrate the intuition behind the design and to provide guidance on performing sample size calculations. Graphical illustrations are used to describe the effect of the cluster randomisation and crossover aspects of the design on the correlation between individual responses in a CRXO trial. Sample size calculations for binary and continuous outcomes are illustrated using parameters estimated from the Australia and New Zealand Intensive Care Society - Adult Patient Database (ANZICS-APD) for patient mortality and length of stay (LOS). The similarity between individual responses in a CRXO trial can be understood in terms of three components of variation: variation in cluster mean response; variation in the cluster-period mean response; and variation between individual responses within a cluster-period; or equivalently in terms of the correlation between individual responses in the same cluster-period (within-cluster within-period correlation, WPC), and between individual responses in the same cluster but in different periods (within-cluster between-period correlation, BPC). The BPC lies between zero and the WPC. When the WPC and BPC are equal, the precision gained by the crossover aspect of the CRXO design equals the precision lost by cluster randomisation. When the BPC is zero there is no advantage in a CRXO over a parallel-group cluster randomised trial. Sample size calculations illustrate that small changes in the specification of
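
    The components of variation described above combine into a compact design effect for the two-period, two-intervention cross-sectional design: relative to individual randomisation, the variance of the treatment effect is inflated by 1 + (m - 1)·WPC - m·BPC, where m is the cluster-period size. This is one standard formulation, shown here with a normal-approximation sample size and purely illustrative inputs:

        import math
        from scipy.stats import norm

        def individually_randomised_n(delta, sd, alpha=0.05, power=0.8):
            # Total N for a two-arm comparison of means.
            za, zb = norm.ppf(1 - alpha / 2), norm.ppf(power)
            return 4 * (za + zb)**2 * sd**2 / delta**2

        def crxo_total_n(delta, sd, m, wpc, bpc, **kw):
            # Design effect for a two-period cross-sectional CRXO trial.
            design_effect = 1 + (m - 1) * wpc - m * bpc
            return math.ceil(individually_randomised_n(delta, sd, **kw)
                             * design_effect)

        # e.g. a 2-day LOS difference, SD 10 days, 50 patients per cluster-period
        print(crxo_total_n(delta=2.0, sd=10.0, m=50, wpc=0.05, bpc=0.025))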

  10. Health and human rights in eastern Myanmar after the political transition: a population-based assessment using multistaged household cluster sampling.

    Directory of Open Access Journals (Sweden)

    Parveen Kaur Parmar

    Full Text Available Myanmar transitioned to a nominally civilian parliamentary government in March 2011. Qualitative reports suggest that exposure to violence and displacement has declined while international assistance for health services has increased. An assessment of the impact of these changes on the health and human rights situation has not been published. Five community-based organizations conducted household surveys using two-stage cluster sampling in five states in eastern Myanmar from July 2013-September 2013. Data was collected from 6,178 households on demographics, mortality, health outcomes, water and sanitation, food security and nutrition, malaria, and human rights violations (HRV). Among children aged 6-59 months screened, the prevalence of global acute malnutrition (representing moderate or severe malnutrition) was 11.3% (95% CI 8.0-14.7). A total of 250 deaths occurred during the year prior to the survey. Infant deaths accounted for 64 of these (IMR 94.2; 95% CI 66.5-133.5) and there were 94 child deaths (U5MR 141.9; 95% CI 94.8-189.0). 10.7% of households (95% CI 7.0-14.5) experienced at least one HRV in the past year, while four percent reported 2 or more HRVs. Household exposure to one or more HRVs was associated with moderate-severe malnutrition among children (14.9% vs. 6.8%; prevalence ratio 2.2, 95% CI 1.2-4.2). Household exposure to HRVs was associated with self-reported fair or poor health status among respondents (PR 1.3; 95% CI 1.1-1.5). This large survey of health and human rights demonstrates that two years after political transition, vulnerable populations of eastern Myanmar are less likely to experience human rights violations compared to previous surveys. However, access to health services remains constrained, and risk of disease and death remains higher than in the country as a whole. Efforts to address these poor health indicators should prioritize support for populations that remain outside the scope of most formal government and donor programs.

  11. SUNYAEV-ZEL'DOVICH-MEASURED PRESSURE PROFILES FROM THE BOLOCAM X-RAY/SZ GALAXY CLUSTER SAMPLE

    Energy Technology Data Exchange (ETDEWEB)

    Sayers, J.; Czakon, N. G.; Golwala, S. R.; Downes, T. P.; Mroczkowski, T.; Siegel, S. [Division of Physics, Math, and Astronomy, California Institute of Technology, Pasadena, CA 91125 (United States); Mantz, A. [Kavli Institute for Cosmological Physics, University of Chicago, 5640 South Ellis Avenue, Chicago, IL 60637 (United States); Ameglio, S.; Pierpaoli, E.; Shitanishi, J. A. [University of Southern California, Los Angeles, CA 90089 (United States); Koch, P. M.; Lin, K.-Y.; Umetsu, K. [Institute of Astronomy and Astrophysics, Academia Sinica, P.O. Box 23-141, Taipei 10617, Taiwan (China); Maughan, B. J.; Van der Pyl, N. [H. H. Wills Physics Laboratory, University of Bristol, Tyndall Avenue, Bristol Bs8 ITL (United Kingdom); Molnar, S. M. [LeCosPA Center, National Taiwan University, Taipei 10617, Taiwan (China); Moustakas, L., E-mail: jack@caltech.edu [Jet Propulsion Laboratory, Pasadena, CA 91109 (United States)

    2013-05-10

    We describe Sunyaev-Zel'dovich (SZ) effect measurements and analysis of the intracluster medium (ICM) pressure profiles of a set of 45 massive galaxy clusters imaged using Bolocam at the Caltech Submillimeter Observatory. We deproject the average pressure profile of our sample into 13 logarithmically spaced radial bins between 0.07R500 and 3.5R500, and we find that a generalized Navarro, Frenk, and White (gNFW) profile describes our data with sufficient goodness-of-fit and best-fit parameters (C500, α, β, γ, P0 = 1.18, 0.86, 3.67, 0.67, 4.29). We use X-ray data to define cool-core and disturbed subsamples of clusters, and we constrain the average pressure profiles of each of these subsamples. We find that, given the precision of our data, the average pressure profiles of disturbed and cool-core clusters are consistent with one another at R ≳ 0.15R500, with cool-core systems showing indications of higher pressure at R ≲ 0.15R500. In addition, for the first time, we place simultaneous constraints on the mass scaling of cluster pressure profiles, their ensemble mean profile, and their radius-dependent intrinsic scatter between 0.1R500 and 2.0R500. The scatter among profiles is minimized at radii between ≈0.2R500 and ≈0.5R500, with a value of ≈20%. These results for the intrinsic scatter are largely consistent with previous analyses, most of which have relied heavily on X-ray derived pressures of clusters at significantly lower masses and redshifts compared to our sample. Therefore, our data provide further evidence that cluster pressure profiles are largely universal, with scatter of ≈20%-40% about the universal profile over a wide range of masses and redshifts.
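
    For reference, the standard gNFW pressure-profile form quoted above, evaluated with the paper's best-fit shape parameters (x = r/R500, pressure in units of P500):

        import numpy as np

        def gnfw_pressure(x, p0=4.29, c500=1.18, alpha=0.86, beta=3.67, gamma=0.67):
            # P(x) = P0 / [(c500*x)^gamma * (1 + (c500*x)^alpha)^((beta-gamma)/alpha)]
            cx = c500 * x
            return p0 / (cx**gamma * (1.0 + cx**alpha) ** ((beta - gamma) / alpha))

        for x in [0.1, 0.5, 1.0, 2.0]:
            print(f"r = {x:.1f} R500: P/P500 = {gnfw_pressure(x):.3f}")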

  12. Preliminary chemical analysis and biological testing of materials from the HRI catalytic two-stage liquefaction (CTSL) process. [Aliphatic hydrocarbons

    Energy Technology Data Exchange (ETDEWEB)

    Later, D.W.; Wilson, B.W.

    1985-01-01

    Coal-derived materials from experimental runs of Hydrocarbon Research Incorporated's (HRI) catalytic two-stage liquefaction (CTSL) process were chemically characterized and screened for microbial mutagenicity. This process differs from other two-stage coal liquefaction processes in that catalyst is used in both stages. Samples from both the first and second stages were class-fractionated by alumina adsorption chromatography. The fractions were analyzed by capillary column gas chromatography; gas chromatography/mass spectrometry; direct probe, low voltage mass spectrometry; and proton nuclear magnetic resonance spectrometry. Mutagenicity assays were performed with the crude materials and class fractions in Salmonella typhimurium, TA98. Preliminary results of the chemical analyses indicate that >80% of the CTSL materials from both process stages were aliphatic hydrocarbon and polynuclear aromatic hydrocarbon (PAH) compounds. Furthermore, the gross and specific chemical composition of process materials from the first stage were very similar to those of the second stage. In general, the unfractionated materials were only slightly active in the TA98 mutagenicity assay. Like other coal liquefaction materials investigated in this laboratory, the nitrogen-containing polycyclic aromatic compound (N-PAC) class fractions were responsible for the bulk of the mutagenic activity of the crudes. Finally, it was shown that this activity correlated with the presence of amino-PAHs. 20 figures, 9 tables.

  13. CN and CH Abundance Analysis in a Sample of Eight Galactic Globular Clusters

    Science.gov (United States)

    Smolinski, Jason P.; Lee, Y.; Beers, T. C.; Martell, S. L.; An, D.; Sivarani, T.

    2011-01-01

    Galactic globular clusters exhibit star-to-star variations in their light element abundances that are not predicted by formation and evolution models involving single stellar generations. Recently it has been suggested that internal pollution from early supernovae and AGB winds may have played important roles in forming a second generation of enriched stars. We present updated results of a CN and CH abundance analysis of stars from the base to the tip of the red giant branch, and in some cases down onto the main sequence, for eight globular clusters with available photometric and spectroscopic data from SDSS-I and SDSS-II/SEGUE. These results include a discussion of the radial distribution of CN enrichment and how this may impact the current paradigm. Funding for SDSS-I and SDSS-II has been provided by the Alfred P. Sloan Foundation, the Participating Institutions, the National Science Foundation, the U.S. Department of Energy, the National Aeronautics and Space Administration, the Japanese Monbukagakusho, the Max Planck Society, and the Higher Education Funding Council for England. The SDSS Web Site is http://www.sdss.org/. This work was supported in part by grants PHY 02-16783 and PHY 08-22648: Physics Frontiers Center/Joint Institute for Nuclear Astrophysics (JINA), awarded by the U.S. National Science Foundation.

  14. The Australian longitudinal study on male health sampling design and survey weighting: implications for analysis and interpretation of clustered data

    Directory of Open Access Journals (Sweden)

    Matthew J. Spittal

    2016-10-01

    Full Text Available Abstract Background: The Australian Longitudinal Study on Male Health (Ten to Men) used a complex sampling scheme to identify potential participants for the baseline survey. This raises important questions about when and how to adjust for the sampling design when analyzing data from the baseline survey. Methods: We describe the sampling scheme used in Ten to Men focusing on four important elements: stratification, multi-stage sampling, clustering and sample weights. We discuss how these elements fit together when using baseline data to estimate a population parameter (e.g., population mean or prevalence) or to estimate the association between an exposure and an outcome (e.g., an odds ratio). We illustrate this with examples using a continuous outcome (weight in kilograms) and a binary outcome (smoking status). Results: Estimates of a population mean or disease prevalence using Ten to Men baseline data are influenced by the extent to which the sampling design is addressed in an analysis. Estimates of mean weight and smoking prevalence are larger in unweighted analyses than weighted analyses (e.g., mean = 83.9 kg vs. 81.4 kg; prevalence = 18.0 % vs. 16.7 %, for unweighted and weighted analyses respectively) and the standard error of the mean is 1.03 times larger in an analysis that acknowledges the hierarchical (clustered) structure of the data compared with one that does not. For smoking prevalence, the corresponding standard error is 1.07 times larger. Measures of association (mean group differences, odds ratios) are generally similar in unweighted or weighted analyses and whether or not adjustment is made for clustering. Conclusions: The extent to which the Ten to Men sampling design is accounted for in any analysis of the baseline data will depend on the research question. When the goals of the analysis are to estimate the prevalence of a disease or risk factor in the population or the magnitude of a population-level exposure

  15. The Australian longitudinal study on male health sampling design and survey weighting: implications for analysis and interpretation of clustered data.

    Science.gov (United States)

    Spittal, Matthew J; Carlin, John B; Currier, Dianne; Downes, Marnie; English, Dallas R; Gordon, Ian; Pirkis, Jane; Gurrin, Lyle

    2016-10-31

    The Australian Longitudinal Study on Male Health (Ten to Men) used a complex sampling scheme to identify potential participants for the baseline survey. This raises important questions about when and how to adjust for the sampling design when analyzing data from the baseline survey. We describe the sampling scheme used in Ten to Men focusing on four important elements: stratification, multi-stage sampling, clustering and sample weights. We discuss how these elements fit together when using baseline data to estimate a population parameter (e.g., population mean or prevalence) or to estimate the association between an exposure and an outcome (e.g., an odds ratio). We illustrate this with examples using a continuous outcome (weight in kilograms) and a binary outcome (smoking status). Estimates of a population mean or disease prevalence using Ten to Men baseline data are influenced by the extent to which the sampling design is addressed in an analysis. Estimates of mean weight and smoking prevalence are larger in unweighted analyses than weighted analyses (e.g., mean = 83.9 kg vs. 81.4 kg; prevalence = 18.0 % vs. 16.7 %, for unweighted and weighted analyses respectively) and the standard error of the mean is 1.03 times larger in an analysis that acknowledges the hierarchical (clustered) structure of the data compared with one that does not. For smoking prevalence, the corresponding standard error is 1.07 times larger. Measures of association (mean group differences, odds ratios) are generally similar in unweighted or weighted analyses and whether or not adjustment is made for clustering. The extent to which the Ten to Men sampling design is accounted for in any analysis of the baseline data will depend on the research question. When the goals of the analysis are to estimate the prevalence of a disease or risk factor in the population or the magnitude of a population-level exposure-outcome association, our advice is to adopt an analysis that respects the
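
    The weighted-versus-unweighted contrast described in these two records can be made concrete with a short sketch. The following Python fragment (all data and variable names are hypothetical, not from the Ten to Men study) computes a design-weighted mean and a Taylor-linearized, cluster-robust standard error of the kind survey software reports:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 50 clusters of 20 men each; weight in kg as the outcome.
# Cluster IDs, sampling weights, and outcomes are all hypothetical.
n_clusters, m = 50, 20
cluster = np.repeat(np.arange(n_clusters), m)
w = rng.uniform(0.5, 2.0, size=n_clusters * m)   # design weights
y = (80 + rng.normal(0, 2, size=n_clusters)[cluster]
        + rng.normal(0, 10, size=n_clusters * m))

# Unweighted vs design-weighted estimate of the population mean
mean_unw = y.mean()
mean_w = np.sum(w * y) / np.sum(w)

# Taylor-linearized (cluster-robust) SE of the weighted mean:
# score contributions z_i = w_i * (y_i - mean_w), summed within clusters.
z = w * (y - mean_w)
z_c = np.array([z[cluster == c].sum() for c in range(n_clusters)])
var_w = n_clusters / (n_clusters - 1) * np.sum((z_c - z_c.mean()) ** 2) / np.sum(w) ** 2
se_w = np.sqrt(var_w)

print(f"unweighted mean {mean_unw:.1f}, weighted mean {mean_w:.1f}, "
      f"cluster-robust SE {se_w:.2f}")
```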

  16. Respondent-driven sampling bias induced by clustering and community structure in social networks

    CERN Document Server

    Rocha, Luis Enrique Correa; Lambiotte, Renaud; Liljeros, Fredrik

    2015-01-01

    Sampling hidden populations is particularly challenging using standard sampling methods mainly because of the lack of a sampling frame. Respondent-driven sampling (RDS) is an alternative methodology that exploits the social contacts between peers to reach and weight individuals in these hard-to-reach populations. It is a snowball sampling procedure where the weight of the respondents is adjusted for the likelihood of being sampled due to differences in the number of contacts. In RDS, the structure of the social contacts thus defines the sampling process and affects its coverage, for instance by constraining the sampling within a sub-region of the network. In this paper we study the bias induced by network structures such as social triangles, community structure, and heterogeneities in the number of contacts, in the recruitment trees and in the RDS estimator. We simulate different scenarios of network structures and response-rates to study the potential biases one may expect in real settings. We find that the ...
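
    The degree-based weighting that RDS relies on can be illustrated with a minimal sketch. The fragment below uses the RDS-II (Volz-Heckathorn) estimator, one standard choice and not necessarily the exact estimator studied in this record; the degrees and traits are made up:

```python
import numpy as np

# Toy respondent-driven sample: each row is (reported network degree, trait 0/1).
degree = np.array([3, 8, 2, 15, 5, 4, 10, 6, 2, 7])
trait  = np.array([1, 0, 1, 0, 1, 1, 0, 0, 1, 0])

# RDS-II estimator: weight each respondent by the inverse of their degree,
# since high-degree individuals are more likely to be recruited.
w = 1.0 / degree
prevalence_rds = np.sum(w * trait) / np.sum(w)
prevalence_naive = trait.mean()

print(f"naive {prevalence_naive:.2f} vs degree-adjusted {prevalence_rds:.2f}")
```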

  17. Matching tutor to student: rules and mechanisms for efficient two-stage learning in neural circuits

    CERN Document Server

    Tesileanu, Tiberiu; Balasubramanian, Vijay

    2016-01-01

    Existing models of birdsong learning assume that brain area LMAN introduces variability into song for trial-and-error learning. Recent data suggest that LMAN also encodes a corrective bias driving short-term improvements in song. These later consolidate in area RA, a motor cortex analogue downstream of LMAN. We develop a new model of such two-stage learning. Using a stochastic gradient descent approach, we derive how 'tutor' circuits should match plasticity mechanisms in 'student' circuits for efficient learning. We further describe a reinforcement learning framework with which the tutor can build its teaching signal. We show that mismatching the tutor signal and plasticity mechanism can impair or abolish learning. Applied to birdsong, our results predict the temporal structure of the corrective bias from LMAN given a plasticity rule in RA. Our framework can be applied predictively to other paired brain areas showing two-stage learning.

  18. HRI catalytic two-stage liquefaction (CTSL) process materials: chemical analysis and biological testing

    Energy Technology Data Exchange (ETDEWEB)

    Wright, C.W.; Later, D.W.

    1985-12-01

    This report presents data from the chemical analysis and biological testing of coal liquefaction materials obtained from the Hydrocarbon Research, Incorporated (HRI) catalytic two-stage liquefaction (CTSL) process. Materials from both an experimental run and a 25-day demonstration run were analyzed. Chemical methods of analysis included adsorption column chromatography, high-resolution gas chromatography, gas chromatography/mass spectrometry, low-voltage probe-inlet mass spectrometry, and proton nuclear magnetic resonance spectroscopy. The biological activity was evaluated using the standard microbial mutagenicity assay and an initiation/promotion assay for mouse-skin tumorigenicity. Where applicable, the results obtained from the analyses of the CTSL materials have been compared to those obtained from the integrated and nonintegrated two-stage coal liquefaction processes. 18 refs., 26 figs., 22 tabs.

  19. Two-stage precipitation process of iron and arsenic from acid leaching solutions

    Institute of Scientific and Technical Information of China (English)

    N.J.BOLIN; J.E.SUNDKVIST

    2008-01-01

    A leaching process for base metals recovery often generates considerable amounts of impurities, such as iron and arsenic, in the solution. It is a challenge to separate the non-valuable metals into manageable and stable waste products for final disposal without losing the valuable constituents. Boliden Mineral AB has patented a two-stage precipitation process that gives a very clean iron-arsenic precipitate with a minimum of coprecipitation of base metals. The obtained product shows good sedimentation and filtration properties, which makes it easy to recover the iron-arsenic depleted solution by filtration and washing of the precipitate. Continuous bench-scale tests have demonstrated the excellent results achieved by the two-stage precipitation process.

  20. S-band gain-flattened EDFA with two-stage double-pass configuration

    Science.gov (United States)

    Fu, Hai-Wei; Xu, Shi-Chao; Qiao, Xue-Guang; Jia, Zhen-An; Liu, Ying-Gang; Zhou, Hong

    2011-11-01

    A gain-flattened S-band erbium-doped fiber amplifier (EDFA) using standard erbium-doped fiber (EDF) is proposed and experimentally demonstrated. The proposed amplifier with two-stage double-pass configuration employs two C-band suppressing filters to obtain optical gain in the S-band. The amplifier provides a maximum signal gain of 41.6 dB at 1524 nm with a corresponding noise figure of 3.8 dB. Furthermore, with a well-designed short-pass filter as a gain flattening filter (GFF), we are able to develop an S-band EDFA with a flattened gain of more than 20 dB over 1504-1524 nm. In the experiment, the two-stage double-pass amplifier configuration improves the gain and noise-figure performance compared with the single-stage double-pass S-band EDFA configuration.

  1. Power Frequency Oscillation Suppression Using Two-Stage Optimized Fuzzy Logic Controller for Multigeneration System

    Directory of Open Access Journals (Sweden)

    Y. K. Bhateshvar

    2016-01-01

    This paper develops a linearized model of automatic generation control (AGC) for an interconnected two-area reheat-type thermal power system in a deregulated environment. A comparison between a genetic algorithm optimized PID controller (GA-PID), a particle swarm optimized PID controller (PSO-PID), and the proposed two-stage PSO-optimized fuzzy logic controller (TSO-FLC) is presented. The proposed fuzzy controller is optimized at two stages: one is rule-base optimization and the other is scaling-factor and gain-factor optimization. It shows the best dynamic response following a step load change under different cases of bilateral contracts in the deregulated environment. In addition, the performance of the proposed TSO-FLC is examined for ±30% changes in system parameters with different types of contractual demands between control areas, and compared with GA-PID and PSO-PID. MATLAB/Simulink® is used for all simulations.

  2. A two-stage scheme for multi-view human pose estimation

    Science.gov (United States)

    Yan, Junchi; Sun, Bing; Liu, Yuncai

    2010-08-01

    We present a two-stage scheme integrating voxel reconstruction and human motion tracking. By combining voxel reconstruction with human motion tracking interactively, our method can work in a cluttered background where perfect foreground silhouettes are hardly available. For each frame, a silhouette-based 3D volume reconstruction method and a hierarchical tracking algorithm are applied in two stages. In the first stage, coarse reconstruction and tracking results are obtained, and the reconstruction is then refined in the second stage. The experimental results demonstrate that our approach is promising. Although this paper focuses on human body voxel reconstruction and motion tracking, our scheme can be used to reconstruct voxel data and infer the pose of many specified rigid and articulated objects.

  3. Toward Improving Electrocardiogram (ECG) Biometric Verification using Mobile Sensors: A Two-Stage Classifier Approach.

    Science.gov (United States)

    Tan, Robin; Perkowski, Marek

    2017-02-20

    Electrocardiogram (ECG) signals sensed from mobile devices offer the potential for biometric identity recognition applicable in remote access control systems where enhanced data security is in demand. In this study, we propose a new algorithm consisting of a two-stage classifier that combines a random forest and a wavelet distance measure through a probabilistic threshold schema, to improve the effectiveness and robustness of a biometric recognition system using ECG data acquired from a biosensor integrated into mobile devices. The proposed algorithm is evaluated using a mixed dataset from 184 subjects under different health conditions. The proposed two-stage classifier achieves a total subject verification accuracy of 99.52%, better than the 98.33% accuracy of the random forest alone and the 96.31% accuracy of the wavelet distance measure alone. These results demonstrate the superiority of the proposed algorithm for biometric identification, hence supporting its practicality in areas such as cloud data security, cyber-security or remote healthcare systems.
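
    A hedged sketch of the two-stage decision logic described above; the feature vectors, thresholds, and the plain Euclidean stand-in for the wavelet distance are illustrative assumptions, not the authors' implementation:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Stage 1 is a random forest on ECG feature vectors; when its class
# probability is not decisive, stage 2 falls back to a template-distance test.
rng = np.random.default_rng(1)
X_train = rng.normal(size=(200, 32))           # stand-in ECG feature vectors
y_train = rng.integers(0, 2, size=200)         # 1 = claimed identity genuine

rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
template = X_train[y_train == 1].mean(axis=0)  # enrolled subject's mean features

def verify(x, p_hi=0.8, p_lo=0.2, dist_thresh=6.0):
    """Accept/reject a sample; defer to the distance stage when the RF is unsure."""
    p = rf.predict_proba(x.reshape(1, -1))[0, 1]
    if p >= p_hi:
        return True
    if p <= p_lo:
        return False
    # Stage 2: distance to the enrolled template (the paper uses a wavelet
    # distance measure; Euclidean distance is a simplification here).
    return float(np.linalg.norm(x - template)) < dist_thresh

print(verify(X_train[0]))
```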

  4. Effect of two-stage aging on superplasticity of Al-Li alloy

    Institute of Scientific and Technical Information of China (English)

    LUO Zhi-hui; ZHANG Xin-ming; DU Yu-xuan; YE Ling-ying

    2006-01-01

    The effect of two-stage aging on the microstructures and superplasticity of 01420 Al-Li alloy was investigated by means of OM and TEM analysis and tensile testing. The results demonstrate that, compared with single aging (300 ℃, 48 h), the second-phase particles are distributed more uniformly and with a larger volume fraction after two-stage aging (120 ℃, 12 h + 300 ℃, 36 h). After rolling and recrystallization annealing, fine grains 8-10 μm in size are obtained, and the superplastic elongation of the specimens reaches 560% at a strain rate of 8×10⁻⁴ s⁻¹ and 480 ℃. Uniformly distributed fine particles precipitate both on grain boundaries and in grains at the lower temperature. When the sheet is aged at the higher temperature, the particles become coarser, with a large volume fraction.

  5. Two stage bioethanol refining with multi litre stacked microbial fuel cell and microbial electrolysis cell.

    Science.gov (United States)

    Sugnaux, Marc; Happe, Manuel; Cachelin, Christian Pierre; Gloriod, Olivier; Huguenin, Gérald; Blatter, Maxime; Fischer, Fabian

    2016-12-01

    Ethanol, electricity, hydrogen and methane were produced in a two-stage bioethanol refinery setup based on a 10 L microbial fuel cell (MFC) and a 33 L microbial electrolysis cell (MEC). The MFC was a triple stack for ethanol and electricity co-generation. In the stack configuration, the higher the stack potential, the more ethanol was produced and the faster glucose was consumed. Under electrolytic conditions, ethanol productivity outperformed standard conditions and reached 96.3% of the theoretical best case. At lower external loads, currents and working potentials oscillated in a self-synchronized manner over all three MFC units in the stack. In the second refining stage, fermentation waste was converted into methane using the scaled-up MEC stack. The bioelectric methanisation reached 91% efficiency at room temperature with an applied voltage of 1.5 V using nickel cathodes. The two-stage bioethanol refining process employing bioelectrochemical reactors produces more energy vectors than is possible with today's ethanol distilleries.

  7. Performance measurement of insurance firms using a two-stage DEA method

    Directory of Open Access Journals (Sweden)

    Raha Jalili Sabet

    2013-01-01

    Measuring the relative performance of insurance firms plays an important role in this industry. In this paper, we present a two-stage data envelopment analysis (DEA) to measure the performance of insurance firms that were active over the period 2006-2010. The proposed study performs the DEA method in two stages, where the first stage considers five inputs and three outputs, and the second stage takes the outputs of the first stage as its inputs and uses three different outputs. The results of our survey indicate that while there were 4 efficient insurance firms, most other insurers were noticeably inefficient. This means the market was monopolized mostly by a limited number of insurance firms, and competition was not fair enough to let other firms participate in the economy more efficiently.
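
    As a rough illustration of the machinery behind such a study, the following sketch solves the standard input-oriented CCR efficiency LP that a two-stage DEA applies once per stage; the insurer data and the choice of the CCR model (rather than, say, BCC) are assumptions made for illustration:

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, j0):
    """Input-oriented CCR efficiency of DMU j0.
    X: (n_dmu, n_inputs), Y: (n_dmu, n_outputs)."""
    n, m = X.shape
    s = Y.shape[1]
    c = np.r_[1.0, np.zeros(n)]                # minimize theta
    # Inputs: sum_j lambda_j * x_ji <= theta * x_j0,i
    A_in = np.c_[-X[j0].reshape(m, 1), X.T]
    b_in = np.zeros(m)
    # Outputs: sum_j lambda_j * y_jr >= y_j0,r
    A_out = np.c_[np.zeros((s, 1)), -Y.T]
    b_out = -Y[j0]
    res = linprog(c, A_ub=np.vstack([A_in, A_out]), b_ub=np.r_[b_in, b_out],
                  bounds=[(0, None)] * (1 + n))
    return res.fun

# Hypothetical stage-1 data: 5 insurers, 2 inputs, 1 intermediate output.
X1 = np.array([[20, 300], [40, 500], [30, 350], [50, 600], [25, 400]], float)
Z  = np.array([[100], [180], [120], [190], [110]], float)
eff1 = [ccr_efficiency(X1, Z, j) for j in range(5)]
# Stage 2 would reuse Z as inputs against final outputs (premiums, profit, ...).
print(np.round(eff1, 3))
```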

  8. Direct Torque Control of Sensorless Induction Machine Drives: A Two-Stage Kalman Filter Approach

    Directory of Open Access Journals (Sweden)

    Jinliang Zhang

    2015-01-01

    The extended Kalman filter (EKF) has been widely applied for sensorless direct torque control (DTC) in induction machines (IMs). One key problem associated with the EKF is that the estimator suffers from computational burden and numerical problems resulting from high-order mathematical models. To reduce the computational cost, a two-stage extended Kalman filter (TEKF) based solution is presented in this paper for closed-loop stator flux, speed, and torque estimation of IMs to achieve sensorless DTC-SVM operation. The novel observer can be derived similarly to the optimal two-stage Kalman filter (TKF), which has been proposed by several researchers. Compared to a straightforward implementation of a conventional EKF, the TEKF estimator reduces the number of arithmetic operations. Simulation and experimental results verify the performance of the proposed TEKF estimator for DTC of IMs.

  9. Syme's two-stage amputation in insulin-requiring diabetics with gangrene of the forefoot.

    Science.gov (United States)

    Pinzur, M S; Morrison, C; Sage, R; Stuck, R; Osterman, H; Vrbos, L

    1991-06-01

    Thirty-five insulin-requiring adult diabetic patients underwent 38 Syme's two-stage amputations for gangrene of the forefoot with nonreconstructible peripheral vascular insufficiency. All had a minimum Doppler ischemic index of 0.5, serum albumin of 3.0 gm/dl, and total lymphocyte count of 1500. Thirty-one (81.6%) eventually healed and were uneventfully fit with a prosthesis. Regional anesthesia was used in all of the patients, with 22 spinal and 16 ankle block anesthetics. Twenty-seven (71%) returned to their preamputation level of ambulatory function. Six (16%) had major and fifteen (39%) minor complications following the first-stage surgery. The results of this study support the use of the Syme's two-stage amputation in adult diabetic patients with gangrene of the forefoot requiring amputation.

  10. Low-noise SQUIDs with large transfer: two-stage SQUIDs based on DROSs

    Science.gov (United States)

    Podt, M.; Flokstra, J.; Rogalla, H.

    2002-08-01

    We have realized a two-stage integrated superconducting quantum interference device (SQUID) system with a closed-loop bandwidth of 2.5 MHz, operated in a direct voltage readout mode. The corresponding flux slew rate was 1.3×10⁵ Φ₀/s and the measured white flux noise was 1.3 μΦ₀/√Hz at 4.2 K. The system is based on a conventional dc SQUID with a double relaxation oscillation SQUID (DROS) as the second stage. Because of the large flux-to-voltage transfer, the sensitivity of the system is completely determined by the sensor SQUID and not by the DROS or the room-temperature preamplifier. Decreasing the Josephson junction area enables a further improvement of the sensitivity of two-stage SQUID systems.

  11. Interval estimation of binomial proportion in clinical trials with a two-stage design.

    Science.gov (United States)

    Tsai, Wei-Yann; Chi, Yunchan; Chen, Chia-Min

    2008-01-15

    Generally, a two-stage design is employed in Phase II clinical trials to avoid giving patients an ineffective drug. If the number of patients with significant improvement, which is a binomial response, is greater than a pre-specified value at the first stage, then another binomial response at the second stage is also observed. This paper considers interval estimation of the response probability when the second stage is allowed to continue. Two asymptotic interval estimators, Wald and score, as well as two exact interval estimators, Clopper-Pearson and Sterne, are constructed according to the two binomial responses from this two-stage design, where the binomial response at the first stage follows a truncated binomial distribution. The mean actual coverage probability and expected interval width are employed to evaluate the performance of these interval estimators. According to the comparison results, the score interval is recommended for both Simon's optimal and minimax designs.
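
    For reference, the two asymptotic intervals compared in this record reduce, in the ordinary single-sample case, to a few lines of code. The sketch below uses invented pooled counts and ignores the truncated-binomial correction at the first stage that the paper develops; it shows only the plain Wald and Wilson score intervals:

```python
from math import sqrt

Z = 1.959964  # 97.5th normal percentile, for a 95% interval

def wald(x, n):
    p = x / n
    h = Z * sqrt(p * (1 - p) / n)
    return p - h, p + h

def wilson_score(x, n):
    p = x / n
    centre = (p + Z**2 / (2 * n)) / (1 + Z**2 / n)
    h = Z * sqrt(p * (1 - p) / n + Z**2 / (4 * n**2)) / (1 + Z**2 / n)
    return centre - h, centre + h

# Pooled responses from both stages of a hypothetical trial:
# 4/12 responders at stage 1, 9/25 at stage 2.
x, n = 4 + 9, 12 + 25
print(wald(x, n), wilson_score(x, n))
```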

  12. Experiment and surge analysis of centrifugal two-stage turbocharging system

    Institute of Scientific and Technical Information of China (English)

    Yituan HE; Chaochen MA

    2008-01-01

    To study the surge of a centrifugal two-stage turbocharging system and its influencing factors, a special test bench was set up and the system surge test was performed. The test results indicate that measured parameters such as the air mass flow and rotation speed of the high pressure (HP) stage compressor can be converted into corrected parameters under a standard condition according to the Mach number similarity criterion, because the air flow in the HP stage compressor has entered the Reynolds number (Re) auto-modeling range. Accordingly, the causes of a two-stage turbocharging system's surge can be analyzed according to the corrected mass flow characteristic maps and the actual operating conditions of the HP and low pressure (LP) stage compressors.

  13. Mass Calibration and Cosmological Analysis of the SPT-SZ Galaxy Cluster Sample Using Velocity Dispersion $\\sigma_v$ and X-ray $Y_\\textrm{X}$ Measurements

    CERN Document Server

    Bocquet, S; Mohr, J J; Aird, K A; Ashby, M L N; Bautz, M; Bayliss, M; Bazin, G; Benson, B A; Bleem, L E; Brodwin, M; Carlstrom, J E; Chang, C L; Chiu, I; Cho, H M; Clocchiatti, A; Crawford, T M; Crites, A T; Desai, S; de Haan, T; Dietrich, J P; Dobbs, M A; Foley, R J; Forman, W R; Gangkofner, D; George, E M; Gladders, M D; Gonzalez, A H; Halverson, N W; Hennig, C; Hlavacek-Larrondo, J; Holder, G P; Holzapfel, W L; Hrubes, J D; Jones, C; Keisler, R; Knox, L; Lee, A T; Leitch, E M; Liu, J; Lueker, M; Luong-Van, D; Marrone, D P; McDonald, M; McMahon, J J; Meyer, S S; Mocanu, L; Murray, S S; Padin, S; Pryke, C; Reichardt, C L; Rest, A; Ruel, J; Ruhl, J E; Saliwanchik, B R; Sayre, J T; Schaffer, K K; Shirokoff, E; Spieler, H G; Stalder, B; Stanford, S A; Staniszewski, Z; Stark, A A; Story, K; Stubbs, C W; Vanderlinde, K; Vieira, J D; Vikhlinin, A; Williamson, R; Zahn, O; Zenteno, A

    2014-01-01

    We present a velocity dispersion-based mass calibration of the South Pole Telescope Sunyaev-Zel'dovich effect survey (SPT-SZ) galaxy cluster sample. Using a homogeneously selected sample of 100 cluster candidates from 720 deg² of the survey along with 63 velocity dispersion ($\sigma_v$) and 16 X-ray ($Y_\textrm{X}$) measurements of sample clusters, we simultaneously calibrate the mass-observable relation and constrain cosmological parameters. The calibrations using $\sigma_v$ and $Y_\textrm{X}$ are consistent at the $0.6\sigma$ level, with the $\sigma_v$ calibration preferring ~16% higher masses. We use the full cluster dataset to measure $\sigma_8(\Omega_m/0.27)^{0.3}=0.809\pm0.036$. The SPT cluster abundance is lower than preferred by either the WMAP9 or Planck+WMAP9 polarization (WP) data, but assuming the sum of the neutrino masses is $\sum m_\nu$...

  14. Two-staged management for all types of congenital pouch colon

    Directory of Open Access Journals (Sweden)

    Rajendra K Ghritlaharey

    2013-01-01

    Background: The aim of this study was to review our experience with two-staged management for all types of congenital pouch colon (CPC). Patients and Methods: This retrospective study included CPC cases that were managed with two-staged procedures in the Department of Paediatric Surgery over a period of 12 years, from 1 January 2000 to 31 December 2011. Results: CPC comprised 13.71% (97 of 707) of all anorectal malformations (ARM) and 28.19% (97 of 344) of high ARM. Eleven CPC cases (all males) were managed with two-staged procedures. The distribution of cases (Narsimha Rao et al.'s classification) into types I, II, III, and IV was 1, 2, 6, and 2, respectively. Initial operative procedures performed were window colostomy (n = 6), colostomy proximal to the pouch (n = 4), and ligation of colovesical fistula with end colostomy (n = 1). As definitive procedures, pouch excision with abdomino-perineal pull-through (APPT) of the colon was performed in eight cases, and pouch excision with APPT of the ileum in three. The mean age at the time of the definitive procedures was 15.6 months (range 3 to 53 months) and the mean weight was 7.5 kg (range 4 to 11 kg). Good fecal continence was observed in six and fair in two cases over the follow-up period, while three of our cases were lost to follow-up. There was no mortality following the definitive procedures among the above 11 cases. Conclusions: Two-staged procedures for all types of CPC can be performed safely with good results. Most importantly, the definitive procedure is done without a protective stoma, thereby avoiding stoma closure, stoma-related complications, and the cost and hospital stay associated with stoma closure.

  15. Hybrid staging of a Lysholm positive displacement engine with two Westinghouse two stage impulse Curtis turbines

    Energy Technology Data Exchange (ETDEWEB)

    Parker, D.A.

    1982-06-01

    The University of California at Berkeley has satisfactorily tested and modeled a hybrid-staged Lysholm engine (positive displacement) with a two-stage Curtis wheel turbine. The system operates in a stable manner over its operating range (0/1-3/1 water ratio, 120 psia input). Proposals are made for controlling the interstage pressure with a partial admission turbine, and for using volume expansion to control the mass flow and pressure ratio of the Lysholm engine.

  16. Full noise characterization of a low-noise two-stage SQUID amplifier

    Energy Technology Data Exchange (ETDEWEB)

    Falferi, P [Istituto di Fotonica e Nanotecnologie, CNR-Fondazione Bruno Kessler, 38100 Povo, Trento (Italy); Mezzena, R [INFN, Gruppo Collegato di Trento, Sezione di Padova, 38100 Povo, Trento (Italy); Vinante, A [INFN, Sezione di Padova, 35131 Padova (Italy)], E-mail: falferi@science.unitn.it

    2009-07-15

    From measurements performed on a low-noise two-stage SQUID amplifier coupled to a high-Q electrical resonator, we give a complete noise characterization of the SQUID amplifier around the resonator frequency of 11 kHz in terms of additive, back-action and cross-correlation noise spectral densities. The minimum noise temperature evaluated at 135 mK is 10 μK and corresponds to an energy resolution of 18ℏ.

  17. A covariate adjusted two-stage allocation design for binary responses in randomized clinical trials.

    Science.gov (United States)

    Bandyopadhyay, Uttam; Biswas, Atanu; Bhattacharya, Rahul

    2007-10-30

    In the present work, we develop a two-stage allocation rule for binary responses using the log-odds ratio within the Bayesian framework, allowing the current allocation to depend on the covariate value of the current subject. We study, both numerically and theoretically, several exact and limiting properties of this design. The applicability of the proposed methodology is illustrated using a data set. We compare this rule with some existing rules by computing various performance measures.

  18. Development of a Novel Type Catalyst SY-2 for Two-Stage Hydrogenation of Pyrolysis Gasoline

    Institute of Scientific and Technical Information of China (English)

    Wu Linmei; Zhang Xuejun; Zhang Zhihua; Wang Fucun

    2004-01-01

    By using group ⅢB or group ⅦB metals, modulating the characteristics of the electric charges on the carrier surface, and improving the catalyst preparation process and the techniques for loading the active metal components, a novel SY-2 catalyst earmarked for the two-stage hydrogenation of pyrolysis gasoline has been developed. The catalyst evaluation results indicate that the novel catalyst is characterized by better hydrogenation activity, giving a higher aromatics yield.

  19. Investigation on a two-stage solvay refrigerator with magnetic material regenerator

    Science.gov (United States)

    Chen, Guobang; Zheng, Jianyao; Zhang, Fagao; Yu, Jianping; Tao, Zhenshi; Ding, Cenyu; Zhang, Liang; Wu, Peiyi; Long, Yi

    This paper describes experimental results showing that the no-load temperature of a two-stage Solvay refrigerator was lowered from the original 11.5 K into the liquid-helium temperature region by using magnetic regenerative material instead of lead. The structure and technological characteristics of the prototype machine are presented. The effects of operating frequency and pressure on the refrigerating temperature are also discussed.

  20. Biological hydrogen production from olive mill wastewater with two-stage processes

    Energy Technology Data Exchange (ETDEWEB)

    Eroglu, Ela; Eroglu, Inci [Department of Chemical Engineering, Middle East Technical University, 06531, Ankara (Turkey); Guenduez, Ufuk; Yuecel, Meral [Department of Biology, Middle East Technical University, 06531, Ankara (Turkey); Tuerker, Lemi [Department of Chemistry, Middle East Technical University, 06531, Ankara (Turkey)

    2006-09-15

    In the present work, two novel two-stage hydrogen production processes from olive mill wastewater (OMW) have been introduced. The first two-stage process involved dark fermentation followed by a photofermentation process. Dark fermentation by activated sludge cultures and photofermentation by Rhodobacter sphaeroides O.U.001 were both performed in 55 ml glass vessels under anaerobic conditions. In some cases of dark fermentation, the activated sludge was initially acclimatized to the OMW to adapt the microorganisms to the extreme conditions of OMW. The highest hydrogen production potential obtained was 29 l H₂/l OMW after photofermentation with 50% (v/v) effluent of dark fermentation with activated sludge. Photofermentation with 50% (v/v) effluent of dark fermentation with acclimated activated sludge had the highest hydrogen production rate (0.008 l l⁻¹ h⁻¹). The second two-stage process involved a clay treatment step followed by photofermentation by R. sphaeroides O.U.001. Photofermentation with the effluent of the clay pretreatment process (4% (v/v)) gave the highest hydrogen production potential (35 l H₂/l OMW), light conversion efficiency (0.42%) and COD conversion efficiency (52%). It was concluded that both pretreatment processes enhanced the photofermentative hydrogen production process. Moreover, hydrogen could be produced from highly concentrated OMW. The two-stage processes developed in the present investigation have a high potential for solving the environmental problems caused by OMW. (author)

  1. The two-stage aegean extension, from localized to distributed, a result of slab rollback acceleration

    OpenAIRE

    Brun, Jean-Pierre; Faccenna, Claudio; Gueydan, Frédéric; Sokoutis, Dimitrios; Philippon, Mélody; Kydonakis, Konstantinos; Gorini, Christian

    2016-01-01

    Back-arc extension in the Aegean, which has been driven by slab rollback since 45 Ma, is described here for the first time in two stages. From the Middle Eocene to the Middle Miocene, deformation was localized, leading to i) the exhumation of high-pressure metamorphic rocks to crustal depths, ii) the exhumation of high-temperature metamorphic rocks in core complexes and iii) the deposition of sedimentary basins. Since the Middle Miocene, extension has been distributed over the whole Aegean domai...

  2. A Two-stage Discriminating Framework for Making Supply Chain Operation Decisions under Uncertainties

    OpenAIRE

    Gu, H; Rong, G

    2010-01-01

    This paper addresses the problem of making supply chain operation decisions for refineries under two types of uncertainty: demand uncertainty and incomplete information shared with suppliers and transport companies. Most of the literature focuses on only one uncertainty, or treats multiple uncertainties identically. However, we note that in the real world refineries have more power to control uncertainties in procurement and transportation than in demand. Thus, a two-stage framework for dealing wit...

  3. Low-noise SQUIDs with large transfer: two-stage SQUIDs based on DROSs

    NARCIS (Netherlands)

    Podt, M.; Flokstra, Jakob; Rogalla, Horst

    2002-01-01

    We have realized a two-stage integrated superconducting quantum interference device (SQUID) system with a closed-loop bandwidth of 2.5 MHz, operated in a direct voltage readout mode. The corresponding flux slew rate was 1.3×10⁵ Φ₀/s and the measured white flux noise was 1.3 μΦ₀/√Hz at 4.2 K. The

  4. Latent Inhibition as a Function of US Intensity in a Two-Stage CER Procedure

    Science.gov (United States)

    Rodriguez, Gabriel; Alonso, Gumersinda

    2004-01-01

    An experiment is reported in which the effect of unconditioned stimulus (US) intensity on latent inhibition (LI) was examined, using a two-stage conditioned emotional response (CER) procedure in rats. A tone was used as the pre-exposed and conditioned stimulus (CS), and a foot-shock of either a low (0.3 mA) or high (0.7 mA) intensity was used as…

  5. Two stage dual gate MESFET monolithic gain control amplifier for Ka-band

    Science.gov (United States)

    Sokolov, V.; Geddes, J.; Contolatis, A.

    A monolithic two-stage gain control amplifier has been developed using submicron-gate-length dual-gate MESFETs fabricated on ion-implanted material. The amplifier has a gain of 12 dB at 30 GHz with a gain control range of over 30 dB. This ion-implanted monolithic IC is readily integrable with other phased-array receiver functions such as low-noise amplifiers and phase shifters.

  6. Exergy analysis of vapor compression refrigeration cycle with two-stage and intercooler

    Science.gov (United States)

    Kılıç, Bayram

    2012-07-01

    In this study, exergy analyses of a two-stage vapor compression refrigeration cycle with intercooler, using refrigerants R507, R407c, and R404a, were carried out. The necessary thermodynamic values for the analyses were calculated with the Solkane program. The coefficient of performance, exergetic efficiency and total irreversibility rate of the system under different operating conditions were investigated for these refrigerants, and the values for the alternative refrigerants were compared.

  7. Exergy analysis of vapor compression refrigeration cycle with two-stage and intercooler

    Energy Technology Data Exchange (ETDEWEB)

    Kilic, Bayram [Mehmet Akif Ersoy University, Bucak Emin Guelmez Vocational School, Bucak, Burdur (Turkey)

    2012-07-15

    In this study, exergy analyses of a two-stage vapor compression refrigeration cycle with intercooler, using refrigerants R507, R407c, and R404a, were carried out. The necessary thermodynamic values for the analyses were calculated with the Solkane program. The coefficient of performance, exergetic efficiency and total irreversibility rate of the system under different operating conditions were investigated for these refrigerants, and the values for the alternative refrigerants were compared. (orig.)

  8. Performance of Combined Water Turbine Darrieus-Savonius with Two Stage Savonius Buckets and Single Deflector

    OpenAIRE

    Sahim, Kaprawi; Santoso, Dyos; Sipahutar, Riman

    2016-01-01

    The objective of this study is to show the effect of a single deflector plate on the performance of a combined Darrieus-Savonius water turbine. To overcome the low torque of a solo Darrieus turbine, a deflector plate mounted in front of the returning Savonius bucket of a combined water turbine composed of Darrieus and Savonius rotors is proposed in this study. Several configurations of combined turbines with two-stage Savonius rotors were experimentally tested in a river of c...

  9. Perceived Health Benefits and Soy Consumption Behavior: Two-Stage Decision Model Approach

    OpenAIRE

    Moon, Wanki; Balasubramanian, Siva K.; Rimal, Arbindra

    2005-01-01

    A two-stage decision model is developed to assess the effect of perceived soy health benefits on consumers' decisions with respect to soy food. The first stage captures whether or not to consume soy food, while the second stage reflects how often to consume. A conceptual/analytical framework is also employed, combining Lancaster's characteristics model and Fishbein's multi-attribute model. Results show that perceived soy health benefits significantly influence both decision stages. Further, c...

  10. High quantum efficiency mid-wavelength interband cascade infrared photodetectors with one and two stages

    Science.gov (United States)

    Zhou, Yi; Chen, Jianxin; Xu, Zhicheng; He, Li

    2016-08-01

    In this paper, we report on mid-wavelength infrared interband cascade photodetectors grown on InAs substrates. We studied the transport properties of the photon-generated carriers in the interband cascade structures by comparing two different detectors, a single-stage detector and a two-stage cascade detector. The two-stage device showed a quantum efficiency of around 19.8% at room temperature, and a clear optical response was measured even at a temperature of 323 K. The two detectors showed similar Johnson-noise limited detectivity. The peak detectivity of the one- and two-stage devices was measured to be 2.15 × 10¹⁴ cm·Hz½/W and 2.19 × 10¹⁴ cm·Hz½/W at 80 K, and 1.21 × 10⁹ cm·Hz½/W and 1.23 × 10⁹ cm·Hz½/W at 300 K, respectively. The 300 K background limited infrared performance (BLIP) operation temperature is estimated to be over 140 K.

  11. Development of Two-Stage Stirling Cooler for ASTRO-F

    Science.gov (United States)

    Narasaki, K.; Tsunematsu, S.; Ootsuka, K.; Kyoya, M.; Matsumoto, T.; Murakami, H.; Nakagawa, T.

    2004-06-01

    A two-stage small Stirling cooler has been developed and tested for the infrared astronomical satellite ASTRO-F, which is planned to be launched on a Japanese M-V rocket in 2005. ASTRO-F has a hybrid cryogenic system that combines superfluid liquid helium (HeII) and two-stage Stirling coolers. The mechanical cooler has a two-stage displacer driven by a linear motor in a cold head, and a new linear-ball-bearing system for the piston-supporting structure in a compressor. The linear-ball-bearing supporting system achieves a piston clearance seal, long-piston-stroke operation and low-frequency operation. The typical cooling power is 200 mW at 20 K, and the total input power to the compressor and the cold head is below 90 W without driver electronics. The engineering, prototype and flight models of the cooler have been fabricated and evaluated to verify the capability for ASTRO-F. This paper describes the design of the cooler and the results from verification tests including the cooler performance test, thermal vacuum test, vibration test and lifetime test.

  12. Performance analysis of RDF gasification in a two stage fluidized bed-plasma process.

    Science.gov (United States)

    Materazzi, M; Lettieri, P; Taylor, R; Chapman, C

    2016-01-01

    The major technical problems faced by stand-alone fluidized bed gasifiers (FBG) for waste-to-gas applications are intrinsically related to the composition and physical properties of waste materials such as RDF. The high quantity of ash and volatile material in RDF can decrease the thermal output, create severe ash clinkering, and increase emissions of tars and CO2, thus affecting operability for clean syngas generation at industrial scale. By contrast, a two-stage process which separates primary gasification from selective tar and ash conversion is inherently more forgiving and stable. This can be achieved with a separate plasma converter, which has been used successfully in conjunction with conventional thermal treatment units for its ability to 'polish' the producer gas of organic contaminants and to collect the inorganic fraction in a molten (and inert) state. This research focused on the performance analysis of a two-stage fluid bed gasification-plasma process to transform solid waste into clean syngas. A thermodynamic assessment using the two-stage equilibrium method was carried out to determine optimum conditions for the gasification of RDF and to understand the limitations and influence of the second stage on process performance (gas heating value, cold gas efficiency, carbon conversion efficiency), along with other parameters. A comparison with a different thermal refining stage, i.e. thermal cracking (via partial oxidation), was also performed. The analysis is supported by experimental data from a pilot plant.

  13. Continuous removal of endocrine disruptors by versatile peroxidase using a two-stage system.

    Science.gov (United States)

    Taboada-Puig, Roberto; Lu-Chau, Thelmo A; Eibes, Gemma; Feijoo, Gumersindo; Moreira, Maria T; Lema, Juan M

    2015-01-01

    The oxidant Mn³⁺-malonate, generated by the ligninolytic enzyme versatile peroxidase in a two-stage system, was used for the continuous removal of endocrine disrupting compounds (EDCs) from synthetic and real wastewaters. One plasticizer (bisphenol-A), one bactericide (triclosan) and three estrogenic compounds (estrone, 17β-estradiol, and 17α-ethinylestradiol) were removed from wastewater at degradation rates in the range of 28-58 µg/L·min, with low enzyme inactivation. First, the optimization of three main parameters affecting the generation of Mn³⁺-malonate (hydraulic retention time as well as Na-malonate and H₂O₂ feeding rates) was conducted following a response surface methodology (RSM). Under optimal conditions, the degradation of the EDCs was proven at high (1.3-8.8 mg/L) and environmental (1.2-6.1 µg/L) concentrations. Finally, when the two-stage system was compared with a conventional enzymatic membrane reactor (EMR) using the same enzyme, a 14-fold increase of the removal efficiency was observed. At the same time, operational problems found during EDCs removal in the EMR system (e.g., clogging of the membrane and enzyme inactivation) were avoided by physically separating the stages of complex formation and pollutant oxidation, allowing the system to be operated for a longer period (∼8 h). This study demonstrates the feasibility of the two-stage enzymatic system for removing EDCs both at high and environmental concentrations.
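
    The response surface methodology step mentioned above follows a generic pattern: fit a second-order polynomial to designed experiments, then solve for the stationary point. A minimal sketch follows; the factors, data, and coefficients are simulated for illustration, not taken from the paper:

```python
import numpy as np

# Hypothetical response-surface data: three coded factors (-1..1), e.g. HRT,
# malonate feed rate, H2O2 feed rate, and a measured degradation rate.
rng = np.random.default_rng(2)
F = rng.uniform(-1, 1, size=(20, 3))
rate = (40 - 5*F[:, 0]**2 - 3*F[:, 1]**2 - 4*F[:, 2]**2
           + 2*F[:, 0] + rng.normal(0, 0.5, 20))

# Full second-order model: intercept, linear, interaction, and square terms.
x1, x2, x3 = F.T
D = np.column_stack([np.ones(len(F)), x1, x2, x3,
                     x1*x2, x1*x3, x2*x3, x1**2, x2**2, x3**2])
beta, *_ = np.linalg.lstsq(D, rate, rcond=None)

# Stationary point of the fitted surface: set the gradient of the quadratic
# form to zero and solve the resulting linear system.
b = beta[1:4]
B = np.array([[2*beta[7], beta[4],   beta[5]],
              [beta[4],   2*beta[8], beta[6]],
              [beta[5],   beta[6],   2*beta[9]]])
x_opt = np.linalg.solve(B, -b)
print("fitted optimum (coded units):", np.round(x_opt, 2))
```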

  14. A two-stage Stirling-type pulse tube cryocooler with a cold inertance tube

    Science.gov (United States)

    Gan, Z. H.; Fan, B. Y.; Wu, Y. Z.; Qiu, L. M.; Zhang, X. J.; Chen, G. B.

    2010-06-01

    A thermally coupled two-stage Stirling-type pulse tube cryocooler (PTC) with inertance tubes as phase shifters has been designed, manufactured and tested. In order to obtain a larger phase shift at the low acoustic power of about 2.0 W, a cold inertance tube as well as a cold reservoir for the second stage, precooled by the cold end of the first stage, was introduced into the system. The transmission line model was used to calculate the phase shift produced by the cold inertance tube. The effects of regenerator material, geometry and charging pressure on the performance of the second stage of the two-stage PTC were investigated based on the well-known regenerator model REGEN. Experiments on the two-stage PTC were carried out with an emphasis on the performance of the second stage. A lowest cooling temperature of 23.7 K, and 0.50 W at 33.9 K, were obtained with an input electric power of 150.0 W and an operating frequency of 40 Hz.

  15. Rehabilitation outcomes in patients with early and two-stage reconstruction of flexor tendon injuries.

    Science.gov (United States)

    Sade, Ilgin; İnanir, Murat; Şen, Suzan; Çakmak, Esra; Kablanoğlu, Serkan; Selçuk, Barin; Dursun, Nigar

    2016-08-01

    [Purpose] The primary aim of this study was to assess rehabilitation outcomes for early and two-stage repair of hand flexor tendon injuries. The secondary purpose was to compare the findings between treatment groups. [Subjects and Methods] Twenty-three patients were included in this study. The early repair (n=14) and two-stage repair (n=9) groups were included in a rehabilitation program that used hand splints. This retrospective study evaluated patients according to their demographic characteristics, including age, gender, injured hand, dominant hand, cause of injury, zone of injury, number of affected fingers, and accompanying injuries. Pain, range of motion, and grip strength were evaluated using a visual analog scale, goniometer, and dynamometer, respectively. [Results] Both groups showed significant improvements in pain and finger flexion after treatment compared with baseline measurements. However, no significant differences were observed between the two treatment groups. Similar results were obtained for grip strength and pinch grip, whereas gross grip was better in the early tendon repair group. [Conclusion] Early and two-stage reconstruction in patients with flexor tendon injuries can be performed with similarly favorable responses and effective rehabilitation programs.

  16. A Comparison of Direct and Two-Stage Transportation of Patients to Hospital in Poland

    Directory of Open Access Journals (Sweden)

    Anna Rosiek

    2015-04-01

    Background: The rapid international expansion of telemedicine reflects the growth of technological innovations. This technological advancement is transforming the way in which patients can receive health care. Materials and Methods: The study was conducted in Poland, at the Department of Cardiology of the Regional Hospital of Louis Rydygier in Torun. The researchers analyzed the delay in the treatment of patients with acute coronary syndrome. The study was conducted as a survey and examined 67 consecutively admitted patients treated invasively in a two-stage transport system. Data were analyzed statistically. Results: Two-stage transportation does not meet the timeframe guidelines for the treatment of patients with acute myocardial infarction. Intervals for the analyzed group of patients were statistically significant (p < 0.0001). Conclusions: Direct transportation of the patient to a reference center with an interventional cardiology laboratory has a significant impact on reducing in-hospital delay for patients with acute coronary syndrome. Perspectives: This article presents the results of two-stage transportation of patients with acute coronary syndrome. This measure could help clinicians who seek to assess the time needed for intervention. It also shows how important the time from the onset of chest pain is, and how it may contribute to patient disability, death or well-being.

  17. Two-Stage Liver Transplantation with Temporary Porto-Middle Hepatic Vein Shunt

    Directory of Open Access Journals (Sweden)

    Giovanni Varotti

    2010-01-01

    Two-stage liver transplantation (LT) has been reported for cases of fulminant liver failure that can lead to toxic hepatic syndrome, or massive hemorrhages resulting in uncontrollable bleeding. Technically, the first stage of the procedure consists of a total hepatectomy with preservation of the recipient's inferior vena cava (IVC), followed by the creation of a temporary end-to-side porto-caval shunt (TPCS). The second stage consists of removing the TPCS and implanting a liver graft when one becomes available. We report a case of a two-stage total hepatectomy and LT in which a temporary end-to-end anastomosis between the portal vein and the middle hepatic vein (TPMHV) was performed as an alternative to the classic end-to-end TPCS. The creation of a TPMHV proved technically feasible and showed some advantages compared to the standard TPCS. In cases in which a two-stage LT with side-to-side caval reconstruction is utilized, the TPMHV can be considered a safe and effective alternative to the standard TPCS.

  18. Two-stage residual inclusion estimation: addressing endogeneity in health econometric modeling.

    Science.gov (United States)

    Terza, Joseph V; Basu, Anirban; Rathouz, Paul J

    2008-05-01

    The paper focuses on two estimation methods that have been widely used to address endogeneity in empirical research in health economics and health services research: two-stage predictor substitution (2SPS) and two-stage residual inclusion (2SRI). 2SPS is the rote extension (to nonlinear models) of the popular linear two-stage least squares estimator. The 2SRI estimator is similar except that in the second-stage regression, the endogenous variables are not replaced by first-stage predictors; instead, first-stage residuals are included as additional regressors. In a generic parametric framework, we show that 2SRI is consistent and 2SPS is not. Results from a simulation study and an illustrative example also recommend against 2SPS and favor 2SRI. Our findings are important given that there are many prominent examples of the application of inconsistent 2SPS in the recent literature. This study can be used as a guide by future researchers in health economics who are confronted with endogeneity in their empirical work.
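
    A minimal simulation contrasting the two estimators as they are usually implemented; the variable names and data-generating parameters are illustrative, with statsmodels supplying the OLS first stage and logit second stage:

```python
import numpy as np
import statsmodels.api as sm

# Simulated endogeneity: x is correlated with the error through u;
# z is a valid instrument. All parameters are made up for illustration.
rng = np.random.default_rng(3)
n = 5000
z = rng.normal(size=n)
u = rng.normal(size=n)
x = 0.8 * z + u + rng.normal(size=n)
latent = 1.0 * x - 2.0 * u + rng.logistic(size=n)
y = (latent > 0).astype(float)

# First stage (shared by both methods): regress x on the instrument.
first = sm.OLS(x, sm.add_constant(z)).fit()
xhat, resid = first.fittedvalues, first.resid

# 2SPS: replace x with its first-stage prediction (inconsistent in nonlinear models).
m_2sps = sm.Logit(y, sm.add_constant(xhat)).fit(disp=0)
# 2SRI: keep x and add the first-stage residual as a control function.
m_2sri = sm.Logit(y, sm.add_constant(np.column_stack([x, resid]))).fit(disp=0)

print("2SPS coefficient on x:", round(m_2sps.params[1], 3))
print("2SRI coefficient on x:", round(m_2sri.params[1], 3))
```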

  19. Two-stage solar concentrators based on parabolic troughs: asymmetric versus symmetric designs.

    Science.gov (United States)

    Schmitz, Max; Cooper, Thomas; Ambrosetti, Gianluca; Steinfeld, Aldo

    2015-11-20

    While nonimaging concentrators can approach the thermodynamic limit of concentration, they generally suffer from poor compactness when designed for small acceptance angles, e.g., to capture direct solar irradiation. Symmetric two-stage systems utilizing an image-forming primary parabolic concentrator in tandem with a nonimaging secondary concentrator partially overcome this compactness problem, but their achievable concentration ratio is ultimately limited by the central obstruction caused by the secondary. Significant improvements can be realized by two-stage systems having asymmetric cross-sections, particularly for 2D line-focus trough designs. We therefore present a detailed analysis of two-stage line-focus asymmetric concentrators for flat receiver geometries and compare them to their symmetric counterparts. Exemplary designs are examined in terms of the key optical performance metrics, namely, geometric concentration ratio, acceptance angle, concentration-acceptance product, aspect ratio, active area fraction, and average number of reflections. Notably, we show that asymmetric designs can achieve significantly higher overall concentrations and are always more compact than symmetric systems designed for the same concentration ratio. Using this analysis as a basis, we develop novel asymmetric designs, including two-wing and nested configurations, which surpass the optical performance of two-mirror aplanats and are comparable with the best reported 2D simultaneous multiple surface designs for both hollow and dielectric-filled secondaries.
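
    For orientation, the thermodynamic limit and the concentration-acceptance product (CAP) quoted as metrics in this record are one-liners for 2D line-focus systems; the numbers below are purely illustrative:

```python
from math import sin, radians

# Thermodynamic (etendue) limit for a 2D line-focus concentrator with
# acceptance half-angle theta, and the concentration-acceptance product.
def c_max_2d(theta_acc_deg):
    return 1.0 / sin(radians(theta_acc_deg))

def cap_2d(C, theta_acc_deg):
    return C * sin(radians(theta_acc_deg))   # <= 1 for any physical 2D design

# Direct solar irradiation subtends ~0.27 deg half-angle; a practical trough
# might be designed for ~1 deg acceptance (values are illustrative).
print(round(c_max_2d(1.0), 1))    # ideal limit at 1 deg acceptance (~57.3)
print(round(cap_2d(40, 1.0), 2))  # CAP of a hypothetical C = 40 design
```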

  20. Industrial demonstration plant for the gasification of herb residue by fluidized bed two-stage process.

    Science.gov (United States)

    Zeng, Xi; Shao, Ruyi; Wang, Fang; Dong, Pengwei; Yu, Jian; Xu, Guangwen

    2016-04-01

    A fluidized bed two-stage gasification process, consisting of a fluidized-bed (FB) pyrolyzer and a transport fluidized bed (TFB) gasifier, has been proposed to gasify biomass for fuel gas production with low tar content. On the basis of our previous fundamental study, an autothermal two-stage gasifier has been designed and built to gasify a Chinese herb residue with a treating capacity of 600 kg/h. Testing data from the stable operational stage of the industrial demonstration plant showed that when the reaction temperatures of the pyrolyzer and gasifier were kept at about 700 °C and 850 °C, respectively, the heating value of the fuel gas reached 1200 kcal/Nm³, and the tar content in the produced fuel gas was about 0.4 g/Nm³. The results from this pilot industrial demonstration plant fully verified the feasibility and technical features of the proposed FB two-stage gasification process.

  1. Study on two stage activated carbon/HFC-134a based adsorption chiller

    Science.gov (United States)

    Habib, K.

    2013-06-01

    In this paper, a theoretical analysis of the performance of a thermally driven two-stage four-bed adsorption chiller utilizing low-grade waste heat at temperatures between 50°C and 70°C, in combination with a heat sink (cooling water) at 30°C, for air-conditioning applications is described. Activated carbon (AC) of type Maxsorb III with HFC-134a has been examined as the adsorbent/refrigerant pair. A FORTRAN simulation program was developed to analyze the influence of operating conditions (hot and cooling water temperatures and adsorption/desorption cycle times) on the cycle performance in terms of cooling capacity and COP. The main advantage of this two-stage chiller is that it can operate with smaller regenerating temperature lifts than other heat-driven single-stage chillers. Simulation results show that the two-stage chiller can be operated effectively with heat sources at 50°C and 70°C in combination with a coolant at 30°C.

  2. Effects of earthworm casts and zeolite on the two-stage composting of green waste

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Lu, E-mail: zhanglu1211@gmail.com; Sun, Xiangyang, E-mail: xysunbjfu@gmail.com

    2015-05-15

    Highlights: • Earthworm casts (EWCs) and clinoptilolite (CL) were used in green waste composting. • Addition of EWCs + CL improved physico-chemical and microbiological properties. • Addition of EWCs + CL extended the duration of thermophilic periods during composting. • Addition of EWCs + CL enhanced humification, cellulose degradation, and nutrients. • Combined addition of 0.30% EWCs + 25% CL reduced composting time to 21 days. - Abstract: Because it helps protect the environment and encourages economic development, composting has become a viable method for organic waste disposal. The objective of this study was to investigate the effects of earthworm casts (EWCs) (at 0.0%, 0.30%, and 0.60%) and zeolite (clinoptilolite, CL) (at 0%, 15%, and 25%) on the two-stage composting of green waste. The combination of EWCs and CL improved the conditions of the composting process and the quality of the compost products in terms of the thermophilic phase, humification, nitrification, microbial numbers and enzyme activities, the degradation of cellulose and hemicellulose, and physico-chemical characteristics and nutrient contents of final composts. The compost matured in only 21 days with the optimized two-stage composting method rather than in the 90–270 days required for traditional composting. The optimal two-stage composting and the best quality compost were obtained with 0.30% EWCs and 25% CL.

  3. A Two-stage injection-locked magnetron for accelerators with superconducting cavities

    CERN Document Server

    Kazakevich, Grigory; Flanagan, Gene; Marhauser, Frank; Neubauer, Mike; Yakovlev, Vyacheslav; Chase, Brian; Nagaitsev, Sergey; Pasquinelli, Ralph; Solyak, Nikolay; Tupikov, Vitali; Wolff, Daniel

    2013-01-01

    A concept for a two-stage injection-locked CW magnetron intended to drive Superconducting Cavities (SC) for intensity-frontier accelerators has been proposed. The concept considers two magnetrons whose output powers differ by 15-20 dB; the lower-power magnetron, frequency-locked by an external source, locks the higher-power magnetron. The injection-locked two-stage CW magnetron can be used as an RF power source for Fermilab's Project-X to feed each of the 1.3 GHz SC of the 8 GeV pulsed linac separately. We expect an output/locking power ratio of about 30-40 dB, assuming operation in a pulsed mode with a pulse duration of ~8 ms and a repetition rate of 10 Hz. The experimental setup of a two-stage magnetron utilising CW, S-band, 1 kW tubes operating at pulse durations of 1-10 ms, and the results obtained with it, are presented and discussed in this paper.

  4. Study on the Control Algorithm of Two-Stage DC-DC Converter for Electric Vehicles

    Directory of Open Access Journals (Sweden)

    Changhao Piao

    2014-01-01

    Fast response, high efficiency, and good reliability are very important characteristics of electric vehicle (EV) dc/dc converters. The two-stage dc-dc converter is one of the dc-dc topologies that can offer these characteristics for EVs. At present, nonlinear control is an active area of research in the field of control algorithms for dc-dc converters; however, very few papers study the two-stage converter for EVs. In this paper, a fixed switching frequency sliding mode (FSFSM) controller and a double-integral sliding mode (DISM) controller for the two-stage dc-dc converter are proposed, and a conventional linear (lag) controller is chosen as the comparison. The performance of the proposed FSFSM controller is compared with that obtained by the lag controller. The satisfactory simulation and experiment results show that the FSFSM controller is capable of offering good large-signal operation with fast dynamic responses to the converter. Finally, some further simulation results are presented to prove that the DISM controller is a promising method for the converter to eliminate the steady-state error.

  5. The Hubble Space Telescope Cluster Supernova Survey: V. Improving the Dark Energy Constraints Above z>1 and Building an Early-Type-Hosted Supernova Sample

    CERN Document Server

    Suzuki, N; Lidman, C; Aldering, G; Amanullah, R; Barbary, K; Barrientos, L F; Botyanszki, J; Brodwin, M; Connolly, N; Dawson, K S; Dey, A; Doi, M; Donahue, M; Deustua, S; Eisenhardt, P; Ellingson, E; Faccioli, L; Fadeyev, V; Fakhouri, H K; Fruchter, A S; Gilbank, D G; Gladders, M D; Goldhaber, G; Gonzalez, A H; Goobar, A; Gude, A; Hattori, T; Hoekstra, H; Hsiao, E; Huang, X; Ihara, Y; Jee, M J; Johnston, D; Kashikawa, N; Koester, B; Konishi, K; Kowalski, M; Linder, E V; Lubin, L; Melbourne, J; Meyers, J; Morokuma, T; Munshi, F; Mullis, C; Oda, T; Panagia, N; Perlmutter, S; Postman, M; Pritchard, T; Rhodes, J; Ripoche, P; Rosati, P; Schlegel, D J; Spadafora, A; Stanford, S A; Stanishev, V; Stern, D; Strovink, M; Takanashi, N; Tokita, K; Wagner, M; Wang, L; Yasuda, N; Yee, H K C

    2011-01-01

    We present ACS, NICMOS, and Keck AO-assisted photometry of 20 Type Ia supernovae (SNe Ia) from the HST Cluster Supernova Survey. The SNe Ia were discovered over the redshift interval 0.623 < z < 1.415 and extend the sample of z > 1 SNe Ia. We describe how such a sample could be efficiently obtained by targeting cluster fields with WFC3 on HST.

  6. An improved two-stage dynamic programming/artificial neural network solution model to the unit commitment of thermal units

    Energy Technology Data Exchange (ETDEWEB)

    Abbasy, N.H. [College of Technological Studies, Shuwaikh (Kuwait); Elfayoumy, M.K. [Univ. of Alexandria (Egypt). Dept. of Electrical Engineering

    1995-11-01

    An improved two-stage solution model to the unit commitment of thermal units is developed in this paper. In the first stage, a pre-schedule is generated using a high-quality trained artificial neural net (ANN). A dynamic programming (DP) algorithm is implemented and applied in the second stage for the final determination of the commitment states. The developed solution model avoids the complications imposed by the generation of the variable window structure proposed by other techniques. A unified approach for the treatment of the ANN is also developed in the paper. The validity of the proposed technique is proved via numerical applications to both sample and small practical power systems. 12 refs, 9 tabs
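
    The second-stage DP can be pictured as a shortest path over unit on/off combinations across hours. The toy sketch below enumerates all states for a hypothetical 3-unit system (in the paper, the ANN pre-schedule would restrict this search); capacities, costs, and demands are invented.

```python
# Exhaustive DP over unit commitments for a toy 3-unit, 3-hour problem.
from itertools import product

cap   = [100, 80, 50]       # unit capacities [MW] (invented)
fuel  = [20.0, 25.0, 32.0]  # running costs [$/MWh]
start = [500., 300., 100.]  # startup costs [$]
demand = [120, 180, 90]     # hourly demand [MW]

states = list(product([0, 1], repeat=3))

def hour_cost(state, d):
    """Cheapest dispatch of the committed units for one hour."""
    avail = sum(c for c, on in zip(cap, state) if on)
    if avail < d:
        return float("inf")                  # infeasible commitment
    cost, rem = 0.0, d
    for c, f, on in sorted(zip(cap, fuel, state), key=lambda t: t[1]):
        if on and rem > 0:                   # load cheap units first
            g = min(c, rem); cost += f * g; rem -= g
    return cost

# hour 1: all units assumed initially off, so pay startup for each 'on'
best = {s: hour_cost(s, demand[0]) + sum(st * s[i] for i, st in enumerate(start))
        for s in states}
for d in demand[1:]:                         # DP transition hour by hour
    best = {s: min(best[p] + hour_cost(s, d) +
                   sum(start[i] for i in range(3) if s[i] and not p[i])
                   for p in states)
            for s in states}
print("minimum total cost: $", min(best.values()))
```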

  7. Clustered drug and sexual HIV risk among a sample of middle-aged injection drug users, Houston, Texas.

    Science.gov (United States)

    Noor, Syed W B; Ross, Michael W; Lai, Dejian; Risser, Jan M

    2013-01-01

    Recent studies have reported a clustered pattern of high-risk drug-using and sexual behaviors among younger injection drug users (IDUs); however, no studies have looked at this clustering pattern in relatively older IDUs. This analysis examines the interplay and overlap of drug and sexual HIV risk among a sample of middle-aged, long-term IDUs in Houston, Texas. Our study includes 452 eligible IDUs recruited into the 2009 National HIV Behavioral Surveillance project. Four separate multiple logistic regression models were built to examine the odds of reporting a given risk behavior. We constructed the most parsimonious multiple logistic regression model using a manual backward stepwise process. Participants were mostly male, older (mean age: 49.5±6.63), and non-Hispanic Black. The prevalence of receptive needle sharing, as well as of having multiple sex partners and having unprotected sex with a partner in exchange for money, drugs, or other things at last sex, was high. Unsafe injecting practices were associated with high-risk sexual behaviors. IDUs who used a needle after someone else had injected with it had higher odds of having more than three sex partners (odds ratio (OR) = 2.10, 95% confidence interval (CI): 1.40-3.12) in the last year, and those who shared drug preparation equipment had higher odds of having unprotected sex with an exchange partner (OR = 3.89, 95% CI: 1.66-9.09) at last sex. Additionally, homelessness was associated with unsafe injecting practices but not with high-risk sexual behaviors. Our results show that a majority of the sampled IDUs are practicing sexual as well as drug-using HIV risk behaviors. The observed clustering pattern of drug and sexual risk behavior among this middle-aged population is alarming and deserves the attention of HIV policy-makers and planners.
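
    For readers unfamiliar with the reported statistics, the hedged sketch below reproduces the style of analysis (logistic regression odds ratios with 95% CIs) on simulated data; the variable names and effect sizes are placeholders, not the study's dataset.

```python
# Odds ratios and 95% CIs from a logistic regression on simulated data.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 452                                      # sample size matching the study
df = pd.DataFrame({
    "receptive_sharing": rng.integers(0, 2, n),
    "homeless":          rng.integers(0, 2, n),
    "age":               rng.normal(49.5, 6.63, n),
})
# simulated outcome: >3 sex partners in the last year, with a true
# sharing effect of ln(2.10) ~ 0.74 to mimic the reported OR
logit = -1.0 + 0.74 * df["receptive_sharing"]
df["multi_partners"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X = sm.add_constant(df[["receptive_sharing", "homeless", "age"]])
res = sm.Logit(df["multi_partners"], X).fit(disp=0)
print(np.exp(res.params))       # odds ratios
print(np.exp(res.conf_int()))   # 95% confidence intervals
```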

  8. Time clustered sampling can inflate the inferred substitution rate in foot-and-mouth disease virus analyses

    DEFF Research Database (Denmark)

    Pedersen, Casper-Emil Tingskov; Frandsen, Peter; Wekesa, Sabenzia N.;

    2015-01-01

    With the emergence of analytical software for the inference of viral evolution, a number of studies have focused on estimating important parameters such as the substitution rate and the time to the most recent common ancestor (tMRCA) for rapidly evolving viruses. Coupled with an increasing...... through a study of the foot-and-mouth (FMD) disease virus serotypes SAT 1 and SAT 2. Our study shows that clustered temporal sampling in phylogenetic analyses of FMD viruses will strongly bias the inferences of substitution rates and tMRCA because the inferred rates in such data sets reflect a rate closer...... to the mutation rate rather than the substitution rate. Estimating evolutionary parameters from viral sequences should be performed with due consideration of the differences in short-term and longer-term evolutionary processes occurring within sets of temporally sampled viruses, and studies should carefully...

  9. Spectroscopy of 3, 4, 9, 10-perylenetetracarboxylic dianhydride (PTCDA) attached to rare gas samples: clusters vs. bulk matrices. I. Absorption spectroscopy.

    Science.gov (United States)

    Dvorak, Matthieu; Müller, Markus; Knoblauch, Tobias; Bünermann, Oliver; Rydlo, Alexandre; Minniberger, Stefan; Harbich, Wolfgang; Stienkemeier, Frank

    2012-10-28

    The interaction between 3, 4, 9, 10-perylenetetracarboxylic dianhydride (PTCDA) and rare gas or para-hydrogen samples is studied by means of laser-induced fluorescence excitation spectroscopy. The comparison between spectra of PTCDA embedded in a neon matrix and spectra attached to large neon clusters shows that these large organic molecules reside on the surface of the clusters when doped by the pick-up technique. PTCDA molecules can adopt different conformations when attached to argon, neon, and para-hydrogen clusters which implies that the surface of such clusters has a well-defined structure without liquid or fluxional properties. Moreover, a precise analysis of the doping process of these clusters reveals that the mobility of large molecules on the cluster surface is quenched, preventing agglomeration and complex formation.

  10. A two-stage optimization model for emergency material reserve layout planning under uncertainty in response to environmental accidents.

    Science.gov (United States)

    Liu, Jie; Guo, Liang; Jiang, Jiping; Jiang, Dexun; Liu, Rentao; Wang, Peng

    2016-06-05

    In emergency management relevant to pollution accidents, the efficiency of emergency rescues can be deeply influenced by a reasonable assignment of the available emergency materials to the related risk sources. In this study, a two-stage optimization framework is developed for emergency material reserve layout planning under uncertainty, to identify material warehouse locations and emergency material reserve schemes in the pre-accident phase when coping with potential environmental accidents. This framework is based on an integration of a hierarchical clustering analysis - improved center of gravity (HCA-ICG) model and a material warehouse location - emergency material allocation (MWL-EMA) model. First, decision alternatives are generated using HCA-ICG to identify newly built emergency material warehouses for risk sources which cannot be satisfied by existing ones in a time-effective manner. Second, emergency material reserve planning is obtained using MWL-EMA so that emergency materials are prepared in advance in a cost-effective manner. The optimization framework is then applied to emergency management system planning in Jiangsu province, China. The results demonstrate that the developed framework not only facilitates material warehouse selection but also effectively provides emergency material for emergency operations with a quick response.
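
    A minimal sketch of the clustering step, under assumptions: risk-source coordinates are grouped by hierarchical clustering and a candidate warehouse is placed at each cluster's demand-weighted centre of gravity. The data, cluster count, and weights are invented; the paper's ICG refinement and the MWL-EMA stage are not reproduced.

```python
# Hierarchical clustering of risk sources + centre-of-gravity siting.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(1)
xy = rng.uniform(0, 100, size=(30, 2))     # risk-source locations [km]
w = rng.uniform(1, 10, size=30)            # emergency-material demand

Z = linkage(xy, method="ward")             # agglomerative clustering
labels = fcluster(Z, t=4, criterion="maxclust")   # four warehouse zones

for k in sorted(set(labels)):
    m = labels == k
    centre = np.average(xy[m], axis=0, weights=w[m])  # centre of gravity
    print(f"zone {k}: warehouse at ({centre[0]:.1f}, {centre[1]:.1f}), "
          f"serves {m.sum()} risk sources")
```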

  11. A two stage algorithm for target and suspect analysis of produced water via gas chromatography coupled with high resolution time of flight mass spectrometry.

    Science.gov (United States)

    Samanipour, Saer; Langford, Katherine; Reid, Malcolm J; Thomas, Kevin V

    2016-09-09

    Gas chromatography coupled with high resolution time-of-flight mass spectrometry (GC-HR-TOFMS) has gained popularity for the target and suspect analysis of complex samples. However, confident detection of target/suspect analytes in complex samples, such as produced water, remains a challenging task. Here we report on the development and validation of a two-stage algorithm for the confident target and suspect analysis of produced water extracts. We performed both target and suspect analysis for 48 standards, a mixture of 28 aliphatic hydrocarbons and 20 alkylated phenols, in 3 produced water extracts. The two-stage algorithm produces a database of chemical standard spectra in the first stage, which is used for target and suspect analysis during the second stage. The first stage is carried out in five steps via an algorithm referred to here as the unique ion extractor (UIE). During the first step, the m/z values in the spectrum of a standard that do not belong to that standard are removed in order to produce a clean spectrum; during the last step, the cleaned spectrum is calibrated. The Dot-product algorithm, during the second stage, uses the cleaned and calibrated spectra of the standards for both target and suspect analysis. To validate the two-stage algorithm, we performed the target analysis of the 48 standards in all 3 samples via conventional methods. The two-stage algorithm was demonstrated to be more robust, reliable, and less sensitive to the signal-to-noise ratio (S/N) than the conventional method. The Dot-product algorithm showed a lower potential for producing false positives than the conventional methods when dealing with complex samples. We also evaluated the effect of mass accuracy on the performance of the Dot-product algorithm. Our results indicate the crucial importance of HR-MS data and mass accuracy for confident suspect analysis in complex samples.
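
    The second-stage matching can be illustrated with a normalized dot product (cosine score) between a cleaned library spectrum and a sample spectrum binned onto a shared m/z grid. This is a generic sketch of a dot-product score, not the authors' implementation; the intensity values are made up.

```python
# Dot-product (cosine) matching of two aligned mass spectra.
import numpy as np

def dot_product_score(lib, qry):
    """Cosine similarity between two intensity vectors on a shared m/z grid."""
    lib = np.asarray(lib, float)
    qry = np.asarray(qry, float)
    return float(lib @ qry / (np.linalg.norm(lib) * np.linalg.norm(qry)))

# toy "cleaned and calibrated" library spectrum vs. a raw sample spectrum
library = np.array([0, 5, 0, 100, 10, 0, 40, 0, 2, 0])
sample  = np.array([3, 6, 1,  90, 12, 2, 35, 1, 4, 1])   # + matrix noise

score = dot_product_score(library, sample)
print(f"match score = {score:.3f}  (accept above a validated cutoff)")
```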

  12. Kinematic analysis of a sample of X-ray luminous distant galaxy clusters. The $L_X$ - $\\sigma_v$ relation in the $z>0.6$ universe

    CERN Document Server

    Nastasi, Alessandro; Fassbender, Rene; de Hoon, Arjen; Lamer, Georg; Mohr, Joseph J; Padilla, Nelson; Pratt, Gabriel W; Quintana, Hernan; Rosati, Piero; Santos, Joana S; Schwope, Axel D; Šuhada, Robert; Verdugo, Miguel

    2013-01-01

    Observations and cosmological simulations show galaxy clusters as a family of nearly self-similar objects with properties that can be described by scaling relations as a function of, e.g., mass and time. Here we study the scaling relations between the galaxy velocity dispersion and X-ray quantities such as the X-ray bolometric luminosity and temperature in galaxy clusters at high redshifts (0.64 $\leq$ z $\leq$ 1.46). For the analysis, we use a set of 15 distant galaxy clusters extracted from the literature plus a sample of 10 newly discovered clusters selected in X-rays by the \XMM Distant Cluster Project (XDCP), with more than 10 confirmed spectroscopic members per cluster. We study the evolution of this scaling relation by comparing the high redshift results with the data from the local HIFLUGCS sample. We also investigated the $L_X - T_X$ and the $\sigma_v - T_X$ relations for the 15 clusters in the literature sample. We report th...
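
    A scaling relation such as $L_X - \sigma_v$ is conventionally fit as a straight line in log-log space. The sketch below does this with an ordinary least-squares fit on synthetic clusters; published analyses typically use regression methods that handle measurement errors and intrinsic scatter (e.g. BCES), which this toy omits.

```python
# Power-law scaling fit in log-log space on synthetic cluster data.
import numpy as np

rng = np.random.default_rng(2)
sigma_v = rng.uniform(400, 1200, 25)                 # velocity dispersions [km/s]
logL = 44.0 + 4.0 * np.log10(sigma_v / 700.0) \
       + rng.normal(0, 0.15, sigma_v.size)           # log L_X [erg/s] + scatter

slope, intercept = np.polyfit(np.log10(sigma_v / 700.0), logL, 1)
print(f"L_X ~ sigma_v^{slope:.2f}, normalization 10^{intercept:.2f} erg/s")
```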

  13. Two-Stage Power Factor Corrected Power Supplies: The Low Component-Stress Approach

    DEFF Research Database (Denmark)

    Petersen, Lars; Andersen, Michael Andreas E.

    2002-01-01

    The discussion concerning the use of single-stage versus two-stage PFC solutions has been going on for the last decade, and it continues. The purpose of this paper is to direct the focus back on how the power is processed and not so much on the number of stages or the amount of power processed....... The performance of the basic DC/DC topologies is reviewed with focus on the component stress. The knowledge obtained in this process is used to review some examples of the alternative PFC solutions and compare these solutions with the basic two-stage PFC solution....

  14. Two-stage bargaining with coverage extension in a dual labour market

    DEFF Research Database (Denmark)

    Roberts, Mark A.; Stæhr, Karsten; Tranæs, Torben

    2000-01-01

    This paper studies coverage extension in a simple general equilibrium model with a dual labour market. The union sector is characterized by two-stage bargaining whereas the firms set wages in the non-union sector. In this model firms and unions of the union sector have a commonality of interest in extending coverage of a minimum wage to the non-union sector. Furthermore, the union sector does not seek to increase the non-union wage to a level above the market-clearing wage. In fact, it is optimal for the union sector to impose a market-clearing wage on the non-union sector. Finally, coverage......

  15. SQL/JavaScript Hybrid Worms As Two-stage Quines

    CERN Document Server

    Orlicki, José I

    2009-01-01

    Delving into present trends and anticipating future malware trends, a hybrid, SQL on the server-side, JavaScript on the client-side, self-replicating worm based on two-stage quines was designed and implemented on an ad-hoc scenario instantiating a very common software pattern. The proof of concept code combines techniques seen in the wild, in the form of SQL injections leading to cross-site scripting JavaScript inclusion, and seen in the laboratory, in the form of SQL quines propagated via RFIDs, resulting in a hybrid code injection. General features of hybrid worms are also discussed.

  16. Two stage DOA and Fundamental Frequency Estimation based on Subspace Techniques

    DEFF Research Database (Denmark)

    Zhou, Zhenhua; Christensen, Mads Græsbøll; So, Hing-Cheung

    2012-01-01

    In this paper, the problem of fundamental frequency and direction-of-arrival (DOA) estimation for multi-channel harmonic sinusoidal signals is addressed. The estimation procedure consists of two stages. Firstly, by making use of the subspace technique and Markov-based eigenanalysis, a multi-channel optimally weighted harmonic multiple signal classification (MCOW-HMUSIC) estimator is devised for the estimation of fundamental frequencies. Secondly, the spatio-temporal multiple signal classification (ST-MUSIC) estimator is proposed for the estimation of DOA with the estimated frequencies. Statistical evaluation with synthetic signals shows the high accuracy of the proposed methods compared with their non-weighting versions....
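
    As a compact illustration of the subspace machinery behind MUSIC-type estimators, the sketch below estimates two sinusoidal frequencies from a noisy single-channel record via a sample covariance matrix, its noise subspace, and a pseudospectrum. The MCOW-HMUSIC weighting and the spatio-temporal DOA stage are beyond this toy; all parameters are arbitrary demo values.

```python
# Basic MUSIC frequency estimation on a synthetic two-tone signal.
import numpy as np

rng = np.random.default_rng(0)
fs, N = 1000.0, 256
n = np.arange(N)
f_true = (110.0, 220.0)                        # two harmonics of 110 Hz
x = sum(np.exp(2j * np.pi * f * n / fs) for f in f_true)
x = x + 0.1 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))

m = 32                                         # covariance order
X = np.array([x[t:t + m] for t in range(N - m)])
R = X.conj().T @ X / X.shape[0]                # sample covariance matrix

w, V = np.linalg.eigh(R)                       # eigenvalues ascending
En = V[:, : m - len(f_true)]                   # noise subspace

freqs = np.linspace(50, 400, 2000)
a = np.exp(2j * np.pi * np.outer(np.arange(m), freqs / fs))  # steering vecs
P = 1.0 / np.linalg.norm(En.conj().T @ a, axis=0) ** 2       # pseudospectrum

peaks = np.where((P[1:-1] > P[:-2]) & (P[1:-1] > P[2:]))[0] + 1
best = peaks[np.argsort(P[peaks])[-2:]]        # two strongest local maxima
print("estimated frequencies [Hz]:", np.sort(freqs[best]).round(1))
```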

  17. Performance of the SITP 35K two-stage Stirling cryocooler

    Science.gov (United States)

    Liu, Dongyu; Li, Ao; Li, Shanshan; Wu, Yinong

    2010-04-01

    This paper presents the design, development, optimization experiments, and performance of the SITP two-stage Stirling cryocooler. The geometry of the cooler, especially the diameter and length of the regenerator, was analyzed. Operating parameters were optimized experimentally to maximize the second-stage cooling performance. In the tests the cooler was operated at various drive frequencies, phase shifts between displacer and piston, and fill pressures. The experimental results indicate that the cryocooler has a high efficiency, with a performance of 0.85 W at 35 K for a compressor input power of 56 W, at a phase shift of 65°, an operating frequency of 40 Hz, and 1 MPa fill pressure.

  18. Two-Stage Bulk Electron Heating in the Diffusion Region of Anti-Parallel Symmetric Reconnection

    CERN Document Server

    Le, Ari; Daughton, William

    2016-01-01

    Electron bulk energization in the diffusion region during anti-parallel symmetric reconnection entails two stages. First, the inflowing electrons are adiabatically trapped and energized by an ambipolar parallel electric field. Next, the electrons gain energy from the reconnection electric field as they undergo meandering motion. These collisionless mechanisms have been described previously, and they lead to highly structured electron velocity distributions. Nevertheless, a simplified control-volume analysis gives estimates for how the net effective heating scales with the upstream plasma conditions, in agreement with fully kinetic simulations and spacecraft observations.

  19. Use of two-stage membrane countercurrent cascade for natural gas purification from carbon dioxide

    Science.gov (United States)

    Kurchatov, I. M.; Laguntsov, N. I.; Karaseva, M. D.

    2016-09-01

    A membrane technology scheme, presented as a two-stage countercurrent recirculating cascade, is offered to solve the problem of natural gas dehydration and purification from CO2. The first stage is a single divider, and the second stage is a recirculating two-module divider. This scheme allows natural gas to be cleaned of impurities with any desired degree of methane extraction. In this paper, the optimal values of the basic parameters of the selected technological scheme are determined. An estimation of energy efficiency was carried out, taking into account the energy consumption of the interstage compressor and methane losses in energy units.

  20. Forecasting long memory series subject to structural change: A two-stage approach

    DEFF Research Database (Denmark)

    Papailias, Fotis; Dias, Gustavo Fruet

    2015-01-01

    A two-stage forecasting approach for long memory time series is introduced. In the first step, we estimate the fractional exponent and, by applying the fractional differencing operator, obtain the underlying weakly dependent series. In the second step, we produce multi-step-ahead forecasts...... for the weakly dependent series and obtain their long memory counterparts by applying the fractional cumulation operator. The methodology applies to both stationary and nonstationary cases. Simulations and an application to seven time series provide evidence that the new methodology is more robust to structural...... change and yields good forecasting results....
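
    The two stages map directly onto the fractional differencing operator $(1-B)^d$ and its inverse. Below is a sketch assuming a known $d$ and a crude AR(1) model for the weakly dependent series; the paper estimates the fractional exponent and uses richer forecasting models.

```python
# Two-stage long memory forecast: fractional differencing, forecast,
# fractional cumulation (expanding-window, zero pre-sample convention).
import numpy as np

def frac_diff(x, d):
    """Apply (1 - B)**d.  Weights: w0 = 1, wk = w_{k-1} * (k - 1 - d) / k."""
    w = np.ones(len(x))
    for k in range(1, len(x)):
        w[k] = w[k - 1] * (k - 1 - d) / k
    return np.array([w[: t + 1][::-1] @ x[: t + 1] for t in range(len(x))])

rng = np.random.default_rng(3)
d = 0.4
x = frac_diff(rng.standard_normal(500), -d)   # toy ARFIMA(0, d, 0) series

u = frac_diff(x, d)                           # stage 1: weakly dependent series
phi = (u[1:] @ u[:-1]) / (u[:-1] @ u[:-1])    # crude AR(1) fit to u
u_next = phi * u[-1]                          # one-step-ahead forecast of u

x_next = frac_diff(np.append(u, u_next), -d)[-1]   # stage 2: cumulate back
print(f"one-step-ahead forecast of the long memory series: {x_next:.3f}")
```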

  1. Space Station Freedom carbon dioxide removal assembly two-stage rotary sliding vane pump

    Science.gov (United States)

    Matteau, Dennis

    1992-07-01

    The design and development of a positive displacement pump selected to operate as an essential part of the carbon dioxide removal assembly (CDRA) are described. An oilless two-stage rotary sliding vane pump was selected as the optimum concept to meet the CDRA application requirements. This positive displacement pump is characterized by low weight and small envelope per unit flow, ability to pump saturated gases and moderate amount of liquid, small clearance volumes, and low vibration. It is easily modified to accommodate several stages on a single shaft optimizing space and weight, which makes the concept ideal for a range of demanding space applications.

  2. Two-Stage Maximum Likelihood Estimation (TSMLE) for MT-CDMA Signals in the Indoor Environment

    Directory of Open Access Journals (Sweden)

    Sesay Abu B

    2004-01-01

    Full Text Available This paper proposes a two-stage maximum likelihood estimation (TSMLE) technique suited for multitone code division multiple access (MT-CDMA) systems. Here, an analytical framework is presented for the indoor environment for determining the average bit error rate (BER) of the system over Rayleigh and Ricean fading channels. The analytical model is derived for the quadrature phase shift keying (QPSK) modulation technique by taking into account the number of tones, signal bandwidth (BW), bit rate, and transmission power. Numerical results are presented to validate the analysis and to justify the approximations made therein. Moreover, these results are shown to agree completely with those obtained by simulation.

  3. Two-Stage Electric Vehicle Charging Coordination in Low Voltage Distribution Grids

    DEFF Research Database (Denmark)

    Bhattarai, Bishnu Prasad; Bak-Jensen, Birgitte; Pillai, Jayakrishnan Radhakrishna

    2014-01-01

    Increased environmental awareness in the recent years has encouraged rapid growth of renewable energy sources (RESs); especially solar PV and wind. One of the effective solutions to compensate intermittencies in generation from the RESs is to enable consumer participation in demand response (DR......). Being a sizable rated element, electric vehicles (EVs) can offer a great deal of demand flexibility in future intelligent grids. This paper first investigates and analyzes driving pattern and charging requirements of EVs. Secondly, a two-stage charging algorithm, namely local adaptive control...

  4. Health care planning and education via gaming-simulation: a two-stage experiment.

    Science.gov (United States)

    Gagnon, J H; Greenblat, C S

    1977-01-01

    A two-stage process of gaming-simulation design was conducted: the first stage of design concerned national planning for hemophilia care; the second stage of design was for gaming-simulation concerning the problems of hemophilia patients and health care providers. The planning design was intended to be adaptable to large-scale planning for a variety of health care problems. The educational game was designed using data developed in designing the planning game. A broad range of policy-makers participated in the planning game.

  5. Influence of capacity- and time-constrained intermediate storage in two-stage food production systems

    DEFF Research Database (Denmark)

    Akkerman, Renzo; van Donk, Dirk Pieter; Gaalman, Gerard

    2007-01-01

    In food processing, two-stage production systems with a batch processor in the first stage and packaging lines in the second stage are common and mostly separated by capacity- and time-constrained intermediate storage. This combination of constraints is common in practice, but the literature hardly...... of systems like this. Contrary to the common sense in operations management, the LPT rule is able to maximize the total production volume per day. Furthermore, we show that adding one tank has considerable effects. Finally, we conclude that the optimal setup frequency for batches in the first stage...

  6. The global stability of a delayed predator-prey system with two stage-structure

    Energy Technology Data Exchange (ETDEWEB)

    Wang Fengyan [College of Science, Jimei University, Xiamen Fujian 361021 (China)], E-mail: wangfy68@163.com; Pang Guoping [Department of Mathematics and Computer Science, Yulin Normal University, Yulin Guangxi 537000 (China)

    2009-04-30

    Based on the classical delayed stage-structured model and the Lotka-Volterra predator-prey model, we introduce and study a delayed predator-prey system in which both prey and predator have two stages, an immature stage and a mature stage. The time delays are the lengths of time between birth and maturity for the prey and predator species. Results on the global asymptotic stability of nonnegative equilibria of the delay system are given, which generalize earlier work and suggest that good continuity exists between the predator-prey system and its corresponding stage-structured system.

  7. A Two-Stage Assembly-Type Flowshop Scheduling Problem for Minimizing Total Tardiness

    Directory of Open Access Journals (Sweden)

    Ju-Yong Lee

    2016-01-01

    Full Text Available This research considers a two-stage assembly-type flowshop scheduling problem with the objective of minimizing the total tardiness. The first stage consists of two independent machines, and the second stage consists of a single machine. Two types of components are fabricated in the first stage, and then they are assembled in the second stage. Dominance properties and lower bounds are developed, and a branch and bound algorithm is presented that uses these properties and lower bounds as well as an upper bound obtained from a heuristic algorithm. The algorithm performance is evaluated using a series of computational experiments on randomly generated instances and the results are reported.
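
    Evaluating one candidate sequence for this problem is straightforward and is the inner loop of any branch and bound: stage-2 assembly of a job can start only when both of its components are done and the assembly machine is free. A toy sketch with invented processing times and due dates:

```python
# Total tardiness of a sequence in a two-stage assembly-type flowshop.
from itertools import permutations

t1 = [4, 2, 6]        # stage-1 machine 1: component A times (invented)
t2 = [3, 5, 2]        # stage-1 machine 2: component B times
ta = [2, 3, 4]        # stage-2 assembly times
due = [10, 12, 18]    # due dates

def total_tardiness(seq):
    m1 = m2 = asm = 0.0            # machine-available times
    tard = 0.0
    for j in seq:
        m1 += t1[j]                # component A completion
        m2 += t2[j]                # component B completion
        start = max(asm, m1, m2)   # assembly needs both parts + free machine
        asm = start + ta[j]
        tard += max(0.0, asm - due[j])
    return tard

best = min(permutations(range(3)), key=total_tardiness)
print("best sequence:", best, "total tardiness:", total_tardiness(best))
```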

  8. Biomass waste gasification - can the two-stage process be suitable for tar reduction and power generation?

    Science.gov (United States)

    Sulc, Jindřich; Stojdl, Jiří; Richter, Miroslav; Popelka, Jan; Svoboda, Karel; Smetana, Jiří; Vacek, Jiří; Skoblja, Siarhei; Buryan, Petr

    2012-04-01

    A pilot scale gasification unit with a novel co-current, updraft arrangement in the first stage and a counter-current downdraft in the second stage was developed and used to study the effects of two-stage gasification, in comparison with one-stage gasification of biomass (wood pellets), on fuel gas composition and attainable gas purity. Significant producer gas parameters (gas composition, heating value, content of tar compounds, content of inorganic gas impurities) were compared for the two-stage and the one-stage gasification arrangements, the latter with only an upward moving bed (co-current updraft). The main novel features of the gasifier conception include a grate-less reactor, an upward moving bed of biomass particles (e.g. pellets) driven by a screw elevator with changeable rotational speed, and a gradually expanding diameter of the cylindrical reactor in the part above the upper end of the screw. The gasifier concept and arrangement are considered suitable for a thermal power range of 100-350 kW(th). The second stage of the gasifier served mainly for tar compound destruction/reforming at increased temperature (around 950°C) and for the gasification reaction of the fuel gas with char. The second stage used additional combustion of the fuel gas by preheated secondary air to attain a higher temperature and faster gasification of the remaining char from the first stage. The measurements of gas composition and tar compound contents confirmed the superiority of the two-stage gasification system, with a drastic decrease of aromatic compounds with two or more benzene rings by 1-2 orders of magnitude. On the other hand, the two-stage gasification (with overall ER=0.71) led to a substantial reduction of the gas heating value (LHV=3.15 MJ/Nm(3)), an elevation of gas volume, and an increase of the nitrogen content in the fuel gas. The increased temperature (>950°C) at the entrance to the char bed also caused a substantial decrease of the ammonia content in the fuel gas. The char with higher content of ash leaving the

  9. Two-stage continuous fermentation of Saccharomycopsis fibuligera and Candida utilis.

    Science.gov (United States)

    Admassu, W; Korus, R A; Heimsch, R C

    1983-11-01

    Biomass production and carbohydrate reduction were determined for a two-stage continuous fermentation process with a simulated potato processing waste feed. The amylolytic yeast Saccharomycopsis fibuligera was grown in the first stage, and a mixed culture of S. fibuligera and Candida utilis was maintained in the second stage. All conditions for the first and second stages were fixed except the flow of medium to the second stage, which was varied. Maximum biomass production occurred at a second-stage dilution rate, D(2), of 0.27 h(-1). Carbohydrate reduction was inversely proportional to D(2) between 0.10 and 0.35 h(-1).

  10. Structural requirements and basic design concepts for a two-stage winged launcher system (Saenger)

    Science.gov (United States)

    Kuczera, H.; Keller, K.; Kunz, R.

    1988-10-01

    An evaluation is made of materials and structures technologies deemed capable of increasing the mass fraction-to-orbit of the Saenger two-stage launcher system while adequately addressing thermal-control and cryogenic fuel storage insulation problems. Except in its leading edges, nose cone, and airbreathing propulsion system air intakes, Ti alloy-based materials will be the basis of the airframe primary structure. Lightweight metallic thermal-protection measures will be employed. Attention is given to the design of the large lower stage element of Saenger.

  11. Accuracy of the One-Stage and Two-Stage Impression Techniques: A Comparative Analysis

    OpenAIRE

    Ladan Jamshidy; Hamid Reza Mozaffari; Payam Faraji; Roohollah Sharifi

    2016-01-01

    Introduction. One of the main steps of impression is the selection and preparation of an appropriate tray. Hence, the present study aimed to analyze and compare the accuracy of one- and two-stage impression techniques. Materials and Methods. A resin laboratory-made model, as the first molar, was prepared by standard method for full crowns with processed preparation finish line of 1 mm depth and convergence angle of 3-4°. Impression was made 20 times with one-stage technique and 20 times with ...

  12. An Investigation on the Formation of Carbon Nanotubes by Two-Stage Chemical Vapor Deposition

    Directory of Open Access Journals (Sweden)

    M. S. Shamsudin

    2012-01-01

    Full Text Available A high density of carbon nanotubes (CNTs) has been synthesized from an agricultural hydrocarbon, camphor oil, using a one-hour synthesis time and a titanium dioxide sol-gel catalyst. The pyrolysis temperature is studied in the range of 700–900°C at increments of 50°C. The synthesis process is done using a custom-made two-stage catalytic chemical vapor deposition apparatus. The CNT characteristics are investigated by field emission scanning electron microscopy and micro-Raman spectroscopy. The experimental results show that the structural properties of the CNTs are highly dependent on pyrolysis temperature changes.

  13. Fast detection of lead dioxide (PbO2) in chlorinated drinking water by a two-stage iodometric method.

    Science.gov (United States)

    Zhang, Yan; Zhang, Yuanyuan; Lin, Yi-Pin

    2010-02-15

    Lead dioxide (PbO(2)) is an important corrosion product associated with lead contamination in drinking water. Quantification of PbO(2) in water samples has proven challenging due to the incomplete dissolution of PbO(2) during sample preservation and digestion. In this study, we present a simple iodometric method for fast detection of PbO(2) in chlorinated drinking water. PbO(2) can oxidize iodide to form triiodide (I(3)(-)), a yellow-colored anion that can be detected by UV-vis spectrometry. Complete reduction of up to 20 mg/L PbO(2) can be achieved within 10 min at pH 2.0 and KI = 4 g/L. Free chlorine can oxidize iodide and cause interference. However, this interference can be accounted for by a two-stage pH adjustment, allowing free chlorine to completely react with iodide at ambient pH, followed by sample acidification to pH 2.0 to accelerate the iodide oxidation by PbO(2). This method showed good recoveries of PbO(2) (90-111%) in chlorinated water samples with concentrations ranging from 0.01 to 20 mg/L. In chloraminated water, this method is limited due to incomplete quenching of monochloramine by iodide at neutral to slightly alkaline pH values. The interference of other particles that may be present in the distribution system was also investigated.
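
    The quantification step can be illustrated with Beer-Lambert arithmetic. In the sketch below the absorbance reading is hypothetical and the molar absorptivity of I3- is an assumed literature-style value, not a number from this paper; the 1:1 PbO2-to-I3- stoichiometry follows from PbO2 + 4H+ + 2I- -> Pb2+ + I2 + 2H2O, with excess iodide converting I2 to I3-.

```python
# Beer-Lambert back-calculation of PbO2 from a triiodide absorbance.
A = 0.52            # measured absorbance of I3- (hypothetical reading)
eps = 26400.0       # molar absorptivity of I3- near 351 nm [1/(M*cm)] (assumed)
path = 1.0          # cuvette path length [cm]
M_PbO2 = 239.2      # molar mass of PbO2 [g/mol]

c_I3 = A / (eps * path)    # Beer-Lambert: I3- concentration [mol/L]
c_PbO2 = c_I3              # 1:1 stoichiometry PbO2 -> I3-
print(f"PbO2 = {c_PbO2 * M_PbO2 * 1000:.2f} mg/L")   # ~4.71 mg/L here
```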

  14. The Effect Of Two-Stage Age Hardening Treatment Combined With Shot Peening On Stress Distribution In The Surface Layer Of 7075 Aluminum Alloy

    Directory of Open Access Journals (Sweden)

    Kaczmarek Ł.

    2015-09-01

    Full Text Available The article presents the results of a study on the improvement of the mechanical properties of the surface layer of 7075 aluminum alloy via two-stage aging combined with shot peening. The experiments showed that thermo-mechanical treatment can significantly improve hardness and the stress distribution in the surface layer. Compressive stresses of 226±5.5 MPa and hardness of 210±2 HV were obtained for selected samples.

  15. Experiences from the full-scale implementation of a new two-stage vertical flow constructed wetland design.

    Science.gov (United States)

    Langergraber, Guenter; Pressl, Alexander; Haberl, Raimund

    2014-01-01

    This paper describes the results of the first full-scale implementation of a two-stage vertical flow constructed wetland (CW) system developed to increase nitrogen removal. The full-scale system was constructed for the Bärenkogelhaus, which is located in Styria at the top of a mountain, 1,168 m above sea level. The Bärenkogelhaus has a restaurant with 70 seats, 16 rooms for overnight guests and is a popular site for day visits, especially during weekends and public holidays. The CW treatment system was designed for a hydraulic load of 2,500 L.d(-1) with a specific surface area requirement of 2.7 m(2) per person equivalent (PE). It was built in fall 2009 and started operation in April 2010 when the restaurant was re-opened. Samples were taken between July 2010 and June 2013 and were analysed in the laboratory of the Institute of Sanitary Engineering at BOKU University using standard methods. During 2010 the restaurant at Bärenkogelhaus was open 5 days a week whereas from 2011 the Bärenkogelhaus was open only on demand for events. This resulted in decreased organic loads of the system in the later period. In general, the measured effluent concentrations were low and the removal efficiencies high. During the whole period the ammonia nitrogen effluent concentration was below 1 mg/L even at effluent water temperatures below 3 °C. Investigations during high-load periods, i.e. events like weddings and festivals at weekends, with more than 100 visitors, showed a very robust treatment performance of the two-stage CW system. Effluent concentrations of chemical oxygen demand and NH4-N were not affected by these events with high hydraulic loads.

  16. Cluster-cluster clustering

    Science.gov (United States)

    Barnes, J.; Dekel, A.; Efstathiou, G.; Frenk, C. S.

    1985-01-01

    The cluster correlation function xi sub c(r) is compared with the particle correlation function, xi(r) in cosmological N-body simulations with a wide range of initial conditions. The experiments include scale-free initial conditions, pancake models with a coherence length in the initial density field, and hybrid models. Three N-body techniques and two cluster-finding algorithms are used. In scale-free models with white noise initial conditions, xi sub c and xi are essentially identical. In scale-free models with more power on large scales, it is found that the amplitude of xi sub c increases with cluster richness; in this case the clusters give a biased estimate of the particle correlations. In the pancake and hybrid models (with n = 0 or 1), xi sub c is steeper than xi, but the cluster correlation length exceeds that of the points by less than a factor of 2, independent of cluster richness. Thus the high amplitude of xi sub c found in studies of rich clusters of galaxies is inconsistent with white noise and pancake models and may indicate a primordial fluctuation spectrum with substantial power on large scales.

  17. Cluster-cluster clustering

    Energy Technology Data Exchange (ETDEWEB)

    Barnes, J.; Dekel, A.; Efstathiou, G.; Frenk, C.S.

    1985-08-01

    The cluster correlation function xi sub c(r) is compared with the particle correlation function, xi(r) in cosmological N-body simulations with a wide range of initial conditions. The experiments include scale-free initial conditions, pancake models with a coherence length in the initial density field, and hybrid models. Three N-body techniques and two cluster-finding algorithms are used. In scale-free models with white noise initial conditions, xi sub c and xi are essentially identical. In scale-free models with more power on large scales, it is found that the amplitude of xi sub c increases with cluster richness; in this case the clusters give a biased estimate of the particle correlations. In the pancake and hybrid models (with n = 0 or 1), xi sub c is steeper than xi, but the cluster correlation length exceeds that of the points by less than a factor of 2, independent of cluster richness. Thus the high amplitude of xi sub c found in studies of rich clusters of galaxies is inconsistent with white noise and pancake models and may indicate a primordial fluctuation spectrum with substantial power on large scales. 30 references.
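
    A toy version of the quantity being compared in these records: estimate a correlation function from pair counts in a data catalogue against a random catalogue (a simple DD/RR - 1 estimator). The uniform synthetic points below should give xi of roughly zero; genuinely clustered positions would give xi > 0 at small separations.

```python
# Pair-count estimate of the two-point correlation function xi(r).
import numpy as np

rng = np.random.default_rng(4)
box, n = 100.0, 400
data = rng.uniform(0, box, (n, 3))      # stand-in "particle" positions
rand = rng.uniform(0, box, (n, 3))      # random comparison catalogue

def pair_counts(p, edges):
    d = np.linalg.norm(p[:, None, :] - p[None, :, :], axis=-1)
    d = d[np.triu_indices(len(p), k=1)]  # unique pairs only
    return np.histogram(d, bins=edges)[0]

edges = np.linspace(1, 25, 9)            # separation bins
xi = pair_counts(data, edges) / pair_counts(rand, edges) - 1.0
print(np.round(xi, 3))   # ~0 for uniform data; >0 at small r if clustered
```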

  18. A cross-sectional, randomized cluster sample survey of household vulnerability to extreme heat among slum dwellers in ahmedabad, india.

    Science.gov (United States)

    Tran, Kathy V; Azhar, Gulrez S; Nair, Rajesh; Knowlton, Kim; Jaiswal, Anjali; Sheffield, Perry; Mavalankar, Dileep; Hess, Jeremy

    2013-06-18

    Extreme heat is a significant public health concern in India; extreme heat hazards are projected to increase in frequency and severity with climate change. Few of the factors driving population heat vulnerability are documented, though poverty is a presumed risk factor. To facilitate public health preparedness, an assessment of factors affecting vulnerability among slum dwellers was conducted in summer 2011 in Ahmedabad, Gujarat, India. Indicators of heat exposure, susceptibility to heat illness, and adaptive capacity, all of which feed into heat vulnerability, were assessed through a cross-sectional household survey using randomized multistage cluster sampling. Associations between heat-related morbidity and vulnerability factors were identified using multivariate logistic regression with generalized estimating equations to account for clustering effects. Age, preexisting medical conditions, work location, and access to health information and resources were associated with self-reported heat illness. Several of these variables were unique to this study. As sociodemographics, occupational heat exposure, and access to resources were shown to increase vulnerability, future interventions (e.g., health education) might target specific populations among Ahmedabad urban slum dwellers to reduce vulnerability to extreme heat. Surveillance and evaluations of future interventions may also be worthwhile.
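
    A hedged sketch of the stated analysis approach, on simulated data: a logistic model fitted by generalized estimating equations with an exchangeable working correlation, so that inference respects the multistage cluster sampling. Variable names and effect sizes are placeholders, not the survey's.

```python
# GEE logistic regression accounting for cluster sampling.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n_clusters, m = 40, 20
df = pd.DataFrame({
    "cluster": np.repeat(np.arange(n_clusters), m),
    "age": rng.normal(35, 10, n_clusters * m),
    "works_outdoors": rng.integers(0, 2, n_clusters * m),
})
cluster_eff = np.repeat(rng.normal(0, 0.5, n_clusters), m)  # shared within cluster
logit = -2 + 0.03 * (df["age"] - 35) + 0.8 * df["works_outdoors"] + cluster_eff
df["heat_illness"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

model = smf.gee("heat_illness ~ age + works_outdoors", groups="cluster",
                data=df, family=sm.families.Binomial(),
                cov_struct=sm.cov_struct.Exchangeable())
res = model.fit()
print(np.exp(res.params))   # odds ratios with cluster-robust inference
```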

  19. Lick Indices and Spectral Energy Distribution Analysis based on an M31 Star Cluster Sample: Comparisons of Methods and Models

    CERN Document Server

    Fan, Zhou; Chen, Bingqiu; Jiang, Linhua; Bian, Fuyan; Li, Zhongmu

    2016-01-01

    Application of fitting techniques to obtain physical parameters---such as ages, metallicities, and $\\alpha$-element to iron ratios---of stellar populations is an important approach to understand the nature of both galaxies and globular clusters (GCs). In fact, fitting methods based on different underlying models may yield different results, and with varying precision. In this paper, we have selected 22 confirmed M31 GCs for which we do not have access to previously known spectroscopic metallicities. Most are located at approximately one degree (in projection) from the galactic center. We performed spectroscopic observations with the 6.5 m MMT telescope, equipped with its Red Channel Spectrograph. Lick/IDS absorption-line indices, radial velocities, ages, and metallicities were derived based on the $\\rm EZ\\_Ages$ stellar population parameter calculator. We also applied full spectral fitting with the ULySS code to constrain the parameters of our sample star clusters. In addition, we performed $\\chi^2_{\\rm min}$...

  20. Complex Dynamical Behavior of a Two-Stage Colpitts Oscillator with Magnetically Coupled Inductors

    Directory of Open Access Journals (Sweden)

    V. Kamdoum Tamba

    2014-01-01

    Full Text Available A five-dimensional (5D) controlled two-stage Colpitts oscillator is introduced and analyzed. This new electronic oscillator is constructed by augmenting the well-known two-stage Colpitts oscillator with two further elements (coupled inductors and a variable resistor). In contrast to current approaches based on a piecewise linear (PWL) model, we propose a smooth mathematical model (with exponential nonlinearity) to investigate the dynamics of the oscillator. Several issues, such as the basic dynamical behaviour, bifurcation diagrams, Lyapunov exponents, and frequency spectra of the oscillator, are investigated theoretically and numerically by varying a single control resistor. It is found that the oscillator moves from the state of fixed point motion to chaos via the usual paths of period-doubling and interior crisis routes as the single control resistor is varied. Furthermore, an experimental study of the controlled Colpitts oscillator is carried out. An appropriate electronic circuit is proposed for the investigation of the complex dynamical behaviour of the system. A very good qualitative agreement is obtained between the theoretical/numerical and experimental results.

  1. Optimization of Two-Stage Peltier Modules: Structure and Exergetic Efficiency

    Directory of Open Access Journals (Sweden)

    Cesar Ramirez-Lopez

    2012-08-01

    Full Text Available In this paper we undertake the theoretical analysis of a two-stage semiconductor thermoelectric module (TEM) which contains an arbitrary and different number of thermocouples, n1 and n2, in each stage (a pyramid-styled TEM). The analysis is based on a dimensionless entropy balance set of equations. We study the effects of n1 and n2, the electric currents flowing through each stage, the applied temperatures, and the thermoelectric properties of the semiconductor materials on the exergetic efficiency. Our main result implies that the electric currents flowing in each stage must necessarily be different, with a ratio of about 4.3, if the best thermal performance and the highest possible temperature difference between the cold and hot sides of the device are pursued. This fact had not been pointed out before for pyramid-styled two-stage TEMs.

  2. A two-stage series diode for intense large-area moderate pulsed X rays production

    Science.gov (United States)

    Lai, Dingguo; Qiu, Mengtong; Xu, Qifu; Su, Zhaofeng; Li, Mo; Ren, Shuqing; Huang, Zhongliang

    2017-01-01

    This paper presents a method for producing moderate pulsed X rays with a series diode, which can be driven by a high voltage pulse to generate intense, large-area, uniform sub-100-keV X rays. A two-stage series diode was designed for the Flash-II accelerator and experimentally investigated. A compact support system with a floating converter/cathode was invented; the extra cathode is floating electrically and mechanically, by withdrawing three support pins several milliseconds before a diode electrical pulse. A double-ring cathode was developed to improve the surface electric field and emission stability. The cathode radii and diode separation gap were optimized to enhance the uniformity of the X rays and the coincidence of the two diode voltages, based on simulation and theoretical calculation. The experimental results show that the two-stage series diode can work stably at 700 kV and 300 kA; the average energy of the X rays is 86 keV, and the dose is about 296 rad(Si) over a 615 cm2 area with 2:1 uniformity at 5 cm from the last converter. Compared with the single diode, the average X-ray energy is reduced from 132 keV to 88 keV, and the proportion of sub-100-keV photons increases from 39% to 69%.

  3. Study on a high capacity two-stage free piston Stirling cryocooler working around 30 K

    Science.gov (United States)

    Wang, Xiaotao; Zhu, Jian; Chen, Shuai; Dai, Wei; Li, Ke; Pang, Xiaomin; Yu, Guoyao; Luo, Ercang

    2016-12-01

    This paper presents a two-stage high-capacity free-piston Stirling cryocooler driven by a linear compressor, designed to meet the requirements of high temperature superconductor (HTS) motor applications. The cryocooler system comprises a single-piston linear compressor, a two-stage free-piston Stirling cryocooler, and a passive oscillator. A single stepped-displacer configuration was adopted. A numerical model based on thermoacoustic theory was used to optimize the system operating and structural parameters. Distributions of the pressure wave, the phase differences between the pressure wave and the volume flow rate, and the different energy flows are presented for a better understanding of the system. Some characterizing experimental results are presented. Thus far, the cryocooler has reached a lowest cold-head temperature of 27.6 K and achieved a cooling power of 78 W at 40 K with an input electric power of 3.2 kW, which indicates a relative Carnot efficiency of 14.8%. When the cold-head temperature increased to 77 K, the cooling power reached 284 W, with a relative Carnot efficiency of 25.9%. The influences of different parameters such as mean pressure, input electric power, and cold-head temperature are also investigated.
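
    For readers unfamiliar with the quoted figure of merit: relative Carnot efficiency is the achieved COP divided by the Carnot COP between the cold-head and heat-rejection temperatures. The abstract does not give the rejection temperature; assuming 300 K, the check below gives about 16%, in the neighbourhood of the quoted 14.8% (the gap presumably reflects the actual rejection temperature and input-power accounting).

```python
# Back-of-envelope relative Carnot efficiency check (300 K is an assumption).
Q_c, W_in = 78.0, 3200.0     # cooling power [W] and electric input [W]
T_c, T_h = 40.0, 300.0       # cold-head and assumed rejection temperatures [K]

cop = Q_c / W_in                    # achieved coefficient of performance
cop_carnot = T_c / (T_h - T_c)      # ideal (Carnot) COP
print(f"relative Carnot efficiency = {cop / cop_carnot:.1%}")
```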

  4. Planning an Agricultural Water Resources Management System: A Two-Stage Stochastic Fractional Programming Model

    Directory of Open Access Journals (Sweden)

    Liang Cui

    2015-07-01

    Full Text Available Irrigation water management is crucial for agricultural production and livelihood security in many regions and countries throughout the world. In this study, a two-stage stochastic fractional programming (TSFP) method is developed for planning an agricultural water resources management system under uncertainty. TSFP can provide an effective linkage between conflicting economic benefits and the associated penalties; it can also balance conflicting objectives and maximize the system marginal benefit per unit of input under uncertainty. The developed TSFP method is applied to a real case of agricultural water resources management in the Zhangweinan River Basin, China, which is one of the main food and cotton producing regions in north China and faces a serious water shortage. The results demonstrate that the TSFP model is advantageous in balancing conflicting objectives and reflecting complicated relationships among multiple system factors. The results also indicate that, under the optimized irrigation target, the optimized water allocation rates of the Minyou Channel and the Zhangnan Channel are 57.3% and 42.7%, respectively, which adapts to changes in the actual agricultural water resources management problem. Compared with the inexact two-stage water management (ITSP) method, TSFP could more effectively address the sustainable water management problem, provide more information regarding tradeoffs between multiple input factors and system benefits, and help the water managers maintain sustainable water resources development of the Zhangweinan River Basin.
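
    The two-stage structure can be illustrated with a tiny deterministic-equivalent LP: a first-stage irrigation target earns a benefit, and each scenario's shortage incurs a recourse penalty. This sketch linearizes away the fractional objective and invents all numbers; it conveys the flavour of the model, not the paper's TSFP.

```python
# Deterministic equivalent of a toy two-stage stochastic program.
import numpy as np
from scipy.optimize import linprog

b, p = 100.0, 220.0                  # benefit per unit target, shortage penalty
scenarios = [(0.3, 120.0), (0.5, 180.0), (0.2, 240.0)]  # (prob, water avail)

# variables z = [x, y1, y2, y3]: target x, shortage y_s per scenario;
# maximize b*x - sum_s prob_s * p * y_s  (linprog minimizes, hence signs)
c = np.array([-b] + [pr * p for pr, _ in scenarios])
A_ub = np.array([[1, -1, 0, 0],      # delivery <= availability: x - y_s <= a_s
                 [1, 0, -1, 0],
                 [1, 0, 0, -1]], float)
b_ub = np.array([a for _, a in scenarios])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, 300)] + [(0, None)] * 3)
x, ys = res.x[0], res.x[1:]
print(f"optimal target x = {x:.1f}, shortages per scenario = {np.round(ys, 1)}")
```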

  5. A separate two-stage pulse tube cooler working at liquid helium temperature

    Institute of Scientific and Technical Information of China (English)

    QIU Limin; HE Yonglin; GAN Zhihua; WAN Laihong; CHEN Guobang

    2005-01-01

    A novel 4 K separate two-stage pulse tube cooler (PTC) was designed and tested. The cooler consists of two separate pulse tube coolers, in which the cold end of the first-stage regenerator is thermally connected with the middle part of the second regenerator. Compared to the traditional coupled multi-stage pulse tube cooler, the mutual interference between stages can be largely eliminated. The lowest refrigeration temperature obtained at the first-stage pulse tube was 13.8 K, a new record for a single-stage PTC. Driven by two compressors and two rotary valves, the separate two-stage PTC reached a refrigeration temperature of 2.5 K at the second stage. Cooling capacities of 508 mW at 4.2 K and 15 W at 37.5 K were achieved simultaneously. A one-compressor, one-rotary-valve driving mode has been proposed to further simplify the structure of the separate type PTC.

  6. Multiobjective Two-Stage Stochastic Programming Problems with Interval Discrete Random Variables

    Directory of Open Access Journals (Sweden)

    S. K. Barik

    2012-01-01

    Full Text Available Most real-life decision-making problems have more than one conflicting and incommensurable objective function. In this paper, we present a multiobjective two-stage stochastic linear programming problem that treats some parameters of the linear constraints as interval-type discrete random variables with known probability distributions. Randomness of the discrete intervals is considered for the model parameters. Further, the concepts of best optimum and worst optimum solutions are analyzed in two-stage stochastic programming. To solve the stated problem, first we remove the randomness of the problem and formulate an equivalent deterministic linear programming model with multiobjective interval coefficients. Then the deterministic multiobjective model is solved using the weighting method, in which we apply the solution procedure of the interval linear programming technique. We obtain the upper and lower bounds of the objective function as the best and the worst values, respectively. This highlights the possible risk involved in the decision-making tool. A numerical example is presented to demonstrate the proposed solution procedure.
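
    A minimal sketch of the weighting step on a two-variable, two-objective LP (the interval/randomness layer is stripped out; all coefficients are invented). Sweeping the weight traces out candidate efficient solutions:

```python
# Weighted-sum scalarization of a two-objective LP.
import numpy as np
from scipy.optimize import linprog

# maximize f1 = 3x + 2y and minimize f2 = x + 4y, subject to x + y <= 10
A_ub, b_ub = [[1, 1]], [10]
for w in (0.2, 0.5, 0.8):
    # single objective: minimize -w*f1 + (1-w)*f2
    c = -(w * np.array([3.0, 2.0])) + (1 - w) * np.array([1.0, 4.0])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, 8), (0, 8)])
    x, y = res.x
    print(f"w={w}: x={x:.1f}, y={y:.1f}, f1={3*x+2*y:.1f}, f2={x+4*y:.1f}")
```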

  7. Two-Stage Single-Compartment Models to Evaluate Dissolution in the Lower Intestine.

    Science.gov (United States)

    Markopoulos, Constantinos; Vertzoni, Maria; Symillides, Mira; Kesisoglou, Filippos; Reppas, Christos

    2015-09-01

    The purpose was to propose two-stage single-compartment models for evaluating dissolution characteristics in the distal ileum and ascending colon, under conditions simulating the bioavailability and bioequivalence studies in the fasted and fed states, using the mini-paddle and the compendial flow-through apparatus (closed-loop mode). Immediate release products of two highly dosed active pharmaceutical ingredients (APIs), sulfasalazine and L-870,810, and one mesalamine colon-targeting product were used for evaluating their usefulness. The change from a medium simulating the conditions in the distal ileum (SIFileum) to a medium simulating the conditions in the ascending colon in the fasted state and in the fed state was achieved by adding an appropriate solution to SIFileum. Data with immediate release products suggest that dissolution in the lower intestine is substantially different from that in the upper intestine and is affected by regional pH differences > type/intensity of fluid convection > differences in concentration of other luminal components. Asacol® (400 mg/tab) was more sensitive to the type/intensity of fluid convection. In all cases, the data were in line with available human data. Two-stage single-compartment models may be useful for the evaluation of dissolution in the lower intestine. The impact of the type/intensity of fluid convection and of the viscosity of media on the luminal performance of other APIs and drug products requires further exploration.

  8. Simultaneous bile duct and portal venous branch ligation in two-stage hepatectomy

    Institute of Scientific and Technical Information of China (English)

    Hiroya Iida; Chiaki Yasui; Tsukasa Aihara; Shinichi Ikuta; Hidenori Yoshie; Naoki Yamanaka

    2011-01-01

    Hepatectomy is an effective surgical treatment for multiple bilobar liver metastases from colon cancer; however, one of the primary obstacles to completing surgical resection for these cases is an insufficient volume of the future remnant liver, which may cause postoperative liver failure. To induce atrophy of the unilateral lobe and hypertrophy of the future remnant liver, procedures to occlude the portal vein have been conventionally used prior to major hepatectomy. We report a case of a 50-year-old woman in whom two-stage hepatectomy was performed in combination with intraoperative ligation of the portal vein and the bile duct of the right hepatic lobe. This procedure was designed to promote the atrophic effect on the right hepatic lobe more effectively than the conventional technique, and to the best of our knowledge, it was used for the first time in the present case. Despite successful induction of liver volume shift as well as the following procedure, the patient died of subsequent liver failure after developing recurrent tumors. We discuss the first case in which simultaneous ligation of the portal vein and the biliary system was successfully applied as part of the first step of two-stage hepatectomy.

  9. Metamodeling and Optimization of a Blister Copper Two-Stage Production Process

    Science.gov (United States)

    Jarosz, Piotr; Kusiak, Jan; Małecki, Stanisław; Morkisz, Paweł; Oprocha, Piotr; Pietrucha, Wojciech; Sztangret, Łukasz

    2016-06-01

    It is often difficult to estimate parameters for a two-stage production process of blister copper (containing 99.4 wt.% of Cu metal) as well as those for most industrial processes with high accuracy, which leads to problems related to process modeling and control. The first objective of this study was to model flash smelting and converting of Cu matte stages using three different techniques: artificial neural networks, support vector machines, and random forests, which utilized noisy technological data. Subsequently, more advanced models were applied to optimize the entire process (which was the second goal of this research). The obtained optimal solution was a Pareto-optimal one because the process consisted of two stages, making the optimization problem a multi-criteria one. A sequential optimization strategy was employed, which aimed for optimal control parameters consecutively for both stages. The obtained optimal output parameters for the first smelting stage were used as input parameters for the second converting stage. Finally, a search for another optimal set of control parameters for the second stage of a Kennecott-Outokumpu process was performed. The optimization process was modeled using a Monte-Carlo method, and both modeling parameters and computed optimal solutions are discussed.
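
    One of the named metamodeling techniques can be sketched generically: fit a random-forest surrogate to noisy input/output data, then search the surrogate for a good operating point. The "process" below is a stand-in quadratic response with invented variable ranges, not the flash-smelting or converting stage models.

```python
# Random-forest surrogate of a noisy process + grid search for an optimum.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(6)
X = rng.uniform([1150, 0.8], [1300, 1.2], size=(400, 2))   # temp, O2 ratio
y = -(X[:, 0] - 1220) ** 2 / 900 - (X[:, 1] - 1.05) ** 2 * 40 \
    + rng.normal(0, 0.2, 400)                              # noisy quality index

surrogate = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

grid = np.array(np.meshgrid(np.linspace(1150, 1300, 60),
                            np.linspace(0.8, 1.2, 60))).reshape(2, -1).T
best = grid[np.argmax(surrogate.predict(grid))]
print(f"surrogate optimum near temp={best[0]:.0f}, ratio={best[1]:.2f}")
```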

  10. Development and optimization of a two-stage gasifier for heat and power production

    Science.gov (United States)

    Kosov, V. V.; Zaichenko, V. M.

    2016-11-01

    The major methods of biomass thermal conversion are combustion in excess oxygen, gasification in reduced oxygen, and pyrolysis in the absence of oxygen. The end products of these methods are heat, gas, liquid and solid fuels. From the point of view of energy production, none of these methods can be considered optimal. A two-stage thermal conversion of biomass, based on pyrolysis as the first stage and cracking of the pyrolysis products as the second stage, can be considered the optimal method for energy production: it yields synthesis gas consisting of hydrogen and carbon monoxide and containing no liquid or solid particles. On the basis of the two-stage cracking technology, an experimental power plant with an electric power of up to 50 kW was designed. The power plant consists of a thermal conversion module and a gas engine power generator adapted for operation on syngas. The purposes of the work were to determine the optimal operating temperature of the thermal conversion module and the optimal mass ratio of processed biomass to charcoal in the cracking chamber of the thermal conversion module. Experiments on the cracking of pyrolysis products at various temperatures show that the optimum cracking temperature is 1000 °C. From the results of measuring the volume of gas produced at different mass ratios of charcoal to processed wood biomass, it follows that the maximum volume of gas is obtained in the mass-ratio range of 0.5-0.6.

  11. On bi-criteria two-stage transportation problem: a case study

    Directory of Open Access Journals (Sweden)

    Ahmad MURAD

    2010-01-01

    Full Text Available The study of the optimum distribution of goods between sources and destinations is one of the important topics in project economics. This importance comes as a result of minimizing the transportation cost, deterioration, time, etc. The classical transportation problem constitutes one of the major areas of application for linear programming. The aim of this problem is to obtain the optimum distribution of goods from different sources to different destinations which minimizes the total transportation cost. From the practical point of view, transportation problems may differ from the classical form. They may contain one or more objective functions, one or more stages of transport, and one or more types of commodity with one or more means of transport. The aim of this paper is to construct an optimization model for the transportation problem of a mill-stones company. The model is formulated as a bi-criteria two-stage transportation problem with a special structure depending on the capacities of the suppliers, warehouses, and requirements of the destinations. A solution algorithm is introduced to solve this class of bi-criteria two-stage transportation problem, obtaining the set of non-dominated extreme points and the efficient solutions accompanying each one, which enables the decision maker to choose the best one. The solution algorithm is mainly based on the fruitful application of methods for treating transportation problems, the theory of duality in linear programming, and methods for solving bi-criteria linear programming problems.
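
    Dropping the second criterion, a single-objective two-stage transportation instance is an ordinary LP over source-to-warehouse and warehouse-to-destination flows. A toy sketch with invented costs and capacities:

```python
# Single-objective two-stage transportation problem as an LP.
import numpy as np
from scipy.optimize import linprog

nS = nW = nD = 2
c1 = np.array([[4, 6], [5, 3]], float)   # source -> warehouse unit costs
c2 = np.array([[2, 7], [4, 3]], float)   # warehouse -> destination unit costs
supply, wcap, demand = [40, 30], [50, 40], [35, 35]

def xi(i, j): return i * nW + j            # index of flow x[i, j]
def yi(j, k): return nS * nW + j * nD + k  # index of flow y[j, k]
nvar = nS * nW + nW * nD

c = np.concatenate([c1.ravel(), c2.ravel()])
A_eq, b_eq, A_ub, b_ub = [], [], [], []
for j in range(nW):                        # flow conservation at warehouse j
    row = np.zeros(nvar)
    for i in range(nS): row[xi(i, j)] = 1
    for k in range(nD): row[yi(j, k)] = -1
    A_eq.append(row); b_eq.append(0)
for k in range(nD):                        # each destination fully served
    row = np.zeros(nvar)
    for j in range(nW): row[yi(j, k)] = 1
    A_eq.append(row); b_eq.append(demand[k])
for i in range(nS):                        # supply limits
    row = np.zeros(nvar)
    for j in range(nW): row[xi(i, j)] = 1
    A_ub.append(row); b_ub.append(supply[i])
for j in range(nW):                        # warehouse capacities
    row = np.zeros(nvar)
    for i in range(nS): row[xi(i, j)] = 1
    A_ub.append(row); b_ub.append(wcap[j])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq)
print("total cost:", res.fun)
print("x:", res.x[:4].reshape(2, 2), "\ny:", res.x[4:].reshape(2, 2))
```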

  12. An integrated two-stage support vector machine approach to forecast inundation maps during typhoons

    Science.gov (United States)

    Jhong, Bing-Chen; Wang, Jhih-Huang; Lin, Gwo-Fong

    2017-04-01

    During typhoons, accurate forecasts of hourly inundation depths are essential for inundation warning and mitigation. Due to the lack of observed data of inundation maps, sufficient observed data are not available for developing inundation forecasting models. In this paper, the inundation depths, which are simulated and validated by a physically based two-dimensional model (FLO-2D), are used as a database for inundation forecasting. A two-stage inundation forecasting approach based on Support Vector Machine (SVM) is proposed to yield 1- to 6-h lead-time inundation maps during typhoons. In the first stage (point forecasting), the proposed approach not only considers the rainfall intensity and inundation depth as model input but also simultaneously considers cumulative rainfall and forecasted inundation depths. In the second stage (spatial expansion), the geographic information of inundation grids and the inundation forecasts of reference points are used to yield inundation maps. The results clearly indicate that the proposed approach effectively improves the forecasting performance and decreases the negative impact of increasing forecast lead time. Moreover, the proposed approach is capable of providing accurate inundation maps for 1- to 6-h lead times. In conclusion, the proposed two-stage forecasting approach is suitable and useful for improving the inundation forecasting during typhoons, especially for long lead times.
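
    To make the two-stage idea concrete, the sketch below shows a minimal, hypothetical version with support vector regression: stage one maps rainfall features to the depth at a reference point, and stage two maps coordinates plus the reference-point forecast to grid-cell depths. The synthetic data, features, and kernel settings are illustrative assumptions, not the authors' FLO-2D-based setup.

```python
# Minimal two-stage SVR sketch: stage 1 gives point forecasts of inundation
# depth at a reference point from rainfall features; stage 2 expands them to
# grid cells using coordinates. Synthetic data and settings are assumptions.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(1)

# Stage 1 (point forecasting): [rainfall intensity, cumulative rainfall] -> depth.
X1 = rng.uniform(0, 50, size=(300, 2))
depth_ref = 0.02 * X1[:, 0] + 0.01 * X1[:, 1] + rng.normal(0, 0.05, 300)
stage1 = SVR(kernel="rbf", C=10.0).fit(X1, depth_ref)

# Stage 2 (spatial expansion): [x, y, reference-point forecast] -> cell depth.
grid_xy = rng.uniform(0, 1, size=(500, 2))
ref_forecast = stage1.predict(rng.uniform(0, 50, size=(500, 2)))
X2 = np.column_stack([grid_xy, ref_forecast])
cell_depth = ref_forecast * (1.0 - 0.3 * grid_xy[:, 0]) + rng.normal(0, 0.02, 500)
stage2 = SVR(kernel="rbf", C=10.0).fit(X2, cell_depth)

print(stage2.predict(X2[:5]))   # depths for the first five grid cells
```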

  13. The influence of partial oxidation mechanisms on tar destruction in TwoStage biomass gasification

    DEFF Research Database (Denmark)

    Ahrenfeldt, Jesper; Egsgaard, Helge; Stelte, Wolfgang

    2013-01-01

    TwoStage gasification of biomass results in almost tar-free producer gas suitable for multiple end-use purposes. In the present study, it is investigated to what extent the partial oxidation process applied to the pyrolysis gas from the first stage is involved in direct and indirect tar destruction and conversion. The study identifies the following major impact factors regarding tar content in the producer gas: oxidation temperature, excess air ratio, and biomass moisture content. In an experimental setup, wood pellets were pyrolyzed and the resulting pyrolysis gas was transferred to a heated partial oxidation zone. High oxidation temperature favours direct tar destruction, and a high moisture content of the biomass enhances the decomposition of phenol and inhibits the formation of naphthalene; this enhances tar conversion and gasification in the char bed, and thus contributes indirectly to the tar destruction.

  14. Numerical simulation of municipal solid waste combustion in a novel two-stage reciprocating incinerator.

    Science.gov (United States)

    Huai, X L; Xu, W L; Qu, Z Y; Li, Z G; Zhang, F P; Xiang, G M; Zhu, S Y; Chen, G

    2008-01-01

    A mathematical model is presented in this paper for the combustion of municipal solid waste in a novel two-stage reciprocating grate furnace. Numerical simulations were performed to predict the temperature, flow, and species distributions in the furnace, with practical operating conditions taken into account. The calculated results agree well with the test data and reproduce the burning behavior of municipal solid waste in the novel two-stage reciprocating incinerator. The thickness of the waste bed, the initial moisture content, the excess air coefficient, and the secondary air are the major factors that influence the combustion process. If the initial moisture content of the waste is high, both the heating value of the waste and the temperature inside the incinerator are low, and less oxygen is necessary for combustion. The air supply rate and the primary air distribution along the grate should be adjusted according to the initial moisture content of the waste. A reasonable bed thickness and an adequate excess air coefficient can maintain a higher temperature, promote the burnout of combustibles, and consequently reduce the emission of dioxin pollutants. When the total air supply is constant, properly reducing the primary air and introducing secondary air can enhance turbulence and mixing, prolong the residence time of the flue gas, and promote the complete combustion of combustibles. This study provides an important reference for optimizing the design and operation of municipal solid waste furnaces.

  15. Two stage heterotrophy/photoinduction culture of Scenedesmus incrassatulus: potential for lutein production.

    Science.gov (United States)

    Flórez-Miranda, Liliana; Cañizares-Villanueva, Rosa Olivia; Melchy-Antonio, Orlando; Jerónimo, Fernando Martínez-; Flores-Ortíz, Cesar Mateo

    2017-09-16

    A biomass production process comprising two stages, heterotrophy/photoinduction (TSHP), was developed to improve biomass and lutein production by the green microalga Scenedesmus incrassatulus. To determine the effects of different nitrogen sources (yeast extract and urea) and of temperature in the heterotrophic stage, experiments using shake-flask cultures with glucose as the carbon source were carried out. The highest biomass productivity and specific pigment concentrations were reached using urea + vitamins (U+V) at 30 °C. The first stage of the TSHP process was done in a 6 L bioreactor, and the inductions in a 3 L airlift photobioreactor. At the end of the heterotrophic stage, S. incrassatulus achieved its maximal biomass concentration, increasing from 7.22 g L(-1) to 17.98 g L(-1) as the initial glucose concentration was increased from 10.6 g L(-1) to 30.3 g L(-1). However, the higher initial glucose concentration resulted in a lower specific growth rate (μ) and lower cell yield (Yx/s), possibly due to substrate inhibition. After 24 h of photoinduction, the lutein content of the S. incrassatulus biomass was 7 times higher than that obtained at the end of the heterotrophic cultivation, and the lutein productivity was 1.6 times higher compared with autotrophic culture of this microalga. Hence, the two-stage heterotrophy/photoinduction culture is an effective strategy for high cell density and lutein production in S. incrassatulus. Copyright © 2017. Published by Elsevier B.V.

  16. Dynamics of installation way for the actuator of a two-stage active vibration-isolator

    Institute of Scientific and Technical Information of China (English)

    HU Li; HUANG Qi-bai; HE Xue-song; YUAN Ji-xuan

    2008-01-01

    We investigated the behaviors of an active control system of two-stage vibration isolation with the actuator installed in parallel with either the upper passive mount or the lower passive isolation mount. We revealed the relationships between the active control force of the actuator and the parameters of the passive isolators by studying the dynamics of two-stage active vibration isolation for the actuator at the foregoing two positions in turn. With the actuator installed beside the upper mount, a small active force can achieve a very good isolating effect when the frequency of the stimulating force is much larger than the natural frequency of the upper mount; a larger active force is required in the low-frequency domain; and the active force equals the stimulating force when the upper mount works within the resonance region, suggesting an approach to reducing wobble and ensuring desirable installation accuracy by increasing the upper-mount stiffness. In either the low or the high frequency region far away from the resonance region, the active force is smaller when the actuator is beside the lower mount than beside the upper mount.

  17. Final Report on Two-Stage Fast Spectrum Fuel Cycle Options

    Energy Technology Data Exchange (ETDEWEB)

    Yang, Won Sik [Purdue Univ., West Lafayette, IN (United States); Lin, C. S. [Purdue Univ., West Lafayette, IN (United States); Hader, J. S. [Purdue Univ., West Lafayette, IN (United States); Park, T. K. [Purdue Univ., West Lafayette, IN (United States); Deng, P. [Purdue Univ., West Lafayette, IN (United States); Yang, G. [Purdue Univ., West Lafayette, IN (United States); Jung, Y. S. [Purdue Univ., West Lafayette, IN (United States); Kim, T. K. [Argonne National Lab. (ANL), Argonne, IL (United States); Stauff, N. E. [Argonne National Lab. (ANL), Argonne, IL (United States)

    2016-01-30

    This report presents the performance characteristics of two "two-stage" fast spectrum fuel cycle options proposed to enhance uranium resource utilization and to reduce nuclear waste generation. One is a two-stage fast spectrum fuel cycle option of continuous recycle of plutonium (Pu) in a fast reactor (FR) and subsequent burning of minor actinides (MAs) in an accelerator-driven system (ADS). The first stage is a sodium-cooled FR fuel cycle starting with low-enriched uranium (LEU) fuel; at the equilibrium cycle, the FR is operated using the recovered Pu and natural uranium without LEU support. Pu and uranium (U) are co-extracted from the discharged fuel and recycled in the first stage, and the recovered MAs are sent to the second stage. The second stage is a sodium-cooled ADS in which MAs are burned in an inert-matrix fuel form. The discharged ADS fuel is reprocessed, and all the recovered heavy metals (HMs) are recycled into the ADS. The other option is a two-stage FR/ADS fuel cycle with MA targets loaded in the FR. The recovered MAs are not sent directly to the ADS but are partially incinerated in the FR in order to reduce the amount of MAs to be sent to the ADS. This is a heterogeneous recycling option for transuranic (TRU) elements.

  18. Hydrogen and methane production from household solid waste in the two-stage fermentation process

    DEFF Research Database (Denmark)

    Lui, D.; Liu, D.; Zeng, Raymond Jianxiong

    2006-01-01

    A two-stage process combining hydrogen and methane production from household solid waste was demonstrated to work successfully. A yield of 43 mL H-2/g volatile solid (VS) added was generated in the first, hydrogen-producing stage, and the methane production in the second stage was 500 mL CH4/g VS added. The latter figure was 21% higher than the methane yield from the one-stage process, which was run as a control. Sparging of the hydrogen reactor with methane gas resulted in a doubling of the hydrogen production. pH was observed to be a key factor affecting the fermentation pathway in the hydrogen production stage. Furthermore, this study also provided direct evidence that, in the dynamic fermentation process, an increase in hydrogen production was reflected by an increase in the acetate-to-butyrate ratio in the liquid phase. (c) 2006 Elsevier Ltd. All rights reserved.

  19. Two-stage electrodialytic concentration of glyceric acid from fermentation broth.

    Science.gov (United States)

    Habe, Hiroshi; Shimada, Yuko; Fukuoka, Tokuma; Kitamoto, Dai; Itagaki, Masayuki; Watanabe, Kunihiko; Yanagishita, Hiroshi; Sakaki, Keiji

    2010-12-01

    The aim of this research was the application of a two-stage electrodialysis (ED) method for glyceric acid (GA) recovery from fermentation broth. First, by desalting ED, glycerate solutions (with Na+ as the counter-ion) were concentrated using ion-exchange membranes; the glycerate recovery and energy consumption became more efficient with increasing initial glycerate concentration (30 to 130 g/l). Second, by water-splitting ED, the concentrated glycerate was electroconverted to GA using bipolar membranes. Using a culture broth of Acetobacter tropicalis containing 68.6 g/l of D-glycerate, a final D-GA concentration of 116 g/l was obtained following the two-stage ED process. The total energy consumption for the D-glycerate concentration and its electroconversion to D-GA was approximately 0.92 kWh per 1 kg of D-GA. Copyright © 2010 The Society for Biotechnology, Japan. Published by Elsevier B.V. All rights reserved.

  20. Occurrence of two-stage hardening in C-Mn steel wire rods containing pearlitic microstructure

    Science.gov (United States)

    Singh, Balbir; Sahoo, Gadadhar; Saxena, Atul

    2016-09-01

    The 8 and 10 mm diameter wire rods intended for use as concrete reinforcement were hot rolled from a C-Mn steel containing elements in the range C: 0.55-0.65, Mn: 0.85-1.50, Si: 0.05-0.09, S: 0.04 max, P: 0.04 max and N: 0.006 max wt%. Depending upon the C and Mn contents, the product attained a pearlitic microstructure in the range of 85-93%, with the balance polygonal ferrite transformed at prior austenite grain boundaries. The pearlitic microstructure in the wire rods helped in achieving yield strength, tensile strength, total elongation and reduction-in-area values within the ranges of 422-515 MPa, 790-950 MPa, 22-15% and 45-35%, respectively. Analysis of the tensile results revealed that the material hardened in two stages, separable by a knee strain value of about 0.05. The occurrence of two-stage hardening in the steel, with hardening coefficients of 0.26 and 0.09, could thus be demonstrated with the help of derived relationships between flow stress and strain.

  1. Rules and mechanisms for efficient two-stage learning in neural circuits

    Science.gov (United States)

    Teşileanu, Tiberiu; Ölveczky, Bence; Balasubramanian, Vijay

    2017-01-01

    Trial-and-error learning requires evaluating variable actions and reinforcing successful variants. In songbirds, vocal exploration is induced by LMAN, the output of a basal ganglia-related circuit that also contributes a corrective bias to the vocal output. This bias is gradually consolidated in RA, a motor cortex analogue downstream of LMAN. We develop a new model of such two-stage learning. Using stochastic gradient descent, we derive how the activity in ‘tutor’ circuits (e.g., LMAN) should match plasticity mechanisms in ‘student’ circuits (e.g., RA) to achieve efficient learning. We further describe a reinforcement learning framework through which the tutor can build its teaching signal. We show that mismatches between the tutor signal and the plasticity mechanism can impair learning. Applied to birdsong, our results predict the temporal structure of the corrective bias from LMAN given a plasticity rule in RA. Our framework can be applied predictively to other paired brain areas showing two-stage learning. DOI: http://dx.doi.org/10.7554/eLife.20944.001 PMID:28374674

  2. Two-stage estimation for multivariate recurrent event data with a dependent terminal event.

    Science.gov (United States)

    Chen, Chyong-Mei; Chuang, Ya-Wen; Shen, Pao-Sheng

    2015-03-01

    Recurrent event data arise in longitudinal follow-up studies, where each subject may experience the same type of events repeatedly. The work in this article is motivated by the data from a study of repeated peritonitis for patients on peritoneal dialysis. Due to the aspects of medicine and cost, the peritonitis cases were classified into two types: Gram-positive and non-Gram-positive peritonitis. Further, since the death and hemodialysis therapy preclude the occurrence of recurrent events, we face multivariate recurrent event data with a dependent terminal event. We propose a flexible marginal model, which has three characteristics: first, we assume marginal proportional hazard and proportional rates models for terminal event time and recurrent event processes, respectively; second, the inter-recurrences dependence and the correlation between the multivariate recurrent event processes and terminal event time are modeled through three multiplicative frailties corresponding to the specified marginal models; third, the rate model with frailties for recurrent events is specified only on the time before the terminal event. We propose a two-stage estimation procedure for estimating unknown parameters. We also establish the consistency of the two-stage estimator. Simulation studies show that the proposed approach is appropriate for practical use. The methodology is applied to the peritonitis cohort data that motivated this study. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  3. Two-stage earth-to-orbit vehicles with dual-fuel propulsion in the Orbiter

    Science.gov (United States)

    Martin, J. A.

    1982-01-01

    Earth-to-orbit vehicle studies of future replacements for the Space Shuttle are needed to guide technology development. Previous studies that have examined single-stage vehicles have shown advantages for dual-fuel propulsion. Previous two-stage system studies have assumed all-hydrogen fuel for the Orbiters. The present study examined dual-fuel Orbiters and found that the system dry mass could be reduced with this concept. The possibility of staging the booster at a staging velocity low enough to allow coast-back to the launch site is shown to be beneficial, particularly in combination with a dual-fuel Orbiter. An engine evaluation indicated the same ranking of engines as did a previous single-stage study. Propane and RP-1 fuels result in lower vehicle dry mass than methane, and staged-combustion engines are preferred over gas-generator engines. The sensitivity to the engine selection is less for two-stage systems than for single-stage systems.

  4. Two-stage effects of awareness cascade on epidemic spreading in multiplex networks

    Science.gov (United States)

    Guo, Quantong; Jiang, Xin; Lei, Yanjun; Li, Meng; Ma, Yifang; Zheng, Zhiming

    2015-01-01

    Human awareness plays an important role in the spread of infectious diseases and the control of propagation patterns. The dynamic process with human awareness is called awareness cascade, during which individuals exhibit herd-like behavior because they are making decisions based on the actions of other individuals [Borge-Holthoefer et al., J. Complex Networks 1, 3 (2013), 10.1093/comnet/cnt006]. In this paper, to investigate the epidemic spreading with awareness cascade, we propose a local awareness controlled contagion spreading model on multiplex networks. By theoretical analysis using a microscopic Markov chain approach and numerical simulations, we find the emergence of an abrupt transition of epidemic threshold βc with the local awareness ratio α approximating 0.5 , which induces two-stage effects on epidemic threshold and the final epidemic size. These findings indicate that the increase of α can accelerate the outbreak of epidemics. Furthermore, a simple 1D lattice model is investigated to illustrate the two-stage-like sharp transition at αc≈0.5 . The results can give us a better understanding of why some epidemics cannot break out in reality and also provide a potential access to suppressing and controlling the awareness cascading systems.

  5. Configuration Consideration for Expander in Transcritical Carbon Dioxide Two-Stage Compression Cycle

    Institute of Scientific and Technical Information of China (English)

    MA Yitai; YANG Junlan; GUAN Haiqing; LI Minxia

    2005-01-01

    To improve the efficiency of the transcritical carbon dioxide two-stage compression cycle, the best place in the cycle to reinvest the work recovered by the expander should be identified. The expander and the compressor are connected to the same shaft and integrated into one unit, the latter driven by the former, so that transfer and leakage losses can be decreased greatly. In such systems, the expander can be connected either with the first-stage compressor (shortened as the DCDL cycle) or with the second-stage compressor (shortened as the DCDH cycle), and the two configurations give different performances. By setting up a theoretical model for the two expander configurations in the transcritical carbon dioxide two-stage compression cycle, the first and second laws of thermodynamics are used to analyze the coefficient of performance, exergy efficiency, inter-stage pressure, discharge temperature, and exergy losses of each component for the two cycles. The model results show that the performance of the DCDH cycle is better than that of the DCDL cycle. The analysis provides a theoretical basis for practical design and operation.

  6. Two-stage coordination multi-radio multi-channel mac protocol for wireless mesh networks

    CERN Document Server

    Zhao, Bingxuan

    2011-01-01

    Within a wireless mesh network, a bottleneck problem arises as the number of concurrent traffic flows (NCTF) increases over a single common control channel, as it does for most conventional networks. To alleviate this problem, this paper proposes a two-stage coordination multi-radio multi-channel MAC (TSC-M2MAC) protocol that designates all available channels as both control channels and data channels in a time-division manner through a two-stage coordination. At the first stage, a load-balancing breadth-first-search-based vertex coloring algorithm for the multi-radio conflict graph is proposed to intelligently allocate multiple control channels. At the second stage, a REQ/ACK/RES mechanism is proposed to realize dynamic channel allocation for data transmission. At this stage, the Channel-and-Radio Utilization Structure (CRUS) maintained by each node is able to alleviate the hidden-node problem; also, the proposed adaptive adjustment algorithm for the Channel Negotiation and Allocation (CNA) sub-interval is ab...

  7. Development of a Two-Stage Microalgae Dewatering Process – A Life Cycle Assessment Approach

    Science.gov (United States)

    Soomro, Rizwan R.; Zeng, Xianhai; Lu, Yinghua; Lin, Lu; Danquah, Michael K.

    2016-01-01

    Even though microalgal biomass is leading third-generation biofuel research, significant effort is required to establish an economically viable commercial-scale microalgal biofuel production system. Whilst a significant amount of work has been reported on large-scale cultivation of microalgae using photo-bioreactors and pond systems, research focused on establishing high-performance downstream dewatering operations for large-scale processing at optimal economy is limited. The enormous amount of energy and associated cost required for dewatering large-volume microalgal cultures has been the primary hindrance to the development of the biomass quantity needed for industrial-scale microalgal biofuel production. The extremely dilute nature of large-volume microalgal suspensions and the small size of microalgae cells create a significant processing cost during dewatering, and this has raised major concerns about the economic viability of commercial-scale microalgal biofuel production as an alternative to conventional petroleum fuels. This article reports an effective framework to assess the performance of different dewatering technologies as the basis for establishing an effective two-stage dewatering system. Bioflocculation coupled with tangential flow filtration (TFF) emerged as a promising technique, with a total energy input of 0.041 kWh, 0.05 kg CO2 emissions, and a cost of $0.0043 for producing 1 kg of microalgal biomass. A streamlined process for operational analysis of the two-stage microalgae dewatering technique, encompassing energy input, carbon dioxide emission, and process cost, is presented. PMID:26904075

  8. Two-stage image segmentation based on edge and region information

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    A two-stage method for image segmentation based on edge and region information is proposed. Different deformation schemes are used at the two stages to segment the object correctly in the image plane. At the first stage, the contour of the model is hierarchically divided into several segments, which deform respectively using affine transformations. After the contour is deformed to the approximate boundary of the object, a fine-match mechanism, which uses statistical information of the local region to redefine the external energy of the model, is used to make the contour fit the object's boundary exactly. The algorithm is effective: the hierarchical segmental deformation makes use of the global and local information of the image, the affine transformation keeps the consistency of the model, and revised approaches to computing the internal and external energy are proposed to reduce the algorithm's complexity. The adaptive method of defining the search area at the second stage makes the model converge quickly. The experimental results indicate that the proposed model is effective, robust to local minima, and able to search for concave objects.

  9. Waste-gasification efficiency of a two-stage fluidized-bed gasification system.

    Science.gov (United States)

    Liu, Zhen-Shu; Lin, Chiou-Liang; Chang, Tsung-Jen; Weng, Wang-Chang

    2016-02-01

    This study employed a two-stage fluidized-bed gasifier as a gasification reactor and two additives (CaO and activated carbon) as the Stage-II bed material to investigate the effects of the operating temperature (700°C, 800°C, and 900°C) on the syngas composition, total gas yield, and gas-heating value during simulated waste gasification. The results showed that when the operating temperature increased from 700 to 900°C, the molar percentage of H2 in the syngas produced by the two-stage gasification process increased from 19.4 to 29.7mol% and that the total gas yield and gas-heating value also increased. When CaO was used as the additive, the molar percentage of CO2 in the syngas decreased, and the molar percentage of H2 increased. When activated carbon was used, the molar percentage of CH4 in the syngas increased, and the total gas yield and gas-heating value increased. Overall, CaO had better effects on the production of H2, whereas activated carbon clearly enhanced the total gas yield and gas-heating value. Crown Copyright © 2015. Published by Elsevier Ltd. All rights reserved.

  10. Two-Stage Orthogonal Least Squares Methods for Neural Network Construction.

    Science.gov (United States)

    Zhang, Long; Li, Kang; Bai, Er-Wei; Irwin, George W

    2015-08-01

    A number of neural networks can be formulated as the linear-in-the-parameters models. Training such networks can be transformed to a model selection problem where a compact model is selected from all the candidates using subset selection algorithms. Forward selection methods are popular fast subset selection approaches. However, they may only produce suboptimal models and can be trapped into a local minimum. More recently, a two-stage fast recursive algorithm (TSFRA) combining forward selection and backward model refinement has been proposed to improve the compactness and generalization performance of the model. This paper proposes unified two-stage orthogonal least squares methods instead of the fast recursive-based methods. In contrast to the TSFRA, this paper derives a new simplified relationship between the forward and the backward stages to avoid repetitive computations using the inherent orthogonal properties of the least squares methods. Furthermore, a new term exchanging scheme for backward model refinement is introduced to reduce computational demand. Finally, given the error reduction ratio criterion, effective and efficient forward and backward subset selection procedures are proposed. Extensive examples are presented to demonstrate the improved model compactness constructed by the proposed technique in comparison with some popular methods.
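
    A minimal sketch of the two-stage selection idea follows, assuming plain least squares in place of the paper's orthogonal decomposition and error-reduction-ratio machinery: a greedy forward pass grows the subset, and a backward pass swaps terms whenever that lowers the residual. It is a simplified stand-in, not the authors' algorithm.

```python
# Minimal sketch of two-stage subset selection for a linear-in-the-parameters
# model: a greedy forward pass picks regressors by residual reduction, and a
# backward pass swaps any term for a better one until no swap helps.
import numpy as np

def fit_sse(X, y, idx):
    """Sum of squared errors of least squares on the chosen columns."""
    beta, *_ = np.linalg.lstsq(X[:, idx], y, rcond=None)
    r = y - X[:, idx] @ beta
    return float(r @ r)

def two_stage_select(X, y, n_terms, tol=1e-6):
    chosen = []
    # Stage 1: forward selection.
    while len(chosen) < n_terms:
        rest = [j for j in range(X.shape[1]) if j not in chosen]
        best = min(rest, key=lambda j: fit_sse(X, y, chosen + [j]))
        chosen.append(best)
    # Stage 2: backward refinement.
    improved = True
    while improved:
        improved = False
        for pos in range(len(chosen)):
            current = fit_sse(X, y, chosen)
            for j in [k for k in range(X.shape[1]) if k not in chosen]:
                trial = chosen[:pos] + [j] + chosen[pos + 1:]
                if fit_sse(X, y, trial) < current - tol:
                    chosen = trial
                    improved = True
                    break
    return chosen

rng = np.random.default_rng(2)
X = rng.normal(size=(200, 10))
y = 2.0 * X[:, 3] - 1.5 * X[:, 7] + rng.normal(0, 0.1, 200)
print(two_stage_select(X, y, n_terms=2))   # expected: columns 3 and 7
```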

  11. Two-stage numerical simulation for temperature profile in furnace of tangentially fired pulverized coal boiler

    Institute of Scientific and Technical Information of China (English)

    ZHOU Nai-jun; XU Qiong-hui; ZHOU Ping

    2005-01-01

    Considering that the temperature distribution in the furnace of a tangentially fired pulverized coal boiler is difficult to measure and monitor, a two-stage numerical simulation method was put forward. First, multi-field coupling simulation in typical working conditions was carried out off-line with the software CFX-4.3, and an expression for the temperature profile as a function of the operating parameters was obtained. According to real-time operating parameters, the temperature at an arbitrary point of the furnace can then be calculated from this expression. Thus the temperature profile can be shown on-line and monitoring of the combustion state in the furnace is realized. The simulation model was checked against parameters measured in an operating boiler, DG130-9.8/540. The maximum relative error is less than 12% and the absolute error is less than 120 ℃, which shows that the proposed two-stage simulation method is reliable and able to satisfy the requirements of industrial application.

  12. A low-voltage sense amplifier with two-stage operational amplifier clamping for flash memory

    Science.gov (United States)

    Guo, Jiarong

    2017-04-01

    A low-voltage sense amplifier for flash memory, with a reference current generator utilizing a two-stage operational amplifier clamp structure, is presented in this paper; it is capable of operating with a minimum supply voltage of 1 V. A new reference current generation circuit, composed of a reference cell and a two-stage operational amplifier clamping the drain node of the reference cell, is used to generate the reference current, which avoids the threshold limitation caused by the current-mirror transistor in the traditional sense amplifier. A novel reference voltage generation circuit using a dummy bit-line structure without pull-down current is also adopted, which not only widens the sense window, enhancing read precision, but also saves power consumption. The sense amplifier was implemented in a flash memory realized in 90 nm flash technology. Experimental results show an access time of 14.7 ns with a power supply of 1.2 V at the slow corner and 125 °C. Project supported by the National Natural Science Foundation of China (No. 61376028).

  13. Two-stage high temperature sludge gasification using the waste heat from hot blast furnace slags.

    Science.gov (United States)

    Sun, Yongqi; Zhang, Zuotai; Liu, Lili; Wang, Xidong

    2015-12-01

    Nowadays, disposal of sewage sludge from wastewater treatment plants and recovery of waste heat from the steel industry are two important environmental issues. To integrate these two problems, a two-stage high-temperature sludge gasification approach using the waste heat in hot slags was investigated herein. The whole process was divided into two stages, i.e., low-temperature sludge pyrolysis at ⩽900 °C in an argon agent and high-temperature char gasification at ⩾900 °C in a CO2 agent, with the required heat supplied by hot slags in the corresponding temperature ranges. Both the thermodynamic and kinetic mechanisms were identified, and it was indicated that an Avrami-Erofeev model could best interpret the char gasification stage. Furthermore, a schematic concept of this strategy was portrayed, based on which the potential CO yield and CO2 emission reduction achievable in China could be ~1.92×10^9 m^3 and ~1.93×10^6 t, respectively.

  14. A two-stage broadcast message propagation model in social networks

    Science.gov (United States)

    Wang, Dan; Cheng, Shun-Jun

    2016-11-01

    Message propagation in social networks is becoming a popular topic in complex networks. One type of message in social networks is the broadcast message: a message that has a unique destination unknown to the publisher, such as a 'lost and found' notice. Its propagation always has two stages. Because of this feature, rumor propagation models and epidemic propagation models have difficulty describing this message's propagation accurately. In this paper, an improved two-stage susceptible-infected-removed model is proposed, built on the concepts of a first forwarding probability and a second forwarding probability. Another part of our work is quantifying how several factors influence the chance of successful message transmission at each level, including the topology of the network, the receiving probability, the first-stage forwarding probability, the second-stage forwarding probability, and the length of the shortest path between the publisher and the relevant destination. The proposed model has been simulated on real networks and the results prove the model's effectiveness.
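
    A hedged reading of such a model can be simulated directly. In the sketch below, on a random graph, a node forwards a newly received message with a first-stage probability and re-forwards later exposures with a second-stage probability; all parameter values and the forwarding rule itself are illustrative assumptions, not the paper's exact dynamics.

```python
# Illustrative two-stage forwarding simulation on a random graph: a node that
# receives the message for the first time forwards it with probability P1
# (first stage); nodes exposed again re-forward with probability P2 (second
# stage). Parameters and the forwarding rule are assumptions for illustration.
import numpy as np

rng = np.random.default_rng(3)
N, P_EDGE, P_RECV, P1, P2 = 200, 0.03, 0.9, 0.6, 0.2

upper = np.triu(rng.random((N, N)) < P_EDGE, 1)
adj = upper | upper.T                      # undirected graph, no self-loops

def reaches_destination(src, dst, max_rounds=50):
    exposures = np.zeros(N, dtype=int)
    active = {src}                         # nodes forwarding this round
    for _ in range(max_rounds):
        nxt = set()
        for u in active:
            for v in np.flatnonzero(adj[u]):
                if rng.random() > P_RECV:  # transmission lost
                    continue
                if v == dst:               # message found its destination
                    return True
                exposures[v] += 1
                p = P1 if exposures[v] == 1 else P2
                if rng.random() < p:
                    nxt.add(int(v))
        if not nxt:
            return False
        active = nxt
    return False

trials = [reaches_destination(0, int(rng.integers(1, N))) for _ in range(300)]
print("estimated delivery probability:", np.mean(trials))
```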

  15. Two staged incentive contract focused on efficiency and innovation matching in critical chain project management

    Directory of Open Access Journals (Sweden)

    Min Zhang

    2014-09-01

    Full Text Available Purpose: The purpose of this paper is to define the relatively optimal incentive contract to effectively encourage employees to improve work efficiency while actively engaging in innovative behavior. Design/methodology/approach: This paper analyzes a two-staged incentive contract coordinating efficiency and innovation in Critical Chain Project Management using learning real options, based on principal-agent theory. A situational experiment is used to analyze the validity of the basic model. Findings: The two-staged incentive scheme is more suitable for employees to create and implement learning real options, leading them to engage efficiently in the innovation process in Critical Chain Project Management. We prove that the combination of tolerance for early failure and reward for long-term success is effective in motivating innovation. Research limitations/implications: We do not include the individual characteristics of uncertainty perception, which might affect the consistency of external validity. The basic model and the experiment design need improvement. Practical implications: Project managers should pay closer attention to early innovation behavior and to monitoring feedback on competition time in the implementation of Critical Chain Project Management. Originality/value: The central contribution of this paper is the theoretical and experimental analysis of incentive schemes for innovation in Critical Chain Project Management using principal-agent theory, to encourage the completion of CCPM methods as well as imitative free-riding on the creative ideas of other members of the team.

  16. Selective capsulotomies of the expanded breast as a remodelling method in two-stage breast reconstruction.

    Science.gov (United States)

    Grimaldi, Luca; Campana, Matteo; Brandi, Cesare; Nisi, Giuseppe; Brafa, Anna; Calabrò, Massimiliano; D'Aniello, Carlo

    2013-06-01

    The two-stage breast reconstruction with tissue expander and prosthesis is nowadays a common method for achieving a satisfactory appearance in selected patients who had a mastectomy, but its most common aesthetic drawback is represented by an excessive volumetric increment of the superior half of the reconstructed breast, with a convexity of the profile in that area. A possible solution to limit this effect, and to fulfil the inferior pole, may be obtained by reducing the inferior tissue resistance by means of capsulotomies. This study reports the effects of various types of capsulotomies, performed in 72 patients after removal of the mammary expander, with the aim of emphasising the convexity of the inferior mammary aspect in the expanded breast. According to each kind of desired modification, possible solutions are described. On the basis of subjective and objective evaluations, an overall high degree of satisfaction has been evidenced. The described selective capsulotomies, when properly carried out, may significantly improve the aesthetic results in two-stage reconstructed breasts, with no additional scars, with minimal risks, and with little lengthening of the surgical time.

  17. Rapid Two-stage Versus One-stage Surgical Repair of Interrupted Aortic Arch with Ventricular Septal Defect in Neonates

    Directory of Open Access Journals (Sweden)

    Meng-Lin Lee

    2008-11-01

    Conclusion: The outcome of rapid two-stage repair is comparable to that of one-stage repair. Rapid two-stage repair has the advantages of significantly shorter cardiopulmonary bypass duration and AXC time, and avoids deep hypothermic circulatory arrest. LVOTO remains an unresolved issue, and postoperative aortic arch restenosis can be dilated effectively by percutaneous balloon angioplasty.

  18. Two-Stage Nerve Graft in Severe Scar: A Time-Course Study in a Rat Model

    Directory of Open Access Journals (Sweden)

    Shayan Zadegan

    2015-04-01

    According to the EPT and WRL, the two-stage nerve graft showed significant improvement (P=0.020 and P=0.017, respectively). The TOA showed no significant difference between the two groups. The total vascular index was significantly higher in the two-stage nerve graft group (P

  19. Factors that affect self-care behaviour of female high school students with dysmenorrhoea: a cluster sampling study.

    Science.gov (United States)

    Chang, Shu-Fang; Chuang, Mei-hua

    2012-04-01

    The purpose of this study was to identify factors that affect the self-care behaviour of female high school students with dysmenorrhoea. This cross-sectional study utilized a questionnaire-based survey to understand the self-care behaviour of female high school students with dysmenorrhoea, along with the factors that affect this behaviour. A cluster random sampling method was adopted and questionnaires were used for data collection. Study participants experienced a moderate level of discomfort from dysmenorrhoea, and perceived dysmenorrhoea as serious. This investigation finds that cues to action raised perceived susceptibility to dysmenorrhoea and the perceived effectiveness of self-care behaviour and, therefore, increased the adoption of self-care behaviour. Hence, school nurses should offer female high school students ample resources to apply correct self-care behaviour.

  20. Two stages of Kondo effect and competition between RKKY and Kondo in Gd-based intermetallic compound

    Energy Technology Data Exchange (ETDEWEB)

    Vaezzadeh, Mehdi [Department of Physics, K.N.Toosi University of Technology, P.O. Box 15875-4416, Tehran (Iran, Islamic Republic of)]. E-mail: mehdi@kntu.ac.ir; Yazdani, Ahmad [Tarbiat Modares University, P.O. Box 14155-4838, Tehran (Iran, Islamic Republic of); Vaezzadeh, Majid [Department of Physics, K.N.Toosi University of Technology, P.O. Box 15875-4416, Tehran (Iran, Islamic Republic of); Daneshmand, Gissoo [Department of Physics, K.N.Toosi University of Technology, P.O. Box 15875-4416, Tehran (Iran, Islamic Republic of); Kanzeghi, Ali [Department of Physics, K.N.Toosi University of Technology, P.O. Box 15875-4416, Tehran (Iran, Islamic Republic of)

    2006-05-01

    The magnetic behavior of the Gd-based intermetallic compound Gd{sub 2}Al{sub (1-x)}Au{sub x}, in the form of powder and needles, is investigated. All the samples have an orthorhombic crystal structure. Only the compound with x=0.4 shows the Kondo effect (the other compounds behave normally). For the powder compound with x=0.4, the susceptibility measurement {chi}(T) shows two different stages. Moreover, for T>T{sub K2} a fall in the value of {chi}(T) is observable, which indicates a weak presence of a ferromagnetic phase. Regarding the two stages of the Kondo effect, we observe at the first (T{sub K1}) an increase of {chi}(T) and at the second stage (T{sub K2}) a further remarkable decrease of {chi}(T) (T{sub K1}>T{sub K2}). For the sample in the form of needles, the first stage is observable only under a high magnetic field. This first stage could correspond to a narrow resonance between the Kondo cloud and the itinerant electrons. The second stage, which is remarkably visible for the powder sample, can be attributed to a complete polarization of the Kondo cloud. The observation of these two Kondo stages could be due to the weak presence of an RKKY contribution.

  1. Low vaccination coverage of Greek Roma children amid economic crisis: national survey using stratified cluster sampling

    Science.gov (United States)

    Petraki, Ioanna; Arkoudis, Chrisoula; Terzidis, Agis; Smyrnakis, Emmanouil; Benos, Alexis; Panagiotopoulos, Takis

    2017-01-01

    Abstract Background: Research on Roma health is fragmentary as major methodological obstacles often exist. Reliable estimates on vaccination coverage of Roma children at a national level and identification of risk factors for low coverage could play an instrumental role in developing evidence-based policies to promote vaccination in this marginalized population group. Methods: We carried out a national vaccination coverage survey of Roma children. Thirty Roma settlements, stratified by geographical region and settlement type, were included; 7–10 children aged 24–77 months were selected from each settlement using systematic sampling. Information on children’s vaccination coverage was collected from multiple sources. In the analysis we applied weights for each stratum, identified through a consensus process. Results: A total of 251 Roma children participated in the study. A vaccination document was presented for the large majority (86%). We found very low vaccination coverage for all vaccines. In 35–39% of children ‘minimum vaccination’ (DTP3 and IPV2 and MMR1) was administered, while 34–38% had received HepB3 and 31–35% Hib3; no child was vaccinated against tuberculosis in the first year of life. Better living conditions and primary care services close to Roma settlements were associated with higher vaccination indices. Conclusions: Our study showed inadequate vaccination coverage of Roma children in Greece, much lower than that of the non-minority child population. This serious public health challenge should be systematically addressed, or, amid continuing economic recession, the gap may widen. Valid national estimates on important characteristics of the Roma population can contribute to planning inclusion policies. PMID:27694159

  2. Two-stage unilateral versus one-stage bilateral single-port sympathectomy for palmar and axillary hyperhidrosis†

    Science.gov (United States)

    Ibrahim, Mohsen; Menna, Cecilia; Andreetti, Claudio; Ciccone, Anna Maria; D'Andrilli, Antonio; Maurizi, Giulio; Poggi, Camilla; Vanni, Camilla; Venuta, Federico; Rendina, Erino Angelo

    2013-01-01

    OBJECTIVES Video-assisted thoracoscopic sympathectomy is currently the best treatment for palmar and axillary hyperhidrosis. It can be performed through either one or two stages of surgery. This study aimed to evaluate the operative and postoperative results of two-stage unilateral vs one-stage bilateral thoracoscopic sympathectomy. METHODS From November 1995 to February 2011, 270 patients with severe palmar and/or axillary hyperhidrosis were recruited for this study. One hundred and thirty patients received one-stage bilateral, single-port video-assisted thoracoscopic sympathectomy (one-stage group) and 140, two-stage unilateral, single-port video-assisted thoracoscopic sympathectomy, with a mean time interval of 4 months between the procedures (two-stage group). RESULTS The mean postoperative follow-up period was 12.5 (range: 1–24 months). After surgery, hands and axillae of all patients were dry and warm. Sixteen (12%) patients of the one-stage group and 15 (11%) of the two-stage group suffered from mild/moderate pain (P = 0.8482). The mean operative time was 38 ± 5 min in the one-stage group and 39 ± 8 min in the two-stage group (P = 0.199). Pneumothorax occurred in 8 (6%) patients of the one-stage group and in 11 (8%) of the two-stage group. Compensatory sweating occurred in 25 (19%) patients of the one-stage group and in 6 (4%) of the two-stage group (P = 0.0001). No patients developed Horner's syndrome. CONCLUSIONS Both two-stage unilateral and one-stage bilateral single-port video-assisted thoracoscopic sympathectomies are effective, safe and minimally invasive procedures. Two-stage unilateral sympathectomy can be performed with a lower occurrence of compensatory sweating, improving permanently the quality of life in patients with palmar and axillary hyperhidrosis. PMID:23442937

  3. Two-stage unilateral versus one-stage bilateral single-port sympathectomy for palmar and axillary hyperhidrosis.

    Science.gov (United States)

    Ibrahim, Mohsen; Menna, Cecilia; Andreetti, Claudio; Ciccone, Anna Maria; D'Andrilli, Antonio; Maurizi, Giulio; Poggi, Camilla; Vanni, Camilla; Venuta, Federico; Rendina, Erino Angelo

    2013-06-01

    Video-assisted thoracoscopic sympathectomy is currently the best treatment for palmar and axillary hyperhidrosis. It can be performed through either one or two stages of surgery. This study aimed to evaluate the operative and postoperative results of two-stage unilateral vs one-stage bilateral thoracoscopic sympathectomy. From November 1995 to February 2011, 270 patients with severe palmar and/or axillary hyperhidrosis were recruited for this study. One hundred and thirty patients received one-stage bilateral, single-port video-assisted thoracoscopic sympathectomy (one-stage group) and 140, two-stage unilateral, single-port video-assisted thoracoscopic sympathectomy, with a mean time interval of 4 months between the procedures (two-stage group). The mean postoperative follow-up period was 12.5 (range: 1-24 months). After surgery, hands and axillae of all patients were dry and warm. Sixteen (12%) patients of the one-stage group and 15 (11%) of the two-stage group suffered from mild/moderate pain (P = 0.8482). The mean operative time was 38 ± 5 min in the one-stage group and 39 ± 8 min in the two-stage group (P = 0.199). Pneumothorax occurred in 8 (6%) patients of the one-stage group and in 11 (8%) of the two-stage group. Compensatory sweating occurred in 25 (19%) patients of the one-stage group and in 6 (4%) of the two-stage group (P = 0.0001). No patients developed Horner's syndrome. Both two-stage unilateral and one-stage bilateral single-port video-assisted thoracoscopic sympathectomies are effective, safe and minimally invasive procedures. Two-stage unilateral sympathectomy can be performed with a lower occurrence of compensatory sweating, improving permanently the quality of life in patients with palmar and axillary hyperhidrosis.

  4. A two-stage strategy to accommodate general patterns of confounding in the design of observational studies.

    Science.gov (United States)

    Haneuse, Sebastien; Schildcrout, Jonathan; Gillen, Daniel

    2012-04-01

    Accommodating general patterns of confounding in sample size/power calculations for observational studies is extremely challenging, both technically and scientifically. While employing previously implemented sample size/power tools is appealing, they typically ignore important aspects of the design/data structure. In this paper, we show that sample size/power calculations that ignore confounding can be much more unreliable than is conventionally thought; using real data from the US state of North Carolina, naive calculations yield sample size estimates that are half those obtained when confounding is appropriately acknowledged. Unfortunately, eliciting realistic design parameters for confounding mechanisms is difficult. To overcome this, we propose a novel two-stage strategy for observational study design that can accommodate arbitrary patterns of confounding. At the first stage, researchers establish bounds for power that facilitate the decision of whether or not to initiate the study. At the second stage, internal pilot data are used to estimate key scientific inputs that can be used to obtain realistic sample size/power. Our results indicate that the strategy is effective at replicating gold standard calculations based on knowing the true confounding mechanism. Finally, we show that consideration of the nature of confounding is a crucial aspect of the elicitation process; depending on whether the confounder is positively or negatively associated with the exposure of interest and outcome, naive power calculations can either under or overestimate the required sample size. Throughout, simulation is advocated as the only general means to obtain realistic estimates of statistical power; we describe, and provide in an R package, a simple algorithm for estimating power for a case-control study.
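
    The simulation-based power calculation advocated above is easy to sketch. The following minimal example, with arbitrary effect sizes and a single normal confounder, simulates confounded data, fits the adjusted linear model, and estimates power as the rejection rate at alpha = 0.05; it illustrates the general recipe only, not the authors' R package.

```python
# Simulation-based power under confounding: a confounder C affects both
# exposure X and outcome Y; the adjusted model Y ~ 1 + X + C is fitted and
# the exposure effect tested. All effect sizes are arbitrary choices.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

def one_trial(n, beta_x=0.3, conf_strength=0.8):
    C = rng.normal(size=n)
    X = conf_strength * C + rng.normal(size=n)     # exposure depends on C
    Y = beta_x * X + 0.5 * C + rng.normal(size=n)  # outcome depends on X and C
    D = np.column_stack([np.ones(n), X, C])        # adjusted design matrix
    beta, *_ = np.linalg.lstsq(D, Y, rcond=None)
    resid = Y - D @ beta
    sigma2 = resid @ resid / (n - D.shape[1])
    cov = sigma2 * np.linalg.inv(D.T @ D)
    t = beta[1] / np.sqrt(cov[1, 1])               # Wald test for the exposure
    p = 2 * stats.t.sf(abs(t), df=n - D.shape[1])
    return p < 0.05

for n in (50, 100, 200):
    power = np.mean([one_trial(n) for _ in range(1000)])
    print(f"n = {n:4d}  estimated power = {power:.2f}")
```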

  5. Two-stage atlas subset selection in multi-atlas based image segmentation

    Energy Technology Data Exchange (ETDEWEB)

    Zhao, Tingting, E-mail: tingtingzhao@mednet.ucla.edu; Ruan, Dan, E-mail: druan@mednet.ucla.edu [The Department of Radiation Oncology, University of California, Los Angeles, California 90095 (United States)

    2015-06-15

    Purpose: Fast growing access to large databases and cloud stored data presents a unique opportunity for multi-atlas based image segmentation and also presents challenges in heterogeneous atlas quality and computation burden. This work aims to develop a novel two-stage method tailored to the special needs in the face of large atlas collection with varied quality, so that high-accuracy segmentation can be achieved with low computational cost. Methods: An atlas subset selection scheme is proposed to substitute a significant portion of the computationally expensive full-fledged registration in the conventional scheme with a low-cost alternative. More specifically, the authors introduce a two-stage atlas subset selection method. In the first stage, an augmented subset is obtained based on a low-cost registration configuration and a preliminary relevance metric; in the second stage, the subset is further narrowed down to a fusion set of desired size, based on full-fledged registration and a refined relevance metric. An inference model is developed to characterize the relationship between the preliminary and refined relevance metrics, and a proper augmented subset size is derived to ensure that the desired atlases survive the preliminary selection with high probability. Results: The performance of the proposed scheme has been assessed with cross validation based on two clinical datasets consisting of manually segmented prostate and brain magnetic resonance images, respectively. The proposed scheme demonstrates comparable end-to-end segmentation performance as the conventional single-stage selection method, but with significant computation reduction. Compared with the alternative computation reduction method, their scheme improves the mean and medium Dice similarity coefficient value from (0.74, 0.78) to (0.83, 0.85) and from (0.82, 0.84) to (0.95, 0.95) for prostate and corpus callosum segmentation, respectively, with statistical significance. Conclusions: The authors
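
    The two-stage selection logic can be illustrated with a toy stand-in in which image similarity replaces registration: a cheap, downsampled similarity prunes the atlas pool to an augmented subset, and a costlier full-resolution similarity picks the final fusion set. All data and subset sizes below are invented for illustration.

```python
# Toy two-stage atlas subset selection: a low-cost relevance metric (SSD on
# 8x-downsampled images) builds an augmented subset; a costly metric
# (full-resolution SSD, standing in for full-fledged registration) then
# selects the fusion set. Data and sizes are invented.
import numpy as np

rng = np.random.default_rng(5)
target = rng.random((64, 64))
atlases = [np.clip(target + rng.normal(0, s, (64, 64)), 0, 1)
           for s in rng.uniform(0.05, 0.5, 100)]

def cheap_score(a):      # stage 1: preliminary relevance, very cheap
    return float(((a[::8, ::8] - target[::8, ::8]) ** 2).sum())

def costly_score(a):     # stage 2: refined relevance, expensive in practice
    return float(((a - target) ** 2).sum())

augmented = sorted(range(len(atlases)), key=lambda i: cheap_score(atlases[i]))[:20]
fusion = sorted(augmented, key=lambda i: costly_score(atlases[i]))[:5]
print("fusion set:", fusion)
```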

  6. Two-stage vs single-stage management for concomitant gallstones and common bile duct stones

    Institute of Scientific and Technical Information of China (English)

    Jiong Lu; Yao Cheng; Xian-Ze Xiong; Yi-Xin Lin; Si-Jia Wu; Nan-Sheng Cheng

    2012-01-01

    AIM: To evaluate the safety and effectiveness of two-stage vs single-stage management for concomitant gallstones and common bile duct stones. METHODS: Four databases, including PubMed, Embase, the Cochrane Central Register of Controlled Trials and the Science Citation Index up to September 2011, were searched to identify all randomized controlled trials (RCTs). Data were extracted from the studies by two independent reviewers. The primary outcomes were stone clearance from the common bile duct, postoperative morbidity and mortality. The secondary outcomes were conversion to other procedures, number of procedures per patient, length of hospital stay, total operative time, hospitalization charges, patient acceptance and quality of life scores. RESULTS: Seven eligible RCTs [five trials (n = 621) comparing preoperative endoscopic retrograde cholangiopancreatography (ERCP)/endoscopic sphincterotomy (EST) + laparoscopic cholecystectomy (LC) with LC + laparoscopic common bile duct exploration (LCBDE); two trials (n = 166) comparing postoperative ERCP/EST + LC with LC + LCBDE], comprising 787 patients in total, were included in the final analysis. The meta-analysis detected no statistically significant difference between the two groups in stone clearance from the common bile duct [risk ratios (RR) = -0.10, 95% CI: -0.24 to 0.04, P = 0.17], postoperative morbidity (RR = 0.79, 95% CI: 0.58 to 1.10, P = 0.16), mortality (RR = 2.19, 95% CI: 0.33 to 14.67, P = 0.42), conversion to other procedures (RR = 1.21, 95% CI: 0.54 to 2.70, P = 0.39), length of hospital stay (MD = 0.99, 95% CI: -1.59 to 3.57, P = 0.45), or total operative time (MD = 12.14, 95% CI: -1.83 to 26.10, P = 0.09). Two-stage (LC + ERCP/EST) management clearly required more procedures per patient than single-stage (LC + LCBDE) management. CONCLUSION: Single-stage management is equivalent to two-stage management but requires fewer procedures. However, the patient's condition, the operator's expertise and local resources should be taken into account in

  7. Clustered lot quality assurance sampling: a tool to monitor immunization coverage rapidly during a national yellow fever and polio vaccination campaign in Cameroon, May 2009.

    Science.gov (United States)

    Pezzoli, L; Tchio, R; Dzossa, A D; Ndjomo, S; Takeu, A; Anya, B; Ticha, J; Ronveaux, O; Lewis, R F

    2012-01-01

    We used the clustered lot quality assurance sampling (clustered-LQAS) technique to identify districts with low immunization coverage and guide mop-up actions during the last 4 days of a combined oral polio vaccine (OPV) and yellow fever (YF) vaccination campaign conducted in Cameroon in May 2009. We monitored 17 pre-selected districts at risk for low coverage. We designed LQAS plans to reject districts with YF vaccination coverage <90% and with OPV coverage <95%. In each lot the sample size was 50 (five clusters of 10) with decision values of 3 for assessing OPV and 7 for YF coverage. We 'rejected' 10 districts for low YF coverage and 14 for low OPV coverage. Hence we recommended a 2-day extension of the campaign. Clustered-LQAS proved to be useful in guiding the campaign vaccination strategy before the completion of the operations.
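
    The decision rule is simple enough to check by simulation. The sketch below, under the illustrative assumption of a small between-cluster coverage spread, draws five clusters of ten children and flags a district when the number of unvaccinated children exceeds the decision value, mirroring the yellow fever plan (n = 50, d = 7).

```python
# Clustered-LQAS decision rule from the text: sample 5 clusters of 10
# children; 'reject' the district if unvaccinated count > d (d = 7 for YF,
# d = 3 for OPV). The between-cluster spread below is an invented stand-in
# for intra-cluster correlation.
import numpy as np

rng = np.random.default_rng(6)

def lqas_rejects(true_coverage, n_clusters=5, per_cluster=10, d=7,
                 cluster_spread=0.05):
    """True if the lot is rejected (coverage judged below target)."""
    unvaccinated = 0
    for _ in range(n_clusters):
        # crude cluster effect: coverage varies a little between clusters
        p = np.clip(true_coverage + rng.normal(0, cluster_spread), 0, 1)
        unvaccinated += rng.binomial(per_cluster, 1 - p)
    return unvaccinated > d

for cov in (0.95, 0.90, 0.85, 0.80):
    rej = np.mean([lqas_rejects(cov) for _ in range(2000)])
    print(f"true YF coverage {cov:.0%}: rejection probability {rej:.2f}")
```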

  8. The extended ROSAT-ESO Flux Limited X-ray Galaxy Cluster Survey (REFLEX II) III. Construction of the first flux-limited supercluster sample

    CERN Document Server

    Chon, Gayoung; Nowak, Nina

    2012-01-01

    We present the first supercluster catalogue constructed with the extended ROSAT-ESO Flux Limited X-ray Galaxy Cluster survey (REFLEX II) data, which comprises 919 X-ray selected galaxy clusters. Based on this cluster catalogue we construct a supercluster catalogue using a friends-of-friends algorithm with a linking length depending on the local cluster density. The resulting catalogue comprises 164 superclusters at redshift z<=0.4. We study the properties of different catalogues such as the distributions of the redshift, extent and multiplicity by varying the choice of parameters. In addition to the main catalogue we compile a large volume-limited cluster sample to investigate the statistics of the superclusters. We also compare the X-ray luminosity function for the clusters in superclusters with that for the field clusters with the flux- and volume-limited catalogues. The results mildly support the theoretical suggestion of a top-heavy X-ray luminosity function of galaxy clusters in regions of high cluste...

  9. Two-stage dilute-acid and organic-solvent lignocellulosic pretreatment for enhanced bioprocessing

    Energy Technology Data Exchange (ETDEWEB)

    Brodeur, G.; Telotte, J.; Stickel, J. J.; Ramakrishnan, S.

    2016-11-01

    A two-stage pretreatment approach for biomass is developed in the current work, in which dilute acid (DA) pretreatment is followed by a solvent-based pretreatment with N-methyl morpholine N-oxide (NMMO). When the combined pretreatment (DAWNT) is applied to sugarcane bagasse and corn stover, the rates of hydrolysis and overall yields (>90%) improve dramatically, and under certain conditions the additional NMMO step cuts 48 h from the hydrolysis time needed to reach similar conversions. DAWNT shows a 2-fold increase in characteristic rates and also fractionates the different components of biomass: DA treatment removes the hemicellulose, the remaining cellulose is broken down to simple sugars by enzymatic hydrolysis after NMMO treatment, and the residual solid is high-purity lignin. Future work will focus on developing a full-scale economic analysis of DAWNT for use in biomass fractionation.

  10. Reconstruction of Gene Regulatory Networks Based on Two-Stage Bayesian Network Structure Learning Algorithm

    Institute of Scientific and Technical Information of China (English)

    Gui-xia Liu; Wei Feng; Han Wang; Lei Liu; Chun-guang Zhou

    2009-01-01

    In the post-genomic biology era, the reconstruction of gene regulatory networks from microarray gene expression data is very important for understanding the underlying biological system, and it has been a challenging task in bioinformatics. The Bayesian network model has been used in reconstructing gene regulatory networks for its advantages, but how to determine the network structure and parameters remains to be explored. This paper proposes a two-stage structure learning algorithm which integrates an immune evolution algorithm to build a Bayesian network. The new algorithm is evaluated with both simulated and yeast cell cycle data. The experimental results indicate that the proposed algorithm can find many of the known real regulatory relationships from the literature and predict unknown ones with high validity and accuracy.

  11. The Application of Two-stage Structure Decomposition Technique to the Study of Industrial Carbon Emissions

    Institute of Scientific and Technical Information of China (English)

    Yanqiu HE

    2015-01-01

    Control of total carbon emissions is the ultimate goal of carbon emission reduction, and industrial carbon emissions are its basic units. Building on existing research results, this paper proposes a two-stage input-output structure decomposition method that fully combines the input-output method with the structure decomposition technique. The study uses more comprehensive technical progress indicators than previous work, including the utilization efficiency of all kinds of intermediate inputs such as energy and non-energy products, and maps them onto the factors affecting the carbon emissions of different industries. The analysis yields the rate at which each factor affects industrial carbon emissions, providing a theoretical basis and data support for the control of China's total carbon emissions from the perspective of industrial emissions.

  12. A two-stage metal valorisation process from electric arc furnace dust (EAFD)

    Directory of Open Access Journals (Sweden)

    H. Issa

    2016-04-01

    This paper demonstrates the possibility of separate zinc and lead recovery from coal composite pellets, composed of EAFD together with other synergetic iron-bearing wastes and by-products (mill scale, pyrite cinder, magnetite concentrate), through a two-stage process. The results show that in the first, low-temperature stage, performed in an electro-resistance furnace, removal of lead is enabled by the presence of chlorides in the system. In the second stage, performed at higher temperatures in a Direct Current (DC) plasma furnace, valorisation of zinc is conducted. Using this process, several final products were obtained, including a higher-purity zinc oxide which, by its properties, corresponds to washed Waelz oxide.

  13. A wavelet-based two-stage near-lossless coder.

    Science.gov (United States)

    Yea, Sehoon; Pearlman, William A

    2006-11-01

    In this paper, we present a two-stage near-lossless compression scheme. It belongs to the class of "lossy plus residual coding" and consists of a wavelet-based lossy layer followed by arithmetic coding of the quantized residual to guarantee a given L(infinity) error bound in the pixel domain. We focus on the selection of the optimum bit rate for the lossy layer to achieve the minimum total bit rate. Unlike other similar lossy plus lossless approaches using a wavelet-based lossy layer, the proposed method does not require iteration of decoding and inverse discrete wavelet transform in succession to locate the optimum bit rate. We propose a simple method to estimate the optimal bit rate, with a theoretical justification based on the critical rate argument from the rate-distortion theory and the independence of the residual error.
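
    The L(infinity) guarantee of such "lossy plus residual" schemes follows from uniform quantization of the integer residual with step 2*delta + 1; the generic Python sketch below illustrates the bound and is not the authors' coder.

        # Quantizing the residual with step 2*delta + 1 bounds the per-pixel
        # reconstruction error by delta (the L-infinity guarantee).
        import numpy as np

        def encode_residual(residual, delta):
            step = 2 * delta + 1
            return np.round(residual / step).astype(int)  # indices to entropy-code

        def decode_residual(indices, delta):
            return indices * (2 * delta + 1)

        original = np.array([100, 101, 107, 95])
        lossy = np.array([98, 103, 104, 99])              # output of the lossy layer
        q = encode_residual(original - lossy, delta=2)
        reconstructed = lossy + decode_residual(q, delta=2)
        assert np.max(np.abs(original - reconstructed)) <= 2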

  14. Two-Stage Over-the-Air (OTA) Test Method for LTE MIMO Device Performance Evaluation

    Directory of Open Access Journals (Sweden)

    Ya Jing

    2012-01-01

    With MIMO technology being adopted by the wireless communication standards LTE and HSPA+, MIMO OTA research has attracted wide interest from both industry and academia. Parallel studies are underway in COST2100, CTIA, and 3GPP RAN WG4. The major test challenge for MIMO OTA is how to create a repeatable scenario which accurately reflects the MIMO antenna radiation performance in a realistic wireless propagation environment. Different MIMO OTA methods differ in the way they reproduce a specified MIMO channel model. This paper introduces a novel, flexible, and cost-effective method for measuring MIMO OTA using a two-stage approach. In the first stage, the antenna pattern is measured in an anechoic chamber using a nonintrusive approach, that is, without cabled connections or modifications to the device. In the second stage, the antenna pattern is convolved with the chosen channel model in a channel emulator to measure throughput using a cabled connection.
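
    The second-stage combination of the measured antenna pattern with the channel model can be sketched as weighting each propagation path by the complex antenna gain at its angle of arrival; the 1-degree pattern sampling and toy path parameters below are assumptions for illustration.

        # Weight each path of the channel model by the antenna gain at its
        # angle of arrival before the faded signal is emulated.
        import numpy as np

        def apply_antenna_pattern(path_angles_deg, path_gains, pattern):
            """pattern: complex antenna gain sampled every 1 degree (length 360)."""
            idx = np.mod(np.round(path_angles_deg).astype(int), 360)
            return path_gains * pattern[idx]

        pattern = np.ones(360, dtype=complex)             # toy isotropic pattern
        weighted = apply_antenna_pattern(np.array([10.0, 135.4]),
                                         np.array([1.0 + 0.0j, 0.5j]), pattern)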

  15. Two stages of parafoveal processing during reading: Evidence from a display change detection task.

    Science.gov (United States)

    Angele, Bernhard; Slattery, Timothy J; Rayner, Keith

    2016-08-01

    We used a display change detection paradigm (Slattery, Angele, & Rayner, 2011, Human Perception and Performance, 37, 1924-1938) to investigate whether display change detection uses orthographic regularity and whether detection is affected by the processing difficulty of the word preceding the boundary that triggers the display change. Subjects were significantly more sensitive to display changes when the change was from a nonwordlike preview than when the change was from a wordlike preview, but the preview benefit effect on the target word was not affected by whether the preview was wordlike or nonwordlike. Additionally, we did not find any influence of preboundary word frequency on display change detection performance. Our results suggest that display change detection and lexical processing do not use the same cognitive mechanisms. We propose that parafoveal processing takes place in two stages: an early, orthography-based, preattentional stage, and a late, attention-dependent lexical access stage.

  16. Enhanced biodiesel production in Neochloris oleoabundans by a semi-continuous process in two stage photobioreactors.

    Science.gov (United States)

    Yoon, Se Young; Hong, Min Eui; Chang, Won Seok; Sim, Sang Jun

    2015-07-01

    Under autotrophic conditions, highly productive biodiesel production was achieved in Neochloris oleoabundans using a semi-continuous culture system. In particular, flue gas generated by combustion of liquefied natural gas and natural solar radiation were used for a cost-effective microalgal culture system. In the semi-continuous culture, the greater part (~80%) of the culture volume, containing vegetative cells grown under nitrogen-replete conditions in a first photobioreactor (PBR), was directly transferred to a second PBR and cultured sequentially under nitrogen-deplete conditions to accelerate oil accumulation. As a result, in semi-continuous culture, the productivities of biomass and biodiesel were increased by 58% (growth phase) and 51% (induction phase), respectively, compared to cells in batch culture. The semi-continuous culture system using two-stage photobioreactors is a very efficient strategy to further improve biodiesel production from microalgae under photoautotrophic conditions.

  17. The Sources of Efficiency of the Nigerian Banking Industry: A Two- Stage Approach

    Directory of Open Access Journals (Sweden)

    Frances Obafemi

    2013-11-01

    The paper employed a two-stage Data Envelopment Analysis (DEA) approach to examine the sources of technical efficiency in the Nigerian banking sub-sector. Using a cross section of commercial and merchant banks, the study showed that the Nigerian banking industry was not efficient in both the pre- and post-liberalization eras. The study further revealed that market share was the strongest determinant of technical efficiency in the Nigerian banking industry. Thus, appropriate macroeconomic policy, institutional development and structural reforms must accompany financial liberalization to create the stable environment required for it to succeed. Hence, the present bank consolidation and reforms by the Central Bank of Nigeria, which started with Soludo and continued with Sanusi, are considered necessary, especially in the areas of e-banking and reorganizing the management of banks.

  18. Two-stage triolein breath test differentiates pancreatic insufficiency from other causes of malabsorption

    Energy Technology Data Exchange (ETDEWEB)

    Goff, J.S.

    1982-07-01

    In 24 patients with malabsorption, (14C)triolein breath tests were conducted before and together with the administration of pancreatic enzymes (Pancrease, Johnson and Johnson, Skillman, N.J.). Eleven patients with pancreatic insufficiency had a significant rise in peak percent dose per hour 14CO2 excretion after Pancrease, whereas 13 patients with other causes of malabsorption had no increase in 14CO2 excretion (2.61 +/- 0.96 vs. 0.15 +/- 0.45, p < 0.001). The two-stage (14C)triolein breath test appears to be an accurate and simple noninvasive test of fat malabsorption that differentiates steatorrhea secondary to pancreatic insufficiency from other causes of steatorrhea.

  19. Heuristic for Critical Machine Based a Lot Streaming for Two-Stage Hybrid Production Environment

    Science.gov (United States)

    Vivek, P.; Saravanan, R.; Chandrasekaran, M.; Pugazhenthi, R.

    2017-03-01

    Lot streaming in a hybrid flowshop (HFS) is encountered in many real-world problems. This paper deals with a heuristic approach to lot streaming based on critical machine considerations for a two-stage hybrid flowshop. The first stage has two identical parallel machines and the second stage has only one machine; the second-stage machine is considered critical for valid reasons, and such problems are known to be NP-hard. A mathematical model was developed for the selected problem, and simulation modelling and analysis were carried out in Extend V6 software. A heuristic was developed for obtaining an optimal lot streaming schedule, and eleven cases of lot streaming were considered. The proposed heuristic was verified and validated by real-time simulation experiments: all possible lot streaming strategies, and all possible sequences under each strategy, were simulated and examined. The heuristic yielded the optimal schedule in all eleven cases, and a procedure for identifying the best lot streaming strategy was suggested.

  20. Product prioritization in a two-stage food production system with intermediate storage

    DEFF Research Database (Denmark)

    Akkerman, Renzo; van Donk, Dirk Pieter

    2007-01-01

    In the food-processing industry, usually a limited number of storage tanks for intermediate storage is available, which are used for different products. The market sometimes requires extremely short lead times for some products, leading to prioritization of these products, partly through the dedication of a storage tank. This type of situation has hardly been investigated, although planners struggle with it in practice. This paper aims at investigating the fundamental effect of prioritization and dedicated storage in a two-stage production system, for various product mixes. We show...

  1. Experimental and modeling study of a two-stage pilot scale high solid anaerobic digester system.

    Science.gov (United States)

    Yu, Liang; Zhao, Quanbao; Ma, Jingwei; Frear, Craig; Chen, Shulin

    2012-11-01

    This study established a comprehensive model to configure a new two-stage high solid anaerobic digester (HSAD) system designed for the highly degradable organic fraction of municipal solid wastes (OFMSW). The HSAD reactor as the first stage was naturally separated into two zones due to biogas floatation and the low specific gravity of the solid waste: the solid waste was retained in the upper zone, while only the liquid leachate resided in the lower zone. Continuous stirred-tank reactor (CSTR) and advective-diffusive reactor (ADR) models were constructed in series to describe the whole system, with Anaerobic Digestion Model No. 1 (ADM1) used as the reaction kinetics and incorporated into each reactor module. Compared with the experimental data, the simulation results indicated that the model was able to predict the pH, volatile fatty acid (VFA) and biogas production well.
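
    The series structure of the model can be illustrated with two well-mixed tanks in series and a first-order consumption term standing in for the full ADM1 kinetics; the advective-diffusive second stage is simplified to a second tank here, and all names and values are illustrative assumptions rather than the paper's model.

        # Two reactors in series with first-order substrate consumption:
        # dC/dt = (Q/V) * (C_in - C) - k * C, integrated by explicit Euler.
        import numpy as np

        def cstr_step(c, c_in, q_over_v, k, dt):
            """One Euler step of the mass balance."""
            return c + dt * (q_over_v * (c_in - c) - k * c)

        dt, t_end = 0.01, 100.0
        c_upper = c_leachate = 0.0
        for _ in np.arange(0.0, t_end, dt):
            c_upper = cstr_step(c_upper, c_in=10.0, q_over_v=0.1, k=0.5, dt=dt)
            c_leachate = cstr_step(c_leachate, c_in=c_upper, q_over_v=0.1, k=0.5, dt=dt)
        print(round(c_upper, 3), round(c_leachate, 3))    # near steady state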

  2. Study of a two-stage photobase generator for photolithography in microelectronics.

    Science.gov (United States)

    Turro, Nicholas J; Li, Yongjun; Jockusch, Steffen; Hagiwara, Yuji; Okazaki, Masahiro; Mesch, Ryan A; Schuster, David I; Willson, C Grant

    2013-03-01

    The investigation of the photochemistry of a two-stage photobase generator (PBG) is described. Absorption of a photon by a latent PBG (1) (first step) produces a PBG (2). Irradiation of 2 in the presence of water produces a base (second step). This two-photon sequence (1 + hν → 2 + hν → base) is an important component in the design of photoresists for pitch division technology, a method that doubles the resolution of projection photolithography for the production of microelectronic chips. In the present system, the excitation of 1 results in a Norrish type II intramolecular hydrogen abstraction to generate a 1,4-biradical that undergoes cleavage to form 2 and acetophenone (Φ ∼ 0.04). In the second step, excitation of 2 causes cleavage of the oxime ester (Φ = 0.56) followed by base generation after reaction with water.

  3. ADM1-based modeling of methane production from acidified sweet sorghum extract in a two-stage process

    DEFF Research Database (Denmark)

    Antonopoulou, Georgia; Gavala, Hariklia N.; Skiadas, Ioannis

    2012-01-01

    The present study focused on the application of the Anaerobic Digestion Model 1 (ADM1) to the methane production from acidified sorghum extract generated from a hydrogen-producing bioreactor in a two-stage anaerobic process. The kinetic parameters for hydrogen and volatile fatty acids consumption were estimated through fitting of the model equations to the data obtained from batch experiments. The simulation of the continuous reactor performance at all HRTs tested (20, 15 and 10 d) was very satisfactory. Specifically, the largest deviation of the theoretical predictions from the experimental data was 12% for the methane production rate at the HRT of 20 d, while the deviation values for the 15 and 10 d HRTs were 1.9% and 1.1%, respectively. The model predictions regarding pH, methane percentage in the gas phase and COD removal were in very good agreement with the experimental data, with a deviation...

  4. A Two-Stage Diagnosis Framework for Wind Turbine Gearbox Condition Monitoring

    Directory of Open Access Journals (Sweden)

    Janet M. Twomey

    2013-01-01

    Advances in high performance sensing technologies enable the development of wind turbine condition monitoring systems to diagnose and predict the system-wide effects of failure events. This paper presents a vibration-based two-stage fault detection framework for failure diagnosis of rotating components in wind turbines. The proposed framework integrates an analytical defect detection method and a graphical verification method to ensure diagnosis efficiency and accuracy. The efficacy of the proposed methodology is demonstrated with a case study using the gearbox condition monitoring Round Robin study dataset provided by the National Renewable Energy Laboratory (NREL). The developed methodology successfully identified five of the seven total faults with accurate severity levels, without producing any false alarms in the blind analysis. The case study results indicated that the developed fault detection framework is effective for analyzing gear and bearing faults in wind turbine drive train systems based upon system vibration characteristics.

  5. Nitrification and microalgae cultivation for two-stage biological nutrient valorization from source separated urine.

    Science.gov (United States)

    Coppens, Joeri; Lindeboom, Ralph; Muys, Maarten; Coessens, Wout; Alloul, Abbas; Meerbergen, Ken; Lievens, Bart; Clauwaert, Peter; Boon, Nico; Vlaeminck, Siegfried E

    2016-07-01

    Urine contains the majority of the nutrients in urban wastewaters and is an ideal nutrient recovery target. In this study, stabilization of real undiluted urine through nitrification and subsequent microalgae cultivation was explored as a strategy for biological nutrient recovery. A nitrifying inoculum screening revealed that a commercial aquaculture inoculum had the highest halotolerance. This inoculum was compared with municipal activated sludge for the start-up of two nitrification membrane bioreactors. Complete nitrification of undiluted urine was achieved in both systems at a conductivity of 75 mS cm(-1) and a loading rate above 450 mg N L(-1) d(-1). The halotolerant inoculum shortened the start-up time by 54%. Nitrite oxidizers showed faster salt adaptation, and Nitrobacter spp. became the dominant nitrite oxidizers. Nitrified urine as a growth medium for Arthrospira platensis demonstrated superior growth compared to untreated urine and resulted in a high protein content of 62%. This two-stage strategy is therefore a promising approach for biological nutrient recovery.

  6. STOCHASTIC DISCRETE MODEL OF TWO-STAGE ISOLATION SYSTEM WITH RIGID LIMITERS

    Institute of Scientific and Technical Information of China (English)

    HE Hua; FENG Qi; SHEN Rong-ying; WANG Yu

    2006-01-01

    The possible intermittent impacts of a two-stage isolation system with rigid limiters have been investigated. The isolation system is under periodic external excitation disturbed by small stationary Gaussian white noise after shock. The maximal impact ... In the period after shock, the zero-order approximate stochastic discrete model and the first-order approximate stochastic model are developed. The real isolation system of an MTU diesel engine is used to evaluate the established model. After calculation of the numerical example, the effects of noise excitation on the isolation system are discussed. The results show that the behaviour of the system is complicated due to intermittent impact, and the difference between the zero-order model and the first-order model may be great. The effect of small noise is obvious. The results may be expected to be useful to naval designers.

  7. Two-stage high frequency pulse tube cooler for refrigeration at 25 K

    CERN Document Server

    Dietrich, M

    2009-01-01

    A two-stage Stirling-type U-shape pulse tube cryocooler driven by a 10 kW-class linear compressor was designed, built and tested. A special feature of the cold head is the absence of a heat exchanger at the cold end of the first stage, since the intended application requires no cooling power at an intermediate temperature. Simulations were done using Sage software to find the optimum operating conditions and cold head geometry. Flow-impedance matching was required to connect the compressor, designed for 60 Hz operation, to the 40 Hz cold head. A cooling power of 12.9 W at 25 K with an electrical input power of 4.6 kW has been achieved so far. The lowest temperature reached is 13.7 K.

  8. Two-stage reflective optical system for achromatic 10 nm x-ray focusing

    Science.gov (United States)

    Motoyama, Hiroto; Mimura, Hidekazu

    2015-12-01

    Recently, coherent x-ray sources have promoted developments of optical systems for focusing, imaging, and interferometry. In this paper, we propose a two-stage focusing optical system with the goal of achromatically focusing pulses from an x-ray free-electron laser (XFEL), with a focal width of 10 nm. In this optical system, the x-ray beam is expanded by a grazing-incidence aspheric mirror, and it is focused by a mirror that is shaped as a solid of revolution. We describe the design procedure and discuss the theoretical focusing performance. In theory, soft-XFEL light can be focused to a 10 nm area without chromatic aberration and with high reflectivity; this creates an unprecedented power density of 10^20 W cm^-2 in the soft-x-ray range.

  9. A Sensorless Power Reserve Control Strategy for Two-Stage Grid-Connected PV Systems

    DEFF Research Database (Denmark)

    Sangwongwanich, Ariya; Yang, Yongheng; Blaabjerg, Frede

    2017-01-01

    Due to the still increasing penetration of grid-connected Photovoltaic (PV) systems, advanced active power control functionalities have been introduced in grid regulations. A power reserve control, where the active power from the PV panels is reserved during operation, is required for grid support. In this paper, a cost-effective solution to realize the power reserve for two-stage grid-connected PV systems is proposed. The proposed solution routinely employs a Maximum Power Point Tracking (MPPT) control to estimate the available PV power and a Constant Power Generation (CPG) control to achieve the power reserve. In this method, the solar irradiance and temperature measurements that have been used in conventional power reserve control schemes to estimate the available PV power are not required, thereby being a sensorless approach with reduced cost. Experimental tests have been...
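
    The core of the reserve logic is that the Constant Power Generation setpoint equals the MPPT-estimated available power minus the required reserve; a minimal sketch (function and parameter names are ours):

        # Curtail the operating point to the MPPT-estimated available power
        # minus the reserve requested by the grid operator.
        def cpg_setpoint(p_available_w, p_reserve_w):
            """Return the Constant Power Generation setpoint in watts."""
            return max(p_available_w - p_reserve_w, 0.0)

        print(cpg_setpoint(p_available_w=2800.0, p_reserve_w=500.0))  # 2300.0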

  10. Sensorless Reserved Power Control Strategy for Two-Stage Grid-Connected Photovoltaic Systems

    DEFF Research Database (Denmark)

    Sangwongwanich, Ariya; Yang, Yongheng; Blaabjerg, Frede

    2016-01-01

    Due to the still increasing penetration level of grid-connected Photovoltaic (PV) systems, advanced active power control functionalities have been introduced in grid regulations. A reserved power control, where the active power from the PV panels is reserved during operation, is required for grid support. In this paper, a cost-effective solution to realize the reserved power control for grid-connected PV systems is proposed. The proposed solution routinely employs a Maximum Power Point Tracking (MPPT) control to estimate the available PV power and a Constant Power Generation (CPG) control to achieve the power reserve. In this method, the irradiance measurements that have been used in conventional control schemes to estimate the available PV power are not required, thereby being a sensorless solution. Simulations and experimental tests have been performed on a 3-kW two-stage single...

  11. Prey-Predator Model with Two-Stage Infection in Prey: Concerning Pest Control

    Directory of Open Access Journals (Sweden)

    Swapan Kumar Nandi

    2015-01-01

    A prey-predator model system is developed in which disease is introduced into the prey population. The prey population is taken as the pest, and the predators consume the selected pest. Moreover, we assume that the prey species is infected with a viral disease, forming susceptible and two-stage infected classes, and that the early stage of infected prey is more vulnerable to predation by the predator. It is also assumed that the later stage of infected pests is not eaten by the predator. Different equilibria of the system are investigated, and their stability analysis and the Hopf bifurcation of the system around the interior equilibria are discussed. A modified model is constructed by considering an alternative food source for the predator population, and the dynamical behavior of the modified model is investigated. We demonstrate the analytical results by numerical analysis with a simulated set of parameter values.

  12. Lossless and near-lossless digital angiography coding using a two-stage motion compensation approach.

    Science.gov (United States)

    dos Santos, Rafael A P; Scharcanski, Jacob

    2008-07-01

    This paper presents a two-stage motion compensation coding scheme for image sequences in hemodynamics. The first stage of the proposed method implements motion compensation, and the second stage corrects local pixel intensity distortions with a context-adaptive linear predictor. The proposed method is robust to the local intensity distortions and the noise that often degrade these image sequences, providing lossless and near-lossless quality. Our experiments with lossless compression of 12 bits/pixel studies indicate that, potentially, our approach can perform 3.8%, 2% and 1.6% better than JPEG-2000, JPEG-LS and the method proposed by Scharcanski [1], respectively. The performance tends to improve for near-lossless compression. This work therefore presents experimental evidence that, for coding image sequences in hemodynamics, an adequate motion compensation scheme can be more efficient than the still-image coding methods often used nowadays.

  13. Quasi-estimation as a Basis for Two-stage Solving of Regression Problem

    CERN Document Server

    Gordinsky, Anatoly

    2010-01-01

    An effective two-stage method for estimating the parameters of a linear regression is considered. For this purpose we introduce a certain quasi-estimator that, in contrast to the usual estimator, produces two alternative estimates. It is proved that, in comparison to the least squares estimate, one alternative has a significantly smaller quadratic risk while retaining unbiasedness and consistency. These properties hold true for one-dimensional, multi-dimensional, orthogonal and non-orthogonal problems. Moreover, a Monte-Carlo simulation confirms the high robustness of the quasi-estimator to violations of the initial assumptions. Therefore, at the first stage of the estimation we calculate the two alternative estimates mentioned above. At the second stage we choose the better estimate of the two, using additional information, including but not exclusively information of an a priori nature. In the case of two alternatives the volume of such information should be minimal. Furthermore, the additional ...

  14. A characteristics study on the performance of a two-stage light gas gun

    Institute of Scientific and Technical Information of China (English)

    吴应湘; 郑之初; P.Kupschus

    1995-01-01

    In order to obtain an overall and systematic understanding of the performance of a two-stage light gas gun (TLGG), a numerical code to simulate the processes occurring in a gun shot is developed, based on the quasi-one-dimensional unsteady equations of motion with real gas effects, friction and heat transfer taken into account, in a characteristic formulation for both the driver and propellant gas. Comparisons of projectile velocities and projectile pressures along the barrel with experimental results from JET (Joint European Torus) and with computational data obtained by the Lagrangian method indicate that this code can provide results with good accuracy over a wide range of gun geometries and loading conditions.

  15. A Two-Stage Approach for Medical Supplies Intermodal Transportation in Large-Scale Disaster Responses

    Directory of Open Access Journals (Sweden)

    Junhu Ruan

    2014-10-01

    We present a two-stage approach for the “helicopters and vehicles” intermodal transportation of medical supplies in large-scale disaster responses. In the first stage, a fuzzy-based method and its heuristic algorithm are developed to select the locations of temporary distribution centers (TDCs) and assign medical aid points (MAPs) to each TDC. In the second stage, an integer-programming model is developed to determine the delivery routes. Numerical experiments verified the effectiveness of the approach and yielded several findings: (i) more TDCs often increase the efficiency and utility of medical supplies; (ii) it is not necessarily true that vehicles should load more and more medical supplies in emergency responses; (iii) the more contrasting the traveling speeds of helicopters and vehicles are, the more advantageous the intermodal transportation is.

  16. A Two-Stage LGSM for Three-Point BVPs of Second-Order ODEs

    Directory of Open Access Journals (Sweden)

    Chein-Shan Liu

    2008-08-01

    The study in this paper is a numerical integration of second-order three-point boundary value problems under two imposed nonlocal boundary conditions at t=t0, t=ξ, and t=t1 in a general setting, where t0<ξ<t1. We construct a two-stage Lie-group shooting method for finding the unknown initial conditions, which are obtained through an iterative solution of derived algebraic equations in terms of a weighting factor r∈(0,1). The best r is selected by matching the target with a minimal discrepancy. Numerical examples are examined to confirm that the new approach has high efficiency and accuracy with a fast speed of convergence. Even for multiple solutions, the present method is effective in finding them.
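
    The selection of the best weighting factor r "by matching the target with a minimal discrepancy" amounts to a one-dimensional search; in the sketch below, solve_with_r is a hypothetical stand-in for one run of the Lie-group shooting solve.

        # Scan r over (0,1), run the shooting solve for each candidate, and
        # keep the r whose solution best matches the target condition.
        import numpy as np

        def best_weighting_factor(solve_with_r, target, grid=None):
            """solve_with_r(r): value of the integrated solution at the matching
            point (hypothetical stand-in for the Lie-group shooting solve)."""
            grid = np.linspace(0.01, 0.99, 99) if grid is None else grid
            discrepancies = [abs(solve_with_r(r) - target) for r in grid]
            return grid[int(np.argmin(discrepancies))]

        # Toy usage: the discrepancy of this fake solve is minimal at r = 0.37.
        r_best = best_weighting_factor(lambda r: (r - 0.37) ** 2, target=0.0)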

  17. A Two-Stage LGSM for Three-Point BVPs of Second-Order ODEs

    Directory of Open Access Journals (Sweden)

    Liu Chein-Shan

    2008-01-01

    The study in this paper is a numerical integration of second-order three-point boundary value problems under two imposed nonlocal boundary conditions at t=t0, t=ξ, and t=t1 in a general setting, where t0<ξ<t1. We construct a two-stage Lie-group shooting method for finding unknown initial conditions, which are obtained through an iterative solution of derived algebraic equations in terms of a weighting factor r∈(0,1). The best r is selected by matching the target with a minimal discrepancy. Numerical examples are examined to confirm that the new approach has high efficiency and accuracy with a fast speed of convergence. Even for multiple solutions, the present method is also effective in finding them.

  18. Shaft Position Influence on Technical Characteristics of Universal Two-Stages Helical Speed Reducers

    Directory of Open Access Journals (Sweden)

    Milan Rackov

    2005-10-01

    Purchasers of speed reducers choose those reducers that most closely satisfy their demands at the lowest cost. The amount of material used, i.e., the mass and dimensions of the gear unit, influences the gear unit's price. Mass and dimensions, besides output torque, gear unit ratio and efficiency, are the most important technical characteristics of gear units and their quality. Centre distance and the position of the shafts have a significant influence on output torque, gear unit ratio and mass through the overall dimensions of the gear unit housing; these characteristics are therefore mutually dependent. This paper analyzes the influence of centre distance and shaft position on the output torque and ratio of universal two-stage gear units.

  19. A Two-stage Tuning Method of Servo Parameters for Feed Drives in Machine Tools

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    Based on the evaluation of dynamic performance for feed drives in machine tools, this paper presents a two-stage tuning method of servo parameters. In the first stage, the evaluation of dynamic performance, parameter tuning and optimization on a mechatronic integrated system simulation platform of feed drives are performed. As a result, a servo parameter combination is acquired. In the second stage, the servo parameter combination from the first stage is set and tuned further in a real machine tool whose dynamic performance is measured and evaluated using the cross grid encoder developed by Heidenhain GmbH. A case study shows that this method simplifies the test process effectively and results in a good dynamic performance in a real machine tool.

  20. Treatment of Domestic Sewage by Two-Stage-Bio-Contact Oxidation Process

    Institute of Scientific and Technical Information of China (English)

    LI Xiang-dong; FENG Qi-yan; LIU Zhong-wei; XIAO Xin; LIN Guo-hua

    2005-01-01

    The effects of hydraulic retention time (HRT) and gas volume on the efficiency of wastewater treatment are discussed based on a simulation experiment in which domestic sewage was treated by the two-stage bio-contact oxidation process. The results show that the average CODcr, BOD5, suspended solids (SS) and ammonia-nitrogen removal rates are 94.5%, 93.2%, 91.7% and 46.9%, respectively, under the conditions of a total air/water ratio of 5:1, an air/water ratio of 3:1 for oxidation tank 1 and 2:1 for oxidation tank 2, and a hydraulic retention time of 1 h for each stage. This method is suitable for the domestic sewage treatment of residential communities and small towns.

  1. Alignment and characterization of the two-stage time delay compensating XUV monochromator

    CERN Document Server

    Eckstein, Martin; Kubin, Markus; Yang, Chung-Hsin; Frassetto, Fabio; Poletto, Luca; Vrakking, Marc J J; Kornilov, Oleg

    2016-01-01

    We present the design, implementation and alignment procedure for a two-stage time delay compensating monochromator. The setup spectrally filters the radiation of a high-order harmonic generation source, providing wavelength-selected XUV pulses with a bandwidth of 300 to 600 meV in the photon energy range of 3 to 50 eV. XUV pulses as short as 12±3 fs are demonstrated. Transmission of the 400 nm (3.1 eV) light facilitates precise alignment of the monochromator. This alignment strategy, together with the stable mechanical design of the motorized beamline components, enables us to automatically scan the XUV photon energy in pump-probe experiments that require XUV beam pointing stability. The performance of the beamline is demonstrated by the generation of IR-assisted sidebands in XUV photoionization of argon atoms.

  2. Final two-stage MOAO on-sky demonstration with CANARY

    Science.gov (United States)

    Gendron, E.; Morris, T.; Basden, A.; Vidal, F.; Atkinson, D.; Bitenc, U.; Buey, T.; Chemla, F.; Cohen, M.; Dickson, C.; Dipper, N.; Feautrier, P.; Gach, J.-L.; Gratadour, D.; Henry, D.; Huet, J.-M.; Morel, C.; Morris, S.; Myers, R.; Osborn, J.; Perret, D.; Reeves, A.; Rousset, G.; Sevin, A.; Stadler, E.; Talbot, G.; Todd, S.; Younger, E.

    2016-07-01

    CANARY is an on-sky Laser Guide Star (LGS) tomographic AO demonstrator in operation at the 4.2m William Herschel Telescope (WHT) in La Palma. From the early demonstration of open-loop tomography on a single deformable mirror using natural guide stars in 2010, CANARY has been progressively upgraded each year to reach its final goal in July 2015. It is now a two-stage system that mimics the future E-ELT: a GLAO-driven woofer based on 4 laser guide stars delivers a ground-layer-compensated field to a figure-sensor-locked tweeter DM, which achieves the final on-axis tomographic compensation. We present the overall system, the control strategy and an overview of its on-sky performance.

  3. Performance of a highly loaded two stage axial-flow fan

    Science.gov (United States)

    Ruggeri, R. S.; Benser, W. A.

    1974-01-01

    A two-stage axial-flow fan with a tip speed of 1450 ft/sec (442 m/sec) and an overall pressure ratio of 2.8 was designed, built, and tested. At design speed and pressure ratio, the measured flow matched the design value of 184.2 lbm/sec (83.55 kg/sec). The adiabatic efficiency at the design operating point was 85.7 percent. The stall margin at design speed was 10 percent. A first-bending-mode flutter of the second-stage rotor blades was encountered near stall at speeds between 77 and 93 percent of design, and also at high pressure ratios at speeds above 105 percent of design. A 5 deg closed reset of the first-stage stator eliminated second-stage flutter for all but a narrow speed range near 90 percent of design.

  4. A Two-stage Kalman Filter for Sensorless Direct Torque Controlled PM Synchronous Motor Drive

    Directory of Open Access Journals (Sweden)

    Boyu Yi

    2013-01-01

    This paper presents an optimal two-stage extended Kalman filter (OTSEKF) for closed-loop flux, torque, and speed estimation of a permanent magnet synchronous motor (PMSM) to achieve sensorless DTC-SVPWM operation of the drive system. The novel observer is obtained by using the same transformation as in a linear Kalman observer, proposed by C.-S. Hsieh and F.-C. Chen in 1999. The OTSEKF is an effective implementation of the extended Kalman filter (EKF) and provides recursive optimum state estimation for PMSMs using terminal signals that may be polluted by noise. Compared to a conventional EKF, the OTSEKF reduces the number of arithmetic operations. Simulation and experimental results verify the effectiveness of the proposed OTSEKF observer for DTC of PMSMs.
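
    For orientation, the recursion that the OTSEKF reorganizes is the standard Kalman predict/update cycle sketched below in a linear, toy-matrix form; the OTSEKF itself splits this recursion into two coupled stages to reduce the arithmetic cost.

        # Standard Kalman predict/update cycle (linear form, toy matrices).
        import numpy as np

        def kalman_step(x, P, F, Q, H, R, z):
            x = F @ x                       # predict state
            P = F @ P @ F.T + Q             # predict covariance
            S = H @ P @ H.T + R             # innovation covariance
            K = P @ H.T @ np.linalg.inv(S)  # Kalman gain
            x = x + K @ (z - H @ x)         # update state with measurement z
            P = (np.eye(len(x)) - K @ H) @ P
            return x, P

        x, P = np.zeros(2), np.eye(2)
        F = np.array([[1.0, 0.1], [0.0, 1.0]])
        Q, R = 0.01 * np.eye(2), np.array([[0.1]])
        H = np.array([[1.0, 0.0]])
        x, P = kalman_step(x, P, F, Q, H, R, z=np.array([0.5]))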

  5. Synchronous rapid start-up of the methanation and anammox processes in two-stage ASBRs

    Science.gov (United States)

    Duan, Y.; Li, W. R.; Zhao, Y.

    2017-01-01

    The “methanation + anaerobic ammonia oxidation autotrophic denitrification” method was adopted using anaerobic sequencing batch reactors (ASBRs) and achieved satisfactory synchronous removal of chemical oxygen demand (COD) and ammonia-nitrogen (NH4+-N) from wastewater after 75 days of operation: 90% of COD was removed at a COD load of 1.2 kg/(m3•d) and 90% of TN was removed at a TN load of 0.14 kg/(m3•d). The anammox reaction ratio was estimated to be 1:1.32:0.26. The results showed that synchronous rapid start-up of the methanation and anaerobic ammonia oxidation processes in two-stage ASBRs is feasible.

  6. a Remote Liquid Target Loading System for a Two-Stage Gas Gun

    Science.gov (United States)

    Gibson, L. L.; Bartram, B.; Dattelbaum, D. M.; Sheffield, S. A.; Stahl, D. B.

    2009-12-01

    A Remote Liquid Loading System (RLLS) was designed and tested for loading high-hazard liquid materials into instrumented target cells for gas-gun-driven plate impact experiments. These high-hazard liquids tend to react with confining materials in a short period of time, degrading target assemblies and potentially building up pressure through the evolution of gas in the reactions. The ability to load a gas gun target immediately prior to gun firing therefore provides the most stable and reliable target fielding approach. We present the design and evaluation of an RLLS built for the LANL two-stage gas gun. The system has been used successfully to interrogate the shock initiation behavior of ~98 wt% hydrogen peroxide (H2O2) solutions, using embedded electromagnetic gauges for in-situ measurement of shock wave profiles.

  7. Two-Stage Surgery for a Large Cervical Dumbbell Tumour in Neurofibromatosis 1: A Case Report

    Directory of Open Access Journals (Sweden)

    Mohd Ariff S

    2011-11-01

    Spinal neurofibromas occur sporadically and typically in association with neurofibromatosis 1. Patients afflicted with neurofibromatosis 1 usually present with involvement of several nerve roots. This report describes the case of a 14-year-old child with a large intraspinal but extradural dumbbell neurofibroma with paraspinal extension in the cervical region, extending from the C2 to C4 vertebrae. The lesions were readily detected by MR imaging and were successfully resected in a two-stage surgery, with an interval of one month between the first and second operations. We provide a brief review of the literature regarding the various surgical approaches, emphasising the utility of the anterior and posterior approaches.

  8. Effect of a two-stage nursing assessment and intervention - a randomized intervention study

    DEFF Research Database (Denmark)

    Rosted, Elizabeth Emilie; Poulsen, Ingrid; Hendriksen, Carsten

    Background: Geriatric patients recently discharged from hospital are at risk of unplanned readmissions and admission to nursing home. When discharged directly from the Emergency Department (ED) the risk increases, as time pressure often requires focus on the presenting problem, although 80% of geriatric patients have complex and often unresolved caring needs. The objective was to examine the effect of a two-stage nursing assessment and intervention to address the patients' uncompensated problems, given just after discharge from ED and one and six months after. Method: We conducted a prospective ... to the geriatric outpatient clinic, community health centre, primary physician or arrangements with next-of-kin. Findings: Primary endpoints will be presented as unplanned readmission to ED; admission to nursing home; and death. Secondary endpoints will be presented as physical function; depressive symptoms ...

  9. Colorimetric characterization of liquid crystal display using an improved two-stage model

    Institute of Scientific and Technical Information of China (English)

    Yong Wang; Haisong Xu

    2006-01-01

    An improved two-stage model of colorimetric characterization for liquid crystal displays (LCDs) is proposed. The model includes an S-shaped nonlinear function with four coefficients for each channel to fit the tone reproduction curve (TRC), and a linear transfer matrix with black-level correction. To compare it with the simple model (SM), gain-offset-gamma (GOG), S-curve and three one-dimensional look-up tables (3-1D LUTs) models, an identical LCD was characterized and the color differences were calculated and summarized using a set of 7 × 7 × 7 digital-to-analog converter (DAC) triplets as test data. The experimental results showed that the proposed model outperformed the GOG and SM models, and came close to the S-curve model and the 3-1D LUTs method.
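
    The two-stage structure of such display models (a per-channel tone curve followed by a linear mixing matrix with black-level correction) can be sketched as below; a plain gamma curve stands in for the four-coefficient S-shaped function, so this is a generic sketch rather than the proposed model.

        # Stage 1: a tone curve maps DAC values to linear RGB (gamma here,
        # S-shaped in the paper). Stage 2: a 3x3 matrix plus a black-level
        # offset maps linear RGB to XYZ.
        import numpy as np

        def display_to_xyz(dac_rgb, gamma, matrix, black_xyz):
            rgb_linear = (np.asarray(dac_rgb) / 255.0) ** gamma
            return matrix @ rgb_linear + black_xyz

        M = np.array([[41.2, 35.8, 18.0],
                      [21.3, 71.5, 7.2],
                      [1.9, 11.9, 95.0]])            # toy primary matrix
        xyz = display_to_xyz([128, 128, 128], gamma=2.2,
                             matrix=M, black_xyz=np.array([0.2, 0.2, 0.3]))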

  10. Fast Image Segmentation Based on a Two-Stage Geometrical Active Contour

    Institute of Scientific and Technical Information of China (English)

    肖昌炎; 张素; 陈亚珠

    2005-01-01

    A fast two-stage geometric active contour algorithm for image segmentation is developed. First, the Eikonal equation problem is quickly solved using an improved fast sweeping method, and a criterion of local minimum of area gradient (LMAG) is presented to extract the optimal arrival time. Then, the final time function is passed as an initial state to an area- and length-minimizing flow model, which adjusts the interface more accurately and prevents it from leaking. For objects with complete and salient edges, using the first stage alone is able to obtain an ideal result, with a time complexity of O(M), where M is the number of points in each coordinate direction. Both stages are needed for convoluted shapes, but the computation cost is still drastically reduced. The efficiency of the algorithm is verified in segmentation experiments on real images with different features.

  11. Parametric theoretical study of a two-stage solar organic Rankine cycle for RO desalination

    Energy Technology Data Exchange (ETDEWEB)

    Kosmadakis, G.; Manolakos, D.; Papadakis, G. [Department of Natural Resources and Agricultural Engineering, Agricultural University of Athens, 75 Iera Odos Street, 11855 Athens (Greece)

    2010-05-15

    The present work concerns the parametric study of an autonomous, two-stage solar organic Rankine cycle for RO desalination. The main goal of the simulation is to estimate the efficiency, as well as to calculate the annual mechanical energy available for desalination in the considered cases, in order to evaluate the influence of various parameters on the performance of the system. The parametric study concerns the variation of different parameters without actually changing the baseline case. The effects of the collectors' slope and of the total number of evacuated tube collectors used have been extensively examined. The total cost is also taken into consideration and is calculated for the different cases examined, along with the specific fresh water cost (EUR/m3). (author)

  12. Removal of trichloroethylene (TCE) contaminated soil using a two-stage anaerobic-aerobic composting technique.

    Science.gov (United States)

    Ponza, Supat; Parkpian, Preeda; Polprasert, Chongrak; Shrestha, Rajendra P; Jugsujinda, Aroon

    2010-01-01

    The effect of organic carbon addition on the remediation of trichloroethylene (TCE) contaminated clay soil was investigated using a two-stage anaerobic-aerobic composting system, and the TCE removal rate and the processes involved were determined. Uncontaminated clay soil was treated with composting materials (dried cow manure, rice husk and cane molasses) to represent carbon-based treatments (5%, 10% and 20% OC). All treatments were spiked with TCE at 1,000 mg TCE/kg DW and incubated under anaerobic, mesophilic conditions (35 degrees C) for 8 weeks, followed by continuous aerobic conditions for another 6 weeks. TCE dissipation, its metabolites and biogas composition were measured throughout the experimental period. The results show that TCE degradation depended upon the amount of organic carbon (OC) contained within the composting treatments/matrices. The highest TCE removal percentage (97%) and rate (75.06 micromol/kg DW/day) were obtained from the treatment with 10% OC composting matrices, compared to 87% and 27.75 micromol/kg DW/day for 20% OC, and 83% and 38.08 micromol/kg DW/day for the soil control treatment. TCE removal followed first-order reaction kinetics. The highest degradation rate constant (k1 = 0.035 day(-1)) was also obtained from the 10% OC treatment, followed by 20% OC (k1 = 0.026 day(-1)) and 5% OC, the soil control treatment (k1 = 0.023 day(-1)); the half-lives were 20, 27 and 30 days, respectively. The overall results suggest that the sequential two-stage anaerobic-aerobic composting technique has potential for remediation of TCE in heavy-textured soil, provided that an easily biodegradable source of organic carbon is present.
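
    The reported half-lives follow directly from the first-order relation t_half = ln(2) / k1; a quick check of the three fitted rate constants:

        # First-order kinetics: half-life = ln(2) / k1, reproducing the
        # reported ~20, ~27 and ~30 day half-lives.
        import math

        for label, k1 in [("10% OC", 0.035), ("20% OC", 0.026), ("5% OC / control", 0.023)]:
            print(label, round(math.log(2) / k1, 1), "days")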

  13. Two-Stage Surgical Treatment for Non-Union of a Shortened Osteoporotic Femur

    Directory of Open Access Journals (Sweden)

    Galal Zaki Said

    2013-01-01

    Introduction: We report a case of non-union with severe shortening of the femur following diaphysectomy for chronic osteomyelitis. Case Presentation: A boy aged 16 years presented with a dangling and excessively short left lower limb; he used an elbow crutch in his right hand to help him walk. He had a history of diaphysectomy for chronic osteomyelitis at the age of 9. Examination revealed a freely mobile non-union of the left femur. The femur was the seat of an 18 cm shortening and a 4 cm defect at the non-union site; the knee joint was ankylosed in extension, and the tibia and fibula were 10 cm short. Considering the extensive shortening of the femur and tibia in addition to osteoporosis, he was treated in two stages. In stage I, the femoral non-union was treated by open reduction, internal fixation and iliac bone grafting. The patient was then allowed to walk with full weight bearing in an extension brace for 7 months. In stage II, equalization of the leg length discrepancy (LLD) was achieved by simultaneous distraction of the femur and tibia using unilateral frames. At the 6-month follow-up, he was fully weight bearing without any walking aid, with a heel lift to compensate for the remaining 1.5 cm of shortening. Three years later he reported that he was satisfied with the result of treatment and was leading a normal life as a university student. Conclusions: The two-stage treatment succeeded in restoring about 20 cm of femoral shortening in severely osteoporotic bone. It also succeeded in reducing the time spent in the external fixator.

  14. Design and Characterization of two stage High-Speed CMOS Operational Amplifier

    Directory of Open Access Journals (Sweden)

    Rahul Chaudhari

    2014-03-01

    The method described in this paper is to design a two-stage CMOS operational amplifier and analyze the effect of various aspect ratios on the characteristics of this op-amp, which operates at a 1.8 V power supply using tsmc 0.18 μm CMOS technology. Trade-off curves are computed between characteristics such as gain, phase margin (PM), gain-bandwidth product (GBW), ICMR, CMRR and slew rate. The op-amp is designed to exhibit a unity-gain frequency of 14 MHz and exhibits a gain of 59.98 dB with a 61.235° phase margin. The design was carried out in Mentor Graphics tools, and simulation results were verified using ModelSim Eldo and Design Architect IC. The task of CMOS operational amplifier design optimization is investigated in this work, focusing on the optimization of various aspect ratios and the resulting parameters. When this task is analyzed as a search problem, it translates into a multi-objective optimization application in which various op-amp specifications have to be taken into account, i.e., gain, GBW, phase margin and others. The results are compared with the standard characteristics of the op-amp with the help of graphs and tables, and simulation results agree with theoretical predictions. Simulations confirm that the settling time can be further improved by increasing the GBW; a settling time of 19 ns is achieved. It is demonstrated that as W/L increases, GBW increases and the settling time is reduced.

  15. Anti-kindling Induced by Two-Stage Coordinated Reset Stimulation with Weak Onset Intensity

    Science.gov (United States)

    Zeitler, Magteld; Tass, Peter A.

    2016-01-01

    Abnormal neuronal synchrony plays an important role in a number of brain diseases. To specifically counteract abnormal neuronal synchrony by desynchronization, Coordinated Reset (CR) stimulation, a spatiotemporally patterned stimulation technique, was designed with computational means. In neuronal networks with spike timing–dependent plasticity CR stimulation causes a decrease of synaptic weights and finally anti-kindling, i.e., unlearning of abnormally strong synaptic connectivity and abnormal neuronal synchrony. Long-lasting desynchronizing aftereffects of CR stimulation have been verified in pre-clinical and clinical proof of concept studies. In general, for different neuromodulation approaches, both invasive and non-invasive, it is desirable to enable effective stimulation at reduced stimulation intensities, thereby avoiding side effects. For the first time, we here present a two-stage CR stimulation protocol, where two qualitatively different types of CR stimulation are delivered one after another, and the first stage comes at a particularly weak stimulation intensity. Numerical simulations show that a two-stage CR stimulation can induce the same degree of anti-kindling as a single-stage CR stimulation with intermediate stimulation intensity. This stimulation approach might be clinically beneficial in patients suffering from brain diseases characterized by abnormal neuronal synchrony where a first treatment stage should be performed at particularly weak stimulation intensities in order to avoid side effects. This might, e.g., be relevant in the context of acoustic CR stimulation in tinnitus patients with hyperacusis or in the case of electrical deep brain CR stimulation with sub-optimally positioned leads or side effects caused by stimulation of the target itself. We discuss how to apply our method in first in man and proof of concept studies. PMID:27242500

  16. Focused ultrasound simultaneous irradiation/MRI imaging, and two-stage general kinetic model.

    Directory of Open Access Journals (Sweden)

    Sheng-Yao Huang

    Many studies have investigated how to use focused ultrasound (FUS) to temporarily disrupt the blood-brain barrier (BBB) in order to facilitate the delivery of medication into lesion sites in the brain. In this study, through the setup of a real-time system, FUS irradiation and injections of ultrasound contrast agent (UCA) and Gadodiamide (Gd), an MRI contrast agent, can be conducted simultaneously during MRI scanning. Using this real-time system, we investigated in detail how the general kinetic model (GKM) is used to estimate Gd penetration in the FUS-irradiated area of a rat's brain resulting from UCA concentration changes after a single FUS irradiation. A two-stage GKM was proposed to estimate Gd penetration in the FUS-irradiated area under experimental conditions with repeated FUS irradiation combined with different UCA concentrations. The results showed that the focal increase in the transfer rate constant Ktrans caused by BBB disruption was dependent on the dose of UCA. Moreover, the amount of in vivo penetration of Evans blue in the FUS-irradiated area under various FUS irradiation conditions was assessed, showing a positive correlation with the transfer rate constants. Compared to the GKM method, the two-stage GKM is more suitable for estimating the transfer rate constants of brains treated with repeated FUS irradiation. This study demonstrated that the entire process of BBB disruption by FUS can be quantitatively monitored by real-time dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI).
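
    The general kinetic model behind Ktrans estimation is commonly written as the plasma input convolved with Ktrans * exp(-kep * t) (a Tofts-type form); the sketch below uses that standard form with an illustrative input function and may differ in detail from the authors' two-stage variant.

        # Tofts-type general kinetic model: tissue concentration is the plasma
        # input convolved with Ktrans * exp(-kep * t), discretized with a
        # rectangle rule. Cp and parameter values are illustrative.
        import numpy as np

        def gkm_tissue_concentration(t, cp, ktrans, kep):
            dt = t[1] - t[0]
            ct = np.zeros_like(cp)
            for i, ti in enumerate(t):
                kernel = np.exp(-kep * (ti - t[: i + 1]))
                ct[i] = ktrans * np.sum(cp[: i + 1] * kernel) * dt
            return ct

        t = np.linspace(0.0, 5.0, 100)               # minutes
        cp = np.exp(-t) - np.exp(-3.0 * t)           # toy plasma input function
        ct = gkm_tissue_concentration(t, cp, ktrans=0.25, kep=0.8)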

  17. Condition monitoring of distributed systems using two-stage Bayesian inference data fusion

    Science.gov (United States)

    Jaramillo, Víctor H.; Ottewill, James R.; Dudek, Rafał; Lepiarczyk, Dariusz; Pawlik, Paweł

    2017-03-01

    In industrial practice, condition monitoring is typically applied to critical machinery. A particular piece of machinery may have its own condition monitoring system that allows the health condition of said piece of equipment to be assessed independently of any connected assets. However, industrial machines are typically complex sets of components that continuously interact with one another. In some cases, dynamics resulting from the inception and development of a fault can propagate between individual components. For example, a fault in one component may lead to an increased vibration level in both the faulty component, as well as in connected healthy components. In such cases, a condition monitoring system focusing on a specific element in a connected set of components may either incorrectly indicate a fault, or conversely, a fault might be missed or masked due to the interaction of a piece of equipment with neighboring machines. In such cases, a more holistic condition monitoring approach that can not only account for such interactions, but utilize them to provide a more complete and definitive diagnostic picture of the health of the machinery is highly desirable. In this paper, a Two-Stage Bayesian Inference approach allowing data from separate condition monitoring systems to be combined is presented. Data from distributed condition monitoring systems are combined in two stages, the first data fusion occurring at a local, or component, level, and the second fusion combining data at a global level. Data obtained from an experimental rig consisting of an electric motor, two gearboxes, and a load, operating under a range of different fault conditions is used to illustrate the efficacy of the method at pinpointing the root cause of a problem. The obtained results suggest that the approach is adept at refining the diagnostic information obtained from each of the different machine components monitored, therefore improving the reliability of the health assessment of ...
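
    A minimal sketch of the two-stage fusion over discrete health states, assuming conditionally independent local evidence; the state names, likelihoods and independence assumption are ours, not necessarily the paper's.

        # Stage 1: each local monitoring system updates a shared prior with its
        # own likelihoods. Stage 2: local posteriors are fused (dividing out
        # the shared prior once) and renormalized into a global diagnosis.
        import numpy as np

        def bayes_update(prior, likelihood):
            posterior = prior * likelihood
            return posterior / posterior.sum()

        states = ["healthy", "gear fault", "bearing fault"]
        prior = np.array([0.90, 0.05, 0.05])

        post_motor = bayes_update(prior, np.array([0.2, 0.5, 0.3]))    # stage 1
        post_gearbox = bayes_update(prior, np.array([0.1, 0.8, 0.1]))  # stage 1

        fused = post_motor * post_gearbox / prior                      # stage 2
        fused /= fused.sum()
        print(dict(zip(states, fused.round(3))))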

  18. Novel two-stage piezoelectric-based ocean wave energy harvesters for moored or unmoored buoys

    Science.gov (United States)

    Murray, R.; Rastegar, J.

    2009-03-01

    Harvesting mechanical energy from ocean wave oscillations for conversion to electrical energy has long been pursued as an alternative or self-contained power source. The attraction of harvesting energy from ocean waves stems from the sheer power of the wave motion, which can easily exceed 50 kW per meter of wave front. The principal barrier to harvesting this power is the very low and varying frequency of ocean waves, which generally ranges from 0.1 Hz to 0.5 Hz. In this paper the application of a novel class of two-stage electrical energy generators to buoyant structures is presented. The generators use the buoy's interaction with the ocean waves as a low-speed input to a primary system, which, in turn, successively excites an array of vibratory elements (secondary system) into resonance - like a musician strumming a guitar. The key advantage of the present system is that by having two decoupled systems, the low-frequency and highly varying buoy motion is converted into constant and much higher frequency mechanical vibrations. Electrical energy may then be harvested from the vibrating elements of the secondary system with high efficiency using piezoelectric elements. The operating principles of the novel two-stage technique are presented, including analytical formulations describing the transfer of energy between the two systems. Prototypical design examples are also offered, as well as an in-depth computer simulation of a prototypical heaving-based wave energy harvester which generates electrical energy from the up-and-down motion of a buoy riding on the ocean's surface.

  19. Anti-kindling induced by two-stage coordinated reset stimulation with weak onset intensity

    Directory of Open Access Journals (Sweden)

    Magteld Zeitler

    2016-05-01

    Abnormal neuronal synchrony plays an important role in a number of brain diseases. To specifically counteract abnormal neuronal synchrony by desynchronization, Coordinated Reset (CR) stimulation, a spatiotemporally patterned stimulation technique, was designed with computational means. In neuronal networks with spike timing-dependent plasticity, CR stimulation causes a decrease of synaptic weights and finally anti-kindling, i.e., unlearning of abnormally strong synaptic connectivity and abnormal neuronal synchrony. Long-lasting desynchronizing aftereffects of CR stimulation have been verified in pre-clinical and clinical proof-of-concept studies. In general, for different neuromodulation approaches, both invasive and non-invasive, it is desirable to enable effective stimulation at reduced stimulation intensities, thereby avoiding side effects. For the first time, we here present a two-stage CR stimulation protocol in which two qualitatively different types of CR stimulation are delivered one after another, the first stage at a particularly weak stimulation intensity. Numerical simulations show that a two-stage CR stimulation can induce the same degree of anti-kindling as a single-stage CR stimulation with intermediate stimulation intensity. This stimulation approach might be clinically beneficial in patients suffering from brain diseases characterized by abnormal neuronal synchrony, where a first treatment stage should be performed at particularly weak stimulation intensities in order to avoid side effects. This might, e.g., be relevant in the context of acoustic CR stimulation in tinnitus patients with hyperacusis, or in the case of electrical deep brain CR stimulation with sub-optimally positioned leads or side effects caused by stimulation of the target itself. We discuss how to apply our method in first-in-man and proof-of-concept studies.
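
    As a purely illustrative sketch of what a two-stage CR protocol looks like as a delivery schedule (not the authors' simulation code), the Python snippet below generates a shuffled site-activation sequence per CR cycle, first at a weak intensity and then at an intermediate one. Site counts, cycle counts, and intensity values are placeholders.

        import random

        def cr_schedule(n_sites, n_cycles, intensity, rng):
            # Yield (cycle, site, intensity) triples for one CR stage; within
            # each cycle every site is stimulated once, in randomized order,
            # giving the spatiotemporally patterned delivery CR relies on.
            sites = list(range(n_sites))
            for cycle in range(n_cycles):
                rng.shuffle(sites)
                for site in sites:
                    yield cycle, site, intensity

        rng = random.Random(0)
        stage1 = list(cr_schedule(n_sites=4, n_cycles=100, intensity=0.3, rng=rng))  # weak onset
        stage2 = list(cr_schedule(n_sites=4, n_cycles=100, intensity=1.0, rng=rng))  # follow-up
        print(stage1[:8])

    The two stages differ only in the intensity parameter here; in the paper the two stages are qualitatively different types of CR stimulation, which this sketch does not attempt to model.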

  20. Two-Stage Latissimus Dorsi Flap with Implant for Unilateral Breast Reconstruction: Getting the Size Right

    Directory of Open Access Journals (Sweden)

    Jiajun Feng

    2016-03-01

    Background: The aim of unilateral breast reconstruction after mastectomy is to craft a natural-looking breast with symmetry. The latissimus dorsi (LD) flap with implant is an established technique for this purpose. However, it is challenging to obtain adequate volume and satisfactory aesthetic results in a one-stage operation when considering factors such as muscle atrophy, wound dehiscence, and excessive scarring. The two-stage reconstruction addresses these difficulties by using a tissue expander to gradually enlarge the skin pocket that eventually holds an appropriately sized implant. Methods: We analyzed nine patients who underwent unilateral two-stage LD reconstruction. In the first stage, an expander was placed along with the LD flap to reconstruct the mastectomy defect, followed by gradual tissue expansion to achieve overexpansion of the skin pocket. The final implant volume was determined by measuring the residual expander volume after aspirating the excess saline. Finally, the expander was replaced with the chosen implant. Results: The average volume of tissue expansion was 460 mL. The resultant expansion allowed an implant ranging in volume from 255 to 420 mL to be placed alongside the LD muscle. Seven patients scored less than six on the relative breast retraction assessment formula for breast symmetry, indicating excellent breast symmetry; the remaining two patients scored between six and eight, indicating good symmetry. Conclusions: This approach allows the size of the eventual implant to be estimated after the skin pocket has healed completely and the LD muscle has undergone natural atrophy. Optimal reconstruction results were achieved using this approach.