WorldWideScience

Sample records for two-stage cluster sample

  1. GENERALISED MODEL BASED CONFIDENCE INTERVALS IN TWO STAGE CLUSTER SAMPLING

    Directory of Open Access Journals (Sweden)

    Christopher Ouma Onyango

    2010-09-01

    Chambers and Dorfman (2002) constructed bootstrap confidence intervals in model-based estimation for finite population totals, assuming that auxiliary values are available throughout a target population and that the auxiliary values are independent. They also assumed that the cluster sizes are known throughout the target population. We extend this work to two-stage sampling in which the cluster sizes are known only for the sampled clusters, and we therefore predict the unobserved part of the population total. Jan and Elinor (2008) have done similar work, but unlike them, we use a general model in which the auxiliary values are not necessarily independent. We demonstrate that the asymptotic properties of our proposed estimator and its coverage rates are better than those constructed under the model-assisted local polynomial regression model.
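
    The abstract centers on predicting the unobserved remainder of a population total from sampled clusters and bootstrapping the result. Below is a minimal sketch of that idea, assuming a simple ratio working model and a known population auxiliary total; the paper's general model (with dependent auxiliaries) and the Chambers-Dorfman bootstrap are more elaborate, and all names here are illustrative.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    def estimate_total(y, x, x_pop_total):
        # Ratio working model: predict the unobserved remainder as b * x.
        b = y.sum() / x.sum()
        return y.sum() + b * (x_pop_total - x.sum())

    def bootstrap_ci(y, x, x_pop_total, n_boot=2000, alpha=0.05):
        # Percentile bootstrap over the sampled clusters (first-stage units);
        # y[i], x[i] are cluster-level totals for sampled cluster i.
        n = len(y)
        totals = np.empty(n_boot)
        for i in range(n_boot):
            idx = rng.integers(0, n, size=n)   # resample clusters with replacement
            totals[i] = estimate_total(y[idx], x[idx], x_pop_total)
        return np.quantile(totals, [alpha / 2, 1 - alpha / 2])
    ```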

  2. A two-stage cluster sampling method using gridded population data, a GIS, and Google Earth™ imagery in a population-based mortality survey in Iraq

    Directory of Open Access Journals (Sweden)

    Galway LP

    2012-04-01

    Abstract Background Mortality estimates can measure and monitor the impacts of conflict on a population, guide humanitarian efforts, and help to better understand the public health impacts of conflict. Vital statistics registration and surveillance systems are rarely functional in conflict settings, posing the challenge of estimating mortality using retrospective population-based surveys. Results We present a two-stage cluster sampling method for application in population-based mortality surveys. The sampling method utilizes gridded population data and a geographic information system (GIS) to select clusters in the first sampling stage, and Google Earth™ imagery and sampling grids to select households in the second sampling stage. The sampling method was implemented in a household mortality study in Iraq in 2011. Factors affecting feasibility and methodological quality are described. Conclusion Sampling is a challenge in retrospective population-based mortality studies, and alternatives that improve on the conventional approaches are needed. The sampling strategy presented here was designed to generate a representative sample of the Iraqi population while reducing the potential for bias and considering the context-specific challenges of the study setting. This sampling strategy, or variations on it, is adaptable and should be considered and tested in other conflict settings.
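
    To make the first stage concrete, the sketch below selects grid cells with probability proportional to their estimated population, one standard way to use gridded population data exported from a GIS; the cell counts and function names are hypothetical, and the survey's actual selection protocol may differ.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    cell_ids = np.arange(10_000)                        # grid cells covering the study area
    pop = rng.gamma(shape=1.5, scale=200, size=10_000)  # estimated persons per cell

    def pps_systematic(ids, sizes, n_clusters):
        # Walk a fixed step along the cumulative population with a random start,
        # so cells are selected with probability proportional to size (PPS).
        cum = np.cumsum(sizes)
        step = cum[-1] / n_clusters
        start = rng.uniform(0, step)
        picks = start + step * np.arange(n_clusters)
        return ids[np.searchsorted(cum, picks)]

    clusters = pps_systematic(cell_ids, pop, n_clusters=100)
    # Stage two (not shown): overlay a sampling grid on imagery of each selected
    # cell and visit households at randomly chosen grid points.
    ```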

  3. Quantification of physical activity using the QAPACE Questionnaire: a two stage cluster sample design survey of children and adolescents attending urban school.

    Science.gov (United States)

    Barbosa, Nicolas; Sanchez, Carlos E; Patino, Efrain; Lozano, Benigno; Thalabard, Jean C; LE Bozec, Serge; Rieu, Michel

    2016-05-01

    Quantification of physical activity as energy expenditure is important from youth onward for the prevention of chronic non-communicable diseases in adulthood. It is necessary to quantify physical activity, expressed as daily energy expenditure (DEE), in school children and adolescents between 8-16 years, by age, gender and socioeconomic level (SEL) in Bogotá. This is a two-stage cluster survey sample drawn from a universe of 4700 schools and 760000 students from the three existing socioeconomic levels in Bogotá (low, medium and high). The random sample was 20 schools and 1840 students (904 boys and 936 girls). Anticipating dropout of participants and inconsistency in the questionnaire responses, the sample size was increased: 6 individuals of each gender for each of the nine age groups were selected, resulting in a total sample of 2160 individuals. Selected students filled in the QAPACE questionnaire under supervision. The data were analyzed by comparing means with a multivariate general linear model. The fixed factors used were gender (boys and girls), age (8 to 16 years old) and tri-strata SEL (low, medium and high); the independent variables assessed were height, weight and leisure time, expressed in hours/day; the dependent variable was daily energy expenditure, DEE (kJ.kg-1.day-1): during leisure time (DEE-LT), during school time (DEE-ST), during vacation time (DEE-VT), and total mean DEE per year (DEEm-TY). RESULTS: DEE differed by gender. In boys, LT and all DEE variables were significant with SEL, but the age-SEL interaction was significant only in DEE-VT; in girls, all variables were significant with SEL. Post hoc multiple comparisons by age, using Fisher's Least Significant Difference (LSD) test, were significant for all variables. For both genders and all SELs, girls had the higher values, except in the high SEL (5-6); boys had higher values in DEE-LT, DEE-ST and DEE-VT, except for DEEm-TY in SEL (5-6). In SEL (5-6) all DEEs for both genders were highest. For SEL

  4. Two-Stage Variable Sample-Rate Conversion System

    Science.gov (United States)

    Tkacenko, Andre

    2009-01-01

    A two-stage variable sample-rate conversion (SRC) system has been proposed as part of a digital signal-processing system in a digital communication radio receiver that utilizes a variety of data rates. The proposed system would be used as an interface between (1) an analog-to-digital converter used in the front end of the receiver to sample an intermediate-frequency signal at a fixed input rate and (2) digitally implemented tracking loops in subsequent stages that operate at various sample rates that are generally lower than the input sample rate. This two-stage system would be capable of converting from an input sample rate to a desired lower output sample rate that could be variable and not necessarily a rational fraction of the input rate.
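
    A common way to realize such a converter, sketched below as an assumption rather than the proposed system, is a fixed polyphase decimation stage followed by an arbitrary-ratio fractional interpolation stage; np.interp stands in for the polynomial (e.g., Farrow) interpolator a real design would use.

    ```python
    import numpy as np
    from scipy.signal import resample_poly

    def two_stage_src(x, f_in, f_out, coarse_down=4):
        # Stage 1: efficient polyphase decimation by a fixed integer factor.
        y = resample_poly(x, up=1, down=coarse_down)
        f_mid = f_in / coarse_down
        # Stage 2: arbitrary (even irrational) ratio by fractional interpolation.
        n_out = int(len(y) * f_out / f_mid)
        t_out = np.arange(n_out) * (f_mid / f_out)   # output times in mid-rate samples
        return np.interp(t_out, np.arange(len(y)), y)

    fs = 192_000
    x = np.cos(2 * np.pi * 1_000 * np.arange(int(0.01 * fs)) / fs)
    y = two_stage_src(x, f_in=fs, f_out=44_100.7)    # output rate not a rational fraction
    ```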

  5. TWO-STAGE CHARACTER CLASSIFICATION: A COMBINED APPROACH OF CLUSTERING AND SUPPORT VECTOR CLASSIFIERS

    NARCIS (Netherlands)

    Vuurpijl, L.; Schomaker, L.

    2000-01-01

    This paper describes a two-stage classification method for (1) classification of isolated characters and (2) verification of the classification result. Character prototypes are generated using hierarchical clustering. For those prototypes known to sometimes produce wrong classification results, a

  6. A two-stage method for microcalcification cluster segmentation in mammography by deformable models

    International Nuclear Information System (INIS)

    Arikidis, N.; Kazantzi, A.; Skiadopoulos, S.; Karahaliou, A.; Costaridou, L.; Vassiou, K.

    2015-01-01

    Purpose: Segmentation of microcalcification (MC) clusters in x-ray mammography is a difficult task for radiologists. Accurate segmentation is a prerequisite for quantitative image analysis of MC clusters and subsequent feature extraction and classification in computer-aided diagnosis schemes. Methods: In this study, a two-stage semiautomated segmentation method for MC clusters is investigated. The first stage targets accurate and time-efficient segmentation of the majority of the particles of a MC cluster, by means of a level set method. The second stage targets shape refinement of selected individual MCs, by means of an active contour model. Both methods are applied in the framework of a rich scale-space representation, provided by the wavelet transform at integer scales. Segmentation reliability of the proposed method, in terms of inter- and intraobserver agreement, was evaluated in a case sample of 80 MC clusters originating from the Digital Database for Screening Mammography, corresponding to 4 morphology types (punctate: 22, fine linear branching: 16, pleomorphic: 18, and amorphous: 24) of MC clusters, assessing radiologists' segmentations quantitatively by two distance metrics (Hausdorff distance, HDIST_cluster; average of minimum distance, AMINDIST_cluster) and the area overlap measure (AOM_cluster). The effect of the proposed segmentation method on MC cluster characterization accuracy was evaluated in a case sample of 162 pleomorphic MC clusters (72 malignant and 90 benign). Ten MC cluster features, targeted to capture morphologic properties of individual MCs in a cluster (area, major length, perimeter, compactness, and spread), were extracted, and a correlation-based feature selection method yielded a feature subset to feed into a support vector machine classifier. Classification performance of the MC cluster features was estimated by means of the area under the receiver operating characteristic curve (Az ± standard error) utilizing tenfold cross

  7. Study of shallow junction formation by boron-containing cluster ion implantation of silicon and two-stage annealing

    Science.gov (United States)

    Lu, Xin-Ming

    Shallow junction formation by low-energy ion implantation and rapid thermal annealing faces a major challenge for ULSI (ultra-large-scale integration) as line widths decrease into the sub-micrometer region. The issues include low beam current and the channeling effect in low-energy ion implantation, and TED (transient enhanced diffusion) during annealing after ion implantation. In this work, boron-containing small cluster ions, such as GeB, SiB and SiB2, were generated using a SNICS (source of negative ions by cesium sputtering) ion source and implanted into Si substrates to form shallow junctions. The use of boron-containing cluster ions effectively reduces the boron energy while keeping the energy of the cluster ion beam at a high level. At the same time, it reduces the channeling effect due to amorphization by co-implanted heavy atoms like Ge and Si. Cluster ions have been used to produce 0.65-2 keV boron for low-energy ion implantation. Two-stage annealing, a combination of low-temperature (550°C) preannealing and high-temperature (1000°C) annealing, was carried out to anneal the Si samples implanted with GeB and SiBn clusters. The key concept of two-stage annealing, that is, the separation of crystal regrowth and point-defect removal with dopant activation from dopant diffusion, is discussed in detail. The advantages of two-stage annealing include better lattice structure, better dopant activation and retarded boron diffusion. The junction depth of the two-stage annealed GeB sample was only half that of the one-step annealed sample, indicating that TED was suppressed by two-stage annealing. Junction depths as small as 30 nm have been achieved by two-stage annealing, at 1000°C for 1 second, of samples implanted with 5 x 10-4/cm2 of 5 keV GeB. The samples were evaluated by SIMS (secondary ion mass spectrometry) profiling, TEM (transmission electron microscopy) and RBS (Rutherford backscattering spectrometry)/channeling. Cluster ion implantation

  8. Sample size reassessment for a two-stage design controlling the false discovery rate.

    Science.gov (United States)

    Zehetmayer, Sonja; Graf, Alexandra C; Posch, Martin

    2015-11-01

    Sample size calculations for gene expression microarray and NGS RNA-Seq experiments are challenging because the overall power depends on unknown quantities such as the proportion of true null hypotheses and the distribution of the effect sizes under the alternative. We propose a two-stage design with an adaptive interim analysis where these quantities are estimated from the interim data. The second-stage sample size is chosen based on these estimates to achieve a specified overall power. The proposed procedure controls the power in all considered scenarios except for very low first-stage sample sizes. The false discovery rate (FDR) is controlled despite the data-dependent choice of sample size. The two-stage design can be a useful tool for determining the sample size of high-dimensional studies when, in the planning phase, there is high uncertainty regarding the expected effect sizes and variability.
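
    The following is a deliberately simplified caricature of the interim step, assuming a Storey-type estimate of the true-null proportion, a two-sample z approximation for power, and an ad hoc FDR-adjusted per-test threshold; the paper's procedure accounts for the effect-size distribution and the actual BH rejection threshold more carefully.

    ```python
    import numpy as np
    from scipy import stats

    def estimate_pi0(pvals, lam=0.5):
        # Storey-type estimate of the proportion of true null hypotheses.
        return min(1.0, (pvals > lam).mean() / (1 - lam))

    def stage2_n(pvals_stage1, effect_guess, n1, q=0.05, power_target=0.8):
        # Crude heuristic: with FDR level q and pi0 from stage 1, treat
        # q * (1 - pi0) as a per-test alpha (an assumption for illustration).
        pi0 = estimate_pi0(np.asarray(pvals_stage1))
        alpha_eff = max(q * (1 - pi0), 1e-6)
        for n2 in range(0, 5000, 10):      # extra subjects per group in stage 2
            se = np.sqrt(2.0 / (n1 + n2))  # two-sample z approximation
            power = stats.norm.sf(stats.norm.isf(alpha_eff) - effect_guess / se)
            if power >= power_target:
                return n2
        return 5000
    ```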

  9. Non-ideal magnetohydrodynamic simulations of the two-stage fragmentation model for cluster formation

    International Nuclear Information System (INIS)

    Bailey, Nicole D.; Basu, Shantanu

    2014-01-01

    We model molecular cloud fragmentation with thin-disk, non-ideal magnetohydrodynamic simulations that include ambipolar diffusion and partial ionization that transitions from primarily ultraviolet-dominated to cosmic-ray-dominated regimes. These simulations are used to determine the conditions required for star clusters to form through a two-stage fragmentation scenario. Recent linear analyses have shown that the fragmentation length scales and timescales can undergo a dramatic drop across the column density boundary that separates the ultraviolet- and cosmic-ray-dominated ionization regimes. As found in earlier studies, the absence of an ionization drop and regular perturbations leads to a single-stage fragmentation on pc scales in transcritical clouds, so that the nonlinear evolution yields the same fragment sizes as predicted by linear theory. However, we find that a combination of initial transcritical mass-to-flux ratio, evolution through a column density regime in which the ionization drop takes place, and regular small perturbations to the mass-to-flux ratio is sufficient to cause a second stage of fragmentation during the nonlinear evolution. Cores of size ∼0.1 pc are formed within an initial fragment of ∼pc size. Regular perturbations to the mass-to-flux ratio also accelerate the onset of runaway collapse.

  10. A Two-Stage Method to Determine Optimal Product Sampling considering Dynamic Potential Market

    Science.gov (United States)

    Hu, Zhineng; Lu, Wei; Han, Bing

    2015-01-01

    This paper develops an optimization model for the diffusion effects of free samples under dynamic changes in the potential market, based on the characteristics of an independent product, and presents a two-stage method to determine the sampling level. The impact analysis of the key factors on the sampling level shows that an increase in the external coefficient or internal coefficient has a negative influence on the sampling level, and that the rate of change of the potential market has no significant influence on the sampling level, whereas repeat purchasing has a positive one. Using logistic analysis and regression analysis, the global sensitivity analysis gives a complete analysis of the interaction of all parameters, which provides a two-stage method to estimate the impact of the relevant parameters when the parameters are inaccurate and to construct a 95% confidence interval for the predicted sampling level. Finally, the paper provides the operational steps to improve the accuracy of the parameter estimation and an innovative way to estimate the sampling level. PMID:25821847
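
    The diffusion dynamics referred to are Bass-type, with an external (innovation) coefficient and an internal (imitation) coefficient. The sketch below, with an assumed rule that free samples simply seed the adopter pool at launch, shows the mechanics only; it is not the paper's optimization model.

    ```python
    def bass_with_samples(p, q, market, sampling_level, periods=40):
        # Free samples seed the adopter pool at launch (illustrative assumption);
        # afterwards adoption follows discrete-time Bass dynamics with external
        # influence p and internal (word-of-mouth) influence q.
        adopters = sampling_level * market
        path = [adopters]
        for _ in range(periods - 1):
            new = (p + q * adopters / market) * (market - adopters)
            adopters += new
            path.append(adopters)
        return path

    path = bass_with_samples(p=0.03, q=0.38, market=1_000_000, sampling_level=0.02)
    ```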

  11. Economic Design of Acceptance Sampling Plans in a Two-Stage Supply Chain

    Directory of Open Access Journals (Sweden)

    Lie-Fern Hsu

    2012-01-01

    Supply chain management, which is concerned with material and information flows between facilities and the final customers, has been considered the most popular operations strategy for improving organizational competitiveness. With the advanced development of computer technology, it is getting easier to derive an acceptance sampling plan satisfying both the producer's and consumer's quality and risk requirements. However, all the available QC tables and computer software determine the sampling plan on a noneconomic basis. In this paper, we design an economic model to determine the optimal sampling plan in a two-stage supply chain that minimizes the producer's and the consumer's total quality cost while satisfying both the producer's and consumer's quality and risk requirements. Numerical examples show that the optimal sampling plan is quite sensitive to the producer's product quality. The product's inspection, internal failure, and postsale failure costs also have an effect on the optimal sampling plan.
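
    In the spirit of that design, the sketch below enumerates single-sampling plans (n, c), keeps those meeting the producer's and consumer's risk constraints on the binomial OC curve, and picks the plan minimizing a stylized expected quality cost; the cost structure and coefficients are hypothetical stand-ins for the paper's model.

    ```python
    from scipy.stats import binom

    def pick_plan(aql=0.01, ltpd=0.05, alpha=0.05, beta=0.10, p=0.02,
                  c_inspect=1.0, c_internal=400.0, c_postsale=50.0, n_max=300):
        best = None
        for n in range(1, n_max + 1):
            for c in range(0, n + 1):
                if binom.cdf(c, n, aql) < 1 - alpha:   # producer's risk violated
                    continue
                if binom.cdf(c, n, ltpd) > beta:       # consumer's risk violated
                    continue
                p_accept = binom.cdf(c, n, p)          # acceptance at process quality p
                cost = (n * c_inspect                  # inspection cost
                        + (1 - p_accept) * c_internal  # cost of rejecting the lot
                        + p_accept * p * c_postsale)   # defectives reaching customers
                if best is None or cost < best[0]:
                    best = (cost, n, c)
        return best   # (expected cost, n, c), or None if no feasible plan
    ```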

  12. A stratified two-stage sampling design for digital soil mapping in a Mediterranean basin

    Science.gov (United States)

    Blaschek, Michael; Duttmann, Rainer

    2015-04-01

    The quality of environmental modelling results often depends on reliable soil information. In order to obtain soil data in an efficient manner, several sampling strategies are at hand, depending on the level of prior knowledge and the overall objective of the planned survey. This study focuses on the collection of soil samples considering available continuous secondary information in an undulating, 16 km² river catchment near Ussana in southern Sardinia (Italy). A design-based, stratified, two-stage sampling design was applied, aiming at the spatial prediction of soil property values at individual locations. The stratification was based on quantiles from density functions of two land-surface parameters, topographic wetness index and potential incoming solar radiation, derived from a digital elevation model. Combined with four main geological units, the applied procedure led to 30 different classes in the given test site. Up to six polygons of each available class were selected randomly, excluding areas smaller than 1 ha to avoid incorrect location of the points in the field. Further exclusion rules were applied before polygon selection, masking out roads and buildings using a 20 m buffer. The selection procedure was repeated ten times and the set of polygons with the best geographical spread was chosen. Finally, exact point locations were selected randomly from inside the chosen polygon features. A second selection based on the same stratification and following the same methodology (selecting one polygon instead of six) was made in order to create an appropriate validation set. Supplementary samples were obtained during a second survey focusing on polygons that had either not been considered at all during the first phase or were not adequately represented with respect to feature size. In total, both field campaigns produced an interpolation set of 156 samples and a validation set of 41 points. The selection of sample point locations has been done using
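
    The stratification step can be expressed compactly: bin each land-surface parameter into quantile classes and cross them with the geological units. The sketch below uses synthetic raster values and a 3 x 3 x 4 crossing as an assumption (the study's own scheme yields 30 classes); polygon selection and exclusion buffers are omitted.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    # Hypothetical land-surface parameters per raster cell (from a DEM).
    twi = rng.normal(8, 2, 100_000)       # topographic wetness index
    rad = rng.normal(1200, 150, 100_000)  # potential incoming solar radiation
    geology = rng.integers(0, 4, 100_000) # four main geological units

    def quantile_class(x, n_classes):
        # Bin a continuous covariate into quantile classes 0..n_classes-1.
        edges = np.quantile(x, np.linspace(0, 1, n_classes + 1)[1:-1])
        return np.digitize(x, edges)

    # Strata as unique combinations of covariate classes and geological unit.
    strata = (quantile_class(twi, 3) * 12
              + quantile_class(rad, 3) * 4
              + geology)
    ```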

  13. Core condensation in heavy halos: a two-stage theory for galaxy formation and clustering

    Energy Technology Data Exchange (ETDEWEB)

    White, S. D. M.; Rees, M. J. [Cambridge Univ. Inst. of Astronomy (UK)]

    1978-05-01

    It is suggested that most of the material in the Universe condensed at an early epoch into small 'dark' objects. Irrespective of their nature, these objects must subsequently have undergone hierarchical clustering, whose present scale is inferred from the large-scale distribution of galaxies. As each stage of the hierarchy forms and collapses, relaxation effects wipe out its substructure, leading to a self-similar distribution of bound masses. The entire luminous content of galaxies, however, results from the cooling and fragmentation of residual gas within the transient potential wells provided by the dark matter. Every galaxy thus forms as a concentrated luminous core embedded in an extensive dark halo. The observed sizes of galaxies and their survival through later stages of the hierarchy seem inexplicable without invoking substantial dissipation; this dissipation allows the galaxies to become sufficiently concentrated to survive the disruption of their halos in groups and clusters of galaxies. A specific model is proposed in which Ω ≈ 0.2, the dark matter makes up 80 per cent of the total mass, and half the residual gas has been converted into luminous galaxies by the present time. This model is consistent with the inferred proportions of dark matter and gas in rich clusters, with the observed luminosity density of the Universe and with the observed radii of galaxies; further, it predicts the characteristic luminosities of bright galaxies and can give a luminosity function of the observed shape.

  14. Evaluation of single and two-stage adaptive sampling designs for estimation of density and abundance of freshwater mussels in a large river

    Science.gov (United States)

    Smith, D.R.; Rogala, J.T.; Gray, B.R.; Zigler, S.J.; Newton, T.J.

    2011-01-01

    Reliable estimates of abundance are needed to assess consequences of proposed habitat restoration and enhancement projects on freshwater mussels in the Upper Mississippi River (UMR). Although there is general guidance on sampling techniques for population assessment of freshwater mussels, the actual performance of sampling designs can depend critically on the population density and spatial distribution at the project site. To evaluate various sampling designs, we simulated sampling of populations, which varied in density and degree of spatial clustering. Because of logistics and costs of large river sampling and spatial clustering of freshwater mussels, we focused on adaptive and non-adaptive versions of single and two-stage sampling. The candidate designs performed similarly in terms of precision (CV) and probability of species detection for fixed sample size. Both CV and species detection were determined largely by density, spatial distribution and sample size. However, designs did differ in the rate that occupied quadrats were encountered. Occupied units had a higher probability of selection using adaptive designs than conventional designs. We used two measures of cost: sample size (i.e. number of quadrats) and distance travelled between the quadrats. Adaptive and two-stage designs tended to reduce distance between sampling units, and thus performed better when distance travelled was considered. Based on the comparisons, we provide general recommendations on the sampling designs for the freshwater mussels in the UMR, and presumably other large rivers.

  15. A Smoothing Algorithm for a New Two-Stage Stochastic Model of Supply Chain Based on Sample Average Approximation

    OpenAIRE

    Liu Yang; Yao Xiong; Xiao-jiao Tong

    2017-01-01

    We construct a new two-stage stochastic model of supply chain with multiple factories and distributors for perishable product. By introducing a second-order stochastic dominance (SSD) constraint, we can describe the preference consistency of the risk taker while minimizing the expected cost of company. To solve this problem, we convert it into a one-stage stochastic model equivalently; then we use sample average approximation (SAA) method to approximate the expected values of the underlying r...

  16. Evaluating the Validity of a Two-stage Sample in a Birth Cohort Established from Administrative Databases.

    Science.gov (United States)

    El-Zein, Mariam; Conus, Florence; Benedetti, Andrea; Parent, Marie-Elise; Rousseau, Marie-Claude

    2016-01-01

    When using administrative databases for epidemiologic research, a subsample of subjects can be interviewed, eliciting information on undocumented confounders. This article presents a thorough investigation of the validity of a two-stage sample encompassing an assessment of nonparticipation and quantification of the extent of bias. Established through record linkage of administrative databases, the Québec Birth Cohort on Immunity and Health (n = 81,496) aims to study the association between Bacillus Calmette-Guérin vaccination and asthma. Among 76,623 subjects classified in four Bacillus Calmette-Guérin-asthma strata, a two-stage sampling strategy with a balanced design was used to randomly select individuals for interviews. We compared stratum-specific sociodemographic characteristics and healthcare utilization of stage 2 participants (n = 1,643) with those of eligible nonparticipants (n = 74,980) and nonrespondents (n = 3,157). We used logistic regression to determine whether participation varied across strata according to these characteristics. The effect of nonparticipation was described by the relative odds ratio (ROR = OR_participants / OR_source population) for the association between sociodemographic characteristics and asthma. Parental age at childbirth, area of residence, family income, and healthcare utilization were comparable between groups. Participants were slightly more likely to be women and have a mother born in Québec. Participation did not vary across strata by sex, parental birthplace, or material and social deprivation. Estimates were not biased by nonparticipation; most RORs were below one and bias never exceeded 20%. Our analyses evaluate and provide a detailed demonstration of the validity of a two-stage sample for researchers assembling similar research infrastructures.
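
    The ROR diagnostic itself is simple arithmetic, as the toy example below shows; the 2x2 counts are made up and are not from the cohort.

    ```python
    def odds_ratio(a, b, c, d):
        # a: exposed cases, b: exposed non-cases,
        # c: unexposed cases, d: unexposed non-cases
        return (a * d) / (b * c)

    or_participants = odds_ratio(120, 380, 200, 943)      # stage 2 interviewees
    or_source = odds_ratio(5_200, 17_800, 9_100, 44_523)  # full source population
    ror = or_participants / or_source  # values near 1 suggest little selection bias
    ```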

  17. A Smoothing Algorithm for a New Two-Stage Stochastic Model of Supply Chain Based on Sample Average Approximation

    Directory of Open Access Journals (Sweden)

    Liu Yang

    2017-01-01

    We construct a new two-stage stochastic model of a supply chain with multiple factories and distributors for perishable product. By introducing a second-order stochastic dominance (SSD) constraint, we can describe the preference consistency of the risk taker while minimizing the expected cost of the company. To solve this problem, we convert it into an equivalent one-stage stochastic model; then we use the sample average approximation (SAA) method to approximate the expected values of the underlying random functions. A smoothing approach is proposed with which we can get the global solution and avoid introducing new variables and constraints. Meanwhile, we investigate the convergence of the optimal value from solving the transformed model and show that, with probability approaching one at an exponential rate, the optimal value converges to its counterpart as the sample size increases. Numerical results show the effectiveness of the proposed algorithm and analysis.
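
    The SAA step replaces the expectation in the stochastic objective by an average over N sampled scenarios and optimizes the resulting deterministic function. The sketch below does this for a newsvendor-style stand-in objective, an assumption for illustration rather than the paper's supply-chain model with SSD constraints.

    ```python
    import numpy as np
    from scipy.optimize import minimize_scalar

    rng = np.random.default_rng(1)
    demand = rng.lognormal(mean=4.0, sigma=0.5, size=1_000)  # N sampled scenarios

    def saa_cost(order_qty, unit=2.0, price=5.0, salvage=0.5):
        # SAA objective: the scenario average stands in for the expectation.
        sales = np.minimum(order_qty, demand)
        profit = price * sales + salvage * (order_qty - sales) - unit * order_qty
        return -profit.mean()

    res = minimize_scalar(saa_cost, bounds=(0.0, 500.0), method="bounded")
    order = res.x   # SAA-optimal order quantity for this scenario sample
    ```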

  18. A two-stage Bayesian design with sample size reestimation and subgroup analysis for phase II binary response trials.

    Science.gov (United States)

    Zhong, Wei; Koopmeiners, Joseph S; Carlin, Bradley P

    2013-11-01

    Frequentist sample size determination for binary outcome data in a two-arm clinical trial requires initial guesses of the event probabilities for the two treatments. Misspecification of these event rates may lead to a poor estimate of the necessary sample size. In contrast, the Bayesian approach, which considers the treatment effect to be a random variable having some distribution, may offer a better, more flexible approach. The Bayesian sample size proposed by Whitehead et al. (2008) for exploratory studies on efficacy justifies the acceptable minimum sample size by a "conclusiveness" condition. In this work, we introduce a new two-stage Bayesian design with sample size reestimation at the interim stage. Our design inherits the properties of good interpretation and easy implementation from Whitehead et al. (2008), generalizes their method to a two-sample setting, and uses a fully Bayesian predictive approach to reduce an overly large initial sample size when necessary. Moreover, our design can be extended to allow patient-level covariates via logistic regression, adjusting sample size within each subgroup based on interim analyses. We illustrate the benefits of our approach with a design in non-Hodgkin lymphoma with a simple binary covariate (patient gender), offering an initial step toward within-trial personalized medicine. Copyright © 2013 Elsevier Inc. All rights reserved.

  19. Could the clinical interpretability of subgroups detected using clustering methods be improved by using a novel two-stage approach?

    DEFF Research Database (Denmark)

    Kent, Peter; Stochkendahl, Mette Jensen; Wulff Christensen, Henrik

    2015-01-01

    participation, psychological factors, biomarkers and imaging. However, such ‘whole person’ research may result in data-driven subgroups that are complex, difficult to interpret and challenging to recognise clinically. This paper describes a novel approach to applying statistical clustering techniques that may ... potential benefits but requires broad testing, in multiple patient samples, to determine its clinical value. The usefulness of the approach is likely to be context-specific, depending on the characteristics of the available data and the research question being asked of it ...

  20. Rule-of-thumb adjustment of sample sizes to accommodate dropouts in a two-stage analysis of repeated measurements.

    Science.gov (United States)

    Overall, John E; Tonidandel, Scott; Starbuck, Robert R

    2006-01-01

    Recent contributions to the statistical literature have provided elegant model-based solutions to the problem of estimating sample sizes for testing the significance of differences in mean rates of change across repeated measures in controlled longitudinal studies with differentially correlated error and missing data due to dropouts. However, the mathematical complexity and model specificity of these solutions make them generally inaccessible to most applied researchers who actually design and undertake treatment evaluation research in psychiatry. In contrast, this article relies on a simple two-stage analysis in which dropout-weighted slope coefficients, fitted to the available repeated measurements for each subject separately, serve as the dependent variable for a familiar ANCOVA test of significance for differences in mean rates of change. This article shows how a sample size that is estimated or calculated to provide desired power for testing that hypothesis without considering dropouts can be adjusted appropriately to take dropouts into account. Empirical results support the conclusion that, whatever reasonable level of power would be provided by a given sample size in the absence of dropouts, essentially the same power can be realized in the presence of dropouts simply by adding to the original dropout-free sample size the number of subjects who would be expected to drop from a sample of that original size under the conditions of the proposed study.
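
    The rule of thumb reduces to one line of arithmetic, sketched below: recruit the dropout-free sample size plus the number expected to drop from a sample of that size, i.e. n0(1 + d), which is slightly less than the n0/(1 - d) inflation used when dropouts contribute no data at all.

    ```python
    import math

    def adjusted_n(n0, dropout_rate):
        # Add to the dropout-free sample size the number of subjects expected
        # to drop from a sample of that original size.
        return math.ceil(n0 * (1 + dropout_rate))

    adjusted_n(64, 0.20)   # -> 77 recruited per arm, vs. 64 with no dropouts
    ```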

  1. Remote Sensing Based Two-Stage Sampling for Accuracy Assessment and Area Estimation of Land Cover Changes

    Directory of Open Access Journals (Sweden)

    Heinz Gallaun

    2015-09-01

    Land cover change processes are accelerating at the regional to global level. The remote sensing community has developed reliable and robust methods for wall-to-wall mapping of land cover changes; however, land cover changes often occur at rates below the mapping errors. In the current publication, we propose a cost-effective approach to complement wall-to-wall land cover change maps with a sampling approach, which is used for accuracy assessment and accurate estimation of areas undergoing land cover changes, including the provision of confidence intervals. We propose a two-stage sampling approach in order to keep accuracy, efficiency, and effort of the estimations in balance. Stratification is applied in both stages in order to gain control over the sample size allocated to rare land cover change classes on the one hand and the cost constraints for very high resolution reference imagery on the other. Bootstrapping is used to complement the accuracy measures and the area estimates with confidence intervals. The area estimates and verification estimations rely on a high-quality visual interpretation of the sampling units based on time series of satellite imagery. To demonstrate the cost-effective operational applicability of the approach, we applied it for assessment of deforestation in an area characterized by frequent cloud cover and a very low change rate in the Republic of Congo, which makes accurate deforestation monitoring particularly challenging.

  2. Randomization-Based Inference about Latent Variables from Complex Samples: The Case of Two-Stage Sampling

    Science.gov (United States)

    Li, Tiandong

    2012-01-01

    In large-scale assessments, such as the National Assessment of Educational Progress (NAEP), plausible values based on Multiple Imputations (MI) have been used to estimate population characteristics for latent constructs under complex sample designs. Mislevy (1991) derived a closed-form analytic solution for a fixed-effect model in creating…

  3. Two-stage clustering (TSC): a pipeline for selecting operational taxonomic units for the high-throughput sequencing of PCR amplicons.

    Directory of Open Access Journals (Sweden)

    Xiao-Tao Jiang

    Clustering 16S/18S rRNA amplicon sequences into operational taxonomic units (OTUs) is a critical step for the bioinformatic analysis of microbial diversity. Here, we report a pipeline for selecting OTUs with a relatively low computational demand and a high degree of accuracy. This pipeline is referred to as two-stage clustering (TSC) because it divides tags into two groups according to their abundance and clusters them sequentially. The more abundant group is clustered using a hierarchical algorithm similar to that in ESPRIT, which has a high degree of accuracy but is computationally costly for large datasets. The rarer group, which includes the majority of tags, is then heuristically clustered to improve efficiency. To further improve the computational efficiency and accuracy, two preclustering steps are implemented. To maintain clustering accuracy, all tags are grouped into an OTU depending on their pairwise Needleman-Wunsch distance. This method not only improved the computational efficiency but also mitigated spurious OTU estimation from 'noise' sequences. In addition, OTUs clustered using TSC showed comparable or improved performance in beta-diversity comparisons compared to existing OTU selection methods. This study suggests that the distribution of sequencing datasets is a useful property for improving the computational efficiency and increasing the clustering accuracy of the high-throughput sequencing of PCR amplicons. The software and user guide are freely available at http://hwzhoulab.smu.edu.cn/paperdata/.
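
    The following toy sketch shows the two-stage shape of the pipeline, assuming equal-length tags and a plain Hamming distance so it stays self-contained; TSC itself uses pairwise Needleman-Wunsch distances and additional preclustering steps.

    ```python
    def hamming(a, b):
        # Toy distance assuming equal-length tags; the real pipeline uses
        # Needleman-Wunsch distances on unaligned reads.
        return sum(x != y for x, y in zip(a, b)) / len(a)

    def two_stage_otus(tag_counts, cutoff=0.03, abundant_min=10):
        ordered = sorted(tag_counts, key=lambda t: -tag_counts[t])
        centers, otus = [], {}
        # Stage 1: accuracy-first greedy clustering of the abundant tags.
        for t in (u for u in ordered if tag_counts[u] >= abundant_min):
            close = [c for c in centers if hamming(t, c) <= cutoff]
            if close:
                otus[close[0]].append(t)
            else:
                centers.append(t)
                otus[t] = [t]
        # Stage 2: cheap nearest-center assignment of the many rare tags.
        for t in (u for u in ordered if tag_counts[u] < abundant_min):
            if centers:
                best = min(centers, key=lambda c: hamming(t, c))
                if hamming(t, best) <= cutoff:
                    otus[best].append(t)
                    continue
            centers.append(t)   # no center close enough: rare tag opens a new OTU
            otus[t] = [t]
        return otus

    otus = two_stage_otus({"ACGTACGT": 40, "ACGTACGA": 3, "TTGTACGA": 2})
    ```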

  4. Relative Efficiencies of a Three-Stage Versus a Two-Stage Sample Design For a New NLS Cohort Study. 22U-884-38.

    Science.gov (United States)

    Folsom, R. E.; Weber, J. H.

    Two sampling designs were compared for the planned 1978 national longitudinal survey of high school seniors with respect to statistical efficiency and cost. The 1972 survey used a stratified two-stage sample of high schools and seniors within schools. In order to minimize interviewer travel costs, an alternate sampling design was proposed,…

  5. A two-stage approach to estimate spatial and spatio-temporal disease risks in the presence of local discontinuities and clusters.

    Science.gov (United States)

    Adin, A; Lee, D; Goicoa, T; Ugarte, María Dolores

    2018-01-01

    Disease risk maps for areal unit data are often estimated from Poisson mixed models with local spatial smoothing, for example by incorporating random effects with a conditional autoregressive prior distribution. However, one of the limitations is that local discontinuities in the spatial pattern are not usually modelled, leading to over-smoothing of the risk maps and a masking of clusters of hot/coldspot areas. In this paper, we propose a novel two-stage approach to estimate and map disease risk in the presence of such local discontinuities and clusters. We propose approaches in both spatial and spatio-temporal domains, where for the latter the clusters can either be fixed or allowed to vary over time. In the first stage, we apply an agglomerative hierarchical clustering algorithm to training data to provide sets of potential clusters, and in the second stage, a two-level spatial or spatio-temporal model is applied to each potential cluster configuration. The superiority of the proposed approach with regard to a previous proposal is shown by simulation, and the methodology is applied to two important public health problems in Spain, namely stomach cancer mortality across Spain and brain cancer incidence in the Navarre and Basque Country regions of Spain.

  6. Estimating Accuracy of Land-Cover Composition From Two-Stage Clustering Sampling

    Science.gov (United States)

    Land-cover maps are often used to compute land-cover composition (i.e., the proportion or percent of area covered by each class), for each unit in a spatial partition of the region mapped. We derive design-based estimators of mean deviation (MD), mean absolute deviation (MAD), ...

  7. Extending cluster Lot Quality Assurance Sampling designs for surveillance programs

    OpenAIRE

    Hund, Lauren; Pagano, Marcello

    2014-01-01

    Lot quality assurance sampling (LQAS) has a long history of applications in industrial quality control. LQAS is frequently used for rapid surveillance in global health settings, with areas classified as poor or acceptable performance based on the binary classification of an indicator. Historically, LQAS surveys have relied on simple random samples from the population; however, implementing two-stage cluster designs for surveillance sampling is often more cost-effective than ...

  8. The implementation of two stages clustering (k-means clustering and adaptive neuro fuzzy inference system) for prediction of medicine need based on medical data

    Science.gov (United States)

    Husein, A. M.; Harahap, M.; Aisyah, S.; Purba, W.; Muhazir, A.

    2018-03-01

    Medication planning aims to obtain the right types and amounts of medicines according to need and to avoid stock-outs, based on patterns of disease. Medicine planning still relies on individual ability and leadership experience; it takes a long time and considerable skill, definite disease data are difficult to obtain, good record keeping and reporting are required, and dependence on the budget means that planning often does not go well, leading to frequent shortages and surpluses of medicines. In this research, we propose the Adaptive Neuro Fuzzy Inference System (ANFIS) method to predict medicine needs in 2016 and 2017 based on medical data from 2015 and 2016 from two hospital sources. The analysis framework uses two approaches: the first applies ANFIS directly to a data source, while the second also uses ANFIS but only after clustering the data with the K-Means algorithm; for both approaches, Root Mean Square Error (RMSE) values are calculated for training and testing. The testing results show that the proposed method achieves better prediction rates in both quantitative and qualitative evaluation compared with existing systems; moreover, applying the K-Means algorithm before ANFIS affects the duration of the training process and yields significantly better classification accuracy than ANFIS without clustering.

  9. Systematic Sampling and Cluster Sampling of Packet Delays

    OpenAIRE

    Lindh, Thomas

    2006-01-01

    Based on experiences with a traffic flow performance meter, this paper suggests and evaluates cluster sampling and systematic sampling as methods to estimate average packet delays. Systematic sampling facilitates, for example, time analysis, frequency analysis and jitter measurements. Cluster sampling with repeated trains of periodically spaced sampling units separated by random starting periods, and systematic sampling, are evaluated with respect to accuracy and precision. Packet delay traces have been ...

  10. Extending cluster lot quality assurance sampling designs for surveillance programs.

    Science.gov (United States)

    Hund, Lauren; Pagano, Marcello

    2014-07-20

    Lot quality assurance sampling (LQAS) has a long history of applications in industrial quality control. LQAS is frequently used for rapid surveillance in global health settings, with areas classified as poor or acceptable performance on the basis of the binary classification of an indicator. Historically, LQAS surveys have relied on simple random samples from the population; however, implementing two-stage cluster designs for surveillance sampling is often more cost-effective than simple random sampling. By applying survey sampling results to the binary classification procedure, we develop a simple and flexible nonparametric procedure to incorporate clustering effects into the LQAS sample design to appropriately inflate the sample size, accommodating finite numbers of clusters in the population when relevant. We use this framework to then discuss principled selection of survey design parameters in longitudinal surveillance programs. We apply this framework to design surveys to detect rises in malnutrition prevalence in nutrition surveillance programs in Kenya and South Sudan, accounting for clustering within villages. By combining historical information with data from previous surveys, we design surveys to detect spikes in the childhood malnutrition rate. Copyright © 2014 John Wiley & Sons, Ltd.
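
    As a back-of-envelope version of the inflation step, the sketch below scales a simple-random-sample LQAS size by the standard design effect DEFF = 1 + (m - 1) * ICC for m interviews per cluster; the paper's nonparametric procedure is more careful (e.g., with finite numbers of clusters), so treat this as a first approximation with illustrative numbers.

    ```python
    import math

    def clustered_lqas_size(n_srs, per_cluster, icc):
        # Inflate the simple-random-sample LQAS sample size by the design
        # effect for clustering, then convert to a cluster count.
        deff = 1 + (per_cluster - 1) * icc
        n = math.ceil(n_srs * deff)
        return n, math.ceil(n / per_cluster)   # (total interviews, clusters)

    clustered_lqas_size(n_srs=192, per_cluster=10, icc=0.05)   # -> (279, 28)
    ```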

  11. Two stages of economic development

    OpenAIRE

    Gong, Gang

    2016-01-01

    This study suggests that the development process of a less-developed country can be divided into two stages, which demonstrate significantly different properties in areas such as structural endowments, production modes, income distribution, and the forces that drive economic growth. The two stages of economic development have been indicated in the growth theory of macroeconomics and in the various "turning point" theories in development economics, including Lewis's dual economy theory, Kuznet...

  12. Two-stage implant systems.

    Science.gov (United States)

    Fritz, M E

    1999-06-01

    Since the advent of osseointegration approximately 20 years ago, a great deal of scientific data has been developed on two-stage integrated implant systems. Although these implants were originally designed primarily for fixed prostheses in the mandibular arch, they have been used in partially dentate patients, in patients needing overdentures, and in single-tooth restorations. In addition, this implant system has been placed in extraction sites, in bone-grafted areas, and in maxillary sinus elevations. Often, the documentation of these procedures has lagged. In addition, most of the reports use survival criteria to describe results, often providing overly optimistic data. It can be said that the literature describes a true adhesion of the epithelium to the implant similar to adhesion to teeth; that two-stage implants appear to have direct contact over somewhere between 50% and 70% of the implant surface; that the microbial flora of the two-stage implant system closely resembles that of the natural tooth; and that the microbiology of periodontitis appears to be closely related to that of peri-implantitis. In evaluations of the data from implant placement in all of the above-noted situations by means of meta-analysis, it appears that there is a strong case that two-stage dental implants are successful, usually showing a confidence interval of over 90%. It also appears that mandibular implants are more successful than maxillary implants. Studies also show that overdenture therapy is valid, and that single-tooth implants and implants placed in partially dentate mouths have a success rate that is quite good, although not quite as high as in the fully edentulous dentition. It would also appear that the potential causes of failure in two-stage dental implant systems are peri-implantitis, placement of implants in poor-quality bone, and improper loading of implants. There are now data addressing modifications of the implant surface to alter the percentage of

  13. Two-stage nonrecursive filter/decimator

    International Nuclear Information System (INIS)

    Yoder, J.R.; Richard, B.D.

    1980-08-01

    A two-stage digital filter/decimator has been designed and implemented to reduce the sampling rate associated with the long-term computer storage of certain digital waveforms. This report describes the design selection and implementation process and serves as documentation for the system actually installed. A filter design with finite impulse response (nonrecursive) was chosen for implementation via direct convolution. A newly developed system-test statistic validates the system under different computer-operating environments.

  14. Re-estimating sample size in cluster randomized trials with active recruitment within clusters

    NARCIS (Netherlands)

    van Schie, Sander; Moerbeek, Mirjam

    2014-01-01

    Often only a limited number of clusters can be obtained in cluster randomised trials, although many potential participants can be recruited within each cluster. Thus, active recruitment is feasible within the clusters. To obtain an efficient sample size in a cluster randomised trial, the cluster

  15. Spectral embedded clustering: a framework for in-sample and out-of-sample spectral clustering.

    Science.gov (United States)

    Nie, Feiping; Zeng, Zinan; Tsang, Ivor W; Xu, Dong; Zhang, Changshui

    2011-11-01

    Spectral clustering (SC) methods have been successfully applied to many real-world applications. The success of these SC methods is largely based on the manifold assumption, namely, that two nearby data points in the high-density region of a low-dimensional data manifold have the same cluster label. However, such an assumption might not always hold on high-dimensional data. When the data do not exhibit a clear low-dimensional manifold structure (e.g., high-dimensional and sparse data), the clustering performance of SC will be degraded and become even worse than K-means clustering. In this paper, motivated by the observation that the true cluster assignment matrix for high-dimensional data can always be embedded in a linear space spanned by the data, we propose the spectral embedded clustering (SEC) framework, in which a linearity regularization is explicitly added into the objective function of SC methods. More importantly, the proposed SEC framework can naturally deal with out-of-sample data. We also present a new Laplacian matrix constructed from a local regression of each pattern and incorporate it into our SEC framework to capture both local and global discriminative information for clustering. Comprehensive experiments on eight real-world high-dimensional datasets demonstrate the effectiveness and advantages of our SEC framework over existing SC methods and K-means-based clustering methods. Our SEC framework significantly outperforms SC using the Nyström algorithm on unseen data.

  16. AGN Clustering in the BAT Sample

    Science.gov (United States)

    Powell, Meredith; Cappelluti, Nico; Urry, Meg; Koss, Michael; BASS Team

    2018-01-01

    We characterize the environments of local growing supermassive black holes by measuring the clustering of AGN in the Swift-BAT Spectroscopic Survey (BASS). With 548 AGN in the redshift range 0.01 < z < 0.1, cross-correlated with 2MASS galaxies, we constrain the halo occupation distribution (HOD) of the full sample with unprecedented sensitivity, as well as in bins of obscuration with matched luminosity distributions. In doing so, we find that AGN tend to reside in galaxy groups, agreeing with previous studies of AGN throughout a large range of luminosity and redshift. We also find evidence that obscured AGN tend to reside in denser environments than unobscured AGN.

  17. Meta-analysis of Gaussian individual patient data: Two-stage or not two-stage?

    Science.gov (United States)

    Morris, Tim P; Fisher, David J; Kenward, Michael G; Carpenter, James R

    2018-04-30

    Quantitative evidence synthesis through meta-analysis is central to evidence-based medicine. For well-documented reasons, the meta-analysis of individual patient data is held in higher regard than aggregate data. With access to individual patient data, the analysis is not restricted to a "two-stage" approach (combining estimates and standard errors) but can estimate parameters of interest by fitting a single model to all of the data, a so-called "one-stage" analysis. There has been debate about the merits of one- and two-stage analysis. Arguments for one-stage analysis have typically noted that a wider range of models can be fitted and overall estimates may be more precise. The two-stage side has emphasised that the models that can be fitted in two stages are sufficient to answer the relevant questions, with less scope for mistakes because there are fewer modelling choices to be made in the two-stage approach. For Gaussian data, we consider the statistical arguments for flexibility and precision in small-sample settings. Regarding flexibility, several of the models that can be fitted only in one stage may not be of serious interest to most meta-analysis practitioners. Regarding precision, we consider fixed- and random-effects meta-analysis and see that, for a model making certain assumptions, the number of stages used to fit this model is irrelevant; the precision will be approximately equal. Meta-analysts should choose modelling assumptions carefully. Sometimes relevant models can only be fitted in one stage. Otherwise, meta-analysts are free to use whichever procedure is most convenient to fit the identified model. © 2018 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.
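
    The two-stage procedure in miniature: estimate an effect and standard error within each study (stage one), then pool with inverse-variance weights (stage two, fixed-effect). The numbers below are made up for three hypothetical trials.

    ```python
    import numpy as np

    # Stage 1 (per study): effect estimates and standard errors.
    est = np.array([0.42, 0.31, 0.55])
    se = np.array([0.12, 0.20, 0.15])

    # Stage 2: fixed-effect inverse-variance pooling.
    w = 1.0 / se**2
    pooled = np.sum(w * est) / np.sum(w)
    pooled_se = np.sqrt(1.0 / np.sum(w))
    ci95 = (pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se)
    ```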

  18. Relative efficiency and sample size for cluster randomized trials with variable cluster sizes.

    Science.gov (United States)

    You, Zhiying; Williams, O Dale; Aban, Inmaculada; Kabagambe, Edmond Kato; Tiwari, Hemant K; Cutter, Gary

    2011-02-01

    The statistical power of cluster randomized trials depends on two sample size components: the number of clusters per group and the number of individuals within clusters (cluster size). Variable cluster sizes are common, and this variation alone may have a significant impact on study power. Previous approaches have taken this into account by either adjusting the total sample size using a designated design effect or adjusting the number of clusters according to an assessment of the relative efficiency of unequal versus equal cluster sizes. This article defines a relative efficiency of unequal versus equal cluster sizes using noncentrality parameters, investigates properties of this measure, and proposes an approach for adjusting the required sample size accordingly. We focus on comparing two groups with normally distributed outcomes using a t-test, and use the noncentrality parameter to define the relative efficiency of unequal versus equal cluster sizes and show that statistical power depends only on this parameter for a given number of clusters. We calculate the sample size required for a trial with unequal cluster sizes to have the same power as one with equal cluster sizes. Relative efficiency based on the noncentrality parameter is straightforward to calculate and easy to interpret. It connects the required mean cluster size directly to the required sample size with equal cluster sizes. Consequently, our approach first determines the sample size requirements with equal cluster sizes for a pre-specified study power and then calculates the required mean cluster size while keeping the number of clusters unchanged. Our approach allows adjustment in mean cluster size alone or simultaneous adjustment in mean cluster size and number of clusters, and is a flexible alternative to and a useful complement to existing methods. Comparison indicated that we have defined a relative efficiency that is greater than the relative efficiency in the literature under some conditions. Our measure
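
    The direction of the adjustment can be illustrated with a cruder, widely used approximation, DEFF ≈ 1 + ((cv² + 1) * m̄ - 1) * ICC for mean cluster size m̄ and coefficient of variation cv of cluster size (attributed to Eldridge and colleagues); it stands in here for the article's noncentrality-based relative efficiency, and the numbers are illustrative.

    ```python
    def design_effect(mean_size, cv, icc):
        # Approximate inflation for unequal cluster sizes:
        # DEFF ~= 1 + ((cv**2 + 1) * mean_size - 1) * icc
        return 1 + ((cv**2 + 1) * mean_size - 1) * icc

    n_equal = 300                                   # per-arm n for equal clusters
    re = design_effect(20, 0.6, 0.02) / design_effect(20, 0.0, 0.02)
    n_unequal = round(n_equal * re)                 # ~331: n preserving power
    ```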

  19. Two-stage electrolysis to enrich tritium in environmental water

    International Nuclear Information System (INIS)

    Shima, Nagayoshi; Muranaka, Takeshi

    2007-01-01

    We present a two-stage electrolysis procedure to enrich tritium in environmental waters. Tritium is first enriched rapidly in a commercially available electrolyser with a large 50 A current, and then in a newly designed electrolyser that avoids the memory effect, with a 6 A current. The tritium recovery factor obtained by such two-stage electrolysis was greater than that obtained when using the commercially available device alone. Water samples collected in 2006 in lakes and along the Pacific coast of Aomori prefecture, Japan, were electrolyzed using the two-stage method. Tritium concentrations in these samples ranged from 0.2 to 0.9 Bq/L and were half or less of those in samples collected at the same sites in 1992. (author)

  20. Choosing a Cluster Sampling Design for Lot Quality Assurance Sampling Surveys.

    Directory of Open Access Journals (Sweden)

    Lauren Hund

    Lot quality assurance sampling (LQAS) surveys are commonly used for monitoring and evaluation in resource-limited settings. Recently several methods have been proposed to combine LQAS with cluster sampling for more timely and cost-effective data collection. For some of these methods, the standard binomial model can be used for constructing decision rules as the clustering can be ignored. For other designs, considered here, clustering is accommodated in the design phase. In this paper, we compare these latter cluster LQAS methodologies and provide recommendations for choosing a cluster LQAS design. We compare technical differences in the three methods and determine situations in which the choice of method results in a substantively different design. We consider two different aspects of the methods: the distributional assumptions and the clustering parameterization. Further, we provide software tools for implementing each method and clarify misconceptions about these designs in the literature. We illustrate the differences in these methods using vaccination and nutrition cluster LQAS surveys as example designs. The cluster methods are not sensitive to the distributional assumptions but can result in substantially different designs (sample sizes) depending on the clustering parameterization. However, none of the clustering parameterizations used in the existing methods appears to be consistent with the observed data, and, consequently, choice between the cluster LQAS methods is not straightforward. Further research should attempt to characterize clustering patterns in specific applications and provide suggestions for best-practice cluster LQAS designs on a setting-specific basis.

  1. Choosing a Cluster Sampling Design for Lot Quality Assurance Sampling Surveys.

    Science.gov (United States)

    Hund, Lauren; Bedrick, Edward J; Pagano, Marcello

    2015-01-01

    Lot quality assurance sampling (LQAS) surveys are commonly used for monitoring and evaluation in resource-limited settings. Recently several methods have been proposed to combine LQAS with cluster sampling for more timely and cost-effective data collection. For some of these methods, the standard binomial model can be used for constructing decision rules as the clustering can be ignored. For other designs, considered here, clustering is accommodated in the design phase. In this paper, we compare these latter cluster LQAS methodologies and provide recommendations for choosing a cluster LQAS design. We compare technical differences in the three methods and determine situations in which the choice of method results in a substantively different design. We consider two different aspects of the methods: the distributional assumptions and the clustering parameterization. Further, we provide software tools for implementing each method and clarify misconceptions about these designs in the literature. We illustrate the differences in these methods using vaccination and nutrition cluster LQAS surveys as example designs. The cluster methods are not sensitive to the distributional assumptions but can result in substantially different designs (sample sizes) depending on the clustering parameterization. However, none of the clustering parameterizations used in the existing methods appears to be consistent with the observed data, and, consequently, choice between the cluster LQAS methods is not straightforward. Further research should attempt to characterize clustering patterns in specific applications and provide suggestions for best-practice cluster LQAS designs on a setting-specific basis.

  2. Stochastic coupled cluster theory: Efficient sampling of the coupled cluster expansion

    Science.gov (United States)

    Scott, Charles J. C.; Thom, Alex J. W.

    2017-09-01

    We consider the sampling of the coupled cluster expansion within stochastic coupled cluster theory. Observing the limitations of previous approaches due to the inherently non-linear behavior of a coupled cluster wavefunction representation, we propose new approaches based on an intuitive, well-defined condition for sampling weights and on sampling the expansion in cluster operators of different excitation levels. We term these modifications even and truncated selections, respectively. Utilising both approaches demonstrates dramatically improved calculation stability as well as reduced computational and memory costs. These modifications are particularly effective at higher truncation levels owing to the large number of terms within the cluster expansion that can be neglected, as demonstrated by the reduction of the number of terms to be sampled when truncating at triple excitations by 77% and hextuple excitations by 98%.

  3. Sensitivity Analysis in Two-Stage DEA

    Directory of Open Access Journals (Sweden)

    Athena Forghani

    2015-07-01

    Full Text Available Data envelopment analysis (DEA) is a method for measuring the efficiency of peer decision making units (DMUs), which use a set of inputs to produce a set of outputs. In some cases, DMUs have a two-stage structure, in which the first stage utilizes inputs to produce outputs that are used as the inputs of the second stage to produce final outputs. One important issue in two-stage DEA is the sensitivity of the results of an analysis to perturbations in the data. The current paper looks into a combined model for two-stage DEA and applies sensitivity analysis to DMUs on the entire frontier. In fact, necessary and sufficient conditions for preserving a DMU's efficiency classification are developed when various data changes are applied to all DMUs.

  4. Sensitivity Analysis in Two-Stage DEA

    Directory of Open Access Journals (Sweden)

    Athena Forghani

    2015-12-01

    Full Text Available Data envelopment analysis (DEA) is a method for measuring the efficiency of peer decision making units (DMUs), which use a set of inputs to produce a set of outputs. In some cases, DMUs have a two-stage structure, in which the first stage utilizes inputs to produce outputs that are used as the inputs of the second stage to produce final outputs. One important issue in two-stage DEA is the sensitivity of the results of an analysis to perturbations in the data. The current paper looks into a combined model for two-stage DEA and applies sensitivity analysis to DMUs on the entire frontier. In fact, necessary and sufficient conditions for preserving a DMU's efficiency classification are developed when various data changes are applied to all DMUs.

  5. Two-stage precipitation of plutonium trifluoride

    International Nuclear Information System (INIS)

    Luerkens, D.W.

    1984-04-01

    Plutonium trifluoride was precipitated using a two-stage precipitation system. A series of precipitation experiments identified the significant process variables affecting precipitate characteristics. A mathematical precipitation model was developed, based on the formation of plutonium fluoride complexes. The precipitation model relates all process variables, in a single equation, to a single parameter that can be used to control particle characteristics.

  6. Two-Stage Series-Resonant Inverter

    Science.gov (United States)

    Stuart, Thomas A.

    1994-01-01

    Two-stage inverter includes variable-frequency, voltage-regulating first stage and fixed-frequency second stage. Lightweight circuit provides regulated power and is invulnerable to output short circuits. Does not require large capacitor across ac bus, like parallel resonant designs. Particularly suitable for use in ac-power-distribution system of aircraft.

  7. Choosing a Cluster Sampling Design for Lot Quality Assurance Sampling Surveys

    OpenAIRE

    Hund, Lauren; Bedrick, Edward J.; Pagano, Marcello

    2015-01-01

    Lot quality assurance sampling (LQAS) surveys are commonly used for monitoring and evaluation in resource-limited settings. Recently several methods have been proposed to combine LQAS with cluster sampling for more timely and cost-effective data collection. For some of these methods, the standard binomial model can be used for constructing decision rules as the clustering can be ignored. For other designs, considered here, clustering is accommodated in the design phase. In this paper, we comp...

  8. Condensate from a two-stage gasifier

    DEFF Research Database (Denmark)

    Bentzen, Jens Dall; Henriksen, Ulrik Birk; Hindsgaul, Claus

    2000-01-01

    Condensate, produced when gas from a downdraft biomass gasifier is cooled, contains organic compounds that inhibit nitrifiers. Treatment with activated carbon removes most of the organics and makes the condensate far less inhibitory. The condensate from an optimised two-stage gasifier is so clean that the organic compounds and the inhibition effect are very low even before treatment with activated carbon. The moderate inhibition effect relates to a high content of ammonia in the condensate. The nitrifiers become tolerant to the condensate after a few weeks of exposure. The level of organic compounds and the level of inhibition are so low that condensate from the optimised two-stage gasifier can be led to the public sewer.

  9. Two stage-type railgun accelerator

    International Nuclear Information System (INIS)

    Ogino, Mutsuo; Azuma, Kingo.

    1995-01-01

    The present invention provides a two stage-type railgun accelerator capable of injecting a flying body (an ice pellet formed by solidifying a gaseous hydrogen isotope fuel) at higher speed into the central portion of the plasma in a thermonuclear reactor. Namely, the two stage-type railgun accelerator accelerates the flying body, injected from an initial-stage accelerator into the region between the rails, by the Lorentz force generated when electric current is supplied to the two rails by way of a plasma armature. In this case, two sets of solenoids are disposed to compress the plasma armature in the longitudinal direction of the rails. The first and second sets of solenoid coils are supplied with electric current in advance. After passage of the flying body, the armature, formed into a plasma by a gas laser disposed behind the flying body, is compressed in the longitudinal direction of the rails by the magnetic force of the first and second sets of solenoid coils, increasing the plasma density. The current density is also increased simultaneously. Then, the first solenoid coil current is turned OFF to accelerate the flying body in two stages by the compressed plasma armature. (I.S.)

  10. Two-stage free electron laser research

    Science.gov (United States)

    Segall, S. B.

    1984-10-01

    KMS Fusion, Inc. began studying the feasibility of two-stage free electron lasers for the Office of Naval Research in June 1980. At that time, the two-stage FEL was only a concept that had been proposed by Luis Elias. The range of parameters over which such a laser could be successfully operated, the attainable power output, and the constraints on laser operation were not known. The primary reason for supporting this research at that time was its potential for producing short-wavelength radiation using a relatively low voltage electron beam. One advantage of a low-voltage two-stage FEL is that shielding requirements would be greatly reduced compared with single-stage short-wavelength FELs. If the electron energy were kept below about 10 MeV, X-rays generated by electrons striking the beam line wall would not excite neutron resonances in atomic nuclei. These resonances cause the emission of neutrons with subsequent induced radioactivity. Therefore, above about 10 MeV, a meter or more of concrete shielding is required for the system, whereas below 10 MeV, a few millimeters of lead would be adequate.

  11. Hypospadias repair: Byar's two stage operation revisited.

    Science.gov (United States)

    Arshad, A R

    2005-06-01

    Hypospadias is a congenital deformity characterised by an abnormally located urethral opening, which can occur anywhere proximal to its normal location, from the ventral surface of the glans penis to the perineum. Many operations have been described for the management of this deformity. One hundred and fifteen patients with hypospadias were treated at the Department of Plastic Surgery, Hospital Kuala Lumpur, Malaysia, between September 1987 and December 2002, of whom 100 underwent Byar's procedure. The age of the patients ranged from neonates to 26 years. Sixty-seven patients had penoscrotal (58%), 20 had proximal penile (18%), 13 had distal penile (11%) and 15 had subcoronal hypospadias (13%). The operations performed were Byar's two-staged (100), Bracka's two-staged (11), flip-flap (2) and MAGPI (2). The most common complication encountered following hypospadias surgery was urethral fistula, at a rate of 18%. There is a higher incidence of proximal hypospadias in the Malaysian community. Byar's procedure is a very versatile technique and can be used for all types of hypospadias. The fistula rate is 18% in this series.

  12. Don't spin the pen: two alternative methods for second-stage sampling in urban cluster surveys

    Directory of Open Access Journals (Sweden)

    Rose Angela MC

    2007-06-01

    Full Text Available In two-stage cluster surveys, the traditional method used in second-stage sampling (in which the first household in a cluster is selected) is time-consuming and may result in biased estimates of the indicator of interest. Firstly, a random direction from the center of the cluster is selected, usually by spinning a pen. The houses along that direction are then counted out to the boundary of the cluster, and one is then selected at random to be the first household surveyed. This process favors households towards the center of the cluster, but it could easily be improved. During a recent meningitis vaccination coverage survey in Maradi, Niger, we compared this method of first household selection to two alternatives in urban zones: 1) using a grid superimposed on the map of the cluster area and randomly selecting an intersection; and 2) drawing the perimeter of the cluster area using a Global Positioning System (GPS) and randomly selecting one point within the perimeter. Although we only compared a limited number of clusters using each method, we found the sampling grid method to be the fastest and easiest for field survey teams, although it does require a map of the area. Selecting a random GPS point was also found to be a good method, once adequate training can be provided. Spinning the pen and counting households to the boundary was the most complicated and time-consuming. The two methods tested here represent simpler, quicker and potentially more robust alternatives to spinning the pen for cluster surveys in urban areas. However, in rural areas, these alternatives would favor initial household selection from lower density (or even potentially empty) areas. Bearing in mind these limitations, as well as available resources and feasibility, investigators should choose the most appropriate method for their particular survey context.
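
    The random-GPS-point alternative reduces, computationally, to drawing a uniform point inside the cluster perimeter. A minimal sketch (illustrative coordinates; rejection sampling plus a ray-casting point-in-polygon test, not the authors' field procedure):

        # Minimal sketch of the random-GPS-point method: draw uniform points
        # in the cluster's bounding box and keep the first that falls inside
        # the cluster polygon. Coordinates below are illustrative.
        import random

        def point_in_polygon(x, y, poly):
            """Ray-casting test; poly is a list of (x, y) vertices."""
            inside = False
            n = len(poly)
            for i in range(n):
                x1, y1 = poly[i]
                x2, y2 = poly[(i + 1) % n]
                if (y1 > y) != (y2 > y):
                    x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
                    if x < x_cross:
                        inside = not inside
            return inside

        def random_start_point(poly, rng=random):
            xs, ys = zip(*poly)
            while True:  # rejection sampling from the bounding box
                x = rng.uniform(min(xs), max(xs))
                y = rng.uniform(min(ys), max(ys))
                if point_in_polygon(x, y, poly):
                    return x, y

        cluster_perimeter = [(13.49, 7.10), (13.52, 7.10), (13.53, 7.13), (13.50, 7.14)]
        print(random_start_point(cluster_perimeter))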

  13. Hydration of Atmospheric Molecular Clusters: Systematic Configurational Sampling.

    Science.gov (United States)

    Kildgaard, Jens; Mikkelsen, Kurt V; Bilde, Merete; Elm, Jonas

    2018-05-09

    We present a new systematic configurational sampling algorithm for investigating the potential energy surface of hydrated atmospheric molecular clusters. The algorithm is based on creating a Fibonacci sphere around each atom in the cluster and adding water molecules at each point in 9 different orientations. To allow the addition of water molecules to existing hydrogen bonds, the cluster is displaced along the hydrogen bond and a water molecule is placed in between in three different orientations. Generated redundant structures are eliminated by minimizing the root mean square distance (RMSD) between different conformers. Initially, the clusters are sampled using the semiempirical PM6 method and subsequently using density functional theory (M06-2X and ωB97X-D) with the 6-31++G(d,p) basis set. Applying the developed algorithm, we study the hydration of sulfuric acid with up to 15 water molecules. We find that the addition of the first four water molecules "saturates" the sulfuric acid molecule and is more thermodynamically favourable than the addition of water molecules 5-15. Using the large generated set of conformers, we assess the performance of approximate methods (ωB97X-D, M06-2X, PW91 and PW6B95-D3) in calculating the binding energies and assigning the global minimum conformation compared to high level CCSD(T)-F12a/VDZ-F12 reference calculations. The tested DFT functionals systematically overestimate the binding energies compared to coupled cluster calculations, and we find that this deficiency can be corrected by a simple scaling factor.
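
    The Fibonacci-sphere construction at the heart of the sampling step is compact enough to sketch. The following is illustrative (radius and point count are arbitrary; the published algorithm additionally samples 9 water orientations per point):

        # Minimal sketch of a Fibonacci sphere: N nearly evenly spaced unit
        # vectors around an atom, scaled to a chosen placement radius.
        import math

        def fibonacci_sphere(n_points: int, radius: float = 1.0):
            golden_angle = math.pi * (3.0 - math.sqrt(5.0))
            points = []
            for i in range(n_points):
                z = 1.0 - 2.0 * (i + 0.5) / n_points     # even spacing in z
                r_xy = math.sqrt(max(0.0, 1.0 - z * z))  # radius of the z-slice
                theta = golden_angle * i                 # spiral around the axis
                points.append((radius * r_xy * math.cos(theta),
                               radius * r_xy * math.sin(theta),
                               radius * z))
            return points

        for p in fibonacci_sphere(8, radius=2.5):  # 8 candidate sites, 2.5 A out
            print("%+.3f %+.3f %+.3f" % p)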

  14. On the robustness of two-stage estimators

    KAUST Repository

    Zhelonkin, Mikhail

    2012-04-01

    The aim of this note is to provide a general framework for the analysis of the robustness properties of a broad class of two-stage models. We derive the influence function, the change-of-variance function, and the asymptotic variance of a general two-stage M-estimator, and provide their interpretations. We illustrate our results in the case of the two-stage maximum likelihood estimator and the two-stage least squares estimator. © 2011.

  15. ATCA observations of the MACS-Planck Radio Halo Cluster Project. II. Radio observations of an intermediate redshift cluster sample

    Science.gov (United States)

    Martinez Aviles, G.; Johnston-Hollitt, M.; Ferrari, C.; Venturi, T.; Democles, J.; Dallacasa, D.; Cassano, R.; Brunetti, G.; Giacintucci, S.; Pratt, G. W.; Arnaud, M.; Aghanim, N.; Brown, S.; Douspis, M.; Hurier, J.; Intema, H. T.; Langer, M.; Macario, G.; Pointecouteau, E.

    2018-04-01

    Aim. A fraction of galaxy clusters host diffuse radio sources whose origins are investigated through multi-wavelength studies of cluster samples. We investigate the presence of diffuse radio emission in a sample of seven galaxy clusters in the largely unexplored intermediate redshift range (0.3 < z < ...). The associated radio catalogues are available at http://cdsarc.u-strasbg.fr (http://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/611/A94

  16. Short term load forecasting: two stage modelling

    Directory of Open Access Journals (Sweden)

    SOARES, L. J.

    2009-06-01

    Full Text Available This paper studies the hourly electricity load demand in the area covered by a utility situated in Seattle, USA, called the Puget Sound Power and Light Company. Our proposal is tested on the well-known dataset from this company. We propose a stochastic model which employs artificial neural networks (ANNs) to model short-run dynamics and the dependence among adjacent hours. The proposed model treats each hour's load separately as an individual series. This approach avoids modeling the intricate intra-day pattern (load profile) displayed by the load, which varies throughout days of the week and seasons. The forecasting performance of the model is evaluated in a similar manner to the TLSAR (Two-Level Seasonal Autoregressive) model proposed by Soares (2003), using the years 1995 and 1996 as the holdout sample. Moreover, we conclude that nonlinearity is present in some series of these data. The model results are analyzed. The experiment shows that our tool can be used to produce load forecasts in places with tropical climates.
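
    The "each hour as an individual series" idea can be sketched with 24 independent regressors, one per hour of the day. In the snippet below a lag-1 linear model on synthetic data stands in for the paper's ANN:

        # Minimal sketch: fit 24 independent models, each mapping yesterday's
        # load at hour h to today's load at hour h. Data are synthetic.
        import numpy as np

        rng = np.random.default_rng(0)
        days, hours = 200, 24
        load = 100 + 30 * np.sin(2 * np.pi * np.arange(hours) / 24) + \
               rng.normal(0, 5, size=(days, hours))   # synthetic hourly loads

        models = []
        for h in range(hours):                         # one model per hour
            x, y = load[:-1, h], load[1:, h]           # lag-1 pairs for hour h
            A = np.column_stack([x, np.ones_like(x)])
            coef, *_ = np.linalg.lstsq(A, y, rcond=None)
            models.append(coef)

        forecast = [models[h][0] * load[-1, h] + models[h][1] for h in range(hours)]
        print(np.round(forecast[:6], 1))               # next-day forecast, hours 0-5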

  17. On the prior probabilities for two-stage Bayesian estimates

    International Nuclear Information System (INIS)

    Kohut, P.

    1992-01-01

    The method of Bayesian inference is reexamined for its applicability and for the required underlying assumptions in obtaining and using prior probability estimates. Two different approaches are suggested to determine the first-stage priors in the two-stage Bayesian analysis, which avoid certain assumptions required by other techniques. In the first scheme, the prior is obtained through a true frequency-based distribution generated at selected intervals utilizing actual sampling of the failure rate distributions. The population variability distribution is generated as the weighted average of the frequency distributions. The second method is based on a non-parametric Bayesian approach using the Maximum Entropy Principle. Specific features such as integral properties or selected parameters of prior distributions may be obtained with minimal assumptions. It is indicated how various quantiles may also be generated with a least-squares technique.

  18. A two-stage method for inverse medium scattering

    KAUST Repository

    Ito, Kazufumi

    2013-03-01

    We present a novel numerical method for the time-harmonic inverse medium scattering problem of recovering the refractive index from noisy near-field scattered data. The approach consists of two stages: a pruning step that detects the scatterer support, and a resolution-enhancing step with nonsmooth mixed regularization. The first step is strictly direct and of sampling type, and it faithfully detects the scatterer support. The second step is an innovative application of nonsmooth mixed regularization, and it accurately resolves the scatterer size as well as intensities. The nonsmooth model can be efficiently solved by a semi-smooth Newton-type method. Numerical results for two- and three-dimensional examples indicate that the new approach is accurate, computationally efficient, and robust with respect to data noise. © 2012 Elsevier Inc.

  19. Two-stage anaerobic digestion of cheese whey

    Energy Technology Data Exchange (ETDEWEB)

    Lo, K V; Liao, P H

    1986-01-01

    A two-stage digestion of cheese whey was studied using two anaerobic rotating biological contact reactors. The second-stage reactor receiving partially treated effluent from the first-stage reactor could be operated at a hydraulic retention time of one day. The results indicated that two-stage digestion is a feasible alternative for treating whey. 6 references.

  20. Systematic adaptive cluster sampling for the assessment of rare tree species in Nepal

    NARCIS (Netherlands)

    Acharya, B.; Bhattarai, G.; Gier, de A.; Stein, A.

    2000-01-01

    Sampling to assess rare tree species poses methodological problems, because such species may cluster and many plots with no such trees are to be expected. We used systematic adaptive cluster sampling (SACS) to sample three rare tree species in a forest area of about 40 ha in Nepal. We checked its applicability

  1. Cluster lot quality assurance sampling: effect of increasing the number of clusters on classification precision and operational feasibility.

    Science.gov (United States)

    Okayasu, Hiromasa; Brown, Alexandra E; Nzioki, Michael M; Gasasira, Alex N; Takane, Marina; Mkanda, Pascal; Wassilak, Steven G F; Sutter, Roland W

    2014-11-01

    To assess the quality of supplementary immunization activities (SIAs), the Global Polio Eradication Initiative (GPEI) has used cluster lot quality assurance sampling (C-LQAS) methods since 2009. However, since the inception of C-LQAS, questions have been raised about the optimal balance between operational feasibility and precision of classification of lots to identify areas with low SIA quality that require corrective programmatic action. To determine if an increased precision in classification would result in differential programmatic decision making, we conducted a pilot evaluation in 4 local government areas (LGAs) in Nigeria with an expanded LQAS sample size of 16 clusters (instead of the standard 6 clusters) of 10 subjects each. The results showed greater heterogeneity between clusters than the assumed standard deviation of 10%, ranging from 12% to 23%. Comparing the distribution of 4-outcome classifications obtained from all possible combinations of 6-cluster subsamples to the observed classification of the 16-cluster sample, we obtained an exact match in classification in 56% to 85% of instances. We concluded that the 6-cluster C-LQAS provides acceptable classification precision for programmatic action. Considering the greater resources required to implement an expanded C-LQAS, the improvement in precision was deemed insufficient to warrant the effort. Published by Oxford University Press on behalf of the Infectious Diseases Society of America 2014. This work is written by (a) US Government employee(s) and is in the public domain in the US.
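
    The subsample comparison amounts to enumerating all C(16,6) = 8008 six-cluster subsets and checking whether each yields the same lot classification as the full 16-cluster sample. A minimal sketch with made-up coverage counts and a simple pass/fail rule standing in for the 4-outcome C-LQAS classification used in the evaluation:

        # Minimal sketch: agreement between 6-cluster subsamples and the full
        # 16-cluster result. Coverage counts and the 80% threshold are
        # illustrative assumptions.
        from itertools import combinations
        from math import comb

        covered = [9, 8, 10, 7, 6, 9, 8, 5, 10, 9, 7, 8, 6, 9, 10, 8]  # per cluster, of 10
        threshold = 0.8  # hypothetical pass mark

        def passes(clusters):
            return sum(clusters) / (10 * len(clusters)) >= threshold

        full = passes(covered)
        agree = sum(passes(sub) == full for sub in combinations(covered, 6))
        print(f"full 16-cluster result: {'pass' if full else 'fail'}; "
              f"{agree}/{comb(16, 6)} six-cluster subsamples agree")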

  2. Two-Stage Performance Engineering of Container-based Virtualization

    Directory of Open Access Journals (Sweden)

    Zheng Li

    2018-02-01

    Full Text Available Cloud computing has become a compelling paradigm built on compute and storage virtualization technologies. The current virtualization solution in the Cloud widely relies on hypervisor-based technologies. Given the recent booming of the container ecosystem, container-based virtualization has started receiving more attention as a promising alternative. Although container technologies are generally considered to be lightweight, no virtualization solution is ideally resource-free, and the corresponding performance overheads lead to negative impacts on the quality of Cloud services. To facilitate understanding container technologies from a performance engineering perspective, we conducted a two-stage performance investigation into Docker containers as a concrete example. At the first stage, we used a physical machine with “just-enough” resources as a baseline to investigate the performance overhead of a standalone Docker container against a standalone virtual machine (VM). Contrary to findings in the related work, our evaluation results show that the virtualization performance overhead can vary not only on a feature-by-feature basis but also on a job-to-job basis. Moreover, hypervisor-based technology does not come with higher performance overhead in every case. For example, Docker containers particularly exhibit lower QoS in terms of storage transaction speed. At the ongoing second stage, we employed a physical machine with “fair-enough” resources to implement a container-based MapReduce application and tried to optimize its performance. In fact, this machine could not support VM-based MapReduce clusters at the same scale. The performance tuning results show that the effects of different optimization strategies can be largely related to the data characteristics. For example, LZO compression brought the most significant performance improvement when dealing with text data in our case.

  3. Effect of Silica Fume on two-stage Concrete Strength

    Science.gov (United States)

    Abdelgader, H. S.; El-Baden, A. S.

    2015-11-01

    Two-stage concrete (TSC) is an innovative concrete that does not require vibration for placing and compaction. TSC is a simple concept; it is made using the same basic constituents as traditional concrete: cement, coarse aggregate, sand and water as well as mineral and chemical admixtures. As its name suggests, it is produced through a two-stage process. Firstly washed coarse aggregate is placed into the formwork in-situ. Later a specifically designed self compacting grout is introduced into the form from the lowest point under gravity pressure to fill the voids, cementing the aggregate into a monolith. The hardened concrete is dense, homogeneous and has in general improved engineering properties and durability. This paper presents the results from a research work attempt to study the effect of silica fume (SF) and superplasticizers admixtures (SP) on compressive and tensile strength of TSC using various combinations of water to cement ratio (w/c) and cement to sand ratio (c/s). Thirty six concrete mixes with different grout constituents were tested. From each mix twenty four standard cylinder samples of size (150mm×300mm) of concrete containing crushed aggregate were produced. The tested samples were made from combinations of w/c equal to: 0.45, 0.55 and 0.85, and three c/s of values: 0.5, 1 and 1.5. Silica fume was added at a dosage of 6% of weight of cement, while superplasticizer was added at a dosage of 2% of cement weight. Results indicated that both tensile and compressive strength of TSC can be statistically derived as a function of w/c and c/s with good correlation coefficients. The basic principle of traditional concrete, which says that an increase in water/cement ratio will lead to a reduction in compressive strength, was shown to hold true for TSC specimens tested. Using a combination of both silica fume and superplasticisers caused a significant increase in strength relative to control mixes.

  4. Further observations on comparison of immunization coverage by lot quality assurance sampling and 30 cluster sampling.

    Science.gov (United States)

    Singh, J; Jain, D C; Sharma, R S; Verghese, T

    1996-06-01

    Lot Quality Assurance Sampling (LQAS) and standard EPI methodology (30-cluster sampling) were used to evaluate immunization coverage in a Primary Health Center (PHC) where coverage levels were reported to be more than 85%. Of 27 sub-centers (lots) evaluated by LQAS, only 2 were accepted for child coverage, whereas none was accepted for tetanus toxoid (TT) coverage in mothers. LQAS data were combined to obtain an estimate of coverage in the entire population; 41% (95% CI 36-46) of infants were immunized appropriately for their ages, while 42% (95% CI 37-47) of their mothers had received a second/booster dose of TT. TT coverage in 149 contemporary mothers sampled in the EPI survey was also 42% (95% CI 31-52). Although results by the two sampling methods were consistent with each other, a big gap was evident between reported coverage (in children as well as mothers) and survey results. LQAS was found to be operationally feasible, but it cost 40% more and required 2.5 times more time than the EPI survey. LQAS, therefore, is not a good substitute for current EPI methodology to evaluate immunization coverage in a large administrative area. However, LQAS has potential as a method to monitor health programs on a routine basis in small population sub-units, especially in areas with high and heterogeneously distributed immunization coverage.

  5. Clustering problems for geochemical data

    International Nuclear Information System (INIS)

    Kane, V.E.; Larson, N.M.

    1977-01-01

    The Union Carbide Corporation, Nuclear Division, Uranium Resource Evaluation Project uses a two-stage sampling program to identify potential uranium districts. Cluster analysis techniques are used in locating high density sampling areas as well as in identifying potential uranium districts. Problems are considered involving the analysis of multivariate censored data, laboratory measurement error, and data standardization

  6. A two-stage method for inverse medium scattering

    KAUST Repository

    Ito, Kazufumi; Jin, Bangti; Zou, Jun

    2013-01-01

    We present a novel numerical method to the time-harmonic inverse medium scattering problem of recovering the refractive index from noisy near-field scattered data. The approach consists of two stages, one pruning step of detecting the scatterer

  7. Evidence of two-stage melting of Wigner solids

    Science.gov (United States)

    Knighton, Talbot; Wu, Zhe; Huang, Jian; Serafin, Alessandro; Xia, J. S.; Pfeiffer, L. N.; West, K. W.

    2018-02-01

    Ultralow carrier concentrations of two-dimensional holes down to p = 1 × 10^9 cm^-2 are realized. Remarkable insulating states are found below a critical density of p_c = 4 × 10^9 cm^-2, or r_s ≈ 40. Sensitive dc V-I measurement as a function of temperature and electric field reveals a two-stage phase transition, supporting the melting of a Wigner solid as a two-stage first-order transition.

  8. Variation in rank abundance replicate samples and impact of clustering

    NARCIS (Netherlands)

    Neuteboom, J.H.; Struik, P.C.

    2005-01-01

    Calculating a single-sample rank abundance curve by using the negative-binomial distribution provides a way to investigate the variability within rank abundance replicate samples and yields a measure of the degree of heterogeneity of the sampled community. The calculation of the single-sample rank

  9. The effect of clustering on lot quality assurance sampling: a probabilistic model to calculate sample sizes for quality assessments.

    Science.gov (United States)

    Hedt-Gauthier, Bethany L; Mitsunaga, Tisha; Hund, Lauren; Olives, Casey; Pagano, Marcello

    2013-10-26

    Traditional Lot Quality Assurance Sampling (LQAS) designs assume observations are collected using simple random sampling. Alternatively, randomly sampling clusters of observations and then individuals within clusters reduces costs but decreases the precision of the classifications. In this paper, we develop a general framework for designing the cluster(C)-LQAS system and illustrate the method with the design of data quality assessments for the community health worker program in Rwanda. To determine sample size and decision rules for C-LQAS, we use the beta-binomial distribution to account for inflated risk of errors introduced by sampling clusters at the first stage. We present general theory and code for sample size calculations. The C-LQAS sample sizes provided in this paper constrain misclassification risks below user-specified limits. Multiple C-LQAS systems meet the specified risk requirements, but numerous considerations, including per-cluster versus per-individual sampling costs, help identify optimal systems for distinct applications. We show the utility of C-LQAS for data quality assessments, but the method generalizes to numerous applications. This paper provides the necessary technical detail and supplemental code to support the design of C-LQAS for specific programs.
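
    The beta-binomial construction can be sketched directly. In the snippet below the design values (10 clusters of 10, decision rule d = 65, ICC = 0.1) are illustrative assumptions, not the Rwanda design, and acceptance probabilities are approximated by Monte Carlo since the sum of beta-binomial clusters has no convenient closed form:

        # Minimal sketch: P(accept lot) as a function of true coverage p when
        # each cluster's successes are beta-binomial with ICC rho.
        import numpy as np
        from scipy.stats import betabinom

        def acceptance_prob(p, rho, n_clusters=10, m=10, d=65, n_sim=20000):
            """Monte Carlo P(total covered > d) under beta-binomial clusters."""
            a = p * (1 - rho) / rho          # beta parameters giving mean p
            b = (1 - p) * (1 - rho) / rho    # and intra-cluster correlation rho
            draws = betabinom.rvs(m, a, b, size=(n_sim, n_clusters),
                                  random_state=1)
            return float((draws.sum(axis=1) > d).mean())

        for p in (0.5, 0.7, 0.9):
            print(f"coverage {p:.0%}: P(accept) = {acceptance_prob(p, 0.1):.3f}")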

  10. Accurate recapture identification for genetic mark–recapture studies with error-tolerant likelihood-based match calling and sample clustering

    Science.gov (United States)

    Sethi, Suresh; Linden, Daniel; Wenburg, John; Lewis, Cara; Lemons, Patrick R.; Fuller, Angela K.; Hare, Matthew P.

    2016-01-01

    Error-tolerant likelihood-based match calling presents a promising technique to accurately identify recapture events in genetic mark–recapture studies by combining probabilities of latent genotypes and probabilities of observed genotypes, which may contain genotyping errors. Combined with clustering algorithms to group samples into sets of recaptures based upon pairwise match calls, these tools can be used to reconstruct accurate capture histories for mark–recapture modelling. Here, we assess the performance of a recently introduced error-tolerant likelihood-based match-calling model and sample clustering algorithm for genetic mark–recapture studies. We assessed both biallelic (i.e. single nucleotide polymorphisms; SNP) and multiallelic (i.e. microsatellite; MSAT) markers using a combination of simulation analyses and case study data on Pacific walrus (Odobenus rosmarus divergens) and fishers (Pekania pennanti). A novel two-stage clustering approach is demonstrated for genetic mark–recapture applications. First, repeat captures within a sampling occasion are identified. Subsequently, recaptures across sampling occasions are identified. The likelihood-based matching protocol performed well in simulation trials, demonstrating utility for use in a wide range of genetic mark–recapture studies. Moderately sized SNP (64+) and MSAT (10–15) panels produced accurate match calls for recaptures and accurate non-match calls for samples from closely related individuals in the face of low to moderate genotyping error. Furthermore, matching performance remained stable or increased as the number of genetic markers increased, genotyping error notwithstanding.
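
    Once pairwise match calls are in hand, grouping samples into recapture sets is a connected-components problem. A minimal union-find sketch over an invented match-probability matrix (the paper derives these probabilities from genotype likelihoods):

        # Minimal sketch: link any pair whose match probability exceeds a
        # threshold; connected components become putative individuals.
        import numpy as np

        def cluster_matches(match_prob, threshold=0.95):
            n = match_prob.shape[0]
            parent = list(range(n))

            def find(i):
                while parent[i] != i:
                    parent[i] = parent[parent[i]]   # path compression
                    i = parent[i]
                return i

            for i in range(n):
                for j in range(i + 1, n):
                    if match_prob[i, j] >= threshold:
                        parent[find(i)] = find(j)   # union
            return [find(i) for i in range(n)]

        probs = np.array([[1.00, 0.99, 0.01, 0.02],
                          [0.99, 1.00, 0.03, 0.01],
                          [0.01, 0.03, 1.00, 0.97],
                          [0.02, 0.01, 0.97, 1.00]])
        print(cluster_matches(probs))  # samples {0,1} and {2,3} group together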

  11. Occurrence of Radio Minihalos in a Mass-limited Sample of Galaxy Clusters

    Energy Technology Data Exchange (ETDEWEB)

    Giacintucci, Simona; Clarke, Tracy E. [Naval Research Laboratory, 4555 Overlook Avenue SW, Code 7213, Washington, DC 20375 (United States); Markevitch, Maxim [NASA/Goddard Space Flight Center, Greenbelt, MD 20771 (United States); Cassano, Rossella; Venturi, Tiziana; Brunetti, Gianfranco, E-mail: simona.giacintucci@nrl.navy.mil [INAF—Istituto di Radioastronomia, via Gobetti 101, I-40129 Bologna (Italy)

    2017-06-01

    We investigate the occurrence of radio minihalos—diffuse radio sources of unknown origin observed in the cores of some galaxy clusters—in a statistical sample of 58 clusters drawn from the Planck Sunyaev–Zel’dovich cluster catalog using a mass cut (M_500 > 6 × 10^14 M_⊙). We supplement our statistical sample with a similarly sized nonstatistical sample mostly consisting of clusters in the ACCEPT X-ray catalog with suitable X-ray and radio data, which includes lower-mass clusters. Where necessary (for nine clusters), we reanalyzed the Very Large Array archival radio data to determine whether a minihalo is present. Our total sample includes all 28 currently known and recently discovered radio minihalos, including six candidates. We classify clusters as cool-core or non-cool-core according to the value of the specific entropy floor in the cluster center, rederived or newly derived from the Chandra X-ray density and temperature profiles where necessary (for 27 clusters). Contrary to the common wisdom that minihalos are rare, we find that almost all cool cores—at least 12 out of 15 (80%)—in our complete sample of massive clusters exhibit minihalos. The supplementary sample shows that the occurrence of minihalos may be lower in lower-mass cool-core clusters. No minihalos are found in non-cool cores or “warm cores.” These findings will help test theories of the origin of minihalos and provide information on the physical processes and energetics of the cluster cores.

  12. Clustering Methods with Qualitative Data: a Mixed-Methods Approach for Prevention Research with Small Samples.

    Science.gov (United States)

    Henry, David; Dymnicki, Allison B; Mohatt, Nathaniel; Allen, James; Kelly, James G

    2015-10-01

    Qualitative methods potentially add depth to prevention research but can produce large amounts of complex data even with small samples. Studies conducted with culturally distinct samples often produce voluminous qualitative data but may lack sufficient sample sizes for sophisticated quantitative analysis. Currently lacking in mixed-methods research are methods allowing for more fully integrating qualitative and quantitative analysis techniques. Cluster analysis can be applied to coded qualitative data to clarify the findings of prevention studies by aiding efforts to reveal such things as the motives of participants for their actions and the reasons behind counterintuitive findings. By clustering groups of participants with similar profiles of codes in a quantitative analysis, cluster analysis can serve as a key component in mixed-methods research. This article reports two studies. In the first study, we conduct simulations to test the accuracy of cluster assignment using three different clustering methods with binary data as produced when coding qualitative interviews. Results indicated that hierarchical clustering, K-means clustering, and latent class analysis produced similar levels of accuracy with binary data and that the accuracy of these methods did not decrease with samples as small as 50. Whereas the first study explores the feasibility of using common clustering methods with binary data, the second study provides a "real-world" example using data from a qualitative study of community leadership connected with a drug abuse prevention project. We discuss the implications of this approach for conducting prevention research, especially with small samples and culturally distinct communities.
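
    The first study's simulation logic is easy to reproduce in outline: generate binary code profiles for two latent groups, cluster, and score recovery of the true groups. A sketch using K-means on synthetic data (group sizes and code probabilities are arbitrary assumptions):

        # Minimal sketch: cluster binary "code" profiles from two latent
        # groups and measure assignment accuracy.
        import numpy as np
        from sklearn.cluster import KMeans

        rng = np.random.default_rng(42)
        n_per_group, n_codes = 25, 12
        p_a = rng.uniform(0.1, 0.4, n_codes)   # code probabilities, group A
        p_b = rng.uniform(0.6, 0.9, n_codes)   # code probabilities, group B
        X = np.vstack([rng.random((n_per_group, n_codes)) < p_a,
                       rng.random((n_per_group, n_codes)) < p_b]).astype(float)
        truth = np.repeat([0, 1], n_per_group)

        labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
        acc = max((labels == truth).mean(), (labels != truth).mean())  # label switch
        print(f"assignment accuracy: {acc:.2f}")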

  13. Clustering Methods with Qualitative Data: A Mixed Methods Approach for Prevention Research with Small Samples

    Science.gov (United States)

    Henry, David; Dymnicki, Allison B.; Mohatt, Nathaniel; Allen, James; Kelly, James G.

    2016-01-01

    Qualitative methods potentially add depth to prevention research, but can produce large amounts of complex data even with small samples. Studies conducted with culturally distinct samples often produce voluminous qualitative data, but may lack sufficient sample sizes for sophisticated quantitative analysis. Currently lacking in mixed methods research are methods allowing for more fully integrating qualitative and quantitative analysis techniques. Cluster analysis can be applied to coded qualitative data to clarify the findings of prevention studies by aiding efforts to reveal such things as the motives of participants for their actions and the reasons behind counterintuitive findings. By clustering groups of participants with similar profiles of codes in a quantitative analysis, cluster analysis can serve as a key component in mixed methods research. This article reports two studies. In the first study, we conduct simulations to test the accuracy of cluster assignment using three different clustering methods with binary data as produced when coding qualitative interviews. Results indicated that hierarchical clustering, K-Means clustering, and latent class analysis produced similar levels of accuracy with binary data, and that the accuracy of these methods did not decrease with samples as small as 50. Whereas the first study explores the feasibility of using common clustering methods with binary data, the second study provides a “real-world” example using data from a qualitative study of community leadership connected with a drug abuse prevention project. We discuss the implications of this approach for conducting prevention research, especially with small samples and culturally distinct communities. PMID:25946969

  14. Two-stage thermal/nonthermal waste treatment process

    International Nuclear Information System (INIS)

    Rosocha, L.A.; Anderson, G.K.; Coogan, J.J.; Kang, M.; Tennant, R.A.; Wantuck, P.J.

    1993-01-01

    An innovative waste treatment technology is being developed in Los Alamos to address the destruction of hazardous organic wastes. The technology described in this report uses two stages: a packed bed reactor (PBR) in the first stage to volatilize and/or combust liquid organics and a silent discharge plasma (SDP) reactor to remove entrained hazardous compounds in the off-gas to even lower levels. We have constructed pre-pilot-scale PBR-SDP apparatus and tested the two stages separately and in combined modes. These tests are described in the report

  15. Adaptive cluster sampling: An efficient method for assessing inconspicuous species

    Science.gov (United States)

    Andrea M. Silletti; Joan Walker

    2003-01-01

    Restorationists typically evaluate the success of a project by estimating the population sizes of species that have been planted or seeded. Because a total census is rarely feasible, they must rely on sampling methods for population estimates. However, traditional random sampling designs may be inefficient for species that, for one reason or another, are challenging to...

  16. Development of Explosive Ripper with Two-Stage Combustion

    Science.gov (United States)

    1974-10-01

    ...inch pipe duct work; the width of this duct proved to be detrimental in marginally rippable material, as the duct, instead of the penetrator tip, was... marginally rippable rock. Operating requirements: the two-stage combustion device is designed to operate using the same diesel

  17. Engineering analysis of the two-stage trifluoride precipitation process

    International Nuclear Information System (INIS)

    Luerkens, D.W.

    1984-06-01

    An engineering analysis of two-stage trifluoride precipitation processes is developed. Precipitation kinetics are modeled using consecutive reactions to represent fluoride complexation. Material balances across the precipitators are used to model the time dependent concentration profiles of the main chemical species. The results of the engineering analysis are correlated with previous experimental work on plutonium trifluoride and cerium trifluoride

  18. Composite likelihood and two-stage estimation in family studies

    DEFF Research Database (Denmark)

    Andersen, Elisabeth Anne Wreford

    2004-01-01

    In this paper register based family studies provide the motivation for linking a two-stage estimation procedure in copula models for multivariate failure time data with a composite likelihood approach. The asymptotic properties of the estimators in both parametric and semi-parametric models are d...

  19. On the robustness of two-stage estimators

    KAUST Repository

    Zhelonkin, Mikhail; Genton, Marc G.; Ronchetti, Elvezio

    2012-01-01

    The aim of this note is to provide a general framework for the analysis of the robustness properties of a broad class of two-stage models. We derive the influence function, the change-of-variance function, and the asymptotic variance of a general

  20. Two-Stage Fuzzy Portfolio Selection Problem with Transaction Costs

    Directory of Open Access Journals (Sweden)

    Yanju Chen

    2015-01-01

    Full Text Available This paper studies a two-period portfolio selection problem. The problem is formulated as a two-stage fuzzy portfolio selection model with transaction costs, in which the future returns of the risky security are characterized by possibility distributions. The objective of the proposed model is to achieve the maximum utility in terms of the expected value and variance of the final wealth. Given the first-stage decision vector and a realization of the fuzzy return, the optimal value expression of the second-stage programming problem is derived. As a result, the proposed two-stage model is equivalent to a single-stage model, and the analytical optimal solution of the two-stage model is obtained, which helps us to discuss the properties of the optimal solution. Finally, some numerical experiments are performed to demonstrate the new modeling idea and its effectiveness. The computational results provided by the proposed model show that the more risk-averse investor will invest more wealth in the risk-free security. They also show that the optimal invested amount in the risky security increases as the risk-free return decreases, and the optimal utility increases as the risk-free return increases, whereas the optimal utility increases as the transaction costs decrease. In most instances the utilities provided by the proposed two-stage model are larger than those provided by the single-stage model.

  1. The Hubble Space Telescope Medium Deep Survey Cluster Sample: Methodology and Data

    Science.gov (United States)

    Ostrander, E. J.; Nichol, R. C.; Ratnatunga, K. U.; Griffiths, R. E.

    1998-12-01

    We present a new, objectively selected, sample of galaxy overdensities detected in the Hubble Space Telescope Medium Deep Survey (MDS). These clusters/groups were found using an automated procedure that involved searching for statistically significant galaxy overdensities. The contrast of the clusters against the field galaxy population is increased when morphological data are used to search around bulge-dominated galaxies. In total, we present 92 overdensities above a probability threshold of 99.5%. We show, via extensive Monte Carlo simulations, that at least 60% of these overdensities are likely to be real clusters and groups and not random line-of-sight superpositions of galaxies. For each overdensity in the MDS cluster sample, we provide a richness and the average of the bulge-to-total ratio of galaxies within each system. This MDS cluster sample potentially contains some of the most distant clusters/groups ever detected, with about 25% of the overdensities having estimated redshifts z > ~0.9. We have made this sample publicly available to facilitate spectroscopic confirmation of these clusters and help more detailed studies of cluster and galaxy evolution. We also report the serendipitous discovery of a new cluster close on the sky to the rich optical cluster Cl 0016+16 at z = 0.546. This new overdensity, HST 001831+16208, may be coincident with both an X-ray source and a radio source. HST 001831+16208 is the third cluster/group discovered near to Cl 0016+16 and appears to strengthen the claims of Connolly et al. of superclustering at high redshift.

  2. Clustering for high-dimension, low-sample size data using distance vectors

    OpenAIRE

    Terada, Yoshikazu

    2013-01-01

    In high-dimension, low-sample size (HDLSS) data, it is not always true that closeness of two objects reflects a hidden cluster structure. We point out the important fact that it is not the closeness, but the "values" of distance that contain information of the cluster structure in high-dimensional space. Based on this fact, we propose an efficient and simple clustering approach, called distance vector clustering, for HDLSS data. Under the assumptions given in the work of Hall et al. (2005), w...

  3. Medical Image Retrieval Based On the Parallelization of the Cluster Sampling Algorithm

    OpenAIRE

    Ali, Hesham Arafat; Attiya, Salah; El-henawy, Ibrahim

    2017-01-01

    In this paper we develop parallel cluster sampling algorithms and show that a multi-chain version is embarrassingly parallel and can be used efficiently for medical image retrieval among other applications.

  4. HICOSMO - cosmology with a complete sample of galaxy clusters - I. Data analysis, sample selection and luminosity-mass scaling relation

    Science.gov (United States)

    Schellenberger, G.; Reiprich, T. H.

    2017-08-01

    The X-ray regime, where the most massive visible component of galaxy clusters, the intracluster medium, is visible, offers directly measured quantities, like the luminosity, and derived quantities, like the total mass, to characterize these objects. The aim of this project is to analyse a complete sample of galaxy clusters in detail and constrain cosmological parameters, like the matter density, Ω_m, or the amplitude of initial density fluctuations, σ_8. The purely X-ray flux-limited sample (HIFLUGCS) consists of the 64 X-ray brightest galaxy clusters, which are excellent targets to study the systematic effects that can bias results. We analysed in total 196 Chandra observations of the 64 HIFLUGCS clusters, with a total exposure time of 7.7 Ms. Here, we present our data analysis procedure (including an automated substructure detection and an energy band optimization for surface brightness profile analysis) that gives individually determined, robust total mass estimates. These masses are tested against dynamical and Planck Sunyaev-Zeldovich (SZ) derived masses of the same clusters, where good overall agreement is found with the dynamical masses. The Planck SZ masses seem to show a mass-dependent bias relative to our hydrostatic masses; possible biases in this mass-mass comparison are discussed, including the Planck selection function. Furthermore, we show the results for the (0.1-2.4) keV luminosity versus mass scaling relation. The overall slope of the sample (1.34) is in agreement with expectations and values from the literature. Splitting the sample into galaxy groups and clusters reveals, even after a selection bias correction, that galaxy groups exhibit a significantly steeper slope (1.88) compared to clusters (1.06).
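
    The luminosity-mass scaling relation itself is a power-law fit, i.e. a linear regression in log-log space. A sketch on mock data (the slope of 1.34 is built in by construction; this is not the HIFLUGCS analysis):

        # Minimal sketch: fit L ∝ M^alpha by least squares in log space.
        import numpy as np

        rng = np.random.default_rng(3)
        mass = 10 ** rng.uniform(13.5, 15.2, 64)             # mock masses [M_sun]
        lum = (mass / 1e14) ** 1.34 * 10 ** rng.normal(0, 0.15, 64)  # mock L [arb.]

        A = np.column_stack([np.log10(mass), np.ones(mass.size)])
        slope, norm = np.linalg.lstsq(A, np.log10(lum), rcond=None)[0]
        print(f"fitted slope: {slope:.2f}")                  # ~1.34 by construction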

  5. Energy demand in Portuguese manufacturing: a two-stage model

    International Nuclear Information System (INIS)

    Borges, A.M.; Pereira, A.M.

    1992-01-01

    We use a two-stage model of factor demand to estimate the parameters determining energy demand in Portuguese manufacturing. In the first stage, a capital-labor-energy-materials framework is used to analyze the substitutability between energy as a whole and other factors of production. In the second stage, total energy demand is decomposed into oil, coal and electricity demands. The two stages are fully integrated since the energy composite used in the first stage and its price are obtained from the second stage energy sub-model. The estimates obtained indicate that energy demand in manufacturing responds significantly to price changes. In addition, estimation results suggest that there are important substitution possibilities among energy forms and between energy and other factors of production. The role of price changes in energy-demand forecasting, as well as in energy policy in general, is clearly established. (author)

  6. Two-step two-stage fission gas release model

    International Nuclear Information System (INIS)

    Kim, Yong-soo; Lee, Chan-bock

    2006-01-01

    Based on a recent theoretical model, a two-step two-stage model is developed which incorporates two-stage diffusion processes (grain lattice and grain boundary diffusion) coupled with a two-step burn-up factor in the low and high burn-up regimes. The FRAPCON-3 code and its in-pile data sets have been used for benchmarking and validation of this model. Results reveal that its predictions are in better agreement with the experimental measurements than those of any model contained in the FRAPCON-3 code, such as ANS 5.4, modified ANS 5.4, and the Forsberg-Massih model, over the whole burn-up range up to 70,000 MWd/MTU. (author)

  7. Two-Stage Fuzzy Portfolio Selection Problem with Transaction Costs

    OpenAIRE

    Chen, Yanju; Wang, Ye

    2015-01-01

    This paper studies a two-period portfolio selection problem. The problem is formulated as a two-stage fuzzy portfolio selection model with transaction costs, in which the future returns of risky security are characterized by possibility distributions. The objective of the proposed model is to achieve the maximum utility in terms of the expected value and variance of the final wealth. Given the first-stage decision vector and a realization of fuzzy return, the optimal value expression of the s...

  8. Two-stage precipitation of neptunium (IV) oxalate

    International Nuclear Information System (INIS)

    Luerkens, D.W.

    1983-07-01

    Neptunium (IV) oxalate was precipitated using a two-stage precipitation system. A series of precipitation experiments was used to identify the significant process variables affecting precipitate characteristics. Process variables tested were input concentrations, solubility conditions in the first stage precipitator, precipitation temperatures, and residence time in the first stage precipitator. A procedure has been demonstrated that produces neptunium (IV) oxalate particles that filter well and readily calcine to the oxide

  9. A HIGH FIDELITY SAMPLE OF COLD FRONT CLUSTERS FROM THE CHANDRA ARCHIVE

    International Nuclear Information System (INIS)

    Owers, Matt S.; Nulsen, Paul E. J.; Markevitch, Maxim; Couch, Warrick J.

    2009-01-01

    This paper presents a sample of 'cold front' clusters selected from the Chandra archive. The clusters are selected based purely on the existence of surface brightness edges in their Chandra images, which are modeled as density jumps. A combination of the derived density and temperature jumps across the fronts is used to select nine robust examples of cold front clusters: 1ES 0657-558, Abell 1201, Abell 1758N, MS1455.0+2232, Abell 2069, Abell 2142, Abell 2163, RXJ1720.1+2638, and Abell 3667. This sample is the subject of an ongoing study aimed at relating cold fronts to cluster merger activity, and understanding how the merging environment affects the cluster constituents. Here, temperature maps are presented along with the Chandra X-ray images. A dichotomy is found in the sample in that there exists a subsample of cold front clusters which are clearly mergers based on their X-ray morphologies, and a second subsample of clusters which harbor cold fronts, but have surprisingly relaxed X-ray morphologies, and minimal evidence for merger activity at other wavelengths. For this second subsample, the existence of a cold front provides the sole evidence for merger activity at X-ray wavelengths. We discuss how cold fronts can provide additional information which may be used to constrain merger histories, and also the possibility of using cold fronts to distinguish major and minor mergers.

  10. Cluster chemical ionization for improved confidence level in sample identification by gas chromatography/mass spectrometry.

    Science.gov (United States)

    Fialkov, Alexander B; Amirav, Aviv

    2003-01-01

    Upon the supersonic expansion of helium mixed with vapor from an organic solvent (e.g. methanol), various clusters of the solvent with the sample molecules can be formed. As a result of 70 eV electron ionization of these clusters, cluster chemical ionization (cluster CI) mass spectra are obtained. These spectra are characterized by the combination of EI mass spectra of vibrationally cold molecules in the supersonic molecular beam (cold EI) with CI-like appearance of abundant protonated molecules, together with satellite peaks corresponding to protonated or non-protonated clusters of sample compounds with 1-3 solvent molecules. Like CI, cluster CI preferably occurs for polar compounds with high proton affinity. However, in contrast to conventional CI, for non-polar compounds or those with reduced proton affinity the cluster CI mass spectrum converges to that of cold EI. The appearance of a protonated molecule and its solvent cluster peaks, plus the lack of protonation and cluster satellites for prominent EI fragments, enable the unambiguous identification of the molecular ion. In turn, the insertion of the proper molecular ion into the NIST library search of the cold EI mass spectra eliminates those candidates with incorrect molecular mass and thus significantly increases the confidence level in sample identification. Furthermore, molecular mass identification is of prime importance for the analysis of unknown compounds that are absent in the library. Examples are given with emphasis on the cluster CI analysis of carbamate pesticides, high explosives and unknown samples, to demonstrate the usefulness of Supersonic GC/MS (GC/MS with supersonic molecular beam) in the analysis of these thermally labile compounds. Cluster CI is shown to be a practical ionization method, due to its ease-of-use and fast instrumental conversion between EI and cluster CI, which involves the opening of only one valve located at the make-up gas path. The ease-of-use of cluster CI is analogous

  11. Hot Zone Identification: Analyzing Effects of Data Sampling on Spam Clustering

    Directory of Open Access Journals (Sweden)

    Rasib Khan

    2014-03-01

    Full Text Available Email is the most common and comparatively the most efficient means of exchanging information in today's world. However, given the widespread use of emails in all sectors, they have been the target of spammers since the beginning. Filtering spam emails has now led to critical actions such as forensic activities based on mining spam email. The data mine for spam emails at the University of Alabama at Birmingham is considered to be one of the most prominent resources for mining and identifying spam sources. It is a widely researched repository used by researchers from different global organizations. The usual process of mining the spam data involves going through every email in the data mine and clustering them based on their different attributes. However, given the size of the data mine, it takes an exceptionally long time to execute the clustering mechanism each time. In this paper, we have illustrated sampling as an efficient tool for data reduction, while preserving the information within the clusters, which would thus allow the spam forensic experts to quickly and effectively identify the ‘hot zone’ from the spam campaigns. We have provided detailed comparative analysis of the quality of the clusters after sampling, the overall distribution of clusters on the spam data, and timing measurements for our sampling approach. Additionally, we present different strategies which allowed us to optimize the sampling process using data-preprocessing and using the database engine's computational resources, and thus improving the performance of the clustering process.
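
    The sample-then-cluster strategy can be sketched as: cluster a random subset, then project the full data set onto the learned centers. Synthetic features stand in for the spam attributes below:

        # Minimal sketch: cluster a 1000-row random sample, then assign the
        # full data to the learned centers. Data are synthetic stand-ins.
        import numpy as np
        from sklearn.cluster import KMeans

        rng = np.random.default_rng(7)
        centers = rng.normal(0, 5, size=(4, 10))
        full = np.vstack([c + rng.normal(0, 1, size=(5000, 10)) for c in centers])

        sample = full[rng.choice(len(full), size=1000, replace=False)]
        km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(sample)
        labels_full = km.predict(full)      # project back onto all data
        print(np.bincount(labels_full))     # cluster sizes, roughly 5000 each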

  12. On the errors on Ω_0: Monte Carlo simulations of the EMSS cluster sample

    DEFF Research Database (Denmark)

    Oukbir, J.; Arnaud, M.

    2001-01-01

    We perform Monte Carlo simulations of synthetic EMSS cluster samples to quantify the systematic errors and the statistical uncertainties on the estimate of Ω_0 derived from fits to the cluster number density evolution and to the X-ray temperature distribution up to z = 0.83. We identify the scatter around the relation between cluster X-ray luminosity and temperature as a source of systematic error, of the order of Δ_syst Ω_0 = 0.09, if not properly taken into account in the modelling. After correcting for this bias, our best Ω_0 is 0.66. The uncertainties on the shape

  13. Planck early results. VIII. The all-sky early Sunyaev-Zeldovich cluster sample

    DEFF Research Database (Denmark)

    Poutanen, T.; Natoli, P.; Polenta, G.

    2011-01-01

    We present the first all-sky sample of galaxy clusters detected blindly by the Planck satellite through the Sunyaev-Zeldovich (SZ) effect from its six highest frequencies. This early SZ (ESZ) sample is comprised of 189 candidates, which have a high signal-to-noise ratio ranging from 6 to 29. Its ...

  14. Two stage treatment of dairy effluent using immobilized Chlorella pyrenoidosa

    Science.gov (United States)

    2013-01-01

    Background Dairy effluents contain a high organic load, and the unscrupulous discharge of these effluents into aquatic bodies is a matter of serious concern besides deteriorating their water quality. While physico-chemical treatment is the common mode of treatment, immobilized microalgae can be employed to treat the high organic content, offering numerous benefits along with waste water treatment. Methods A novel low-cost two-stage treatment was employed for the complete treatment of dairy effluent. The first stage consists of treating the dairy effluent in a photobioreactor (1 L) using immobilized Chlorella pyrenoidosa, while the second stage involves a two-column sand bed filtration technique. Results While NH4+-N was completely removed, a 98% removal of PO43--P was achieved within 96 h of the two-stage purification process. The filtrate was tested for toxicity, and no mortality was observed in the zebrafish used as a model at the end of the 96 h bioassay. Moreover, a significant decrease in biological oxygen demand and chemical oxygen demand was achieved by this novel method. The separated biomass was also tested as a biofertilizer on rice seeds, and a 30% increase in the length of root and shoot was observed after its addition to the rice plants. Conclusions We conclude that the two-stage treatment of dairy effluent is highly effective in the removal of BOD and COD besides nutrients like nitrates and phosphates. The treatment also allows treated waste water to be discharged safely into receiving water bodies, since it is non-toxic for aquatic life. Further, the algal biomass separated after the first stage of treatment was highly capable of increasing the growth of rice plants because of the nitrogen-fixing ability of the green alga, and offers great potential as a biofertilizer. PMID:24355316

  15. Experimental studies of two-stage centrifugal dust concentrator

    Science.gov (United States)

    Vechkanova, M. V.; Fadin, Yu M.; Ovsyannikov, Yu G.

    2018-03-01

    The article presents experimental results for a two-stage centrifugal dust concentrator, describes its design, and shows the development of an engineering calculation method and laboratory investigations. For the experiments, the authors used quartz, ceramic dust and slag. The particle size distribution of the dust was obtained experimentally by the sedimentation method. To build a mathematical model of the dust collection process, a central composite rotatable design of a four-factor experiment was used. The sequence of experiments was conducted in accordance with a table of random numbers. Conclusions were drawn.

  16. Evaluating damping elements for two-stage suspension vehicles

    Directory of Open Access Journals (Sweden)

    Ronald M. Martinod R.

    2012-01-01

    Full Text Available The technical state of the damping elements of a vehicle having two-stage suspension was evaluated by using numerical models based on multi-body system theory; a set of virtual tests used the eigenproblem mathematical method. A test based on experimental modal analysis (EMA) was applied to a physical system as the basis for validating the numerical models. The study focused on evaluating vehicle dynamics to determine the influence of the dampers' technical state in each suspension stage.

  17. Two-Stage Fan I: Aerodynamic and Mechanical Design

    Science.gov (United States)

    Messenger, H. E.; Kennedy, E. E.

    1972-01-01

    A two-stage, highly-loaded fan was designed to deliver an overall pressure ratio of 2.8 with an adiabatic efficiency of 83.9 percent. At the first rotor inlet, design flow per unit annulus area is 42 lbm/sec/sq ft (205 kg/sec/sq m), hub/tip ratio is 0.4 with a tip diameter of 31 inches (0.787 m), and design tip speed is 1450 ft/sec (441.96 m/sec). Other features include use of multiple-circular-arc airfoils, resettable stators, and split casings over the rotor tip sections for casing treatment tests.

  18. Spatially explicit population estimates for black bears based on cluster sampling

    Science.gov (United States)

    Humm, J.; McCown, J. Walter; Scheick, B.K.; Clark, Joseph D.

    2017-01-01

    We estimated abundance and density of the 5 major black bear (Ursus americanus) subpopulations (i.e., Eglin, Apalachicola, Osceola, Ocala-St. Johns, Big Cypress) in Florida, USA with spatially explicit capture-mark-recapture (SCR) by extracting DNA from hair samples collected at barbed-wire hair sampling sites. We employed a clustered sampling configuration with sampling sites arranged in 3 × 3 clusters, sites spaced 2 km apart within each cluster and cluster centers spaced 16 km apart (center to center). We surveyed all 5 subpopulations encompassing 38,960 km2 during 2014 and 2015. Several landscape variables, most associated with forest cover, helped refine density estimates for the 5 subpopulations we sampled. Detection probabilities were affected by site-specific behavioral responses coupled with individual capture heterogeneity associated with sex. Model-averaged bear population estimates ranged from 120 (95% CI = 59–276) bears or a mean 0.025 bears/km2 (95% CI = 0.011–0.44) for the Eglin subpopulation to 1,198 bears (95% CI = 949–1,537) or 0.127 bears/km2 (95% CI = 0.101–0.163) for the Ocala-St. Johns subpopulation. The total population estimate for our 5 study areas was 3,916 bears (95% CI = 2,914–5,451). The clustered sampling method coupled with information on land cover was efficient and allowed us to estimate abundance across extensive areas that would not have been possible otherwise. Clustered sampling combined with spatially explicit capture-recapture methods has the potential to provide rigorous population estimates for a wide array of species that are extensive and heterogeneous in their distribution.

  19. Two-stage, high power X-band amplifier experiment

    International Nuclear Information System (INIS)

    Kuang, E.; Davis, T.J.; Ivers, J.D.; Kerslick, G.S.; Nation, J.A.; Schaechter, L.

    1993-01-01

    At output powers in excess of 100 MW the authors have noted the development of sidebands in many TWT structures. To address this problem an experiment using a narrow bandwidth, two-stage TWT is in progress. The TWT amplifier consists of a dielectric (ε = 5) slow-wave structure, a 30 dB sever section and an 8.8-9.0 GHz passband periodic, metallic structure. The electron beam used in this experiment is a 950 kV, 1 kA, 50 ns pencil beam propagating along an applied axial field of 9 kG. The dielectric first stage has a maximum gain of 30 dB measured at 8.87 GHz, with output powers of up to 50 MW in the TM01 mode. In these experiments the dielectric amplifier output power is about 3-5 MW and the output power of the complete two-stage device is ∼160 MW at the input frequency. The sidebands detected in earlier experiments have been eliminated. The authors also report measurements of the energy spread of the electron beam resulting from the amplification process. These experimental results are compared with MAGIC code simulations and analytic work they have carried out on such devices

  20. Two-stage liquefaction of a Spanish subbituminous coal

    Energy Technology Data Exchange (ETDEWEB)

    Martinez, M.T.; Fernandez, I.; Benito, A.M.; Cebolla, V.; Miranda, J.L.; Oelert, H.H. (Instituto de Carboquimica, Zaragoza (Spain))

    1993-05-01

    A Spanish subbituminous coal has been processed in two-stage liquefaction in a non-integrated process. The first-stage coal liquefaction has been carried out in a continuous pilot plant in Germany at Clausthal Technical University at 400°C, 20 MPa hydrogen pressure and anthracene oil as solvent. The second-stage coal liquefaction has been performed in continuous operation in a hydroprocessing unit at the Instituto de Carboquimica at 450°C and 10 MPa hydrogen pressure, with two commercial catalysts: Harshaw HT-400E (Co-Mo/Al2O3) and HT-500E (Ni-Mo/Al2O3). The total conversion for the first-stage coal liquefaction was 75.41 wt% (coal d.a.f.), being 3.79 wt% gases, 2.58 wt% primary condensate and 69.04 wt% heavy liquids. The heteroatom removal for the second-stage liquefaction was 97-99 wt% of S, 85-87 wt% of N and 93-100 wt% of O. The hydroprocessed liquids have about 70% of compounds with boiling point below 350°C, and meet the sulphur and nitrogen specifications for refinery feedstocks. Liquids from two-stage coal liquefaction have been distilled, and the naphtha, kerosene and diesel fractions obtained have been characterized. 39 refs., 3 figs., 8 tabs.

  1. Two-stage perceptual learning to break visual crowding.

    Science.gov (United States)

    Zhu, Ziyun; Fan, Zhenzhi; Fang, Fang

    2016-01-01

    When a target is presented with nearby flankers in the peripheral visual field, it becomes harder to identify, which is referred to as crowding. Crowding sets a fundamental limit of object recognition in peripheral vision, preventing us from fully appreciating cluttered visual scenes. We trained adult human subjects on a crowded orientation discrimination task and investigated whether crowding could be completely eliminated by training. We discovered a two-stage learning process with this training task. In the early stage, when the target and flankers were separated beyond a certain distance, subjects acquired a relatively general ability to break crowding, as evidenced by the fact that the breaking of crowding could transfer to another crowded orientation, even a crowded motion stimulus, although the transfer to the opposite visual hemi-field was weak. In the late stage, like many classical perceptual learning effects, subjects' performance gradually improved and showed specificity to the trained orientation. We also found that, when the target and flankers were spaced too finely, training could only reduce, rather than completely eliminate, the crowding effect. This two-stage learning process illustrates a learning strategy for our brain to deal with the notoriously difficult problem of identifying peripheral objects in clutter. The brain first learned to solve the "easy and general" part of the problem (i.e., improving the processing resolution and segmenting the target and flankers) and then tackle the "difficult and specific" part (i.e., refining the representation of the target).

  2. Runway Operations Planning: A Two-Stage Heuristic Algorithm

    Science.gov (United States)

    Anagnostakis, Ioannis; Clarke, John-Paul

    2003-01-01

    The airport runway is a scarce resource that must be shared by different runway operations (arrivals, departures and runway crossings). Given the possible sequences of runway events, careful Runway Operations Planning (ROP) is required if runway utilization is to be maximized. From the perspective of departures, ROP solutions are aircraft departure schedules developed by optimally allocating runway time for departures given the time required for arrivals and crossings. In addition to the obvious objective of maximizing throughput, other objectives, such as guaranteeing fairness and minimizing environmental impact, can also be incorporated into the ROP solution subject to constraints introduced by Air Traffic Control (ATC) procedures. This paper introduces a two-stage heuristic algorithm for solving the Runway Operations Planning (ROP) problem. In the first stage, sequences of departure class slots and runway crossing slots are generated and ranked based on departure runway throughput under stochastic conditions. In the second stage, the departure class slots are populated with specific flights from the pool of available aircraft, by solving an integer program with a Branch & Bound algorithm implementation. Preliminary results from this implementation of the two-stage algorithm on real-world traffic data are presented.
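
    A toy sketch of this two-stage structure may help. Stage one ranks weight-class slot sequences by a separation-based throughput proxy; stage two fills the best sequence with concrete flights. The wake-separation values and the greedy fill are invented placeholders; the paper ranks under stochastic conditions and fills slots with a Branch & Bound integer program.

```python
from itertools import permutations

# Stage 1: rank sequences of departure weight-class slots by throughput.
# Illustrative wake-separation matrix (seconds) between leader/follower classes.
SEP = {('H', 'H'): 90, ('H', 'M'): 120, ('H', 'L'): 140,
       ('M', 'H'): 60, ('M', 'M'): 90,  ('M', 'L'): 120,
       ('L', 'H'): 60, ('L', 'M'): 60,  ('L', 'L'): 90}

def makespan(seq):
    # Total separation time implied by a slot sequence (lower = more throughput).
    return sum(SEP[pair] for pair in zip(seq, seq[1:]))

demand = ['H', 'M', 'M', 'L']                    # slots needed, by weight class
ranked = sorted(set(permutations(demand)), key=makespan)
best_seq = ranked[0]

# Stage 2: populate class slots with specific flights (greedy stand-in for the
# integer-programming step: take the first waiting flight of the matching class).
flights = {'H': ['BA12'], 'M': ['AF34', 'LH56'], 'L': ['EZ78']}
schedule = [flights[c].pop(0) for c in best_seq]
print(best_seq, schedule, makespan(best_seq), 'seconds of separation')
```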

  3. TWO-STAGE HEAT PUMPS FOR ENERGY SAVING TECHNOLOGIES

    Directory of Open Access Journals (Sweden)

    A. E. Denysova

    2017-09-01

    Full Text Available The problem of energy saving has become one of the most important in power engineering. It is caused by the exhaustion of world reserves of hydrocarbon fuels such as gas, oil and coal, the sources of traditional heat supply. Conventional sources have essential shortcomings (low power, ecological and economic efficiency) that can be eliminated by using alternative methods of power supply, like the one considered here: low-temperature natural heat of groundwater, exploited by means of heat pump installations. The heat supply system considered provides effective use of a two-stage heat pump installation operating with groundwater as the heat source during the period of lowest ambient temperature. A calculation method for heat pump installations based on groundwater energy is proposed. The electric energy consumption of the compressor drives and the transformation coefficient µ of the heat supply system for a low-potential groundwater heat source are calculated, allowing the high efficiency of two-stage heat pump installations to be estimated.

  4. Two stage approach to dynamic soil structure interaction

    International Nuclear Information System (INIS)

    Nelson, I.

    1981-01-01

    A two-stage approach is used to reduce the effective size of the soil island required to solve dynamic soil structure interaction problems. The fictitious boundaries of the conventional soil island are chosen sufficiently far from the structure so that the presence of the structure causes only a slight perturbation of the soil response near the boundaries. While the resulting finite element model of the soil structure system can be solved, it requires a formidable computational effort. Currently, a two-stage approach is used to reduce this effort. The combined soil structure system has many frequencies and wavelengths. For a stiff structure, the lowest frequencies are those associated with the motion of the structure as a rigid body. In the soil, these modes have the longest wavelengths and attenuate most slowly. The higher frequency deformational modes of the structure have shorter wavelengths and their effect attenuates more rapidly with distance from the structure. The difference in soil response between a computation with a refined structural model and one with a crude model tends towards zero a very short distance from the structure. In the current work, the 'crude model' is a rigid structure with the same geometry and inertial properties as the refined model. Preliminary calculations indicated that a rigid structure would be a good low frequency approximation to the actual structure, provided the structure was much stiffer than the native soil. (orig./RW)

  5. Repetitive, small-bore two-stage light gas gun

    International Nuclear Information System (INIS)

    Combs, S.K.; Foust, C.R.; Fehling, D.T.; Gouge, M.J.; Milora, S.L.

    1991-01-01

    A repetitive two-stage light gas gun for high-speed pellet injection has been developed at Oak Ridge National Laboratory. In general, applications of the two-stage light gas gun have been limited to only single shots, with a finite time (at least minutes) needed for recovery and preparation for the next shot. The new device overcomes problems associated with repetitive operation, including rapidly evacuating the propellant gases, reloading the gun breech with a new projectile, returning the piston to its initial position, and refilling the first- and second-stage gas volumes to the appropriate pressure levels. In addition, some components are subjected to and must survive severe operating conditions, which include rapid cycling to high pressures and temperatures (up to thousands of bars and thousands of kelvins) and significant mechanical shocks. Small plastic projectiles (4-mm nominal size) and helium gas have been used in the prototype device, which was equipped with a 1-m-long pump tube and a 1-m-long gun barrel, to demonstrate repetitive operation (up to 1 Hz) at relatively high pellet velocities (up to 3000 m/s). The equipment is described, and experimental results are presented. 124 refs., 6 figs., 5 tabs

  6. Sensitivity Sampling Over Dynamic Geometric Data Streams with Applications to $k$-Clustering

    OpenAIRE

    Song, Zhao; Yang, Lin F.; Zhong, Peilin

    2018-01-01

    Sensitivity based sampling is crucial for constructing nearly-optimal coresets for $k$-means / median clustering. In this paper, we provide a novel data structure that enables sensitivity sampling over a dynamic data stream, where points from a high dimensional discrete Euclidean space can be either inserted or deleted. Based on this data structure, we provide a one-pass coreset construction for $k$-means clustering using space $\widetilde{O}(k\mathrm{poly}(d))$ over $d$-dimen...

  7. The ellipticities of a sample of globular clusters in M31

    International Nuclear Information System (INIS)

    Lupton, R.H.

    1989-01-01

    Images for a sample of 18 globular clusters in M31 have been obtained. The mean ellipticity on the sky in the range 7-14 pc (2-4 arcsec) is 0.08 ± 0.02, and 0.12 ± 0.01 in the range 14-21 pc (4-6 arcsec), with corresponding true ellipticities of 0.12 and 0.18. The difference between the inner and outer parts is significant at a 99 percent level. The flattening of the inner parts is statistically indistinguishable from that of the Galactic globular clusters, while the outer parts are flatter than the Galactic clusters at a 99.8 percent confidence level. There is a significant anticorrelation of ellipticity with line strength; such a correlation may in retrospect also be seen in the Galactic globular cluster system. For the M31 data, this anticorrelation is stronger in the inner parts of the galaxy. 30 refs

  8. Two-stage hydroprocessing of synthetic crude gas oil

    Energy Technology Data Exchange (ETDEWEB)

    Mahay, A.; Chmielowiec, J.; Fisher, I.P.; Monnier, J. (Petro-Canada Products, Missisauga, ON (Canada). Research and Development Centre)

    1992-02-01

    The hydrocracking of synthetic crude gas oils (SGO), which are commercially produced from Canadian oil sands, is strongly inhibited by nitrogen-containing species. To alleviate the pronounced effect of these nitrogenous compounds, SGO was hydrotreated at severe conditions prior to hydrocracking to reduce its N content from 1665 to about 390 ppm (by weight). Hydrocracking was then performed using a commercial nickel-tungsten catalyst supported on silica-alumina. Two-stage hydroprocessing of SGO was assessed in terms of product yields and quality. As expected, higher gas oil conversions were achieved, mostly from an increase in naphtha yield. The middle distillate product quality was also clearly improved, as the diesel fuel cetane number increased by 13%. Diesel engine tests indicated that particulate emissions in exhaust gases were lowered by 20%. Finally, pseudo first-order kinetic equations were derived for the overall conversion of the major gas oil components. 17 refs., 2 figs., 8 tabs.

  9. Quick pace of property acquisitions requires two-stage evaluations

    International Nuclear Information System (INIS)

    Hollo, R.; Lockwood, S.

    1994-01-01

    The traditional method of evaluating oil and gas reserves may be too cumbersome for the quick pace of oil and gas property acquisition. An acquisition evaluator must decide quickly if a property meets basic purchase criteria. The current business climate requires a two-stage approach. First, the evaluator makes a quick assessment of the property and submits a bid. If the bid is accepted, the evaluator then proceeds with a detailed analysis, which represents the second stage. Acquisition of producing properties has become an important activity for many independent oil and gas producers, who must be able to evaluate reserves quickly enough to make effective business decisions yet accurately enough to avoid costly mistakes. Independents thus must be familiar with how transactions usually progress as well as with the basic methods of property evaluation. The paper discusses acquisition activity, the initial offer, the final offer, property evaluation, and fair market value

  10. Hybrid biogas upgrading in a two-stage thermophilic reactor

    DEFF Research Database (Denmark)

    Corbellini, Viola; Kougias, Panagiotis; Treu, Laura

    2018-01-01

    The aim of this study is to propose a hybrid biogas upgrading configuration composed of two-stage thermophilic reactors. Hydrogen is directly injected in the first stage reactor. The output gas from the first reactor (in-situ biogas upgrade) is subsequently transferred to a second upflow reactor...... (ex-situ upgrade), in which enriched hydrogenotrophic culture is responsible for the hydrogenation of carbon dioxide to methane. The overall objective of the work was to perform an initial methane enrichment in the in-situ reactor, avoiding deterioration of the process due to elevated pH levels......, and subsequently, to complete the biogas upgrading process in the ex-situ chamber. The methane content in the first stage reactor reached on average 87% and the corresponding value in the second stage was 91%, with a maximum of 95%. A remarkable accumulation of volatile fatty acids was observed in the first...

  11. Two-Stage Part-Based Pedestrian Detection

    DEFF Research Database (Denmark)

    Møgelmose, Andreas; Prioletti, Antonio; Trivedi, Mohan M.

    2012-01-01

    Detecting pedestrians is still a challenging task for automotive vision systems due to the extreme variability of targets, lighting conditions, occlusions, and high-speed vehicle motion. A lot of research has been focused on this problem in the last 10 years, and detectors based on classifiers have gained a special place among the different approaches presented. This work presents a state-of-the-art pedestrian detection system based on a two-stage classifier. Candidates are extracted with a Haar cascade classifier trained with the DaimlerDB dataset and then validated through part-based HOG...... of several metrics, such as detection rate, false positives per hour, and frame rate. The novelty of this system lies in the combination of the HOG part-based approach, tracking based on a specific optimized feature, and porting to a real prototype.

  12. Device for two-stage cementing of casing

    Energy Technology Data Exchange (ETDEWEB)

    Kudimov, D A; Goncharevskiy, Ye N; Luneva, L G; Shchelochkov, S N; Shil' nikova, L N; Tereshchenko, V G; Vasiliev, V A; Volkova, V V; Zhdokov, K I

    1981-01-01

    A device is claimed for two-stage cementing of casing. It consists of a body with lateral plugging vents, upper and lower movable sleeves, a check valve with axial channels that is situated in the lower sleeve, and a displacement limiting device for the lower sleeve. To improve the cementing of the casing by preventing overflow of cementing fluids from the annular space into the first stage casing, the limiter is equipped with a spring rod that is capable of covering the axial channels of the check valve while it is in operating mode. In addition, the rod in the upper part is equipped with a reinforced area under the axial channels of the check valve.

  13. Two-stage decision approach to material accounting

    International Nuclear Information System (INIS)

    Opelka, J.H.; Sutton, W.B.

    1982-01-01

    The validity of the alarm threshold 4σ has been checked for hypothetical large and small facilities using a two-stage decision model in which the diverter's strategic variable is the quantity diverted, and the defender's strategic variables are the alarm threshold and the effectiveness of the physical security and material control systems in the possible presence of a diverter. For large facilities, the material accounting system inherently appears not to be a particularly useful system for the deterrence of diversions, and essentially no improvement can be made by lowering the alarm threshold below 4σ. For small facilities, reduction of the threshold to 2σ or 3σ is a cost-effective change for the accounting system, but is probably less cost effective than making improvements in the material control and physical security systems

  14. The hybrid two stage anticlockwise cycle for ecological energy conversion

    Directory of Open Access Journals (Sweden)

    Cyklis Piotr

    2016-01-01

    Full Text Available The anticlockwise cycle is commonly used for refrigeration, air conditioning and heat pump applications. The application of a refrigerant in the compression cycle is limited to temperatures between the triple point and the critical point. New refrigerants such as 1234yf or 1234ze have many disadvantages, therefore the application of natural refrigerants is favourable. Carbon dioxide and water can be applied only in a hybrid two-stage cycle. The possibilities of this solution are shown for refrigerating applications, and some experimental results of the adsorption-compression two-stage cycle powered with solar collectors are presented. As the high temperature cycle the adsorption system is applied. The low temperature cycle is the compression stage with carbon dioxide as the working fluid. This allows a relatively high COP to be achieved for the low temperature cycle and for the whole system.

  15. High Performance Gasification with the Two-Stage Gasifier

    DEFF Research Database (Denmark)

    Gøbel, Benny; Hindsgaul, Claus; Henriksen, Ulrik Birk

    2002-01-01

    , air preheating and pyrolysis, whereby very high energy efficiencies can be achieved. Encouraging results are obtained at a 100 kWth laboratory facility. The tar content in the raw gas is measured to be below 25 mg/Nm3 and around 5 mg/Nm3 after gas cleaning with a traditional baghouse filter. Furthermore...... a cold gas efficiency exceeding 90% is obtained. In the original design of the two-stage gasification process, the pyrolysis unit consists of a screw conveyor with external heating, and the char unit is a fixed bed gasifier. This design is well proven during more than 1000 hours of testing with various...... fuels, and is a suitable design for medium size gasifiers....

  16. Runway Operations Planning: A Two-Stage Solution Methodology

    Science.gov (United States)

    Anagnostakis, Ioannis; Clarke, John-Paul

    2003-01-01

    The airport runway is a scarce resource that must be shared by different runway operations (arrivals, departures and runway crossings). Given the possible sequences of runway events, careful Runway Operations Planning (ROP) is required if runway utilization is to be maximized. Thus, Runway Operations Planning (ROP) is a critical component of airport operations planning in general and surface operations planning in particular. From the perspective of departures, ROP solutions are aircraft departure schedules developed by optimally allocating runway time for departures given the time required for arrivals and crossings. In addition to the obvious objective of maximizing throughput, other objectives, such as guaranteeing fairness and minimizing environmental impact, may be incorporated into the ROP solution subject to constraints introduced by Air Traffic Control (ATC) procedures. Generating optimal runway operations plans has previously been approached with a 'one-stage' optimization routine that considered all the desired objectives and constraints, and the characteristics of each aircraft (weight class, destination, Air Traffic Control (ATC) constraints) at the same time. Since, however, at any given point in time, there is less uncertainty in the predicted demand for departure resources in terms of weight class than in terms of specific aircraft, the ROP problem can be parsed into two stages. In the context of the Departure Planner (DP) research project, this paper introduces Runway Operations Planning (ROP) as part of the wider Surface Operations Optimization (SOO) and describes a proposed 'two-stage' heuristic algorithm for solving the Runway Operations Planning (ROP) problem. Focus is specifically given to including runway crossings in the planning process of runway operations. In the first stage, sequences of departure class slots and runway crossing slots are generated and ranked based on departure runway throughput under stochastic conditions. In the second stage, the

  17. A clustering algorithm for sample data based on environmental pollution characteristics

    Science.gov (United States)

    Chen, Mei; Wang, Pengfei; Chen, Qiang; Wu, Jiadong; Chen, Xiaoyun

    2015-04-01

    Environmental pollution has become an issue of serious international concern in recent years. Among the receptor-oriented pollution models, CMB, PMF, UNMIX, and PCA are widely used as source apportionment models. To improve the accuracy of source apportionment and classify the sample data for these models, this study proposes an easy-to-use, high-dimensional EPC algorithm that not only organizes all of the sample data into different groups according to the similarities in pollution characteristics such as pollution sources and concentrations but also simultaneously detects outliers. The main clustering process consists of selecting the first unlabelled point as the cluster centre, then assigning each data point in the sample dataset to its most similar cluster centre according to both the user-defined threshold and the value of the similarity function in each iteration, and finally modifying the clusters using a method similar to k-Means. The validity and accuracy of the algorithm are tested using both real and synthetic datasets, which makes the EPC algorithm practical and effective for appropriately classifying sample data for source apportionment models and helpful for better understanding and interpreting the sources of pollution.
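
    A rough sketch of the clustering loop described above follows; the similarity measure (Euclidean distance against a user-defined threshold) and the k-means-like refinement pass are assumptions standing in for the EPC details.

```python
import numpy as np

def epc_like(X, threshold):
    """Assign each point to the nearest existing centre if it is close enough,
    otherwise open a new cluster; then refine centres (k-means-like pass).
    Points far from every final centre could additionally be flagged as outliers."""
    centres, labels = [X[0]], np.zeros(len(X), dtype=int)
    for i, x in enumerate(X):
        d = [np.linalg.norm(x - c) for c in centres]
        j = int(np.argmin(d))
        if d[j] <= threshold:
            labels[i] = j
        else:
            centres.append(x)            # first unlabelled point becomes a centre
            labels[i] = len(centres) - 1
    # refinement pass: recompute each centre as the mean of its cluster
    for j in range(len(centres)):
        if np.any(labels == j):
            centres[j] = X[labels == j].mean(axis=0)
    return np.array(centres), labels

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(3, 0.3, (50, 2))])
centres, labels = epc_like(X, threshold=1.0)
print(len(centres), "clusters found")
```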

  18. A simple sample size formula for analysis of covariance in cluster randomized trials.

    NARCIS (Netherlands)

    Teerenstra, S.; Eldridge, S.; Graff, M.J.; Hoop, E. de; Borm, G.F.

    2012-01-01

    For cluster randomized trials with a continuous outcome, the sample size is often calculated as if an analysis of the outcomes at the end of the treatment period (follow-up scores) would be performed. However, often a baseline measurement of the outcome is available or feasible to obtain. An

  19. A sampling device for counting insect egg clusters and measuring vertical distribution of vegetation

    Science.gov (United States)

    Robert L. Talerico; Robert W., Jr. Wilson

    1978-01-01

    The use of a vertical sampling pole that delineates known volumes and position is illustrated and demonstrated for counting egg clusters of N. sertifer. The pole can also be used to estimate vertical and horizontal coverage, distribution or damage of vegetation or foliage.

  20. Precision of systematic and random sampling in clustered populations: habitat patches and aggregating organisms.

    Science.gov (United States)

    McGarvey, Richard; Burch, Paul; Matthews, Janet M

    2016-01-01

    Natural populations of plants and animals spatially cluster because (1) suitable habitat is patchy, and (2) within suitable habitat, individuals aggregate further into clusters of higher density. We compare the precision of random and systematic field sampling survey designs under these two processes of species clustering. Second, we evaluate the performance of 13 estimators for the variance of the sample mean from a systematic survey. Replicated simulated surveys, as counts from 100 transects, allocated either randomly or systematically within the study region, were used to estimate population density in six spatial point populations including habitat patches and Matérn circular clustered aggregations of organisms, together and in combination. The standard one-start aligned systematic survey design, a uniform 10 × 10 grid of transects, was much more precise. Variances of the 10 000 replicated systematic survey mean densities were one-third to one-fifth of those from randomly allocated transects, implying transect sample sizes giving equivalent precision by random survey would need to be three to five times larger. Organisms being restricted to patches of habitat was alone sufficient to yield this precision advantage for the systematic design. But this improved precision for systematic sampling in clustered populations is underestimated by standard variance estimators used to compute confidence intervals. True variance for the survey sample mean was computed from the variance of 10 000 simulated survey mean estimates. Testing 10 published and three newly proposed variance estimators, the two variance estimators that corrected for inter-transect correlation (v8 and vW) were the most accurate and also the most precise in clustered populations. These greatly outperformed the two "post-stratification" variance estimators (v2 and v3) that are now more commonly applied in systematic surveys. Similar variance estimator performance rankings were found with
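
    The central comparison can be reproduced in miniature: simulate a clustered population over transects, survey it with random versus one-start aligned systematic designs, and compare the variance of the mean over many replicates. The patchy density model below is a simplified stand-in for the paper's habitat-patch and Matérn-cluster simulations.

```python
import numpy as np

rng = np.random.default_rng(0)

# Clustered population density over 100 strip transects: a few high-density
# habitat patches on a low background (stand-in for Matérn-style clustering).
density = np.full(100, 1.0)
for centre in rng.choice(100, size=5, replace=False):
    density[max(0, centre - 3):centre + 4] += 20.0

def survey(idx):
    # mean count per transect over the sampled transects
    return rng.poisson(density[idx]).mean()

n, reps = 10, 10_000
random_means = [survey(rng.choice(100, n, replace=False)) for _ in range(reps)]
# one-start aligned systematic sample: every 10th transect, random start
systematic_means = [survey(rng.integers(10) + np.arange(0, 100, 10))
                    for _ in range(reps)]

print("var(random)     =", np.var(random_means))
print("var(systematic) =", np.var(systematic_means))   # markedly smaller
```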

  1. Clustering of samples and elements based on multi-variable chemical data

    International Nuclear Information System (INIS)

    Op de Beeck, J.

    1984-01-01

    Clustering and classification are defined in the context of multivariable chemical analysis data. Classical multi-variate techniques, commonly used to interpret such data, are shown to be based on probabilistic and geometrical principles which are not justified for analytical data, since in that case one assumes or expects a system of more or less systematically related objects (samples) as defined by measurements on more or less systematically interdependent variables (elements). For the specific analytical problem of a data set concerning a large number of trace elements determined in a large number of samples, a deterministic cluster analysis can be used to develop the underlying classification structure. Three main steps can be distinguished: diagnostic evaluation and preprocessing of the raw input data; computation of a symmetric matrix with pairwise standardized dissimilarity values between all possible pairs of samples and/or elements; and an ultrametric clustering strategy to produce the final classification as a dendrogram. The software packages designed to perform these tasks are discussed and final results are given. Conclusions are formulated concerning the dangers of using multivariate, clustering and classification software packages as black boxes
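
    The three steps listed (preprocessing, a pairwise dissimilarity matrix, ultrametric clustering rendered as a dendrogram) map directly onto standard tooling. Below is a minimal sketch on synthetic trace-element data; the column standardization, Euclidean metric, and average linkage are assumptions, not the paper's exact choices.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, dendrogram
from scipy.spatial.distance import pdist

rng = np.random.default_rng(0)
# rows = samples, columns = trace-element concentrations (synthetic)
X = rng.lognormal(mean=1.0, sigma=0.5, size=(12, 6))

# Step 1: diagnostic preprocessing -- standardize each element (column)
Z = (X - X.mean(axis=0)) / X.std(axis=0)

# Step 2: symmetric matrix of pairwise standardized dissimilarities
D = pdist(Z, metric='euclidean')

# Step 3: ultrametric clustering strategy -> dendrogram
tree = linkage(D, method='average')
dendrogram(tree, no_plot=True)   # set no_plot=False with matplotlib to draw
print(tree[:3])                  # first merges: (idx_a, idx_b, height, size)
```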

  2. HICOSMO - X-ray analysis of a complete sample of galaxy clusters

    Science.gov (United States)

    Schellenberger, G.; Reiprich, T.

    2017-10-01

    Galaxy clusters are known to be the largest virialized objects in the Universe. Based on the theory of structure formation one can use them as cosmological probes, since they originate from collapsed overdensities in the early Universe and witness its history. The X-ray regime provides the unique possibility to measure in detail the most massive visible component, the intracluster medium. Using Chandra observations of a local sample of 64 bright clusters (HIFLUGCS) we provide total (hydrostatic) and gas mass estimates of each cluster individually. Making use of the completeness of the sample we quantify two interesting cosmological parameters by a Bayesian cosmological likelihood analysis. We find Ω_{M}=0.3±0.01 and σ_{8}=0.79±0.03 (statistical uncertainties) using our default analysis strategy combining both, a mass function analysis and the gas mass fraction results. The main sources of biases that we discuss and correct here are (1) the influence of galaxy groups (higher incompleteness in parent samples and a differing behavior of the L_{x} - M relation), (2) the hydrostatic mass bias (as determined by recent hydrodynamical simulations), (3) the extrapolation of the total mass (comparing various methods), (4) the theoretical halo mass function and (5) other cosmological (non-negligible neutrino mass), and instrumental (calibration) effects.

  3. Hierarchical Bayesian modelling of gene expression time series across irregularly sampled replicates and clusters.

    Science.gov (United States)

    Hensman, James; Lawrence, Neil D; Rattray, Magnus

    2013-08-20

    Time course data from microarrays and high-throughput sequencing experiments require simple, computationally efficient and powerful statistical models to extract meaningful biological signal, and for tasks such as data fusion and clustering. Existing methodologies fail to capture either the temporal or replicated nature of the experiments, and often impose constraints on the data collection process, such as regularly spaced samples, or similar sampling schema across replications. We propose hierarchical Gaussian processes as a general model of gene expression time-series, with application to a variety of problems. In particular, we illustrate the method's capacity for missing data imputation, data fusion and clustering. The method can impute data which is missing both systematically and at random: in a hold-out test on real data, performance is significantly better than commonly used imputation methods. The method's ability to model inter- and intra-cluster variance leads to more biologically meaningful clusters. The approach removes the necessity for evenly spaced samples, an advantage illustrated on a developmental Drosophila dataset with irregular replications. The hierarchical Gaussian process model provides an excellent statistical basis for several gene-expression time-series tasks. It has only a few additional parameters over a regular GP, has negligible additional complexity, is easily implemented and can be integrated into several existing algorithms. Our experiments were implemented in python, and are available from the authors' website: http://staffwww.dcs.shef.ac.uk/people/J.Hensman/.
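
    The hierarchical structure amounts to summing covariances at the appropriate levels: a cluster-level kernel shared by all genes, a gene-level kernel shared by a gene's replicates, and a replicate-level kernel on the diagonal. A minimal numpy sketch of drawing from such a model follows; the squared-exponential kernels and the three-level layout are assumptions, and the authors' own implementation is at the URL above.

```python
import numpy as np

def se_kernel(t, s, var, length):
    # squared-exponential covariance between two sets of time points
    return var * np.exp(-0.5 * (t[:, None] - s[None, :]) ** 2 / length ** 2)

t = np.linspace(0, 10, 25)          # sample times (need not be regular)
n_genes, n_reps = 3, 2

# Covariance between (gene i, replicate r) and (gene j, replicate s):
#   K_cluster                 always shared within the cluster
# + K_gene  if i == j         shared only within a gene
# + K_rep   if (i,r) == (j,s) replicate-specific variation
blocks = [[se_kernel(t, t, 1.0, 4.0)
           + (i == j) * se_kernel(t, t, 0.5, 2.0)
           + (i == j and r == s) * se_kernel(t, t, 0.2, 1.0)
           for j in range(n_genes) for s in range(n_reps)]
          for i in range(n_genes) for r in range(n_reps)]
full_cov = np.block(blocks)

rng = np.random.default_rng(0)
draw = rng.multivariate_normal(np.zeros(full_cov.shape[0]), full_cov)
print(draw.shape)   # one joint sample of all genes/replicates in the cluster
```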

  4. Identification of Clusters of Foot Pain Location in a Community Sample.

    Science.gov (United States)

    Gill, Tiffany K; Menz, Hylton B; Landorf, Karl B; Arnold, John B; Taylor, Anne W; Hill, Catherine L

    2017-12-01

    To identify foot pain clusters according to pain location in a community-based sample of the general population. This study analyzed data from the North West Adelaide Health Study. Data were obtained between 2004 and 2006, using computer-assisted telephone interviewing, clinical assessment, and a self-completed questionnaire. The location of foot pain was assessed using a diagram during the clinical assessment. Hierarchical cluster analysis was undertaken to identify foot pain location clusters, which were then compared in relation to demographics, comorbidities, and podiatry services utilization. There were 558 participants with foot pain (mean age 54.4 years, 57.5% female). Five clusters were identified: 1 with predominantly arch and ball pain (26.8%), 1 with rearfoot pain (20.9%), 1 with heel pain (13.3%), and 2 with predominantly forefoot, toe, and nail pain (28.3% and 10.7%). Each cluster was distinct in age, sex, and comorbidity profile. Of the two clusters with predominantly forefoot, toe, and nail pain, one had a higher proportion of men and of those classified as obese, having diabetes mellitus, and using podiatry services (30%), while the other comprised a higher proportion of women who were overweight and reported less use of podiatry services (17.5%). Five clusters of foot pain according to pain location were identified, all with distinct age, sex, and comorbidity profiles. These findings may assist in the identification of individuals at risk of developing foot pain and in the development of targeted preventive strategies and treatments. © 2017, American College of Rheumatology.

  5. Eliminating Survivor Bias in Two-stage Instrumental Variable Estimators.

    Science.gov (United States)

    Vansteelandt, Stijn; Walter, Stefan; Tchetgen Tchetgen, Eric

    2018-07-01

    Mendelian randomization studies commonly focus on elderly populations. This makes the instrumental variables analysis of such studies sensitive to survivor bias, a type of selection bias. A particular concern is that the instrumental variable conditions, even when valid for the source population, may be violated for the selective population of individuals who survive the onset of the study. This is potentially very damaging because Mendelian randomization studies are known to be sensitive to bias due to even minor violations of the instrumental variable conditions. Interestingly, the instrumental variable conditions continue to hold within certain risk sets of individuals who are still alive at a given age when the instrument and unmeasured confounders exert additive effects on the exposure, and moreover, the exposure and unmeasured confounders exert additive effects on the hazard of death. In this article, we will exploit this property to derive a two-stage instrumental variable estimator for the effect of exposure on mortality, which is insulated against the above described selection bias under these additivity assumptions.
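
    For orientation, the generic two-stage (2SLS) estimator that such analyses build on is sketched below on simulated data; this is the textbook estimator, not the survivor-bias-corrected estimator derived in the article.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
g = rng.binomial(2, 0.3, n)            # instrument, e.g. genotype dosage
u = rng.normal(size=n)                 # unmeasured confounder
x = 0.5 * g + u + rng.normal(size=n)   # exposure
y = 0.7 * x + u + rng.normal(size=n)   # outcome; true causal effect = 0.7

def ols(design, response):
    # least-squares coefficients for a design matrix
    return np.linalg.lstsq(design, response, rcond=None)[0]

# Stage 1: regress exposure on the instrument, keep fitted values.
Z = np.column_stack([np.ones(n), g])
x_hat = Z @ ols(Z, x)

# Stage 2: regress outcome on the fitted exposure.
X2 = np.column_stack([np.ones(n), x_hat])
print("2SLS effect estimate:", ols(X2, y)[1])   # ~0.7, confounding removed
```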

  6. Two-stage image denoising considering interscale and intrascale dependencies

    Science.gov (United States)

    Shahdoosti, Hamid Reza

    2017-11-01

    A solution to the problem of reducing the noise of grayscale images is presented. To consider the intrascale and interscale dependencies, this study makes use of a model. It is shown that the dependency between a wavelet coefficient and its predecessors can be modeled by the first-order Markov chain, which means that the parent conveys all of the information necessary for efficient estimation. Using this fact, the proposed method employs the Kalman filter in the wavelet domain for image denoising. The proposed method has two stages. The first stage employs a simple denoising algorithm to provide the noise-free image, by which the parameters of the model such as state transition matrix, variance of the process noise, the observation model, and the covariance of the observation noise are estimated. In the second stage, the Kalman filter is applied to the wavelet coefficients of the noisy image to estimate the noise-free coefficients. In fact, the Kalman filter is used to estimate the coefficients of high-frequency subbands from the coefficients of coarser scales and noisy observations of neighboring coefficients. In this way, both the interscale and intrascale dependencies are taken into account. Results are presented and discussed on a set of standard 8-bit grayscale images. The experimental results demonstrate that the proposed method achieves performances competitive with the state-of-the-art denoising methods in terms of both peak-signal-to-noise ratio and subjective visual quality.
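
    The first-stage "simple denoising algorithm" can be sketched with wavelet soft-thresholding, a common pilot estimator from which the state-space parameters for the Kalman stage could then be estimated. PyWavelets and the universal threshold rule are assumptions here; the paper's Kalman second stage is omitted.

```python
import numpy as np
import pywt

def stage1_pilot_denoise(noisy, wavelet='db4', level=3):
    """Wavelet soft-thresholding: a stage-1 estimate from which model
    parameters for the second (Kalman) stage could be estimated."""
    coeffs = pywt.wavedec2(noisy, wavelet, level=level)
    # noise sigma from the finest diagonal subband (robust MAD estimate)
    sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745
    thr = sigma * np.sqrt(2 * np.log(noisy.size))   # universal threshold
    denoised = [coeffs[0]] + [
        tuple(pywt.threshold(sb, thr, mode='soft') for sb in detail)
        for detail in coeffs[1:]
    ]
    return pywt.waverec2(denoised, wavelet)

img = np.zeros((128, 128)); img[32:96, 32:96] = 1.0
noisy = img + 0.1 * np.random.default_rng(0).normal(size=img.shape)
mse = np.mean((stage1_pilot_denoise(noisy) - img) ** 2)
print(mse)   # well below the 0.01 noise variance
```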

  7. Two-Stage Electricity Demand Modeling Using Machine Learning Algorithms

    Directory of Open Access Journals (Sweden)

    Krzysztof Gajowniczek

    2017-10-01

    Full Text Available Forecasting of electricity demand has become one of the most important areas of research in the electric power industry, as it is a critical component of cost-efficient power system management and planning. In this context, accurate and robust load forecasts play a key role in reducing generation costs and ensuring the reliability of the power system. However, due to demand peaks in the power system, forecasts are inaccurate and prone to high numbers of errors. In this paper, our contributions comprise a proposed data-mining scheme for demand modeling through peak detection, as well as the use of this information to feed the forecasting system. For this purpose, we have taken a different approach from that of time series forecasting, representing it as a two-stage pattern recognition problem. We have developed a peak classification model followed by a forecasting model to estimate an aggregated demand volume. We have utilized a set of machine learning algorithms to benefit from both accurate detection of the peaks and precise forecasts, as applied to the Polish power system. The key finding is that the algorithms can detect 96.3% of electricity peaks (load value equal to or above the 99th percentile of the load distribution) and deliver accurate forecasts, with mean absolute percentage error (MAPE) of 3.10% and resistant mean absolute percentage error (r-MAPE) of 2.70% for the 24 h forecasting horizon.
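
    The two-stage framing, first classify whether an hour is a peak (at or above the 99th percentile), then feed that flag to the load forecaster, can be sketched as follows. The synthetic load series, features, and gradient-boosting models are placeholders for the paper's Polish-system setup.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier, GradientBoostingRegressor

rng = np.random.default_rng(0)
hours = np.arange(24 * 365)
load = (1000 + 300 * np.sin(2 * np.pi * hours / 24)          # daily cycle
        + 100 * np.sin(2 * np.pi * hours / (24 * 7))         # weekly cycle
        + rng.normal(0, 40, hours.size))
# features: hour of day, day of week, same hour yesterday
X = np.column_stack([hours % 24, (hours // 24) % 7, np.roll(load, 24)])[24:]
y = load[24:]

peak = (y >= np.quantile(y, 0.99)).astype(int)   # 99th-percentile peak label
split = len(y) - 24 * 30                          # hold out the last 30 days

# Stage 1: peak classifier; Stage 2: regressor fed the predicted peak flag.
clf = GradientBoostingClassifier().fit(X[:split], peak[:split])
X2 = np.column_stack([X, clf.predict(X)])
reg = GradientBoostingRegressor().fit(X2[:split], y[:split])

pred = reg.predict(X2[split:])
print("MAPE %.2f%%" % (100 * np.mean(np.abs(pred - y[split:]) / y[split:])))
```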

  8. FIRST DIRECT EVIDENCE OF TWO STAGES IN FREE RECALL

    Directory of Open Access Journals (Sweden)

    Eugen Tarnow

    2015-12-01

    Full Text Available I find that exactly two stages can be seen directly in sequential free recall distributions. These distributions show that the first three recalls come from the emptying of working memory, recalls 6 and above come from a second stage, and the 4th and 5th recalls are mixtures of the two. A discontinuity, a rounded step function, is shown to exist in the fitted linear slope of the recall distributions as the recall shifts from the emptying of working memory (positive slope) to the second stage (negative slope). The discontinuity leads to a first estimate of the capacity of working memory at 4-4.5 items. The total recall is shown to be a linear combination of the content of working memory and items recalled in the second stage, with 3.0-3.9 items coming from working memory, a second estimate of the capacity of working memory. A third, separate upper limit on the capacity of working memory is found (3.06 items), corresponding to the requirement that the content of working memory cannot exceed the total recall, item by item. This third limit is presumably the best limit on the average capacity of unchunked working memory. The second stage of recall is shown to be reactivation: the average times to retrieve additional items in free recall obey a linear relationship as a function of the recall probability, which mimics recognition and cued recall, both mechanisms using reactivation (Tarnow, 2008).

  9. A two-stage DEA approach for environmental efficiency measurement.

    Science.gov (United States)

    Song, Malin; Wang, Shuhong; Liu, Wei

    2014-05-01

    The slacks-based measure (SBM) model based on constant returns to scale has achieved good results in addressing undesirable outputs, such as waste water and waste gas, in measuring environmental efficiency. However, the traditional SBM model cannot deal with the scenario in which desirable outputs are constant. Based on the axiomatic theory of productivity, this paper carries out systematic research on the SBM model considering undesirable outputs, and further expands the SBM model from the perspective of network analysis. The new model can not only perform efficiency evaluation considering undesirable outputs, but also calculate desirable and undesirable outputs separately. The latter advantage successfully solves the "dependence" problem of outputs, that is, that we cannot increase the desirable outputs without producing any undesirable outputs. The following illustration shows that the efficiency values obtained by the two-stage approach are smaller than those obtained by the traditional SBM model. Our approach provides a more profound analysis of how to improve the environmental efficiency of the decision making units.

  10. Two-stage nuclear refrigeration with enhanced nuclear moments

    International Nuclear Information System (INIS)

    Hunik, R.

    1979-01-01

    Experiments are described in which an enhanced nuclear system is used as a precoolant for a nuclear demagnetisation stage. The results show the promising advantages of such a system in those circumstances for which a large cooling power is required at extremely low temperatures. A theoretical review of nuclear enhancement at the microscopic level and its macroscopic thermodynamical consequences is given. The experimental equipment for the implementation of the nuclear enhanced refrigeration method is described and the experiments on two-stage nuclear demagnetisation are discussed. With the nuclear enhanced system PrCu6 the author could precool a nuclear stage of indium in a magnetic field of 6 T down to temperatures below 10 mK; this resulted in temperatures below 1 mK after demagnetisation of the indium. It is demonstrated that the interaction energy between the nuclear moments in an enhanced nuclear system can exceed the nuclear dipolar interaction. Several experiments are described on pulsed nuclear magnetic resonance, as utilised for thermometry purposes. It is shown that platinum NMR-thermometry gives very satisfactory results around 1 mK. The results of experiments on nuclear orientation of radioactive nuclei, e.g. the brute force polarisation of 95NbPt and 60CoCu, are presented, some of which are of major importance for the thermometry in the milli-Kelvin region. (Auth.)

  11. Risk averse optimal operation of a virtual power plant using two stage stochastic programming

    International Nuclear Information System (INIS)

    Tajeddini, Mohammad Amin; Rahimi-Kian, Ashkan; Soroudi, Alireza

    2014-01-01

    VPP (Virtual Power Plant) is defined as a cluster of energy conversion/storage units which are centrally operated in order to improve technical and economic performance. This paper addresses the optimal operation of a VPP considering the risk factors affecting its daily operation profits. The optimal operation is modelled in both the day-ahead and balancing markets as a two-stage stochastic mixed integer linear program in order to maximize the GenCo's (generation company's) expected profit. Furthermore, CVaR (Conditional Value at Risk) is used as a risk measure in order to control the risk of low-profit scenarios. The uncertain parameters, including the PV power output, wind power output and day-ahead market prices, are modelled through scenarios. The proposed model is successfully applied to a real case study to show its applicability and the results are presented and thoroughly discussed. - Highlights: • Virtual power plant modelling considering a set of energy generating and conversion units. • Uncertainty modelling using a two-stage stochastic programming technique. • Risk modelling using conditional value at risk. • Flexible operation of renewable energy resources. • Electricity price uncertainty in day ahead energy markets
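
    The risk-neutral core of such a model is a two-stage stochastic program whose deterministic equivalent is a single LP over scenarios: the day-ahead bid is the first-stage decision, and scenario-wise surplus/shortfall settlements are the recourse. The prices, scenarios, and omission of the CVaR term below are simplifying assumptions.

```python
import numpy as np
from scipy.optimize import linprog

# Equiprobable scenarios for the VPP's renewable output (MWh).
w = np.array([20.0, 35.0, 50.0])
p = np.full(3, 1 / 3)
lam_da, r_plus, c_minus = 50.0, 30.0, 80.0   # DA price, surplus value, shortfall cost

# Variables: [x, surplus(3), shortfall(3)]; maximize expected profit
#   lam_da*x + sum_s p_s * (r_plus*surplus_s - c_minus*shortfall_s)
c = np.concatenate([[-lam_da], -p * r_plus, p * c_minus])   # linprog minimizes
A_eq = np.hstack([np.ones((3, 1)), np.eye(3), -np.eye(3)])  # x + su_s - sh_s = w_s
res = linprog(c, A_eq=A_eq, b_eq=w, bounds=[(0, 60)] + [(0, None)] * 6)

print("optimal day-ahead bid:", res.x[0])      # hedges against shortfall penalties
print("expected profit:", -res.fun)
```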

  12. Clustered lot quality assurance sampling to assess immunisation coverage: increasing rapidity and maintaining precision.

    Science.gov (United States)

    Pezzoli, Lorenzo; Andrews, Nick; Ronveaux, Olivier

    2010-05-01

    Vaccination programmes targeting disease elimination aim to achieve very high coverage levels (e.g. 95%). We calculated the precision of different clustered lot quality assurance sampling (LQAS) designs in computer-simulated surveys to provide local health officers in the field with preset LQAS plans to simply and rapidly assess programmes with high coverage targets. We calculated the sample size (N), decision value (d) and misclassification errors (alpha and beta) of several LQAS plans by running 10 000 simulations. We kept the upper coverage threshold (UT) at 90% or 95% and decreased the lower threshold (LT) progressively by 5%. We measured the proportion of simulations with at least d unvaccinated individuals when the coverage was LT% (pLT) to calculate alpha (1-pLT). We divided N into clusters (between 5 and 10) and recalculated the errors, hypothesising that the coverage would vary in the clusters according to a binomial distribution with preset standard deviations of 0.05 and 0.1 from the mean lot coverage. We selected the plans fulfilling these criteria (alpha and beta below preset limits): LQAS plans dividing the lot into five clusters with N = 50 (5 x 10) and d = 4 to evaluate programmes with a 95% coverage target, and d = 7 to evaluate programmes with a 90% target. These plans will considerably increase the feasibility and the rapidity of conducting LQAS in the field.
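
    The simulation behind such plans is easy to reproduce: draw lots at a given true coverage with cluster-level variation, count unvaccinated individuals in the N = 50 (5 x 10) sample, and tally how often the decision rule (more than d unvaccinated found) fires. Normal cluster-level variation with sd 0.05 stands in for the abstract's binomial variation.

```python
import numpy as np

rng = np.random.default_rng(0)

def reject_rate(coverage, n_clusters=5, per_cluster=10, d=4,
                sd=0.05, reps=100_000):
    """Fraction of simulated lots in which more than d unvaccinated
    individuals are found in the clustered sample, with cluster-level
    coverage varying around the lot mean."""
    cov = np.clip(rng.normal(coverage, sd, (reps, n_clusters)), 0, 1)
    unvacc = rng.binomial(per_cluster, 1 - cov).sum(axis=1)
    return np.mean(unvacc > d)

# Plan for a 95% coverage target: N = 50 in 5 clusters, d = 4.
print("P(fail a lot at 95% coverage):", reject_rate(0.95))
print("P(pass a lot at 85% coverage):", 1 - reject_rate(0.85))
```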

  13. Sample size calculations for cluster randomised crossover trials in Australian and New Zealand intensive care research.

    Science.gov (United States)

    Arnup, Sarah J; McKenzie, Joanne E; Pilcher, David; Bellomo, Rinaldo; Forbes, Andrew B

    2018-06-01

    The cluster randomised crossover (CRXO) design provides an opportunity to conduct randomised controlled trials to evaluate low risk interventions in the intensive care setting. Our aim is to provide a tutorial on how to perform a sample size calculation for a CRXO trial, focusing on the meaning of the elements required for the calculations, with application to intensive care trials. We use all-cause in-hospital mortality from the Australian and New Zealand Intensive Care Society Adult Patient Database clinical registry to illustrate the sample size calculations. We show sample size calculations for a two-intervention, two 12-month period, cross-sectional CRXO trial. We provide the formulae, and examples of their use, to determine the number of intensive care units required to detect a risk ratio (RR) with a designated level of power between two interventions for trials in which the elements required for sample size calculations remain constant across all ICUs (unstratified design); and in which there are distinct groups (strata) of ICUs that differ importantly in the elements required for sample size calculations (stratified design). The CRXO design markedly reduces the sample size requirement compared with the parallel-group, cluster randomised design for the example cases. The stratified design further reduces the sample size requirement compared with the unstratified design. The CRXO design enables the evaluation of routinely used interventions that can bring about small, but important, improvements in patient care in the intensive care setting.

  14. Two-stage Catalytic Reduction of NOx with Hydrocarbons

    Energy Technology Data Exchange (ETDEWEB)

    Umit S. Ozkan; Erik M. Holmgreen; Matthew M. Yung; Jonathan Halter; Joel Hiltner

    2005-12-21

    A two-stage system for the catalytic reduction of NO from lean-burn natural gas reciprocating engine exhaust is investigated. Each of the two stages uses a distinct catalyst. The first stage is oxidation of NO to NO2 and the second stage is reduction of NO2 to N2 with a hydrocarbon. The central idea is that since NO2 is a more easily reduced species than NO, it should be better able to compete with oxygen for the combustion reaction of the hydrocarbon, which is a challenge in lean conditions. Early work focused on demonstrating that the N2 yield obtained when NO2 was reduced was greater than when NO was reduced. NO2 reduction catalysts were designed, and silver supported on alumina (Ag/Al2O3) was found to be quite active, able to achieve 95% N2 yield in 10% O2 using propane as the reducing agent. The design of a catalyst for NO oxidation was also investigated, and a Co/TiO2 catalyst prepared by sol-gel was shown to have high activity for the reaction, able to reach the equilibrium conversion of 80% at 300°C at a GHSV of 50,000 h-1. After it was shown that NO2 could be more easily reduced to N2 than NO, the focus shifted to developing a catalyst that could use methane as the reducing agent. The Ag/Al2O3 catalyst was tested and found to be inactive for NOx reduction with methane. Through iterative catalyst design, a palladium-based catalyst on a sulfated-zirconia support (Pd/SZ) was synthesized and shown to be able to selectively reduce NO2 in lean conditions using methane. Development of catalysts for the oxidation reaction also continued, and higher activity, as well as stability in 10% water, was observed on a Co/ZrO2 catalyst, which reached an equilibrium conversion of 94% at 250°C at the same GHSV. The Co/ZrO2 catalyst was also found to be extremely active for oxidation of CO, ethane, and propane, which could potentially eliminate the need for any separate

  15. Causes for the two stages of the disruption energy quench

    Energy Technology Data Exchange (ETDEWEB)

    Schueller, F.C.; Donne, A.J.H.; Heijnen, S.H.; Rommers, J.R.; Tanzi, C.P. [FOM-Instituut voor Plasmafysica, Rijnhuizen (Netherlands); Vries, P.C. de; Waidmann, G. [Forschungszentrum Juelich GmbH (Germany). Inst. fuer Plasmaphysik

    1994-12-31

    It is a well-established fact that the energy quench of tokamak disruptions takes place in two stages separated by a plateau period. The total quench duration of typically a few hundred μs is thought to be a combination of Alfven and magnetic diffusion times: Phase 1: a large cold m=1 bubble eats out the hot core within the q=1 surface. Since the normal thermal isolation of the outer layers is still intact this phase means an adiabatic flattening of the inner temperature distribution. Phase 2: after a plateau period the second quench occurs when the edge thermal barrier collapses and a major part of the plasma energy is lost in conjunction with a negative surface voltage spike and a positive spike of the plasma current. In the experimental and theoretical literature on this subject not much attention is given to the evolution of the density distribution during these two phases. This may be caused by the great difficulties one has to keep the fringe counters of multichannel interferometers on track during the very fast changing evolution. The interferometer at TEXTOR can follow this evolution. The spatial resolution after inversion is limited because of the modest number of interferometer channels. In RTP an 18-channel fast interferometer is available next to a 4-channel pulse radar reflectometer which makes it possible to investigate the density profile evolution with both good time (2 μs)- and spatial (0.1a)-resolution. A fast 20-channel ECE-heterodyne radiometer and a 5-camera SXR system allows to follow the temperature profile evolution as well. In this paper theoretical models will be revisited and compared to the new experimental evidence. (author) 9 refs., 3 figs.

  16. Causes for the two stages of the disruption energy quench

    International Nuclear Information System (INIS)

    Schueller, F.C.; Donne, A.J.H.; Heijnen, S.H.; Rommers, J.R.; Tanzi, C.P.; Vries, P.C. de; Waidmann, G.

    1994-01-01

It is a well-established fact that the energy quench of tokamak disruptions takes place in two stages separated by a plateau period. The total quench duration of typically a few hundred μs is thought to be a combination of Alfven and magnetic diffusion times. Phase 1: a large cold m=1 bubble eats out the hot core within the q=1 surface. Since the normal thermal isolation of the outer layers is still intact, this phase amounts to an adiabatic flattening of the inner temperature distribution. Phase 2: after a plateau period, the second quench occurs when the edge thermal barrier collapses and a major part of the plasma energy is lost, in conjunction with a negative surface voltage spike and a positive spike of the plasma current. In the experimental and theoretical literature on this subject, not much attention is given to the evolution of the density distribution during these two phases. This may be caused by the great difficulty of keeping the fringe counters of multichannel interferometers on track during the very rapidly changing evolution. The interferometer at TEXTOR can follow this evolution, although the spatial resolution after inversion is limited by the modest number of interferometer channels. In RTP, an 18-channel fast interferometer is available next to a 4-channel pulse radar reflectometer, which makes it possible to investigate the density profile evolution with both good time (2 μs) and spatial (0.1a) resolution. A fast 20-channel ECE-heterodyne radiometer and a 5-camera SXR system allow the temperature profile evolution to be followed as well. In this paper, theoretical models are revisited and compared to the new experimental evidence. (author) 9 refs., 3 figs.

  17. Transport fuels from two-stage coal liquefaction

    Energy Technology Data Exchange (ETDEWEB)

    Benito, A.; Cebolla, V.; Fernandez, I.; Martinez, M.T.; Miranda, J.L.; Oelert, H.; Prado, J.G. (Instituto de Carboquimica CSIC, Zaragoza (Spain))

    1994-03-01

Four Spanish lignites and their vitrinite concentrates were evaluated for coal liquefaction. Correlations between vitrinite content and conversion in direct liquefaction were observed for the lignites but not for the vitrinite concentrates. The most reactive of the four coals was processed in two-stage liquefaction at a larger scale. First-stage coal liquefaction was carried out in a continuous unit at Clausthal University at a temperature of 400[degree]C, at 20 MPa hydrogen pressure and with anthracene oil as a solvent. The coal conversion obtained was 75.41%: 3.79% gases, 2.58% primary condensate and 69.04% heavy liquids. A hydroprocessing unit was built at the Instituto de Carboquimica for the second-stage coal liquefaction. Whole and deasphalted liquids from the first-stage liquefaction were processed at 450[degree]C and 10 MPa hydrogen pressure, with two commercial catalysts: Harshaw HT-400E (Co-Mo/Al[sub 2]O[sub 3]) and HT-500E (Ni-Mo/Al[sub 2]O[sub 3]). The effects of liquid hourly space velocity (LHSV), temperature, gas/liquid ratio and catalyst on heteroatom removal from the liquids were studied, and levels of 5 ppm of nitrogen and 52 ppm of sulphur were reached at 450[degree]C, 10 MPa hydrogen pressure, 0.08 kg H[sub 2]/kg feedstock and with the Harshaw HT-500E catalyst. The liquids obtained were hydroprocessed again at 420[degree]C, 10 MPa hydrogen pressure and 0.06 kg H[sub 2]/kg feedstock to hydrogenate the aromatic structures. In these conditions, the aromaticity was reduced considerably, and 39% of naphthas and 35% of kerosene fractions were obtained. 18 refs., 4 figs., 4 tabs.

  18. VizieR Online Data Catalog: ETGs sample for the Coma cluster (Riguccini+, 2015)

    Science.gov (United States)

    Riguccini, L.; Temi, P.; Amblard, A.; Fanelli, M.; Brighenti, F.

    2017-10-01

For the Coma Cluster, we utilize the work of Mahajan et al. (2010, J/MNRAS/404/1745) to build our ETG sample. Mahajan et al. (2010, J/MNRAS/404/1745) used a combination of MIPS 24 μm observations and SDSS photometry and spectra to investigate the star formation history of galaxies in the Coma supercluster. All of their galaxies from the SDSS data in the Coma supercluster region are brighter than r~17.77, the completeness limit of the SDSS spectroscopic galaxy catalog. Their 24 μm fluxes are obtained from archival data covering 2x2 deg² for the Coma Cluster. Our final sample of 124 sources is composed of 49 ellipticals and 75 lenticulars. (1 data file).

  19. The clustering evolution of distant red galaxies in the GOODS-MUSIC sample

    Science.gov (United States)

    Grazian, A.; Fontana, A.; Moscardini, L.; Salimbeni, S.; Menci, N.; Giallongo, E.; de Santis, C.; Gallozzi, S.; Nonino, M.; Cristiani, S.; Vanzella, E.

    2006-07-01

Aims: We study the clustering properties of Distant Red Galaxies (DRGs) to test whether they are the progenitors of local massive galaxies. Methods: We use the GOODS-MUSIC sample, a catalog of ~3000 Ks-selected galaxies based on VLT and HST observations of the GOODS-South field with extended multi-wavelength coverage (from 0.3 to 8 μm) and accurate estimates of the photometric redshifts, to select 179 DRGs with J-Ks ≥ 1.3 in an area of 135 sq. arcmin. Results: We first show that the J-Ks ≥ 1.3 criterion selects a rather heterogeneous sample of galaxies, going from the targeted high-redshift luminous evolved systems to a significant fraction of lower redshift (1 ... mass, like groups or small galaxy clusters. Low-z DRGs, on the other hand, will likely evolve into slightly less massive field galaxies.

  20. Space density and clustering properties of a new sample of emission-line galaxies

    International Nuclear Information System (INIS)

    Wasilewski, A.J.

    1982-01-01

A moderate-dispersion objective-prism survey for low-redshift emission-line galaxies has been carried out in an 825 sq. deg. region of sky with the Burrell Schmidt telescope of Case Western Reserve University. A 4° prism (300 Å/mm at Hβ) was used with the IIIa-J emulsion to show that a new sample of emission-line galaxies is available even in areas already searched with the excess uv-continuum technique. The new emission-line galaxies occur quite commonly in systems with peculiar morphology, indicating gravitational interaction with a close companion or other disturbance. About 10 to 15% of the sample are Seyfert galaxies. It is suggested that tidal interactions involving matter infall play a significant role in the generation of an emission-line spectrum. The space density of the new galaxies is found to be similar to that of the Markarian galaxies. Like the Markarian sample, the galaxies in the present survey represent about 10% of all galaxies in the absolute magnitude range M/sub p/ = -16 to -22. The observations also indicate that current estimates of dwarf galaxy space densities may be too low. The clustering properties of the new galaxies have been investigated using two approaches: cluster contour maps and the spatial correlation function. These tests suggest that there is weak clustering, and possibly superclustering, within the sample itself and that the galaxies considered here are about as common in clusters of ordinary galaxies as in the field

  1. Clinical evaluation of nonsyndromic dental anomalies in Dravidian population: A cluster sample analysis

    OpenAIRE

    Yamunadevi, Andamuthu; Selvamani, M.; Vinitha, V.; Srivandhana, R.; Balakrithiga, M.; Prabhu, S.; Ganapathy, N.

    2015-01-01

    Aim: To record the prevalence rate of dental anomalies in Dravidian population and analyze the percentage of individual anomalies in the population. Methodology: A cluster sample analysis was done, where 244 subjects studying in a dental institution were all included and analyzed for occurrence of dental anomalies by clinical examination, excluding third molars from analysis. Results: 31.55% of the study subjects had dental anomalies and shape anomalies were more prevalent (22.1%), followed b...

  2. Statistical inference for extended or shortened phase II studies based on Simon's two-stage designs.

    Science.gov (United States)

    Zhao, Junjun; Yu, Menggang; Feng, Xi-Ping

    2015-06-07

Simon's two-stage designs are popular choices for conducting phase II clinical trials, especially in oncology trials, to reduce the number of patients placed on ineffective experimental therapies. Recently Koyama and Chen (2008) discussed how to conduct proper inference for such studies, because they found that inference procedures used with Simon's designs almost always ignore the actual sampling plan used. In particular, they proposed an inference method for studies in which the actual second-stage sample sizes differ from the planned ones. We consider an alternative inference method based on the likelihood ratio. In particular, we order permissible sample paths under Simon's two-stage designs using their corresponding conditional likelihood. In this way, we can calculate p-values using the common definition: the probability of obtaining a test statistic value at least as extreme as that observed under the null hypothesis. In addition to providing inference for a couple of scenarios where Koyama and Chen's method can be difficult to apply, the resulting estimate based on our method appears to have certain advantages in terms of inference properties in many numerical simulations. It generally led to smaller biases and narrower confidence intervals while maintaining similar coverage. We also illustrate the two methods in a real data setting. Inference procedures used with Simon's designs almost always ignore the actual sampling plan. Reported p-values, point estimates and confidence intervals for the response rate are not usually adjusted for the design's adaptiveness. Proper statistical inference procedures should be used.
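
As a hedged illustration of the path-ordering idea in this abstract, the sketch below enumerates the permissible sample paths of a Simon two-stage design and computes a one-sided p-value by ordering paths with a likelihood-ratio statistic. The design parameters, the MLE-based ordering rule and the example numbers are illustrative assumptions, not the authors' exact procedure.

```python
# Toy p-value for a Simon two-stage design via likelihood-ratio ordering
# of permissible sample paths (illustrative, not the published method).
from scipy.stats import binom

def sample_paths(n1, r1, n2):
    """All permissible outcomes: the trial stops after stage 1 if x1 <= r1."""
    for x1 in range(n1 + 1):
        if x1 <= r1:
            yield (x1, None)          # trial stops at stage 1
        else:
            for x2 in range(n2 + 1):
                yield (x1, x2)        # trial continues to stage 2

def path_prob(path, p, n1, n2):
    x1, x2 = path
    pr = binom.pmf(x1, n1, p)
    return pr if x2 is None else pr * binom.pmf(x2, n2, p)

def lr_pvalue(observed, p0, n1, r1, n2):
    """One-sided p-value: total null probability of paths at least as
    extreme as `observed` under a likelihood-ratio ordering."""
    def stat(path):
        x1, x2 = path
        n = n1 if x2 is None else n1 + n2
        x = x1 if x2 is None else x1 + x2
        phat = x / n
        if phat <= p0:                # evidence not in the efficacy direction
            return 0.0
        return path_prob(path, phat, n1, n2) / path_prob(path, p0, n1, n2)
    t_obs = stat(observed)
    return sum(path_prob(pt, p0, n1, n2)
               for pt in sample_paths(n1, r1, n2) if stat(pt) >= t_obs)

# Example (assumed design): n1=10, r1=1, n2=19; observed 2/10 then 6/19.
print(lr_pvalue((2, 6), p0=0.10, n1=10, r1=1, n2=19))
```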

  3. Cluster designs to assess the prevalence of acute malnutrition by lot quality assurance sampling: a validation study by computer simulation.

    Science.gov (United States)

    Olives, Casey; Pagano, Marcello; Deitchler, Megan; Hedt, Bethany L; Egge, Kari; Valadez, Joseph J

    2009-04-01

Traditional lot quality assurance sampling (LQAS) methods require simple random sampling to guarantee valid results. However, cluster sampling has been proposed to reduce the number of random starting points. This study uses simulations to examine the classification error of two such designs, a 67x3 (67 clusters of three observations) and a 33x6 (33 clusters of six observations) sampling scheme, to assess the prevalence of global acute malnutrition (GAM). Further, we explore the use of a 67x3 sequential sampling scheme for LQAS classification of GAM prevalence. Results indicate that, for independent clusters with moderate intracluster correlation for the GAM outcome, the three sampling designs maintain approximate validity for LQAS analysis. Sequential sampling can substantially reduce the average sample size required for data collection. The presence of intercluster correlation can dramatically impact the classification error associated with LQAS analysis.
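
The following toy simulation, in the spirit of the study design above, estimates how often a 67x3 clustered LQAS survey classifies GAM prevalence as high when cluster-level prevalences follow a beta-binomial model. The decision value d, the ICC value and the prevalence scenarios are illustrative assumptions, not the paper's parameters.

```python
# Classification-rate simulation for a clustered 67x3 LQAS design,
# with cluster prevalences drawn from a beta-binomial model.
import numpy as np

rng = np.random.default_rng(1)

def classify_high_rate(p, icc, n_clusters=67, m=3, d=25, reps=20_000):
    # Beta(a, b) cluster prevalences with mean p and pairwise
    # intracluster correlation icc = 1 / (a + b + 1).
    a = p * (1 - icc) / icc
    b = (1 - p) * (1 - icc) / icc
    cluster_p = rng.beta(a, b, size=(reps, n_clusters))
    cases = rng.binomial(m, cluster_p).sum(axis=1)   # GAM cases per survey
    return np.mean(cases > d)       # fraction of surveys classified "high"

# Assumed rule: classify "high" if more than d = 25 cases out of 201.
print("false alarm rate at GAM = 10%:", classify_high_rate(0.10, 0.01))
print("detection rate at GAM = 15%: ", classify_high_rate(0.15, 0.01))
```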

  4. Fuzzy C-Means Clustering Model Data Mining For Recognizing Stock Data Sampling Pattern

    Directory of Open Access Journals (Sweden)

    Sylvia Jane Annatje Sumarauw

    2007-06-01

Full Text Available Abstract: The capital market has been beneficial to companies and investors. For investors, the capital market provides two economic advantages, namely dividend and capital gain, and a non-economic one, a voting share in the Shareholders' General Meeting. But it can also penalize share owners. To protect themselves from this risk, investors should predict the prospects of their companies. Because shares are an abstract commodity, share quality is determined by the validity of company profile information. Information on stock value fluctuation from the Jakarta Stock Exchange can be a useful consideration and a good measurement for data analysis. In the context of protecting shareholders from risk, this research focuses on stock data sample categories, or stock data sample patterns, using the Fuzzy c-Means Clustering Model, which provides useful information for investors. The research analyses stock data such as Individual Index, Volume and Amount for the Property and Real Estate emitter group at the Jakarta Stock Exchange from January 1 to December 31, 2004. The mining process follows the Cross Industry Standard Process model for Data Mining (CRISP-DM), a cycle with these steps: Business Understanding, Data Understanding, Data Preparation, Modelling, Evaluation and Deployment. In the modelling step, the Fuzzy c-Means Clustering Model is applied. Data mining with the Fuzzy c-Means Clustering Model can analyze stock data in a large database with many complex variables, especially for finding the data sample pattern, and can then build a Fuzzy Inference System that maps inputs to outputs based on fuzzy logic by recognizing the pattern. Keywords: Data Mining, Fuzzy c-Means Clustering Model, Pattern Recognition
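
For readers unfamiliar with the algorithm named above, here is a compact fuzzy c-means sketch (fuzzifier m = 2). The cluster count, the random demo data standing in for index/volume/amount features, and the convergence settings are placeholder assumptions, not the paper's configuration.

```python
# Minimal fuzzy c-means: alternate between membership and centroid updates.
import numpy as np

def fuzzy_c_means(X, c=3, m=2.0, iters=100, tol=1e-6, seed=0):
    rng = np.random.default_rng(seed)
    U = rng.dirichlet(np.ones(c), size=len(X))        # membership matrix
    for _ in range(iters):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]  # weighted centroids
        d = np.linalg.norm(X[:, None, :] - centers[None], axis=2) + 1e-12
        U_new = 1.0 / (d ** (2 / (m - 1)))            # u_ik ~ d_ik^(-2/(m-1))
        U_new /= U_new.sum(axis=1, keepdims=True)     # rows sum to 1
        if np.abs(U_new - U).max() < tol:
            return centers, U_new
        U = U_new
    return centers, U

X = np.random.default_rng(42).normal(size=(200, 3))   # toy index, volume, amount
centers, U = fuzzy_c_means(X)
print(centers.shape, U.sum(axis=1)[:3])               # (3, 3) [1. 1. 1.]
```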

  5. Evaluation of primary immunization coverage of infants under universal immunization programme in an urban area of Bangalore city using cluster sampling and lot quality assurance sampling techniques

    Directory of Open Access Journals (Sweden)

    Punith K

    2008-01-01

Full Text Available Research Question: Is the LQAS technique better than the cluster sampling technique in terms of resources to evaluate immunization coverage in an urban area? Objective: To assess and compare lot quality assurance sampling against cluster sampling in the evaluation of primary immunization coverage. Study Design: Population-based cross-sectional study. Study Setting: Areas under Mathikere Urban Health Center. Study Subjects: Children aged 12 months to 23 months. Sample Size: 220 in cluster sampling, 76 in lot quality assurance sampling. Statistical Analysis: Percentages and proportions, chi-square test. Results: (1) Using cluster sampling, the percentages of completely immunized, partially immunized and unimmunized children were 84.09%, 14.09% and 1.82%, respectively. With lot quality assurance sampling, they were 92.11%, 6.58% and 1.31%, respectively. (2) Immunization coverage levels as evaluated by the cluster sampling technique were not statistically different from the coverage values obtained by the lot quality assurance sampling technique. Considering the time and resources required, lot quality assurance sampling was found to be the better technique for evaluating primary immunization coverage in an urban area.

  6. The Two-stage Constrained Equal Awards and Losses Rules for Multi-Issue Allocation Situation

    NARCIS (Netherlands)

    Lorenzo-Freire, S.; Casas-Mendez, B.; Hendrickx, R.L.P.

    2005-01-01

This paper considers two-stage solutions for multi-issue allocation situations. Characterisations are provided for the two-stage constrained equal awards and constrained equal losses rules, based on the properties of composition and path independence.

  7. Sample size adjustments for varying cluster sizes in cluster randomized trials with binary outcomes analyzed with second-order PQL mixed logistic regression.

    Science.gov (United States)

    Candel, Math J J M; Van Breukelen, Gerard J P

    2010-06-30

    Adjustments of sample size formulas are given for varying cluster sizes in cluster randomized trials with a binary outcome when testing the treatment effect with mixed effects logistic regression using second-order penalized quasi-likelihood estimation (PQL). Starting from first-order marginal quasi-likelihood (MQL) estimation of the treatment effect, the asymptotic relative efficiency of unequal versus equal cluster sizes is derived. A Monte Carlo simulation study shows this asymptotic relative efficiency to be rather accurate for realistic sample sizes, when employing second-order PQL. An approximate, simpler formula is presented to estimate the efficiency loss due to varying cluster sizes when planning a trial. In many cases sampling 14 per cent more clusters is sufficient to repair the efficiency loss due to varying cluster sizes. Since current closed-form formulas for sample size calculation are based on first-order MQL, planning a trial also requires a conversion factor to obtain the variance of the second-order PQL estimator. In a second Monte Carlo study, this conversion factor turned out to be 1.25 at most. (c) 2010 John Wiley & Sons, Ltd.
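
As a quick planning illustration of the efficiency loss discussed above, the snippet below evaluates the approximation RE ≈ 1 - CV²·λ(1-λ) with λ = nρ/(nρ + 1 - ρ), a first-order mixed-model result. Treat this as the flavor of "approximate, simpler formula" the abstract mentions rather than the authors' exact expression; the example numbers are made up.

```python
# Relative efficiency of unequal vs. equal cluster sizes and the
# corresponding "extra clusters" correction (first-order approximation).
def relative_efficiency(mean_size, cv, icc):
    lam = mean_size * icc / (mean_size * icc + 1 - icc)
    return 1 - cv**2 * lam * (1 - lam)

n, cv, rho = 20, 0.6, 0.05          # assumed mean size, size CV, ICC
re = relative_efficiency(n, cv, rho)
extra = 1 / re - 1
print(f"RE = {re:.3f}; sample ~{100 * extra:.1f}% more clusters to compensate")
# Since lambda(1-lambda) <= 0.25, the loss never exceeds CV^2 / 4.
```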

  8. Grouped fuzzy SVM with EM-based partition of sample space for clustered microcalcification detection.

    Science.gov (United States)

    Wang, Huiya; Feng, Jun; Wang, Hongyu

    2017-07-20

Detection of clustered microcalcifications (MCs) from mammograms plays an essential role in computer-aided diagnosis of early-stage breast cancer. To tackle problems associated with the diversity of data structures of MC lesions and the variability of normal breast tissues, multi-pattern sample space learning is required. In this paper, a novel grouped fuzzy Support Vector Machine (SVM) algorithm with sample space partition based on Expectation-Maximization (EM) (called G-FSVM) is proposed for clustered MC detection. The diversified pattern of training data is partitioned into several groups based on the EM algorithm. Then a series of fuzzy SVMs are integrated for classification, with each group of samples drawn from the MC lesions and normal breast tissues. From the DDSM database, a total of 1,064 suspicious regions are selected from 239 mammograms, and the measurements of Accuracy, True Positive Rate (TPR), False Positive Rate (FPR) and EVL = TPR*(1-FPR) are 0.82, 0.78, 0.14 and 0.72, respectively. The proposed method incorporates the merits of fuzzy SVM and multi-pattern sample space learning, decomposing the MC detection problem into a series of simple two-class classifications. Experimental results from synthetic data and the DDSM database demonstrate that our integrated classification framework reduces the false positive rate significantly while maintaining the true positive rate.
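
To make the two-step structure concrete, here is a minimal skeleton, assuming synthetic features and labels, that partitions the sample space with an EM-fitted Gaussian mixture and trains one classifier per group. A plain SVC stands in for the fuzzy SVM; everything here is a sketch, not the paper's implementation.

```python
# EM-based sample space partition followed by one classifier per group.
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(600, 10))                    # toy candidate-region features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)     # 1 = "MC lesion" (toy label)

gmm = GaussianMixture(n_components=3, random_state=0).fit(X)
groups = gmm.predict(X)                           # EM partition of sample space

models = {g: SVC(kernel="rbf").fit(X[groups == g], y[groups == g])
          for g in np.unique(groups)}             # one SVM per group

def predict(x):
    g = gmm.predict(x.reshape(1, -1))[0]          # route to the group's SVM
    return models[g].predict(x.reshape(1, -1))[0]

print(np.mean([predict(x) == t for x, t in zip(X[:50], y[:50])]))
```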

  9. Two Stage Secure Dynamic Load Balancing Architecture for SIP Server Clusters

    Directory of Open Access Journals (Sweden)

    G. Vennila

    2014-08-01

Full Text Available Session Initiation Protocol (SIP) is a signaling protocol that emerged with the aim of enhancing IP network capabilities for complex service provision. SIP server scalability with load balancing is of great concern owing to the dramatic increase in demand for SIP services. Load balancing of session methods (request/response) and security measures optimize the SIP server to regulate network traffic in Voice over Internet Protocol (VoIP). Establishing a honeywall prior to the load balancer significantly reduces SIP traffic and drops inbound malicious load. In this paper, we propose an Active Least Call in SIP Server (ALC_Server) algorithm that fulfills objectives such as congestion avoidance, improved response times, throughput, resource utilization, reduced server faults, scalability and protection of SIP calls from DoS attacks. From the test bed, the proposed two-tier architecture demonstrates that the ALC_Server method dynamically controls overload and provides robust security and uniform load distribution for SIP servers.

  10. Evaluation of primary immunization coverage of infants under universal immunization programme in an urban area of bangalore city using cluster sampling and lot quality assurance sampling techniques.

    Science.gov (United States)

    K, Punith; K, Lalitha; G, Suman; Bs, Pradeep; Kumar K, Jayanth

    2008-07-01

Is the LQAS technique better than the cluster sampling technique in terms of resources to evaluate immunization coverage in an urban area? To assess and compare lot quality assurance sampling against cluster sampling in the evaluation of primary immunization coverage. Population-based cross-sectional study. Areas under Mathikere Urban Health Center. Children aged 12 months to 23 months. 220 in cluster sampling, 76 in lot quality assurance sampling. Percentages and proportions, chi-square test. (1) Using cluster sampling, the percentages of completely immunized, partially immunized and unimmunized children were 84.09%, 14.09% and 1.82%, respectively. With lot quality assurance sampling, they were 92.11%, 6.58% and 1.31%, respectively. (2) Immunization coverage levels as evaluated by the cluster sampling technique were not statistically different from the coverage values obtained by the lot quality assurance sampling technique. Considering the time and resources required, lot quality assurance sampling was found to be the better technique for evaluating primary immunization coverage in an urban area.

  11. Planck/SDSS Cluster Mass and Gas Scaling Relations for a Volume-Complete redMaPPer Sample

    Science.gov (United States)

    Jimeno, Pablo; Diego, Jose M.; Broadhurst, Tom; De Martino, I.; Lazkoz, Ruth

    2018-04-01

Using Planck satellite data, we construct Sunyaev-Zel'dovich (SZ) gas pressure profiles for a large, volume-complete sample of optically selected clusters. We have defined a sample of over 8,000 redMaPPer clusters from the Sloan Digital Sky Survey (SDSS), within the volume-complete redshift region 0.100 ... We find a trend towards larger break radius with increasing cluster mass. Our SZ-based masses fall ~16% below the mass-richness relations from weak lensing, in a similar fashion to the "hydrostatic bias" associated with X-ray derived masses. Finally, we derive a tight Y500-M500 relation over a wide range of cluster mass, with a power-law slope equal to 1.70 ± 0.07, which agrees well with the independent slope obtained by the Planck team with an SZ-selected cluster sample, but extends to lower masses with higher precision.

  12. Cluster Analysis of the Yale Global Tic Severity Scale (YGTSS): Symptom Dimensions and Clinical Correlates in an Outpatient Youth Sample

    Science.gov (United States)

    Kircanski, Katharina; Woods, Douglas W.; Chang, Susanna W.; Ricketts, Emily J.; Piacentini, John C.

    2010-01-01

    Tic disorders are heterogeneous, with symptoms varying widely both within and across patients. Exploration of symptom clusters may aid in the identification of symptom dimensions of empirical and treatment import. This article presents the results of two studies investigating tic symptom clusters using a sample of 99 youth (M age = 10.7, 81% male,…

  13. clusters

    Indian Academy of Sciences (India)

    2017-09-27

Sep 27, 2017 ... Author for correspondence (zh4403701@126.com). MS received 15 ... lic clusters using density functional theory (DFT)-GGA of the DMOL3 package. ... In the process of geometric optimization, convergence thresholds ... and Postgraduate Research & Practice Innovation Program of Jiangsu Province ...

  14. clusters

    Indian Academy of Sciences (India)

environmental as well as technical problems during fuel gas utilization. ... adsorption on some alloys of Pd, namely PdAu, PdAg ... carried out on small neutral and charged Au24,26,27, Cu,28 ... study of Zanti et al.29 on Pdn (n = 1–9) clusters.

  15. CA II TRIPLET SPECTROSCOPY OF SMALL MAGELLANIC CLOUD RED GIANTS. III. ABUNDANCES AND VELOCITIES FOR A SAMPLE OF 14 CLUSTERS

    Energy Technology Data Exchange (ETDEWEB)

    Parisi, M. C.; Clariá, J. J.; Marcionni, N. [Observatorio Astronómico, Universidad Nacional de Córdoba, Laprida 854, Córdoba, CP 5000 (Argentina); Geisler, D.; Villanova, S. [Departamento de Astronomía, Universidad de Concepción Casilla 160-C, Concepción (Chile); Sarajedini, A. [Department of Astronomy, University of Florida P.O. Box 112055, Gainesville, FL 32611 (United States); Grocholski, A. J., E-mail: celeste@oac.uncor.edu, E-mail: claria@oac.uncor.edu, E-mail: nmarcionni@oac.uncor.edu, E-mail: dgeisler@astro-udec.cl, E-mail: svillanova@astro-udec.cl, E-mail: ata@astro.ufl.edu, E-mail: grocholski@phys.lsu.edu [Department of Physics and Astronomy, Louisiana State University 202 Nicholson Hall, Tower Drive, Baton Rouge, LA 70803-4001 (United States)

    2015-05-15

    We obtained spectra of red giants in 15 Small Magellanic Cloud (SMC) clusters in the region of the Ca ii lines with FORS2 on the Very Large Telescope. We determined the mean metallicity and radial velocity with mean errors of 0.05 dex and 2.6 km s{sup −1}, respectively, from a mean of 6.5 members per cluster. One cluster (B113) was too young for a reliable metallicity determination and was excluded from the sample. We combined the sample studied here with 15 clusters previously studied by us using the same technique, and with 7 clusters whose metallicities determined by other authors are on a scale similar to ours. This compilation of 36 clusters is the largest SMC cluster sample currently available with accurate and homogeneously determined metallicities. We found a high probability that the metallicity distribution is bimodal, with potential peaks at −1.1 and −0.8 dex. Our data show no strong evidence of a metallicity gradient in the SMC clusters, somewhat at odds with recent evidence from Ca ii triplet spectra of a large sample of field stars. This may be revealing possible differences in the chemical history of clusters and field stars. Our clusters show a significant dispersion of metallicities, whatever age is considered, which could be reflecting the lack of a unique age–metallicity relation in this galaxy. None of the chemical evolution models currently available in the literature satisfactorily represents the global chemical enrichment processes of SMC clusters.

  16. Two-Stage Classification Approach for Human Detection in Camera Video in Bulk Ports

    Directory of Open Access Journals (Sweden)

    Mi Chao

    2015-09-01

Full Text Available With the development of automation in ports, video surveillance systems with automated human detection have begun to be applied in open-air handling operation areas for safety and security. The accuracy of traditional human detection based on video cameras is not high enough to meet the requirements of operation surveillance. One key reason is that the Histograms of Oriented Gradients (HOG) features of the human body differ greatly between front-and-back standing (F&B) and side standing (Side) postures. Consequently, when HOG features are extracted directly from samples of different postures, the final classifier training gains only a few specific features that contribute to classification, which is insufficient to support effective classification. This paper proposes a two-stage classification method to improve the accuracy of human detection. In the first, preprocessing stage, images are divided into possible F&B human bodies and non-F&B human bodies; the latter are then passed to the second-stage classification, which distinguishes Side human bodies from non-humans. The experimental results in Tianjin port show that the two-stage classifier clearly improves the classification accuracy of human detection.

  17. Damage evolution analysis of coal samples under cyclic loading based on single-link cluster method

    Science.gov (United States)

    Zhang, Zhibo; Wang, Enyuan; Li, Nan; Li, Xuelong; Wang, Xiaoran; Li, Zhonghui

    2018-05-01

In this paper, the acoustic emission (AE) response of coal samples under cyclic loading is measured. The results show that there is a good positive relation between AE parameters and stress. The AE signal of coal samples under cyclic loading exhibits an obvious Kaiser effect. The single-link cluster (SLC) method is applied to analyze the spatial evolution characteristics of AE events and the damage evolution process of the coal samples. It is found that the subset scale of the SLC structure becomes smaller and smaller as the number of loading cycles increases, and that there is a negative linear relationship between the subset scale and the degree of damage. The spatial correlation length ξ of the SLC structure is calculated. The results show that ξ fluctuates around a certain value from the second to the fifth loading cycle, but increases clearly in the sixth. Based on the criterion of microcrack density, the failure process of a coal sample is a transformation from small-scale damage to large-scale damage, which explains the changes in the spatial correlation length. This systematic analysis shows the SLC method to be an effective way to study the damage evolution of coal samples under cyclic loading, and it will provide important reference values for studying coal bursts.
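
A minimal sketch of the SLC construction itself, assuming synthetic AE hypocenter coordinates and an arbitrary link-length cutoff; the "subset scale" and the correlation-length proxy below are simplified stand-ins for the paper's definitions.

```python
# Single-link clustering of AE event locations with scipy.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(3)
events = rng.uniform(0, 50, size=(120, 3))        # toy AE hypocenters (mm)

Z = linkage(events, method="single")              # single-link dendrogram
labels = fcluster(Z, t=8.0, criterion="distance") # cut at 8 mm link length

# Subset scale: size of the largest connected subset at this cutoff.
sizes = np.bincount(labels)[1:]
print("clusters:", labels.max(), "largest subset:", sizes.max())

# Correlation-length proxy (an assumption): mean link length in the tree.
print("mean link length:", Z[:, 2].mean())
```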

  18. Comparing cluster-level dynamic treatment regimens using sequential, multiple assignment, randomized trials: Regression estimation and sample size considerations.

    Science.gov (United States)

    NeCamp, Timothy; Kilbourne, Amy; Almirall, Daniel

    2017-08-01

    Cluster-level dynamic treatment regimens can be used to guide sequential treatment decision-making at the cluster level in order to improve outcomes at the individual or patient-level. In a cluster-level dynamic treatment regimen, the treatment is potentially adapted and re-adapted over time based on changes in the cluster that could be impacted by prior intervention, including aggregate measures of the individuals or patients that compose it. Cluster-randomized sequential multiple assignment randomized trials can be used to answer multiple open questions preventing scientists from developing high-quality cluster-level dynamic treatment regimens. In a cluster-randomized sequential multiple assignment randomized trial, sequential randomizations occur at the cluster level and outcomes are observed at the individual level. This manuscript makes two contributions to the design and analysis of cluster-randomized sequential multiple assignment randomized trials. First, a weighted least squares regression approach is proposed for comparing the mean of a patient-level outcome between the cluster-level dynamic treatment regimens embedded in a sequential multiple assignment randomized trial. The regression approach facilitates the use of baseline covariates which is often critical in the analysis of cluster-level trials. Second, sample size calculators are derived for two common cluster-randomized sequential multiple assignment randomized trial designs for use when the primary aim is a between-dynamic treatment regimen comparison of the mean of a continuous patient-level outcome. The methods are motivated by the Adaptive Implementation of Effective Programs Trial which is, to our knowledge, the first-ever cluster-randomized sequential multiple assignment randomized trial in psychiatry.
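
To fix ideas, the toy sketch below compares two embedded cluster-level dynamic treatment regimens in a simulated two-stage cluster SMART. The SMART structure, the inverse-randomization weights (2 for responding clusters, 4 for re-randomized ones) and the data generation are illustrative assumptions; the weighted means shown are a special case of the regression estimator described above, without the baseline covariates.

```python
# Weighted comparison of embedded cluster-level DTRs in a toy cluster SMART.
import numpy as np

rng = np.random.default_rng(7)
n_clusters, m = 40, 25                       # clusters, patients per cluster
a1 = rng.choice([-1, 1], n_clusters)         # first-stage cluster treatment
resp = rng.random(n_clusters) < 0.4          # cluster-level response status
a2 = np.where(resp, 0, rng.choice([-1, 1], n_clusters))  # second stage
w = np.where(resp, 2.0, 4.0)                 # inverse randomization probability

# Patient-level outcomes with a cluster random effect:
y = (0.3 * a1[:, None] + 0.2 * a2[:, None]
     + rng.normal(0, 0.5, (n_clusters, 1))   # shared cluster effect
     + rng.normal(0, 1.0, (n_clusters, m)))  # patient-level noise

def dtr_mean(a1_target):
    # Clusters consistent with DTR (a1_target, a2=+1); responders are
    # consistent by construction since they are never re-randomized.
    keep = (a1 == a1_target) & (resp | (a2 == 1))
    weights = np.repeat(w[keep], m)           # weight each patient
    return np.average(y[keep].ravel(), weights=weights)

print("DTR(+1,+1) vs DTR(-1,+1):", dtr_mean(1) - dtr_mean(-1))
```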

  19. Ca II TRIPLET SPECTROSCOPY OF SMALL MAGELLANIC CLOUD RED GIANTS. I. ABUNDANCES AND VELOCITIES FOR A SAMPLE OF CLUSTERS

    International Nuclear Information System (INIS)

    Parisi, M. C.; Claria, J. J.; Grocholski, A. J.; Geisler, D.; Sarajedini, A.

    2009-01-01

    We have obtained near-infrared spectra covering the Ca II triplet lines for a large number of stars associated with 16 Small Magellanic Cloud (SMC) clusters using the VLT + FORS2. These data compose the largest available sample of SMC clusters with spectroscopically derived abundances and velocities. Our clusters span a wide range of ages and provide good areal coverage of the galaxy. Cluster members are selected using a combination of their positions relative to the cluster center as well as their location in the color-magnitude diagram, abundances, and radial velocities (RVs). We determine mean cluster velocities to typically 2.7 km s -1 and metallicities to 0.05 dex (random errors), from an average of 6.4 members per cluster. By combining our clusters with previously published results, we compile a sample of 25 clusters on a homogeneous metallicity scale and with relatively small metallicity errors, and thereby investigate the metallicity distribution, metallicity gradient, and age-metallicity relation (AMR) of the SMC cluster system. For all 25 clusters in our expanded sample, the mean metallicity [Fe/H] = -0.96 with σ = 0.19. The metallicity distribution may possibly be bimodal, with peaks at ∼-0.9 dex and -1.15 dex. Similar to the Large Magellanic Cloud (LMC), the SMC cluster system gives no indication of a radial metallicity gradient. However, intermediate age SMC clusters are both significantly more metal-poor and have a larger metallicity spread than their LMC counterparts. Our AMR shows evidence for three phases: a very early (>11 Gyr) phase in which the metallicity reached ∼-1.2 dex, a long intermediate phase from ∼10 to 3 Gyr in which the metallicity only slightly increased, and a final phase from 3 to 1 Gyr ago in which the rate of enrichment was substantially faster. We find good overall agreement with the model of Pagel and Tautvaisiene, which assumes a burst of star formation at 4 Gyr. Finally, we find that the mean RV of the cluster system

  20. A two-stage flow-based intrusion detection model for next-generation networks.

    Science.gov (United States)

    Umer, Muhammad Fahad; Sher, Muhammad; Bi, Yaxin

    2018-01-01

    The next-generation network provides state-of-the-art access-independent services over converged mobile and fixed networks. Security in the converged network environment is a major challenge. Traditional packet and protocol-based intrusion detection techniques cannot be used in next-generation networks due to slow throughput, low accuracy and their inability to inspect encrypted payload. An alternative solution for protection of next-generation networks is to use network flow records for detection of malicious activity in the network traffic. The network flow records are independent of access networks and user applications. In this paper, we propose a two-stage flow-based intrusion detection system for next-generation networks. The first stage uses an enhanced unsupervised one-class support vector machine which separates malicious flows from normal network traffic. The second stage uses a self-organizing map which automatically groups malicious flows into different alert clusters. We validated the proposed approach on two flow-based datasets and obtained promising results.
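
As a skeleton of the two stages described above, assuming synthetic flow features: stage one flags anomalous flows with a one-class SVM trained on normal traffic, and stage two groups the flagged flows with a small self-organizing map trained with winner-only updates (a simplification of a full SOM with a neighborhood function).

```python
# Two-stage flow IDS sketch: one-class SVM, then a tiny SOM for alert groups.
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
normal = rng.normal(0, 1, (1000, 5))            # toy flow features
test = np.vstack([rng.normal(0, 1, (200, 5)),   # normal flows
                  rng.normal(4, 1, (50, 5))])   # shifted "attack" flows

stage1 = OneClassSVM(nu=0.05, kernel="rbf").fit(normal)
malicious = test[stage1.predict(test) == -1]    # -1 = anomalous flow

# Stage 2: 3x3 grid of SOM units, on-line winner-only updates.
grid = rng.normal(0, 1, (9, 5))
for t, x in enumerate(np.tile(malicious, (20, 1))):
    lr = 0.5 * np.exp(-t / 500)                  # decaying learning rate
    bmu = np.argmin(((grid - x) ** 2).sum(1))    # best-matching unit
    grid[bmu] += lr * (x - grid[bmu])            # pull winner toward sample

alerts = np.argmin(((malicious[:, None] - grid[None]) ** 2).sum(2), axis=1)
print("flagged:", len(malicious), "alert clusters:", len(np.unique(alerts)))
```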

  1. Chandra Cluster Cosmology Project. II. Samples and X-Ray Data Reduction

    DEFF Research Database (Denmark)

    Vikhlinin, A.; Burenin, R. A.; Ebeling, H.

    2009-01-01

We discuss the measurements of the galaxy cluster mass functions at z ≈ 0.05 and z ≈ 0.5 using high-quality Chandra observations of samples derived from the ROSAT PSPC All-Sky and 400 deg2 surveys. We provide a full reference for the data analysis procedures, present updated calibration of relati... at a fixed mass threshold, e.g., by a factor of 5.0 ± 1.2 at M500 = 2.5 × 10^14 h^-1 Msun between z = 0 and 0.5. This evolution reflects the growth of density perturbations, and can be used for the cosmological constraints complementing those from the distance-redshift relation.

  2. The use of cluster sampling to determine aid needs in Grozny, Chechnya in 1995.

    Science.gov (United States)

    Drysdale, S; Howarth, J; Powell, V; Healing, T

    2000-09-01

    War broke out in Chechnya in November 1994 following a three-year economic blockade. It caused widespread destruction in the capital Grozny. In April 1995 Medical Relief International--or Merlin, a British medical non-governmental organisation (NGO)--began a programme to provide medical supplies, support health centres, control communicable disease and promote preventive health-care in Grozny. In July 1995 the agency undertook a city-wide needs assessment using a modification of the cluster sampling technique developed by the Expanded Programme on Immunisation. This showed that most people had enough drinking-water, food and fuel but that provision of medical care was inadequate. The survey allowed Merlin to redirect resources earmarked for a clean water programme towards health education and improving primary health-care services. It also showed that rapid assessment by a statistically satisfactory method is both possible and useful in such a situation.
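
For context on the method family, here is a generic sketch of systematic probability-proportional-to-size (PPS) selection of 30 clusters, the first stage of EPI-style surveys like the modified one above; the area populations are made-up numbers.

```python
# Systematic PPS selection of 30 clusters from area population counts.
import numpy as np

rng = np.random.default_rng(2024)
pop = rng.integers(500, 20_000, size=80)        # toy population per area
k = 30                                           # clusters to select

cum = np.cumsum(pop)
interval = cum[-1] / k                           # systematic sampling interval
start = rng.uniform(0, interval)                 # random start
hits = start + interval * np.arange(k)
clusters = np.searchsorted(cum, hits)            # area index for each hit
print(np.unique(clusters, return_counts=True))   # large areas may repeat
```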

  3. Simultaneous alignment and clustering of peptide data using a Gibbs sampling approach

    DEFF Research Database (Denmark)

    Andreatta, Massimo; Lund, Ole; Nielsen, Morten

    2013-01-01

    Motivation: Proteins recognizing short peptide fragments play a central role in cellular signaling. As a result of high-throughput technologies, peptide-binding protein specificities can be studied using large peptide libraries at dramatically lower cost and time. Interpretation of such large...... peptide datasets, however, is a complex task, especially when the data contain multiple receptor binding motifs, and/or the motifs are found at different locations within distinct peptides.Results: The algorithm presented in this article, based on Gibbs sampling, identifies multiple specificities...... of unaligned peptide datasets of variable length. Example applications described in this article include mixtures of binders to different MHC class I and class II alleles, distinct classes of ligands for SH3 domains and sub-specificities of the HLA-A*02:01 molecule.Availability: The Gibbs clustering method...

  4. Optics of two-stage photovoltaic concentrators with dielectric second stages

    Science.gov (United States)

    Ning, Xiaohui; O'Gallagher, Joseph; Winston, Roland

    1987-04-01

    Two-stage photovoltaic concentrators with Fresnel lenses as primaries and dielectric totally internally reflecting nonimaging concentrators as secondaries are discussed. The general design principles of such two-stage systems are given. Their optical properties are studied and analyzed in detail using computer ray trace procedures. It is found that the two-stage concentrator offers not only a higher concentration or increased acceptance angle, but also a more uniform flux distribution on the photovoltaic cell than the point focusing Fresnel lens alone. Experimental measurements with a two-stage prototype module are presented and compared to the analytical predictions.

  5. Two-stage model of development of heterogeneous uranium-lead systems in zircon

    International Nuclear Information System (INIS)

    Mel'nikov, N.N.; Zevchenkov, O.A.

    1985-01-01

The behaviour of the isotope systems of multiphase zircons subjected to two-stage disturbance is considered. The calculations show that linear correlations on the concordia diagram can be explained by two-stage opening of the U-Pb systems of cogenetic zircons if zircon is treated as physically heterogeneous, with its different parts losing different proportions of the accumulated radiogenic lead. The "metamorphism ages" given by such two-stage-opened zircons are intermediate and have no geochronological significance, while the "crystallization ages" remain rather close to the real ones. Two-stage-opened zircons can in some cases be diagnosed by the discordance of their crystal component

  6. Optics of two-stage photovoltaic concentrators with dielectric second stages.

    Science.gov (United States)

    Ning, X; O'Gallagher, J; Winston, R

    1987-04-01

    Two-stage photovoltaic concentrators with Fresnel lenses as primaries and dielectric totally internally reflecting nonimaging concentrators as secondaries are discussed. The general design principles of such two-stage systems are given. Their optical properties are studied and analyzed in detail using computer ray trace procedures. It is found that the two-stage concentrator offers not only a higher concentration or increased acceptance angle, but also a more uniform flux distribution on the photovoltaic cell than the point focusing Fresnel lens alone. Experimental measurements with a two-stage prototype module are presented and compared to the analytical predictions.

  7. Tumor producing fibroblast growth factor 23 localized by two-staged venous sampling.

    NARCIS (Netherlands)

    Boekel, G.A.J van; Ruinemans-Koerts, J.; Joosten, F.; Dijkhuizen, P.; Sorge, A.A. van; Boer, H. de

    2008-01-01

    BACKGROUND: Tumor-induced osteomalacia is a rare paraneoplastic syndrome characterized by hypophosphatemia, renal phosphate wasting, suppressed 1,25-dihydroxyvitamin D production, and osteomalacia. It is caused by a usually benign mesenchymal tumor producing fibroblast growth factor 23 (FGF-23).

  8. The Gemini/HST Galaxy Cluster Project: Redshift 0.2–1.0 Cluster Sample, X-Ray Data, and Optical Photometry Catalog

    Science.gov (United States)

    Jørgensen, Inger; Chiboucas, Kristin; Hibon, Pascale; Nielsen, Louise D.; Takamiya, Marianne

    2018-04-01

The Gemini/HST Galaxy Cluster Project (GCP) covers 14 z = 0.2–1.0 clusters with X-ray luminosity of L500 ≥ 10^44 erg s^-1 in the 0.1–2.4 keV band. In this paper, we provide homogeneously calibrated X-ray luminosities, masses, and radii, and we present the complete catalog of the ground-based photometry for the GCP clusters. The clusters were observed with either Gemini North or South in three or four of the optical passbands g′, r′, i′, and z′. The photometric catalog includes consistently calibrated total magnitudes, colors, and geometrical parameters. The photometry reaches ≈25 mag in the passband closest to the rest-frame B band. We summarize comparisons of our photometry with data from the Sloan Digital Sky Survey. We describe the sample selection for our spectroscopic observations, and establish the calibrations to obtain rest-frame magnitudes and colors. Finally, we derive the color–magnitude relations for the clusters, and briefly discuss these in the context of evolution with redshift. Consistent with our results based on spectroscopic data, the color–magnitude relations support passive evolution of the red sequence galaxies. The absence of change in the slope with redshift constrains the allowable age variation along the red sequence to <0.05 dex between the brightest cluster galaxies and those four magnitudes fainter. This paper serves as the main reference for the GCP cluster and galaxy selection, X-ray data, and ground-based photometry.

  9. Cluster Analysis of the Yale Global Tic Severity Scale (YGTSS): Symptom Dimensions and Clinical Correlates in an Outpatient Youth Sample

    OpenAIRE

    Kircanski, Katharina; Woods, Douglas W.; Chang, Susanna W.; Ricketts, Emily J.; Piacentini, John C.

    2010-01-01

Tic disorders are heterogeneous, with symptoms varying widely both within and across patients. Exploration of symptom clusters may aid in the identification of symptom dimensions of empirical and treatment import. This article presents the results of two studies investigating tic symptom clusters using a sample of 99 youth (M age = 10.7, 81% male, 77% Caucasian) diagnosed with a primary tic disorder (Tourette's disorder or chronic tic disorder), across two university-based outpatient clinics ...

  10. One-stage versus two-stage exchange arthroplasty for infected total knee arthroplasty: a systematic review.

    Science.gov (United States)

    Nagra, Navraj S; Hamilton, Thomas W; Ganatra, Sameer; Murray, David W; Pandit, Hemant

    2016-10-01

Infection complicating total knee arthroplasty (TKA) has serious implications. Traditionally the debate on whether one- or two-stage exchange arthroplasty is the optimum management of infected TKA has favoured two-stage procedures; however, a paradigm shift in opinion is emerging. This study aimed to establish whether current evidence supports one-stage revision for managing infected TKA, based on reinfection rates and functional outcomes post-surgery. MEDLINE/PubMed and CENTRAL databases were reviewed for studies that compared one- and two-stage exchange arthroplasty TKA in more than ten patients with a minimum 2-year follow-up. From an initial sample of 796, five cohort studies with a total of 231 patients (46 single-stage/185 two-stage; median patient age 66 years, range 61-71 years) met the inclusion criteria. Overall, there were no significant differences in risk of reinfection following one- or two-stage exchange arthroplasty (OR -0.06, 95% confidence interval -0.13, 0.01). Subgroup analysis revealed that in studies published since 2000, one-stage procedures have a significantly lower reinfection rate. One study investigated functional outcomes and reported that one-stage surgery was associated with superior functional outcomes. Scarcity of data, inconsistent study designs, and disparities in surgical technique and antibiotic regimes limit the recommendations that can be made. Recent studies suggest that one-stage exchange arthroplasty may provide superior outcomes, including lower reinfection rates and superior function, in select patients. Clinically, for some patients, one-stage exchange arthroplasty may represent the optimum treatment; however, patient selection criteria and key components of surgical and post-operative anti-microbial management remain to be defined. Level III.

  11. Cluster Sampling Bias in Government-Sponsored Evaluations: A Correlational Study of Employment and Welfare Pilots in England.

    Science.gov (United States)

    Vaganay, Arnaud

    2016-01-01

    For pilot or experimental employment programme results to apply beyond their test bed, researchers must select 'clusters' (i.e. the job centres delivering the new intervention) that are reasonably representative of the whole territory. More specifically, this requirement must account for conditions that could artificially inflate the effect of a programme, such as the fluidity of the local labour market or the performance of the local job centre. Failure to achieve representativeness results in Cluster Sampling Bias (CSB). This paper makes three contributions to the literature. Theoretically, it approaches the notion of CSB as a human behaviour. It offers a comprehensive theory, whereby researchers with limited resources and conflicting priorities tend to oversample 'effect-enhancing' clusters when piloting a new intervention. Methodologically, it advocates for a 'narrow and deep' scope, as opposed to the 'wide and shallow' scope, which has prevailed so far. The PILOT-2 dataset was developed to test this idea. Empirically, it provides evidence on the prevalence of CSB. In conditions similar to the PILOT-2 case study, investigators (1) do not sample clusters with a view to maximise generalisability; (2) do not oversample 'effect-enhancing' clusters; (3) consistently oversample some clusters, including those with higher-than-average client caseloads; and (4) report their sampling decisions in an inconsistent and generally poor manner. In conclusion, although CSB is prevalent, it is still unclear whether it is intentional and meant to mislead stakeholders about the expected effect of the intervention or due to higher-level constraints or other considerations.

  12. Possible two-stage /sup 87/Sr evolution in the Stockdale Rhyolite

    Energy Technology Data Exchange (ETDEWEB)

    Compston, W.; McDougall, I. (Australian National Univ., Canberra. Research School of Earth Sciences); Wyborn, D. (Department of Minerals and Energy, Canberra (Australia). Bureau of Mineral Resources)

    1982-12-01

    The Rb-Sr total-rock data for the Stockdale Rhyolite, of significance for the Palaeozoic time scale, are more scattered about a single-stage isochron than expected from experimental error. Two-stage /sup 87/Sr evolution for several of the samples is explored to explain this, as an alternative to variation in the initial /sup 87/Sr//sup 86/Sr which is customarily used in single-stage dating models. The deletion of certain samples having very high Rb/Sr removes most of the excess scatter and leads to an estimate of 430 +- 7 m.y. for the age of extrusion. There is a younger alignment of Rb-Sr data within each sampling site at 412 +- 7 m.y. We suggest that the Stockdale Rhyolite is at least 430 m.y. old, that its original range in Rb/Sr was smaller than now observed, and that it experienced a net loss in Sr during later hydrothermal alteration at ca. 412 m.y.

  13. Possible two-stage 87Sr evolution in the Stockdale Rhyolite

    International Nuclear Information System (INIS)

    Compston, W.; McDougall, I.; Wyborn, D.

    1982-01-01

    The Rb-Sr total-rock data for the Stockdale Rhyolite, of significance for the Palaeozoic time scale, are more scattered about a single-stage isochron than expected from experimental error. Two-stage 87 Sr evolution for several of the samples is explored to explain this, as an alternative to variation in the initial 87 Sr/ 86 Sr which is customarily used in single-stage dating models. The deletion of certain samples having very high Rb/Sr removes most of the excess scatter and leads to an estimate of 430 +- 7 m.y. for the age of extrusion. There is a younger alignment of Rb-Sr data within each sampling site at 412 +- 7 m.y. We suggest that the Stockdale Rhyolite is at least 430 m.y. old, that its original range in Rb/Sr was smaller than now observed, and that it experienced a net loss in Sr during later hydrothermal alteration at ca. 412 m.y. (orig.)

  14. Influence of Cu(NO3)2 initiation additive in two-stage mode conditions of coal pyrolytic decomposition

    Directory of Open Access Journals (Sweden)

    Larionov Kirill

    2017-01-01

Full Text Available The two-stage (pyrolysis and oxidation) pyrolytic decomposition of a brown coal sample with a Cu(NO3)2 additive was studied. The additive was introduced by the capillary wetness impregnation method at 5% mass concentration. Sample reactivity was studied by thermogravimetric analysis with staged gaseous medium supply (argon, then air) at a heating rate of 10 °C/min and intermediate isothermal soaking. Introduction of the initiating additive was found to significantly reduce the volatile release temperature and accelerate the thermal decomposition of the sample. Mass-spectral analysis reveals that the significant difference in process characteristics is connected to the volatile matter release stage, which is initiated by nitrous oxide produced during copper nitrate decomposition.

  15. The X-ray luminosity-temperature relation of a complete sample of low-mass galaxy clusters

    DEFF Research Database (Denmark)

    Zou, S.; Maughan, B. J.; Giles, P. A.

    2016-01-01

    found for massive clusters to a steeper slope for the lower mass sample studied here. Thanks to our rigorous treatment of selection biases, these measurements provide a robust reference against which to compare predictions of models of the impact of feedback on the X-ray properties of galaxy groups....... (T), taking selection biases fully into account. The logarithmic slope of the bolometric L-T relation was found to be 3.29 ± 0.33, consistent with values typically found for samples of more massive clusters. In combination with other recent studies of the L-T relation, we show...

  16. Evaluation of immunization coverage by lot quality assurance sampling compared with 30-cluster sampling in a primary health centre in India.

    OpenAIRE

    Singh, J.; Jain, D. C.; Sharma, R. S.; Verghese, T.

    1996-01-01

    The immunization coverage of infants, children and women residing in a primary health centre (PHC) area in Rajasthan was evaluated both by lot quality assurance sampling (LQAS) and by the 30-cluster sampling method recommended by WHO's Expanded Programme on Immunization (EPI). The LQAS survey was used to classify 27 mutually exclusive subunits of the population, defined as residents in health subcentre areas, on the basis of acceptable or unacceptable levels of immunization coverage among inf...

  17. Two-stage exchange knee arthroplasty: does resistance of the infecting organism influence the outcome?

    Science.gov (United States)

    Kurd, Mark F; Ghanem, Elie; Steinbrecher, Jill; Parvizi, Javad

    2010-08-01

    Periprosthetic joint infection after TKA is a challenging complication. Two-stage exchange arthroplasty is the accepted standard of care, but reported failure rates are increasing. It has been suggested this is due to the increased prevalence of methicillin-resistant infections. We asked the following questions: (1) What is the reinfection rate after two-stage exchange arthroplasty? (2) Which risk factors predict failure? (3) Which variables are associated with acquiring a resistant organism periprosthetic joint infection? This was a case-control study of 102 patients with infected TKA who underwent a two-stage exchange arthroplasty. Ninety-six patients were followed for a minimum of 2 years (mean, 34.5 months; range, 24-90.1 months). Cases were defined as failures of two-stage exchange arthroplasty. Two-stage exchange arthroplasty was successful in controlling the infection in 70 patients (73%). Patients who failed two-stage exchange arthroplasty were 3.37 times more likely to have been originally infected with a methicillin-resistant organism. Older age, higher body mass index, and history of thyroid disease were predisposing factors to infection with a methicillin-resistant organism. Innovative interventions are needed to improve the effectiveness of two-stage exchange arthroplasty for TKA infection with a methicillin-resistant organism as current treatment protocols may not be adequate for control of these virulent pathogens. Level IV, prognostic study. See Guidelines for Authors for a complete description of levels of evidence.

  18. Maximally efficient two-stage screening: Determining intellectual disability in Taiwanese military conscripts.

    Science.gov (United States)

    Chien, Chia-Chang; Huang, Shu-Fen; Lung, For-Wey

    2009-01-27

The purpose of this study was to apply a two-stage screening method to large-scale intelligence screening of military conscripts. We recruited 99 conscripted soldiers whose educational level was senior high school or lower as participants. Every participant took the Wisconsin Card Sorting Test (WCST) and the Wechsler Adult Intelligence Scale-Revised (WAIS-R). Logistic regression analysis showed that the conceptual level responses (CLR) index of the WCST was the most significant index for determining intellectual disability (ID; FIQ ≤ 84). We used the receiver operating characteristic (ROC) curve to determine the optimum cut-off point of CLR. The optimum single cut-off point of CLR was 66; the two cut-off points were 49 and 66. Comparing the two-stage window screening with the two-stage positive screening, the area under the curve and the positive predictive value increased; moreover, the cost of the two-stage window screening decreased by 59%. The two-stage window screening is thus more accurate and economical than the two-stage positive screening. Our results provide an example of the use of two-stage screening and of the possibility for the WCST to replace the WAIS-R in large-scale screening for ID in the future.
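
A small sketch of the cut-off machinery referenced above: choosing a single threshold from an ROC curve via the Youden index, then applying a two-cut-off "window" rule. The synthetic score distributions are assumptions; only the 49/66 window echoes the abstract.

```python
# ROC-based single cut-off (Youden's J) and a two-cut-off window rule.
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(5)
clr_id = rng.normal(45, 12, 40)                 # toy CLR scores, ID group
clr_ok = rng.normal(75, 12, 160)                # toy CLR scores, non-ID group
scores = np.concatenate([clr_id, clr_ok])
labels = np.concatenate([np.ones(40), np.zeros(160)])   # 1 = ID

fpr, tpr, thr = roc_curve(labels, -scores)      # low CLR indicates ID
best = np.argmax(tpr - fpr)                     # maximize Youden's J
print("single cut-off at CLR =", -thr[best])

lo, hi = 49, 66                                 # window from the abstract
refer = scores < lo                             # classified ID directly
window = (scores >= lo) & (scores < hi)         # sent to second-stage WAIS-R
print("direct:", refer.sum(), "second-stage WAIS-R:", window.sum())
```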

  19. UV TO FAR-IR CATALOG OF A GALAXY SAMPLE IN NEARBY CLUSTERS: SPECTRAL ENERGY DISTRIBUTIONS AND ENVIRONMENTAL TRENDS

    Energy Technology Data Exchange (ETDEWEB)

    Hernandez-Fernandez, Jonathan D.; Iglesias-Paramo, J.; Vilchez, J. M., E-mail: jonatan@iaa.es [Instituto de Astrofisica de Andalucia, Glorieta de la Astronomia s/n, 18008 Granada (Spain)

    2012-03-01

In this paper, we present a sample of cluster galaxies devoted to studying the environmental influence on star formation activity. The sample galaxies inhabit clusters showing a rich variety of characteristics; they have been observed by SDSS-DR6 down to M{sub B} {approx} -18, and by the Galaxy Evolution Explorer AIS throughout sky regions corresponding to several megaparsecs. We assign broadband and emission-line fluxes from the ultraviolet to the far-infrared to each galaxy, producing an accurate spectral energy distribution for spectral fitting analysis. The clusters follow the general X-ray luminosity versus velocity dispersion trend of L{sub X} {proportional_to} {sigma}{sup 4.4}{sub c}. The analysis of the distributions of galaxy density, counted up to the 5th nearest neighbor {Sigma}{sub 5}, shows: (1) the virial regions and the cluster outskirts share a common range in the high-density part of the distribution, which can be attributed to the presence of massive galaxy structures in the surroundings of the virial regions; (2) the virial regions of massive clusters ({sigma}{sub c} > 550 km s{sup -1}) present a {Sigma}{sub 5} distribution statistically distinguishable ({approx}96%) from the corresponding distribution of low-mass clusters ({sigma}{sub c} < 550 km s{sup -1}). Both massive and low-mass clusters follow a similar density-radius trend, but the low-mass clusters avoid the high-density extreme. We illustrate, with ABELL 1185, the environmental trends of galaxy populations. Maps of sky-projected galaxy density show how low-luminosity star-forming galaxies are distributed along more spread-out structures than their giant counterparts, whereas low-luminosity passive galaxies avoid the low-density environment. Giant passive and star-forming galaxies share rather similar sky regions, with passive galaxies exhibiting more concentrated distributions.

  20. Cluster-sample surveys and lot quality assurance sampling to evaluate yellow fever immunisation coverage following a national campaign, Bolivia, 2007.

    Science.gov (United States)

    Pezzoli, Lorenzo; Pineda, Silvia; Halkyer, Percy; Crespo, Gladys; Andrews, Nick; Ronveaux, Olivier

    2009-03-01

    To estimate the yellow fever (YF) vaccine coverage for the endemic and non-endemic areas of Bolivia and to determine whether selected districts had acceptable levels of coverage (>70%), we conducted two surveys of 600 individuals (25 x 12 clusters) to estimate coverage in the endemic and non-endemic areas. We assessed 11 districts using lot quality assurance sampling (LQAS). The lot (district) sample was 35 individuals with a decision value of six (alpha error 6% if true coverage is 70%; beta error 6% if true coverage is 90%). To increase feasibility, we divided the lots into five clusters of seven individuals; to investigate the effect of clustering, we calculated alpha and beta by conducting simulations where each cluster's true coverage was sampled from a normal distribution with a mean of 70% or 90% and standard deviations of 5% or 10%. Estimated coverage was 84.3% (95% CI: 78.9-89.7) in endemic areas, 86.8% (82.5-91.0) in non-endemic areas and 86.0% (82.8-89.1) nationally. LQAS showed that four lots had unacceptable coverage levels. In six lots, results were inconsistent with the estimated administrative coverage. The simulations suggested that the effect of clustering the lots is unlikely to have significantly increased the risk of making incorrect accept/reject decisions. Estimated YF coverage was high. Discrepancies between administrative coverage and LQAS results may be due to incorrect population data. Even allowing for clustering in LQAS, the statistical errors would remain low. Catch-up campaigns are recommended in districts with unacceptable coverage.
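    A minimal sketch of the clustered-lot simulation described above (Python; the decision rule "accept the lot if at most six of the 35 sampled individuals are unvaccinated" and the truncation of the normal draws to [0, 1] are assumptions made here, not details given in the record):

        import random

        def lot_accept_prob(mean_cov, sd, n_clusters=5, per_cluster=7,
                            max_failures=6, trials=100_000):
            # Probability that a lot is accepted when each cluster's true
            # coverage is drawn from Normal(mean_cov, sd), truncated to [0, 1].
            accepted = 0
            for _ in range(trials):
                unvaccinated = 0
                for _ in range(n_clusters):
                    p = min(max(random.gauss(mean_cov, sd), 0.0), 1.0)
                    unvaccinated += sum(random.random() >= p
                                        for _ in range(per_cluster))
                if unvaccinated <= max_failures:
                    accepted += 1
            return accepted / trials

        # Alpha error: accepting a lot whose true coverage is only 70%.
        print("alpha ~", lot_accept_prob(0.70, 0.10))
        # Beta error: rejecting a lot whose true coverage is 90%.
        print("beta  ~", 1 - lot_accept_prob(0.90, 0.10))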

  1. Clinical evaluation of nonsyndromic dental anomalies in Dravidian population: A cluster sample analysis.

    Science.gov (United States)

    Yamunadevi, Andamuthu; Selvamani, M; Vinitha, V; Srivandhana, R; Balakrithiga, M; Prabhu, S; Ganapathy, N

    2015-08-01

    To record the prevalence rate of dental anomalies in the Dravidian population and analyze the percentage of individual anomalies, a cluster sample analysis was done in which 244 subjects studying in a dental institution were all included and examined clinically for dental anomalies, excluding third molars from the analysis. Overall, 31.55% of the study subjects had dental anomalies; shape anomalies were most prevalent (22.1%), followed by size (8.6%), number (3.2%) and position anomalies (0.4%). Retained deciduous teeth were seen in 1.63%. Among the individual anomalies, Talon's cusp (TC) was seen predominantly (14.34%), followed by microdontia (6.6%) and supernumerary cusps (5.73%). The prevalence rate of dental anomalies in the Dravidian population is thus 31.55% in the present study, exclusive of third molars. Shape anomalies are the most common, and TC is the most commonly noted anomaly. Varying prevalence rates are reported in different geographical regions of the world.

  2. ELEMENTAL ABUNDANCE RATIOS IN STARS OF THE OUTER GALACTIC DISK. IV. A NEW SAMPLE OF OPEN CLUSTERS

    International Nuclear Information System (INIS)

    Yong, David; Carney, Bruce W.; Friel, Eileen D.

    2012-01-01

    We present radial velocities and chemical abundances for nine stars in the old, distant open clusters Be18, Be21, Be22, Be32, and PWM4. For Be18 and PWM4, these are the first chemical abundance measurements. Combining our data with literature results produces a compilation of some 68 chemical abundance measurements in 49 unique clusters. For this combined sample, we study the chemical abundances of open clusters as a function of distance, age, and metallicity. We confirm that the metallicity gradient in the outer disk is flatter than the gradient in the vicinity of the solar neighborhood. We also confirm that the open clusters in the outer disk are metal-poor with enhancements in the ratios [α/Fe] and perhaps [Eu/Fe]. All elements show negligible or small trends between [X/Fe] and distance, but for some elements, there is a hint that the local (R{sub GC} < 13 kpc) and distant (R{sub GC} > 13 kpc) samples may have different trends with distance. There is no evidence for significant abundance trends versus age. We measure the linear relation between [X/Fe] and metallicity, [Fe/H], and find that the scatter about the mean trend is comparable to the measurement uncertainties. Comparison with solar neighborhood field giants shows that the open clusters share similar abundance ratios [X/Fe] at a given metallicity. While the flattening of the metallicity gradient and enhanced [α/Fe] ratios in the outer disk suggest a chemical enrichment history different from that of the solar neighborhood, we echo the sentiments expressed by Friel et al. that definitive conclusions await homogeneous analyses of larger samples of stars in larger numbers of clusters. Arguably, our understanding of the evolution of the outer disk from open clusters is currently limited by systematic abundance differences between various studies.

  3. Design considerations for single-stage and two-stage pneumatic pellet injectors

    International Nuclear Information System (INIS)

    Gouge, M.J.; Combs, S.K.; Fisher, P.W.; Milora, S.L.

    1988-09-01

    Performance of single-stage pneumatic pellet injectors is compared with several models for one-dimensional, compressible fluid flow. Agreement is quite good for models that reflect actual breech chamber geometry and incorporate nonideal effects such as gas friction. Several methods of improving the performance of single-stage pneumatic pellet injectors in the near term are outlined. The design and performance of two-stage pneumatic pellet injectors are discussed, and initial data from the two-stage pneumatic pellet injector test facility at Oak Ridge National Laboratory are presented. Finally, a concept for a repeating two-stage pneumatic pellet injector is described. 27 refs., 8 figs., 3 tabs

  4. Edge Principal Components and Squash Clustering: Using the Special Structure of Phylogenetic Placement Data for Sample Comparison

    Science.gov (United States)

    Matsen IV, Frederick A.; Evans, Steven N.

    2013-01-01

    Principal components analysis (PCA) and hierarchical clustering are two of the most heavily used techniques for analyzing the differences between nucleic acid sequence samples taken from a given environment. They have led to many insights regarding the structure of microbial communities. We have developed two new complementary methods that leverage how this microbial community data sits on a phylogenetic tree. Edge principal components analysis enables the detection of important differences between samples that contain closely related taxa. Each principal component axis is a collection of signed weights on the edges of the phylogenetic tree, and these weights are easily visualized by a suitable thickening and coloring of the edges. Squash clustering outputs a (rooted) clustering tree in which each internal node corresponds to an appropriate “average” of the original samples at the leaves below the node. Moreover, the length of an edge is a suitably defined distance between the averaged samples associated with the two incident nodes, rather than the less interpretable average of distances produced by UPGMA, the most widely used hierarchical clustering method in this context. We present these methods and illustrate their use with data from the human microbiome. PMID:23505415
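    The contrast with UPGMA can be made concrete with a toy sketch (Python; generic feature vectors stand in for the phylogenetic mass distributions the authors actually use, so this only illustrates the averaging order, not the published method):

        import numpy as np

        def dist(a, b):
            return float(np.linalg.norm(a - b))

        # Three samples described by toy feature vectors (e.g. edge masses).
        A, B, C = np.array([1.0, 0.0]), np.array([0.9, 0.1]), np.array([0.0, 1.0])

        # Squash-style: merge A and B into their average, then measure the
        # distance from that averaged sample to C.
        d_squash = dist((A + B) / 2, C)

        # UPGMA-style: average the distances d(A, C) and d(B, C) instead.
        d_upgma = (dist(A, C) + dist(B, C)) / 2

        print(d_squash, d_upgma)  # generally not equal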

  5. Hydrogen production from cellulose in a two-stage process combining fermentation and electrohydrogenesis

    KAUST Repository

    Lalaurette, Elodie; Thammannagowda, Shivegowda; Mohagheghi, Ali; Maness, Pin-Ching; Logan, Bruce E.

    2009-01-01

    A two-stage dark-fermentation and electrohydrogenesis process was used to convert the recalcitrant lignocellulosic materials into hydrogen gas at high yields and rates. Fermentation using Clostridium thermocellum produced 1.67 mol H2/mol-glucose

  6. Lingual mucosal graft two-stage Bracka technique for redo hypospadias repair

    Directory of Open Access Journals (Sweden)

    Ahmed Sakr

    2017-09-01

    Conclusion: Lingual mucosa is a reliable and versatile graft material in the armamentarium of two-stage Bracka hypospadias repair with the merits of easy harvesting and minor donor-site complications.

  7. Comparative effectiveness of one-stage versus two-stage basilic vein transposition arteriovenous fistulas.

    Science.gov (United States)

    Ghaffarian, Amir A; Griffin, Claire L; Kraiss, Larry W; Sarfati, Mark R; Brooke, Benjamin S

    2018-02-01

    Basilic vein transposition (BVT) fistulas may be performed as either a one-stage or two-stage operation, although there is debate as to which technique is superior. This study was designed to evaluate the comparative clinical efficacy and cost-effectiveness of one-stage vs two-stage BVT. We identified all patients at a single large academic hospital who had undergone creation of either a one-stage or two-stage BVT between January 2007 and January 2015. Data evaluated included patient demographics, comorbidities, medication use, reasons for abandonment, and interventions performed to maintain patency. Costs were derived from the literature, and effectiveness was expressed in quality-adjusted life-years (QALYs). We analyzed primary and secondary functional patency outcomes as well as survival during follow-up between one-stage and two-stage BVT procedures using multivariate Cox proportional hazards models and Kaplan-Meier analysis with log-rank tests. The incremental cost-effectiveness ratio was used to determine cost savings. We identified 131 patients in whom 57 (44%) one-stage BVT and 74 (56%) two-stage BVT fistulas were created by 8 different vascular surgeons, each of whom performed both procedures, during the study period. There was no significant difference in the mean age, male gender, white race, diabetes, coronary disease, or medication profile among patients undergoing one- vs two-stage BVT. After fistula transposition, the median follow-up time was 8.3 months (interquartile range, 3-21 months). Primary patency rates of one-stage BVT were 56% at 12-month follow-up, whereas primary patency rates of two-stage BVT were 72% at 12-month follow-up. Patients undergoing two-stage BVT also had significantly higher rates of secondary functional patency at 12 months (57% for one-stage BVT vs 80% for two-stage BVT) and 24 months (44% for one-stage BVT vs 73% for two-stage BVT) of follow-up (P < .001 using log-rank test). However, there was no significant difference

  8. Cost-effectiveness Analysis of a Two-stage Screening Intervention for Hepatocellular Carcinoma in Taiwan

    Directory of Open Access Journals (Sweden)

    Sophy Ting-Fang Shih

    2010-01-01

    Conclusion: Screening the population of high-risk individuals for HCC with the two-stage screening intervention in Taiwan is considered potentially cost-effective compared with opportunistic screening in the target population of an HCC endemic area.

  9. A Two-Stage Fuzzy Logic Control Method of Traffic Signal Based on Traffic Urgency Degree

    OpenAIRE

    Yan Ge

    2014-01-01

    City intersection traffic signal control is an important method to improve the efficiency of the road network and alleviate traffic congestion. This paper investigates a fuzzy traffic signal control method for a single intersection. A two-stage traffic signal control method based on traffic urgency degree is proposed, built on two-stage fuzzy inference at a single intersection. At the first stage, calculate the traffic urgency degree for all red phases using the traffic urgency evaluation module and select t...

  10. Noncausal two-stage image filtration at presence of observations with anomalous errors

    OpenAIRE

    S. V. Vishnevyy; S. Ya. Zhuk; A. N. Pavliuchenkova

    2013-01-01

    Introduction. For the filtration of images that contain regions with anomalous errors, it is necessary to develop adaptive algorithms that can detect such regions and apply a filter with appropriate parameters to suppress the anomalous noise. Aim: development of an adaptive algorithm for noncausal two-stage image filtration in the presence of observations with anomalous errors. The adaptive algorithm for noncausal two-stage filtration is developed. On the first stage the adaptiv...

  11. Maximally efficient two-stage screening: Determining intellectual disability in Taiwanese military conscripts

    Directory of Open Access Journals (Sweden)

    Chia-Chang Chien

    2009-01-01

    Full Text Available Chia-Chang Chien1, Shu-Fen Huang1,2,3,4, For-Wey Lung1,2,3,4; 1Department of Psychiatry, Kaohsiung Armed Forces General Hospital, Kaohsiung, Taiwan; 2Graduate Institute of Behavioral Sciences, Kaohsiung Medical University, Kaohsiung, Taiwan; 3Department of Psychiatry, National Defense Medical Center, Taipei, Taiwan; 4Calo Psychiatric Center, Pingtung County, Taiwan. Objective: The purpose of this study was to apply a two-stage screening method for the large-scale intelligence screening of military conscripts. Methods: We collected 99 conscripted soldiers whose educational levels were senior high school level or lower to be the participants. Every participant was required to take the Wisconsin Card Sorting Test (WCST) and the Wechsler Adult Intelligence Scale-Revised (WAIS-R) assessments. Results: Logistic regression analysis showed the conceptual level responses (CLR) index of the WCST was the most significant index for determining intellectual disability (ID; FIQ ≤ 84). We used the receiver operating characteristic curve to determine the optimum cut-off point of CLR. The optimum single cut-off point of CLR was 66; the two cut-off points were 49 and 66. Comparing the two-stage window screening with the two-stage positive screening, the area under the curve and the positive predictive value increased. Moreover, the cost of the two-stage window screening decreased by 59%. Conclusion: The two-stage window screening is more accurate and economical than the two-stage positive screening. Our results provide an example for the use of two-stage screening and the possibility of the WCST to replace the WAIS-R in large-scale screenings for ID in the future. Keywords: intellectual disability, intelligence screening, two-stage positive screening, Wisconsin Card Sorting Test, Wechsler Adult Intelligence Scale-Revised
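    The cost advantage of the window design is easy to see in a toy calculation (Python; the CLR score distribution and the relative test costs below are invented for illustration, and only the cut-off logic follows the abstract):

        import random
        random.seed(1)

        N = 10_000
        COST_WCST, COST_WAIS = 1.0, 10.0   # hypothetical relative unit costs

        # Hypothetical CLR scores, roughly normal (illustration only).
        clr_scores = [random.gauss(75, 15) for _ in range(N)]

        def total_cost(low, high):
            # Everyone takes the WCST; only scores inside [low, high] are
            # referred for the (more expensive) WAIS-R. Setting low to -inf
            # gives plain positive screening with a single cut-off at high.
            referred = sum(low <= s <= high for s in clr_scores)
            return N * COST_WCST + referred * COST_WAIS

        print("positive screening (CLR <= 66):", total_cost(float("-inf"), 66))
        print("window screening (49 <= CLR <= 66):", total_cost(49, 66))

    Referring only the ambiguous middle band to the WAIS-R is what drives the reported cost reduction.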

  12. Hydrogen production from cellulose in a two-stage process combining fermentation and electrohydrogenesis

    KAUST Repository

    Lalaurette, Elodie

    2009-08-01

    A two-stage dark-fermentation and electrohydrogenesis process was used to convert the recalcitrant lignocellulosic materials into hydrogen gas at high yields and rates. Fermentation using Clostridium thermocellum produced 1.67 mol H2/mol-glucose at a rate of 0.25 L H2/L-d with a corn stover lignocellulose feed, and 1.64 mol H2/mol-glucose and 1.65 L H2/L-d with a cellobiose feed. The lignocellulose and cellobiose fermentation effluent consisted primarily of acetic, lactic, succinic, and formic acids and ethanol. An additional 800 ± 290 mL H2/g-COD was produced from a synthetic effluent with a wastewater inoculum (fermentation effluent inoculum; FEI) by electrohydrogenesis using microbial electrolysis cells (MECs). Hydrogen yields were increased to 980 ± 110 mL H2/g-COD with the synthetic effluent by combining in the inoculum samples from multiple microbial fuel cells (MFCs) each pre-acclimated to a single substrate (single substrate inocula; SSI). Hydrogen yields and production rates with SSI and the actual fermentation effluents were 980 ± 110 mL/g-COD and 1.11 ± 0.13 L/L-d (synthetic); 900 ± 140 mL/g-COD and 0.96 ± 0.16 L/L-d (cellobiose); and 750 ± 180 mL/g-COD and 1.00 ± 0.19 L/L-d (lignocellulose). A maximum hydrogen production rate of 1.11 ± 0.13 L H2/L reactor/d was produced with synthetic effluent. Energy efficiencies based on electricity needed for the MEC using SSI were 270 ± 20% for the synthetic effluent, 230 ± 50% for lignocellulose effluent and 220 ± 30% for the cellobiose effluent. COD removals were ∼90% for the synthetic effluents, and 70-85% based on VFA removal (65% COD removal) with the cellobiose and lignocellulose effluent. The overall hydrogen yield was 9.95 mol-H2/mol-glucose for the cellobiose. These results show that pre-acclimation of MFCs to single substrates improves performance with a complex mixture of substrates, and that high hydrogen yields and gas production rates can be achieved using a two-stage fermentation and MEC

  13. A Two-Stage Maximum Entropy Prior of Location Parameter with a Stochastic Multivariate Interval Constraint and Its Properties

    Directory of Open Access Journals (Sweden)

    Hea-Jung Kim

    2016-05-01

    Full Text Available This paper proposes a two-stage maximum entropy prior to elicit uncertainty regarding a multivariate interval constraint of the location parameter of a scale mixture of normal model. Using Shannon’s entropy, this study demonstrates how the prior, obtained by using two stages of a prior hierarchy, appropriately accounts for the information regarding the stochastic constraint and suggests an objective measure of the degree of belief in the stochastic constraint. The study also verifies that the proposed prior plays the role of bridging the gap between the canonical maximum entropy prior of the parameter with no interval constraint and that with a certain multivariate interval constraint. It is shown that the two-stage maximum entropy prior belongs to the family of rectangle screened normal distributions that is conjugate for samples from a normal distribution. Some properties of the prior density, useful for developing a Bayesian inference of the parameter with the stochastic constraint, are provided. We also propose a hierarchical constrained scale mixture of normal model (HCSMN, which uses the prior density to estimate the constrained location parameter of a scale mixture of normal model and demonstrates the scope of its applicability.
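    As a rough illustration of the underlying mechanics (a univariate sketch under assumed mean, variance and interval-probability constraints, not the authors' exact two-stage multivariate construction), maximizing Shannon entropy attaches one Lagrange multiplier per constraint, so the solution is a normal kernel reweighted on the interval, i.e. of screened-normal form:

        \max_{f}\; -\int f(\theta)\,\log f(\theta)\,d\theta
        \quad\text{s.t.}\quad \int f(\theta)\,d\theta = 1,\;\;
        \mathbb{E}[\theta]=\mu,\;\; \operatorname{Var}(\theta)=\sigma^{2},\;\;
        \Pr(a\le\theta\le b)=\gamma,

        f(\theta)\;\propto\;\exp\!\left(\lambda_{1}\theta+\lambda_{2}\theta^{2}\right)\,
        \exp\!\left(\lambda_{3}\,\mathbf{1}_{[a,b]}(\theta)\right).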

  14. Study on the effect of mutated bacillus megaterium in two-stage fermentation of vitamin C

    International Nuclear Information System (INIS)

    Lv Shujuan; Wang Jun; Yao Jianming; Yu Zengliang

    2003-01-01

    Bacillus megaterium, as a companion strain in the two-stage fermentation of vitamin C, secretes active substances that spur the growth of Gluconobacter oxydans to produce 2-KLG. In a fermenting system where Gluconobacter oxydans was combined with GB82, a strain of B. megaterium mutated by ion implantation, the amount of 2-KLG harvested was larger than when the original B. megaterium BP52 was used in place of GB82. In this paper, the authors studied the effect of the active substances secreted by GB82 in enhancing the capability of Gluconobacter oxydans to produce 2-KLG. The supernate of GB82 sampled at different cultivation times had much more activity in spurring Gluconobacter oxydans to yield 2-KLG than that of the original B. megaterium, which might be due to genetic changes in the active components caused by ion implantation. Furthermore, the active substances of GB82's supernate would lose part of their activity in extreme environments, which is typical of some proteins.

  15. Evaluation of carcinogenic potential of diuron in a rat mammary two-stage carcinogenesis model.

    Science.gov (United States)

    Grassi, Tony Fernando; Rodrigues, Maria Aparecida Marchesan; de Camargo, João Lauro Viana; Barbisan, Luís Fernando

    2011-04-01

    This study aimed to evaluate the carcinogenic potential of the herbicide Diuron in a two-stage rat medium-term mammary carcinogenesis model initiated by 7,12-dimethylbenz(a)anthracene (DMBA). Female seven-week-old Sprague-Dawley (SD) rats were allocated to six groups: groups G1 to G4 received intragastrically (i.g.) a single 50 mg/kg dose of DMBA; groups G5 and G6 received a single administration of canola oil (vehicle of DMBA). Groups G1 and G5 received a basal diet, and groups G2, G3, G4, and G6 were fed the basal diet with the addition of Diuron at 250, 1250, 2500, and 2500 ppm, respectively. After twenty-five weeks, the animals were euthanized and mammary tumors were histologically confirmed and quantified. Tumor samples were also processed for immunohistochemical evaluation of the expression of proliferating cell nuclear antigen (PCNA), cleaved caspase-3, estrogen receptor-α (ER-α), p63, bcl-2, and bak. Diuron treatment did not increase the incidence or multiplicity of mammary tumors (groups G2 to G4 versus group G1). Also, exposure to Diuron did not alter tumor growth (cell proliferation and apoptosis indexes) or immunoreactivity to ER-α, p63 (myoepithelial marker), or bcl-2 and bak (apoptosis regulatory proteins). These findings indicate that Diuron does not have a promoting potential on mammary carcinogenesis in female SD rats initiated with DMBA.

  16. SUCCESS FACTORS IN GROWING SMBs: A STUDY OF TWO INDUSTRIES AT TWO STAGES OF DEVELOPMENT

    Directory of Open Access Journals (Sweden)

    Tor Jarl Trondsen

    2002-01-01

    Full Text Available The study attempts to identify success factors for growing SMBs, using an evolutionary phase approach. The study also aims to find out if there are common and different denominators for newer and older firms that can affect their profitability. It selects a sampling frame that isolates two groups of firms in two industries at two stages of development, and a variety of organizational and structural data was collected and analyzed. Among the conclusions that may be drawn from the study are that it is not easy to find a common definition of success; that it is important to stratify SMBs when studying them; that an evolutionary stage approach helps to compare firms with roughly the same external and internal dynamics; and that each industry has its own set of success variables. The study identified three success variables for older firms that reflect contemporary strategic thinking, such as crafting a good strategy and changing it only incrementally, building core competencies and outsourcing the rest, and keeping up with innovation and honing competitive skills.

  17. A two-stage stochastic rule-based model to determine pre-assembly buffer content

    Science.gov (United States)

    Gunay, Elif Elcin; Kula, Ufuk

    2018-01-01

    This study considers the instant decision-making needs of automobile manufacturers for resequencing vehicles before final assembly (FA). We propose a rule-based two-stage stochastic model to determine the number of spare vehicles that should be kept in the pre-assembly buffer to restore the sequence altered by paint defects and upstream department constraints. The first stage of the model decides the spare vehicle quantities, while the second-stage model recovers the scrambled sequence with respect to pre-defined rules. The problem is solved by a sample average approximation (SAA) algorithm, and a sketch of the SAA idea follows below. We conduct a numerical study to compare the solutions of the heuristic model with optimal ones and provide the following insights: (i) as the mismatch between the paint entrance and scheduled sequences decreases, the rule-based heuristic model recovers the scrambled sequence as well as the optimal resequencing model; (ii) the rule-based model is more sensitive to the mismatch between the paint entrance and scheduled sequences when recovering the scrambled sequence; (iii) as the defect rate increases, the difference in recovery effectiveness between the rule-based heuristic and optimal solutions increases; (iv) as buffer capacity increases, the recovery effectiveness of the optimization model outperforms the heuristic model; and (v) as expected, the rule-based model holds more inventory than the optimization model.
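    The SAA step itself is compact; here is a generic sketch (Python) on an invented stand-in recourse problem — sizing a spare-vehicle buffer against a random defect count, with hypothetical holding and shortage costs rather than the paper's full resequencing rules:

        import random
        random.seed(0)

        HOLDING_COST = 1.0    # cost per spare vehicle held (hypothetical)
        SHORTAGE_COST = 8.0   # cost per unrecovered sequence slot (hypothetical)

        def recourse_cost(buffer_size, defects):
            # Second stage: defects beyond the buffer cannot be resequenced.
            return SHORTAGE_COST * max(defects - buffer_size, 0)

        def saa_objective(buffer_size, scenarios):
            # First-stage cost plus the sample average of recourse costs.
            avg = sum(recourse_cost(buffer_size, d)
                      for d in scenarios) / len(scenarios)
            return HOLDING_COST * buffer_size + avg

        # Sampled scenarios of the random defect count (binomial stand-in).
        scenarios = [sum(random.random() < 0.1 for _ in range(60))
                     for _ in range(1000)]

        best = min(range(21), key=lambda b: saa_objective(b, scenarios))
        print("SAA-optimal buffer size:", best)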

  18. C II 158 μm Observations of a Sample of Late-type Galaxies from the Virgo Cluster

    Science.gov (United States)

    Leech, K.; Volk, H.; Heinrichsen, I.; Hippelein, H.; Metcalfe, L.; Pierini, D.; Popescu, C.; Tuffs, R.; Xu, C.

    1999-01-01

    We have observed 19 Virgo cluster spiral galaxies with the Long Wavelength Spectrometer (LWS) on board ESA's Infrared Space Observatory (ISO), obtaining spectra around the [CII] 157.741 μm fine-structure line.

  19. A Unique Sample of Extreme-BCG Clusters at 0.2 < z < 0.5

    Science.gov (United States)

    Garmire, Gordon

    2017-09-01

    The recently discovered Phoenix cluster harbors the most extreme BCG in the known universe. Despite the cluster's high mass and X-ray luminosity, it was consistently identified by surveys as an isolated AGN, due to the bright central point source and the compact cool core. Armed with hindsight, we have undertaken an all-sky survey based on archival X-ray, OIR, and radio data to identify other similarly extreme systems that were likewise missed. A pilot study demonstrated that this strategy works, leading to the discovery of a new, massive cluster at z ≈ 0.2 which was missed by previous X-ray surveys due to the presence of a bright central QSO. We propose here to observe 6 new clusters from our complete northern-sky survey, which harbor some of the most extreme central galaxies known.

  20. Dependence of the clustering properties of galaxies on stellar velocity dispersion in the Main galaxy sample of SDSS DR10

    Science.gov (United States)

    Deng, Xin-Fa; Song, Jun; Chen, Yi-Qing; Jiang, Peng; Ding, Ying-Ping

    2014-08-01

    Using two volume-limited Main galaxy samples of the Sloan Digital Sky Survey Data Release 10 (SDSS DR10), we investigate the dependence of the clustering properties of galaxies on stellar velocity dispersion by cluster analysis. It is found that in the luminous volume-limited Main galaxy sample, except at r = 1.2, richer and larger systems can be more easily formed in the large stellar velocity dispersion subsample, while in the faint volume-limited Main galaxy sample, at r ≥ 0.9, an opposite trend is observed. According to statistical analyses of the multiplicity functions, we conclude that in both volume-limited Main galaxy samples, small stellar velocity dispersion galaxies preferentially form isolated galaxies, close pairs and small groups, while large stellar velocity dispersion galaxies preferentially inhabit dense groups and clusters. However, we note a difference between the two volume-limited Main galaxy samples: in the faint volume-limited Main galaxy sample, at r ≥ 0.9, the small stellar velocity dispersion subsample has a higher proportion of galaxies in superclusters (n ≥ 200) than the large stellar velocity dispersion subsample.

  1. HUBBLE SPACE TELESCOPE PROPER MOTION (HSTPROMO) CATALOGS OF GALACTIC GLOBULAR CLUSTERS. I. SAMPLE SELECTION, DATA REDUCTION, AND NGC 7078 RESULTS

    Energy Technology Data Exchange (ETDEWEB)

    Bellini, A.; Anderson, J.; Van der Marel, R. P.; Watkins, L. L. [Space Telescope Science Institute, 3700 San Martin Drive, Baltimore, MD 21218 (United States); King, I. R. [Department of Astronomy, University of Washington, Box 351580, Seattle, WA 98195 (United States); Bianchini, P. [Max Planck Institute for Astronomy, Königstuhl 17, D-69117 Heidelberg (Germany); Chanamé, J. [Instituto de Astrofísica, Pontificia Universidad Católica de Chile, Av. Vicuña Mackenna 4860, Macul 782-0436, Santiago (Chile); Chandar, R. [Department of Physics and Astronomy, The University of Toledo, 2801 West Bancroft Street, Toledo, OH 43606 (United States); Cool, A. M. [Department of Physics and Astronomy, San Francisco State University, 1600 Holloway Avenue, San Francisco, CA 94132 (United States); Ferraro, F. R.; Massari, D. [Dipartimento di Fisica e Astronomia, Università di Bologna, via Ranzani 1, I-40127 Bologna (Italy); Ford, H., E-mail: bellini@stsci.edu [Department of Physics and Astronomy, The Johns Hopkins University, 3400 North Charles Street, Baltimore, MD 21218 (United States)

    2014-12-20

    We present the first study of high-precision internal proper motions (PMs) in a large sample of globular clusters, based on Hubble Space Telescope (HST) data obtained over the past decade with the ACS/WFC, ACS/HRC, and WFC3/UVIS instruments. We determine PMs for over 1.3 million stars in the central regions of 22 clusters, with a median number of ∼60,000 stars per cluster. These PMs have the potential to significantly advance our understanding of the internal kinematics of globular clusters by extending past line-of-sight (LOS) velocity measurements to two- or three-dimensional velocities, lower stellar masses, and larger sample sizes. We describe the reduction pipeline that we developed to derive homogeneous PMs from the very heterogeneous archival data. We demonstrate the quality of the measurements through extensive Monte Carlo simulations. We also discuss the PM errors introduced by various systematic effects and the techniques that we have developed to correct or remove them to the extent possible. We provide in electronic form the catalog for NGC 7078 (M 15), which consists of 77,837 stars in the central 2.'4. We validate the catalog by comparison with existing PM measurements and LOS velocities and use it to study the dependence of the velocity dispersion on radius, stellar magnitude (or mass) along the main sequence, and direction in the plane of the sky (radial or tangential). Subsequent papers in this series will explore a range of applications in globular-cluster science and will also present the PM catalogs for the other sample clusters.

  2. THE CLUSTERING OF ALFALFA GALAXIES: DEPENDENCE ON H I MASS, RELATIONSHIP WITH OPTICAL SAMPLES, AND CLUES OF HOST HALO PROPERTIES

    Energy Technology Data Exchange (ETDEWEB)

    Papastergis, Emmanouil; Giovanelli, Riccardo; Haynes, Martha P.; Jones, Michael G. [Center for Radiophysics and Space Research, Space Sciences Building, Cornell University, Ithaca, NY 14853 (United States); Rodríguez-Puebla, Aldo, E-mail: papastergis@astro.cornell.edu, E-mail: riccardo@astro.cornell.edu, E-mail: haynes@astro.cornell.edu, E-mail: jonesmg@astro.cornell.edu, E-mail: apuebla@astro.unam.mx [Instituto de Astronomía, Universidad Nacional Autónoma de México, A. P. 70-264, 04510 México, D.F. (Mexico)

    2013-10-10

    We use a sample of ≈6000 galaxies detected by the Arecibo Legacy Fast ALFA (ALFALFA) 21 cm survey to measure the clustering properties of H I-selected galaxies. We find no convincing evidence for a dependence of clustering on galactic atomic hydrogen (H I) mass, over the range M{sub H{sub I}} ≈ 10{sup 8.5}-10{sup 10.5} M{sub ☉}. We show that previously reported results of weaker clustering for low H I mass galaxies are probably due to finite-volume effects. In addition, we compare the clustering of ALFALFA galaxies with optically selected samples drawn from the Sloan Digital Sky Survey (SDSS). We find that H I-selected galaxies cluster more weakly than even relatively optically faint galaxies, when no color selection is applied. Conversely, when SDSS galaxies are split based on their color, we find that the correlation function of blue optical galaxies is practically indistinguishable from that of H I-selected galaxies. At the same time, SDSS galaxies with red colors are found to cluster significantly more than H I-selected galaxies, a fact that is evident in both the projected as well as the full two-dimensional correlation function. A cross-correlation analysis further reveals that gas-rich galaxies 'avoid' being located within ≈3 Mpc of optical galaxies with red colors. Next, we consider the clustering properties of halo samples selected from the Bolshoi ΛCDM simulation. A comparison with the clustering of ALFALFA galaxies suggests that galactic H I mass is not tightly related to host halo mass and that a sizable fraction of subhalos do not host H I galaxies. Lastly, we find that we can recover fairly well the correlation function of H I galaxies by just excluding halos with low spin parameter. This finding lends support to the hypothesis that halo spin plays a key role in determining the gas content of galaxies.
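    For context, the correlation functions referred to above are conventionally estimated from pair counts; a standard choice is the Landy-Szalay estimator (whether this exact estimator was used in the paper is not stated in the record):

        \xi(r) \;=\; \frac{DD(r) - 2\,DR(r) + RR(r)}{RR(r)},

    where DD(r), DR(r) and RR(r) are the normalized counts of data-data, data-random and random-random pairs at separation r, with the random points drawn from the survey selection function.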

  3. Accuracy of the One-Stage and Two-Stage Impression Techniques: A Comparative Analysis.

    Science.gov (United States)

    Jamshidy, Ladan; Mozaffari, Hamid Reza; Faraji, Payam; Sharifi, Roohollah

    2016-01-01

    Introduction. One of the main steps of impression is the selection and preparation of an appropriate tray. Hence, the present study aimed to analyze and compare the accuracy of one- and two-stage impression techniques. Materials and Methods. A resin laboratory-made model of the first molar was prepared by a standard method for full crowns, with a processed preparation finish line of 1 mm depth and a convergence angle of 3-4°. Impressions were made 20 times with the one-stage technique and 20 times with the two-stage technique using an appropriate tray. To measure the marginal gap, the distance between the restoration margin and the preparation finish line of the plaster dies was determined vertically at the mid mesial, distal, buccal, and lingual (MDBL) regions by a stereomicroscope using a standard method. Results. An independent t-test showed that the mean marginal gap obtained with the one-stage impression technique was higher than that of the two-stage impression technique. Further, there was no significant difference between the one- and two-stage impression techniques in the mid buccal region, but a significant difference was found between the two impression techniques in the MDL regions and overall. Conclusion. The findings of the present study indicated higher accuracy for the two-stage impression technique than for the one-stage impression technique.

  4. Accuracy of the One-Stage and Two-Stage Impression Techniques: A Comparative Analysis

    Directory of Open Access Journals (Sweden)

    Ladan Jamshidy

    2016-01-01

    Full Text Available Introduction. One of the main steps of impression is the selection and preparation of an appropriate tray. Hence, the present study aimed to analyze and compare the accuracy of one- and two-stage impression techniques. Materials and Methods. A resin laboratory-made model of the first molar was prepared by a standard method for full crowns, with a processed preparation finish line of 1 mm depth and a convergence angle of 3-4°. Impressions were made 20 times with the one-stage technique and 20 times with the two-stage technique using an appropriate tray. To measure the marginal gap, the distance between the restoration margin and the preparation finish line of the plaster dies was determined vertically at the mid mesial, distal, buccal, and lingual (MDBL) regions by a stereomicroscope using a standard method. Results. An independent t-test showed that the mean marginal gap obtained with the one-stage impression technique was higher than that of the two-stage impression technique. Further, there was no significant difference between the one- and two-stage impression techniques in the mid buccal region, but a significant difference was found between the two impression techniques in the MDL regions and overall. Conclusion. The findings of the present study indicated higher accuracy for the two-stage impression technique than for the one-stage impression technique.

  5. Typical Periods for Two-Stage Synthesis by Time-Series Aggregation with Bounded Error in Objective Function

    Energy Technology Data Exchange (ETDEWEB)

    Bahl, Björn; Söhler, Theo; Hennen, Maike; Bardow, André, E-mail: andre.bardow@ltt.rwth-aachen.de [Institute of Technical Thermodynamics, RWTH Aachen University, Aachen (Germany)

    2018-01-08

    Two-stage synthesis problems simultaneously consider here-and-now decisions (e.g., optimal investment) and wait-and-see decisions (e.g., optimal operation). The optimal synthesis of energy systems reveals such a two-stage character. The synthesis of energy systems involves multiple large time series such as energy demands and energy prices. Since problem size increases with the size of the time series, synthesis of energy systems leads to complex optimization problems. To reduce the problem size without losing solution quality, we propose a method for time-series aggregation to identify typical periods. Typical periods retain the chronology of time steps, which enables modeling of energy systems, e.g., with storage units or start-up cost. The aim of the proposed method is to obtain few typical periods with few time steps per period, while accurately representing the objective function of the full time series, e.g., cost. Thus, we determine the error of time-series aggregation as the cost difference between operating the optimal design for the aggregated time series and for the full time series. Thereby, we rigorously bound the maximum performance loss of the optimal energy system design. In an initial step, the proposed method identifies the best length of typical periods by autocorrelation analysis. Subsequently, an adaptive procedure determines aggregated typical periods employing the clustering algorithm k-medoids, which groups similar periods into clusters and selects one representative period per cluster. Moreover, the number of time steps per period is aggregated by a novel clustering algorithm maintaining chronology of the time steps in the periods. The method is iteratively repeated until the error falls below a threshold value. A case study based on a real-world synthesis problem of an energy system shows that time-series aggregation from 8,760 time steps to 2 typical periods with 2 time steps each results in an error smaller than the optimality gap of
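    The clustering step can be sketched as plain k-medoids over candidate periods (Python; a naive swap heuristic with Euclidean distance — the paper's adaptive error-bounding loop and the intra-period time-step aggregation are not reproduced here):

        import numpy as np

        def k_medoids(profiles, k, iters=100, seed=0):
            # Greedy k-medoids: returns indices of k representative periods.
            # profiles is an (n_periods, steps_per_period) array.
            rng = np.random.default_rng(seed)
            n = len(profiles)
            D = np.linalg.norm(profiles[:, None, :] - profiles[None, :, :], axis=2)
            medoids = list(rng.choice(n, size=k, replace=False))
            for _ in range(iters):
                improved = False
                for i in range(k):
                    for cand in range(n):
                        trial = medoids.copy()
                        trial[i] = cand
                        if (D[:, trial].min(axis=1).sum()
                                < D[:, medoids].min(axis=1).sum()):
                            medoids, improved = trial, True
                if not improved:
                    break
            return medoids

        # Example: 365 daily demand profiles with 24 hourly steps each.
        demand = np.random.default_rng(1).random((365, 24))
        print("representative days:", k_medoids(demand, k=2))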

  6. Frequency analysis of a two-stage planetary gearbox using two different methodologies

    Science.gov (United States)

    Feki, Nabih; Karray, Maha; Khabou, Mohamed Tawfik; Chaari, Fakher; Haddar, Mohamed

    2017-12-01

    This paper is focused on the characterization of the frequency content of vibration signals issued from a two-stage planetary gearbox. To achieve this goal, two different methodologies are adopted: the lumped-parameter modeling approach and the phenomenological modeling approach. The two methodologies aim to describe the complex vibrations generated by a two-stage planetary gearbox. The phenomenological model describes directly the vibrations as measured by a sensor fixed outside the fixed ring gear with respect to an inertial reference frame, while results from a lumped-parameter model are referenced with respect to a rotating frame and then transferred into an inertial reference frame. Two different case studies of the two-stage planetary gear are adopted to describe the vibration and the corresponding spectra using both models. Each case presents a specific geometry and a specific spectral structure.

  7. Optimisation of Refrigeration System with Two-Stage and Intercooler Using Fuzzy Logic and Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    Bayram Kılıç

    2017-04-01

    Full Text Available Two-stage compression prevents excessive compressor outlet pressure and temperature and provides more efficient working conditions in low-temperature refrigeration applications. A vapor compression refrigeration system with two stages and an intercooler is a very good solution for low-temperature refrigeration applications. In this study, a refrigeration system with two stages and an intercooler was optimized using fuzzy logic and a genetic algorithm. The thermodynamic characteristics necessary for optimization were estimated with fuzzy logic, and the liquid-phase enthalpy, vapour-phase enthalpy, liquid-phase entropy and vapour-phase entropy values were compared with actual values. As a result, the optimum working condition of the system was estimated by the genetic algorithm as -6.0449 °C for the evaporator temperature, 25.0115 °C for the condenser temperature and 5.9666 for the COP. Moreover, the irreversibility values of the refrigeration system were calculated.

  8. Design and construction of the X-2 two-stage free piston driven expansion tube

    Science.gov (United States)

    Doolan, Con

    1995-01-01

    This report outlines the design and construction of the X-2 two-stage free piston driven expansion tube. The project has completed its construction phase and the facility has been installed in the new impulsive research laboratory, where commissioning is about to take place. The X-2 uses a unique two-stage driver design which allows a more compact and lower overall cost free piston compressor. The new facility has been constructed in order to examine the performance envelope of the two-stage driver and how well it couples to sub-orbital and super-orbital expansion tubes. Data obtained from these experiments will be used for the design of a much larger facility, X-3, utilizing the same free piston driver concept.

  9. Efficiency of primary care in rural Burkina Faso. A two-stage DEA analysis.

    Science.gov (United States)

    Marschall, Paul; Flessa, Steffen

    2011-07-20

    Providing health care services in Africa is hampered by severe scarcity of personnel, medical supplies and financial funds. Consequently, managers of health care institutions are called to measure and improve the efficiency of their facilities in order to provide the best possible services with their resources. However, very little is known about the efficiency of health care facilities in Africa, and instruments of performance measurement are hardly applied in this context. This study determines the relative efficiency of primary care facilities in Nouna, a rural health district in Burkina Faso, and analyses the factors influencing the efficiency of these institutions. We apply a two-stage Data Envelopment Analysis (DEA) based on data from a comprehensive provider and household information system. In the first stage, the relative efficiency of each institution is calculated by a traditional DEA model. In the second stage, we identify the reasons for being inefficient by regression technique. The DEA projections suggest that inefficiency is mainly a result of poor utilization of health care facilities, as they were either too big or the demand was too low. Regression results showed that distance is an important factor influencing the efficiency of a health care institution. Compared with the findings of existing one-stage DEA analyses of health facilities in Africa, the share of relatively efficient units is slightly higher. The difference might be explained by a rather homogeneous structure of the primary care facilities in the Burkina Faso sample. The study also indicates that improving the accessibility of primary care facilities will have a major impact on the efficiency of these institutions. Thus, health decision-makers are called to overcome the demand-side barriers in accessing health care.
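    The first stage can be sketched as the standard input-oriented CCR envelopment linear program (Python with scipy; the facility data are invented, and the second-stage regression is omitted):

        import numpy as np
        from scipy.optimize import linprog

        def ccr_efficiency(X, Y, j0):
            # Input-oriented CCR efficiency of unit j0.
            # X: (n_units, n_inputs), Y: (n_units, n_outputs).
            n, m = X.shape
            s = Y.shape[1]
            c = np.zeros(1 + n)
            c[0] = 1.0                      # variables: theta, lambda_1..lambda_n
            A_ub, b_ub = [], []
            for i in range(m):              # sum_j lam_j x_ji <= theta * x_j0,i
                A_ub.append(np.concatenate(([-X[j0, i]], X[:, i])))
                b_ub.append(0.0)
            for r in range(s):              # sum_j lam_j y_jr >= y_j0,r
                A_ub.append(np.concatenate(([0.0], -Y[:, r])))
                b_ub.append(-Y[j0, r])
            bounds = [(None, None)] + [(0, None)] * n
            res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
            return res.x[0]

        # Toy data: 4 facilities, inputs = (staff, budget), output = visits.
        X = np.array([[5, 10], [8, 12], [4, 6], [10, 8]], dtype=float)
        Y = np.array([[100], [110], [80], [120]], dtype=float)
        for j in range(4):
            print(f"facility {j}: efficiency = {ccr_efficiency(X, Y, j):.3f}")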

  10. Bias due to two-stage residual-outcome regression analysis in genetic association studies.

    Science.gov (United States)

    Demissie, Serkalem; Cupples, L Adrienne

    2011-11-01

    Association studies of risk factors and complex diseases require careful assessment of potential confounding factors. Two-stage regression analysis, sometimes referred to as residual- or adjusted-outcome analysis, has been increasingly used in association studies of single nucleotide polymorphisms (SNPs) and quantitative traits. In this analysis, first, a residual outcome is calculated from a regression of the outcome variable on covariates, and then the relationship between the adjusted outcome and the SNP is evaluated by a simple linear regression of the adjusted outcome on the SNP. In this article, we examine the performance of this two-stage analysis as compared with multiple linear regression (MLR) analysis. Our findings show that when a SNP and a covariate are correlated, the two-stage approach results in a biased genotypic effect and loss of power. Bias is always toward the null and increases with the squared correlation (r²) between the SNP and the covariate. For example, for r² = 0, 0.1, and 0.5, two-stage analysis results in, respectively, 0, 10, and 50% attenuation in the SNP effect. As expected, MLR was always unbiased. Since individual SNPs often show little or no correlation with covariates, a two-stage analysis is expected to perform as well as MLR in many genetic studies; however, it produces considerably different results from MLR and may lead to incorrect conclusions when independent variables are highly correlated. While a useful alternative to MLR when the SNP and covariates are uncorrelated, the two-stage approach has serious limitations. Its use as a simple substitute for MLR should be avoided. © 2011 Wiley Periodicals, Inc.
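    The attenuation factor is easy to reproduce numerically (Python; the effect sizes, correlation and sample size are invented for illustration):

        import numpy as np

        rng = np.random.default_rng(0)
        n = 200_000
        r = 0.5                                  # corr(SNP, covariate); r**2 = 0.25
        snp = rng.standard_normal(n)
        cov = r * snp + np.sqrt(1 - r**2) * rng.standard_normal(n)
        y = 0.3 * snp + 0.5 * cov + rng.standard_normal(n)

        def ols(predictors, y):
            # Least-squares fit with intercept; returns coefficients.
            X = np.column_stack([np.ones(len(y))] + list(predictors))
            return np.linalg.lstsq(X, y, rcond=None)[0]

        # Two-stage (residual-outcome): adjust y for the covariate first ...
        b0, b1 = ols([cov], y)
        resid = y - (b0 + b1 * cov)
        beta_two_stage = ols([snp], resid)[1]    # ~ 0.3 * (1 - r**2) = 0.225

        # Multiple linear regression: fit SNP and covariate jointly.
        beta_mlr = ols([snp, cov], y)[1]         # ~ 0.3, unbiased

        print(beta_two_stage, beta_mlr)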

  11. Comparing the performance of cluster random sampling and integrated threshold mapping for targeting trachoma control, using computer simulation.

    Directory of Open Access Journals (Sweden)

    Jennifer L Smith

    Full Text Available Implementation of trachoma control strategies requires reliable district-level estimates of trachomatous inflammation-follicular (TF), generally collected using the recommended gold-standard cluster randomized surveys (CRS). Integrated Threshold Mapping (ITM) has been proposed as an integrated and cost-effective means of rapidly surveying trachoma in order to classify districts according to treatment thresholds. ITM differs from CRS in a number of important ways, including the use of a school-based sampling platform for children aged 1-9 and a different age distribution of participants. This study uses computerised sampling simulations to compare the performance of these survey designs and evaluate the impact of varying key parameters. Realistic pseudo gold standard data for 100 districts were generated that maintained the relative risk of disease between important sub-groups and incorporated empirical estimates of disease clustering at the household, village and district level. To simulate the different sampling approaches, 20 clusters were selected from each district, with individuals sampled according to the protocol for ITM and CRS. Results showed that ITM generally under-estimated the true prevalence of TF over a range of epidemiological settings and introduced more district misclassification according to treatment thresholds than did CRS. However, the extent of underestimation and resulting misclassification was found to be dependent on three main factors: (i) the district prevalence of TF; (ii) the relative risk of TF between enrolled and non-enrolled children within clusters; and (iii) the enrollment rate in schools. Although in some contexts the two methodologies may be equivalent, ITM can introduce a bias-dependent shift as prevalence of TF increases, resulting in a greater risk of misclassification around treatment thresholds. In addition to strengthening the evidence base around choice of trachoma survey methodologies, this study illustrates

  12. Kinetics analysis of two-stage austenitization in supermartensitic stainless steel

    DEFF Research Database (Denmark)

    Nießen, Frank; Villa, Matteo; Hald, John

    2017-01-01

    The martensite-to-austenite transformation in X4CrNiMo16-5-1 supermartensitic stainless steel was followed in-situ during isochronal heating at 2, 6 and 18 K min−1, applying energy-dispersive synchrotron X-ray diffraction at the BESSY II facility. Austenitization occurred in two stages, separated ... that the austenitization kinetics is governed by Ni-diffusion and that the slow transformation kinetics separating the two stages is caused by soft impingement in the martensite phase. Increasing the lath width in the kinetics model had a similar effect on the austenitization kinetics as increasing the heating rate.

  13. One-stage and two-stage penile buccal mucosa urethroplasty

    Directory of Open Access Journals (Sweden)

    G. Barbagli

    2016-03-01

    Full Text Available The paper provides the reader with a detailed description of current techniques of one-stage and two-stage penile buccal mucosa urethroplasty, beginning with the preoperative patient evaluation and paying attention to the use of diagnostic tools. One-stage penile urethroplasty using a buccal mucosa graft with the application of glue is first presented and discussed. Two-stage penile urethroplasty is then reported, with a detailed description of first-stage urethroplasty according to the Johanson technique, followed by second-stage urethroplasty using a buccal mucosa graft and glue. Finally, the postoperative course and follow-up are addressed.

  14. Sample size estimation to substantiate freedom from disease for clustered binary data with a specific risk profile

    DEFF Research Database (Denmark)

    Kostoulas, P.; Nielsen, Søren Saxmose; Browne, W. J.

    2013-01-01

    and power when applied to these groups. We propose the use of the variance partition coefficient (VPC), which measures the clustering of infection/disease for individuals with a common risk profile. Sample size estimates are obtained separately for those groups that exhibit markedly different heterogeneity......, thus, optimizing resource allocation. A VPC-based predictive simulation method for sample size estimation to substantiate freedom from disease is presented. To illustrate the benefits of the proposed approach we give two examples with the analysis of data from a risk factor study on Mycobacterium avium...

  15. A two-stage biological gas to liquid transfer process to convert carbon dioxide into bioplastic

    KAUST Repository

    Al Rowaihi, Israa; Kick, Benjamin; Grö tzinger, Stefan W.; Burger, Christian; Karan, Ram; Weuster-Botz, Dirk; Eppinger, Jö rg; Arold, Stefan T.

    2018-01-01

    The fermentation of carbon dioxide (CO2) with hydrogen (H2) uses available low-cost gases to synthesize acetic acid. Here, we present a two-stage biological process that allows the gas to liquid transfer (Bio-GTL) of CO2 into the biopolymer

  16. Treatment of corn ethanol distillery wastewater using two-stage anaerobic digestion.

    Science.gov (United States)

    Ráduly, B; Gyenge, L; Szilveszter, Sz; Kedves, A; Crognale, S

    In this study, the mesophilic two-stage anaerobic digestion (AD) of corn bioethanol distillery wastewater is investigated in laboratory-scale reactors. Two-stage AD technology separates the different sub-processes of the AD in two distinct reactors, enabling the use of optimal conditions for the different microbial consortia involved in the different process phases, and thus allowing for higher applicable organic loading rates (OLRs), shorter hydraulic retention times (HRTs) and better conversion rates of the organic matter, as well as higher methane content of the produced biogas. In our experiments the reactors have been operated in semi-continuous phase-separated mode. A specific methane production of 1,092 mL/(L·d) has been reached at an OLR of 6.5 g TCOD/(L·d) (TCOD: total chemical oxygen demand) and a total HRT of 21 days (5.7 days in the first-stage, and 15.3 days in the second-stage reactor). Although the methane concentration in the second-stage reactor was very high (78.9%), the two-stage AD outperformed the reference single-stage AD (conducted at the same reactor loading rate and retention time) by only a small margin in terms of volumetric methane production rate. This makes it questionable whether the higher methane content of the biogas counterbalances the added complexity of the two-stage digestion.

  17. On response time and cycle time distributions in a two-stage cyclic queue

    NARCIS (Netherlands)

    Boxma, O.J.; Donk, P.

    1982-01-01

    We consider a two-stage closed cyclic queueing model. For the case of an exponential server at each queue we derive the joint distribution of the successive response times of a customer at both queues, using a reversibility argument. This joint distribution turns out to have a product form. The

  18. Simultaneous versus sequential pharmacokinetic-pharmacodynamic population analysis using an iterative two-stage Bayesian technique

    NARCIS (Netherlands)

    Proost, Johannes H.; Schiere, Sjouke; Eleveld, Douglas J.; Wierda, J. Mark K. H.

    A method for simultaneous pharmacokinetic-pharmacodynamic (PK-PD) population analysis using an Iterative Two-Stage Bayesian (ITSB) algorithm was developed. The method was evaluated using clinical data and Monte Carlo simulations. Data from a clinical study with rocuronium in nine anesthetized

  19. One-stage and two-stage penile buccal mucosa urethroplasty

    African Journals Online (AJOL)

    G. Barbagli

    2015-12-02

    Dec 2, 2015 ... there also seems to be a trend of decreasing urethritis and an increase of instrumentation and catheter related strictures in these countries as well [4–6]. The repair of penile urethral strictures may require one- or two- stage urethroplasty [7–10]. Certainly, sexual function can be placed at risk by any surgery ...

  20. Numerical simulation of brain tumor growth model using two-stage ...

    African Journals Online (AJOL)

    In recent years, the study of glioma growth has been an active field of research, and mathematical models that describe the proliferation and diffusion properties of the growth have been developed by many researchers. In this work, the performance analysis of the two-stage Gauss-Seidel (TSGS) method to solve the glioma growth ...
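    The record does not spell out the TSGS scheme, but a generic two-stage (inner-outer) iteration for a linear system A x = b — an outer Gauss-Seidel splitting whose triangular solve is itself approximated by a few inner Jacobi sweeps — can be sketched as follows (Python; an illustration of the inner-outer idea, not necessarily the paper's exact method):

        import numpy as np

        def two_stage_solve(A, b, outer=200, inner=3, tol=1e-10):
            # Outer splitting A = M - N with M = D - L (Gauss-Seidel);
            # the inner correction M z = r is solved approximately by
            # a fixed number of Jacobi sweeps instead of exactly.
            M = np.tril(A)                 # D - L: lower-triangular part of A
            d_inv = 1.0 / np.diag(M)
            x = np.zeros_like(b)
            for _ in range(outer):
                r = b - A @ x
                if np.linalg.norm(r) < tol:
                    break
                z = np.zeros_like(b)       # inner Jacobi sweeps on M z = r
                for _ in range(inner):
                    z = z + d_inv * (r - M @ z)
                x = x + z
            return x

        # Toy diffusion-like (diagonally dominant) system.
        A = np.array([[4., -1., 0.], [-1., 4., -1.], [0., -1., 4.]])
        b = np.array([1., 2., 3.])
        print(two_stage_solve(A, b), np.linalg.solve(A, b))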

  1. An Efficient Robust Solution to the Two-Stage Stochastic Unit Commitment Problem

    DEFF Research Database (Denmark)

    Blanco, Ignacio; Morales González, Juan Miguel

    2017-01-01

    This paper proposes a reformulation of the scenario-based two-stage unit commitment problem under uncertainty that allows finding unit-commitment plans that perform reasonably well both in expectation and for the worst-case realization of the uncertainties. The proposed reformulation is based on part

  2. Two-stage bargaining with coverage extension in a dual labour market

    DEFF Research Database (Denmark)

    Roberts, Mark A.; Stæhr, Karsten; Tranæs, Torben

    2000-01-01

    This paper studies coverage extension in a simple general equilibrium model with a dual labour market. The union sector is characterized by two-stage bargaining whereas the firms set wages in the non-union sector. In this model firms and unions of the union sector have a commonality of interest...

  3. Design and construction of a two-stage centrifugal pump | Nordiana ...

    African Journals Online (AJOL)

    Centrifugal pumps are widely used in moving liquids from one location to another in homes, offices and industries. Due to the ever-increasing demand for centrifugal pumps, it became necessary to design and construct a two-stage centrifugal pump. The pump consisted of an electric motor, a shaft, two rotating impellers ...

  4. Some design aspects of a two-stage rail-to-rail CMOS op amp

    NARCIS (Netherlands)

    Gierkink, Sander L.J.; Holzmann, Peter J.; Wiegerink, Remco J.; Wassenaar, R.F.

    1999-01-01

    A two-stage low-voltage CMOS op amp with rail-to-rail input and output voltage ranges is presented. The circuit uses complementary differential input pairs to achieve the rail-to-rail common-mode input voltage range. The differential pairs operate in strong inversion, and the constant

  5. Insufficient sensitivity of joint aspiration during the two-stage exchange of the hip with spacers.

    Science.gov (United States)

    Boelch, Sebastian Philipp; Weissenberger, Manuel; Spohn, Frederik; Rudert, Maximilian; Luedemann, Martin

    2018-01-10

    Evaluation of infection persistence during the two-stage exchange of the hip is challenging. Joint aspiration before reconstruction is supposed to rule out infection persistence. The sensitivity and specificity of synovial fluid culture and synovial leucocyte count for detecting infection persistence during the two-stage exchange of the hip were evaluated. Ninety-two aspirations performed before planned joint reconstruction during two-stage exchange of the hip with spacers were retrospectively analyzed. The sensitivity and specificity of synovial fluid culture were 4.6 and 94.3%. The sensitivity and specificity of the synovial leucocyte count at a cut-off value of 2000 cells/μl were 25.0 and 96.9%. C-reactive protein (CRP) and erythrocyte sedimentation rate (ESR) values were significantly higher before prosthesis removal and reconstruction or spacer exchange (p = 0.00; p = 0.013 and p = 0.039; p = 0.002) in the infection persistence group. Receiver operating characteristic area under the curve values before prosthesis removal and reconstruction or spacer exchange were lower for ESR (0.516 and 0.635) than for CRP (0.720 and 0.671). Synovial fluid culture and leucocyte count cannot rule out infection persistence during the two-stage exchange of the hip.

  6. Two-Stage Power Factor Corrected Power Supplies: The Low Component-Stress Approach

    DEFF Research Database (Denmark)

    Petersen, Lars; Andersen, Michael Andreas E.

    2002-01-01

    The discussion concerning the use of single-stage contra two-stage PFC solutions has been going on for the last decade and it continues. The purpose of this paper is to direct the focus back on how the power is processed and not so much as to the number of stages or the amount of power processed...

  7. A two-stage stochastic programming approach for operating multi-energy systems

    DEFF Research Database (Denmark)

    Zeng, Qing; Fang, Jiakun; Chen, Zhe

    2017-01-01

    This paper provides a two-stage stochastic programming approach for joint operating multi-energy systems under uncertainty. Simulation is carried out in a test system to demonstrate the feasibility and efficiency of the proposed approach. The test energy system includes a gas subsystem with a gas...

  8. EVALUATION OF A TWO-STAGE PASSIVE TREATMENT APPROACH FOR MINING INFLUENCE WATERS

    Science.gov (United States)

    A two-stage passive treatment approach was assessed at bench-scale using two Colorado Mining Influenced Waters (MIWs). The first-stage was a limestone drain with the purpose of removing iron and aluminum and mitigating the potential effects of mineral acidity. The second stage w...

  9. The RTD measurement of two stage anaerobic digester using radiotracer in WWTP

    International Nuclear Information System (INIS)

    Jin-Seop, Kim; Jong-Bum, Kim; Sung-Hee, Jung

    2006-01-01

    The aims of this study are to assess the existence and location of the stagnant zone by estimating the MRT (mean residence time) of the two stage anaerobic digester, with the results to be used as an informative clue for its better operation

  10. A two-stage meta-analysis identifies several new loci for Parkinson's disease.

    NARCIS (Netherlands)

    Plagnol, V.; Nalls, M.A.; Bras, J.M.; Hernandez, D.; Sharma, M.; Sheerin, U.M.; Saad, M.; Simon-Sanchez, J.; Schulte, C.; Lesage, S.; Sveinbjornsdottir, S.; Amouyel, P.; Arepalli, S.; Band, G.; Barker, R.A.; Bellinguez, C.; Ben-Shlomo, Y.; Berendse, H.W.; Berg, D; Bhatia, K.P.; Bie, R.M. de; Biffi, A.; Bloem, B.R.; Bochdanovits, Z.; Bonin, M.; Brockmann, K.; Brooks, J.; Burn, D.J.; Charlesworth, G.; Chen, H.; Chinnery, P.F.; Chong, S.; Clarke, C.E.; Cookson, M.R.; Cooper, J.M.; Corvol, J.C.; Counsell, J.; Damier, P.; Dartigues, J.F.; Deloukas, P.; Deuschl, G.; Dexter, D.T.; Dijk, K.D. van; Dillman, A.; Durif, F.; Durr, A.; Edkins, S.; Evans, J.R.; Foltynie, T.; Freeman, C.; Gao, J.; Gardner, M.; Gibbs, J.R.; Goate, A.; Gray, E.; Guerreiro, R.; Gustafsson, O.; Harris, C.; Hellenthal, G.; Hilten, J.J. van; Hofman, A.; Hollenbeck, A.; Holton, J.L.; Hu, M.; Huang, X.; Huber, H; Hudson, G.; Hunt, S.E.; Huttenlocher, J.; Illig, T.; Jonsson, P.V.; Langford, C.; Lees, A.J.; Lichtner, P.; Limousin, P.; Lopez, G.; McNeill, A.; Moorby, C.; Moore, M.; Morris, H.A.; Morrison, K.E.; Mudanohwo, E.; O'Sullivan, S.S; Pearson, J.; Pearson, R.; Perlmutter, J.; Petursson, H.; Pirinen, M.; Polnak, P.; Post, B.; Potter, S.C.; Ravina, B.; Revesz, T.; Riess, O.; Rivadeneira, F.; Rizzu, P.; Ryten, M.; Sawcer, S.J.; Schapira, A.; Scheffer, H.; Shaw, K.; Shoulson, I.; Sidransky, E.; Silva, R. de; Smith, C.; Spencer, C.C.; Stefansson, H.; Steinberg, S.; Stockton, J.D.; Strange, A.; Su, Z.; Talbot, K.; Tanner, C.M.; Tashakkori-Ghanbaria, A.; Tison, F.; Trabzuni, D.; Traynor, B.J.; Uitterlinden, A.G.; Vandrovcova, J.; Velseboer, D.; Vidailhet, M.; Vukcevic, D.; Walker, R.; Warrenburg, B.P.C. van de; Weale, M.E.; Wickremaratchi, M.; Williams, N.; Williams-Gray, C.H.; Winder-Rhodes, S.; Stefansson, K.; Martinez, M.; Donnelly, P.; Singleton, A.B.; Hardy, J.; Heutink, P.; Brice, A.; Gasser, T.; Wood, N.W.

    2011-01-01

    A previous genome-wide association (GWA) meta-analysis of 12,386 PD cases and 21,026 controls conducted by the International Parkinson's Disease Genomics Consortium (IPDGC) discovered or confirmed 11 Parkinson's disease (PD) loci. This first analysis of the two-stage IPDGC study

  11. Two-Stage MAS Technique for Analysis of DRA Elements and Arrays on Finite Ground Planes

    DEFF Research Database (Denmark)

    Larsen, Niels Vesterdal; Breinbjerg, Olav

    2007-01-01

    A two-stage Method of Auxiliary Sources (MAS) technique is proposed for analysis of dielectric resonator antenna (DRA) elements and arrays on finite ground planes (FGPs). The problem is solved by first analysing the DRA on an infinite ground plane (IGP) and then using this solution to model the FGP...

  12. A Two-Stage Approach to Civil Conflict: Contested Incompatibilities and Armed Violence

    DEFF Research Database (Denmark)

    Bartusevicius, Henrikas; Gleditsch, Kristian Skrede

    2017-01-01

    conflict origination but have no clear effect on militarization, whereas other features emphasized as shaping the risk of civil war, such as refugee flows and soft state power, strongly influence militarization but not incompatibilities. We posit that a two-stage approach to conflict analysis can help...

  13. Wide-bandwidth bilateral control using two-stage actuator system

    International Nuclear Information System (INIS)

    Kokuryu, Saori; Izutsu, Masaki; Kamamichi, Norihiro; Ishikawa, Jun

    2015-01-01

    This paper proposes a two-stage actuator system that consists of a coarse actuator driven by a ball screw with an AC motor (the first stage) and a fine actuator driven by a voice coil motor (the second stage). The proposed two-stage actuator system is applied to make a wide-bandwidth bilateral control system without needing expensive high-performance actuators. In the proposed system, the first stage has a wide moving range with a narrow control bandwidth, and the second stage has a narrow moving range with a wide control bandwidth. By consolidating these two inexpensive actuators with different control bandwidths in a complementary manner, a wide bandwidth bilateral control system can be constructed based on a mechanical impedance control. To show the validity of the proposed method, a prototype of the two-stage actuator system has been developed and basic performance was evaluated by experiment. The experimental results showed that a light mechanical impedance with a mass of 10 g and a damping coefficient of 2.5 N/(m/s) that is an important factor to establish good transparency in bilateral control has been successfully achieved and also showed that a better force and position responses between a master and slave is achieved by using the proposed two-stage actuator system compared with a narrow bandwidth case using a single ball screw system. (author)

  14. Advancing early detection of autism spectrum disorder by applying an integrated two-stage screening approach

    NARCIS (Netherlands)

    Oosterling, Iris J.; Wensing, Michel; Swinkels, Sophie H.; van der Gaag, Rutger Jan; Visser, Janne C.; Woudenberg, Tim; Minderaa, Ruud; Steenhuis, Mark-Peter; Buitelaar, Jan K.

    Background: Few field trials exist on the impact of implementing guidelines for the early detection of autism spectrum disorders (ASD). The aims of the present study were to develop and evaluate a clinically relevant integrated early detection programme based on the two-stage screening approach of

  15. A Two-Stage Meta-Analysis Identifies Several New Loci for Parkinson's Disease

    NARCIS (Netherlands)

    Plagnol, Vincent; Nalls, Michael A.; Bras, Jose M.; Hernandez, Dena G.; Sharma, Manu; Sheerin, Una-Marie; Saad, Mohamad; Simon-Sanchez, Javier; Schulte, Claudia; Lesage, Suzanne; Sveinbjornsdottir, Sigurlaug; Amouyel, Philippe; Arepalli, Sampath; Band, Gavin; Barker, Roger A.; Bellinguez, Celine; Ben-Shlomo, Yoav; Berendse, Henk W.; Berg, Daniela; Bhatia, Kailash; de Bie, Rob M. A.; Biffi, Alessandro; Bloem, Bas; Bochdanovits, Zoltan; Bonin, Michael; Brockmann, Kathrin; Brooks, Janet; Burn, David J.; Charlesworth, Gavin; Chen, Honglei; Chinnery, Patrick F.; Chong, Sean; Clarke, Carl E.; Cookson, Mark R.; Cooper, J. Mark; Corvol, Jean Christophe; Counsell, Carl; Damier, Philippe; Dartigues, Jean-Francois; Deloukas, Panos; Deuschl, Guenther; Dexter, David T.; van Dijk, Karin D.; Dillman, Allissa; Durif, Frank; Duerr, Alexandra; Edkins, Sarah; Evans, Jonathan R.; Foltynie, Thomas; Freeman, Colin; Gao, Jianjun; Gardner, Michelle; Gibbs, J. Raphael; Goate, Alison; Gray, Emma; Guerreiro, Rita; Gustafsson, Omar; Harris, Clare; Hellenthal, Garrett; van Hilten, Jacobus J.; Hofman, Albert; Hollenbeck, Albert; Holton, Janice; Hu, Michele; Huang, Xuemei; Huber, Heiko; Hudson, Gavin; Hunt, Sarah E.; Huttenlocher, Johanna; Illig, Thomas; Jonsson, Palmi V.; Langford, Cordelia; Lees, Andrew; Lichtner, Peter; Limousin, Patricia; Lopez, Grisel; Lorenz, Delia; McNeill, Alisdair; Moorby, Catriona; Moore, Matthew; Morris, Huw; Morrison, Karen E.; Mudanohwo, Ese; O'Sullivan, Sean S.; Pearson, Justin; Pearson, Richard; Perlmutter, Joel S.; Petursson, Hjoervar; Pirinen, Matti; Pollak, Pierre; Post, Bart; Potter, Simon; Ravina, Bernard; Revesz, Tamas; Riess, Olaf; Rivadeneira, Fernando; Rizzu, Patrizia; Ryten, Mina; Sawcer, Stephen; Schapira, Anthony; Scheffer, Hans; Shaw, Karen; Shoulson, Ira; Sidransky, Ellen; de Silva, Rohan; Smith, Colin; Spencer, Chris C. A.; Stefansson, Hreinn; Steinberg, Stacy; Stockton, Joanna D.; Strange, Amy; Su, Zhan; Talbot, Kevin; Tanner, Carlie M.; Tashakkori-Ghanbaria, Avazeh; Tison, Francois; Trabzuni, Daniah; Traynor, Bryan J.; Uitterlinden, Andre G.; Vandrovcova, Jana; Velseboer, Daan; Vidailhet, Marie; Vukcevic, Damjan; Walker, Robert; van de Warrenburg, Bart; Weale, Michael E.; Wickremaratchi, Mirdhu; Williams, Nigel; Williams-Gray, Caroline H.; Winder-Rhodes, Sophie; Stefansson, Kari; Martinez, Maria; Donnelly, Peter; Singleton, Andrew B.; Hardy, John; Heutink, Peter; Brice, Alexis; Gasser, Thomas; Wood, Nicholas W.

    2011-01-01

    A previous genome-wide association (GWA) meta-analysis of 12,386 PD cases and 21,026 controls conducted by the International Parkinson's Disease Genomics Consortium (IPDGC) discovered or confirmed 11 Parkinson's disease (PD) loci. This first analysis of the two-stage IPDGC study focused on the set

  16. On A Two-Stage Supply Chain Model In The Manufacturing Industry ...

    African Journals Online (AJOL)

    We model a two-stage supply chain where the upstream stage (stage 2) always meet demand from the downstream stage (stage 1).Demand is stochastic hence shortages will occasionally occur at stage 2. Stage 2 must fill these shortages by expediting using overtime production and/or backordering. We derive optimal ...

  17. A two-stage approach for multi-objective decision making with applications to system reliability optimization

    International Nuclear Information System (INIS)

    Li Zhaojun; Liao Haitao; Coit, David W.

    2009-01-01

    This paper proposes a two-stage approach for solving multi-objective system reliability optimization problems. In this approach, a Pareto optimal solution set is initially identified at the first stage by applying a multiple objective evolutionary algorithm (MOEA). Quite often there are a large number of Pareto optimal solutions, and it is difficult, if not impossible, to effectively choose the representative solutions for the overall problem. To overcome this challenge, an integrated multiple objective selection optimization (MOSO) method is utilized at the second stage. Specifically, a self-organizing map (SOM), with the capability of preserving the topology of the data, is applied first to classify those Pareto optimal solutions into several clusters with similar properties. Then, within each cluster, the data envelopment analysis (DEA) is performed, by comparing the relative efficiency of those solutions, to determine the final representative solutions for the overall problem. Through this sequential solution identification and pruning process, the final recommended solutions to the multi-objective system reliability optimization problem can be easily determined in a more systematic and meaningful way.
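
    The second-stage pruning idea described above (cluster the Pareto set, then keep one representative per cluster) can be sketched in a few lines. This is not the paper's implementation: k-means stands in for the SOM, a simple distance-to-ideal-point score stands in for the DEA efficiency comparison, and the Pareto data are synthetic:

        # Sketch of second-stage pruning: cluster a Pareto set, pick one
        # representative per cluster. k-means replaces the paper's SOM; a
        # distance-to-ideal-point score replaces the DEA step.
        import numpy as np
        from sklearn.cluster import KMeans

        rng = np.random.default_rng(0)
        # Hypothetical Pareto front: columns = (reliability, cost).
        pareto = np.column_stack([rng.uniform(0.90, 0.999, 50),
                                  rng.uniform(10, 100, 50)])

        kmeans = KMeans(n_clusters=5, n_init=10, random_state=0).fit(pareto)

        # Normalize both objectives to [0, 1]; flip cost so larger is better.
        norm = (pareto - pareto.min(axis=0)) / np.ptp(pareto, axis=0)
        norm[:, 1] = 1.0 - norm[:, 1]

        representatives = []
        for k in range(5):
            members = np.where(kmeans.labels_ == k)[0]
            # Within each cluster, keep the solution closest to the ideal point (1, 1).
            best = members[np.argmin(np.linalg.norm(norm[members] - 1.0, axis=1))]
            representatives.append(best)
        print("representative solution indices:", sorted(representatives))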

  18. Time clustered sampling can inflate the inferred substitution rate in foot-and-mouth disease virus analyses

    DEFF Research Database (Denmark)

    Pedersen, Casper-Emil Tingskov; Frandsen, Peter; Wekesa, Sabenzia N.

    2015-01-01

    abundance of sequence data sampled under widely different schemes, an effort to keep results consistent and comparable is needed. This study emphasizes commonly disregarded problems in the inference of evolutionary rates in viral sequence data when sampling is unevenly distributed on a temporal scale through a study of the foot-and-mouth (FMD) disease virus serotypes SAT 1 and SAT 2. Our study shows that clustered temporal sampling in phylogenetic analyses of FMD viruses will strongly bias the inferences of substitution rates and tMRCA because the inferred rates in such data sets reflect a rate closer to the mutation rate rather than the substitution rate. Estimating evolutionary parameters from viral sequences should be performed with due consideration of the differences in short-term and longer-term evolutionary processes occurring within sets of temporally sampled viruses, and studies should carefully...

  19. Diversity in the stellar velocity dispersion profiles of a large sample of brightest cluster galaxies z ≤ 0.3

    Science.gov (United States)

    Loubser, S. I.; Hoekstra, H.; Babul, A.; O'Sullivan, E.

    2018-06-01

    We analyse spatially resolved deep optical spectroscopy of brightest cluster galaxies (BCGs) located in 32 massive clusters with redshifts of 0.05 ≤ z ≤ 0.30 to investigate their velocity dispersion profiles. We compare these measurements to those of other massive early-type galaxies, as well as central group galaxies, where relevant. This unique, large sample extends to the most extreme of massive galaxies, spanning M_K between -25.7 and -27.8 mag, and host cluster halo mass M_500 up to 1.7 × 10^15 M⊙. To compare the kinematic properties between brightest group and cluster members, we analyse similar spatially resolved long-slit spectroscopy for 23 nearby brightest group galaxies (BGGs) from the Complete Local-Volume Groups Sample. We find a surprisingly large variety in velocity dispersion slopes for BCGs, with a significantly larger fraction of positive slopes, unique compared to other (non-central) early-type galaxies as well as the majority of the brightest members of the groups. We find that the velocity dispersion slopes of the BCGs and BGGs correlate with the luminosity of the galaxies, and we quantify this correlation. It is not clear whether the full diversity in velocity dispersion slopes that we see is reproduced in simulations.

  20. Understanding the cluster randomised crossover design: a graphical illustration of the components of variation and a sample size tutorial.

    Science.gov (United States)

    Arnup, Sarah J; McKenzie, Joanne E; Hemming, Karla; Pilcher, David; Forbes, Andrew B

    2017-08-15

    In a cluster randomised crossover (CRXO) design, a sequence of interventions is assigned to a group, or 'cluster', of individuals. Each cluster receives each intervention in a separate period of time, forming 'cluster-periods'. Sample size calculations for CRXO trials need to account for both the cluster randomisation and crossover aspects of the design. Formulae are available for the two-period, two-intervention, cross-sectional CRXO design; however, implementation of these formulae is known to be suboptimal. The aims of this tutorial are to illustrate the intuition behind the design and to provide guidance on performing sample size calculations. Graphical illustrations are used to describe the effect of the cluster randomisation and crossover aspects of the design on the correlation between individual responses in a CRXO trial. Sample size calculations for binary and continuous outcomes are illustrated using parameters estimated from the Australia and New Zealand Intensive Care Society - Adult Patient Database (ANZICS-APD) for patient mortality and length(s) of stay (LOS). The similarity between individual responses in a CRXO trial can be understood in terms of three components of variation: variation in the cluster mean response; variation in the cluster-period mean response; and variation between individual responses within a cluster-period; or equivalently in terms of the correlation between individual responses in the same cluster-period (within-cluster within-period correlation, WPC), and between individual responses in the same cluster but in different periods (within-cluster between-period correlation, BPC). The BPC lies between zero and the WPC. When the WPC and BPC are equal, the precision gained by the crossover aspect of the CRXO design equals the precision lost by cluster randomisation. When the BPC is zero there is no advantage in a CRXO over a parallel-group cluster randomised trial. Sample size calculations illustrate that small changes in the specification of
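
    One common way to turn the WPC/BPC intuition above into numbers is a design effect relative to an individually randomised parallel-group trial. A rough sketch, assuming the form DE = 1 + (m - 1)·WPC - m·BPC for the two-period cross-sectional CRXO design; readers should verify against the tutorial's formulae before relying on it:

        # Rough sketch of a CRXO sample-size adjustment, assuming the design effect
        # DE = 1 + (m - 1) * wpc - m * bpc for a two-period, two-intervention,
        # cross-sectional CRXO design (verify against the tutorial before use).
        import math

        def crxo_total_sample_size(n_individual: int, m: int, wpc: float, bpc: float) -> int:
            """Adjust an individually randomised sample size for a CRXO design.

            n_individual: total sample size for an individually randomised trial
            m: number of individuals per cluster-period
            wpc: within-cluster within-period correlation
            bpc: within-cluster between-period correlation (0 <= bpc <= wpc)
            """
            design_effect = 1 + (m - 1) * wpc - m * bpc
            return math.ceil(n_individual * design_effect)

        # When bpc == 0 the crossover gives no benefit over a parallel cluster
        # trial; as bpc approaches wpc, the crossover gain roughly offsets the
        # clustering penalty.
        print(crxo_total_sample_size(1000, m=50, wpc=0.05, bpc=0.0))   # 3450
        print(crxo_total_sample_size(1000, m=50, wpc=0.05, bpc=0.03))  # 1950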

  1. Health and human rights in eastern Myanmar after the political transition: a population-based assessment using multistaged household cluster sampling.

    Directory of Open Access Journals (Sweden)

    Parveen Kaur Parmar

    Myanmar transitioned to a nominally civilian parliamentary government in March 2011. Qualitative reports suggest that exposure to violence and displacement has declined while international assistance for health services has increased. An assessment of the impact of these changes on the health and human rights situation has not been published. Five community-based organizations conducted household surveys using two-stage cluster sampling in five states in eastern Myanmar from July 2013-September 2013. Data was collected from 6,178 households on demographics, mortality, health outcomes, water and sanitation, food security and nutrition, malaria, and human rights violations (HRV). Among children aged 6-59 months screened, the prevalence of global acute malnutrition (representing moderate or severe malnutrition) was 11.3% (8.0-14.7). A total of 250 deaths occurred during the year prior to the survey. Infant deaths accounted for 64 of these (IMR 94.2; 95% CI 66.5-133.5) and there were 94 child deaths (U5MR 141.9; 95% CI 94.8-189.0). 10.7% of households (95% CI 7.0-14.5) experienced at least one HRV in the past year, while four percent reported 2 or more HRVs. Household exposure to one or more HRVs was associated with moderate-severe malnutrition among children (14.9 vs. 6.8%; prevalence ratio 2.2, 95% CI 1.2-4.2). Household exposure to HRVs was associated with self-reported fair or poor health status among respondents (PR 1.3; 95% CI 1.1-1.5). This large survey of health and human rights demonstrates that two years after political transition, vulnerable populations of eastern Myanmar are less likely to experience human rights violations compared to previous surveys. However, access to health services remains constrained, and risk of disease and death remains higher than the country as a whole. Efforts to address these poor health indicators should prioritize support for populations that remain outside the scope of most formal government and donor programs.

  2. Health and human rights in eastern Myanmar after the political transition: a population-based assessment using multistaged household cluster sampling.

    Science.gov (United States)

    Parmar, Parveen Kaur; Barina, Charlene C; Low, Sharon; Tun, Kyaw Thura; Otterness, Conrad; Mhote, Pue P; Htoo, Saw Nay; Kyaw, Saw Win; Lwin, Nai Aye; Maung, Cynthia; Moo, Naw Merry; Oo, Eh Kalu Shwe; Reh, Daniel; Mon, Nai Chay; Singh, Nakul; Goyal, Ravi; Richards, Adam K

    2015-01-01

    Myanmar transitioned to a nominally civilian parliamentary government in March 2011. Qualitative reports suggest that exposure to violence and displacement has declined while international assistance for health services has increased. An assessment of the impact of these changes on the health and human rights situation has not been published. Five community-based organizations conducted household surveys using two-stage cluster sampling in five states in eastern Myanmar from July 2013-September 2013. Data was collected from 6,178 households on demographics, mortality, health outcomes, water and sanitation, food security and nutrition, malaria, and human rights violations (HRV). Among children aged 6-59 months screened, the prevalence of global acute malnutrition (representing moderate or severe malnutrition) was 11.3% (8.0-14.7). A total of 250 deaths occurred during the year prior to the survey. Infant deaths accounted for 64 of these (IMR 94.2; 95% CI 66.5-133.5) and there were 94 child deaths (U5MR 141.9; 95% CI 94.8-189.0). 10.7% of households (95% CI 7.0-14.5) experienced at least one HRV in the past year, while four percent reported 2 or more HRVs. Household exposure to one or more HRVs was associated with moderate-severe malnutrition among children (14.9 vs. 6.8%; prevalence ratio 2.2, 95% CI 1.2-4.2). Household exposure to HRVs was associated with self-reported fair or poor health status among respondents (PR 1.3; 95% CI 1.1-1.5). This large survey of health and human rights demonstrates that two years after political transition, vulnerable populations of eastern Myanmar are less likely to experience human rights violations compared to previous surveys. However, access to health services remains constrained, and risk of disease and death remains higher than the country as a whole. Efforts to address these poor health indicators should prioritize support for populations that remain outside the scope of most formal government and donor programs.
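
    The prevalence ratios reported above can be illustrated with a minimal log/delta-method calculation. The counts below are hypothetical, and a real analysis of this survey would additionally account for the two-stage cluster design (e.g. via design effects or survey-weighted estimators), which this sketch ignores:

        # Minimal sketch of a prevalence ratio with a 95% CI via the log/delta method.
        import math

        def prevalence_ratio(a: int, n1: int, c: int, n0: int):
            """a/n1 = prevalence in exposed households, c/n0 = in unexposed."""
            pr = (a / n1) / (c / n0)
            se_log = math.sqrt(1/a - 1/n1 + 1/c - 1/n0)
            lo = math.exp(math.log(pr) - 1.96 * se_log)
            hi = math.exp(math.log(pr) + 1.96 * se_log)
            return pr, (lo, hi)

        # Hypothetical counts giving 15.0% vs 6.8% prevalence (PR ~ 2.2):
        pr, ci = prevalence_ratio(a=15, n1=100, c=68, n0=1000)
        print(f"PR = {pr:.2f}, 95% CI = ({ci[0]:.2f}, {ci[1]:.2f})")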

  3. Combining evidence from multiple electronic health care databases: performances of one-stage and two-stage meta-analysis in matched case-control studies.

    Science.gov (United States)

    La Gamba, Fabiola; Corrao, Giovanni; Romio, Silvana; Sturkenboom, Miriam; Trifirò, Gianluca; Schink, Tania; de Ridder, Maria

    2017-10-01

    Clustering of patients in databases is usually ignored in one-stage meta-analysis of multi-database studies using matched case-control data. The aim of this study was to compare bias and efficiency of such a one-stage meta-analysis with a two-stage meta-analysis. First, we compared the approaches by generating matched case-control data under 5 simulated scenarios, built by varying: (1) the exposure-outcome association; (2) its variability among databases; (3) the confounding strength of one covariate on this association; (4) its variability; and (5) the (heterogeneous) confounding strength of two covariates. Second, we made the same comparison using empirical data from the ARITMO project, a multiple database study investigating the risk of ventricular arrhythmia following the use of medications with arrhythmogenic potential. In our study, we specifically investigated the effect of current use of promethazine. Bias increased for one-stage meta-analysis with increasing (1) between-database variance of exposure effect and (2) heterogeneous confounding generated by two covariates. The efficiency of one-stage meta-analysis was slightly lower than that of two-stage meta-analysis for the majority of investigated scenarios. Based on ARITMO data, there were no evident differences between one-stage (OR = 1.50, CI = [1.08; 2.08]) and two-stage (OR = 1.55, CI = [1.12; 2.16]) approaches. When the effect of interest is heterogeneous, a one-stage meta-analysis ignoring clustering gives biased estimates. Two-stage meta-analysis generates estimates at least as accurate and precise as one-stage meta-analysis. However, in a study using small databases and rare exposures and/or outcomes, a correct one-stage meta-analysis becomes essential. Copyright © 2017 John Wiley & Sons, Ltd.
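
    The two-stage approach the authors compare against amounts to estimating an effect within each database and then pooling across databases. A minimal sketch using fixed-effect inverse-variance pooling of per-database log odds ratios (the stage-1 estimates below are made up):

        # Sketch of the two-stage approach: a log OR and SE estimated per database
        # (stage 1), then pooled with fixed-effect inverse-variance weights (stage 2).
        import math

        db_estimates = [(0.41, 0.21), (0.36, 0.33), (0.51, 0.27)]  # (log OR, SE)

        weights = [1 / se**2 for _, se in db_estimates]
        pooled = sum(w * b for (b, _), w in zip(db_estimates, weights)) / sum(weights)
        pooled_se = math.sqrt(1 / sum(weights))

        or_, lo, hi = (math.exp(x) for x in
                       (pooled, pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se))
        print(f"pooled OR = {or_:.2f}, 95% CI = ({lo:.2f}, {hi:.2f})")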

  4. A two-stage procedure for determining unsaturated hydraulic characteristics using a syringe pump and outflow observations

    DEFF Research Database (Denmark)

    Wildenschild, Dorthe; Jensen, Karsten Høgh; Hollenbeck, Karl-Josef

    1997-01-01

    A fast two-stage methodology for determining unsaturated flow characteristics is presented. The procedure builds on direct measurement of the retention characteristic using a syringe pump technique, combined with inverse estimation of the hydraulic conductivity characteristic based on one-step outflow experiments. The direct measurements are obtained with a commercial syringe pump, which continuously withdraws fluid from a soil sample at a very low and accurate flow rate, thus providing the water content in the soil sample. The retention curve is then established by simultaneously monitoring ... The one-step outflow data and the independently measured retention data are included in the objective function of a traditional least-squares minimization routine, providing unique estimates of the unsaturated hydraulic characteristics by means of numerical inversion of Richards equation. As opposed to what is often...

  5. CFD simulations of compressed air two stage rotary Wankel expander – Parametric analysis

    International Nuclear Information System (INIS)

    Sadiq, Ghada A.; Tozer, Gavin; Al-Dadah, Raya; Mahmoud, Saad

    2017-01-01

    Highlights: • CFD ANSYS-Fluent 3D simulation of a Wankel expander is developed. • Single- and two-stage expander performance is compared. • Inlet and outlet port shapes and configurations are investigated. • An isentropic efficiency of 91% is achieved for the two-stage Wankel expander. - Abstract: A small-scale volumetric Wankel expander is a powerful device for small-scale power generation in compressed air energy storage (CAES) systems and Organic Rankine cycles powered by different heat sources such as biomass, low-temperature geothermal, solar and waste heat, leading to significant reductions in CO_2 emissions. Wankel expanders outperform other types of expander due to their ability to produce two power pulses per revolution per chamber, in addition to higher compactness, lower noise and vibration and lower cost. In this paper, a computational fluid dynamics (CFD) model was developed using ANSYS 16.2 to simulate the flow dynamics of single- and two-stage Wankel expanders and to investigate the effect of port configurations, including size and spacing, on the expander’s power output and isentropic efficiency. Single-stage and two-stage expanders were also analysed under different operating conditions. Single-stage 3D CFD results were compared to published work, showing close agreement. The CFD modelling was used to investigate the performance of the rotary device using air as an ideal gas with various port diameters ranging from 15 mm to 50 mm; port spacing varying from 28 mm to 66 mm; and different Wankel expander sizes (r = 48, e = 6.6, b = 32) mm and (r = 58, e = 8, b = 40) mm, both as single-stage and as two-stage expanders with different configurations and various operating conditions. Results showed that the best single-stage Wankel expander design was (r = 48, e = 6.6, b = 32) mm, with port diameters of 20 mm and port spacing equal to 50 mm. Moreover, combining two Wankel expanders horizontally, with the larger one at the front, produced 8.52 kW compared

  6. Profile fitting and the two-stage method in neutron powder diffractometry for structure and texture analysis

    International Nuclear Information System (INIS)

    Jansen, E.; Schaefer, W.; Will, G.; Kernforschungsanlage Juelich G.m.b.H.

    1988-01-01

    An outline and an application of the two-stage method in neutron powder diffractometry are presented. Stage (1): Individual reflection data like position, half-width and integrated intensity are analysed by profile fitting. The profile analysis is based on an experimentally determined instrument function and can be applied without prior knowledge of a structural model. A mathematical procedure is described which results in a variance-covariance matrix containing standard deviations and correlations of the refined reflection parameters. Stage (2): The individual reflection data derived from the profile fitting procedure can be used for appropriate purposes either in structure determination or in texture and strain or stress analysis. The integrated intensities are used in the non-diagonal weighted least-squares routine POWLS for structure refinement. The weight matrix is given by the inverted variance-covariance matrix of stage (1). This procedure is the basis for reliable and real Bragg R values and for a realistic estimation of standard deviations of structural parameters. In the case of texture analysis the integrated intensities are compiled into pole figures representing the intensity distribution for all sample orientations of individual hkl. Various examples for the wide application of the two-stage method in structure and texture analysis are given: Structure refinement of a standard quartz specimen, magnetic ordering in the system Tb_xY_{1-x}Ag, preferred orientation effects in deformed marble and texture investigations of a triclinic plagioclase. (orig.)

  7. Synthesis of Programmable Main-chain Liquid-crystalline Elastomers Using a Two-stage Thiol-acrylate Reaction.

    Science.gov (United States)

    Saed, Mohand O; Torbati, Amir H; Nair, Devatha P; Yakacki, Christopher M

    2016-01-19

    This study presents a novel two-stage thiol-acrylate Michael addition-photopolymerization (TAMAP) reaction to prepare main-chain liquid-crystalline elastomers (LCEs) with facile control over network structure and programming of an aligned monodomain. Tailored LCE networks were synthesized using routine mixing of commercially available starting materials and pouring monomer solutions into molds to cure. An initial polydomain LCE network is formed via a self-limiting thiol-acrylate Michael-addition reaction. Strain-to-failure and glass transition behavior were investigated as a function of crosslinking monomer, pentaerythritol tetrakis(3-mercaptopropionate) (PETMP). An example non-stoichiometric system of 15 mol% PETMP thiol groups and an excess of 15 mol% acrylate groups was used to demonstrate the robust nature of the material. The LCE formed an aligned and transparent monodomain when stretched, with a maximum failure strain over 600%. Stretched LCE samples were able to demonstrate both stress-driven thermal actuation when held under a constant bias stress or the shape-memory effect when stretched and unloaded. A permanently programmed monodomain was achieved via a second-stage photopolymerization reaction of the excess acrylate groups when the sample was in the stretched state. LCE samples were photo-cured and programmed at 100%, 200%, 300%, and 400% strain, with all samples demonstrating over 90% shape fixity when unloaded. The magnitude of total stress-free actuation increased from 35% to 115% with increased programming strain. Overall, the two-stage TAMAP methodology is presented as a powerful tool to prepare main-chain LCE systems and explore structure-property-performance relationships in these fascinating stimuli-sensitive materials.

  8. Cluster-cluster clustering

    International Nuclear Information System (INIS)

    Barnes, J.; Dekel, A.; Efstathiou, G.; Frenk, C.S.; Yale Univ., New Haven, CT; California Univ., Santa Barbara; Cambridge Univ., England; Sussex Univ., Brighton, England)

    1985-01-01

    The cluster correlation function ξ_c(r) is compared with the particle correlation function ξ(r) in cosmological N-body simulations with a wide range of initial conditions. The experiments include scale-free initial conditions, pancake models with a coherence length in the initial density field, and hybrid models. Three N-body techniques and two cluster-finding algorithms are used. In scale-free models with white noise initial conditions, ξ_c and ξ are essentially identical. In scale-free models with more power on large scales, it is found that the amplitude of ξ_c increases with cluster richness; in this case the clusters give a biased estimate of the particle correlations. In the pancake and hybrid models (with n = 0 or 1), ξ_c is steeper than ξ, but the cluster correlation length exceeds that of the points by less than a factor of 2, independent of cluster richness. Thus the high amplitude of ξ_c found in studies of rich clusters of galaxies is inconsistent with white noise and pancake models and may indicate a primordial fluctuation spectrum with substantial power on large scales. 30 references
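
    For readers unfamiliar with the notation, the correlation functions compared here follow the standard cosmological definitions (not specific to this paper): the two-point correlation function measures the excess probability of finding a pair of objects at separation r, and a linear bias factor relates the cluster and particle functions.

        % Standard definitions assumed by the abstract (not specific to this paper).
        % Joint probability of finding objects in volume elements dV_1, dV_2 at
        % separation r, for a population with mean number density \bar{n}:
        dP = \bar{n}^{2}\,\bigl[1 + \xi(r)\bigr]\,dV_{1}\,dV_{2}
        % A linearly biased cluster population then satisfies
        \xi_{c}(r) = b^{2}\,\xi(r),
        % so an amplitude of \xi_c rising with richness indicates a
        % richness-dependent bias factor b.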

  9. Evaluation of immunization coverage by lot quality assurance sampling compared with 30-cluster sampling in a primary health centre in India.

    Science.gov (United States)

    Singh, J; Jain, D C; Sharma, R S; Verghese, T

    1996-01-01

    The immunization coverage of infants, children and women residing in a primary health centre (PHC) area in Rajasthan was evaluated both by lot quality assurance sampling (LQAS) and by the 30-cluster sampling method recommended by WHO's Expanded Programme on Immunization (EPI). The LQAS survey was used to classify 27 mutually exclusive subunits of the population, defined as residents in health subcentre areas, on the basis of acceptable or unacceptable levels of immunization coverage among infants and their mothers. The LQAS results from the 27 subcentres were also combined to obtain an overall estimate of coverage for the entire population of the primary health centre, and these results were compared with the EPI cluster survey results. The LQAS survey did not identify any subcentre with a level of immunization among infants high enough to be classified as acceptable; only three subcentres were classified as having acceptable levels of tetanus toxoid (TT) coverage among women. The estimated overall coverage in the PHC population from the combined LQAS results showed that a quarter of the infants were immunized appropriately for their ages and that 46% of their mothers had been adequately immunized with TT. Although the age groups and the periods of time during which the children were immunized differed for the LQAS and EPI survey populations, the characteristics of the mothers were largely similar. About 57% (95% CI, 46-67) of them were found to be fully immunized with TT by 30-cluster sampling, compared with 46% (95% CI, 41-51) by stratified random sampling. The difference was not statistically significant. The field work to collect LQAS data took about three times longer, and cost 60% more than the EPI survey. The apparently homogeneous and low level of immunization coverage in the 27 subcentres makes this an impractical situation in which to apply LQAS, and the results obtained were therefore not particularly useful. However, if LQAS had been applied by local
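
    The LQAS classification referred to above is a simple binomial decision rule applied lot by lot (here, subcentre by subcentre). A sketch, with an illustrative (n, d) pair and coverage thresholds that are assumptions rather than this survey's actual design parameters:

        # Sketch of an LQAS rule: sample n individuals per lot, count the
        # unvaccinated, classify coverage as acceptable when the count <= d.
        from scipy.stats import binom

        n, d = 19, 3                    # illustrative LQAS sample size / decision value
        p_upper, p_lower = 0.90, 0.60   # assumed "acceptable" vs "unacceptable" coverage

        def classify(n_unvaccinated: int) -> str:
            return "acceptable" if n_unvaccinated <= d else "unacceptable"

        # Error risks implied by the (n, d) pair:
        # provider risk: rejecting a lot whose true coverage is p_upper
        provider_risk = 1 - binom.cdf(d, n, 1 - p_upper)
        # consumer risk: accepting a lot whose true coverage is only p_lower
        consumer_risk = binom.cdf(d, n, 1 - p_lower)
        print(f"provider risk = {provider_risk:.3f}, consumer risk = {consumer_risk:.3f}")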

  10. A Two-stage Improvement Method for Robot Based 3D Surface Scanning

    Science.gov (United States)

    He, F. B.; Liang, Y. D.; Wang, R. F.; Lin, Y. S.

    2018-03-01

    As is known, the surface of an unknown object is difficult to measure or recognize precisely; hence, 3D laser scanning technology was introduced and is widely used in surface reconstruction. Usually, slower surface scanning gives better quality, while faster scanning gives worse quality. Given this trade-off, this paper presents a new two-stage scanning method that pursues high surface-scanning quality at a faster speed. The first stage is a rough scan to obtain general point cloud data of the object’s surface, and the second stage is a specific scan to repair missing regions, which are determined by the chord length discrete method. Meanwhile, a system containing a robotic manipulator and a handy scanner was developed to implement the two-stage scanning method, and relevant paths were planned according to minimum enclosing ball and regional coverage theories.

  11. An adaptive two-stage dose-response design method for establishing proof of concept.

    Science.gov (United States)

    Franchetti, Yoko; Anderson, Stewart J; Sampson, Allan R

    2013-01-01

    We propose an adaptive two-stage dose-response design where a prespecified adaptation rule is used to add and/or drop treatment arms between the stages. We extend the multiple comparison procedures-modeling (MCP-Mod) approach into a two-stage design. In each stage, we use the same set of candidate dose-response models and test for a dose-response relationship or proof of concept (PoC) via model-associated statistics. The stage-wise test results are then combined to establish "global" PoC using a conditional error function. Our simulation studies showed good and more robust power in our design method compared to conventional and fixed designs.
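
    The stage-wise combination idea can be illustrated with the weighted inverse-normal combination rule, a standard adaptive-design device that is related to, but not identical with, the conditional error function approach used in the paper:

        # Sketch of combining two stage-wise one-sided PoC p-values with the
        # weighted inverse-normal rule (not the authors' exact procedure).
        import math
        from scipy.stats import norm

        def combine_p_values(p1: float, p2: float, w1: float = None) -> float:
            """Combine stage-wise p-values; weights satisfy w1^2 + w2^2 = 1."""
            w1 = math.sqrt(0.5) if w1 is None else w1
            w2 = math.sqrt(1 - w1**2)
            z = w1 * norm.ppf(1 - p1) + w2 * norm.ppf(1 - p2)
            return 1 - norm.cdf(z)

        # Global PoC is established when the combined p-value falls below alpha.
        print(combine_p_values(0.04, 0.10))  # ~0.016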

  12. Target tracking system based on preliminary and precise two-stage compound cameras

    Science.gov (United States)

    Shen, Yiyan; Hu, Ruolan; She, Jun; Luo, Yiming; Zhou, Jie

    2018-02-01

    Early target detection and high-precision target tracking are two important performance indicators that need to be balanced in a practical target search and tracking system. This paper proposes a target tracking system with a preliminary and precise two-stage compound design. The system uses a large field of view to search for the target; after the target is found and confirmed, it switches to a small field of view for target tracking. In this system, an appropriate field-switching strategy is the key to achieving tracking. At the same time, two groups of PID parameters are added to the system to reduce tracking error. This combination of preliminary and precise stages extends the reachable scope of targets and improves target tracking accuracy, and the method has practical value.

  13. A Two Stage Solution Procedure for Production Planning System with Advance Demand Information

    Science.gov (United States)

    Ueno, Nobuyuki; Kadomoto, Kiyotaka; Hasuike, Takashi; Okuhara, Koji

    We model the ‘Naiji system’, a unique cooperation technique between a manufacturer and its suppliers in Japan. We propose a two-stage solution procedure for a production planning problem with advance demand information, which is called ‘Naiji’. Under demand uncertainty, this model is formulated as a nonlinear stochastic programming problem which minimizes the sum of production cost and inventory holding cost subject to a probabilistic constraint and some linear production constraints. Exploiting the convexity and the special structure of the correlation matrix in the problem, where inventory for different periods is not independent, we propose a solution procedure with two stages, named Mass Customization Production Planning & Management System (MCPS) and Variable Mesh Neighborhood Search (VMNS), based on meta-heuristics. It is shown that the proposed solution procedure obtains a near-optimal solution efficiently and is practical for making a good master production schedule for the suppliers.

  14. Gas pollutants removal in a single- and two-stage ejector-venturi scrubber.

    Science.gov (United States)

    Gamisans, Xavier; Sarrà, Montserrrat; Lafuente, F Javier

    2002-03-29

    The absorption of SO2 and NH3 from flue gas into NaOH and H2SO4 solutions, respectively, has been studied using an industrial-scale ejector-venturi scrubber. A statistical methodology is presented to characterise the performance of the scrubber by varying several factors such as gas pollutant concentration, air flowrate and absorbing solution flowrate. Several types of venturi tube construction were assessed, including the use of a two-stage venturi tube. The results showed a strong influence of the liquid scrubbing flowrate on pollutant removal efficiency; the initial pollutant concentration and the gas flowrate had a slight influence. The use of a two-stage venturi tube considerably improved the absorption efficiency, although it increased energy consumption. The results of this study will be applicable to the optimal design of venturi-based absorbers for gaseous pollution control or chemical reactors.

  15. Influence of capacity- and time-constrained intermediate storage in two-stage food production systems

    DEFF Research Database (Denmark)

    Akkerman, Renzo; van Donk, Dirk Pieter; Gaalman, Gerard

    2007-01-01

    In food processing, two-stage production systems with a batch processor in the first stage and packaging lines in the second stage are common and mostly separated by capacity- and time-constrained intermediate storage. This combination of constraints is common in practice, but the literature hardly pays any attention to this. In this paper, we show how various capacity and time constraints influence the performance of a specific two-stage system. We study the effects of several basic scheduling and sequencing rules in the presence of these constraints in order to learn the characteristics of systems like this. Contrary to the common sense in operations management, the LPT rule is able to maximize the total production volume per day. Furthermore, we show that adding one tank has considerable effects. Finally, we conclude that the optimal setup frequency for batches in the first stage...

  16. A simple two stage optimization algorithm for constrained power economic dispatch

    International Nuclear Information System (INIS)

    Huang, G.; Song, K.

    1994-01-01

    A simple two stage optimization algorithm is proposed and investigated for fast computation of constrained power economic dispatch control problems. The method is a simple demonstration of the hierarchical aggregation-disaggregation (HAD) concept. The algorithm first solves an aggregated problem to obtain an initial solution. This aggregated problem turns out to be the classical economic dispatch formulation, and it can be solved in 1% of the overall computation time. In the second stage, a linear programming method finds the optimal solution, which satisfies power balance constraints, generation and transmission inequality constraints, and security constraints. Implementation of the algorithm for IEEE systems and EPRI Scenario systems shows that the two stage method obtains an average speedup ratio of 10.64 compared to the classical LP-based method.
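
    The first-stage aggregated problem, classical economic dispatch, can be sketched with a lambda-iteration (bisection on the system incremental cost). The generator data below are hypothetical, and the second-stage LP that restores transmission and security constraints is omitted:

        # Sketch of classical economic dispatch for quadratic costs
        # C_i(P) = a_i + b_i*P + c_i*P^2, solved by bisection on the system
        # incremental cost lambda, with generator limits enforced by clamping.

        gens = [  # (b_i, c_i, P_min, P_max) -- hypothetical units
            (8.0, 0.010, 50, 300),
            (7.5, 0.012, 40, 250),
            (9.0, 0.008, 30, 200),
        ]
        demand = 500.0

        def output_at_lambda(lam: float) -> float:
            total = 0.0
            for b, c, pmin, pmax in gens:
                p = (lam - b) / (2 * c)           # dC/dP = b + 2cP = lambda
                total += min(max(p, pmin), pmax)  # clamp to generator limits
            return total

        lo, hi = 0.0, 100.0
        for _ in range(100):                      # bisection on lambda
            lam = 0.5 * (lo + hi)
            if output_at_lambda(lam) < demand:
                lo = lam
            else:
                hi = lam

        dispatch = [min(max((lam - b) / (2 * c), pmin), pmax)
                    for b, c, pmin, pmax in gens]
        print(f"lambda = {lam:.3f}, dispatch = {[round(p, 1) for p in dispatch]}")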

  17. Two-stage combustion for reducing pollutant emissions from gas turbine combustors

    Science.gov (United States)

    Clayton, R. M.; Lewis, D. H.

    1981-01-01

    Combustion and emission results are presented for a premix combustor fueled with admixtures of JP5 with neat H2 and of JP5 with simulated partial-oxidation product gas. The combustor was operated with inlet-air state conditions typical of cruise power for high performance aviation engines. Ultralow NOx, CO and HC emissions and extended lean burning limits were achieved simultaneously. Laboratory scale studies of the non-catalyzed rich-burning characteristics of several paraffin-series hydrocarbon fuels and of JP5 showed sooting limits at equivalence ratios of about 2.0 and that in order to achieve very rich sootless burning it is necessary to premix the reactants thoroughly and to use high levels of air preheat. The application of two-stage combustion for the reduction of fuel NOx was reviewed. An experimental combustor designed and constructed for two-stage combustion experiments is described.

  18. A Sensorless Power Reserve Control Strategy for Two-Stage Grid-Connected PV Systems

    DEFF Research Database (Denmark)

    Sangwongwanich, Ariya; Yang, Yongheng; Blaabjerg, Frede

    2017-01-01

    Due to the still increasing penetration of grid-connected Photovoltaic (PV) systems, advanced active power control functionalities have been introduced in grid regulations. A power reserve control, where the active power from the PV panels is reserved during operation, is required for grid support. In this paper, a cost-effective solution to realize the power reserve for two-stage grid-connected PV systems is proposed. The proposed solution routinely employs a Maximum Power Point Tracking (MPPT) control to estimate the available PV power and a Constant Power Generation (CPG) control ... performed on a 3-kW two-stage single-phase grid-connected PV system, where the power reserve control is achieved upon demands...

  19. A two staged condensation of vapors of an isobutane tower in installations for sulfuric acid alkylation

    Energy Technology Data Exchange (ETDEWEB)

    Smirnov, N.P.; Feyzkhanov, R.I.; Idrisov, A.D.; Navalikhin, P.G.; Sakharov, V.D.

    1983-01-01

    In order to increase the concentration of isobutane to greater than 72 to 76 percent in an installation for sulfuric acid alkylation, a system of two staged condensation of vapors from an isobutane tower is placed into operation. The first stage condenses the heavier part of the upper distillate of the tower, which is achieved through somewhat of an increase in the condensate temperature. The product which is condensed in the first stage is completely returned to the tower as a live irrigation. The vapors of the isobutane fraction which did not condense in the first stage are sent to two newly installed condensers, from which the product after condensation passes through intermediate tanks to further depropanization. The two staged condensation of vapors of the isobutane tower reduces the content of the inert diluents, the propane and n-butane in the upper distillate of the isobutane tower and creates more favorable conditions for the operation of the isobutane and propane tower.

  20. Optimising the refrigeration cycle with a two-stage centrifugal compressor and a flash intercooler

    Energy Technology Data Exchange (ETDEWEB)

    Roeyttae, Pekka; Turunen-Saaresti, Teemu; Honkatukia, Juha [Lappeenranta University of Technology, Laboratory of Energy and Environmental Technology, PO Box 20, 53851 Lappeenranta (Finland)

    2009-09-15

    The optimisation of a refrigeration process with a two-stage centrifugal compressor and flash intercooler is presented in this paper. The two-stage centrifugal compressor stages are on the same shaft and the electric motor is cooled with the refrigerant. The performance of the centrifugal compressor is evaluated based on semi-empirical specific-speed curves and the effect of the Reynolds number, surface roughness and tip clearance have also been taken into account. The thermodynamic and transport properties of the working fluids are modelled with a real-gas model. The condensing and evaporation temperatures, the temperature after the flash intercooler, and cooling power have been chosen as fixed values in the process. The aim is to gain a maximum coefficient of performance (COP). The method of optimisation, the operation of the compressor and flash intercooler, and the method for estimating the electric motor cooling are also discussed in the article. (author)

  1. A Two-Stage Queue Model to Optimize Layout of Urban Drainage System considering Extreme Rainstorms

    OpenAIRE

    He, Xinhua; Hu, Wenfa

    2017-01-01

    Extreme rainstorms are a main cause of urban floods when an urban drainage system cannot discharge stormwater successfully. This paper investigates the distribution features of rainstorms and the draining process of urban drainage systems, and uses a two-stage single-counter queue method M/M/1→M/D/1 to model an urban drainage system. The model emphasizes the randomness of extreme rainstorms, the fuzziness of the draining process, and the construction and operation cost of the drainage system. Its two objectives are total c...
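
    Both stages of the M/M/1→M/D/1 tandem have closed-form mean waiting times; a minimal sketch using the standard M/M/1 result and the Pollaczek-Khinchine mean for deterministic (M/D/1) service, with hypothetical rates:

        # Mean waits in queue for the two stages, assuming lambda < mu (stable queue).
        def mm1_wq(lam: float, mu: float) -> float:
            rho = lam / mu
            return rho / (mu - lam)            # M/M/1 mean wait in queue

        def md1_wq(lam: float, mu: float) -> float:
            rho = lam / mu
            return rho / (2 * mu * (1 - rho))  # M/D/1 (Pollaczek-Khinchine); half of M/M/1

        lam, mu = 0.8, 1.0  # hypothetical arrival and service rates
        print(mm1_wq(lam, mu), md1_wq(lam, mu))  # 4.0, 2.0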

  2. Generation of dense, pulsed beams of refractory metal atoms using two-stage laser ablation

    International Nuclear Information System (INIS)

    Kadar-Kallen, M.A.; Bonin, K.D.

    1994-01-01

    We report a technique for generating a dense, pulsed beam of refractory metal atoms using two-stage laser ablation. An atomic beam of uranium was produced with a peak, ground-state number density of 1 × 10^12 cm^-3 at a distance of z = 27 cm from the source. This density can be scaled as 1/z^3 to estimate the density at other distances which are also far from the source
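
    The quoted 1/z^3 scaling gives a one-line density estimate at other far-field distances, using the reported reference values:

        # Far-field density estimate from the abstract's 1/z^3 scaling,
        # anchored at the reported n0 = 1e12 cm^-3 at z0 = 27 cm.
        def density(z_cm: float, n0: float = 1e12, z0_cm: float = 27.0) -> float:
            return n0 * (z0_cm / z_cm) ** 3  # atoms per cm^3

        print(f"{density(54.0):.2e}")  # doubling the distance cuts density 8x -> 1.25e11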

  3. Two-stage hepatectomy: who will not jump over the second hurdle?

    Science.gov (United States)

    Turrini, O; Ewald, J; Viret, F; Sarran, A; Goncalves, A; Delpero, J-R

    2012-03-01

    Two-stage hepatectomy uses compensatory liver regeneration after a first noncurative hepatectomy to enable a second curative resection in patients with bilobar colorectal liver metastasis (CLM). To determine the predictive factors of failure of two-stage hepatectomy. Between 2000 and 2010, 48 patients with irresectable CLM were eligible for two-stage hepatectomy. The planned strategy was a) cleaning of the left hepatic lobe (first hepatectomy), b) right portal vein embolisation and c) right hepatectomy (second hepatectomy). Six patients had occult CLM (n = 5) or extra-hepatic disease (n = 1), which was discovered during the first hepatectomy. Thus, 42 patients completed the first hepatectomy and underwent portal vein embolisation in order to receive the second hepatectomy. Eight patients did not undergo a second hepatectomy due to disease progression. Upon univariate analysis, two factors were identified that precluded patients from having the second hepatectomy: the combined resection of a primary tumour during the first hepatectomy (p = 0.01) and administration of chemotherapy between the two hepatectomies (p = 0.03). An independent association with impairment to perform the two-stage strategy was demonstrated by multivariate analysis for only the combined resection of the primary colorectal cancer during the first hepatectomy (p = 0.04). Due to the small number of patients and the absence of equivalent conclusions in other studies, we cannot recommend performance of an isolated colorectal resection prior to chemotherapy. However, resection of an asymptomatic primary tumour before chemotherapy should not be considered as an outdated procedure. Copyright © 2011 Elsevier Ltd. All rights reserved.

  4. Single-stage-to-orbit versus two-stage-two-orbit: A cost perspective

    Science.gov (United States)

    Hamaker, Joseph W.

    1996-03-01

    This paper considers the possible life-cycle costs of single-stage-to-orbit (SSTO) and two-stage-to-orbit (TSTO) reusable launch vehicles (RLV's). The analysis parametrically addresses the issue such that the preferred economic choice comes down to the relative complexity of the TSTO compared to the SSTO. The analysis defines the boundary complexity conditions at which the two configurations have equal life-cycle costs, and finally, makes a case for the economic preference of SSTO over TSTO.

  5. Exergy analysis of vapor compression refrigeration cycle with two-stage and intercooler

    Energy Technology Data Exchange (ETDEWEB)

    Kilic, Bayram [Mehmet Akif Ersoy University, Bucak Emin Guelmez Vocational School, Bucak, Burdur (Turkey)

    2012-07-15

    In this study, exergy analyses of a vapor compression refrigeration cycle with two-stage compression and an intercooler were carried out using the refrigerants R507, R407c and R404a. The necessary thermodynamic values for the analyses were calculated with the Solkane program. The coefficient of performance, exergetic efficiency and total irreversibility rate of the system were investigated under different operating conditions for these refrigerants, and the results for the alternative refrigerants were compared. (orig.)
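
    The headline quantities of such an analysis can be sketched as follows: COP from an energy balance, and exergetic efficiency as the ratio of the cycle COP to the reversible (Carnot) COP between the same temperature levels. Real property data (e.g. from the Solkane program) would be needed for the two-stage cycle itself; the numbers below are hypothetical:

        # Sketch: exergetic efficiency as COP relative to the Carnot COP between
        # the evaporating and condensing temperature levels (values hypothetical).
        def carnot_cop(t_evap_k: float, t_cond_k: float) -> float:
            return t_evap_k / (t_cond_k - t_evap_k)

        def exergetic_efficiency(cop: float, t_evap_k: float, t_cond_k: float) -> float:
            return cop / carnot_cop(t_evap_k, t_cond_k)

        cop = 3.2  # hypothetical cycle COP = Q_evap / W_compressors
        print(exergetic_efficiency(cop, t_evap_k=263.15, t_cond_k=313.15))  # ~0.61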

  6. Control strategy research of two stage topology for pulsed power supply

    International Nuclear Information System (INIS)

    Shi Chunfeng; Wang Rongkun; Huang Yuzhen; Chen Youxin; Yan Hongbin; Gao Daqing

    2013-01-01

    A kind of pulsed power supply for HIRFL-CSR is introduced. The ripple and the current error of the power supply topology during operation are analyzed, and a two stage topology for the pulsed power supply is given. The control strategy was simulated and the experiment was carried out on a digital power platform. The results show that the main circuit structure and control method are feasible. (authors)

  7. A novel flow sensor based on resonant sensing with two-stage microleverage mechanism

    Science.gov (United States)

    Yang, B.; Guo, X.; Wang, Q. H.; Lu, C. F.; Hu, D.

    2018-04-01

    The design, simulation, fabrication, and experiments of a novel flow sensor based on resonant sensing with a two-stage microleverage mechanism are presented in this paper. Different from the conventional detection methods for flow sensors, two differential resonators are adopted to implement air flow rate transformation through two-stage leverage magnification. The proposed flow sensor has a high sensitivity since the adopted two-stage microleverage mechanism possesses a higher amplification factor than a single-stage microleverage mechanism. The modal distribution and geometric dimension of the two-stage leverage mechanism and hair are analyzed and optimized by ANSYS simulation. A digital closed-loop driving technique with a phase frequency detector-based coordinate rotation digital computer algorithm is implemented for the detection and locking of resonance frequency. The sensor fabricated by the standard deep dry silicon on glass process has a device dimension of 5100 μm (length) × 5100 μm (width) × 100 μm (height) with a hair diameter of 1000 μm. The preliminary experimental results demonstrate that the maximal mechanical sensitivity of the flow sensor is approximately 7.41 Hz/(m/s)^2 at a resonant frequency of 22 kHz for a hair height of 9 mm, and increases by 2.42 times as the hair height extends from 3 mm to 9 mm. Simultaneously, a detection limit of 3.23 mm/s air flow amplitude at 60 Hz is confirmed. The proposed flow sensor has great application prospects in micro-autonomous systems and technology, self-stabilizing micro-air vehicles, and environmental monitoring.

  8. Two Stage Fuzzy Methodology to Evaluate the Credit Risks of Investment Projects

    OpenAIRE

    O. Badagadze; G. Sirbiladze; I. Khutsishvili

    2014-01-01

    The work proposes a decision support methodology for credit risk minimization in the selection of investment projects. The methodology provides two stages of project evaluation. Preliminary selection of projects with minor credit risks is made using the Expertons Method. The second stage ranks the chosen projects using the Possibilistic Discrimination Analysis Method, a new modification of the well-known Method of Fuzzy Discrimination Analysis.

  9. A Two-Stage Rural Household Demand Analysis: Microdata Evidence from Jiangsu Province, China

    OpenAIRE

    X.M. Gao; Eric J. Wailes; Gail L. Cramer

    1996-01-01

    In this paper we evaluate economic and demographic effects on China's rural household demand for nine food commodities: vegetables, pork, beef and lamb, poultry, eggs, fish, sugar, fruit, and grain; and five nonfood commodity groups: clothing, fuel, stimulants, housing, and durables. A two-stage budgeting allocation procedure is used to obtain an empirically tractable amalgamative demand system for food commodities which combines an upper-level AIDS model and a lower-level GLES as a modeling f...

  10. Latent Inhibition as a Function of US Intensity in a Two-Stage CER Procedure

    Science.gov (United States)

    Rodriguez, Gabriel; Alonso, Gumersinda

    2004-01-01

    An experiment is reported in which the effect of unconditioned stimulus (US) intensity on latent inhibition (LI) was examined, using a two-stage conditioned emotional response (CER) procedure in rats. A tone was used as the pre-exposed and conditioned stimulus (CS), and a foot-shock of either a low (0.3 mA) or high (0.7 mA) intensity was used as…

  11. Two-stage meta-analysis of survival data from individual participants using percentile ratios

    Science.gov (United States)

    Barrett, Jessica K; Farewell, Vern T; Siannis, Fotios; Tierney, Jayne; Higgins, Julian P T

    2012-01-01

    Methods for individual participant data meta-analysis of survival outcomes commonly focus on the hazard ratio as a measure of treatment effect. Recently, Siannis et al. (2010, Statistics in Medicine 29:3030–3045) proposed the use of percentile ratios as an alternative to hazard ratios. We describe a novel two-stage method for the meta-analysis of percentile ratios that avoids distributional assumptions at the study level. Copyright © 2012 John Wiley & Sons, Ltd. PMID:22825835

  12. Two-staged management for all types of congenital pouch colon

    Directory of Open Access Journals (Sweden)

    Rajendra K Ghritlaharey

    2013-01-01

    Background: The aim of this study was to review our experience with two-staged management for all types of congenital pouch colon (CPC). Patients and Methods: This retrospective study included CPC cases that were managed with two-staged procedures in the Department of Paediatric Surgery over a period of 12 years, from 1 January 2000 to 31 December 2011. Results: CPC comprised 13.71% (97 of 707) of all anorectal malformations (ARM) and 28.19% (97 of 344) of high ARM. Eleven CPC cases (all males) were managed with two-staged procedures. The distribution of cases (Narsimha Rao et al.'s classification) into types I, II, III, and IV was 1, 2, 6, and 2, respectively. Initial operative procedures performed were window colostomy (n = 6), colostomy proximal to pouch (n = 4), and ligation of colovesical fistula and end colostomy (n = 1). As definitive procedures, pouch excision with abdomino-perineal pull-through (APPT) of colon was performed in eight cases, and pouch excision with APPT of ileum in three. The mean age at the time of the definitive procedures was 15.6 months (range, 3 to 53 months) and the mean weight was 7.5 kg (range, 4 to 11 kg). Good fecal continence was observed in six cases and fair in two during the follow-up period, while three of our cases were lost to follow-up. There was no mortality following the definitive procedures among the above 11 cases. Conclusions: Two-staged procedures for all types of CPC can also be performed safely with good results. Most importantly, the definitive procedure is done without a protective stoma, thereby avoiding stoma closure, stoma-related complications, and the associated costs and hospital stay.

  13. Modelling of an air-cooled two-stage Rankine cycle for electricity production

    International Nuclear Information System (INIS)

    Liu, Bo

    2014-01-01

    This work considers a two-stage Rankine cycle architecture slightly different from a standard Rankine cycle for electricity generation. Instead of expanding the steam to extremely low pressure, the vapor leaves the turbine at a higher pressure and thus with a much smaller specific volume, which makes it possible to greatly reduce the size of the steam turbine. The remaining energy is recovered by a bottoming cycle using a working fluid with a much higher density than water steam. The turbines and heat exchangers are therefore more compact, and the turbine exhaust velocity loss is lower. This configuration greatly reduces the overall size of the steam turbine and facilitates the use of a dry cooling system. The main advantage of such an air-cooled two-stage Rankine cycle is the possibility of choosing the installation site of a large or medium power plant without the need for a large and constantly available water source; in addition, compared to water-cooled cycles, the risk regarding future operations is reduced (climate conditions may affect water availability or temperature and imply changes in water supply regulations). The concept has been investigated by EDF R&D. A 22 MW prototype was developed in the 1970s using ammonia as the working fluid of the bottoming cycle, for its high density and high latent heat; however, this fluid is toxic. To find more suitable working fluids for the two-stage Rankine cycle application and to identify the optimal cycle configuration, we have established a working-fluid selection methodology, and some potential candidates have been identified. We have evaluated the performance of two-stage Rankine cycles operating with different working fluids in both design and off-design conditions. For the most acceptable working fluids, the components of the cycle have been sized, so that the power plant concept can be evaluated on a life-cycle cost basis. (author)
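
    A back-of-the-envelope view of why a bottoming cycle helps: if the topping cycle converts a fraction η_t of the input heat and the bottoming cycle converts a fraction η_b of the heat the topping cycle rejects, the combined first-law efficiency is η_t + (1 − η_t)η_b. The numbers below are illustrative, not taken from the EDF study.

```python
# Ideal cascade of two cycles: the bottoming cycle is fed by the heat
# rejected by the topping cycle (perfect thermal coupling assumed).
def cascade_efficiency(eta_top: float, eta_bottom: float) -> float:
    return eta_top + (1.0 - eta_top) * eta_bottom

print(cascade_efficiency(0.30, 0.12))  # 0.30 + 0.70 * 0.12 = 0.384
```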

  14. A Sensorless Power Reserve Control Strategy for Two-Stage Grid-Connected PV Systems

    OpenAIRE

    Sangwongwanich, Ariya; Yang, Yongheng; Blaabjerg, Frede

    2017-01-01

    Due to the still-increasing penetration of grid-connected photovoltaic (PV) systems, advanced active power control functionalities have been introduced in grid regulations. A power reserve control, in which part of the active power from the PV panels is reserved during operation, is required for grid support. In this paper, a cost-effective solution to realize the power reserve for two-stage grid-connected PV systems is proposed. The proposed solution routinely employs a Maximum Power Point Track...

  15. Actuator Fault Diagnosis in a Boeing 747 Model via Adaptive Modified Two-Stage Kalman Filter

    Directory of Open Access Journals (Sweden)

    Fikret Caliskan

    2014-01-01

    An adaptive modified two-stage linear Kalman filtering algorithm is utilized to identify the loss of control effectiveness and the magnitude of low degree of stuck faults in a closed-loop nonlinear B747 aircraft. Control effectiveness factors and stuck magnitudes are used to quantify faults entering control systems through actuators. Pseudorandom excitation inputs are used to help distinguish partial loss and stuck faults. The partial loss and stuck faults in the stabilizer are isolated and identified successfully.
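
    The core idea, estimating a control-effectiveness factor alongside the state, can be sketched with a bias-augmented Kalman filter on a toy scalar plant; the paper's adaptive two-stage filter decouples this estimation for efficiency, which is not reproduced here. The plant, noise levels, and excitation below are illustrative, not a B747 model.

```python
import numpy as np

# Augmented-state Kalman filter: append a constant effectiveness factor
# gamma to the state and estimate it jointly from noisy output measurements.
a, b, q, r = 0.95, 0.5, 1e-4, 1e-2     # plant pole, input gain, noise variances
gamma_true = 0.6                        # 40% loss of control effectiveness

rng = np.random.default_rng(0)
z = np.array([0.0, 1.0])                # estimate of [x, gamma]; start "healthy"
P = np.diag([1.0, 1.0])
x = 0.0
for k in range(200):
    u = np.sin(0.1 * k)                 # persistent (pseudorandom-like) excitation
    x = a * x + b * gamma_true * u + rng.normal(0, q**0.5)
    y = x + rng.normal(0, r**0.5)
    F = np.array([[a, b * u], [0.0, 1.0]])      # time-varying transition
    z = F @ z                                   # predict
    P = F @ P @ F.T + np.diag([q, 1e-6])        # tiny drift keeps gamma adaptive
    H = np.array([[1.0, 0.0]])
    S = H @ P @ H.T + r
    K = P @ H.T / S
    z = z + (K * (y - z[0])).ravel()            # update
    P = (np.eye(2) - K @ H) @ P
print(z[1])   # estimated effectiveness, should approach 0.6
```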

  16. Two-stage energy storage equalization system for lithium-ion battery pack

    Science.gov (United States)

    Chen, W.; Yang, Z. X.; Dong, G. Q.; Li, Y. B.; He, Q. Y.

    2017-11-01

    How to raise the efficiency of energy storage and maximize storage capacity is a core problem in current energy storage management. To that end, a two-stage energy storage equalization system, comprising a two-stage equalization topology and a control strategy based on a symmetric multi-winding transformer and a DC-DC (direct current-direct current) converter, is proposed on the basis of bidirectional active equalization theory; using the range method, it aims to make the voltages of lithium-ion battery packs, and of the cells inside each pack, consistent. Modeling analysis demonstrates that the voltage dispersion of lithium-ion battery packs and of cells inside packs can be kept within 2 percent during charging and discharging. The equalization time was 0.5 ms, 33.3 percent shorter than with a DC-DC converter alone. The proposed two-stage lithium-ion battery equalization system can therefore achieve maximum storage capacity across battery packs and the cells inside them, while significantly improving the efficiency of energy storage.
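
    As a minimal sketch of the range criterion mentioned above (assuming "method of the range" denotes the max–min voltage spread relative to the mean):

```python
# Range-based balance check: the pack is considered balanced when the
# relative voltage spread stays within 2 percent (threshold assumed).
def needs_equalization(cell_voltages, tol=0.02):
    v = list(cell_voltages)
    return (max(v) - min(v)) / (sum(v) / len(v)) > tol

print(needs_equalization([3.30, 3.31, 3.37]))  # True: spread ~2.1% > 2%
```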

  17. Two-stage residual inclusion estimation: addressing endogeneity in health econometric modeling.

    Science.gov (United States)

    Terza, Joseph V; Basu, Anirban; Rathouz, Paul J

    2008-05-01

    The paper focuses on two estimation methods that have been widely used to address endogeneity in empirical research in health economics and health services research-two-stage predictor substitution (2SPS) and two-stage residual inclusion (2SRI). 2SPS is the rote extension (to nonlinear models) of the popular linear two-stage least squares estimator. The 2SRI estimator is similar except that in the second-stage regression, the endogenous variables are not replaced by first-stage predictors. Instead, first-stage residuals are included as additional regressors. In a generic parametric framework, we show that 2SRI is consistent and 2SPS is not. Results from a simulation study and an illustrative example also recommend against 2SPS and favor 2SRI. Our findings are important given that there are many prominent examples of the application of inconsistent 2SPS in the recent literature. This study can be used as a guide by future researchers in health economics who are confronted with endogeneity in their empirical work.
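
    The difference between the two estimators is easy to see in code. In the hypothetical simulation below (not the paper's data), the first stage regresses the endogenous variable on an instrument; 2SPS then replaces the regressor by its fitted values, while 2SRI keeps the regressor and adds the first-stage residuals as a control.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 5000
z = rng.normal(size=n)                     # instrument
u = rng.normal(size=n)                     # unobserved confounder
x = 0.8 * z + u + rng.normal(size=n)       # endogenous regressor
p = 1 / (1 + np.exp(-(0.5 * x + u)))       # outcome depends on x and u
y = rng.binomial(1, p)

first = sm.OLS(x, sm.add_constant(z)).fit()        # common first stage
xhat, resid = first.fittedvalues, first.resid

fit_2sps = sm.Logit(y, sm.add_constant(xhat)).fit(disp=0)
fit_2sri = sm.Logit(y, sm.add_constant(np.column_stack([x, resid]))).fit(disp=0)
# Compare the slope on x; the true structural coefficient here is 0.5.
print(fit_2sps.params[1], fit_2sri.params[1])
```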

  18. Production of endo-pectate lyase by two stage cultivation of Erwinia carotovora

    Energy Technology Data Exchange (ETDEWEB)

    Fukuoka, Satoshi; Kobayashi, Yoshiaki

    1987-02-26

    The productivity of endo-pectate lyase from Erwinia carotovora GIR 1044 was found to be greatly improved by two stage cultivation: in the first stage the bacterium was grown with an inducing carbon source, e.g., pectin, and in the second stage it was cultivated with glycerol, xylose, or fructose with the addition of monosodium L-glutamate as nitrogen source. In the two stage cultivation using pectin or glycerol as the carbon source the enzyme activity reached 400 units/ml, almost 3 times as much as that of one stage cultivation in a 10 liter fermentor. Using two stage cultivation in the 200 liter fermentor improved enzyme productivity over that in the 10 liter fermentor, with 500 units/ml of activity. Compared with the cultivation in Erlenmeyer flasks, fermentor cultivation improved enzyme productivity. The optimum cultivating conditions were agitation of 480 rpm with aeration of 0.5 vvm at 28 °C. (4 figs, 4 tabs, 14 refs)

  19. Assessing efficiency and effectiveness of Malaysian Islamic banks: A two stage DEA analysis

    Science.gov (United States)

    Kamarudin, Norbaizura; Ismail, Wan Rosmanira; Mohd, Muhammad Azri

    2014-06-01

    Islamic banks in Malaysia are indispensable players in the financial industry, given the growing need for syariah-compliant systems. Most recent studies of the banking industry have been concerned only with operational efficiency, and rarely with operational effectiveness. Since the production process of the banking industry can be described as a two-stage process, two-stage Data Envelopment Analysis (DEA) can be applied to measure bank performance. This study was designed to measure the overall performance, in terms of efficiency and effectiveness, of Islamic banks in Malaysia using a two-stage DEA approach. This paper presents the analysis of a DEA model which splits efficiency and effectiveness in order to evaluate the performance of ten selected Islamic banks in Malaysia for the financial year ended 2011. The analysis shows that the average efficiency score exceeds the average effectiveness score; Malaysian Islamic banks were thus more efficient than effective. Furthermore, none of the banks exhibited best practice in both stages: a bank with better efficiency does not always have better effectiveness at the same time.
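
    The efficiency scores in each stage come from standard DEA linear programs. The sketch below solves the input-oriented CCR model for each decision-making unit with scipy; the inputs and outputs are made-up numbers, not the Malaysian bank data.

```python
import numpy as np
from scipy.optimize import linprog

X = np.array([[20.0, 30, 40, 20],       # inputs,  shape (m, n DMUs)
              [ 5.0, 10,  8,  6]])
Y = np.array([[100.0, 120, 150, 90]])   # outputs, shape (s, n)

def ccr_efficiency(o: int) -> float:
    """Input-oriented CCR score of DMU o: min theta s.t. X@lam <= theta*x_o,
    Y@lam >= y_o, lam >= 0. Variables are [theta, lam_1..lam_n]."""
    m, n = X.shape
    s = Y.shape[0]
    c = np.r_[1.0, np.zeros(n)]
    A_in = np.c_[-X[:, o], X]            # X@lam - theta*x_o <= 0
    A_out = np.c_[np.zeros(s), -Y]       # -Y@lam <= -y_o
    res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.r_[np.zeros(m), -Y[:, o]],
                  bounds=[(0, None)] * (n + 1))
    return res.x[0]

print([round(ccr_efficiency(o), 3) for o in range(X.shape[1])])
```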

  20. A two-stage extraction procedure for insensitive munition (IM) explosive compounds in soils.

    Science.gov (United States)

    Felt, Deborah; Gurtowski, Luke; Nestler, Catherine C; Johnson, Jared; Larson, Steven

    2016-12-01

    The Department of Defense (DoD) is developing a new category of insensitive munitions (IMs) that are more resistant to detonation or propagation from external stimuli than traditional munition formulations. The new explosive constituent compounds are 2,4-dinitroanisole (DNAN), nitroguanidine (NQ), and nitrotriazolone (NTO). The production and use of IM formulations may result in interaction of IM component compounds with soil. The chemical properties of these IM compounds present unique challenges for extraction from environmental matrices such as soil. A two-stage extraction procedure was developed and tested using several soil types amended with known concentrations of IM compounds. This procedure incorporates both an acidified phase and an organic phase to account for the chemical properties of the IM compounds. The method detection limits (MDLs) for all IM compounds in all soil types were below the regulatory risk-based Regional Screening Level (RSL) criteria for soil proposed by the U.S. Army Public Health Center. At defined environmentally relevant concentrations, the average recovery of each IM compound in each soil type was consistent and greater than 85%. The two-stage extraction method decreased the influence of soil composition on IM compound recovery. UV analysis of NTO established an isosbestic point, based on varied pH, at a detection wavelength of 341 nm. The two-stage soil extraction method is equally effective for traditional munition compounds, a potentially important point when examining soils exposed to both traditional and insensitive munitions. Copyright © 2016 Elsevier Ltd. All rights reserved.

  1. Two-stage solar concentrators based on parabolic troughs: asymmetric versus symmetric designs.

    Science.gov (United States)

    Schmitz, Max; Cooper, Thomas; Ambrosetti, Gianluca; Steinfeld, Aldo

    2015-11-20

    While nonimaging concentrators can approach the thermodynamic limit of concentration, they generally suffer from poor compactness when designed for small acceptance angles, e.g., to capture direct solar irradiation. Symmetric two-stage systems utilizing an image-forming primary parabolic concentrator in tandem with a nonimaging secondary concentrator partially overcome this compactness problem, but their achievable concentration ratio is ultimately limited by the central obstruction caused by the secondary. Significant improvements can be realized by two-stage systems having asymmetric cross-sections, particularly for 2D line-focus trough designs. We therefore present a detailed analysis of two-stage line-focus asymmetric concentrators for flat receiver geometries and compare them to their symmetric counterparts. Exemplary designs are examined in terms of the key optical performance metrics, namely, geometric concentration ratio, acceptance angle, concentration-acceptance product, aspect ratio, active area fraction, and average number of reflections. Notably, we show that asymmetric designs can achieve significantly higher overall concentrations and are always more compact than symmetric systems designed for the same concentration ratio. Using this analysis as a basis, we develop novel asymmetric designs, including two-wing and nested configurations, which surpass the optical performance of two-mirror aplanats and are comparable with the best reported 2D simultaneous multiple surface designs for both hollow and dielectric-filled secondaries.

  2. Final Report on Two-Stage Fast Spectrum Fuel Cycle Options

    International Nuclear Information System (INIS)

    Yang, Won Sik; Lin, C. S.; Hader, J. S.; Park, T. K.; Deng, P.; Yang, G.; Jung, Y. S.; Kim, T. K.; Stauff, N. E.

    2016-01-01

    This report presents the performance characteristics of two "two-stage" fast spectrum fuel cycle options proposed to enhance uranium resource utilization and to reduce nuclear waste generation. One is a two-stage fast spectrum fuel cycle option of continuous recycle of plutonium (Pu) in a fast reactor (FR) and subsequent burning of minor actinides (MAs) in an accelerator-driven system (ADS). The first stage is a sodium-cooled FR fuel cycle starting with low-enriched uranium (LEU) fuel; at the equilibrium cycle, the FR is operated using the recovered Pu and natural uranium without supporting LEU. Pu and uranium (U) are co-extracted from the discharged fuel and recycled in the first stage, and the recovered MAs are sent to the second stage. The second stage is a sodium-cooled ADS in which MAs are burned in an inert matrix fuel form. The discharged fuel of the ADS is reprocessed, and all the recovered heavy metals (HMs) are recycled into the ADS. The other option is a two-stage FR/ADS fuel cycle with MA targets loaded in the FR. The recovered MAs are not directly sent to the ADS, but are partially incinerated in the FR in order to reduce the amount of MAs to be sent to the ADS. This is a heterogeneous recycling option of transuranic (TRU) elements.

  3. Application of two-stage biofilter system for the removal of odorous compounds.

    Science.gov (United States)

    Jeong, Gwi-Taek; Park, Don-Hee; Lee, Gwang-Yeon; Cha, Jin-Myeong

    2006-01-01

    Biofiltration is a biological process which is considered to be one of the more successful examples of biotechnological applications to environmental engineering, and is most commonly used in the removal of odoriferous compounds. In this study, we attempted to assess the efficiency with which both single and complex odoriferous compounds could be removed, using one- or two-stage biofiltration systems. The tested single odor gases, limonene, alpha-pinene, and iso-butyl alcohol, were evaluated separately in the biofilters. Both limonene and alpha-pinene were removed with yields of 90% or more, corresponding to elimination capacities (ECs) of 364 g/m³/h and 321 g/m³/h, respectively, at an input concentration of 50 ppm and a retention time of 30 s. The iso-butyl alcohol was maintained at an effective removal yield of more than 90% (EC 375 g/m³/h) at an input concentration of 100 ppm. The complex gas removal scheme was applied with a 200 ppm inlet concentration of ethanol, 70 ppm of acetaldehyde, and 70 ppm of toluene at a residence time of 45 s in a one- or two-stage biofiltration system. The removal yield of toluene was lower than that of the other gases in the one-stage biofilter; in contrast, the complex gases were sufficiently eliminated by the two-stage biofiltration system.
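
    For reference, the elimination capacity cited above is defined as the pollutant mass removed per unit bed volume per unit time, EC = Q(C_in − C_out)/V. The values below are illustrative, not recomputed from the study:

```python
# Elimination capacity of a biofilter: EC = Q * (C_in - C_out) / V.
def elimination_capacity(q_m3_h, c_in_g_m3, c_out_g_m3, bed_volume_m3):
    return q_m3_h * (c_in_g_m3 - c_out_g_m3) / bed_volume_m3   # g/m3/h

print(elimination_capacity(q_m3_h=12.0, c_in_g_m3=1.0, c_out_g_m3=0.1,
                           bed_volume_m3=0.03))  # 360 g/m3/h
```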

  4. Effects of earthworm casts and zeolite on the two-stage composting of green waste

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Lu, E-mail: zhanglu1211@gmail.com; Sun, Xiangyang, E-mail: xysunbjfu@gmail.com

    2015-05-15

    Highlights: • Earthworm casts (EWCs) and clinoptilolite (CL) were used in green waste composting. • Addition of EWCs + CL improved physico-chemical and microbiological properties. • Addition of EWCs + CL extended the duration of thermophilic periods during composting. • Addition of EWCs + CL enhanced humification, cellulose degradation, and nutrients. • Combined addition of 0.30% EWCs + 25% CL reduced composting time to 21 days. - Abstract: Because it helps protect the environment and encourages economic development, composting has become a viable method for organic waste disposal. The objective of this study was to investigate the effects of earthworm casts (EWCs) (at 0.0%, 0.30%, and 0.60%) and zeolite (clinoptilolite, CL) (at 0%, 15%, and 25%) on the two-stage composting of green waste. The combination of EWCs and CL improved the conditions of the composting process and the quality of the compost products in terms of the thermophilic phase, humification, nitrification, microbial numbers and enzyme activities, the degradation of cellulose and hemicellulose, and physico-chemical characteristics and nutrient contents of final composts. The compost matured in only 21 days with the optimized two-stage composting method rather than in the 90–270 days required for traditional composting. The optimal two-stage composting and the best quality compost were obtained with 0.30% EWCs and 25% CL.

  6. Is the continuous two-stage anaerobic digestion process well suited for all substrates?

    Science.gov (United States)

    Lindner, Jonas; Zielonka, Simon; Oechsner, Hans; Lemmer, Andreas

    2016-01-01

    Two-stage anaerobic digestion systems are often considered to be advantageous compared to one-stage processes. Although process conditions and fermenter setups are well examined, overall substrate degradation in these systems is controversially discussed. The aim of this study was therefore to investigate how substrates with different fibre and sugar contents (hay/straw, maize silage, sugar beet) influence the degradation rate and methane production. Intermediates and gas compositions, as well as methane yields and VS-degradation degrees, were recorded. The sugar beet substrate led to a pH drop to 5.67 in the acidification reactor, which resulted in a six-times higher hydrogen production compared to the hay/straw substrate (pH drop to 5.34). The yields achieved in the two-stage system showed a difference of 70.6% for the hay/straw substrate, but only 7.8% for the sugar beet substrate. Two-stage systems therefore seem recommendable only for digesting sugar-rich substrates. Copyright © 2015 Elsevier Ltd. All rights reserved.

  7. Study on the Control Algorithm of Two-Stage DC-DC Converter for Electric Vehicles

    Directory of Open Access Journals (Sweden)

    Changhao Piao

    2014-01-01

    Fast response, high efficiency, and good reliability are very important characteristics of electric vehicle (EV) dc-dc converters. The two-stage dc-dc converter is one topology that can offer these characteristics to EVs. Nonlinear control is currently an active area of research in the control of dc-dc converters; however, very few papers study two-stage converters for EVs. In this paper, a fixed switching frequency sliding mode (FSFSM) controller and a double-integral sliding mode (DISM) controller for the two-stage dc-dc converter are proposed, with a conventional linear (lag) controller chosen as the comparison. The performance of the proposed FSFSM controller is compared with that obtained by the lag controller. The satisfactory simulation and experimental results show that the FSFSM controller is capable of offering good large-signal operation with fast dynamic response. Finally, further simulation results demonstrate that the DISM controller is a promising method for eliminating the steady-state error of the converter.

  8. Two-stage commercial evaluation of engineering systems production projects for high-rise buildings

    Science.gov (United States)

    Bril, Aleksander; Kalinina, Olga; Levina, Anastasia

    2018-03-01

    The paper is devoted to the current and much-debated problem of how to choose effective innovative enterprises for venture financing. A two-stage system of commercial innovation evaluation based on the UNIDO methodology is proposed. Engineering systems account for 25 to 40% of the cost of high-rise residential buildings, and this proportion increases with the use of new construction technologies. Analysis of the construction market in Russia showed that the production of internal engineering system elements based on innovative technologies has a growth trend. The production of simple elements is organized in small enterprises on the basis of new technologies, and the most attractive route for development is venture financing of small innovative businesses. To improve the efficiency of these operations, the paper proposes a methodology for a two-stage evaluation of small business development projects. A two-stage system of commercial evaluation of innovative projects creates an information base for informed and coordinated decision-making on venture financing of enterprises that produce engineering system elements for the construction business.

  10. Two-Stage Liver Transplantation with Temporary Porto-Middle Hepatic Vein Shunt

    Directory of Open Access Journals (Sweden)

    Giovanni Varotti

    2010-01-01

    Two-stage liver transplantation (LT) has been reported for cases of fulminant liver failure that can lead to toxic hepatic syndrome, or of massive hemorrhage resulting in uncontrollable bleeding. Technically, the first stage of the procedure consists of a total hepatectomy with preservation of the recipient's inferior vena cava (IVC), followed by the creation of a temporary end-to-side porto-caval shunt (TPCS). The second stage consists of removing the TPCS and implanting a liver graft when one becomes available. We report a case of a two-stage total hepatectomy and LT in which a temporary end-to-end anastomosis between the portal vein and the middle hepatic vein (TPMHV) was performed as an alternative to the classic end-to-side TPCS. The creation of a TPMHV proved technically feasible and showed some advantages compared to the standard TPCS. In cases in which a two-stage LT with side-to-side caval reconstruction is utilized, the TPMHV can be considered a safe and effective alternative to the standard TPCS.

  11. Empirical study of classification process for two-stage turbo air classifier in series

    Science.gov (United States)

    Yu, Yuan; Liu, Jiaxiang; Li, Gang

    2013-05-01

    The suitable process parameters for a two-stage turbo air classifier are important for obtaining ultrafine powder with a narrow particle-size distribution; however, little has been published internationally on the classification process for two-stage turbo air classifiers in series. The influence of the process parameters of a two-stage turbo air classifier in series on classification performance is empirically studied, using aluminum oxide powders as the experimental material. The experimental results show the following: 1) When the rotor cage rotary speed of the first-stage classifier is increased from 2 300 r/min to 2 500 r/min with a constant rotor cage rotary speed of the second-stage classifier, classification precision is increased from 0.64 to 0.67. However, in this case, the final ultrafine powder yield is decreased from 79% to 74%, which means the classification precision and the final ultrafine powder yield can be regulated through adjusting the rotor cage rotary speed of the first-stage classifier. 2) When the rotor cage rotary speed of the second-stage classifier is increased from 2 500 r/min to 3 100 r/min with a constant rotor cage rotary speed of the first-stage classifier, the cut size is decreased from 13.16 μm to 8.76 μm, which means the cut size of the ultrafine powder can be regulated through adjusting the rotor cage rotary speed of the second-stage classifier. 3) When the feeding speed is increased from 35 kg/h to 50 kg/h, the "fish-hook" effect is strengthened, which makes the ultrafine powder yield decrease. 4) To weaken the "fish-hook" effect, the equalization of the two-stage wind speeds or the combination of a high first-stage wind speed with a low second-stage wind speed should be selected. This empirical study provides a criterion for process parameter configuration of a two-stage or multi-stage classifier in series, offering a theoretical basis for practical production.

  12. Development and testing of a two stage granular filter to improve collection efficiency

    Energy Technology Data Exchange (ETDEWEB)

    Rangan, R.S.; Prakash, S.G.; Chakravarti, S.; Rao, S.R.

    1999-07-01

    A circulating bed granular filter (CBGF) with a single filtration stage was tested with a PFB combustor in the Coal Research Facility of BHEL R&D in Hyderabad during the years 1993-95. Filter outlet dust loading varied between 20 and 50 mg/Nm³ for an inlet dust loading of 5-8 g/Nm³. The results were reported in Fluidized Bed Combustion-Volume 2, ASME 1995. Though the outlet dust consists predominantly of fine particulates below 2 microns, it is still beyond present-day gas turbine specifications for particulate concentration. In order to enhance the collection efficiency, a two-stage granular filtration concept was evolved, wherein the filter depth is divided between two stages accommodated in two separate vertically mounted units. The design also incorporates BHEL's scale-up concept of multiple parallel stages. The two-stage concept minimizes reentrainment of captured dust by providing clean granules in the upper stage, from where gases finally exit the filter. The design ensures that dusty gases come in contact with granules having a higher dust concentration at the bottom of the two-stage unit, where most of the cleaning is completed. A second filtration stage of cleaned granules is provided in the top unit (where the granules are returned to the system after dedusting), minimizing reentrainment. Tests were conducted to determine the optimum granule-to-dust ratio (G/D ratio), which determines the granule circulation rate required for the desired collection efficiency. The data bring out the importance of pre-separation and the limitation on inlet dust loading for any continuous system of granular filtration. Collection efficiencies obtained were much higher (outlet dust being 3-9 mg/Nm³) than in the single-stage filter tested earlier for similar dust loading at the inlet. The results indicate that two-stage granular filtration has a high potential for HTHT application with fewer risks as compared to other systems under development.

  13. Cluster analysis in kinetic modelling of the brain: A noninvasive alternative to arterial sampling

    DEFF Research Database (Denmark)

    Liptrot, Matthew George; Adams, K.H.; Martiny, L.

    2004-01-01

    In emission tomography, quantification of brain tracer uptake, metabolism or binding requires knowledge of the cerebral input function. Traditionally, this is achieved with arterial blood sampling. We propose a noninvasive alternative via the use of a blood vessel time-activity curve (TAC) ... © 2003 Elsevier Inc. All rights reserved.

  14. Two-Stage Method Based on Local Polynomial Fitting for a Linear Heteroscedastic Regression Model and Its Application in Economics

    Directory of Open Access Journals (Sweden)

    Liyun Su

    2012-01-01

    We introduce an extension of local polynomial fitting to the linear heteroscedastic regression model. First, local polynomial fitting is applied to estimate the heteroscedastic function; then the coefficients of the regression model are obtained by the generalized least squares method. One noteworthy feature of our approach is that we avoid testing for heteroscedasticity by improving on the traditional two-stage method. Because local polynomial estimation is nonparametric, the heteroscedastic function need not be known, which improves estimation precision when it is unknown. Furthermore, we focus on the comparison of parameters and reach an optimal fit. We also verify the asymptotic normality of the parameters through numerical simulations. Finally, the approach is applied to an economic case, indicating that our method is effective in finite-sample situations.
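
    A minimal sketch of the two stages, with a degree-zero local (kernel) fit standing in for the paper's local polynomial estimator of the variance function:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 400
x = rng.uniform(0, 2, n)
sigma = 0.2 + 0.5 * x                      # true heteroscedastic scale
y = 1.0 + 2.0 * x + sigma * rng.normal(size=n)

# Stage 1: OLS residuals, then a local estimate of sigma^2(x) from r^2.
X = np.column_stack([np.ones(n), x])
beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]
r2 = (y - X @ beta_ols) ** 2
h = 0.2                                     # kernel bandwidth (illustrative)
K = np.exp(-0.5 * ((x[:, None] - x[None, :]) / h) ** 2)
var_hat = (K @ r2) / K.sum(axis=1)

# Stage 2: weighted (generalized) least squares with the estimated weights.
W = 1.0 / var_hat
XtWX = X.T @ (W[:, None] * X)
beta_gls = np.linalg.solve(XtWX, X.T @ (W * y))
print(beta_ols, beta_gls)                   # both near (1, 2); GLS more precise
```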

  15. THE ATACAMA COSMOLOGY TELESCOPE: DYNAMICAL MASSES AND SCALING RELATIONS FOR A SAMPLE OF MASSIVE SUNYAEV-ZEL'DOVICH EFFECT SELECTED GALAXY CLUSTERS ,

    International Nuclear Information System (INIS)

    Sifón, Cristóbal; Barrientos, L. Felipe; González, Jorge; Infante, Leopoldo; Dünner, Rolando; Menanteau, Felipe; Hughes, John P.; Baker, Andrew J.; Hasselfield, Matthew; Marriage, Tobias A.; Crichton, Devin; Gralla, Megan B.; Addison, Graeme E.; Dunkley, Joanna; Battaglia, Nick; Bond, J. Richard; Hajian, Amir; Das, Sudeep; Devlin, Mark J.; Hilton, Matt

    2013-01-01

    We present the first dynamical mass estimates and scaling relations for a sample of Sunyaev-Zel'dovich effect (SZE) selected galaxy clusters. The sample consists of 16 massive clusters detected with the Atacama Cosmology Telescope (ACT) over a 455 deg² area of the southern sky. Deep multi-object spectroscopic observations were taken to secure intermediate-resolution (R ~ 700-800) spectra and redshifts for ≈60 member galaxies on average per cluster. The dynamical masses M_200c of the clusters have been calculated using simulation-based scaling relations between velocity dispersion and mass. The sample has a median redshift z = 0.50 and a median mass M_200c ≈ 12×10^14 h_70^-1 M_sun, with a lower limit M_200c ≈ 6×10^14 h_70^-1 M_sun, consistent with the expectations for the ACT southern sky survey. These masses are compared to the ACT SZE properties of the sample, specifically, the matched-filter central SZE amplitude ỹ_0, the central Compton parameter y_0, and the integrated Compton signal Y_200c, which we use to derive SZE-mass scaling relations. All SZE estimators correlate with dynamical mass with low intrinsic scatter (≲ 20%), in agreement with numerical simulations. We explore the effects of various systematic effects on these scaling relations, including the correlation between observables and the influence of dynamically disturbed clusters. Using the three-dimensional information available, we divide the sample into relaxed and disturbed clusters and find that ~50% of the clusters are disturbed. There are hints that disturbed systems might bias the scaling relations, but given the current sample sizes, these differences are not significant; further studies including more clusters are required to assess the impact of these clusters on the scaling relations.

  16. The Atacama Cosmology Telescope: Physical Properties and Purity of a Galaxy Cluster Sample Selected Via the Sunyaev-Zel'Dovich Effect

    Science.gov (United States)

    Menanteau, Felipe; Gonzalez, Jorge; Juin, Jean-Baptiste; Marriage, Tobias; Reese, Erik D.; Acquaviva, Viviana; Aguirre, Paula; Appel, John Willam; Baker, Andrew J.; Barrientos, L. Felipe; hide

    2010-01-01

    We present optical and X-ray properties for the first confirmed galaxy cluster sample selected by the Sunyaev-Zel'dovich effect, from 148 GHz maps over 455 square degrees of sky made with the Atacama Cosmology Telescope. These maps, coupled with multi-band imaging on 4-meter-class optical telescopes, have yielded a sample of 23 galaxy clusters with redshifts between 0.118 and 1.066. Of these 23 clusters, 10 are newly discovered. The selection of this sample is approximately mass limited and essentially independent of redshift. We provide optical positions, images, redshifts and X-ray fluxes and luminosities for the full sample, and X-ray temperatures of an important subset. The mass limit of the full sample is around 8.0 × 10^14 M_sun, with a number distribution that peaks around a redshift of 0.4. For the 10 highest-significance SZE-selected cluster candidates, all of which are optically confirmed, the mass threshold is 1 × 10^15 M_sun and the redshift range is 0.167 to 1.066. Archival observations from Chandra, XMM-Newton, and ROSAT provide X-ray luminosities and temperatures that are broadly consistent with this mass threshold. Our optical follow-up procedure also allowed us to assess the purity of the ACT cluster sample: eighty (one hundred) percent of the 148 GHz candidates with signal-to-noise ratios greater than 5.1 (5.7) are confirmed as massive clusters. The reported sample represents one of the largest SZE-selected samples of massive clusters over all redshifts within a cosmologically significant survey volume, which will enable cosmological studies as well as future studies of the evolution, morphology, and stellar populations in the most massive clusters in the Universe.

  17. Time Clustered Sampling Can Inflate the Inferred Substitution Rate in Foot-And-Mouth Disease Virus Analyses.

    Science.gov (United States)

    Pedersen, Casper-Emil T; Frandsen, Peter; Wekesa, Sabenzia N; Heller, Rasmus; Sangula, Abraham K; Wadsworth, Jemma; Knowles, Nick J; Muwanika, Vincent B; Siegismund, Hans R

    2015-01-01

    With the emergence of analytical software for the inference of viral evolution, a number of studies have focused on estimating important parameters such as the substitution rate and the time to the most recent common ancestor (tMRCA) for rapidly evolving viruses. Coupled with an increasing abundance of sequence data sampled under widely different schemes, an effort to keep results consistent and comparable is needed. Through a study of the foot-and-mouth disease (FMD) virus serotypes SAT 1 and SAT 2, this study emphasizes commonly disregarded problems in the inference of evolutionary rates from viral sequence data when sampling is unevenly distributed on a temporal scale. Our study shows that clustered temporal sampling in phylogenetic analyses of FMD viruses strongly biases the inference of substitution rates and tMRCA, because the rates inferred from such data sets reflect something closer to the mutation rate than the substitution rate. Estimating evolutionary parameters from viral sequences should be performed with due consideration of the differences between short-term and longer-term evolutionary processes occurring within sets of temporally sampled viruses, and studies should carefully consider how samples are combined.

  18. Diagnosis Of Persistent Infection In Prosthetic Two-Stage Exchange: PCR analysis of Sonication fluid From Bone Cement Spacers.

    Science.gov (United States)

    Mariaux, Sandrine; Tafin, Ulrika Furustrand; Borens, Olivier

    2017-01-01

    Introduction: When treating periprosthetic joint infections with a two-stage procedure, antibiotic-impregnated spacers are used in the interval between removal of the prosthesis and reimplantation. In our experience, cultures of sonicated spacers are most often negative. The objective of our study was to investigate whether PCR analysis would improve the detection of bacteria in the spacer sonication fluid. Methods: A prospective monocentric study was performed from September 2014 to January 2016. Inclusion criteria were a two-stage procedure for prosthetic infection and the patient's agreement to participate in the study. Besides tissue samples and sonication, broad-range bacterial PCRs, specific S. aureus PCRs, and Unyvero multiplex PCRs were performed on the sonicated spacer fluid. Results: 30 patients were identified (15 hip, 14 knee and 1 ankle replacements). At reimplantation, cultures of tissue samples and spacer sonication fluid were all negative. Broad-range PCRs were all negative. Specific S. aureus PCRs were positive in 5 cases. Two persistent infections and four recurrences were observed; in three of the recurrences, the bacteria differed from those of the initial infection. Conclusion: The three different types of PCRs did not detect any bacteria in spacer sonication fluid that was culture-negative. In our study, PCR did not improve bacterial detection and did not help predict whether the patient would present a persistent or recurrent infection. Two-stage prosthetic exchange with a short interval and an antibiotic-impregnated spacer is an efficient treatment for eradicating infection, as both culture- and molecular-based methods were unable to detect bacteria in spacer sonication fluid at reimplantation.

  19. Maximum likelihood estimation of signal detection model parameters for the assessment of two-stage diagnostic strategies.

    Science.gov (United States)

    Lirio, R B; Dondériz, I C; Pérez Abalo, M C

    1992-08-01

    The methodology of Receiver Operating Characteristic curves based on the signal detection model is extended to evaluate the accuracy of two-stage diagnostic strategies. A computer program is developed for the maximum likelihood estimation of parameters that characterize the sensitivity and specificity of two-stage classifiers according to this extended methodology. Its use is briefly illustrated with data collected in a two-stage screening for auditory defects.
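
    For the equal-variance Gaussian signal detection model underlying such analyses, the maximum-likelihood estimates of sensitivity and criterion for a single decision stage have closed forms; a two-stage strategy chains two such decisions. A sketch of the one-stage core, with made-up counts:

```python
from scipy.stats import norm

# Equal-variance Gaussian SDT: with one criterion, the ML estimates of
# sensitivity (d') and criterion (c) follow from the hit and false-alarm rates.
def sdt_parameters(hits, misses, false_alarms, correct_rejections):
    h = hits / (hits + misses)                          # hit rate
    f = false_alarms / (false_alarms + correct_rejections)
    d_prime = norm.ppf(h) - norm.ppf(f)
    criterion = -0.5 * (norm.ppf(h) + norm.ppf(f))
    return d_prime, criterion

print(sdt_parameters(80, 20, 30, 70))   # d' ~ 1.37, c ~ -0.16
```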

  20. The experimental study of a two-stage photovoltaic thermal system based on solar trough concentration

    International Nuclear Information System (INIS)

    Tan, Lijun; Ji, Xu; Li, Ming; Leng, Congbin; Luo, Xi; Li, Haili

    2014-01-01

    Highlights: • A two-stage photovoltaic thermal system based on solar trough concentration. • Maximum cell efficiency of 5.21% at a mirror opening width of 57 cm. • With a single cycle, the maximum temperature rise in the heating stage is 12.06 °C. • With 30 min of multiple cycles, the working medium reaches 62.8 °C, an increase of 28.7 °C. - Abstract: A two-stage photovoltaic thermal system based on solar trough concentration is proposed, in which a metal-cavity heating stage is added after the PV/T stage so that higher-temperature thermal energy is output alongside electric energy. With the 1.8 m² mirror PV/T system, the characteristic parameters of the space solar cell under non-concentrating and concentrating solar radiation were tested experimentally, as were the solar cell output characteristics at different opening widths of the concentrating mirror of the PV/T stage under concentration. When the mirror opening width was 57 cm, the solar cell efficiency reached a maximum of 5.21%. The experimental platform of the two-stage photovoltaic thermal system was established with a 1.8 m² mirror PV/T stage and either a 15 m² or a 30 m² mirror heating stage. The results showed that, with a single cycle, the long metal-cavity heating stage yields a lower thermal efficiency but a higher temperature rise of the working medium, up to 12.06 °C in a single cycle. With 30 min of closed multiple cycles, the temperature of the working medium in the water tank reached 62.8 °C, an increase of 28.7 °C, and thermal energy at a higher temperature could be output.

  1. Comparisons of single-stage and two-stage approaches to genomic selection.

    Science.gov (United States)

    Schulz-Streeck, Torben; Ogutu, Joseph O; Piepho, Hans-Peter

    2013-01-01

    Genomic selection (GS) is a method for predicting breeding values of plants or animals using many molecular markers that is commonly implemented in two stages. In plant breeding the first stage usually involves computation of adjusted means for genotypes which are then used to predict genomic breeding values in the second stage. We compared two classical stage-wise approaches, which either ignore or approximate correlations among the means by a diagonal matrix, and a new method, to a single-stage analysis for GS using ridge regression best linear unbiased prediction (RR-BLUP). The new stage-wise method rotates (orthogonalizes) the adjusted means from the first stage before submitting them to the second stage. This makes the errors approximately independently and identically normally distributed, which is a prerequisite for many procedures that are potentially useful for GS such as machine learning methods (e.g. boosting) and regularized regression methods (e.g. lasso). This is illustrated in this paper using componentwise boosting. The componentwise boosting method minimizes squared error loss using least squares and iteratively and automatically selects markers that are most predictive of genomic breeding values. Results are compared with those of RR-BLUP using fivefold cross-validation. The new stage-wise approach with rotated means was slightly more similar to the single-stage analysis than the classical two-stage approaches based on non-rotated means for two unbalanced datasets. This suggests that rotation is a worthwhile pre-processing step in GS for the two-stage approaches for unbalanced datasets. Moreover, the predictive accuracy of stage-wise RR-BLUP was higher (5.0-6.1%) than that of componentwise boosting.
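
    The RR-BLUP estimator benchmarked above amounts to ridge regression of phenotypes on marker codes with the shrinkage parameter tied to the variance ratio. A minimal sketch on simulated data (not the paper's stage-wise pipeline or datasets):

```python
import numpy as np

rng = np.random.default_rng(3)
n, p = 200, 1000                           # genotypes, markers
Z = rng.integers(0, 3, size=(n, p)).astype(float) - 1.0   # {-1, 0, 1} coding
u_true = rng.normal(0, 0.05, size=p)       # small marker effects
y = Z @ u_true + rng.normal(0, 1.0, size=n)

# RR-BLUP: solve (Z'Z + lambda*I) u = Z'y with lambda = sigma_e^2 / sigma_u^2
# (variance components assumed known here; normally they are estimated).
lam = 1.0 / 0.05**2
u_hat = np.linalg.solve(Z.T @ Z + lam * np.eye(p), Z.T @ y)
gebv = Z @ u_hat                           # genomic estimated breeding values
print(np.corrcoef(gebv, Z @ u_true)[0, 1]) # predictive correlation
```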

  2. Clinical evaluation of two-stage mandibular wisdom tooth extraction method to avoid mental nerve paresthesia

    International Nuclear Information System (INIS)

    Nozoe, Etsuro; Nakamura, Yasunori; Okawachi, Takako; Ishihata, Kiyohide; Shinnakasu, Mana; Nakamura, Norifumi

    2011-01-01

    Clinical courses following two-stage mandibular wisdom tooth extraction (TMWTE), carried out to prevent postoperative mental nerve paresthesia (MNP), were analyzed. When panoramic X-ray showed overlapping of the wisdom tooth root on the superior half or more of the mandibular canal, interruption of the white line of the superior wall of the canal, or diversion of the canal, CT examination was performed. In cases where contact between the tooth root and canal was demonstrated on CT, TMWTE was selected after gaining the patient's consent. TMWTE consisted of removing more than half of the tooth crown, followed by extraction of the tooth root in a second step after 2-3 months. The clinical features of the extracted wisdom teeth and the postoperative courses, including tooth movement and occurrence of MNP, during TMWTE were evaluated. TMWTE was carried out for 40 teeth among 811 wisdom teeth (4.9%) extracted from 2007 to 2009. Among them, the complete procedure was accomplished in 39 teeth, and crown removal was performed insufficiently at the first-stage operation in one tooth. Tooth movement was detected in 37 of 40 cases (92.5%). No postoperative MNP was observed in cases in which the complete two-stage procedure was carried out, but the one case with insufficient crown removal was complicated by postoperative MNP. Seven mild complications (dehiscence, cold sensitivity, etc.) were noted after the first-stage operation. We therefore conclude that TMWTE for high-risk cases, assessed by X-ray findings, is useful to avoid MNP after mandibular wisdom tooth extraction. (author)

  3. Recent developments of a two-stage light gas gun for pellet injection

    International Nuclear Information System (INIS)

    Reggiori, A.

    1984-01-01

    A report is given on a two-stage pneumatic gun, operated with ambient air as the first-stage driver, which has been built and tested. Cylindrical polyethylene pellets of 1 mm diameter and 1 mm length have been launched at velocities up to 1800 m/s, with divergence angles of the pellet trajectory less than 1°. It is possible to optimize the pressure pulse for pellets of different masses simply by changing the mass of the piston and/or the initial pressures in the second stage. (author)

  4. Grids heat loading of an ion source in two-stage acceleration system

    International Nuclear Information System (INIS)

    Okumura, Yoshikazu; Ohara, Yoshihiro; Ohga, Tokumichi

    1978-05-01

    Heat loading of the extraction grids, which is one of the critical problems limiting the beam pulse duration at high power levels, has been investigated experimentally with an ion source in a two-stage acceleration system of four multi-aperture grids. The loading of each grid depends largely on the extraction current and grid gap pressures; it decreases with improvement of the beam optics and with decrease of the pressures. In optimum operating modes, its level is typically less than ~2% of the total beam power, or ~200 W/cm², at beam energies of 50-70 kV. (auth.)

  5. Two-Stage Maximum Likelihood Estimation (TSMLE) for MT-CDMA Signals in the Indoor Environment

    Directory of Open Access Journals (Sweden)

    Sesay Abu B

    2004-01-01

    This paper proposes a two-stage maximum likelihood estimation (TSMLE) technique suited to multitone code division multiple access (MT-CDMA) systems. An analytical framework is presented for determining the average bit error rate (BER) of the system in the indoor environment, over Rayleigh and Ricean fading channels. The analytical model is derived for the quadrature phase shift keying (QPSK) modulation technique by taking into account the number of tones, signal bandwidth (BW), bit rate, and transmission power. Numerical results are presented to validate the analysis and to justify the approximations made therein. Moreover, these results are shown to agree completely with those obtained by simulation.
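
    As background for this kind of BER-versus-SNR analysis, coherent Gray-coded QPSK behaves per bit like BPSK, whose average BER over a flat Rayleigh channel has the classical closed form 0.5(1 − √(γ̄/(1+γ̄))). This is the single-user baseline, not the MT-CDMA expression derived in the paper:

```python
import numpy as np

# Average BER of Gray-coded QPSK over flat Rayleigh fading,
# with g the average SNR per bit (linear scale).
def qpsk_rayleigh_ber(snr_db):
    g = 10 ** (snr_db / 10.0)
    return 0.5 * (1.0 - np.sqrt(g / (1.0 + g)))

for s in (0, 10, 20, 30):
    print(s, qpsk_rayleigh_ber(s))
```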

  6. The global stability of a delayed predator-prey system with two stage-structure

    International Nuclear Information System (INIS)

    Wang Fengyan; Pang Guoping

    2009-01-01

    Based on the classical delayed stage-structured model and the Lotka-Volterra predator-prey model, we introduce and study a delayed predator-prey system in which both prey and predator have two stages, an immature stage and a mature stage. The time delays are the maturation times of the prey and predator species, i.e., the lengths of time between an immature individual's birth and its maturity. Results on the global asymptotic stability of nonnegative equilibria of the delay system are given, which generalize previous results and suggest that good continuity exists between the predator-prey system and its corresponding stage-structured system.
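
    For orientation, a classical delayed stage-structure skeleton of the Aiello-Freedman type, which models of this kind extend to both species (τ is the maturation delay and e^{−γτ} the through-stage survival; this is an illustration, not the paper's exact system):

```latex
\[
\begin{aligned}
\dot{x}_i(t) &= \alpha\,x_m(t) - \gamma\,x_i(t) - \alpha e^{-\gamma\tau} x_m(t-\tau),\\
\dot{x}_m(t) &= \alpha e^{-\gamma\tau}\,x_m(t-\tau) - \beta\,x_m^{2}(t).
\end{aligned}
\]
```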

  7. Forecasting long memory series subject to structural change: A two-stage approach

    DEFF Research Database (Denmark)

    Papailias, Fotis; Dias, Gustavo Fruet

    2015-01-01

    A two-stage forecasting approach for long memory time series is introduced. In the first step, we estimate the fractional exponent and, by applying the fractional differencing operator, obtain the underlying weakly dependent series. In the second step, we produce multi-step-ahead forecasts for the weakly dependent series and obtain their long memory counterparts by applying the fractional cumulation operator. The methodology applies to both stationary and nonstationary cases. Simulations and an application to seven time series provide evidence that the new methodology is more robust to structural change.
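
    The two steps translate almost directly into code. The sketch below takes the memory parameter d as given (in the paper it would be estimated), uses an AR(1) as the weakly dependent model for brevity, and inverts the fractional filter recursively to cumulate the forecasts:

```python
import numpy as np

def fracdiff_weights(d, n):
    w = np.empty(n); w[0] = 1.0
    for k in range(1, n):
        w[k] = w[k - 1] * (k - 1 - d) / k      # expansion of (1 - L)^d
    return w

def fracdiff(x, d):
    w = fracdiff_weights(d, len(x))
    return np.array([w[:t + 1][::-1] @ x[:t + 1] for t in range(len(x))])

rng = np.random.default_rng(4)
x = np.cumsum(rng.normal(size=500)) * 0.05 + rng.normal(size=500)  # toy series
d = 0.4                                         # assumed memory parameter
u = fracdiff(x, d)                              # step 1: weakly dependent series

phi = (u[1:] @ u[:-1]) / (u[:-1] @ u[:-1])      # step 2: AR(1) fit
u_fc = [phi * u[-1]]
for _ in range(9):
    u_fc.append(phi * u_fc[-1])                 # 10-step-ahead forecasts

x_ext = np.r_[x, np.zeros(10)]                  # cumulate back: since
w = fracdiff_weights(d, len(x_ext))             # u_t = sum_k w_k x_{t-k}, w0 = 1,
for h in range(10):                             # x_t = u_t - sum_{k>=1} w_k x_{t-k}
    t = len(x) + h
    x_ext[t] = u_fc[h] - w[1:t + 1][::-1] @ x_ext[:t]
print(x_ext[-10:])                              # long-memory forecasts
```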

  8. Two-Stage Load Shedding for Secondary Control in Hierarchical Operation of Islanded Microgrids

    DEFF Research Database (Denmark)

    Zhou, Quan; Li, Zhiyi; Wu, Qiuwei

    2018-01-01

    A two-stage load shedding scheme is presented to cope with the severe power deficit caused by microgrid islanding. Coordinated with the fast response of inverter-based distributed energy resources (DERs), the load shedding at each stage and the resulting power flow redistribution are estimated. The first stage of load shedding arrests the rapid frequency decline, with the measured frequency deviation employed to guide the load shedding level and process. Once a new steady state is reached, the second stage is activated, which performs load shedding according to the priorities of loads...

  9. The rearrangement process in a two-stage broadcast switching network

    DEFF Research Database (Denmark)

    Jacobsen, Søren B.

    1988-01-01

    The rearrangement process in the two-stage broadcast switching network presented by F.K. Hwang and G.W. Richards (ibid., vol. COM-33, no. 10, p. 1025-1035, Oct. 1985) is considered. By defining a certain function, it is possible to calculate an upper bound on the number of connections to be moved during a rearrangement. When each inlet channel appears twice, the maximum number of connections to be moved is found. For a special class of inlet assignment patterns in which each inlet channel appears three times, the maximum number of connections to be moved is also found. In the general...

  10. Risk-Averse Suppliers’ Optimal Pricing Strategies in a Two-Stage Supply Chain

    Directory of Open Access Journals (Sweden)

    Rui Shen

    2013-01-01

    Risk-averse suppliers' optimal pricing strategies in two-stage supply chains under a competitive environment are discussed. The suppliers considered here focus more on losses than on profits, and they care about their long-term relationships with their customers. We introduce for the suppliers a loss function that covers both current loss and future loss. The optimal wholesale price is solved for under risk-neutral and risk-averse settings, and under a combination of loss minimization and risk control, respectively. Some properties of, and relations among, these optimal wholesale prices are given as well. A numerical example is given to illustrate the performance of the proposed method.

  11. Modelling of Two-Stage Methane Digestion With Pretreatment of Biomass

    Science.gov (United States)

    Dychko, A.; Remez, N.; Opolinskyi, I.; Kraychuk, S.; Ostapchuk, N.; Yevtieieva, L.

    2018-04-01

    Anaerobic digestion systems should be used for the processing of organic waste. Managing the anaerobic recycling of organic waste requires reliable prediction of biogas production. The mathematical model of organic waste digestion developed here determines the rate of biogas output in a two-stage anaerobic digestion process, taking the first stage into account. Konto's model is verified against the studied anaerobic processing of organic waste. The dependences of biogas output, and of its rate, on time are established and may be used to predict the course of anaerobic processing of organic waste.
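
    As a baseline against which such digestion models are commonly verified, cumulative biogas production is often described by first-order kinetics, B(t) = B₀(1 − e^{−kt}). The parameter values below are illustrative, not fitted:

```python
import numpy as np

# First-order kinetic of cumulative biogas production and its rate
# (a generic baseline, not Konto's model itself).
B0, k = 450.0, 0.15        # ultimate yield (mL/g VS) and rate constant (1/day)
t = np.linspace(0, 30, 7)  # days

B = B0 * (1 - np.exp(-k * t))       # cumulative biogas yield
rate = B0 * k * np.exp(-k * t)      # instantaneous production rate
for ti, bi, ri in zip(t, B, rate):
    print(f"day {ti:4.1f}: {bi:6.1f} mL/g VS, rate {ri:5.1f} mL/(g VS*day)")
```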

  12. Simple Digital Control of a Two-Stage PFC Converter Using DSPIC30F Microprocessor

    DEFF Research Database (Denmark)

    Török, Lajos; Munk-Nielsen, Stig

    2010-01-01

    The use of dsPIC digital signal controllers (DSCs) in switch-mode power supply (SMPS) applications opens new perspectives for cheap and flexible digital control solutions. This paper presents the digital control of a two-stage power factor correction (PFC) converter. The PFC circuit is designed and built for a 70 W rated output power. Average current-mode control for the boost converter and current-programmed control for the forward converter are implemented on a dsPIC30F1010. A pulse width modulation (PWM) technique is used to drive the switching MOSFETs. Results show that digital solutions with ds...

  13. A comprehensive review on two-stage integrative schemes for the valorization of dark fermentative effluents.

    Science.gov (United States)

    Sivagurunathan, Periyasamy; Kuppam, Chandrasekhar; Mudhoo, Ackmez; Saratale, Ganesh D; Kadier, Abudukeremu; Zhen, Guangyin; Chatellard, Lucile; Trably, Eric; Kumar, Gopalakrishnan

    2017-12-21

    This review covers alternative routes towards the valorization of dark H2 fermentation effluents, which are mainly rich in volatile fatty acids such as acetate and butyrate. Enhancement and alternative routes such as photo fermentation, anaerobic digestion, microbial electrochemical systems, and algal systems, directed towards the generation of bioenergy and electricity and towards efficient organic matter utilization, are highlighted. In addition, various integration schemes and two-stage fermentation for possible scale-up are reviewed, and recent progress in enhanced waste stabilization and in converting the COD present in the organic source into value-added products is extensively discussed.

  14. An Investigation on the Formation of Carbon Nanotubes by Two-Stage Chemical Vapor Deposition

    Directory of Open Access Journals (Sweden)

    M. S. Shamsudin

    2012-01-01

    High-density carbon nanotubes (CNTs) have been synthesized from an agricultural hydrocarbon, camphor oil, using a one-hour synthesis time and a titanium dioxide sol-gel catalyst. The pyrolysis temperature is studied in the range of 700–900 °C at increments of 50 °C. The synthesis is carried out in a custom-made two-stage catalytic chemical vapor deposition apparatus. The CNT characteristics are investigated by field emission scanning electron microscopy and micro-Raman spectroscopy. The experimental results show that the structural properties of the CNTs depend strongly on the pyrolysis temperature.

  15. A Novel Two-Stage Dynamic Spectrum Sharing Scheme in Cognitive Radio Networks

    Institute of Scientific and Technical Information of China (English)

    Guodong Zhang; Wei Heng; Tian Liang; Chao Meng; Jinming Hu

    2016-01-01

In order to enhance the efficiency of spectrum utilization and reduce communication overhead in the spectrum sharing process, we propose a two-stage dynamic spectrum sharing scheme in which cooperative and noncooperative modes are analyzed in both stages. In particular, the existence and uniqueness of Nash Equilibrium (NE) strategies for the noncooperative mode are proved. In addition, a distributed iterative algorithm is proposed to obtain the optimal solutions of the scheme. Simulation studies are carried out to show the performance comparison between the two modes as well as the system revenue improvement of the proposed scheme compared with a conventional scheme without a virtual price control factor.

  16. The Design, Construction and Operation of a 75 kW Two-Stage Gasifier

    DEFF Research Database (Denmark)

    Henriksen, Ulrik Birk; Ahrenfeldt, Jesper; Jensen, Torben Kvist

    2003-01-01

    The Two-Stage Gasifier was operated for several weeks (465 hours) and of these 190 hours continuously. The gasifier is operated automatically unattended day and night, and only small adjustments of the feeding rate were necessary once or twice a day. The operation was successful, and the output...... as expected. The engine operated well on the produced gas, and no deposits were observed in the engine afterwards. The bag house filter was an excellent and well operating gas cleaning system. Small amounts of deposits consisting of salts and carbonates were observed in the hot gas heat exchangers. The top...

  17. Maximally efficient two-stage screening: Determining intellectual disability in Taiwanese military conscripts

    OpenAIRE

    Chien, Chia-Chang; Huang, Shu-Fen; Lung, For-Wey

    2009-01-01

Objective: The purpose of this study was to apply a two-stage screening method for the large-scale intelligence screening of militar...

  18. High-speed pellet injection with a two-stage pneumatic gun

    International Nuclear Information System (INIS)

    Reggiori, A.; Carlevaro, R.; Riva, G.; Daminelli, G.B.; Scaramuzzi, F.; Frattolillo, A.; Martinis, L.; Cardoni, P.; Mori, L.

    1988-01-01

The injection of pellets of frozen hydrogen isotopes into fusion plasmas is envisioned as a fueling technique for future fusion reactors. Research is underway to obtain high injection speeds for solid H2 and D2 pellets. The optimization of a two-stage light gas gun is being pursued by the Milano group; the search for a convenient method of creating pellets with good mechanical properties and a secure attachment to the cold surface on which they are formed is carried out in Frascati. Velocities >2000 m/s have been obtained, but reproducibility is not yet satisfactory

  19. Artificial immune system and sheep flock algorithms for two-stage fixed-charge transportation problem

    DEFF Research Database (Denmark)

    Kannan, Devika; Govindan, Kannan; Soleimani, Hamed

    2014-01-01

In this paper, we address a two-stage supply chain distribution planning problem with fixed charges. The focus of the paper is on developing efficient solution methodologies for the selected NP-hard problem. Owing to computational limitations, common exact and approximation solution...... approaches are unable to solve real-world instances of such NP-hard problems in a reasonable time. These approaches involve cumbersome computational steps in real-size cases. In order to solve the mixed integer linear programming model, we develop an artificial immune system and a sheep flock algorithm...
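
    For orientation, a generic single-layer fixed-charge transportation formulation (a sketch of the problem class, not the authors' exact two-stage model) can be written as

    \[
    \min \sum_{i}\sum_{j}\bigl(c_{ij}x_{ij} + f_{ij}y_{ij}\bigr)
    \quad\text{s.t.}\quad
    \sum_{j}x_{ij}\le s_i,\qquad
    \sum_{i}x_{ij}\ge d_j,\qquad
    0\le x_{ij}\le M\,y_{ij},\qquad
    y_{ij}\in\{0,1\},
    \]

    where c_ij is the unit shipping cost, f_ij the fixed charge for opening route (i,j), s_i and d_j the supplies and demands, and M a large constant. The two-stage variant chains two such layers (e.g., plants to warehouses, then warehouses to customers); the binary y_ij are what make the problem NP-hard.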

20. A cross-sectional, randomized cluster sample survey of household vulnerability to extreme heat among slum dwellers in Ahmedabad, India.

    Science.gov (United States)

    Tran, Kathy V; Azhar, Gulrez S; Nair, Rajesh; Knowlton, Kim; Jaiswal, Anjali; Sheffield, Perry; Mavalankar, Dileep; Hess, Jeremy

    2013-06-18

Extreme heat is a significant public health concern in India; extreme heat hazards are projected to increase in frequency and severity with climate change. Few of the factors driving population heat vulnerability are documented, though poverty is a presumed risk factor. To facilitate public health preparedness, an assessment of factors affecting vulnerability among slum dwellers was conducted in summer 2011 in Ahmedabad, Gujarat, India. Indicators of heat exposure, susceptibility to heat illness, and adaptive capacity, all of which feed into heat vulnerability, were assessed through a cross-sectional household survey using randomized multistage cluster sampling. Associations between heat-related morbidity and vulnerability factors were identified using multivariate logistic regression with generalized estimating equations to account for clustering effects. Age, preexisting medical conditions, work location, and access to health information and resources were associated with self-reported heat illness. Several of these variables were unique to this study. As sociodemographics, occupational heat exposure, and access to resources were shown to increase vulnerability, future interventions (e.g., health education) might target specific populations among Ahmedabad urban slum dwellers to reduce vulnerability to extreme heat. Surveillance and evaluations of future interventions may also be worthwhile.
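
    A minimal sketch of the cluster-adjusted logistic model the abstract describes, using statsmodels GEE with an exchangeable working correlation; variable names and data are hypothetical stand-ins for the survey items, not the authors' specification.

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    # Synthetic stand-in for the household survey: one row per respondent,
    # grouped by sampling cluster (multistage cluster design).
    rng = np.random.default_rng(1)
    n = 400
    df = pd.DataFrame({
        "cluster_id": rng.integers(0, 40, n),
        "age": rng.normal(35, 12, n).round(),
        "outdoor_work": rng.integers(0, 2, n),
        "heat_illness": rng.integers(0, 2, n),
    })

    # Logistic regression with generalized estimating equations (GEE)
    # to account for within-cluster correlation, as in the abstract.
    model = smf.gee("heat_illness ~ age + outdoor_work",
                    groups="cluster_id", data=df,
                    family=sm.families.Binomial(),
                    cov_struct=sm.cov_struct.Exchangeable())
    print(model.fit().summary())
    ```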

  1. Employing post-DEA cross-evaluation and cluster analysis in a sample of Greek NHS hospitals.

    Science.gov (United States)

    Flokou, Angeliki; Kontodimopoulos, Nick; Niakas, Dimitris

    2011-10-01

To increase Data Envelopment Analysis (DEA) discrimination among efficient Decision Making Units (DMUs) by complementing "self-evaluated" efficiencies with "peer-evaluated" cross-efficiencies and, based on these results, to classify the DMUs using cluster analysis. Healthcare, which is deprived of such studies, was chosen as the study area. The sample consisted of 27 small- to medium-sized (70-500 beds) NHS general hospitals distributed throughout Greece, in areas where they are the sole NHS representatives. DEA was performed on 2005 data collected from the Ministry of Health and the General Secretariat of the National Statistical Service. Three inputs (hospital beds, physicians, and other health professionals) and three outputs (case-mix adjusted hospitalized cases, surgeries, and outpatient visits) were included in input-oriented, constant-returns-to-scale (CRS) and variable-returns-to-scale (VRS) models. In a second stage (post-DEA), aggressive and benevolent cross-efficiency formulations and clustering were employed to validate (or not) the initial DEA scores. The "maverick index" was used to sort the peer-appraised hospitals. All analyses were performed using custom-made software. Ten benchmark hospitals were identified by DEA, but the aggressive and benevolent formulations showed that two and four of them, respectively, were at the lower end of the maverick index list. On the other hand, only one 100% efficient (self-appraised) hospital was at the higher end of the list using either formulation. Cluster analysis produced a hierarchical "tree" structure which dichotomized the hospitals in accordance with the cross-evaluation results and provided insight into the two-dimensional path to improving efficiency. This is, to our knowledge, the first study in the healthcare domain to employ both of these post-DEA techniques (cross-efficiency and clustering) at the hospital (i.e. micro) level. The potential benefit for decision-makers is the capability to examine high
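
    A minimal sketch of the first-stage, input-oriented CCR (constant-returns) self-evaluation in envelopment form, with hypothetical data for three hospitals; the cross-efficiency and clustering stages described in the abstract are not shown.

    ```python
    import numpy as np
    from scipy.optimize import linprog

    # Toy data: rows are DMUs (hospitals); columns are inputs / outputs.
    X = np.array([[100.0, 20, 50],      # beds, physicians, other staff
                  [150.0, 25, 60],
                  [120.0, 30, 40]])
    Y = np.array([[3000.0, 400, 9000],  # cases, surgeries, outpatient visits
                  [3500.0, 380, 8000],
                  [3300.0, 500, 9500]])

    def ccr_efficiency(k):
        """Input-oriented CRS (CCR) efficiency of DMU k, envelopment form:
        min theta  s.t.  sum_j lam_j x_ij <= theta * x_ik,
                         sum_j lam_j y_rj >= y_rk,  lam >= 0."""
        n = X.shape[0]
        c = np.r_[1.0, np.zeros(n)]                 # minimize theta
        A_in = np.c_[-X[k], X.T]                    # lam'x - theta*x_k <= 0
        A_out = np.c_[np.zeros(Y.shape[1]), -Y.T]   # -lam'y <= -y_k
        A_ub = np.vstack([A_in, A_out])
        b_ub = np.r_[np.zeros(X.shape[1]), -Y[k]]
        bounds = [(None, None)] + [(0, None)] * n
        return linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds).x[0]

    for k in range(len(X)):
        print(f"hospital {k}: CCR efficiency = {ccr_efficiency(k):.3f}")
    ```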

  2. Development of an innovative two-stage process, a combination of acidogenic hydrogenesis and methanogenesis

    Energy Technology Data Exchange (ETDEWEB)

    Han, S.K.; Shin, H.S. [Korea Advanced Inst. of Science and Technology, Daejeon (Korea, Republic of). Dept. of Civil and Enviromental Engineering

    2004-07-01

    Hydrogen produced from waste by means of fermentative bacteria is an attractive way to produce this fuel as an alternative to fossil fuels. It also helps treat the associated waste. The authors have undertaken to optimize acidogenic hydrogenesis and methanogenesis. Building on this, they then developed a two-stage process that produces both hydrogen and methane. Acidogenic hydrogenesis of food waste was investigated using a leaching bed reactor. The dilution rate was varied in order to maximize efficiency which was as high as 70.8 per cent. Further to this, an upflow anaerobic sludge blanket reactor converted the wastewater from acidogenic hydrogenesis into methane. Chemical oxygen demand (COD) removal rates exceeded 96 per cent up to a COD loading of 12.9 COD/l/d. After this, the authors devised a new two-stage process based on a combination of acidogenic hydrogenesis and methanogenesis. The authors report on results for this process using food waste as feedstock. 5 refs., 5 figs.

  3. Fueling of magnetically confined plasmas by single- and two-stage repeating pneumatic pellet injectors

    International Nuclear Information System (INIS)

    Gouge, M.J.; Combs, S.K.; Foust, C.R.; Milora, S.L.

    1990-01-01

    Advanced plasma fueling systems for magnetic fusion confinement experiments are under development at Oak Ridge National Laboratory (ORNL). The general approach is that of producing and accelerating frozen hydrogenic pellets to speeds in the kilometer-per-second range using single shot and repetitive pneumatic (light-gas gun) pellet injectors. The millimeter-to-centimeter size pellets enter the plasma and continuously ablate because of the plasma electron heat flux, depositing fuel atoms along the pellet trajectory. This fueling method allows direct fueling in the interior of the hot plasma and is more efficient than the alternative method of injecting room temperature fuel gas at the wall of the plasma vacuum chamber. Single-stage pneumatic injectors based on the light-gas gun concept have provided hydrogenic fuel pellets in the speed range of 1--2 km/s in single-shot injector designs. Repetition rates up to 5 Hz have been demonstrated in repetitive injector designs. Future fusion reactor-scale devices may need higher pellet velocities because of the larger plasma size and higher plasma temperatures. Repetitive two-stage pneumatic injectors are under development at ORNL to provide long-pulse plasma fueling in the 3--5 km/s speed range. Recently, a repeating, two-stage light-gas gun achieved repetitive operation at 1 Hz with speeds in the range of 2--3 km/s

  4. Plant specification of a generic human-error data through a two-stage Bayesian approach

    International Nuclear Information System (INIS)

    Heising, C.D.; Patterson, E.I.

    1984-01-01

Expert judgement concerning human performance in nuclear power plants is quantitatively coupled with actuarial data on such performance in order to derive plant-specific human-error rate probability distributions. The coupling procedure consists of a two-stage application of Bayes' theorem to information which is grouped by type. The first information type contains expert judgement concerning human performance at nuclear power plants in general. Data collected on human performance at a group of similar plants form the second information type. The third information type consists of data on human performance in a specific plant which has the same characteristics as the group members. The first and second information types are coupled in the first application of Bayes' theorem to derive a probability distribution for population performance. This distribution is then combined with the third information type in a second application of Bayes' theorem to determine a plant-specific human-error rate probability distribution. The two-stage Bayesian procedure thus provides a means to quantitatively couple sparse data with expert judgement in order to obtain a human performance probability distribution based upon available information. Example calculations for a group of like reactors are also given. (author)
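
    As a rough sketch of the two-stage update logic, the conjugate Beta-Binomial version below couples a prior from expert judgement first with group data and then with plant-specific data. The distributional family and all numbers are hypothetical, chosen for transparency rather than to match the paper's treatment.

    ```python
    from scipy import stats

    # Stage 0: expert judgement encoded as a Beta prior on the error rate.
    a, b = 2.0, 200.0                    # hypothetical prior, mean ~ 1e-2

    # Stage 1: actuarial data pooled over a group of similar plants.
    group_errors, group_demands = 14, 2500
    a1, b1 = a + group_errors, b + (group_demands - group_errors)

    # Stage 2: plant-specific data update the population posterior.
    plant_errors, plant_demands = 2, 400
    a2, b2 = a1 + plant_errors, b1 + (plant_demands - plant_errors)

    posterior = stats.beta(a2, b2)       # plant-specific error-rate distribution
    print(posterior.mean(), posterior.interval(0.9))
    ```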

  5. A two-stage stochastic programming model for the optimal design of distributed energy systems

    International Nuclear Information System (INIS)

    Zhou, Zhe; Zhang, Jianyun; Liu, Pei; Li, Zheng; Georgiadis, Michael C.; Pistikopoulos, Efstratios N.

    2013-01-01

    Highlights: ► The optimal design of distributed energy systems under uncertainty is studied. ► A stochastic model is developed using genetic algorithm and Monte Carlo method. ► The proposed system possesses inherent robustness under uncertainty. ► The inherent robustness is due to energy storage facilities and grid connection. -- Abstract: A distributed energy system is a multi-input and multi-output energy system with substantial energy, economic and environmental benefits. The optimal design of such a complex system under energy demand and supply uncertainty poses significant challenges in terms of both modelling and corresponding solution strategies. This paper proposes a two-stage stochastic programming model for the optimal design of distributed energy systems. A two-stage decomposition based solution strategy is used to solve the optimization problem with genetic algorithm performing the search on the first stage variables and a Monte Carlo method dealing with uncertainty in the second stage. The model is applied to the planning of a distributed energy system in a hotel. Detailed computational results are presented and compared with those generated by a deterministic model. The impacts of demand and supply uncertainty on the optimal design of distributed energy systems are systematically investigated using proposed modelling framework and solution approach.
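
    As a toy illustration of the two-stage structure (not the paper's model): the first-stage decision is an installed capacity, the second stage buys any shortfall at a penalty price, and the expectation is estimated by Monte Carlo sampling of demand. A grid search stands in for the paper's genetic algorithm; all numbers are hypothetical.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    demand = rng.normal(100.0, 15.0, size=5000)  # second-stage demand scenarios, kW

    def expected_total_cost(cap, capex=2.0, op_cost=1.0, penalty=8.0):
        """First stage: install `cap` kW. Second stage (recourse):
        any shortfall is bought from the grid at a penalty price."""
        shortfall = np.maximum(demand - cap, 0.0)
        served = np.minimum(demand, cap)
        return capex * cap + (op_cost * served + penalty * shortfall).mean()

    caps = np.linspace(50.0, 200.0, 151)
    best = min(caps, key=expected_total_cost)    # grid search over first stage
    print(f"optimal capacity ~ {best:.0f} kW")
    ```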

  6. Multiobjective Two-Stage Stochastic Programming Problems with Interval Discrete Random Variables

    Directory of Open Access Journals (Sweden)

    S. K. Barik

    2012-01-01

Full Text Available Most real-life decision-making problems have more than one conflicting and incommensurable objective function. In this paper, we present a multiobjective two-stage stochastic linear programming problem in which some parameters of the linear constraints are interval-type discrete random variables with known probability distributions. Randomness of the discrete intervals is considered for the model parameters. Further, the concepts of best optimum and worst optimum solutions are analyzed in two-stage stochastic programming. To solve the stated problem, we first remove the randomness of the problem and formulate an equivalent deterministic linear programming model with multiobjective interval coefficients. The deterministic multiobjective model is then solved using the weighting method, applying the solution procedure of the interval linear programming technique. We obtain the upper and lower bounds of the objective function as the best and the worst values, respectively, which highlights the possible risk involved in the decision-making tool. A numerical example is presented to demonstrate the proposed solution procedure.

  7. Two stage bioethanol refining with multi litre stacked microbial fuel cell and microbial electrolysis cell.

    Science.gov (United States)

    Sugnaux, Marc; Happe, Manuel; Cachelin, Christian Pierre; Gloriod, Olivier; Huguenin, Gérald; Blatter, Maxime; Fischer, Fabian

    2016-12-01

Ethanol, electricity, hydrogen and methane were produced in a two-stage bioethanol refinery setup based on a 10 L microbial fuel cell (MFC) and a 33 L microbial electrolysis cell (MEC). The MFC was a triple stack for ethanol and electricity co-generation. The higher the stack potential, the more ethanol the stack configuration produced and the faster glucose was consumed. Under electrolytic conditions, ethanol productivity outperformed standard conditions and reached 96.3% of the theoretical best case. At lower external loads, currents and working potentials oscillated in a self-synchronized manner over all three MFC units in the stack. In the second refining stage, fermentation waste was converted into methane using the scaled-up MEC stack. The bioelectric methanisation reached 91% efficiency at room temperature with an applied voltage of 1.5 V using nickel cathodes. The two-stage bioethanol refining process employing bioelectrochemical reactors produces more energy vectors than is possible with today's ethanol distilleries. Copyright © 2016 Elsevier Ltd. All rights reserved.

  8. Two staged incentive contract focused on efficiency and innovation matching in critical chain project management

    Directory of Open Access Journals (Sweden)

    Min Zhang

    2014-09-01

Full Text Available Purpose: The purpose of this paper is to define the relative optimal incentive contract to effectively encourage employees to improve work efficiency while actively engaging in innovative behavior. Design/methodology/approach: This paper analyzes a two-staged incentive contract coordinating efficiency and innovation in Critical Chain Project Management using learning real options, based on principal-agent theory. A situational experiment is used to test the validity of the basic model. Findings: The two-staged incentive scheme is better suited to encouraging employees to create and exercise learning real options, engaging them efficiently in the innovation process in Critical Chain Project Management. We prove that the combination of tolerance for early failure and reward for long-term success is effective in motivating innovation. Research limitations/implications: We do not include individual differences in the perception of uncertainty, which might affect external validity; the basic model and the experiment design need improvement. Practical implications: Project managers should pay closer attention to early innovation behavior and to monitoring feedback on competition time when implementing Critical Chain Project Management. Originality/value: The central contribution of this paper is the theoretical and experimental analysis of incentive schemes for innovation in Critical Chain Project Management using principal-agent theory, to encourage the completion of CCPM methods as well as imitative free-riding on the creative ideas of other members in the team.

  9. Two-stage acid saccharification of fractionated Gelidium amansii minimizing the sugar decomposition.

    Science.gov (United States)

    Jeong, Tae Su; Kim, Young Soo; Oh, Kyeong Keun

    2011-11-01

Two-stage acid hydrolysis was conducted on the easy-reacting cellulose and the resistant-reacting cellulose of fractionated Gelidium amansii (f-GA). Acid hydrolysis of f-GA was performed between 170 and 200 °C, for a period of 0-5 min, and at an acid concentration of 2-5% (w/v, H2SO4) to determine the optimal conditions for acid hydrolysis. In the first stage of the acid hydrolysis, an optimum glucose yield of 33.7% was obtained at a reaction temperature of 190 °C, an acid concentration of 3.0%, and a reaction time of 3 min. In the second stage, a glucose yield of 34.2%, based on the amount of residual cellulose from the f-GA, was obtained at a temperature of 190 °C, a sulfuric acid concentration of 4.0%, and a reaction time of 3.7 min. Finally, 68.58% of the cellulose derived from f-GA was converted into glucose through two-stage acid saccharification under the aforementioned conditions. Copyright © 2011 Elsevier Ltd. All rights reserved.

  10. Use of a two-stage light-gas gun as an injector for electromagnetic railguns

    International Nuclear Information System (INIS)

    Shahinpoor, M.

    1989-01-01

Ablation of wall materials is known to be a major factor limiting the performance of railguns. To minimize this effect, it is desirable to inject projectiles into the railgun at velocities greater than the ablation threshold velocity (6-8 km/s for copper rails). Because two-stage light-gas guns are capable of achieving such velocities, a program was initiated to design, build and evaluate the performance of a two-stage light-gas gun, utilizing hydrogen gas, for use as an injector for an electromagnetic railgun. This effort is part of a project to develop a hypervelocity electromagnetic launcher (HELEOS) for use in equation-of-state studies. In this paper, the specific design features that enhance compatibility of the injector with the railgun are described, including a slip-joint between the injector launch tube and the coupling section to the railgun. The operational capabilities for using all major projectile velocity measuring techniques, such as in-bore pressure gauges, laser and CW x-ray interrupt techniques, flash x-ray, and continuous in-bore velocity measurement using VISAR interferometry, are also discussed. Finally, an internal ballistics code for optimizing gun performance has been utilized to interpret performance data of the gun

  11. Study on a high capacity two-stage free piston Stirling cryocooler working around 30 K

    Science.gov (United States)

    Wang, Xiaotao; Zhu, Jian; Chen, Shuai; Dai, Wei; Li, Ke; Pang, Xiaomin; Yu, Guoyao; Luo, Ercang

    2016-12-01

    This paper presents a two-stage high-capacity free-piston Stirling cryocooler driven by a linear compressor to meet the requirement of the high temperature superconductor (HTS) motor applications. The cryocooler system comprises a single piston linear compressor, a two-stage free piston Stirling cryocooler and a passive oscillator. A single stepped displacer configuration was adopted. A numerical model based on the thermoacoustic theory was used to optimize the system operating and structure parameters. Distributions of pressure wave, phase differences between the pressure wave and the volume flow rate and different energy flows are presented for a better understanding of the system. Some characterizing experimental results are presented. Thus far, the cryocooler has reached a lowest cold-head temperature of 27.6 K and achieved a cooling power of 78 W at 40 K with an input electric power of 3.2 kW, which indicates a relative Carnot efficiency of 14.8%. When the cold-head temperature increased to 77 K, the cooling power reached 284 W with a relative Carnot efficiency of 25.9%. The influences of different parameters such as mean pressure, input electric power and cold-head temperature are also investigated.
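
    The relative Carnot efficiency quoted here is the measured coefficient of performance divided by the Carnot COP at the same temperatures. Assuming heat rejection near 300 K (an assumption; the paper's exact reference temperature is not given in the abstract), the 40 K figure works out as

    \[
    \eta_{\mathrm{rel}} \;=\; \frac{\dot{Q}_c/P_{\mathrm{in}}}{T_c/(T_h-T_c)}
    \;=\; \frac{78\,\mathrm{W}/3200\,\mathrm{W}}{40/(300-40)} \;\approx\; 0.16,
    \]

    in line with the quoted 14.8% once the actual heat-rejection temperature and measured input power are used.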

  12. Two-stage heterotrophic and phototrophic culture strategy for algal biomass and lipid production.

    Science.gov (United States)

    Zheng, Yubin; Chi, Zhanyou; Lucker, Ben; Chen, Shulin

    2012-01-01

A two-stage heterotrophic and phototrophic culture strategy for algal biomass and lipid production was studied, wherein high-density heterotrophic cultures of Chlorella sorokiniana serve as seed for subsequent phototrophic growth. The data showed that the growth rate, cell density and productivity of heterotrophic C. sorokiniana were 3.0, 3.3 and 7.4 times higher than those of the phototrophic counterpart, respectively. Hetero- and phototrophic algal seeds had similar biomass/lipid production and fatty acid profiles when inoculated into the phototrophic culture system. To expand the application, food waste and wastewater were tested as feedstock for heterotrophic growth, and supported cell growth successfully. These results demonstrated the advantages of using heterotrophic algae cells as seeds for open algae culture systems. Additionally, a high inoculation rate of heterotrophic algal seed can be utilized as an effective method for contamination control. This two-stage heterotrophic phototrophic process is promising to provide a more efficient way for large-scale production of algal biomass and biofuels. Copyright © 2011 Elsevier Ltd. All rights reserved.

  13. Two-stage single-volume exchange transfusion in severe hemolytic disease of the newborn.

    Science.gov (United States)

    Abbas, Wael; Attia, Nayera I; Hassanein, Sahar M A

    2012-07-01

    Evaluation of two-stage single-volume exchange transfusion (TSSV-ET) in decreasing the post-exchange rebound increase in serum bilirubin level, with subsequent reduction of the need for repeated exchange transfusions. The study included 104 neonates with hyperbilirubinemia needing exchange transfusion. They were randomly enrolled into two equal groups, each group comprised 52 neonates. TSSV-ET was performed for the 52 neonates and the traditional single-stage double-volume exchange transfusion (SSDV-ET) was performed to 52 neonates. TSSV-ET significantly lowered rebound serum bilirubin level (12.7 ± 1.1 mg/dL), compared to SSDV-ET (17.3 ± 1.7 mg/dL), p < 0.001. Need for repeated exchange transfusions was significantly lower in TSSV-ET group (13.5%), compared to 32.7% in SSDV-ET group, p < 0.05. No significant difference was found between the two groups as regards the morbidity (11.5% and 9.6%, respectively) and the mortality (1.9% for both groups). Two-stage single-volume exchange transfusion proved to be more effective in reducing rebound serum bilirubin level post-exchange and in decreasing the need for repeated exchange transfusions.

  14. QUICKGUN: An algorithm for estimating the performance of two-stage light gas guns

    International Nuclear Information System (INIS)

    Milora, S.L.; Combs, S.K.; Gouge, M.J.; Kincaid, R.W.

    1990-09-01

An approximate method is described for solving the equation of motion of a projectile accelerated by a two-stage light gas gun that uses high-pressure (<100 bar) gas from a storage reservoir to drive a piston to moderate speed (<400 m/s) for the purpose of compressing the low molecular weight propellant gas (hydrogen or helium) to high pressure (1000 to 10,000 bar) and temperature (1000 to 10,000 K). Zero-dimensional, adiabatic (isentropic) processes are used to describe the time dependence of the ideal gas thermodynamic properties of the storage reservoir and the first and second stages of the system. A one-dimensional model based on an approximate method of characteristics, or wave diagram analysis, for flow with friction (nonisentropic) is used to describe the nonsteady compressible flow processes in the launch tube. Linear approximations are used for the characteristic and fluid particle trajectories by averaging the values of the flow parameters at the breech and at the base of the projectile. An assumed functional form for the Mach number at the breech provides the necessary boundary condition. Results of the calculation are compared with data obtained from two-stage light gas gun experiments at Oak Ridge National Laboratory for solid deuterium and nylon projectiles with masses ranging from 10 to 35 mg and for projectile speeds between 1.6 and 4.5 km/s. The predicted and measured velocities generally agree to within 15%. 19 refs., 3 figs., 2 tabs
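
    QUICKGUN couples zero-dimensional stage models to an approximate wave-diagram treatment of the launch tube. The sketch below keeps only the simplest ingredient, a zero-dimensional isentropic gas expansion driving the projectile, so it overpredicts muzzle velocity (it neglects gas inertia, friction and the piston dynamics that QUICKGUN models). All parameter values are hypothetical.

    ```python
    # 0-D isentropic expansion behind the projectile: p * V**gamma = const.
    gamma = 1.4      # effective ratio of specific heats (assumption)
    p0 = 5.0e7       # propellant pressure at projectile release, Pa (hypothetical)
    V0 = 2.0e-5      # initial propellant gas volume, m^3
    A = 2.0e-5       # bore cross-section, m^2
    m = 3.0e-5       # 30 mg projectile
    L = 1.0          # launch tube length, m

    dt, x, v = 1e-7, 0.0, 0.0
    while x < L:
        p = p0 * (V0 / (V0 + A * x)) ** gamma   # pressure at projectile base
        v += (p * A / m) * dt                   # Newton's second law
        x += v * dt
    print(f"idealized muzzle velocity ~ {v:.0f} m/s (upper bound)")
    ```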

  15. Fate of dissolved organic nitrogen in two stage trickling filter process.

    Science.gov (United States)

    Simsek, Halis; Kasi, Murthy; Wadhawan, Tanush; Bye, Christopher; Blonigen, Mark; Khan, Eakalak

    2012-10-15

    Dissolved organic nitrogen (DON) represents a significant portion of nitrogen in the final effluent of wastewater treatment plants (WWTPs). Biodegradable portion of DON (BDON) can support algal growth and/or consume dissolved oxygen in the receiving waters. The fate of DON and BDON has not been studied for trickling filter WWTPs. DON and BDON data were collected along the treatment train of a WWTP with a two-stage trickling filter process. DON concentrations in the influent and effluent were 27% and 14% of total dissolved nitrogen (TDN). The plant removed about 62% and 72% of the influent DON and BDON mainly by the trickling filters. The final effluent BDON values averaged 1.8 mg/L. BDON was found to be between 51% and 69% of the DON in raw wastewater and after various treatment units. The fate of DON and BDON through the two-stage trickling filter treatment plant was modeled. The BioWin v3.1 model was successfully applied to simulate ammonia, nitrite, nitrate, TDN, DON and BDON concentrations along the treatment train. The maximum growth rates for ammonia oxidizing bacteria (AOB) and nitrite oxidizing bacteria, and AOB half saturation constant influenced ammonia and nitrate output results. Hydrolysis and ammonification rates influenced all of the nitrogen species in the model output, including BDON. Copyright © 2012 Elsevier Ltd. All rights reserved.

  16. Two stage heterotrophy/photoinduction culture of Scenedesmus incrassatulus: potential for lutein production.

    Science.gov (United States)

    Flórez-Miranda, Liliana; Cañizares-Villanueva, Rosa Olivia; Melchy-Antonio, Orlando; Martínez-Jerónimo, Fernando; Flores-Ortíz, Cesar Mateo

    2017-11-20

A biomass production process including two stages, heterotrophy/photoinduction (TSHP), was developed to improve biomass and lutein production by the green microalga Scenedesmus incrassatulus. To determine the effects of different nitrogen sources (yeast extract and urea) and temperature in the heterotrophic stage, experiments using shake flask cultures with glucose as the carbon source were carried out. The highest biomass productivity and specific pigment concentrations were reached using urea+vitamins (U+V) at 30 °C. The first stage of the TSHP process was done in a 6 L bioreactor, and the inductions in a 3 L airlift photobioreactor. At the end of the heterotrophic stage, S. incrassatulus achieved the maximal biomass concentration, increasing from 7.22 g L-1 to 17.98 g L-1 with an increase in initial glucose concentration from 10.6 g L-1 to 30.3 g L-1. However, the higher initial glucose concentration resulted in a lower specific growth rate (μ) and lower cell yield (Y_x/s), possibly due to substrate inhibition. After 24 h of photoinduction, lutein content in S. incrassatulus biomass was 7 times higher than that obtained at the end of heterotrophic cultivation, and the lutein productivity was 1.6 times higher compared with autotrophic culture of this microalga. Hence, the two-stage heterotrophy/photoinduction culture is an effective strategy for high cell density and lutein production in S. incrassatulus. Copyright © 2017. Published by Elsevier B.V.

  17. Optimization of Two-Stage Peltier Modules: Structure and Exergetic Efficiency

    Directory of Open Access Journals (Sweden)

    Cesar Ramirez-Lopez

    2012-08-01

Full Text Available In this paper we undertake the theoretical analysis of a two-stage semiconductor thermoelectric module (TEM) which contains an arbitrary and different number of thermocouples, n1 and n2, in each stage (pyramid-styled TEM). The analysis is based on a dimensionless set of entropy balance equations. We study the effects of n1 and n2, the electric currents flowing through each stage, the applied temperatures and the thermoelectric properties of the semiconductor materials on the exergetic efficiency. Our main result implies that the electric currents flowing in each stage must necessarily be different, with a ratio of about 4.3, if the best thermal performance and the highest possible temperature difference between the cold and hot sides of the device are pursued. This fact had not been pointed out before for pyramid-styled two-stage TEMs.

  18. Hydrodeoxygenation of oils from cellulose in single and two-stage hydropyrolysis

    Energy Technology Data Exchange (ETDEWEB)

    Rocha, J.D.; Snape, C.E. [Strathclyde Univ., Glasgow (United Kingdom); Luengo, C.A. [Universidade Estadual de Campinas, SP (Brazil). Dept. de Fisica Aplicada

    1996-09-01

To investigate the removal of oxygen (hydrodeoxygenation) during the hydropyrolysis of cellulose, single and two-stage experiments on pure cellulose have been carried out using hydrogen pressures up to 10 MPa and temperatures over the range 300-520 °C. Carbon, oxygen and aromaticity balances have been determined from the product yields and compositions. For the two-stage tests, the primary oils were passed through a bed of commercial Ni/Mo γ-alumina-supported catalyst (Criterion 424, presulphided) at 400 °C. Raising the hydrogen pressure from atmospheric to 10 MPa increased the carbon conversion by 10 mole %, roughly equally divided between the oil and hydrocarbon gases. The oxygen content of the primary oil was reduced by over 10% to below 20% w/w. The addition of a dispersed iron sulphide catalyst further increased the oil yield at 10 MPa and reduced the oxygen content of the oil by a further 10%. The effect of hydrogen pressure on oil yields was most pronounced at low flow rates, where it is beneficial in helping to overcome diffusional resistances. Unlike the dispersed iron sulphide in the first stage, the use of the Ni-Mo catalyst in the second stage reduced both the oxygen content and the aromaticity of the oils. (Author)

  19. Two-stage stochastic programming model for the regional-scale electricity planning under demand uncertainty

    International Nuclear Information System (INIS)

    Huang, Yun-Hsun; Wu, Jung-Hua; Hsu, Yu-Ju

    2016-01-01

Traditional electricity supply planning models regard electricity demand as a deterministic parameter and require the total power output to satisfy the aggregate electricity demand. Today, however, electric system planners face tremendously complex environments full of uncertainties, in which electricity demand is a key source of uncertainty. In addition, electricity demand patterns differ considerably between regions. This paper developed a multi-region optimization model based on a two-stage stochastic programming framework to incorporate demand uncertainty. Furthermore, the decision tree method and a Monte Carlo simulation approach are integrated into the model to represent electricity demands as nodes and to determine their values and probabilities. The proposed model was successfully applied to a real case study (i.e., Taiwan's electricity sector) to show its applicability. Detailed simulation results were presented and compared with those generated by a deterministic model. Finally, a long-term electricity development roadmap at the regional level could be provided on the basis of our simulation results. - Highlights: • A multi-region, two-stage stochastic programming model has been developed. • The decision tree and Monte Carlo simulation are integrated into the framework. • Taiwan's electricity sector is used to illustrate the applicability of the model. • The results under deterministic and stochastic cases are shown for comparison. • Optimal portfolios of regional generation technologies can be identified.

  20. Final Report on Two-Stage Fast Spectrum Fuel Cycle Options

    Energy Technology Data Exchange (ETDEWEB)

    Yang, Won Sik [Purdue Univ., West Lafayette, IN (United States); Lin, C. S. [Purdue Univ., West Lafayette, IN (United States); Hader, J. S. [Purdue Univ., West Lafayette, IN (United States); Park, T. K. [Purdue Univ., West Lafayette, IN (United States); Deng, P. [Purdue Univ., West Lafayette, IN (United States); Yang, G. [Purdue Univ., West Lafayette, IN (United States); Jung, Y. S. [Purdue Univ., West Lafayette, IN (United States); Kim, T. K. [Argonne National Lab. (ANL), Argonne, IL (United States); Stauff, N. E. [Argonne National Lab. (ANL), Argonne, IL (United States)

    2016-01-30

    This report presents the performance characteristics of two “two-stage” fast spectrum fuel cycle options proposed to enhance uranium resource utilization and to reduce nuclear waste generation. One is a two-stage fast spectrum fuel cycle option of continuous recycle of plutonium (Pu) in a fast reactor (FR) and subsequent burning of minor actinides (MAs) in an accelerator-driven system (ADS). The first stage is a sodium-cooled FR fuel cycle starting with low-enriched uranium (LEU) fuel; at the equilibrium cycle, the FR is operated using the recovered Pu and natural uranium without supporting LEU. Pu and uranium (U) are co-extracted from the discharged fuel and recycled in the first stage, and the recovered MAs are sent to the second stage. The second stage is a sodium-cooled ADS in which MAs are burned in an inert matrix fuel form. The discharged fuel of ADS is reprocessed, and all the recovered heavy metals (HMs) are recycled into the ADS. The other is a two-stage FR/ADS fuel cycle option with MA targets loaded in the FR. The recovered MAs are not directly sent to ADS, but partially incinerated in the FR in order to reduce the amount of MAs to be sent to the ADS. This is a heterogeneous recycling option of transuranic (TRU) elements

  1. A two-stage inexact joint-probabilistic programming method for air quality management under uncertainty.

    Science.gov (United States)

    Lv, Y; Huang, G H; Li, Y P; Yang, Z F; Sun, W

    2011-03-01

    A two-stage inexact joint-probabilistic programming (TIJP) method is developed for planning a regional air quality management system with multiple pollutants and multiple sources. The TIJP method incorporates the techniques of two-stage stochastic programming, joint-probabilistic constraint programming and interval mathematical programming, where uncertainties expressed as probability distributions and interval values can be addressed. Moreover, it can not only examine the risk of violating joint-probability constraints, but also account for economic penalties as corrective measures against any infeasibility. The developed TIJP method is applied to a case study of a regional air pollution control problem, where the air quality index (AQI) is introduced for evaluation of the integrated air quality management system associated with multiple pollutants. The joint-probability exists in the environmental constraints for AQI, such that individual probabilistic constraints for each pollutant can be efficiently incorporated within the TIJP model. The results indicate that useful solutions for air quality management practices have been generated; they can help decision makers to identify desired pollution abatement strategies with minimized system cost and maximized environmental efficiency. Copyright © 2010 Elsevier Ltd. All rights reserved.
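
    The joint-probabilistic (chance) constraint referred to here has the generic form (illustrative notation, not the paper's exact model)

    \[
    \Pr\Bigl\{\sum_i a_{ti}\,x_{ti} \le b_t,\ \forall t\Bigr\} \;\ge\; 1-p,
    \]

    i.e., the probability of satisfying all AQI-related constraints simultaneously must reach 1 - p, in contrast with individual chance constraints that bound the violation risk of each pollutant separately; the two-stage part then adds recourse penalties for any realized infeasibility.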

  2. Many-Objective Particle Swarm Optimization Using Two-Stage Strategy and Parallel Cell Coordinate System.

    Science.gov (United States)

    Hu, Wang; Yen, Gary G; Luo, Guangchun

    2017-06-01

    It is a daunting challenge to balance the convergence and diversity of an approximate Pareto front in a many-objective optimization evolutionary algorithm. A novel algorithm, named many-objective particle swarm optimization with the two-stage strategy and parallel cell coordinate system (PCCS), is proposed in this paper to improve the comprehensive performance in terms of the convergence and diversity. In the proposed two-stage strategy, the convergence and diversity are separately emphasized at different stages by a single-objective optimizer and a many-objective optimizer, respectively. A PCCS is exploited to manage the diversity, such as maintaining a diverse archive, identifying the dominance resistant solutions, and selecting the diversified solutions. In addition, a leader group is used for selecting the global best solutions to balance the exploitation and exploration of a population. The experimental results illustrate that the proposed algorithm outperforms six chosen state-of-the-art designs in terms of the inverted generational distance and hypervolume over the DTLZ test suite.

  3. Two stage, low temperature, catalyzed fluidized bed incineration with in situ neutralization for radioactive mixed wastes

    International Nuclear Information System (INIS)

    Wade, J.F.; Williams, P.M.

    1995-01-01

A two-stage, low-temperature, catalyzed fluidized bed incineration process is proving successful at incinerating hazardous wastes containing nuclear material. The process operates at 550 °C and 650 °C in its two stages. Acid gas neutralization takes place in situ using sodium carbonate as a sorbent in the first-stage bed. The feed material to the incinerator is hazardous waste (as defined by the Resource Conservation and Recovery Act) mixed with radioactive materials: plutonium, uranium, and americium, byproducts of nuclear weapons production. Despite its low-temperature operation, this system successfully destroyed polychlorinated biphenyls at a 99.99992% destruction and removal efficiency. Radionuclides and volatile heavy metals leave the fluidized beds and enter the air pollution control system in minimal amounts. Recently collected modeling and experimental data show the process minimizes dioxin and furan production. The report also discusses air pollution, ash solidification, and other data collected from pilot- and demonstration-scale testing. The testing took place at Rocky Flats Environmental Technology Site, a US Department of Energy facility, in the 1970s, 1980s, and 1990s

  4. On bi-criteria two-stage transportation problem: a case study

    Directory of Open Access Journals (Sweden)

    Ahmad MURAD

    2010-01-01

Full Text Available The study of the optimum distribution of goods between sources and destinations is one of the important topics in project economics. This importance comes from minimizing the transportation cost, deterioration, time, etc. The classical transportation problem constitutes one of the major areas of application for linear programming. The aim of this problem is to obtain the optimum distribution of goods from different sources to different destinations that minimizes the total transportation cost. From the practical point of view, transportation problems may differ from the classical form: they may contain one or more objective functions, one or more transportation stages, and one or more types of commodity with one or more means of transport. The aim of this paper is to construct an optimization model of the transportation problem for a mill-stones company. The model is formulated as a bi-criteria two-stage transportation problem with a special structure depending on the capacities of suppliers and warehouses and the requirements of the destinations. A solution algorithm is introduced to solve this class of bi-criteria two-stage transportation problems, obtaining the set of non-dominated extreme points and the efficient solutions accompanying each one, which enables the decision maker to choose the best one. The solution algorithm is mainly based on the fruitful application of methods for treating transportation problems, the duality theory of linear programming, and methods for solving bi-criteria linear programming problems.

  5. Two-stage effects of awareness cascade on epidemic spreading in multiplex networks

    Science.gov (United States)

    Guo, Quantong; Jiang, Xin; Lei, Yanjun; Li, Meng; Ma, Yifang; Zheng, Zhiming

    2015-01-01

Human awareness plays an important role in the spread of infectious diseases and the control of propagation patterns. The dynamic process with human awareness is called an awareness cascade, during which individuals exhibit herd-like behavior because they make decisions based on the actions of other individuals [Borge-Holthoefer et al., J. Complex Networks 1, 3 (2013), 10.1093/comnet/cnt006]. In this paper, to investigate epidemic spreading with awareness cascades, we propose a local-awareness-controlled contagion spreading model on multiplex networks. By theoretical analysis using a microscopic Markov chain approach and by numerical simulations, we find the emergence of an abrupt transition of the epidemic threshold βc as the local awareness ratio α approaches 0.5, which induces two-stage effects on the epidemic threshold and the final epidemic size. These findings indicate that increasing α can accelerate the outbreak of epidemics. Furthermore, a simple 1D lattice model is investigated to illustrate the two-stage-like sharp transition at αc ≈ 0.5. The results give a better understanding of why some epidemics cannot break out in reality and also provide a potential approach to suppressing and controlling awareness cascading systems.

  6. A preventive maintenance policy based on dependent two-stage deterioration and external shocks

    International Nuclear Information System (INIS)

    Yang, Li; Ma, Xiaobing; Peng, Rui; Zhai, Qingqing; Zhao, Yu

    2017-01-01

This paper proposes a preventive maintenance policy for a single-unit system whose failure has two competing and dependent causes, i.e., internal deterioration and sudden shocks. The internal failure process is divided into two stages, normal and defective. Shocks arrive according to a non-homogeneous Poisson process (NHPP) and lead to immediate failure of the system. The occurrence rate of shocks is affected by the state of the system. An age-based replacement and a finite number of periodic inspections are scheduled simultaneously to deal with the competing failures. The objective of this study is to determine the optimal preventive replacement interval, inspection interval and number of inspections such that the expected cost per unit time is minimized. A case study on oil pipeline maintenance is presented to illustrate the maintenance policy. - Highlights: • A maintenance model based on two-stage deterioration and sudden shocks is developed. • The impact of the internal system state on the external shock process is studied. • A new preventive maintenance strategy combining age-based replacement and periodic inspections is proposed. • Postponed replacement of a defective system is provided for by restricting the number of inspections.
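
    The criterion described, expected cost per unit time, typically comes from the renewal-reward theorem. In generic form (a sketch, not the paper's exact expression):

    \[
    C(T,\tau,N) \;=\; \frac{\mathbb{E}[\text{cost per renewal cycle}]}{\mathbb{E}[\text{cycle length}]},
    \]

    minimized over the preventive replacement age T, the inspection interval τ, and the number of inspections N, where a cycle ends at the first replacement, preventive or corrective.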

  7. Two-stage supercharging of a passenger car diesel engine; Zweistufige Aufladung eines Pkw-Dieselmotors

    Energy Technology Data Exchange (ETDEWEB)

    Wittmer, A.; Albrecht, P.; Becker, B.; Vogt, G.; Fischer, R. [Erphi Elektronik GmbH, Holzkirchen (Germany)

    2004-07-01

Two-stage supercharging of internal combustion engines, with specific outputs beyond 70 kW/l, opens up further possibilities for engine downsizing. A low-pressure and a high-pressure supercharger are connected in series with bypass lines. The control strategy required for a controlled transition from one stage to the next is developed in this contribution using a model of the exhaust back-pressure: the control element is actuated so that the desired pressure upstream of the turbines is established. Steady-state results on two engines demonstrated the performance potential of a two-stage supercharged diesel engine with common-rail injection. Dynamic driving tests impressively confirm the fast boost-pressure build-up even from low engine speeds, together with good transition behavior from the high-pressure to the low-pressure stage. With the control method presented here, the two-stage supercharged diesel engine thus offers optimum conditions for downsizing, under the constraint that driving performance should remain essentially unimpaired. (orig.)

  8. A low-voltage sense amplifier with two-stage operational amplifier clamping for flash memory

    Science.gov (United States)

    Guo, Jiarong

    2017-04-01

A low-voltage sense amplifier for flash memory, with a reference current generator utilizing a two-stage operational amplifier clamp structure, is presented in this paper; it is capable of operating with a minimum supply voltage of 1 V. A new reference current generation circuit, composed of a reference cell and a two-stage operational amplifier clamping the drain pole of the reference cell, is used to generate the reference current, which avoids the threshold limitation caused by the current mirror transistor in the traditional sense amplifier. A novel reference voltage generation circuit using a dummy bit-line structure without pull-down current is also adopted, which not only improves the sense window, enhancing read precision, but also saves power consumption. The sense amplifier was implemented in a flash memory realized in 90 nm flash technology. Experimental results show the access time is 14.7 ns with a power supply of 1.2 V at the slow corner and 125 °C. Project supported by the National Natural Science Foundation of China (No. 61376028).

  9. Robust Frequency-Domain Constrained Feedback Design via a Two-Stage Heuristic Approach.

    Science.gov (United States)

    Li, Xianwei; Gao, Huijun

    2015-10-01

    Based on a two-stage heuristic method, this paper is concerned with the design of robust feedback controllers with restricted frequency-domain specifications (RFDSs) for uncertain linear discrete-time systems. Polytopic uncertainties are assumed to enter all the system matrices, while RFDSs are motivated by the fact that practical design specifications are often described in restricted finite frequency ranges. Dilated multipliers are first introduced to relax the generalized Kalman-Yakubovich-Popov lemma for output feedback controller synthesis and robust performance analysis. Then a two-stage approach to output feedback controller synthesis is proposed: at the first stage, a robust full-information (FI) controller is designed, which is used to construct a required output feedback controller at the second stage. To improve the solvability of the synthesis method, heuristic iterative algorithms are further formulated for exploring the feedback gain and optimizing the initial FI controller at the individual stage. The effectiveness of the proposed design method is finally demonstrated by the application to active control of suspension systems.

  10. Two-stage agglomeration of fine-grained herbal nettle waste

    Science.gov (United States)

    Obidziński, Sławomir; Joka, Magdalena; Fijoł, Olga

    2017-10-01

    This paper compares the densification work necessary for the pressure agglomeration of fine-grained dusty nettle waste, with the densification work involved in two-stage agglomeration of the same material. In the first stage, the material was pre-densified through coating with a binder material in the form of a 5% potato starch solution, and then subjected to pressure agglomeration. A number of tests were conducted to determine the effect of the moisture content in the nettle waste (15, 18 and 21%), as well as the process temperature (50, 70, 90°C) on the values of densification work and the density of the obtained pellets. For pre-densified pellets from a mixture of nettle waste and a starch solution, the conducted tests determined the effect of pellet particle size (1, 2, and 3 mm) and the process temperature (50, 70, 90°C) on the same values. On the basis of the tests, we concluded that the introduction of a binder material and the use of two-stage agglomeration in nettle waste densification resulted in increased densification work (as compared to the densification of nettle waste alone) and increased pellet density.

  11. A two-stage heating scheme for heat assisted magnetic recording

    Science.gov (United States)

    Xiong, Shaomin; Kim, Jeongmin; Wang, Yuan; Zhang, Xiang; Bogy, David

    2014-05-01

Heat Assisted Magnetic Recording (HAMR) has been proposed to extend the storage areal density beyond 1 Tb/in² for next-generation magnetic storage. A near field transducer (NFT) is widely used in HAMR systems to locally heat the magnetic disk during the writing process. However, much of the laser power is absorbed around the NFT, which causes overheating of the NFT and reduces its reliability. In this work, a two-stage heating scheme is proposed to reduce the thermal load by separating the heating process into two individual stages, from an optical waveguide and an NFT, respectively. As the first stage, the optical waveguide is placed in front of the NFT and delivers part of the laser energy directly onto the disk surface, heating it to a peak temperature somewhat below the Curie temperature of the magnetic material. The NFT then works as the second heating stage, heating a smaller area inside the waveguide-heated area up to the Curie point. The energy applied to the NFT in the second heating stage is reduced compared with a typical single-stage NFT heating system. With this reduced thermal load, the lifetime of the NFT can be extended by orders of magnitude under cyclic load conditions.

  12. Experiences from the full-scale implementation of a new two-stage vertical flow constructed wetland design.

    Science.gov (United States)

    Langergraber, Guenter; Pressl, Alexander; Haberl, Raimund

    2014-01-01

This paper describes the results of the first full-scale implementation of a two-stage vertical flow constructed wetland (CW) system developed to increase nitrogen removal. The full-scale system was constructed for the Bärenkogelhaus, which is located in Styria at the top of a mountain, 1,168 m above sea level. The Bärenkogelhaus has a restaurant with 70 seats, 16 rooms for overnight guests and is a popular site for day visits, especially during weekends and public holidays. The CW treatment system was designed for a hydraulic load of 2,500 L/d with a specific surface area requirement of 2.7 m² per person equivalent (PE). It was built in fall 2009 and started operation in April 2010 when the restaurant was re-opened. Samples were taken between July 2010 and June 2013 and were analysed in the laboratory of the Institute of Sanitary Engineering at BOKU University using standard methods. During 2010 the restaurant at Bärenkogelhaus was open 5 days a week, whereas from 2011 the Bärenkogelhaus was open only on demand for events. This resulted in decreased organic loads of the system in the later period. In general, the measured effluent concentrations were low and the removal efficiencies high. During the whole period the ammonia nitrogen effluent concentration was below 1 mg/L even at effluent water temperatures below 3 °C. Investigations during high-load periods, i.e. events like weddings and festivals at weekends with more than 100 visitors, showed a very robust treatment performance of the two-stage CW system. Effluent concentrations of chemical oxygen demand and NH4-N were not affected by these events with high hydraulic loads.

  13. The Effect Of Two-Stage Age Hardening Treatment Combined With Shot Peening On Stress Distribution In The Surface Layer Of 7075 Aluminum Alloy

    Directory of Open Access Journals (Sweden)

    Kaczmarek Ł.

    2015-09-01

Full Text Available The article presents the results of a study on improving the mechanical properties of the surface layer of 7075 aluminum alloy via a two-stage aging treatment combined with shot peening. The experiments proved that this thermo-mechanical treatment may significantly improve hardness and the stress distribution in the surface layer. Compressive stresses of 226 ± 5.5 MPa and a hardness of 210 ± 2 HV were obtained for selected samples.

  14. Distribution of extracellular potassium and electrophysiologic changes during two-stage coronary ligation in the isolated, perfused canine heart

    NARCIS (Netherlands)

    Coronel, R.; Fiolet, J. W.; Wilms-Schopman, J. G.; Opthof, T.; Schaapherder, A. F.; Janse, M. J.

    1989-01-01

    We studied the relation between [K+]o and the electrophysiologic changes during a "Harris two-stage ligation," which is an occlusion of a coronary artery, preceded by a 30-minute period of 50% reduction of flow through the artery. This two-stage ligation has been reported to be antiarrhythmic. Local

  15. Performance of an iterative two-stage bayesian technique for population pharmacokinetic analysis of rich data sets

    NARCIS (Netherlands)

    Proost, Johannes H.; Eleveld, Douglas J.

    2006-01-01

    Purpose. To test the suitability of an Iterative Two-Stage Bayesian (ITSB) technique for population pharmacokinetic analysis of rich data sets, and to compare ITSB with Standard Two-Stage (STS) analysis and nonlinear Mixed Effect Modeling (MEM). Materials and Methods. Data from a clinical study with

  16. Rapid Two-stage Versus One-stage Surgical Repair of Interrupted Aortic Arch with Ventricular Septal Defect in Neonates

    Directory of Open Access Journals (Sweden)

    Meng-Lin Lee

    2008-11-01

    Conclusion: The outcome of rapid two-stage repair is comparable to that of one-stage repair. Rapid two-stage repair has the advantages of significantly shorter cardiopulmonary bypass duration and AXC time, and avoids deep hypothermic circulatory arrest. LVOTO remains an unresolved issue, and postoperative aortic arch restenosis can be dilated effectively by percutaneous balloon angioplasty.

  17. X-Ray Temperatures, Luminosities, and Masses from XMM-Newton Follow-up of the First Shear-selected Galaxy Cluster Sample

    Energy Technology Data Exchange (ETDEWEB)

    Deshpande, Amruta J.; Hughes, John P. [Department of Physics and Astronomy, Rutgers the State University of New Jersey, 136 Frelinghuysen Road, Piscataway, NJ 08854 (United States); Wittman, David, E-mail: amrejd@physics.rutgers.edu, E-mail: jph@physics.rutgers.edu, E-mail: dwittman@physics.ucdavis.edu [Department of Physics, University of California, Davis, One Shields Avenue, Davis, CA 95616 (United States)

    2017-04-20

    We continue the study of the first sample of shear-selected clusters from the initial 8.6 square degrees of the Deep Lens Survey (DLS); a sample with well-defined selection criteria corresponding to the highest ranked shear peaks in the survey area. We aim to characterize the weak lensing selection by examining the sample's X-ray properties. There are multiple X-ray clusters associated with nearly all the shear peaks: 14 X-ray clusters corresponding to seven DLS shear peaks. An additional three X-ray clusters cannot be definitively associated with shear peaks, mainly due to large positional offsets between the X-ray centroid and the shear peak. Here we report on the XMM-Newton properties of the 17 X-ray clusters. The X-ray clusters display a wide range of luminosities and temperatures; the L_X-T_X relation we determine for the shear-associated X-ray clusters is consistent with X-ray cluster samples selected without regard to dynamical state, while it is inconsistent with self-similarity. For a subset of the sample, we measure X-ray masses using temperature as a proxy, and compare to weak lensing masses determined by the DLS team. The resulting mass comparison is consistent with equality. The X-ray and weak lensing masses show considerable intrinsic scatter (~48%), which is consistent with X-ray selected samples when their X-ray and weak lensing masses are independently determined.
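
    Scaling relations of this kind are commonly estimated as power laws fitted in log-log space. The short sketch below uses made-up temperatures and luminosities rather than the DLS/XMM-Newton measurements; self-similarity predicts a slope near 2, while observed samples are typically steeper.

```python
import numpy as np

# Hypothetical cluster temperatures (keV) and luminosities (1e44 erg/s);
# illustrative numbers only, not the DLS/XMM-Newton data.
T = np.array([2.1, 3.4, 4.8, 6.0, 7.5])
L = np.array([0.8, 2.9, 7.1, 13.0, 24.0])

# Fit log10(L) = alpha*log10(T) + beta, i.e. L ∝ T^alpha.
alpha, beta = np.polyfit(np.log10(T), np.log10(L), 1)
print(f"fitted slope alpha = {alpha:.2f}")
```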

  18. Two stages of Kondo effect and competition between RKKY and Kondo in Gd-based intermetallic compound

    Energy Technology Data Exchange (ETDEWEB)

    Vaezzadeh, Mehdi [Department of Physics, K.N.Toosi University of Technology, P.O. Box 15875-4416, Tehran (Iran, Islamic Republic of)]. E-mail: mehdi@kntu.ac.ir; Yazdani, Ahmad [Tarbiat Modares University, P.O. Box 14155-4838, Tehran (Iran, Islamic Republic of); Vaezzadeh, Majid [Department of Physics, K.N.Toosi University of Technology, P.O. Box 15875-4416, Tehran (Iran, Islamic Republic of); Daneshmand, Gissoo [Department of Physics, K.N.Toosi University of Technology, P.O. Box 15875-4416, Tehran (Iran, Islamic Republic of); Kanzeghi, Ali [Department of Physics, K.N.Toosi University of Technology, P.O. Box 15875-4416, Tehran (Iran, Islamic Republic of)

    2006-05-01

    The magnetic behavior of the Gd-based intermetallic compound Gd2Al(1-x)Aux, in the form of powder and needles, is investigated. All the samples have an orthorhombic crystal structure. Only the compound with x=0.4 shows the Kondo effect (the other compounds behave normally). For the powder form of the compound with x=0.4, the susceptibility measurement χ(T) shows two distinct stages. Moreover, for T>T_K2 a fall in the value of χ(T) is observable, which indicates a weak presence of a ferromagnetic phase. Regarding the two stages of the Kondo effect, we observe at the first (T_K1) an increase of χ(T) and at the second stage (T_K2) a further remarkable decrease of χ(T), with T_K1>T_K2. For the sample in the form of needles, the first stage is observable only under a high magnetic field. This first stage could correspond to a narrow resonance between the Kondo cloud and itinerant electrons. The second stage, which is clearly visible for the sample in powder form, can be attributed to a complete polarization of the Kondo cloud. The observation of these two Kondo stages could be due to the weak presence of an RKKY contribution.

  19. Two stages of Kondo effect and competition between RKKY and Kondo in Gd-based intermetallic compound

    International Nuclear Information System (INIS)

    Vaezzadeh, Mehdi; Yazdani, Ahmad; Vaezzadeh, Majid; Daneshmand, Gissoo; Kanzeghi, Ali

    2006-01-01

    The magnetic behavior of the Gd-based intermetallic compound Gd2Al(1-x)Aux, in the form of powder and needles, is investigated. All the samples have an orthorhombic crystal structure. Only the compound with x=0.4 shows the Kondo effect (the other compounds behave normally). For the powder form of the compound with x=0.4, the susceptibility measurement χ(T) shows two distinct stages. Moreover, for T>T_K2 a fall in the value of χ(T) is observable, which indicates a weak presence of a ferromagnetic phase. Regarding the two stages of the Kondo effect, we observe at the first (T_K1) an increase of χ(T) and at the second stage (T_K2) a further remarkable decrease of χ(T), with T_K1>T_K2. For the sample in the form of needles, the first stage is observable only under a high magnetic field. This first stage could correspond to a narrow resonance between the Kondo cloud and itinerant electrons. The second stage, which is clearly visible for the sample in powder form, can be attributed to a complete polarization of the Kondo cloud. The observation of these two Kondo stages could be due to the weak presence of an RKKY contribution.

  20. Two-stage atlas subset selection in multi-atlas based image segmentation.

    Science.gov (United States)

    Zhao, Tingting; Ruan, Dan

    2015-06-01

    Fast-growing access to large databases and cloud-stored data presents a unique opportunity for multi-atlas based image segmentation and also presents challenges in heterogeneous atlas quality and computation burden. This work aims to develop a novel two-stage method tailored to the special needs in the face of a large atlas collection with varied quality, so that high-accuracy segmentation can be achieved with low computational cost. An atlas subset selection scheme is proposed to substitute a significant portion of the computationally expensive full-fledged registration in the conventional scheme with a low-cost alternative. More specifically, the authors introduce a two-stage atlas subset selection method. In the first stage, an augmented subset is obtained based on a low-cost registration configuration and a preliminary relevance metric; in the second stage, the subset is further narrowed down to a fusion set of desired size, based on full-fledged registration and a refined relevance metric. An inference model is developed to characterize the relationship between the preliminary and refined relevance metrics, and a proper augmented subset size is derived to ensure that the desired atlases survive the preliminary selection with high probability. The performance of the proposed scheme has been assessed with cross validation based on two clinical datasets consisting of manually segmented prostate and brain magnetic resonance images, respectively. The proposed scheme demonstrates comparable end-to-end segmentation performance to the conventional single-stage selection method, but with significant computation reduction. Compared with the alternative computation reduction method, their scheme improves the mean and median Dice similarity coefficient values from (0.74, 0.78) to (0.83, 0.85) and from (0.82, 0.84) to (0.95, 0.95) for prostate and corpus callosum segmentation, respectively, with statistical significance. The authors have developed a novel two-stage atlas...
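
    As a concrete illustration of the two-stage selection logic described above, here is a minimal sketch. The function names, and the use of correlation as a stand-in for both the preliminary and refined relevance metrics, are assumptions for illustration; the metrics in the paper are registration-based.

```python
import numpy as np

def two_stage_atlas_selection(atlases, target, cheap_sim, costly_sim,
                              n_augmented, n_fusion):
    """Stage 1: rank all atlases with a low-cost relevance metric and
    keep an augmented subset sized so that truly relevant atlases
    survive with high probability. Stage 2: re-rank only that subset
    with the expensive, full-registration-based metric and keep the
    final fusion set."""
    prelim = np.array([cheap_sim(a, target) for a in atlases])
    augmented = np.argsort(prelim)[::-1][:n_augmented]
    refined = np.array([costly_sim(atlases[i], target) for i in augmented])
    return augmented[np.argsort(refined)[::-1][:n_fusion]]

# Toy usage with correlation as a stand-in similarity measure.
rng = np.random.default_rng(0)
atlases = [rng.standard_normal(100) for _ in range(50)]
target = atlases[3] + 0.1 * rng.standard_normal(100)
corr = lambda a, b: np.corrcoef(a, b)[0, 1]
print(two_stage_atlas_selection(atlases, target, corr, corr, 10, 3))
```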

  1. Two-stage atlas subset selection in multi-atlas based image segmentation

    Energy Technology Data Exchange (ETDEWEB)

    Zhao, Tingting, E-mail: tingtingzhao@mednet.ucla.edu; Ruan, Dan, E-mail: druan@mednet.ucla.edu [The Department of Radiation Oncology, University of California, Los Angeles, California 90095 (United States)

    2015-06-15

    Purpose: Fast-growing access to large databases and cloud-stored data presents a unique opportunity for multi-atlas based image segmentation and also presents challenges in heterogeneous atlas quality and computation burden. This work aims to develop a novel two-stage method tailored to the special needs in the face of a large atlas collection with varied quality, so that high-accuracy segmentation can be achieved with low computational cost. Methods: An atlas subset selection scheme is proposed to substitute a significant portion of the computationally expensive full-fledged registration in the conventional scheme with a low-cost alternative. More specifically, the authors introduce a two-stage atlas subset selection method. In the first stage, an augmented subset is obtained based on a low-cost registration configuration and a preliminary relevance metric; in the second stage, the subset is further narrowed down to a fusion set of desired size, based on full-fledged registration and a refined relevance metric. An inference model is developed to characterize the relationship between the preliminary and refined relevance metrics, and a proper augmented subset size is derived to ensure that the desired atlases survive the preliminary selection with high probability. Results: The performance of the proposed scheme has been assessed with cross validation based on two clinical datasets consisting of manually segmented prostate and brain magnetic resonance images, respectively. The proposed scheme demonstrates comparable end-to-end segmentation performance to the conventional single-stage selection method, but with significant computation reduction. Compared with the alternative computation reduction method, their scheme improves the mean and median Dice similarity coefficient values from (0.74, 0.78) to (0.83, 0.85) and from (0.82, 0.84) to (0.95, 0.95) for prostate and corpus callosum segmentation, respectively, with statistical significance. Conclusions: The authors...

  2. Two-stage atlas subset selection in multi-atlas based image segmentation

    International Nuclear Information System (INIS)

    Zhao, Tingting; Ruan, Dan

    2015-01-01

    Purpose: Fast-growing access to large databases and cloud-stored data presents a unique opportunity for multi-atlas based image segmentation and also presents challenges in heterogeneous atlas quality and computation burden. This work aims to develop a novel two-stage method tailored to the special needs in the face of a large atlas collection with varied quality, so that high-accuracy segmentation can be achieved with low computational cost. Methods: An atlas subset selection scheme is proposed to substitute a significant portion of the computationally expensive full-fledged registration in the conventional scheme with a low-cost alternative. More specifically, the authors introduce a two-stage atlas subset selection method. In the first stage, an augmented subset is obtained based on a low-cost registration configuration and a preliminary relevance metric; in the second stage, the subset is further narrowed down to a fusion set of desired size, based on full-fledged registration and a refined relevance metric. An inference model is developed to characterize the relationship between the preliminary and refined relevance metrics, and a proper augmented subset size is derived to ensure that the desired atlases survive the preliminary selection with high probability. Results: The performance of the proposed scheme has been assessed with cross validation based on two clinical datasets consisting of manually segmented prostate and brain magnetic resonance images, respectively. The proposed scheme demonstrates comparable end-to-end segmentation performance to the conventional single-stage selection method, but with significant computation reduction. Compared with the alternative computation reduction method, their scheme improves the mean and median Dice similarity coefficient values from (0.74, 0.78) to (0.83, 0.85) and from (0.82, 0.84) to (0.95, 0.95) for prostate and corpus callosum segmentation, respectively, with statistical significance. Conclusions: The authors...

  3. Two stages and three components of the postural preparation to action.

    Science.gov (United States)

    Krishnan, Vennila; Aruin, Alexander S; Latash, Mark L

    2011-07-01

    Previous studies of postural preparation to action/perturbation have primarily focused on anticipatory postural adjustments (APAs), the changes in muscle activation levels resulting in the production of net forces and moments of force. We hypothesized that postural preparation to action consists of two stages: (1) early postural adjustments (EPAs), seen a few hundred milliseconds prior to an expected external perturbation, and (2) APAs, seen about 100 ms prior to the perturbation. We also hypothesized that each stage consists of three components: anticipatory synergy adjustments, seen as changes in the covariation of the magnitudes of commands to muscle groups (M-modes); changes in the across-trials averaged levels of muscle activation; and mechanical effects such as shifts of the center of pressure. Nine healthy participants were subjected to external perturbations created by a swinging pendulum while standing in a semi-squatting posture. Electrical activity of twelve trunk and leg muscles and displacements of the center of pressure were recorded and analyzed. Principal component analysis was used to identify four M-modes within the space of muscle activations using indices of integrated muscle activation. This analysis was performed twice, over two phases: 400-700 ms prior to the perturbation and the 200 ms just prior to the perturbation. Similar robust results were obtained using the data from both phases. An index of a multi-M-mode synergy stabilizing the center of pressure displacement was computed within the framework of the uncontrolled manifold hypothesis. The results showed high synergy indices during quiet stance. Each of the two stages started with a drop in the synergy index, followed by a change in the across-trials averaged activation levels of postural muscles. There was a very long electromechanical delay during the early postural adjustments and a much shorter delay during the APAs. Overall, the results support our main hypothesis of two stages and three components.
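
    The M-mode identification step lends itself to a short sketch. The snippet below applies PCA to a hypothetical trials-by-muscles matrix of integrated EMG indices as a rough stand-in for the analysis described above; the synergy index within the uncontrolled manifold framework is not reproduced here.

```python
import numpy as np
from sklearn.decomposition import PCA

# Hypothetical matrix of integrated EMG indices: one row per trial,
# one column per muscle (12 muscles, as in the study); random data
# stands in for real recordings.
rng = np.random.default_rng(0)
emg_indices = rng.standard_normal((60, 12))

# Identify M-modes as the leading principal components of the
# muscle-activation space (the study retained four M-modes).
pca = PCA(n_components=4)
mode_magnitudes = pca.fit_transform(emg_indices)  # per-trial M-mode gains
m_modes = pca.components_                         # muscle weights per mode
print(pca.explained_variance_ratio_)
```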

  4. Evaluation of a modified two-stage inferior alveolar nerve block technique: A preliminary investigation

    Directory of Open Access Journals (Sweden)

    Ashwin Rao

    2017-01-01

    Introduction: The two-stage technique of inferior alveolar nerve block (IANB) administration does not address the pain associated with "needle insertion" and "local anesthetic solution deposition" in the "first stage" of the injection. This study evaluated the reaction of children to "needle insertion" and "local anesthetic solution deposition" during the "first stage" of a modified two-stage technique and compared it to the "first phase" of an IANB administered with the standard one-stage technique. Materials and Methods: This was a parallel, single-blinded comparative study. A total of 34 children (between 6 and 10 years of age) were randomly divided into two groups to receive an IANB either through the modified two-stage technique (MTST) (Group A; 15 children) or the standard one-stage technique (SOST) (Group B; 19 children). The evaluation was done using the Face, Legs, Activity, Cry, Consolability (FLACC) scale, an objective scale based on the expressions of the child. The data were analyzed using Fisher's exact test, with the significance level set at P < 0.05. Results: 73.7% of children in Group B indicated moderate pain during the "first phase" of SOST, whereas no children did so in the "first stage" of Group A. Group A had 33.3% of children scoring "0", indicating relaxed/comfortable children, compared to 0% in Group B. In Group A, 66.7% of children scored between 1 and 3, indicating mild discomfort, compared to 26.3% in Group B. The difference in scores between the two groups in each category (relaxed/comfortable, mild discomfort, moderate pain) was highly significant (P < 0.001). Conclusion: The reaction of children in Group A during "needle insertion" and "local anesthetic solution deposition" in the "first stage" of MTST was significantly lower than that of Group B during the "first phase" of SOST.

  5. Product prioritization in a two-stage food production system with intermediate storage

    DEFF Research Database (Denmark)

    Akkerman, Renzo; van Donk, Dirk Pieter

    2007-01-01

    In the food-processing industry, usually a limited number of storage tanks for intermediate storage is available, which are used for different products. The market sometimes requires extremely short lead times for some products, leading to prioritization of these products, partly through the dedication of a storage tank. This type of situation has hardly been investigated, although planners struggle with it in practice. This paper aims at investigating the fundamental effect of prioritization and dedicated storage in a two-stage production system, for various product mixes. We show the performance improvements for the prioritized product, as well as the negative effects for the other products. We also show how the effect decreases with more storage tanks, and increases with more products.

  6. Hugoniot measurements in vanadium using the LLNL two-stage light-gas gun

    International Nuclear Information System (INIS)

    Gathers, G.R.; Mitchell, A.C.; Holmes, N.C.

    1983-01-01

    Hugoniot measurements on vanadium have been made using the LLNL two-stage light-gas gun. The direct collision method, with electrical pins and a tantalum flyer accelerated to 6.28 km/s, was used. Al'tshuler et al. have reported Hugoniot measurements on vanadium using explosives and the impedance-match method. They reported a kink in the U_s-U_p relationship at 183 GPa and attributed it to electronic transitions. The upper portion of their curve is based on a single point at 339 GPa. The present work was performed to further investigate the equation of state in the high-pressure range.
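
    For context, the impedance-match analysis mentioned above rests on the Rankine-Hugoniot jump conditions; these standard relations are quoted from general shock physics, not from the paper itself:

```latex
P - P_0 = \rho_0 U_s u_p, \qquad
\frac{\rho}{\rho_0} = \frac{U_s}{U_s - u_p}, \qquad
E - E_0 = \tfrac{1}{2}(P + P_0)(V_0 - V)
```

    Metals typically follow a nearly linear fit U_s = c_0 + s u_p, so a kink in that relationship, like the one reported at 183 GPa, signals a change in the material such as an electronic transition.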

  7. Two-stage multilevel en bloc spondylectomy with resection and replacement of the aorta.

    Science.gov (United States)

    Gösling, Thomas; Pichlmaier, Maximilian A; Länger, Florian; Krettek, Christian; Hüfner, Tobias

    2013-05-01

    We report a case of multilevel spondylectomy with resection and replacement of the adjacent aorta. Although spondylectomy is nowadays an established technique, no combined aortic resection and replacement has been reported so far. The case of a 43-year-old man with a primary chondrosarcoma of the thoracic spine is presented. The local pathology necessitated resection of the aorta. We performed a two-stage procedure, with resection and replacement of the aorta using a heart-lung machine followed by secondary tumor resection and spinal reconstruction. The procedure was successful. A tumor-free margin was achieved. The patient is free of disease 48 months after surgery. En bloc spondylectomy in combination with aortic resection is feasible and might expand the possibility of achieving tumor-free margins in special situations.

  8. Integrated Circuit Design of 3 Electrode Sensing System Using Two-Stage Operational Amplifier

    Science.gov (United States)

    Rani, S.; Abdullah, W. F. H.; Zain, Z. M.; N, Aqmar N. Z.

    2018-03-01

    This paper presents the design of a two-stage operational amplifier (op amp) for 3-electrode sensing system readout circuits. The design has been simulated using 0.13 μm CMOS technology from Silterra (Malaysia) with Mentor Graphics tools. The main purpose of this project is to design a miniature interfacing circuit that detects the redox reaction in the form of current using standard analog modules. The potentiostat consists of several op amps combined in order to analyse the signal coming from the 3-electrode sensing system. This op amp design will be used in the potentiostat circuit and to analyse the functionality of each module of the system.

  9. Design of a Two-stage High-capacity Stirling Cryocooler Operating below 30K

    Science.gov (United States)

    Wang, Xiaotao; Dai, Wei; Zhu, Jian; Chen, Shuai; Li, Haibing; Luo, Ercang

    High-capacity cryocoolers working below 30 K can find many applications, such as superconducting motors, superconducting cables and cryopumps. Compared to the GM cryocooler, the Stirling cryocooler can achieve higher efficiency and a more compact structure. Because of these obvious advantages, we have designed a two-stage free-piston Stirling cryocooler system, driven by a moving-magnet linear compressor with an operating frequency of 40 Hz and a maximum input electric power of 5 kW. The first stage of the cryocooler is designed to operate at liquid nitrogen temperature and deliver a cooling power of 100 W. The second stage is expected to simultaneously provide a cooling power of 50 W below 30 K. In order to achieve the best system efficiency, a numerical model based on thermoacoustic theory was developed to optimize the system's operating and structural parameters.

  10. Two-stage autotransplantation of human submandibular gland: a novel approach to treat postradiogenic xerostomia.

    Science.gov (United States)

    Hagen, Rudolf; Scheich, Matthias; Kleinsasser, Norbert; Burghartz, Marc

    2016-08-01

    Xerostomia is a persistent side effect of radiotherapy (RT), which severely reduces the quality of life of the patients affected. Besides drug treatment and new irradiation strategies, surgical procedures aim at tissue protection of the submandibular gland. Using a new surgical approach, the submandibular gland was autotransplanted in 6 patients to the patient's forearm for the period of RT and reimplanted into the floor of the mouth 2-3 months after completion of RT. Saxon's test was performed at different time points to evaluate the patients' saliva production. Furthermore, patients completed the EORTC QLQ-HN35 questionnaire and a visual analog scale. Following this two-stage autotransplantation, xerostomia in the patients was markedly reduced owing to the improved saliva production of the reimplanted gland. Whether this promising novel approach is a reliable treatment option for RT patients in general should be evaluated in further studies.

  11. Compact high-flux two-stage solar collectors based on tailored edge-ray concentrators

    Science.gov (United States)

    Friedman, Robert P.; Gordon, Jeffrey M.; Ries, Harald

    1995-08-01

    Using the recently invented tailored edge-ray concentrator (TERC) approach for the design of compact two-stage high-flux solar collectors--a focusing primary reflector and a nonimaging TERC secondary reflector--we present: 1) a new primary reflector shape based on the TERC approach and a secondary TERC tailored to its particular flux map, such that more compact concentrators emerge at flux concentration levels in excess of 90% of the thermodynamic limit; and 2) calculations and ray-trace simulation results which demonstrate that V-cone approximations to a wide variety of TERCs attain the concentration of the full TERC to within a few percent, and hence represent practical secondary concentrators that may be superior to the corresponding compound parabolic concentrator or trumpet secondaries.

  12. On the optimal use of a slow server in two-stage queueing systems

    Science.gov (United States)

    Papachristos, Ioannis; Pandelis, Dimitrios G.

    2017-07-01

    We consider two-stage tandem queueing systems with a dedicated server in each queue and a slower flexible server that can attend both queues. We assume Poisson arrivals and exponential service times, and linear holding costs for jobs present in the system. We study the optimal dynamic assignment of servers to jobs, assuming that two servers cannot collaborate on the same job and preemptions are not allowed. We formulate the problem as a Markov decision process and derive properties of the optimal allocation for the dedicated (fast) servers. Specifically, we show that the downstream server should not idle, and the same is true for the upstream server when holding costs are larger there. The optimal allocation of the slow server is investigated through extensive numerical experiments that lead to conjectures on the structure of the optimal policy.
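
    To make the Markov decision process formulation concrete, the sketch below runs value iteration under assumptions of our own rather than the paper's exact model: a truncated state space with blocked arrivals, discounted cost via uniformization, non-idling dedicated servers, and a no-collaboration rule in which the flexible server helps a queue only when a second job is waiting there. All rates and costs are illustrative.

```python
import numpy as np
from itertools import product

lam, mu1, mu2, muf = 1.0, 1.2, 1.1, 0.5   # arrival and service rates
h1, h2 = 1.0, 2.0                          # holding costs per job
beta, N = 0.1, 15                          # discount rate, truncation
Lam = lam + mu1 + mu2 + muf                # uniformization constant

V = np.zeros((N + 1, N + 1))               # V[n1, n2]
for _ in range(3000):
    V_new = np.empty_like(V)
    for n1, n2 in product(range(N + 1), repeat=2):
        best = np.inf
        for a in (0, 1, 2):                # flexible: idle / queue 1 / 2
            # No collaboration: flexible server needs a second job.
            r1 = mu1 * (n1 >= 1) + muf * (a == 1 and n1 >= 2)
            r2 = mu2 * (n2 >= 1) + muf * (a == 2 and n2 >= 2)
            val = h1 * n1 + h2 * n2
            if n1 < N:                     # arrivals blocked at boundary
                val += lam * V[n1 + 1, n2]
                out = lam + r1 + r2
            else:
                out = r1 + r2
            if r1:   # stage-1 completion moves a job on (cap: truncation)
                val += r1 * V[n1 - 1, min(n2 + 1, N)]
            if r2:   # stage-2 completion leaves the system
                val += r2 * V[n1, n2 - 1]
            val += (Lam - out) * V[n1, n2]  # uniformization self-loop
            best = min(best, val / (beta + Lam))
        V_new[n1, n2] = best
    done = np.max(np.abs(V_new - V)) < 1e-8
    V = V_new
    if done:
        break
print("discounted cost from the empty system:", V[0, 0])
```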

  13. Sensorless Reserved Power Control Strategy for Two-Stage Grid-Connected Photovoltaic Systems

    DEFF Research Database (Denmark)

    Sangwongwanich, Ariya; Yang, Yongheng; Blaabjerg, Frede

    2016-01-01

    Due to still increasing penetration level of grid-connected Photovoltaic (PV) systems, advanced active power control functionalities have been introduced in grid regulations. A reserved power control, where the active power from the PV panels is reserved during operation, is required for grid support. In this paper, a cost-effective solution to realize the reserved power control for grid-connected PV systems is proposed. The proposed solution routinely employs a Maximum Power Point Tracking (MPPT) control to estimate the available PV power and a Constant Power Generation (CPG) control to achieve the power reserve. In this method, the irradiance measurements that have been used in conventional control schemes to estimate the available PV power are not required, thereby being a sensorless solution. Simulations and experimental tests have been performed on a 3-kW two-stage single...
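
    The interplay between the MPPT estimate and the CPG setpoint can be sketched in a few lines. The function names and the fixed perturbation step are illustrative assumptions, not the paper's implementation, which alternates between MPPT-based estimation of the available power and constant-power operation.

```python
def p_and_o_step(v_ref, p, p_prev, v, v_prev, dv=0.5):
    # One perturb-and-observe MPPT step: keep perturbing the voltage
    # reference in the direction that increased power, else reverse.
    return v_ref + dv if (p - p_prev) * (v - v_prev) > 0 else v_ref - dv

def cpg_reference(p_available_est, p_reserve):
    # Constant Power Generation setpoint: track the MPPT-estimated
    # available power minus the required reserve (never below zero).
    return max(p_available_est - p_reserve, 0.0)
```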

  14. Two-Stage Optimal Scheduling of Electric Vehicle Charging based on Transactive Control

    DEFF Research Database (Denmark)

    Liu, Zhaoxi; Wu, Qiuwei; Ma, Kang

    2018-01-01

    In this paper, a two-stage optimal charging scheme based on transactive control is proposed for the aggregator to manage day-ahead electricity procurement and real-time EV charging management in order to minimize its total operating cost. The day-ahead electricity procurement considers both the day-ahead energy cost and the expected real-time operation cost. In the real-time charging management, the cost of employing the charging flexibility from the EV owners is explicitly modelled. The aggregator uses a transactive market to manage the real-time charging demand to provide the regulating power. A model predictive control (MPC) based method is proposed for the aggregator to clear the transactive market. The real-time charging decisions of the EVs are determined by the clearing of the proposed transactive market according to the real-time requests and preferences of the EV owners. As such, the aggregators...

  15. Determination Bounds for Intermediate Products in a Two-Stage Network DEA

    Directory of Open Access Journals (Sweden)

    Hadi Bagherzadeh Valami

    2016-03-01

    The internal structure of the decision making unit (DMU) is the key element in the extension of network DEA. In general, evaluating the internal performance of a system is a better criterion than the conventional DEA models, which are essentially based on the initial inputs and final outputs of the system. The internal performance of a system depends on the relations between sub-DMUs and intermediate products. Since the intermediate measures are produced by some sub-DMUs and consumed by others, they play a dual role as both output and input, which is why they can be analyzed with conventional mathematical modeling. In this paper we introduce a new method for determining bounds for intermediate products in a two-stage network DEA structure.

  16. Two-Stage Electric Vehicle Charging Coordination in Low Voltage Distribution Grids

    DEFF Research Database (Denmark)

    Bhattarai, Bishnu Prasad; Bak-Jensen, Birgitte; Pillai, Jayakrishnan Radhakrishna

    2014-01-01

    Increased environmental awareness in the recent years has encouraged rapid growth of renewable energy sources (RESs); especially solar PV and wind. One of the effective solutions to compensate intermittencies in generation from the RESs is to enable consumer participation in demand response (DR). Being a sizable rated element, electric vehicles (EVs) can offer a great deal of demand flexibility in future intelligent grids. This paper first investigates and analyzes driving pattern and charging requirements of EVs. Secondly, a two-stage charging algorithm, namely local adaptive control encompassed by a central coordinative control, is proposed to realize the flexibility offered by EV. The local control enables adaptive charging; whereas the central coordinative control prepares optimized charging schedules. Results from various scenarios show that the proposed algorithm enables significant...

  17. An X-ray Experiment with Two-Stage Korean Sounding Rocket

    Directory of Open Access Journals (Sweden)

    Uk-Won Nam

    1998-12-01

    We present test results for the X-ray observation system developed at the Korea Astronomy Observatory over three years (1995-1997). The instrument, composed of detector and signal-processing parts, is designed for future observations of compact X-ray sources. The performance of the instrument was tested by mounting it on the two-stage Korean Sounding Rocket, which was launched from the Taean rocket flight center on June 11, 1998, at 10:00 KST. Telemetry data were received from the individual parts of the instrument for 32 and 55.7 s, respectively, after the launch of the rocket. In this paper, the results of the analysis of the telemetry data are reported and the performance of the instrument is discussed.

  18. Mediastinal Bronchogenic Cyst With Acute Cardiac Dysfunction: Two-Stage Surgical Approach.

    Science.gov (United States)

    Smail, Hassiba; Baste, Jean Marc; Melki, Jean; Peillon, Christophe

    2015-10-01

    We describe a two-stage surgical approach in a patient with cardiac dysfunction and hemodynamic compromise resulting from a massive and compressive mediastinal bronchogenic cyst. To drain this cyst, video-assisted mediastinoscopy was performed as an emergency procedure, which immediately improved the patient's cardiac function. Five days later and under video thoracoscopy, resection of the cyst margins was impossible because the cyst was tightly adherent to the left atrium. We performed deroofing of this cyst through a right thoracotomy. The patient had an uncomplicated postoperative recovery, and no recurrence was observed at the long-term follow-up visit.

  19. Two-stage triolein breath test differentiates pancreatic insufficiency from other causes of malabsorption

    International Nuclear Information System (INIS)

    Goff, J.S.

    1982-01-01

    In 24 patients with malabsorption, [14C]triolein breath tests were conducted before and together with the administration of pancreatic enzymes (Pancrease, Johnson and Johnson, Skillman, N.J.). Eleven patients with pancreatic insufficiency had a significant rise in peak percent dose per hour of 14CO2 excretion after Pancrease, whereas 13 patients with other causes of malabsorption had no increase in 14CO2 excretion (2.61 +/- 0.96 vs. 0.15 +/- 0.45, p < 0.001). The two-stage [14C]triolein breath test appears to be an accurate and simple noninvasive test of fat malabsorption that differentiates steatorrhea secondary to pancreatic insufficiency from other causes of steatorrhea.

  20. Two-Stage Surgery for a Large Cervical Dumbbell Tumour in Neurofibromatosis 1: A Case Report

    Directory of Open Access Journals (Sweden)

    Mohd Ariff S

    2011-11-01

    Spinal neurofibromas occur sporadically and typically in association with neurofibromatosis 1. Patients afflicted with neurofibromatosis 1 usually present with involvement of several nerve roots. This report describes the case of a 14-year-old child with a large intraspinal but extradural tumour with paraspinal extension, a dumbbell neurofibroma of the cervical region extending from the C2 to C4 vertebrae. The lesions were readily detected by MR imaging and were successfully resected in a two-stage surgery. The time interval between the first and second surgery was one month. We provide a brief review of the literature regarding various surgical approaches, emphasising the utility of anterior and posterior approaches.

  1. Effect of a two-stage nursing assessment and intervention - a randomized intervention study

    DEFF Research Database (Denmark)

    Rosted, Elizabeth Emilie; Poulsen, Ingrid; Hendriksen, Carsten

    ...% of geriatric patients have complex and often unresolved caring needs. The objective was to examine the effect of a two-stage nursing assessment and intervention to address the patients' uncompensated problems, given just after discharge from the ED and one and six months after. Method: We conducted a prospective ... nursing assessment comprising a checklist of 10 physical, mental, medical and social items. The focus was on unresolved problems which require medical intervention, new or different home care services, or comprehensive geriatric assessment. Following this, the nurses made relevant referrals to the geriatric outpatient clinic, community health centre, primary physician or arrangements with next-of-kin. Findings: Primary endpoints will be presented as unplanned readmission to the ED; admission to nursing home; and death. Secondary endpoints will be presented as physical function; depressive symptoms...

  2. Two-stage SQUID systems and transducers development for MiniGRAIL

    International Nuclear Information System (INIS)

    Gottardi, L; Podt, M; Bassan, M; Flokstra, J; Karbalai-Sadegh, A; Minenkov, Y; Reinke, W; Shumack, A; Srinivas, S; Waard, A de; Frossati, G

    2004-01-01

    We present measurements on a two-stage SQUID system based on a dc-SQUID as a sensor and a DROS as an amplifier. We measured the intrinsic noise of the dc-SQUID at 4.2 K. A new dc-SQUID has been fabricated. It was specially designed to be used with MiniGRAIL transducers. Cooling fins have been added in order to improve the cooling of the SQUID and the design is optimized to achieve the quantum limit of the sensor SQUID at temperatures above 100 mK. In this paper we also report the effect of the deposition of a Nb film on the quality factor of a small mass Al5056 resonator. Finally, the results of Q-factor measurements on a capacitive transducer for the current MiniGRAIL run are presented

  3. A Two-Stage Diagnosis Framework for Wind Turbine Gearbox Condition Monitoring

    Directory of Open Access Journals (Sweden)

    Janet M. Twomey

    2013-01-01

    Advances in high-performance sensing technologies enable the development of wind turbine condition monitoring systems to diagnose and predict the system-wide effects of failure events. This paper presents a vibration-based two-stage fault detection framework for failure diagnosis of rotating components in wind turbines. The proposed framework integrates an analytical defect detection method and a graphical verification method to ensure diagnosis efficiency and accuracy. The efficacy of the proposed methodology is demonstrated with a case study on the gearbox condition monitoring Round Robin study dataset provided by the National Renewable Energy Laboratory (NREL). The developed methodology successfully identified five of the seven faults with accurate severity levels, without producing any false alarms in the blind analysis. The case study results indicate that the developed fault detection framework is effective for analyzing gear and bearing faults in wind turbine drive train systems based upon system vibration characteristics.
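
    The analytical stage of such a framework often reduces to looking for energy at known defect frequencies in the vibration spectrum. The sketch below, on a synthetic signal with a hypothetical defect tone (this is not the NREL pipeline or its dataset), illustrates that step.

```python
import numpy as np

fs = 10_000                          # sampling rate, Hz
t = np.arange(0, 1.0, 1 / fs)
f_defect = 137.0                     # hypothetical defect frequency, Hz
rng = np.random.default_rng(1)
signal = 0.02 * np.sin(2 * np.pi * f_defect * t) + rng.normal(0, 0.05, t.size)

spectrum = np.abs(np.fft.rfft(signal)) / t.size
freqs = np.fft.rfftfreq(t.size, 1 / fs)

# Crude analytical detection: compare the peak in a narrow band around
# the defect frequency against the broadband noise floor.
band = (freqs > f_defect - 2) & (freqs < f_defect + 2)
alarm = spectrum[band].max() > 3 * np.median(spectrum)
print("defect indicated:", alarm)
```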

  4. Development of advanced air-blown entrained-flow two-stage bituminous coal IGCC gasifier

    Directory of Open Access Journals (Sweden)

    Abaimov Nikolay A.

    2017-01-01

    Integrated gasification combined cycle (IGCC) technology has two main advantages: high efficiency and low levels of harmful emissions. The key element of IGCC is the gasifier, which converts solid fuel into a combustible synthesis gas. One of the most promising gasifiers is the air-blown entrained-flow two-stage bituminous coal gasifier developed by Mitsubishi Heavy Industries (MHI). The most obvious way to develop an advanced gasifier is to improve the commercial-scale 1700 t/d MHI gasifier using computational fluid dynamics (CFD). Modernization of the commercial-scale 1700 t/d MHI gasifier is made by changing the regime parameters in order to improve its cold gas efficiency (CGE) and environmental performance, namely the H2/CO ratio. The first change is the supply of high-temperature (900°C) steam to the gasifier's second stage, and the second change is additional heating of the blast air to 900°C.

  5. A Two-Stage Foot Repair in a 55-Year-Old Man with Poliomyelitis.

    Science.gov (United States)

    Pollack, Daniel

    2018-01-01

    A 55-year-old man with poliomyelitis presented with a plantarflexed foot and painful ulceration of the sub-first metatarsophalangeal joint present for many years. A two-stage procedure was performed to bring the foot to 90°, perpendicular to the leg, and resolve the ulceration. The first stage corrected only soft-tissue components. It involved using a hydrosurgery system to debride and prepare the ulcer, a unilobed rotational skin plasty to close the ulcer, and a tendo Achillis lengthening to decrease forefoot pressure. The second stage corrected the osseous deformity with a dorsiflexory wedge osteotomy of the first metatarsal. The ulceration has remained closed since the procedures, with complete resolution of pain.

  6. The Sources of Efficiency of the Nigerian Banking Industry: A Two- Stage Approach

    Directory of Open Access Journals (Sweden)

    Frances Obafemi

    2013-11-01

    The paper employed a two-stage Data Envelopment Analysis (DEA) approach to examine the sources of technical efficiency in the Nigerian banking sub-sector. Using a cross section of commercial and merchant banks, the study showed that the Nigerian banking industry was not efficient in either the pre- or post-liberalization era. The study further revealed that market share was the strongest determinant of technical efficiency in the Nigerian banking industry. Thus, appropriate macroeconomic policy, institutional development and structural reforms must accompany financial liberalization to create the stable environment required for it to succeed. Hence, the present bank consolidation and reforms by the Central Bank of Nigeria, which started with Soludo and continued with Sanusi, are considered necessary, especially in the areas of e-banking and reorganizing the management of banks.
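
    A two-stage DEA study of this kind typically computes an efficiency score per bank with a linear program and then regresses the scores on environmental variables such as market share. The sketch below implements an input-oriented CCR model with scipy and a simple OLS second stage; the data, and the use of OLS instead of the censored (e.g., Tobit) regressions common in this literature, are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, j0):
    # Input-oriented CCR efficiency of unit j0.
    # X: (n_units, n_inputs), Y: (n_units, n_outputs).
    n, m = X.shape
    s = Y.shape[1]
    c = np.r_[1.0, np.zeros(n)]            # minimize theta
    # Inputs:  X^T lam - theta * x_j0 <= 0
    A1 = np.c_[-X[j0].reshape(m, 1), X.T]
    # Outputs: -Y^T lam <= -y_j0  (i.e. Y^T lam >= y_j0)
    A2 = np.c_[np.zeros((s, 1)), -Y.T]
    res = linprog(c, A_ub=np.r_[A1, A2],
                  b_ub=np.r_[np.zeros(m), -Y[j0]],
                  bounds=[(0, None)] * (n + 1))
    return res.x[0]

# Stage 1 on hypothetical bank data: inputs (staff, capital), output (loans).
X = np.array([[5., 3.], [8., 1.], [7., 4.], [4., 2.]])
Y = np.array([[2.], [3.], [1.], [2.]])
scores = [ccr_efficiency(X, Y, j) for j in range(len(X))]

# Stage 2: regress efficiency on an environmental variable (market share).
share = np.array([0.30, 0.40, 0.10, 0.20])
slope, intercept = np.polyfit(share, scores, 1)
print(scores, slope)
```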

  7. A Two-stage DC-DC Converter for the Fuel Cell-Supercapacitor Hybrid System

    DEFF Research Database (Denmark)

    Zhang, Zhe; Thomsen, Ole Cornelius; Andersen, Michael A. E.

    2009-01-01

    A wide input range multi-stage converter is proposed with the fuel cells and supercapacitors as a hybrid system. The front-end two-phase boost converter is used to optimize the output power and to reduce the current ripple of fuel cells. The supercapacitor power module is connected by push-pull-forward half bridge (PPFHB) converter with coupled inductors in the second stage to handle the slow transient response of the fuel cells and realize the bidirectional power flow control. Moreover, this cascaded structure simplifies the power management. The control strategy for the whole system is analyzed and designed. A 1kW prototype controlled by TMS320F2808 DSP is built in the lab. Simulation and experimental results confirm the feasibility of the proposed two-stage dc-dc converter system.

  8. Improvement of two-stage GM refrigerator performance using a hybrid regenerator

    International Nuclear Information System (INIS)

    Ke, G.; Makuuchi, H.; Hashimoto, T.; Onishi, A.; Li, R.; Satoh, T.; Kanazawa, Y.

    1994-01-01

    To improve the performance of two-stage GM refrigerators, a hybrid regenerator containing the magnetic materials Er3Ni and ErNi0.9Co0.1 was used in the 2nd-stage regenerator because of its large heat exchange capacity. The largest refrigeration capacity achieved with the hybrid regenerator was 0.95 W at the helium liquefaction temperature of 4.2 K. This capacity is 15.9% greater than the 0.82 W obtained with Er3Ni alone as the 2nd-stage regenerator material. Use of the hybrid regenerator not only increases the refrigeration capacity at 4.2 K, but also allows the 4 K GM refrigerator to be used with a large 1st-stage refrigeration capacity, thus making it more practical.

  9. Hydrogen and methane production from household solid waste in the two-stage fermentation process

    DEFF Research Database (Denmark)

    Lui, D.; Liu, D.; Zeng, Raymond Jianxiong

    2006-01-01

    A two-stage process combining hydrogen and methane production from household solid waste was demonstrated to work successfully. A yield of 43 mL H2/g volatile solid (VS) added was obtained in the first, hydrogen-producing stage, and the methane production in the second stage was 500 mL CH4/g VS added. This figure was 21% higher than the methane yield from the one-stage process, which was run as a control. Sparging of the hydrogen reactor with methane gas resulted in a doubling of the hydrogen production. pH was observed to be a key factor affecting the fermentation pathway in the hydrogen production stage. Furthermore, this study also provided direct evidence, in the dynamic fermentation process, that an increase in hydrogen production was reflected by an increase in the acetate to butyrate ratio in the liquid phase.

  10. Discrete time population dynamics of a two-stage species with recruitment and capture

    International Nuclear Information System (INIS)

    Ladino, Lilia M.; Mammana, Cristiana; Michetti, Elisabetta; Valverde, Jose C.

    2016-01-01

    This work models and analyzes the dynamics of a two-stage species with recruitment and capture factors. It arises from the discretization of a previous model developed by Ladino and Valverde (2013), which represents progress in the knowledge of the dynamics of exploited populations. Although the methods used here relate to the study of discrete-time systems and differ from those for the continuous version, the results are similar in both the discrete and the continuous case, which confirms the appropriateness of the factors selected in designing the model. Unlike in the continuous-time case, for the discrete-time one some (non-negative) parametric constraints are derived from the biological significance of the model and become fundamental for the proofs of these results. Finally, numerical simulations show different scenarios of dynamics related to the analytical results, which confirm the validity of the model.
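
    A minimal discrete-time sketch of a two-stage (juvenile/adult) population with recruitment and capture is given below. The linear update rule and all parameter values are illustrative assumptions, not the model analyzed in the paper.

```python
import numpy as np

r, m = 0.8, 0.4     # per-adult recruitment, juvenile maturation fraction
sJ, sA = 0.6, 0.8   # juvenile and adult survival probabilities
q = 0.1             # capture (harvest) rate applied to adults

# One step: juveniles mature or stay; adults recruit and are harvested.
A_mat = np.array([[sJ * (1 - m), r],
                  [sJ * m,       sA * (1 - q)]])

x = np.array([10.0, 5.0])                     # initial juveniles, adults
for _ in range(50):
    x = A_mat @ x
growth = max(abs(np.linalg.eigvals(A_mat)))   # dominant eigenvalue
print(f"state after 50 steps: {x.round(2)}, growth factor {growth:.3f}")
```

    In such a linear sketch the dominant eigenvalue summarizes the long-run fate: above 1 the population grows, below 1 it declines, so the capture rate q directly shifts the persistence threshold.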

  11. Validation of Continuous CHP Operation of a Two-Stage Biomass Gasifier

    DEFF Research Database (Denmark)

    Ahrenfeldt, Jesper; Henriksen, Ulrik Birk; Jensen, Torben Kvist

    2006-01-01

    The Viking gasification plant at the Technical University of Denmark was built to demonstrate a continuous combined heat and power operation of a two-stage gasifier fueled with wood chips. The nominal input of the gasifier is 75 kW thermal. To validate the continuous operation of the plant, a 9-day measurement campaign was performed. The campaign verified a stable operation of the plant, and the energy balance resulted in an overall fuel to gas efficiency of 93% and a wood to electricity efficiency of 25%. Very low tar content in the producer gas was observed: only 0.1 mg/Nm3 naphthalene could be measured in raw gas. A stable engine operation on the producer gas was observed, and very low emissions of aldehydes, N2O, and polycyclic aromatic hydrocarbons were measured.

  12. Evaluation of biological hydrogen sulfide oxidation coupled with two-stage upflow filtration for groundwater treatment.

    Science.gov (United States)

    Levine, Audrey D; Raymer, Blake J; Jahn, Johna

    2004-01-01

    Hydrogen sulfide in groundwater can be oxidized by aerobic bacteria to form elemental sulfur and biomass. While this treatment approach is effective for conversion of hydrogen sulfide, it is important to have adequate control of the biomass exiting the biological treatment system to prevent release of elemental sulfur into the distribution system. Pilot scale tests were conducted on a Florida groundwater to evaluate the use of two-stage upflow filtration downstream of biological sulfur oxidation. The combined biological and filtration process was capable of excellent removal of hydrogen sulfide and associated turbidity. Additional benefits of this treatment approach include elimination of odor generation, reduction of chlorine demand, and improved stability of the finished water.

  13. Shaft Position Influence on Technical Characteristics of Universal Two-Stages Helical Speed Reducers

    Directory of Open Access Journals (Sweden)

    Milan Rackov

    2005-10-01

    Purchasers of speed reducers choose those reducers that can most closely satisfy their demands at the lowest cost. The amount of material used, i.e. the mass and dimensions of a gear unit, influences the gear unit's price. Mass and dimensions of a gear unit, besides output torque, gear unit ratio and efficiency, are among the most important technical characteristics of gear units and their quality. Centre distance and the position of shafts have a significant influence on output torque, gear unit ratio and mass of the gear unit through the overall dimensions of the gear unit housing; these characteristics are thus dependent on each other. This paper deals with the analysis of the influence of centre distance and shaft position on the output torque and ratio of universal two-stage gear units.

  14. Stepwise encapsulation and controlled two-stage release system for cis-Diamminediiodoplatinum.

    Science.gov (United States)

    Chen, Yun; Li, Qian; Wu, Qingsheng

    2014-01-01

    cis-Diamminediiodoplatinum (cis-DIDP) is a cisplatin-like anticancer drug with higher anticancer activity, but lower stability and price than cisplatin. In this study, a cis-DIDP carrier system based on micro-sized stearic acid was prepared by an emulsion solvent evaporation method. The maximum drug loading capacity of cis-DIDP-loaded solid lipid nanoparticles was 22.03%, and their encapsulation efficiency was 97.24%. In vitro drug release in phosphate-buffered saline (pH =7.4) at 37.5°C exhibited a unique two-stage process, which could prove beneficial for patients with tumors and malignancies. MTT (3-[4,5-dimethylthiazol-2-yl]-2, 5-diphenyltetrazolium bromide) assay results showed that cis-DIDP released from cis-DIDP-loaded solid lipid nanoparticles had better inhibition activity than cis-DIDP that had not been loaded.

  15. A Two-stage Kalman Filter for Sensorless Direct Torque Controlled PM Synchronous Motor Drive

    Directory of Open Access Journals (Sweden)

    Boyu Yi

    2013-01-01

    This paper presents an optimal two-stage extended Kalman filter (OTSEKF) for closed-loop flux, torque, and speed estimation of a permanent magnet synchronous motor (PMSM), to achieve sensorless DTC-SVPWM operation of the drive system. The novel observer is obtained by using the same transformation as in a linear Kalman observer, as proposed by C.-S. Hsieh and F.-C. Chen in 1999. The OTSEKF is an effective implementation of the extended Kalman filter (EKF) and provides recursive optimum state estimation for PMSMs using terminal signals that may be polluted by noise. Compared to a conventional EKF, the OTSEKF reduces the number of arithmetic operations. Simulation and experimental results verify the effectiveness of the proposed OTSEKF observer for DTC of PMSMs.
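
    The predict/update cycle that the extended and two-stage variants build on can be sketched in a few lines. This is a hedged illustration of a generic linear Kalman filter step, not the OTSEKF itself, which additionally reorganizes the computation into coupled sub-filters.

```python
import numpy as np

def kf_step(x, P, z, F, H, Q, R):
    # Predict: propagate state estimate and covariance.
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update: correct with the innovation z - H x_pred.
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```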

  16. A two-stage metal valorisation process from electric arc furnace dust (EAFD

    Directory of Open Access Journals (Sweden)

    H. Issa

    2016-04-01

    This paper demonstrates the possibility of separate zinc and lead recovery from coal composite pellets, composed of EAFD with other synergetic iron-bearing wastes and by-products (mill scale, pyrite cinder, magnetite concentrate), through a two-stage process. The results show that in the first, low-temperature stage, performed in an electro-resistant furnace, removal of lead is enabled due to the presence of chlorides in the system. In the second stage, performed at higher temperatures in a Direct Current (DC) plasma furnace, valorisation of zinc is conducted. Using this process, several final products were obtained, including a higher-purity zinc oxide which, by its properties, corresponds to washed Waelz oxide.

  17. Optimal design of distributed energy resource systems based on two-stage stochastic programming

    International Nuclear Information System (INIS)

    Yang, Yun; Zhang, Shijie; Xiao, Yunhan

    2017-01-01

    Highlights: • A two-stage stochastic programming model is built to design DER systems under uncertainties. • Uncertain energy demands have a significant effect on the optimal design. • Uncertain energy prices and renewable energy intensity have little effect on the optimal design. • The economy is overestimated if the system is designed without considering the uncertainties. • The uncertainty in energy prices has the significant and greatest effect on the economy. - Abstract: Multiple uncertainties exist in the optimal design of distributed energy resource (DER) systems. The expected energy, economic, and environmental benefits may not be achieved and a deficit in energy supply may occur if the uncertainties are not handled properly. This study focuses on the optimal design of DER systems with consideration of the uncertainties. A two-stage stochastic programming model is built in consideration of the discreteness of equipment capacities, equipment partial load operation and output bounds as well as of the influence of ambient temperature on gas turbine performance. The stochastic model is then transformed into its deterministic equivalent and solved. For an illustrative example, the model is applied to a hospital in Lianyungang, China. Comparative studies are performed to evaluate the effect of the uncertainties in load demands, energy prices, and renewable energy intensity separately and simultaneously on the system’s economy and optimal design. Results show that the uncertainties in load demands have a significant effect on the optimal system design, whereas the uncertainties in energy prices and renewable energy intensity have almost no effect. Results regarding economy show that it is obviously overestimated if the system is designed without considering the uncertainties.
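
    The structure of such a two-stage stochastic program can be seen in a toy deterministic equivalent: a first-stage capacity decision made before demand is known, plus per-scenario recourse purchases. The numbers below are invented for illustration; the paper's model additionally handles discrete equipment sizes, part-load operation and temperature-dependent performance.

```python
import numpy as np
from scipy.optimize import linprog

c, q = 10.0, 25.0                        # capacity cost, recourse cost
demand = np.array([80.0, 100.0, 130.0])  # demand scenarios
prob = np.array([0.3, 0.4, 0.3])         # scenario probabilities

# Variables [x, y1, y2, y3]: minimize c*x + sum_s prob_s * q * y_s.
obj = np.r_[c, prob * q]
# Coverage constraints x + y_s >= d_s, written as -x - y_s <= -d_s.
A = np.zeros((3, 4))
A[:, 0] = -1.0
A[:, 1:] = -np.eye(3)
res = linprog(obj, A_ub=A, b_ub=-demand, bounds=[(0, None)] * 4)
print(f"first-stage capacity: {res.x[0]:.1f}")  # newsvendor-style optimum
```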

  18. Kinetics of two-stage fermentation process for the production of hydrogen

    Energy Technology Data Exchange (ETDEWEB)

    Nath, Kaushik [Department of Chemical Engineering, G.H. Patel College of Engineering and Technology, Vallabh Vidyanagar 388 120, Gujarat (India); Muthukumar, Manoj; Kumar, Anish; Das, Debabrata [Fermentation Technology Laboratory, Department of Biotechnology, Indian Institute of Technology, Kharagpur 721302 (India)

    2008-02-15

    The two-stage process described in the present work is a combination of dark and photofermentation in a sequential batch mode. In the first stage, glucose is fermented to acetate, CO2 and H2 in an anaerobic dark fermentation by Enterobacter cloacae DM11. This is followed by a successive second stage where acetate is converted to H2 and CO2 in a photobioreactor by the photosynthetic bacterium Rhodobacter sphaeroides O.U. 001. The yield of hydrogen in the first stage was about 3.31 mol H2/(mol glucose) (approximately 82% of theoretical) and that in the second stage was about 1.5-1.72 mol H2/(mol acetic acid) (approximately 37-43% of theoretical). The overall yield of hydrogen in the two-stage process, considering glucose as the primary substrate, was found to be higher compared to a single-stage process. The Monod model, with incorporation of a substrate inhibition term, has been used to determine the growth kinetic parameters for the first stage. The values of the maximum specific growth rate (μmax) and the saturation constant (Ks) were 0.398 h(-1) and 5.509 g l(-1), respectively, using glucose as substrate. The experimental substrate and biomass concentration profiles show good agreement with the kinetic model predictions. A model based on the logistic equation has been developed to describe the growth of R. sphaeroides O.U. 001 in the second stage. The modified Gompertz equation was applied to estimate the hydrogen production potential, rate and lag-phase time in a batch process for various initial concentrations of glucose, based on the cumulative hydrogen production curves. Both the curve fitting and statistical analysis showed that the equation was suitable to describe the progress of cumulative hydrogen production.
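
    For reference, the two descriptions named above are commonly written as follows; the substrate-inhibition form shown is the usual Andrews/Haldane variant, assumed here rather than quoted from the paper, with the reported values μmax = 0.398 h(-1) and Ks = 5.509 g l(-1):

```latex
\mu = \frac{\mu_{\max} S}{K_s + S + S^2/K_i}, \qquad
H(t) = P \exp\left\{-\exp\left[\frac{R_m e}{P}(\lambda - t) + 1\right]\right\}
```

    Here S is the substrate concentration, K_i the inhibition constant, P the hydrogen production potential, R_m the maximum production rate and λ the lag time.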

  19. Removal of trichloroethylene (TCE) contaminated soil using a two-stage anaerobic-aerobic composting technique.

    Science.gov (United States)

    Ponza, Supat; Parkpian, Preeda; Polprasert, Chongrak; Shrestha, Rajendra P; Jugsujinda, Aroon

    2010-01-01

    The effect of organic carbon addition on the remediation of trichloroethylene (TCE) contaminated clay soil was investigated using a two-stage anaerobic-aerobic composting system. The TCE removal rate and the processes involved were determined. Uncontaminated clay soil was treated with composting materials (dried cow manure, rice husk and cane molasses) to represent carbon-based treatments (5%, 10% and 20% OC). All treatments were spiked with TCE at 1,000 mg TCE/kg DW and incubated under anaerobic and mesophilic conditions (35 °C) for 8 weeks, followed by continuous aerobic conditions for another 6 weeks. TCE dissipation, its metabolites and the biogas composition were measured throughout the experimental period. Results show that TCE degradation depended upon the amount of organic carbon (OC) contained within the composting treatments/matrices. The highest TCE removal percentage (97%) and rate (75.06 μmol/kg DW/day) were obtained from the treatment with 10% OC composting matrices, as compared to 87% and 27.75 μmol/kg DW/day for 20% OC, and 83% and 38.08 μmol/kg DW/day for the soil control treatment. TCE removal followed first-order reaction kinetics. The highest degradation rate constant (k1 = 0.035 day(-1)) was also obtained from the 10% OC treatment, followed by 20% OC (k1 = 0.026 day(-1)) and the 5% OC or soil control treatment (k1 = 0.023 day(-1)). The half-lives were 20, 27 and 30 days, respectively. The overall results suggest that the sequential two-stage anaerobic-aerobic composting technique has potential for remediation of TCE in heavy-textured soil, provided that an easily biodegradable source of organic carbon is present.
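
    The reported rate constants and half-lives are tied together by the usual first-order decay relations:

```latex
C(t) = C_0 e^{-k_1 t}, \qquad t_{1/2} = \frac{\ln 2}{k_1}
```

    For example, k1 = 0.035 day(-1) gives t1/2 = 0.693/0.035 ≈ 20 days, matching the half-lives quoted above.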

  20. Design of a Two-Stage Light Gas Gun for Muzzle Velocities of 10-11 km/s

    Science.gov (United States)

    Bogdanoff, David W.

    2016-01-01

    Space debris poses a major risk to spacecraft. In low Earth orbit, impact velocities can be 10-11 km/s and as high as 15 km/s. For debris shield design, it would be desirable to be able to launch controlled-shape projectiles to these velocities. The design of the proposed 10-11 km/s gun uses, as a starting point, the Ames 1.28"/0.22" two-stage gun, which has achieved muzzle velocities of 10-11.3 km/s. That gun is scaled up to a 0.3125" launch tube diameter. The gun is then optimized with respect to maximum pressures by varying the pump tube length-to-diameter ratio (L/D), the piston mass and the hydrogen pressure. A pump tube L/D of 36.4 is selected, giving the best overall performance. Piezometric ratios for the optimized guns are found to be 2.3, much more favorable than for more traditional two-stage light gas guns, which range from 4 to 6. The maximum powder chamber pressures are 20 to 30 ksi. To reduce maximum pressures, the desirable range of the included angle of the cone of the high-pressure coupling is found to be 7.3 to 14.6 degrees. Lowering the break valve rupture pressure is found to lower the maximum projectile base pressure, but to raise the maximum gun pressure. For the optimized gun with a pump tube L/D of 36.4, increasing the muzzle velocity by decreasing the projectile mass and increasing the powder loads is studied. It appears that saboted spheres could be launched to 10.25 and possibly as high as 10.7-10.8 km/s, and that disc-like plastic models could be launched to 11.05 km/s. The use of a tantalum liner to greatly reduce bore erosion and increase muzzle velocity is discussed. With a tantalum liner, CFD code calculations predict muzzle velocities as high as 12 to 13 km/s.

  1. Study of two-stage turbine characteristic and its influence on turbo-compound engine performance

    International Nuclear Information System (INIS)

    Zhao, Rongchao; Zhuge, Weilin; Zhang, Yangjun; Yang, Mingyang; Martinez-Botas, Ricardo; Yin, Yong

    2015-01-01

    Highlights: • An analytical model was built to study the interactions between two turbines in series. • The impacts of HP VGT and LP VGT on turbo-compound engine performance were investigated. • The fuel reductions obtained by HP VGT at 1900 rpm and 1000 rpm are 3.08% and 7.83%, respectively. • The optimum value of AR increased from 2.0 to 2.5 as the turbo-compound engine speed decreased. - Abstract: Turbo-compounding is an effective way to recover waste heat from engine exhaust and reduce fuel consumption for internal combustion engines (ICE). The characteristics of the two-stage turbine, comprising the turbocharger turbine and the power turbine, have significant effects on the overall performance of a turbo-compound engine. This paper investigates the interaction between the two turbines in a turbo-compound engine and its impact on engine performance. First, an analytical model is built to investigate the effects of turbine equivalent flow area on the two-stage turbine characteristics, including swallowing capacity and load split. Next, both simulations and experiments are carried out to study the effects of a high pressure variable geometry turbine (HP VGT), a low pressure variable geometry turbine (LP VGT) and combined VGT on the overall engine performance. The results show that the engine performance is more sensitive to HP VGT than to LP VGT at all operating conditions, because HP VGT has a larger influence on the total expansion ratio and the engine air–fuel ratio. Using the HP VGT method, the fuel reductions of the turbo-compound engine at 1900 rpm and 1000 rpm are 3.08% and 7.83% respectively, in comparison with the baseline engine. The corresponding optimum values of AR are 2.0 and 2.5.
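
    How the load splits between two turbines in series can be illustrated with a toy flow-matching calculation. In the sketch below (not the authors' model), each stage's swallowing capacity is represented by Stodola's ellipse law, the inter-turbine pressure is solved so both stages pass the same mass flow, and the coefficients k_hp and k_lp stand in for the equivalent flow areas; all numbers are invented.

```python
from scipy.optimize import brentq

def stodola_flow(p_in, p_out, k):
    """Mass flow through a turbine stage per Stodola's ellipse law
    (constant inlet temperature absorbed into the coefficient k)."""
    return k * (p_in**2 - p_out**2) ** 0.5

# Boundary pressures (bar) and equivalent-flow-area coefficients (assumed)
p_inlet, p_outlet = 3.0, 1.0
k_hp, k_lp = 1.0, 1.4          # larger k  <=>  larger equivalent flow area

# Find the inter-turbine pressure at which both stages pass the same flow
residual = lambda p_mid: (stodola_flow(p_inlet, p_mid, k_hp)
                          - stodola_flow(p_mid, p_outlet, k_lp))
p_mid = brentq(residual, p_outlet + 1e-6, p_inlet - 1e-6)

print(f"inter-turbine pressure: {p_mid:.3f} bar")
print(f"HP expansion ratio: {p_inlet / p_mid:.2f}, "
      f"LP expansion ratio: {p_mid / p_outlet:.2f}")
```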

  2. Anti-kindling induced by two-stage coordinated reset stimulation with weak onset intensity

    Directory of Open Access Journals (Sweden)

    Magteld Zeitler

    2016-05-01

    Abnormal neuronal synchrony plays an important role in a number of brain diseases. To specifically counteract abnormal neuronal synchrony by desynchronization, Coordinated Reset (CR) stimulation, a spatiotemporally patterned stimulation technique, was designed with computational means. In neuronal networks with spike timing-dependent plasticity, CR stimulation causes a decrease of synaptic weights and finally anti-kindling, i.e., unlearning of abnormally strong synaptic connectivity and abnormal neuronal synchrony. Long-lasting desynchronizing aftereffects of CR stimulation have been verified in pre-clinical and clinical proof-of-concept studies. In general, for different neuromodulation approaches, both invasive and non-invasive, it is desirable to enable effective stimulation at reduced stimulation intensities, thereby avoiding side effects. For the first time, we here present a two-stage CR stimulation protocol, in which two qualitatively different types of CR stimulation are delivered one after another, and the first stage comes at a particularly weak stimulation intensity. Numerical simulations show that a two-stage CR stimulation can induce the same degree of anti-kindling as a single-stage CR stimulation with intermediate stimulation intensity. This stimulation approach might be clinically beneficial in patients suffering from brain diseases characterized by abnormal neuronal synchrony where a first treatment stage should be performed at particularly weak stimulation intensities in order to avoid side effects. This might, e.g., be relevant in the context of acoustic CR stimulation in tinnitus patients with hyperacusis or in the case of electrical deep brain CR stimulation with sub-optimally positioned leads or side effects caused by stimulation of the target itself. We discuss how to apply our method in first-in-man and proof-of-concept studies.

  3. Two-Stage Latissimus Dorsi Flap with Implant for Unilateral Breast Reconstruction: Getting the Size Right

    Directory of Open Access Journals (Sweden)

    Jiajun Feng

    2016-03-01

    Background: The aim of unilateral breast reconstruction after mastectomy is to craft a natural-looking breast with symmetry. The latissimus dorsi (LD) flap with implant is an established technique for this purpose. However, it is challenging to obtain adequate volume and satisfactory aesthetic results using a one-stage operation when considering factors such as muscle atrophy, wound dehiscence and excessive scarring. The two-stage reconstruction addresses these difficulties by using a tissue expander to gradually enlarge the skin pocket, which eventually holds an appropriately sized implant. Methods: We analyzed nine patients who underwent unilateral two-stage LD reconstruction. In the first stage, an expander was placed along with the LD flap to reconstruct the mastectomy defect, followed by gradual tissue expansion to achieve overexpansion of the skin pocket. The final implant volume was determined by measuring the residual expander volume after aspirating the excess saline. Finally, the expander was replaced with the chosen implant. Results: The average volume of tissue expansion was 460 mL. The resultant expansion allowed an implant ranging in volume from 255 to 420 mL to be placed alongside the LD muscle. Seven patients scored less than six on the relative breast retraction assessment formula for breast symmetry, indicating excellent breast symmetry. The remaining two patients scored between six and eight, indicating good symmetry. Conclusions: This approach allows the size of the eventual implant to be estimated after the skin pocket has healed completely and the LD muscle has undergone natural atrophy. Optimal reconstruction results were achieved using this approach.

  4. Two-stage double-effect ammonia/lithium nitrate absorption cycle

    International Nuclear Information System (INIS)

    Ventas, R.; Lecuona, A.; Vereda, C.; Legrand, M.

    2016-01-01

    Highlights: • A two-stage double-effect cycle with NH3-LiNO3 is proposed. • The cycle operates at lower pressures than the conventional one. • The adiabatic absorber offers better performance than the diabatic version. • Evaporator external inlet temperatures higher than −10 °C avoid crystallization. • The maximum COP is 1.25 for a driving water inlet temperature of 100 °C. - Abstract: The two-stage configuration of a double-effect absorption cycle using ammonia/lithium nitrate as working fluid is studied by means of a thermodynamic model. The maximum pressure of this cycle configuration is the same as in the single-effect cycle, up to 15.8 bar, an advantage over the conventional double-effect configuration with three pressure levels, which exhibits a much higher maximum pressure. The performance of the cycle and the limitation imposed by crystallization of the working fluid are determined for both adiabatic and diabatic absorber cycles. Both cycles offer similar COP; however, the adiabatic variant shows a larger margin against crystallization. This cycle can produce cold for external inlet evaporator temperatures down to −10 °C, but at this limit crystallization can occur at high inlet generator temperatures. The maximum COP is 1.25 for an external inlet generator temperature of 100 °C. This cycle shows a better COP than a typical double-effect cycle with in-parallel configuration for the range of moderate temperatures under study and using the same working fluid. Comparisons with double-effect cycles using H2O/LiBr and NH3/H2O as working fluids are also offered, highlighting the present configuration's advantages regarding COP, evaporation and condensation temperatures, as well as crystallization.

  5. Two-stage high frequency pulse tube refrigerator with base temperature below 10 K

    Science.gov (United States)

    Chen, Liubiao; Wu, Xianlin; Liu, Sixue; Zhu, Xiaoshuang; Pan, Changzhao; Guo, Jia; Zhou, Yuan; Wang, Junjie

    2017-12-01

    This paper introduces our recent experimental results for pulse tube refrigerators driven by a linear compressor. The working frequency is 23-30 Hz, much higher than that of G-M type coolers (the developed cryocooler is referred to as a high-frequency pulse tube refrigerator, HPTR, in this paper). To achieve a temperature below 10 K, two types of two-stage configuration, gas-coupled and thermal-coupled, have been designed, built and tested. At present, both types can achieve a no-load temperature below 10 K using only one compressor. The second stage of the gas-coupled HPTR achieves a cooling power of 16 mW at 10 K when the first stage carries a 400 mW heat load at 60 K, with a total input power of 400 W. For the thermal-coupled HPTR, the designed cooling power of the first stage is 10 W at 80 K, and the second stage can then reach a temperature below 10 K with a total input power of 300 W. In the current preliminary experiment, liquid nitrogen is used in place of the first-stage coaxial configuration as the precooling stage, and a no-load temperature of 9.6 K can be achieved with a stainless steel mesh regenerator. Using Er3Ni spheres with diameters of about 50-60 µm, simulation results show it is possible to achieve a temperature below 8 K. The configuration, the phase shifters and the regenerative materials of the two developed types of two-stage high-frequency pulse tube refrigerator will be discussed, and some typical experimental results and considerations for achieving better performance will also be presented in this paper.

  6. A Two-Stage Composition Method for Danger-Aware Services Based on Context Similarity

    Science.gov (United States)

    Wang, Junbo; Cheng, Zixue; Jing, Lei; Ota, Kaoru; Kansen, Mizuo

    Context-aware systems detect a user's physical and social contexts based on sensor networks and provide services that adapt to the user accordingly. Representing, detecting, and managing contexts are important issues in context-aware systems. Composition of contexts is a useful method for these tasks, since it can detect a context by automatically composing small pieces of information to discover services. Danger-aware services are a kind of context-aware service that needs descriptions of the relations between a user and his/her surrounding objects, and between users. However, when existing composition methods are applied to danger-aware services, they show the following shortcomings: (1) they do not provide an explicit method for representing the composition of multiple users' contexts, and (2) they lack a flexible reasoning mechanism based on similarity of contexts, so they can only provide services that exactly follow predefined context-reasoning rules. Therefore, in this paper, we propose a two-stage composition method based on context similarity to solve the above problems. The first stage is composition of the useful information to represent the context of a single user. The second stage is composition of multiple users' contexts to provide services by considering the relations between users. Finally, the danger degree of the detected context is computed using the context similarity between the detected context and the predefined context. Context is dynamically represented based on two-stage composition rules and a Situation-theory-based ontology, which combines the advantages of ontology and Situation theory. We implemented the system in an indoor ubiquitous environment and evaluated it through two experiments with human subjects. The experimental results show the method is effective, and the accuracy of danger detection is acceptable for a danger-aware system.
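
    As a rough illustration of the final step, the similarity between a detected context and a predefined dangerous context can be scored by comparing their attribute sets; the sketch below uses Jaccard similarity over (attribute, value) pairs and scales a predefined danger degree by it. All names and values are invented for illustration and do not come from the paper.

```python
def jaccard(a: set, b: set) -> float:
    """Jaccard similarity between two sets of (attribute, value) pairs."""
    return len(a & b) / len(a | b) if a | b else 0.0

# Predefined dangerous context (illustrative) with its danger degree
danger_context = {("user", "child"), ("object", "kettle"),
                  ("object_state", "hot"), ("distance", "near")}
danger_degree = 0.9

# Context detected by composing sensor readings for one user
detected = {("user", "child"), ("object", "kettle"),
            ("object_state", "hot"), ("distance", "far")}

similarity = jaccard(detected, danger_context)
print(f"similarity = {similarity:.2f}, "
      f"danger degree = {similarity * danger_degree:.2f}")
```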

  7. Development and enrolee satisfaction with basic medical insurance in China: A systematic review and stratified cluster sampling survey.

    Science.gov (United States)

    Jing, Limei; Chen, Ru; Jing, Lisa; Qiao, Yun; Lou, Jiquan; Xu, Jing; Wang, Junwei; Chen, Wen; Sun, Xiaoming

    2017-07-01

    Basic Medical Insurance (BMI) has changed remarkably over time in China because of health reforms that aim to achieve universal coverage and better health care by increasing subsidies, reimbursement, and benefits. In this paper, we present the development of BMI, including financing and operation, with a systematic review. Pudong New Area in Shanghai was chosen as a typical BMI sample for its coverage and management; a stratified cluster sampling survey together with an ordinary logistic regression model was used for the analysis. Enrolee satisfaction and the factors associated with enrolee satisfaction with BMI were analysed. We found that the re-enrolling rate superficially improved BMI coverage and nearly achieved universal coverage. However, BMI funds still faced the dual problems of fund deficits and under-compensation of the insured, and a long-term strategy is needed to realize the integration of BMI schemes with more homogeneous coverage and benefits. Moreover, Urban Resident Basic Medical Insurance participants reported a higher rate of dissatisfaction than other participants. The key predictors of enrolee satisfaction were awareness of the premium and compensation, affordability of out-of-pocket costs, and the proportion of reimbursement. These results highlight the importance of the Chinese government taking measures, such as strengthening BMI fund management, exploring mixed payment methods, and regulating sequential medical orders, to develop an integrated medical insurance system of universal coverage and vertical equity while simultaneously improving enrolee satisfaction. Copyright © 2017 John Wiley & Sons, Ltd.

  8. The Swift/BAT AGN Spectroscopic Survey. IX. The Clustering Environments of an Unbiased Sample of Local AGNs

    Science.gov (United States)

    Powell, M. C.; Cappelluti, N.; Urry, C. M.; Koss, M.; Finoguenov, A.; Ricci, C.; Trakhtenbrot, B.; Allevato, V.; Ajello, M.; Oh, K.; Schawinski, K.; Secrest, N.

    2018-05-01

    We characterize the environments of local accreting supermassive black holes by measuring the clustering of AGNs in the Swift/BAT Spectroscopic Survey (BASS). Using 548 AGNs in the redshift range 0.01 < z < 0.1, measuring their cross-correlation with 2MASS galaxies, and interpreting it via halo occupation distribution and subhalo-based models, we constrain the occupation statistics of the full sample, as well as in bins of absorbing column density and black hole mass. We find that AGNs tend to reside in galaxy group environments, in agreement with previous studies of AGNs throughout a large range of luminosity and redshift, and that on average they occupy their dark matter halos similarly to inactive galaxies of comparable stellar mass. We also find evidence that obscured AGNs tend to reside in denser environments than unobscured AGNs, even when the samples are matched in luminosity, redshift, stellar mass, and Eddington ratio. We show that this can be explained either by significantly different halo occupation distributions or by statistically different host halo assembly histories. Lastly, we see that massive black holes are slightly more likely to reside in central galaxies than black holes of smaller mass.
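
    Halo occupation distribution (HOD) models of the kind used here specify the mean number of objects hosted by a halo of mass M, typically as a softened step function for centrals plus a power law for satellites. The sketch below evaluates a standard Zheng et al. (2005)-style parameterization; the parameter values are placeholders, not the paper's fits.

```python
import numpy as np
from scipy.special import erf

def mean_occupation(M, log_Mmin=12.5, sigma=0.4, M0=10**12.5,
                    M1=10**13.8, alpha=1.0):
    """Mean occupation <N(M)> for halos of mass M (Msun/h), Zheng+05 form."""
    n_cen = 0.5 * (1 + erf((np.log10(M) - log_Mmin) / sigma))
    n_sat = n_cen * (np.clip(M - M0, 0.0, None) / M1) ** alpha
    return n_cen + n_sat

for M in np.logspace(11.5, 15, 5):
    print(f"M = {M:.2e} Msun/h  ->  <N> = {mean_occupation(M):.3f}")
```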

  9. RHAPSODY. I. STRUCTURAL PROPERTIES AND FORMATION HISTORY FROM A STATISTICAL SAMPLE OF RE-SIMULATED CLUSTER-SIZE HALOS

    International Nuclear Information System (INIS)

    Wu, Hao-Yi; Hahn, Oliver; Wechsler, Risa H.; Mao, Yao-Yuan; Behroozi, Peter S.

    2013-01-01

    We present the first results from the RHAPSODY cluster re-simulation project: a sample of 96 'zoom-in' simulations of dark matter halos of 10^(14.8±0.05) h^-1 M_⊙, selected from a 1 h^-3 Gpc^3 volume. This simulation suite is the first to resolve this many halos with ~5 × 10^6 particles per halo in the cluster mass regime, allowing us to statistically characterize the distribution of, and correlation between, halo properties at fixed mass. We focus on the properties of the main halos and how they are affected by formation history, which we track back to z = 12, over five decades in mass. We give particular attention to the impact of the formation history on the density profiles of the halos. We find that the deviations from the Navarro-Frenk-White (NFW) model and the Einasto model depend on formation time. Late-forming halos tend to have considerable deviations from both models, partly due to the presence of massive subhalos, while early-forming halos deviate less but still significantly from the NFW model and are better described by the Einasto model. We find that the halo shapes depend only moderately on formation time. Departure from spherical symmetry impacts the density profiles through the anisotropic distribution of massive subhalos. Further evidence of the impact of subhalos is provided by analyzing the phase-space structure. A detailed analysis of the properties of the subhalo population in RHAPSODY is presented in a companion paper.
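
    For reference, the two profiles being compared are NFW, ρ(r) = ρ_s / [(r/r_s)(1 + r/r_s)^2], and Einasto, ρ(r) = ρ_s exp{−(2/α)[(r/r_s)^α − 1]}, where r_s is the scale radius. A minimal sketch evaluating both (the parameter values are arbitrary placeholders):

```python
import numpy as np

def rho_nfw(r, rho_s, r_s):
    """Navarro-Frenk-White density profile."""
    x = r / r_s
    return rho_s / (x * (1 + x) ** 2)

def rho_einasto(r, rho_s, r_s, alpha=0.18):
    """Einasto profile; rho_s is the density at r = r_s."""
    return rho_s * np.exp(-(2 / alpha) * ((r / r_s) ** alpha - 1))

r = np.logspace(-2, 1, 4)        # radii in units of r_s-scale (placeholder)
print(rho_nfw(r, rho_s=1.0, r_s=0.5))
print(rho_einasto(r, rho_s=1.0, r_s=0.5))
```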

  10. Probing BL Lac and Cluster Evolution via a Wide-angle, Deep X-ray Selected Sample

    Science.gov (United States)

    Perlman, E.; Jones, L.; White, N.; Angelini, L.; Giommi, P.; McHardy, I.; Wegner, G.

    1994-12-01

    The WARPS survey (Wide-Angle ROSAT Pointed Survey) has been constructed from the archive of all public ROSAT PSPC observations, and is a subset of the WGACAT catalog. WARPS will include a complete sample of ≥100 BL Lacs at F_X ≥ 10^-13 erg s^-1 cm^-2. A second selection technique will identify ~100 clusters at z > 0.15. Earlier samples gave ⟨V/V_max⟩ = 0.304 ± 0.062 for XBLs but ⟨V/V_max⟩ = 0.60 ± 0.05 for RBLs. Models of the X-ray luminosity function (XLF) are also poorly constrained. WARPS will allow us to compute an accurate XLF, decreasing the error bars above by over a factor of two. We will also test for low-luminosity BL Lacs, whose non-thermal nuclear sources are dim compared to the host galaxy. Browne and Marcha (1993) claim the EMSS missed most of these objects and is incomplete. If their predictions are correct, 20-40% of the BL Lacs we find will fall in this category, enabling us to probe the evolution and internal workings of BL Lacs at lower luminosities than ever before. By removing likely QSOs before optical spectroscopy, WARPS requires only modest amounts of telescope time. It will extend measurement of the cluster XLF both to higher redshifts (z > 0.5) and lower luminosities (L_X < 1 × 10^44 erg s^-1) than previous measurements, confirming or rejecting the 3σ detection of negative evolution found in the EMSS, and constraining Cold Dark Matter cosmologies. Faint NELGs (narrow emission-line galaxies) are a recently discovered major contributor to the X-ray background. They are a mixture of Sy2s, starbursts and galaxies of unknown type. Detailed classification and the evolution of their XLF will be determined for the first time.
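
    The ⟨V/V_max⟩ statistic quoted above tests for evolution in a flux-limited sample: for each source, V is the volume enclosed at its distance and V_max the volume within which it would still exceed the flux limit; a non-evolving population gives values distributed uniformly on [0, 1] with mean 0.5. A minimal Euclidean (non-cosmological) sketch with a simulated non-evolving population:

```python
import numpy as np

rng = np.random.default_rng(0)
f_lim = 1e-13                                   # flux limit, erg s^-1 cm^-2

# Simulate standard candles distributed uniformly in volume out to d = 2
d = 2 * rng.uniform(0, 1, 200_000) ** (1 / 3)   # p(d) ~ d^2
flux = 1e-13 / d**2                             # arbitrary luminosity scale
flux = flux[flux >= f_lim]                      # keep the flux-limited sample

# Euclidean V/Vmax = (d / d_max)^3 = (f_lim / f)^(3/2)
v_vmax = (f_lim / flux) ** 1.5
print(f"<V/Vmax> = {v_vmax.mean():.3f}  (0.5 expected for no evolution)")
```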

  11. Pyrosequencing analysis yields comprehensive assessment of microbial communities in pilot-scale two-stage membrane biofilm reactors.

    Science.gov (United States)

    Ontiveros-Valencia, Aura; Tang, Youneng; Zhao, He-Ping; Friese, David; Overstreet, Ryan; Smith, Jennifer; Evans, Patrick; Rittmann, Bruce E; Krajmalnik-Brown, Rosa

    2014-07-01

    We studied the microbial community structure of pilot two-stage membrane biofilm reactors (MBfRs) designed to reduce nitrate (NO3(-)) and perchlorate (ClO4(-)) in contaminated groundwater. The groundwater also contained oxygen (O2) and sulfate (SO4(2-)), which became important electron sinks that affected the NO3(-) and ClO4(-) removal rates. Using pyrosequencing, we elucidated how important phylotypes of each "primary" microbial group, i.e., denitrifying bacteria (DB), perchlorate-reducing bacteria (PRB), and sulfate-reducing bacteria (SRB), responded to changes in electron-acceptor loading. UniFrac, principal coordinate analysis (PCoA), and diversity analyses documented that the microbial community of biofilms sampled when the MBfRs had a high acceptor loading was phylogenetically distant from, and less diverse than, the microbial community of biofilm samples with lower acceptor loadings. Diminished acceptor loading led to SO4(2-) reduction in the lag MBfR, which allowed Desulfovibrionales (an SRB order) and Thiothrichales (sulfur oxidizers) to thrive through S cycling. As a result of this cooperative relationship, they competed effectively with DB/PRB phylotypes such as Xanthomonadales and Rhodobacterales. Thus, pyrosequencing illustrated that while DB, PRB, and SRB responded predictably to changes in acceptor loading, a decrease in total acceptor loading led to important shifts within the "primary" groups, the emergence of other members (e.g., Thiothrichales), and overall greater diversity.
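
    Principal coordinate analysis (PCoA), used above to compare biofilm communities, embeds samples so that Euclidean distances in the ordination approximate the input dissimilarities. Below is a minimal classical-scaling implementation applied to a toy Bray-Curtis dissimilarity matrix; the abundance table is invented for illustration.

```python
import numpy as np

def bray_curtis(X):
    """Pairwise Bray-Curtis dissimilarity between rows of an abundance table."""
    n = len(X)
    D = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            D[i, j] = np.abs(X[i] - X[j]).sum() / (X[i] + X[j]).sum()
    return D

def pcoa(D, k=2):
    """Classical multidimensional scaling (principal coordinates)."""
    n = len(D)
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (D ** 2) @ J            # double-centred Gram matrix
    vals, vecs = np.linalg.eigh(B)
    idx = np.argsort(vals)[::-1][:k]       # largest eigenvalues first
    return vecs[:, idx] * np.sqrt(np.maximum(vals[idx], 0))

# Toy OTU abundance table: rows = biofilm samples, columns = phylotypes
X = np.array([[80, 10, 5, 5], [70, 15, 10, 5], [10, 40, 30, 20]], float)
print(pcoa(bray_curtis(X)))
```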

  12. High Precision Motion Control System for the Two-Stage Light Gas Gun at the Dynamic Compression Sector

    Science.gov (United States)

    Zdanowicz, E.; Guarino, V.; Konrad, C.; Williams, B.; Capatina, D.; D'Amico, K.; Arganbright, N.; Zimmerman, K.; Turneaure, S.; Gupta, Y. M.

    2017-06-01

    The Dynamic Compression Sector (DCS) at the Advanced Photon Source (APS), located at Argonne National Laboratory (ANL), has a diverse set of dynamic compression drivers to obtain time-resolved x-ray data in single-event dynamic compression experiments. Because the APS x-ray beam direction is fixed, each driver at DCS must be able to move through a large range of linear and angular motions with high precision to accommodate a wide variety of scientific needs. Particularly challenging was the design and implementation of the motion control system for the two-stage light gas gun, which rests on a 26-foot-long structure and weighs over 2 tons. The target must be precisely positioned in the x-ray beam while remaining perpendicular to the gun barrel axis to ensure one-dimensional loading of samples. To accommodate these requirements, the entire structure can pivot through 60° of angular motion and move tens of inches along four independent linear directions, with 0.01° and 10 μm resolution, respectively. This presentation will provide details of how this system was constructed and how it is controlled, and give examples of the wide range of x-ray/sample geometries that can be accommodated. Work supported by DOE/NNSA.

  13. Comparison of single-stage and temperature-phased two-stage anaerobic digestion of oily food waste

    International Nuclear Information System (INIS)

    Wu, Li-Jie; Kobayashi, Takuro; Li, Yu-You; Xu, Kai-Qin

    2015-01-01

    Highlights: • A single-stage and two two-stage anaerobic systems were operated synchronously. • Similar methane production of 0.44 L/g VS_added from oily food waste was achieved. • The first stage of the two-stage process became inefficient due to a serious pH drop. • Recycle favored hythane production in the two-stage digestion. • The conversion of unsaturated fatty acids was enhanced by introducing recycle. - Abstract: Anaerobic digestion is an effective technology to recover energy from oily food waste. A single-stage system and temperature-phased two-stage systems with and without recycle for anaerobic digestion of oily food waste were constructed to compare their operational performance. Synchronous operation indicated a similar ability to produce methane in the three systems, with a methane yield of 0.44 L/g VS_added. The pH drop to less than 4.0 in the first stage of the two-stage system without recycle resulted in poor hydrolysis, and neither methane nor hydrogen was produced in this stage. Alkalinity supplementation from the second stage of the two-stage system with recycle improved the pH in the first stage to 5.4. Consequently, 35.3% of the particulate COD in the influent was reduced in the first stage of the two-stage system with recycle according to a COD mass balance, and hydrogen was produced at a content of 31.7%. Similar amounts of solids and organic matter were removed in the single-stage system and the two-stage system without recycle. More lipid degradation and conversion of long-chain fatty acids were achieved in the single-stage system. Recycling proved effective in promoting the conversion of unsaturated long-chain fatty acids into saturated fatty acids in the two-stage system.

  14. Hybrid alkali-hydrodynamic disintegration of waste-activated sludge before two-stage anaerobic digestion process.

    Science.gov (United States)

    Grübel, Klaudiusz; Suschka, Jan

    2015-05-01

    The first step of anaerobic digestion, hydrolysis, is regarded as the rate-limiting step in the degradation of complex organic compounds such as waste-activated sludge (WAS). The aim of the lab-scale experiments was to pre-hydrolyze the sludge by means of low-intensity alkaline sludge conditioning before applying hydrodynamic disintegration as the pre-treatment procedure. Applying both processes as a hybrid sludge disintegration technology resulted in a higher organic matter release (soluble chemical oxygen demand, SCOD) to the liquid sludge phase compared with the effects of the processes conducted separately. The total SCOD after alkalization at pH 9 (pH in the range of 8.96-9.10, SCOD = 600 mg O2/L) and after hydrodynamic disintegration (SCOD = 1450 mg O2/L) equaled 2050 mg/L. However, due to the synergistic effect, the obtained SCOD value amounted to 2800 mg/L, which constitutes an additional chemical oxygen demand (COD) dissolution of about 35%. A similar synergistic effect was obtained after alkalization at pH 10. The applied hybrid pre-hydrolysis technology resulted in a disintegration degree of 28-35%. The experiments aimed at selecting the most appropriate procedures in terms of optimal sludge digestion results, including high organic matter degradation (removal) and high biogas production. The analyzed soft hybrid technology positively influenced the effectiveness of mesophilic/thermophilic anaerobic digestion and contributed to sludge minimization. The adopted pre-treatment technology (alkalization + hydrodynamic cavitation) resulted in 22-27% higher biogas production and a 13-28% higher biogas yield. After two stages of anaerobic digestion (mesophilic anaerobic digestion (MAD) + thermophilic anaerobic digestion (TAD)), the highest total solids (TS) reduction amounted to 45.6% and was obtained for the sample digested for 7 days MAD + 17 days TAD. About 7% higher TS reduction was noticed compared with the sample after 9

  15. Posttraumatic stress disorder in new mothers: results from a two-stage U.S. national survey.

    Science.gov (United States)

    Beck, Cheryl Tatano; Gable, Robert K; Sakala, Carol; Declercq, Eugene R

    2011-09-01

    Prevalence rates of women in community samples who screened positive for meeting the DSM-IV criteria for posttraumatic stress disorder after childbirth range from 1.7 to 9 percent. A positive screen indicates a high likelihood of this postpartum anxiety disorder. The objective of this analysis was to examine the posttraumatic stress disorder data obtained from a two-stage United States national survey conducted by Childbirth Connection: Listening to Mothers II (LTM II) and Listening to Mothers II Postpartum Survey (LTM II/PP). In the LTM II study, 1,373 women completed the survey online, and 200 mothers were interviewed by telephone. The same mothers were recontacted and asked to complete a second questionnaire 6 months later; of those, 859 women completed the online survey and 44 a telephone interview. Data obtained from three instruments are reported in this article: the Posttraumatic Stress Disorder Symptom Scale-Self Report (PSS-SR), the Postpartum Depression Screening Scale (PDSS), and the Patient Health Questionnaire-2 (PHQ-2). Nine percent of the sample screened positive for meeting the diagnostic criteria of posttraumatic stress disorder after childbirth, as determined by responses on the PSS-SR. A total of 18 percent of women scored above the cutoff score on the PSS-SR, indicating that they were experiencing elevated levels of posttraumatic stress symptoms. The following variables were significantly related to elevated posttraumatic stress symptom levels: low partner support, elevated postpartum depressive symptoms, more physical problems since birth, and fewer health-promoting behaviors. In addition, eight variables significantly differentiated women who had elevated posttraumatic stress symptom levels from those who did not: no private health insurance, unplanned pregnancy, pressure to have an induction and epidural analgesia, planned cesarean birth, not breastfeeding as long as wanted, not exclusively breastfeeding at 1 month

  16. Baryon Content in a Sample of 91 Galaxy Clusters Selected by the South Pole Telescope at 0.2 < z < 1.25

    Science.gov (United States)

    Chiu, I.; Mohr, J. J.; McDonald, M.; Bocquet, S.; Desai, S.; Klein, M.; Israel, H.; Ashby, M. L. N.; Stanford, A.; Benson, B. A.; Brodwin, M.; Abbott, T. M. C.; Abdalla, F. B.; Allam, S.; Annis, J.; Bayliss, M.; Benoit-Lévy, A.; Bertin, E.; Bleem, L.; Brooks, D.; Buckley-Geer, E.; Bulbul, E.; Capasso, R.; Carlstrom, J. E.; Rosell, A. Carnero; Carretero, J.; Castander, F. J.; Cunha, C. E.; D'Andrea, C. B.; da Costa, L. N.; Davis, C.; Diehl, H. T.; Dietrich, J. P.; Doel, P.; Drlica-Wagner, A.; Eifler, T. F.; Evrard, A. E.; Flaugher, B.; García-Bellido, J.; Garmire, G.; Gaztanaga, E.; Gerdes, D. W.; Gonzalez, A.; Gruen, D.; Gruendl, R. A.; Gschwend, J.; Gupta, N.; Gutierrez, G.; Hlavacek-L, J.; Honscheid, K.; James, D. J.; Jeltema, T.; Kraft, R.; Krause, E.; Kuehn, K.; Kuhlmann, S.; Kuropatkin, N.; Lahav, O.; Lima, M.; Maia, M. A. G.; Marshall, J. L.; Melchior, P.; Menanteau, F.; Miquel, R.; Murray, S.; Nord, B.; Ogando, R. L. C.; Plazas, A. A.; Rapetti, D.; Reichardt, C. L.; Romer, A. K.; Roodman, A.; Sanchez, E.; Saro, A.; Scarpine, V.; Schindler, R.; Schubnell, M.; Sharon, K.; Smith, R. C.; Smith, M.; Soares-Santos, M.; Sobreira, F.; Stalder, B.; Stern, C.; Strazzullo, V.; Suchyta, E.; Swanson, M. E. C.; Tarle, G.; Vikram, V.; Walker, A. R.; Weller, J.; Zhang, Y.

    2018-05-01

    We estimate the total mass (M500), intracluster medium (ICM) mass (M_ICM) and stellar mass (M⋆) in a Sunyaev-Zel'dovich effect (SZE) selected sample of 91 galaxy clusters with masses M500 ≳ 2.5 × 10^14 M⊙ and redshifts 0.2 < z < 1.25, and study trends in the stellar mass, the ICM mass, the total baryonic mass and the cold baryonic fraction with cluster halo mass and redshift. We find significant departures from self-similarity in the mass scaling for all quantities, while the redshift trends are all statistically consistent with zero, indicating that the baryon content of clusters at fixed mass has changed remarkably little over the past ≈9 Gyr. We compare our results to the mean baryon fraction (and the stellar mass fraction) in the field, finding that these values lie above (below) those in cluster virial regions in all but the most massive clusters at low redshift. Using a simple model of the matter assembly of clusters from infalling groups with lower masses and from infalling material from the low-density environment or field surrounding the parent halos, we show that the measured mass trends without strong redshift trends in the stellar mass scaling relation could be explained by a mass- and redshift-dependent fractional contribution from field material. Similar analyses of the ICM and baryon mass scaling relations provide evidence for the so-called "missing baryons" outside cluster virial regions.
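
    Scaling relations like these are usually modeled as power laws in mass and redshift, e.g. M_ICM ∝ (M500 / M_piv)^B (1 + z)^C, which is linear in log space. The sketch below fits such a relation to synthetic data with ordinary least squares; all numbers are invented, and the paper's actual analysis uses a likelihood that accounts for selection effects and measurement errors.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic cluster sample (invented): halo masses, redshifts, ICM masses
M500 = 10 ** rng.uniform(14.4, 15.2, 91)            # Msun
z = rng.uniform(0.2, 1.25, 91)
B_true, C_true = 1.26, 0.0                          # mass slope, z evolution
M_icm = 1.3e13 * (M500 / 6e14) ** B_true * (1 + z) ** C_true
M_icm *= 10 ** rng.normal(0, 0.05, 91)              # log-normal scatter

# Linear regression in log space:
#   log M_ICM = log A + B * log(M500 / Mpiv) + C * log(1 + z)
X = np.column_stack([np.ones_like(z), np.log10(M500 / 6e14), np.log10(1 + z)])
coef, *_ = np.linalg.lstsq(X, np.log10(M_icm), rcond=None)
print(f"A = {10**coef[0]:.3e} Msun, B = {coef[1]:.2f}, C = {coef[2]:.2f}")
```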

  17. Two-Sample Two-Stage Least Squares (TSTSLS) estimates of earnings mobility: how consistent are they?

    Directory of Open Access Journals (Sweden)

    John Jerrim

    2016-08-01

    Academics and policymakers have shown great interest in cross-national comparisons of intergenerational earnings mobility. However, producing consistent and comparable estimates of earnings mobility is not a trivial task. In most countries researchers are unable to observe earnings information for two generations. They are thus forced to rely upon imputed data from different surveys instead. This paper builds upon previous work by considering the consistency of the intergenerational correlation (ρ) as well as the elasticity (β), how these change when using a range of different instrumental (imputer) variables, and by highlighting an important but infrequently discussed measurement issue. Our key finding is that, while TSTSLS estimates of β and ρ are both likely to be inconsistent, the magnitude of this problem is much greater for the former than for the latter. We conclude by offering advice on estimating earnings mobility using this methodology.
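
    In TSTSLS, parental earnings are unobserved in the main sample, so a first-stage earnings equation is estimated in an auxiliary sample and used to impute parental earnings in the main sample before the second-stage regression. A schematic numpy-only sketch on simulated data (real applications must also correct the standard errors for the two-sample structure):

```python
import numpy as np

rng = np.random.default_rng(42)

def ols(X, y):
    """OLS coefficients via least squares."""
    return np.linalg.lstsq(X, y, rcond=None)[0]

n_aux, n_main, beta_true = 5000, 5000, 0.4

# Auxiliary sample: parents' log earnings and instruments (e.g. education)
Z_aux = rng.normal(size=(n_aux, 2))
y_parent_aux = 2.0 + Z_aux @ np.array([0.5, 0.3]) + rng.normal(0, 0.5, n_aux)

# Stage 1: earnings equation estimated in the auxiliary sample
gamma = ols(np.column_stack([np.ones(n_aux), Z_aux]), y_parent_aux)

# Main sample: children's earnings; parents' earnings unobserved, but the
# same instruments are reported (here, by the children)
Z_main = rng.normal(size=(n_main, 2))
y_parent_true = 2.0 + Z_main @ np.array([0.5, 0.3]) + rng.normal(0, 0.5, n_main)
y_child = 1.0 + beta_true * y_parent_true + rng.normal(0, 0.6, n_main)

# Stage 2: regress children's earnings on imputed parental earnings
y_parent_hat = np.column_stack([np.ones(n_main), Z_main]) @ gamma
beta_hat = ols(np.column_stack([np.ones(n_main), y_parent_hat]), y_child)[1]
print(f"TSTSLS beta = {beta_hat:.3f} (true {beta_true})")
```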

  18. Weak-lensing mass calibration of the Atacama Cosmology Telescope equatorial Sunyaev-Zeldovich cluster sample with the Canada-France-Hawaii telescope stripe 82 survey

    Energy Technology Data Exchange (ETDEWEB)

    Battaglia, N.; Miyatake, H.; Hasselfield, M.; Calabrese, E.; Ferrara, S.; Hložek, R. [Dept. of Astrophysical Sciences, Princeton University, Princeton, NJ 08544 (United States); Leauthaud, A. [Kavli IPMU (WPI), UTIAS, The University of Tokyo, Kashiwa, Chiba 277-8583 (Japan); Gralla, M.B.; Crichton, D. [Dept. of Physics and Astronomy, Johns Hopkins University, Baltimore, MD 21218 (United States); Allison, R.; Dunkley, J. [Dept. of Astrophysics, University of Oxford, Oxford OX1 3RH (United Kingdom); Bond, J.R. [Canadian Institute for Theoretical Astrophysics, Toronto, ON M5S 3H8 (Canada); Devlin, M.J. [Dept. of Physics and Astronomy, University of Pennsylvania, Philadelphia, PA 19104 (United States); Dünner, R. [Dept. de Astronomía y Astrofísica, Facultad de Física, Pontificia Universidad Católica de Chile, Santiago (Chile); Erben, T. [Argelander-Institut für Astronomie, University of Bonn, 53121 Bonn (Germany); Halpern, M.; Hincks, A.D. [Dept. of Physics and Astronomy, University of British Columbia, Vancouver, BC, V6T 1Z4 (Canada); Hilton, M. [Astrophysics and Cosmology Research Unit, School of Mathematical, Statistics and Computer Science, University of KwaZulu-Natal, Durban, 4041 (South Africa); Hill, J.C. [Dept. of Astronomy, Columbia University, New York, NY 10027 (United States); Huffenberger, K.M., E-mail: nbatta@astro.princeton.edu [Dept. of Physics, Florida State University, Tallahassee, FL 32306 (United States); and others

    2016-08-01

    Mass calibration uncertainty is the largest systematic effect for using clusters of galaxies to constrain cosmological parameters. We present weak lensing mass measurements from the Canada-France-Hawaii Telescope Stripe 82 Survey for galaxy clusters selected through their high signal-to-noise thermal Sunyaev-Zeldovich (tSZ) signal measured with the Atacama Cosmology Telescope (ACT). For a sample of 9 ACT clusters with a tSZ signal-to-noise greater than five, the average weak lensing mass is (4.8 ± 0.8) × 10^14 M_⊙, consistent with the tSZ mass estimate of (4.70 ± 1.0) × 10^14 M_⊙, which assumes a universal pressure profile for the cluster gas. Our results are consistent with previous weak-lensing measurements of tSZ-detected clusters from the Planck satellite. When comparing our results, we estimate the Eddington bias correction for the sample intersection of Planck and weak-lensing clusters, which was previously excluded.

  19. Weak-lensing mass calibration of the Atacama Cosmology Telescope equatorial Sunyaev-Zeldovich cluster sample with the Canada-France-Hawaii telescope stripe 82 survey

    International Nuclear Information System (INIS)

    Battaglia, N.; Miyatake, H.; Hasselfield, M.; Calabrese, E.; Ferrara, S.; Hložek, R.; Leauthaud, A.; Gralla, M.B.; Crichton, D.; Allison, R.; Dunkley, J.; Bond, J.R.; Devlin, M.J.; Dünner, R.; Erben, T.; Halpern, M.; Hincks, A.D.; Hilton, M.; Hill, J.C.; Huffenberger, K.M.

    2016-01-01

    Mass calibration uncertainty is the largest systematic effect for using clusters of galaxies to constrain cosmological parameters. We present weak lensing mass measurements from the Canada-France-Hawaii Telescope Stripe 82 Survey for galaxy clusters selected through their high signal-to-noise thermal Sunyaev-Zeldovich (tSZ) signal measured with the Atacama Cosmology Telescope (ACT). For a sample of 9 ACT clusters with a tSZ signal-to-noise greater than five, the average weak lensing mass is (4.8 ± 0.8) × 10^14 M_⊙, consistent with the tSZ mass estimate of (4.70 ± 1.0) × 10^14 M_⊙, which assumes a universal pressure profile for the cluster gas. Our results are consistent with previous weak-lensing measurements of tSZ-detected clusters from the Planck satellite. When comparing our results, we estimate the Eddington bias correction for the sample intersection of Planck and weak-lensing clusters, which was previously excluded.

  20. Post analysis of AE data of seal plug leakage of NAPS-2 and fatigue crack initiation of three point bend sample using cluster and artificial neural network

    International Nuclear Information System (INIS)

    Singh, A.K.; Mehta, H.R.; Bhattacharya, S.

    2003-01-01

    Acoustic emission (AE) signals are weak and passive in nature, which makes separating AE data from noise a challenging task. This paper describes the post-analysis of acoustic emission data from seal plug leakage of the operating PHWR NAPS-2, Narora, and from fatigue crack initiation in a three-point bend sample, using cluster analysis and an artificial neural network (ANN). First, known AE data generated in the lab by PCB debonding and pencil lead breaks were analysed using the ANN to build confidence in the method. After that, the AE data acquired by scanning all 306 coolant channels at NAPS-2 were sorted into five separate clusters for different leakage rates and background noise. The fatigue crack initiation AE data generated in the MSD lab on the three-point bend sample were grouped into ten clusters, one of which contained 98% of the AE data from the crack initiation period (identified with the help of a travelling microscope), while the remaining clusters contained AE data from other sources and noise. The data were further analysed with a self-organizing map (SOM) artificial neural network. (author)
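
    In this kind of post-analysis, each AE hit is described by waveform features (amplitude, counts, energy, rise time) and the hits are then grouped by an unsupervised method. A minimal scikit-learn sketch on synthetic features (the feature values and the choice of three clusters are invented, not taken from the paper):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(7)

# Synthetic AE hit features: [amplitude (dB), counts, energy (a.u.), rise time (us)]
noise = rng.normal([45, 10, 1.0, 30], [3, 3, 0.3, 8], size=(200, 4))
leak  = rng.normal([60, 40, 5.0, 60], [4, 8, 1.0, 12], size=(80, 4))
crack = rng.normal([75, 15, 9.0, 15], [3, 4, 1.5, 5], size=(40, 4))
hits = np.vstack([noise, leak, crack])

# Standardize the features, then cluster the hits
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(
    StandardScaler().fit_transform(hits))
for c in range(3):
    print(f"cluster {c}: {np.sum(labels == c)} hits, "
          f"mean amplitude {hits[labels == c, 0].mean():.1f} dB")
```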

  1. Genetic variants at 1p11.2 and breast cancer risk: a two-stage study in Chinese women.

    Directory of Open Access Journals (Sweden)

    Yue Jiang

    BACKGROUND: Genome-wide association studies (GWAS) have identified several breast cancer susceptibility loci, and one genetic variant, rs11249433, at 1p11.2 was reported to be associated with breast cancer in European populations. To explore the genetic variants in this region associated with breast cancer in Chinese women, we conducted a two-stage fine-mapping study with a total of 1792 breast cancer cases and 1867 controls. METHODOLOGY/PRINCIPAL FINDINGS: Seven single nucleotide polymorphisms (SNPs) including rs11249433 in a 277 kb region at 1p11.2 were selected, and genotyping was performed using the TaqMan® OpenArray™ Genotyping System for stage 1 samples (878 cases and 900 controls). In stage 2 (914 cases and 967 controls), three SNPs (rs2580520, rs4844616 and rs11249433) were further selected and genotyped for validation. The results showed that one SNP (rs2580520), located at a predicted enhancer region of SRGAP2, was consistently associated with a significantly increased risk of breast cancer in a recessive genetic model [odds ratio (OR) = 1.66, 95% confidence interval (CI) = 1.16-2.36 for stage 2 samples; OR = 1.51, 95% CI = 1.16-1.97 for combined samples]. However, no significant association was observed between rs11249433 and breast cancer risk in this Chinese population (dominant genetic model in combined samples: OR = 1.20, 95% CI = 0.92-1.57). CONCLUSIONS/SIGNIFICANCE: Genotypes of rs2580520 at 1p11.2 suggest that Chinese women may have different breast cancer susceptibility loci, which may contribute to the development of breast cancer in this population.
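
    Under a recessive model, risk-allele homozygotes are compared against all other genotypes, and the odds ratio with its Wald-type 95% confidence interval follows from the resulting 2×2 table. A small sketch with invented genotype counts (not the study's data):

```python
import math

# Invented 2x2 table under a recessive model (risk homozygotes vs. the rest)
a, b = 120, 780   # cases:    homozygous carriers, all other genotypes
c, d = 80, 890    # controls: homozygous carriers, all other genotypes

or_hat = (a * d) / (b * c)
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)        # Woolf's method
lo = math.exp(math.log(or_hat) - 1.96 * se_log_or)
hi = math.exp(math.log(or_hat) + 1.96 * se_log_or)
print(f"OR = {or_hat:.2f}, 95% CI = {lo:.2f}-{hi:.2f}")
```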

  2. Experimental and numerical studies on two-stage combustion of biomass

    Energy Technology Data Exchange (ETDEWEB)

    Houshfar, Eshan

    2012-07-01

    In this thesis, two-stage combustion of biomass was investigated experimentally and numerically in a multifuel reactor. The following emissions issues have been the main focus of the work: (1) NOx and N2O, (2) unburnt species (CO and CxHy), and (3) corrosion-related emissions. The study focused on two-stage combustion in order to reduce pollutant emissions (primarily NOx emissions). It is well known that pollutant emissions are very dependent on process conditions such as temperature, reactant concentrations and residence times. On the other hand, emissions are also dependent on fuel properties (moisture content, volatiles, alkali content, etc.). A detailed study of the important parameters was performed with suitable biomass fuels in order to optimize the various process conditions. Different experimental studies were carried out on biomass fuels to study the effect of fuel properties and combustion parameters on pollutant emissions. Process conditions typical for biomass combustion processes were studied, using advanced experimental equipment. The experiments clearly showed the effects of staged air combustion, compared to non-staged combustion, on emission levels. A NOx reduction of up to 85% was reached with staged air combustion using demolition wood as fuel. An optimum primary excess air ratio of 0.8-0.95 was found to minimize NOx emissions in staged air combustion. Air staging had, however, a negative effect on N2O emissions. Even though the trends showed a very small reduction in the NOx level as temperature increased for non-staged combustion, the effect of temperature was not significant for NOx and CxHy in either staged air or non-staged combustion, while it had a great influence on the N2O and CO emissions, whose levels decreased with increasing temperature. Furthermore, flue gas recirculation (FGR) was used in combination with staged combustion to obtain an enhanced NOx reduction.

  3. THE ATACAMA COSMOLOGY TELESCOPE: DYNAMICAL MASSES AND SCALING RELATIONS FOR A SAMPLE OF MASSIVE SUNYAEV-ZEL'DOVICH EFFECT SELECTED GALAXY CLUSTERS

    Energy Technology Data Exchange (ETDEWEB)

    Sifon, Cristobal; Barrientos, L. Felipe; Gonzalez, Jorge; Infante, Leopoldo; Duenner, Rolando [Departamento de Astronomia y Astrofisica, Facultad de Fisica, Pontificia Universidad Catolica de Chile, Casilla 306, Santiago 22 (Chile); Menanteau, Felipe; Hughes, John P.; Baker, Andrew J. [Department of Physics and Astronomy, Rutgers University, 136 Frelinghuysen Road, Piscataway, NJ 08854 (United States); Hasselfield, Matthew [Department of Physics and Astronomy, University of British Columbia, Vancouver, BC V6T 1Z4 (Canada); Marriage, Tobias A.; Crichton, Devin; Gralla, Megan B. [Department of Physics and Astronomy, The Johns Hopkins University, Baltimore, MD 21218-2686 (United States); Addison, Graeme E.; Dunkley, Joanna [Sub-department of Astrophysics, University of Oxford, Denys Wilkinson Building, Keble Road, Oxford OX1 3RH (United Kingdom); Battaglia, Nick; Bond, J. Richard; Hajian, Amir [Canadian Institute for Theoretical Astrophysics, University of Toronto, Toronto, ON M5S 3H8 (Canada); Das, Sudeep [Berkeley Center for Cosmological Physics, LBL and Department of Physics, University of California, Berkeley, CA 94720 (United States); Devlin, Mark J. [Department of Physics and Astronomy, University of Pennsylvania, 209 South 33rd Street, Philadelphia, PA 19104 (United States); Hilton, Matt [School of Physics and Astronomy, University of Nottingham, University Park, Nottingham, NG7 2RD (United Kingdom); and others

    2013-07-20

    We present the first dynamical mass estimates and scaling relations for a sample of Sunyaev-Zel'dovich effect (SZE) selected galaxy clusters. The sample consists of 16 massive clusters detected with the Atacama Cosmology Telescope (ACT) over a 455 deg^2 area of the southern sky. Deep multi-object spectroscopic observations were taken to secure intermediate-resolution (R ~ 700-800) spectra and redshifts for ≈60 member galaxies on average per cluster. The dynamical masses M_200c of the clusters have been calculated using simulation-based scaling relations between velocity dispersion and mass. The sample has a median redshift z = 0.50 and a median mass M_200c ≈ 12 × 10^14 h_70^-1 M_⊙ with a lower limit M_200c ≈ 6 × 10^14 h_70^-1 M_⊙, consistent with the expectations for the ACT southern sky survey. These masses are compared to the ACT SZE properties of the sample, specifically, the match-filtered central SZE amplitude ỹ_0, the central Compton parameter y_0, and the integrated Compton signal Y_200c, which we use to derive SZE-mass scaling relations. All SZE estimators correlate with dynamical mass with low intrinsic scatter (≲20%), in agreement with numerical simulations. We explore the effects of various systematics on these scaling relations, including the correlation between observables and the influence of dynamically disturbed clusters. Using the three-dimensional information available, we divide the sample into relaxed and disturbed clusters and find that ≈50% of the clusters are disturbed. There are hints that disturbed systems might bias the scaling relations, but given the current sample sizes, these differences are not significant; further studies including more clusters are required to assess the impact of these clusters on the scaling relations.
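
    The simulation-based relation referred to here is a power law between a halo's dark-matter velocity dispersion and its mass, σ_DM = σ_15 [h(z) M_200c / 10^15 M_⊙]^α, with σ_15 ≈ 1083 km/s and α ≈ 0.336 in the Evrard et al. (2008) calibration; inverting it turns a measured dispersion into a dynamical mass. A rough sketch of that inversion (it ignores the galaxy-versus-dark-matter velocity bias and the measurement scatter that the paper treats carefully; the example dispersion and h(z) are made up):

```python
def m200c_from_sigma(sigma_kms, hz, sigma15=1082.9, alpha=0.3361):
    """Invert sigma_DM = sigma15 * (h(z) * M200c / 1e15 Msun)**alpha
    (Evrard et al. 2008 calibration). Returns M200c in Msun."""
    return 1e15 * (sigma_kms / sigma15) ** (1 / alpha) / hz

# A cluster with a 1300 km/s velocity dispersion at z ~ 0.5 (illustrative),
# where the normalized Hubble parameter h(z) = H(z)/(100 km/s/Mpc) ~ 0.9
print(f"M200c ~ {m200c_from_sigma(1300, hz=0.9):.2e} Msun")
```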

  4. Hydrogen and methane production from condensed molasses fermentation soluble by a two-stage anaerobic process

    Energy Technology Data Exchange (ETDEWEB)

    Lin, Chiu-Yue; Liang, You-Chyuan; Lay, Chyi-How [Feng Chia Univ., Taichung, Taiwan (China). Dept. of Environmental Engineering and Science; Chen, Chin-Chao [Chungchou Institute of Technology, Taiwan (China). Environmental Resources Lab.; Chang, Feng-Yuan [Feng Chia Univ., Taichung, Taiwan (China). Research Center for Energy and Resources

    2010-07-01

    The treatment of condensed molasses fermentation soluble (CMS) is a troublesome problem for glutamate manufacturing factories. However, CMS has high carbohydrate and nutrient contents and is an attractive, commercially promising feedstock for bioenergy production. The aim of this paper is to produce hydrogen and methane by a two-stage anaerobic fermentation process. Fermentative hydrogen production from CMS was conducted in a continuously stirred tank bioreactor (working volume 4 L) operated at a hydraulic retention time (HRT) of 8 h, an organic loading rate (OLR) of 120 kg COD/m^3-d, a temperature of 35 °C and pH 5.5, with sewage sludge as seed. Anaerobic methane production was conducted in an up-flow bioreactor (working volume 11 L) operated at an HRT of 24-60 h, an OLR of 4.0-10 kg COD/m^3-d, a temperature of 35 °C and pH 7.0, using anaerobic granular sludge from a fructose manufacturing factory as the seed and the effluent from the hydrogen production process as the substrate. These two reactors have been operated successfully for more than 400 days. The steady-state hydrogen content, hydrogen production rate and hydrogen production yield in the hydrogen fermentation system were 37%, 169 mmol H2/L-d and 93 mmol H2/g carbohydrate removed, respectively. In the methane fermentation system, the peak methane content and methane production rate were 66.5% and 86.8 mmol CH4/L-d, with a methane production yield of 189.3 mmol CH4/g COD removed at an OLR of 10 kg/m^3-d. The energy production rate was used to elucidate the energy efficiency of this two-stage process. A total energy production rate of 133.3 kJ/L/d was obtained, with 5.5 kJ/L/d from hydrogen fermentation and 127.8 kJ/L/d from methane fermentation. (orig.)

  5. Adaptive Urban Stormwater Management Using a Two-stage Stochastic Optimization Model

    Science.gov (United States)

    Hung, F.; Hobbs, B. F.; McGarity, A. E.

    2014-12-01

    In many older cities, stormwater results in combined sewer overflows (CSOs) and consequent water quality impairments. Because of the expense of traditional approaches for controlling CSOs, cities are considering the use of green infrastructure (GI) to reduce runoff and pollutants. Examples of GI include tree trenches, rain gardens, green roofs, and rain barrels. However, the cost and effectiveness of GI are uncertain, especially at the watershed scale. We present a two-stage stochastic extension of the Stormwater Investment Strategy Evaluation (StormWISE) model (A. McGarity, JWRPM, 2012, 111-24) to explicitly model and optimize under these uncertainties in an adaptive management framework. A two-stage model represents the immediate commitment of resources ("here & now") followed by later investment and adaptation decisions ("wait & see"). A case study is presented for Philadelphia, which intends to extensively deploy GI over the next two decades (PWD, "Green City, Clean Water - Implementation and Adaptive Management Plan," 2011). After first-stage decisions are made, the model updates the stochastic objective and constraints (learning). We model two types of "learning" about GI cost and performance. One assumes that learning occurs over time, is automatic, and does not depend on what has been done in stage one (basic model). The other considers learning resulting from active experimentation and learning-by-doing (advanced model). Both require expert probability elicitations, and learning from research and monitoring is modelled by Bayesian updating (as in S. Jacobi et al., JWRPM, 2013, 534-43). The model allocates limited financial resources to GI investments over time to achieve multiple objectives with a given reliability. Objectives include minimizing construction and O&M costs; achieving nutrient, sediment, and runoff volume targets; and community concerns, such as aesthetics, CO2 emissions, heat islands, and recreational values. CVaR (Conditional Value at Risk) and
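
    The core structure of such a model is a two-stage stochastic program: first-stage ("here & now") investments are chosen before the uncertainty resolves, and recourse ("wait & see") investments are chosen per scenario afterwards, minimizing expected total cost subject to per-scenario performance targets. Below is a deliberately tiny scipy sketch of that structure, with one GI technology and three cost/effectiveness scenarios; every number is invented, and the actual StormWISE extension is far richer.

```python
import numpy as np
from scipy.optimize import linprog

# Scenarios: (probability, stage-2 unit cost $/m2, runoff captured m3 per m2)
scenarios = [(0.3, 14.0, 0.6), (0.5, 11.0, 0.9), (0.2, 9.0, 1.2)]
c1 = 10.0        # stage-1 unit cost, $/m2
target = 900.0   # runoff capture (m3) required in every scenario

# Decision vector x = [x1, x2_s1, x2_s2, x2_s3]: m2 of GI built in each stage
cost = [c1] + [p * c2 for p, c2, _ in scenarios]   # expected total cost
A_ub, b_ub = [], []
for s, (_, _, eff) in enumerate(scenarios):
    row = [-eff, 0.0, 0.0, 0.0]
    row[1 + s] = -eff          # effectiveness also varies by scenario
    A_ub.append(row)           # encodes eff * (x1 + x2_s) >= target
    b_ub.append(-target)

res = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 4)
print(f"build now: {res.x[0]:.0f} m2; "
      f"scenario recourse: {np.round(res.x[1:]).astype(int)} m2")
```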

  6. Combined two-stage xanthate processes for the treatment of copper-containing wastewater

    Energy Technology Data Exchange (ETDEWEB)

    Chang, Y.K. [Department of Safety Health and Environmental Engineering, Central Taiwan University of Sciences and Technology, Taichung (Taiwan); Leu, M.H. [Department of Environmental Engineering, Kun Shan University of Technology, Yung-Kang City (Taiwan); Chang, J.E.; Lin, T.F.; Chen, T.C. [Department of Environmental Engineering, National Cheng Kung University, Tainan City (Taiwan); Chiang, L.C.; Shih, P.H. [Department of Environmental Engineering and Science, Fooyin University, Kaohsiung County (Taiwan)

    2007-02-15

    Heavy metal removal is mainly conducted by adjusting the wastewater pH to form metal hydroxide precipitates. In recent years, however, the xanthate process, with its high metal removal efficiency, has attracted attention for the sorption/desorption of heavy metals from aqueous solutions. In this study, two kinds of agricultural xanthates, insoluble peanut-shell xanthate (IPX) and insoluble starch xanthate (ISX), were used as sorbents to treat copper-containing wastewater (Cu concentration from 50 to 1,000 mg/L). The experimental results showed that the maximum Cu removal efficiency by IPX was 93.5% at high Cu concentrations, with 81.1% of the copper removed rapidly within one minute. Moreover, copper-containing wastewater could also be treated by ISX over a wide range (50 to 1,000 mg/L) to a level that meets the Taiwan EPA's effluent regulations (3 mg/L) within 20 minutes. Whereas IPX had a maximum binding capacity for copper of 185 mg/g IPX, the capacity for ISX was 120 mg/g ISX. IPX is cheaper than ISX and has the benefits of a rapid reaction and a high copper binding capacity; however, it exhibits a lower copper removal efficiency. A sequential IPX and ISX treatment (i.e., a two-stage xanthate process) could therefore be an excellent alternative. The results obtained using the two-stage xanthate process revealed effective copper treatment: the effluent concentration (C_e) was below 0.6 mg/L for an influent (C_0) of 1,001 mg/L at pH 4 and a dilution rate of 0.6 h^-1. Furthermore, the Cu-ISX complex formed could meet the Taiwan TCLP regulations and be classified as non-hazardous waste. The xanthation of agricultural wastes offers a comprehensive strategy for solving both agricultural waste disposal and metal-containing wastewater treatment problems. (Abstract Copyright [2007], Wiley Periodicals, Inc.)

  7. Studies on quantitative physiology of Trichoderma reesei with two-stage continuous culture for cellulase production

    Energy Technology Data Exchange (ETDEWEB)

    Ryu, D; Andreotti, R; Mandels, M; Gallo, B; Reese, E T

    1979-11-01

    By employing a two-stage continuous-culture system, some of the more important physiological parameters involved in cellulase biosynthesis have been evaluated, with the ultimate objective of designing an optimally controlled cellulase process. The two-stage continuous-culture system was run for a period of 1350 hr with Trichoderma reesei strain MCG-77. The temperature and pH were controlled at 32 °C and pH 4.5 for the first stage (growth) and 28 °C and pH 3.5 for the second stage (enzyme production). Lactose was the only carbon source for both stages. The ratio of the specific uptake rate of carbon to that of nitrogen, Q(C)/Q(N), that supported good cell growth ranged from 11 to 15, and the ratio for maximum specific enzyme productivity ranged from 5 to 13. The maintenance coefficients determined for oxygen, M_O, and for the carbon source, M_C, are 0.85 mmol O2/g biomass/hr and 0.14 mmol hexose/g biomass/hr, respectively. The yield constants determined are: Y_X/O = 32.3 g biomass/mol O2, Y_X/C = 1.1 g biomass/g C or Y_X/C = 0.44 g biomass/g hexose, Y_X/N = 12.5 g biomass/g nitrogen for the cell growth stage, and Y_X/N = 16.6 g biomass/g nitrogen for the enzyme production stage. Enzyme was produced only in the second stage. Volumetric and specific enzyme productivities obtained were 90 IU/liter/hr and 8 IU/g biomass/hr, respectively. The maximum specific enzyme productivity observed was 14.8 IU/g biomass/hr. The optimal dilution rate in the second stage that corresponded to the maximum enzyme productivity was 0.026-0.028 hr^-1, and the specific growth rate in the second stage that supported maximum specific enzyme productivity was equal to or slightly less than zero.
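
    The maintenance coefficients and yield constants above fit the standard Pirt relation, in which the specific uptake rate of a substrate splits into a growth term and a maintenance term: q = μ / Y_X/S + m. A small sketch using the oxygen figures from the abstract (the growth rates are arbitrary examples; in a steady-state chemostat, μ equals the dilution rate D):

```python
# Pirt model: specific oxygen uptake q_O2 = mu / Y_XO + m_O
Y_XO = 32.3e-3   # g biomass per mmol O2 (32.3 g/mol O2, from the abstract)
m_O = 0.85       # mmol O2 / g biomass / hr (maintenance, from the abstract)

for mu in (0.0, 0.028, 0.10):            # specific growth rates, 1/hr
    q_o2 = mu / Y_XO + m_O
    print(f"mu = {mu:.3f} 1/hr  ->  q_O2 = {q_o2:.2f} mmol O2/g/hr")
```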

  8. Removal of cesium from simulated liquid waste with countercurrent two-stage adsorption followed by microfiltration

    Energy Technology Data Exchange (ETDEWEB)

    Han, Fei; Zhang, Guang-Hui [School of Environmental Science and Engineering, Tianjin University, Tianjin, 300072 (China); Gu, Ping, E-mail: guping@tju.edu.cn [School of Environmental Science and Engineering, Tianjin University, Tianjin, 300072 (China)

    2012-07-30

    Highlights: • The adsorption isotherm of cesium on copper ferrocyanide followed a Freundlich model. • The decontamination factor for cesium was higher in the lab-scale test than in the jar test. • A countercurrent two-stage adsorption-microfiltration process was achieved. • The cesium concentration in the effluent could be calculated. • It is a new cesium removal process with a higher decontamination factor. - Abstract: Copper ferrocyanide (CuFC) was used as an adsorbent to remove cesium. Jar test results showed that the adsorption capacity of CuFC was better than that of potassium zinc hexacyanoferrate. Lab-scale tests were performed with an adsorption-microfiltration process, and the mean decontamination factor (DF) was 463 when the initial cesium concentration was 101.3 µg/L, the CuFC dosage was 40 mg/L and the adsorption time was 20 min. The cesium concentration in the effluent continuously decreased with operation time, which indicated that the used adsorbent retained its adsorption capacity. To use this capacity, experiments on a countercurrent two-stage adsorption (CTA)-microfiltration (MF) process were carried out with CuFC adsorption combined with membrane separation. A calculation method for determining the cesium concentration in the effluent was given, and batch tests in a pressure cup were performed to verify it. The results showed that the experimental values fitted well with the calculated values in the CTA-MF process. The mean DF was 1123 when the dilution factor was 0.4, the initial cesium concentration was 98.75 µg/L, and the CuFC dosage and adsorption time were the same as those used in the lab-scale test. The DF obtained by the CTA-MF process was more than three times higher than that of single-stage adsorption in the jar test.
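
    To see why staging raises the decontamination factor, consider the mass balance for a single well-mixed equilibrium stage, c_in = c_out + dose·q(c_out), with a Freundlich isotherm q = K·c^(1/n). The sketch below chains two such stages with fresh adsorbent each time (a simpler crosscurrent arrangement, not the paper's countercurrent scheme, and with placeholder isotherm constants rather than the paper's fit):

```python
from scipy.optimize import brentq

def stage_effluent(c_in, dose, K, n):
    """Equilibrium effluent concentration of one well-mixed adsorption stage.
    Mass balance: c_in = c_out + dose * q(c_out), Freundlich q = K * c**(1/n)."""
    f = lambda c: c_in - c - dose * K * c ** (1 / n)
    return brentq(f, 0.0, c_in)

# Placeholder isotherm parameters (NOT the paper's fitted values)
K, n = 0.2, 2.0        # Freundlich constants; q in ug/mg, c in ug/L
dose = 40.0            # CuFC dose per stage, mg/L (dose used in the paper)
c0 = 98.75             # influent cesium, ug/L (as in the lab-scale test)

c1 = stage_effluent(c0, dose, K, n)    # first contact
c2 = stage_effluent(c1, dose, K, n)    # second contact with fresh adsorbent
print(f"stage 1: {c1:.1f} ug/L, stage 2: {c2:.1f} ug/L, DF = {c0 / c2:.0f}")
```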

  9. A farm-scale pilot plant for biohydrogen and biomethane production by two-stage fermentation

    Directory of Open Access Journals (Sweden)

    R. Oberti

    2013-09-01

    Full Text Available Hydrogen is considered one of the main possible energy carriers for the future, thanks to its unique environmental properties. Indeed, its energy content (120 MJ/kg) can be exploited virtually without emitting any exhaust into the atmosphere except water. Renewable production of hydrogen can be obtained through the common biological processes underlying anaerobic digestion, a well-established technology in use at farm scale for treating different biomasses and residues. Although two-stage hydrogen- and methane-producing fermentation is a simple variant of traditional anaerobic digestion, it is a relatively new approach studied mainly at laboratory scale. It is based on biomass fermentation in two separate, sequential stages, each maintaining conditions optimized to promote specific bacterial consortia: hydrogen is produced in the first, acidophilic reactor, while the volatile fatty acids-rich effluent is sent to the second reactor, where traditional methane-rich biogas production is accomplished. A two-stage pilot-scale plant was designed, manufactured and installed at the experimental farm of the University of Milano and operated using a mixture of livestock effluents and sugar/starch-rich residues (rotten fruits and potatoes and expired fruit juices), a feedstock based on waste biomasses directly available in the rural area where the plant is installed. The hydrogenic and methanogenic reactors, both of CSTR type, had total volumes of 0.7 m³ and 3.8 m³ respectively, were operated in thermophilic conditions (55 ± 2 °C) without any external pH control, and were fully automated. After a brief description of the requirements of the system, this contribution gives a detailed description of its components and of engineering solutions to the problems encountered during plant realization and start-up. The paper also discusses the results obtained in a first experimental run, which led to production in the range of previous

  10. Comparative Analysis of Direct Hospital Care Costs between Aseptic and Two-Stage Septic Knee Revision

    Science.gov (United States)

    Kasch, Richard; Merk, Sebastian; Assmann, Grit; Lahm, Andreas; Napp, Matthias; Merk, Harry; Flessa, Steffen

    2017-01-01

    Background The most common intermediate- and long-term complications of total knee arthroplasty (TKA) include aseptic and septic failure of prosthetic joints. These complications cause suffering, and their management is expensive. In the future the number of revision TKAs will increase, which involves a greater financial burden. Little concrete data is available on the direct costs of aseptic and two-stage septic knee revisions with an in-depth analysis of septic explantation and implantation. Questions/Purposes A retrospective consecutive analysis of the major partial costs involved in revision TKA for aseptic and septic failure was undertaken to compare 1) demographic and clinical characteristics, and 2) variable direct costs (from a hospital department's perspective) between patients who underwent single-stage aseptic and two-stage septic revision of TKA in a hospital providing maximum care. We separately analyze the explantation and implantation procedures in septic revision cases and identify the major cost drivers of knee revision operations. Methods A total of 106 consecutive patients (71 aseptic and 35 septic) was included. All direct costs of diagnosis, surgery, and treatment from the hospital department's perspective were calculated as real purchase prices. Personnel involvement was calculated in units of minutes. Results Aseptic and septic revisions differed significantly in terms of length of hospital stay (15.2 vs. 39.9 days), number of reported secondary diagnoses (6.3 vs. 9.8) and incision-suture time (108.3 min vs. 193.2 min). The management of septic revision TKA was significantly more expensive than that of aseptic failure ($12,223.79 vs. $6,749.43). The mean costs of the explantation stage ($4,540.46) were lower than those of aseptic revision TKA ($6,749.43), which were in turn lower than those of the septic implantation stage ($7,683.33). The mean costs of the hospital stays were not comparable, as they differed significantly. The major cost drivers were the cost of the implant and

  11. Two-stage laparoscopic approaches for high anorectal malformation: transumbilical colostomy and anorectoplasty.

    Science.gov (United States)

    Yang, Li; Tang, Shao-Tao; Li, Shuai; Aubdoollah, T H; Cao, Guo-Qing; Lei, Hai-Yan; Wang, Xin-Xing

    2014-11-01

    Trans-umbilical colostomy (TUC) has previously been created in patients with Hirschsprung's disease and intermediate anorectal malformation (ARM), but not in patients with high ARM. The purposes of this study were to assess the feasibility, safety, complications and cosmetic results of TUC in a divided fashion, with stoma closure and laparoscopic-assisted anorectoplasty (LAARP) subsequently completed simultaneously by using the colostomy site for a laparoscopic port in high-ARM patients. Twenty male patients with high ARMs were chosen for this two-stage procedure. The first stage consisted of creating the TUC as a double-barreled colostomy with a high chimney at the umbilicus; the loop was divided at the same time, in such a way that the two diverting ends were located at the umbilical incision, with the distal end half closed and slightly higher than the proximal end. In the second stage, 3 to 7 months later, the stoma was closed through a peristomal skin incision followed by end-to-end anastomosis, and LAARP was simultaneously performed by placing a laparoscopic port at the umbilicus, which was previously the colostomy site. Umbilical wound closure was performed in a semi-opened fashion to create a deep umbilicus. TUC and LAARP were successfully performed in 20 patients. Four cases with bladder neck fistulas and 16 cases with prostatic urethra fistulas were found. Postoperative complications were rectal mucosal prolapse in three cases, anal stricture in two cases and wound dehiscence in one case. No umbilical ring narrowing, parastomal hernia or obstructive symptoms were observed, and neither umbilical nor perineal wound infection occurred. Stoma care was easily carried out by attaching a stoma bag. Healing of the umbilical wounds after the second stage was excellent. Early functional stooling outcomes were satisfactory. The umbilicus may be an alternative stoma site for double-barreled colostomy in high-ARM patients. The two-stage laparoscopic

  12. Numerical Investigation and Experimental Demonstration of Chaos from Two-Stage Colpitts Oscillator in the Ultrahigh Frequency Range

    DEFF Research Database (Denmark)

    Bumeliene, S.; Tamasevicius, A.; Mykolaitis, G.

    2006-01-01

    A hardware prototype of the two-stage Colpitts oscillator, employing microwave BFG520-type transistors with a threshold frequency of 9 GHz and designed to operate in the ultrahigh frequency range (300–1000 MHz), is described. In addition to the intrinsic two-stage oscillator, the practical circuit contains an emitter follower acting as a buffer and minimizing the influence of the load. The circuit is investigated both numerically and experimentally. Typical phase portraits, Lyapunov exponents, Lyapunov dimension and broadband continuous power spectra are presented. The main advantage...

  13. Evaluation of immunization coverage in the rural area of Pune, Maharashtra, using the 30 cluster sampling technique

    Directory of Open Access Journals (Sweden)

    Pankaj Kumar Gupta

    2013-01-01

    Full Text Available Background: Infectious diseases are a major cause of morbidity and mortality in children. One of the most cost-effective and easy methods for child survival is immunization. Despite all the efforts put in by governmental and nongovernmental institutes for 100% immunization coverage, there are still pockets of low-coverage areas. In India, immunization services are offered free in public health facilities, but, despite rapid increases, the immunization rate remains low in some areas. The Millennium Development Goals (MDG) indicators also give importance to immunization. Objective: To assess the immunization coverage in the rural area of Pune. Materials and Methods: A cross-sectional study was conducted in the field practice area of the Rural Health Training Center (RHTC) using the WHO's 30 cluster sampling method for evaluation of immunization coverage. Results: A total of 1913 houses were surveyed. A total of 210 children aged 12-23 months were included in the study. It was found that 86.67% of the children were fully immunized against all six vaccine-preventable diseases. The proportion of fully immunized children was marginally higher in males (87.61%) than in females (85.57%), and the immunization card was available for 60.95% of the subjects. The most common cause for partial immunization was that the time of immunization was inconvenient (36%). Conclusion: Sustained efforts are required to achieve universal coverage of immunization in the rural area of Pune district.
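
    For readers unfamiliar with the WHO method, its first stage selects 30 clusters with probability proportional to population size, and the second stage surveys seven children per cluster (here, 30 × 7 = 210). A minimal sketch of the first-stage systematic PPS draw follows; the village names and populations are invented.

      import random

      villages = {f"village_{i:02d}": random.Random(i).randint(200, 5000) for i in range(60)}

      def pps_clusters(pop_by_unit, n_clusters=30, seed=42):
          # Systematic PPS: walk the cumulative population with a fixed interval.
          rng = random.Random(seed)
          total = sum(pop_by_unit.values())
          interval = total / n_clusters
          start = rng.uniform(0, interval)
          targets = [start + k * interval for k in range(n_clusters)]
          chosen, cum, idx = [], 0, 0
          for unit, size in pop_by_unit.items():
              cum += size
              while idx < n_clusters and targets[idx] <= cum:
                  chosen.append(unit)   # a large village can host more than one cluster
                  idx += 1
          return chosen

      print(pps_clusters(villages))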

  14. A large sample of shear-selected clusters from the Hyper Suprime-Cam Subaru Strategic Program S16A Wide field mass maps

    Science.gov (United States)

    Miyazaki, Satoshi; Oguri, Masamune; Hamana, Takashi; Shirasaki, Masato; Koike, Michitaro; Komiyama, Yutaka; Umetsu, Keiichi; Utsumi, Yousuke; Okabe, Nobuhiro; More, Surhud; Medezinski, Elinor; Lin, Yen-Ting; Miyatake, Hironao; Murayama, Hitoshi; Ota, Naomi; Mitsuishi, Ikuyuki

    2018-01-01

    We present the result of searching for clusters of galaxies based on weak gravitational lensing analysis of the ~160 deg² area surveyed by Hyper Suprime-Cam (HSC) as a Subaru Strategic Program. HSC is a new prime focus optical imager with a 1.5°-diameter field of view on the 8.2 m Subaru telescope. The superb median seeing of 0.56" on the HSC i-band images allows the reconstruction of high angular resolution mass maps via weak lensing, which is crucial for the weak lensing cluster search. We identify 65 mass map peaks with a signal-to-noise (S/N) ratio larger than 4.7, and carefully examine their properties by cross-matching the clusters with optical and X-ray cluster catalogs. We find that all 39 peaks with S/N > 5.1 have counterparts in the optical cluster catalogs, and only 2 out of the 65 peaks are probably false positives. The upper limits of X-ray luminosities from the ROSAT All Sky Survey (RASS) imply the existence of an X-ray underluminous cluster population. We show that the X-rays from the shear-selected clusters can be statistically detected by stacking the RASS images. The inferred average X-ray luminosity is about half that of the X-ray-selected clusters of the same mass. The radial profile of the dark matter distribution derived from the stacking analysis is well modeled by the Navarro-Frenk-White profile with a small concentration parameter value of c_500 ~ 2.5, which suggests that the selection bias on the orientation or the internal structure of our shear-selected cluster sample is not strong.
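
    As a quick reference for the quoted fit, a minimal sketch of the Navarro-Frenk-White profile evaluated at c_500 = 2.5 is given below; the density normalization is arbitrary here, so only the shape of the profile is meaningful.

      import numpy as np

      c500 = 2.5            # concentration from the stacking analysis above
      rho_crit = 1.0        # arbitrary units; the normalization is not meaningful here

      # Characteristic overdensity for a sphere whose mean density is 500 * rho_crit.
      delta_c = (500.0 / 3.0) * c500**3 / (np.log(1 + c500) - c500 / (1 + c500))

      def nfw(r_over_r500):
          x = r_over_r500 * c500        # r / r_s, since r_s = r500 / c500
          return delta_c * rho_crit / (x * (1 + x) ** 2)

      for r in (0.1, 0.5, 1.0, 2.0):
          print(f"r/r500 = {r:3.1f} -> rho/rho_crit = {nfw(r):9.2f}")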

  15. A New Cost-Effective Multi-Drive Solution based on a Two-Stage Direct Power Electronic Conversion Topology

    DEFF Research Database (Denmark)

    Klumpner, Christian; Blaabjerg, Frede

    2002-01-01

    … of a protection circuit involving twelve diodes with full voltage/current ratings, used only during faulty situations, makes this topology not so attractive. Lately, two-stage Direct Power Electronic Conversion (DPEC) topologies have been proposed, providing similar functionality to a matrix converter but allowing … shared by many loads, making this topology more cost effective. The functionality of the proposed two-stage multi-drive direct power electronic conversion topology is validated by experiments on a realistic laboratory prototype.

  16. Preservation Effect of Two-Stage Cinnamon Bark (Cinnamomum Burmanii) Oleoresin Microcapsules On Vacuum-Packed Ground Beef During Refrigerated Storage

    Science.gov (United States)

    Irfiana, D.; Utami, R.; Khasanah, L. U.; Manuhara, G. J.

    2017-04-01

    The purpose of this study was to determine the effect of two-stage cinnamon bark oleoresin microcapsules (0%, 0.5% and 1%) on the TPC (total plate count), TBA (thiobarbituric acid) value, pH, and RGB color (red, green, and blue) of vacuum-packed ground beef during refrigerated storage (at 0, 4, 8, 12, and 16 days). This study showed that the addition of two-stage cinnamon bark oleoresin microcapsules affected the quality of vacuum-packed ground beef during 16 days of refrigerated storage. The results showed that the TPC values of the vacuum-packed ground beef samples with 0.5% and 1% microcapsules added were lower than the value of the control sample. The TPC values of the control sample and the samples with 0.5% and 1% microcapsules were 5.94, 5.46, and 5.16 log CFU/g, respectively, and the corresponding TBA values were 0.055, 0.041, and 0.044 mg malonaldehyde/kg on the 16th day of storage. The addition of two-stage cinnamon bark oleoresin microcapsules could inhibit microbial growth and reduce oxidation in vacuum-packed ground beef. Moreover, the changes in pH and RGB color of the vacuum-packed ground beef with 0.5% and 1% microcapsules added were smaller than those of the control sample. The addition of 1% microcapsules showed the best preservation effect on the vacuum-packed ground beef.

  17. Normal Theory Two-Stage ML Estimator When Data Are Missing at the Item Level.

    Science.gov (United States)

    Savalei, Victoria; Rhemtulla, Mijke

    2017-08-01

    In many modeling contexts, the variables in the model are linear composites of the raw items measured for each participant; for instance, regression and path analysis models rely on scale scores, and structural equation models often use parcels as indicators of latent constructs. Currently, no analytic estimation method exists to appropriately handle missing data at the item level. Item-level multiple imputation (MI), however, can handle such missing data straightforwardly. In this article, we develop an analytic approach for dealing with item-level missing data, that is, one that obtains a unique set of parameter estimates directly from the incomplete data set and does not require imputations. The proposed approach is a variant of the two-stage maximum likelihood (TSML) methodology, and it is the analytic equivalent of item-level MI. We compare the new TSML approach to three existing alternatives for handling item-level missing data: scale-level full information maximum likelihood, available-case maximum likelihood, and item-level MI. We find that the TSML approach is the best analytic approach, and its performance is similar to item-level MI. We recommend its implementation in popular software and its further study.

  18. Biogas production of Chicken Manure by Two-stage fermentation process

    Science.gov (United States)

    Liu, Xin Yuan; Wang, Jing Jing; Nie, Jia Min; Wu, Nan; Yang, Fang; Yang, Ren Jie

    2018-06-01

    This paper describes a batch experiment on pre-acidification treatment and methane production from chicken manure using a two-stage anaerobic fermentation process. The results show that acetate was the main component of the volatile fatty acids produced at the end of the pre-acidification stage, accounting for 68% of the total amount. Daily biogas production went through three peak periods in the methane production stage; the methane content reached 60% in the second period and then slowly declined to 44.5% in the third period. The cumulative methane production was fitted by a modified Gompertz equation, and the kinetic parameters (the methane production potential, the maximum methane production rate and the lag phase time) were 345.2 ml, 0.948 ml/h and 343.5 h, respectively. A methane yield of 183 ml CH4/g VS_removed during the methane production stage and a VS removal efficiency of 52.7% for the whole fermentation process were achieved.
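
    The modified Gompertz curve is easy to evaluate with the fitted parameters quoted above; a minimal sketch (using the standard form of the equation, which the abstract does not spell out) follows:

      import math

      P, R_m, lam = 345.2, 0.948, 343.5   # potential (ml), max rate (ml/h), lag (h)

      def gompertz(t):
          # Modified Gompertz: M(t) = P * exp(-exp(R_m * e / P * (lam - t) + 1))
          return P * math.exp(-math.exp(R_m * math.e / P * (lam - t) + 1.0))

      for t in (200, 400, 800, 1600):     # hours
          print(f"t = {t:4d} h -> cumulative CH4 = {gompertz(t):6.1f} ml")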

  19. A Two-Stage Queue Model to Optimize Layout of Urban Drainage System considering Extreme Rainstorms

    Directory of Open Access Journals (Sweden)

    Xinhua He

    2017-01-01

    Full Text Available Extreme rainstorms are a main cause of urban floods when the urban drainage system cannot discharge stormwater successfully. This paper investigates the distribution features of rainstorms and the draining process of urban drainage systems, and uses a two-stage single-counter queue method, M/M/1→M/D/1, to model an urban drainage system. The model emphasizes the randomness of extreme rainstorms, the fuzziness of the draining process, and the construction and operation cost of the drainage system. Its two objectives are the total cost of construction and operation and the overall sojourn time of stormwater. An improved genetic algorithm, incorporating the stochastic and fuzzy characteristics of the whole drainage process, is designed to solve this complex nondeterministic problem. A numerical example in Shanghai illustrates how to implement the model, and comparisons with alternative algorithms show its performance in computational flexibility and efficiency. Discussions on the sensitivity of four main parameters, that is, the number of pump stations, drainage pipe diameter, rainstorm precipitation intensity, and confidence levels, are also presented to provide guidance for designing urban drainage systems.
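
    The two queueing stages have textbook mean sojourn times: 1/(μ−λ) for the M/M/1 stage and, by the Pollaczek-Khinchine formula, d + λd²/(2(1−ρ)) for the M/D/1 stage (Burke's theorem makes the Poisson-input assumption for the second stage consistent). A minimal sketch with invented rates, not the paper's calibrated values:

      lam = 0.8       # stormwater arrival rate (units per time step), invented
      mu1 = 1.0       # exponential service rate of stage 1 (collection network)
      d2 = 0.9        # deterministic service time of stage 2 (pump station)

      rho1, rho2 = lam / mu1, lam * d2
      assert rho1 < 1 and rho2 < 1, "both stages must be stable"

      W1 = 1.0 / (mu1 - lam)                          # M/M/1 mean sojourn time
      W2 = d2 + lam * d2**2 / (2.0 * (1.0 - rho2))    # M/D/1: service + P-K waiting time
      print(f"stage 1: {W1:.2f}, stage 2: {W2:.2f}, total sojourn: {W1 + W2:.2f}")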

  20. A Risk-Based Interval Two-Stage Programming Model for Agricultural System Management under Uncertainty

    Directory of Open Access Journals (Sweden)

    Ye Xu

    2016-01-01

    Full Text Available Nonpoint source (NPS) pollution caused by agricultural activities is the main reason that water quality in a watershed worsens, sometimes to the point of deterioration. Moreover, pollution control is accompanied by falling revenues for the agricultural system. How to design and generate a cost-effective and environmentally friendly agricultural production pattern is a critical issue for local managers. In this study, a risk-based interval two-stage programming model (RBITSP) was developed. Compared to the general ITSP model, the significant contribution of the RBITSP model is that it emphasizes the importance of financial risk under various probabilistic levels, rather than concentrating only on expected economic benefit (risk being expressed as the probability of not meeting a target profit under each individual scenario realization). This effectively avoids the inaccuracy of solutions caused by a traditional expected-value objective function and generates a variety of solutions by adjusting weight coefficients, reflecting the trade-off between system economy and reliability. A case study of agricultural production management in the Tai Lake watershed was used to demonstrate the superiority of the proposed model. The results obtained can serve as a base for designing land-structure adjustment patterns and farmland retirement schemes and for balancing system benefit, system-failure risk, and water-body protection.

  1. A two-stage bioprocess for hydrogen and methane production from rice straw bioethanol residues.

    Science.gov (United States)

    Cheng, Hai-Hsuan; Whang, Liang-Ming; Wu, Chao-Wei; Chung, Man-Chien

    2012-06-01

    This study evaluates a two-stage bioprocess for recovering hydrogen and methane while treating the organic residues of fermentative bioethanol production from rice straw. The results indicate that controlling a proper volumetric loading rate, substrate-to-biomass ratio, or F/M ratio is important for maximizing biohydrogen production from rice straw bioethanol residues. Clostridium tyrobutyricum, identified as the major hydrogen-producing bacterium enriched in the hydrogen bioreactor, likely utilizes lactate and acetate for biohydrogen production. The occurrence of acetogenesis during biohydrogen fermentation may reduce the B/A ratio and lead to lower hydrogen production. Organic residues remaining in the effluent of the hydrogen bioreactor can be effectively converted to methane at a rate of 2.8 mmol CH4/g VSS/h at a VLR of 4.6 kg COD/m³/d. Finally, approximately 75% of the COD in rice straw bioethanol residues can be removed, of which 1.3% and 66.1% can be recovered in the forms of hydrogen and methane, respectively. Copyright © 2012 Elsevier Ltd. All rights reserved.

  2. Antioxidant activity and total phenolic content of Moringa oleifera leaves in two stages of maturity.

    Science.gov (United States)

    Sreelatha, S; Padma, P R

    2009-12-01

    Antioxidants play an important role in inhibiting and scavenging free radicals, thus protecting humans against infections and degenerative diseases. Current research is now directed towards natural antioxidants of plant origin because of their therapeutic safety. Moringa oleifera is used in Indian traditional medicine for a wide range of ailments. To understand the mechanism of its pharmacological actions, the antioxidant properties of Moringa oleifera leaf extracts were tested at two stages of maturity using standard in vitro models. The successive aqueous extract of Moringa oleifera exhibited a strong scavenging effect on the 2,2-diphenyl-1-picrylhydrazyl (DPPH) free radical, superoxide and nitric oxide radicals, and inhibited lipid peroxidation. The free radical scavenging effect of the Moringa oleifera leaf extract was comparable with that of the reference antioxidants. The data obtained in the present study suggest that extracts of both mature and tender Moringa oleifera leaves have potent antioxidant activity against free radicals, prevent oxidative damage to major biomolecules and afford significant protection against oxidative damage.

  3. Spectral Characteristic Based on Fabry–Pérot Laser Diode with Two-Stage Optical Feedback

    International Nuclear Information System (INIS)

    Wu Jian-Wei; Nakarmi Bikash

    2013-01-01

    An optical device consisting of a multi-mode Fabry–Pérot laser diode (MMFP-LD) with two-stage optical feedback is proposed and experimentally demonstrated. The results show that single-mode output with a side-mode suppression ratio (SMSR) of ~21.7 dB is attained by using the first-stage feedback. By using the second-stage feedback, the SMSR of single-mode operation could be increased to ~28.5 dB when an injection feedback power of −29 dBm is introduced into the laser diode. For feedback powers above −29 dBm, the SMSR decays rapidly to a very low level, so that clear multi-mode operation in the output spectrum is reached at a feedback power level of −15.5 dBm. Thus, a transition between single- and multi-mode operation can be flexibly controlled by adjusting the injected power in the second-stage feedback system. Additionally, in the case of injection locking, the SMSR and output power at the locked wavelength are as high as ~50 dB and ~5.8 dBm, respectively

  4. Two-stage light-gas magnetoplasma accelerator for hypervelocity impact simulation

    International Nuclear Information System (INIS)

    Khramtsov, P P; Vasetskij, V A; Makhnach, A I; Grishenko, V M; Chernik, M Yu; Shikh, I A; Doroshko, M V

    2016-01-01

    The development of methods for accelerating macroparticles for high-speed impact simulation in the laboratory is a pressing problem, owing to the increasing duration of space flights and the necessity of providing adequate spacecraft protection against micrometeoroid and space debris impacts. This paper presents the results of an experimental study of a two-stage light-gas magnetoplasma launcher for accelerating a macroparticle, in which a coaxial plasma accelerator creates a shock wave in a high-pressure channel filled with light gas. Graphite and steel spheres with diameters of 2.5-4 mm were used as projectiles and were accelerated to speeds of 0.8-4.8 km/s. The particles were launched in vacuum. For projectile velocity control, a speed-measuring method was developed; its error does not exceed 5%. The flight of the projectile from the barrel and the collision of the particle with a target were recorded with a high-speed camera. The results of projectile collisions with elements of meteoroid shielding are presented. In order to increase the projectile velocity, the high-pressure channel should be filled with hydrogen; however, we used helium in our experiments for safety reasons. We can therefore expect that the range of mass and velocity of the accelerated particles can be extended by using hydrogen as the accelerating gas. (paper)

  5. Two-stage light-gas magnetoplasma accelerator for hypervelocity impact simulation

    Science.gov (United States)

    Khramtsov, P. P.; Vasetskij, V. A.; Makhnach, A. I.; Grishenko, V. M.; Chernik, M. Yu; Shikh, I. A.; Doroshko, M. V.

    2016-11-01

    The development of methods for accelerating macroparticles for high-speed impact simulation in the laboratory is a pressing problem, owing to the increasing duration of space flights and the necessity of providing adequate spacecraft protection against micrometeoroid and space debris impacts. This paper presents the results of an experimental study of a two-stage light-gas magnetoplasma launcher for accelerating a macroparticle, in which a coaxial plasma accelerator creates a shock wave in a high-pressure channel filled with light gas. Graphite and steel spheres with diameters of 2.5-4 mm were used as projectiles and were accelerated to speeds of 0.8-4.8 km/s. The particles were launched in vacuum. For projectile velocity control, a speed-measuring method was developed; its error does not exceed 5%. The flight of the projectile from the barrel and the collision of the particle with a target were recorded with a high-speed camera. The results of projectile collisions with elements of meteoroid shielding are presented. In order to increase the projectile velocity, the high-pressure channel should be filled with hydrogen; however, we used helium in our experiments for safety reasons. We can therefore expect that the range of mass and velocity of the accelerated particles can be extended by using hydrogen as the accelerating gas.

  6. Two stage S-N curve in corrosion fatigue of extruded magnesium alloy AZ31

    Directory of Open Access Journals (Sweden)

    Yoshiharu Mutoh

    2009-11-01

    Full Text Available Tension-compression fatigue tests of extruded AZ31 magnesium alloys were carried out under corrosive environments: (a) a high-humidity environment (80% RH) and (b) a 5 wt.% NaCl environment. It was found that the reduction rate of fatigue strength due to the corrosive environment was 0.12 under high humidity and 0.53 under the NaCl environment. It was also observed that under corrosive environments the S-N curve was not a single curve but a two-stage curve. Above the fatigue limit under low humidity, the crack nucleation mechanism was a localized slip band formation mechanism. Below the fatigue limit under low humidity, the reduction in fatigue strength was attributed to corrosion pit formation and growth to the critical size for fatigue crack nucleation under the combined effect of cyclic load and the corrosive environment. The critical size was attained when the stress intensity factor range reached the threshold value for crack growth.

  7. Spread and Control of Mobile Benign Worm Based on Two-Stage Repairing Mechanism

    Directory of Open Access Journals (Sweden)

    Meng Wang

    2014-01-01

    Full Text Available Both in traditional social networks and in mobile network environments, worms are a serious and constantly growing threat. Mobile smartphones generally promote the development of mobile networks, and traditional antivirus technologies have become powerless when facing them. The development of benign worms, especially active benign worms and passive benign worms, has become a new network security measure. In this paper, we focus on the spread of worms in the mobile environment and propose a benign worm control and repair mechanism. The control process of mobile benign worms is divided into two stages: the first stage is rapid repair control, which uses an active benign worm to deal with the malicious worm in the mobile network; when the network is relatively stable, the mechanism enters the second, post-repair stage and uses the passive mode to optimize the environment and control the mobile network. Depending on whether benign worms are present, we simplify the model and analyze four situations. Finally, we use simulation to verify the model. This control mechanism for benign worm propagation provides guidance for controlling network security.
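
    A compartment-model sketch conveys the two-stage idea: an active benign worm spreads aggressively in stage one, then switches to a passive (contact-only) rate in stage two. The rate constants and the switch time below are invented, and this simple SIR-like system only stands in for the paper's more detailed model.

      import numpy as np
      from scipy.integrate import odeint

      beta_m = 0.4                    # malicious worm infection rate (invented)
      beta_act, beta_pas = 0.5, 0.1   # benign worm rates: active stage, passive stage
      t_switch = 15.0                 # time at which stage 2 (post-repair) begins

      def model(y, t):
          S, I, R = y                 # susceptible, malware-infected, repaired/immunized
          beta_b = beta_act if t < t_switch else beta_pas
          infect = beta_m * S * I     # malicious worm spreading
          return [-infect - beta_b * S * R,    # benign worm immunizes S...
                  infect - beta_b * I * R,     # ...and cleans I
                  beta_b * (S + I) * R]

      times = np.linspace(0, 60, 7)
      for t, (S, I, R) in zip(times, odeint(model, [0.98, 0.01, 0.01], times)):
          print(f"t = {t:4.0f}  S = {S:.3f}  I = {I:.3f}  R = {R:.3f}")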

  8. Two-Stage Regularized Linear Discriminant Analysis for 2-D Data.

    Science.gov (United States)

    Zhao, Jianhua; Shi, Lei; Zhu, Ji

    2015-08-01

    Fisher linear discriminant analysis (LDA) involves within-class and between-class covariance matrices. For 2-D data such as images, regularized LDA (RLDA) can improve LDA due to the regularized eigenvalues of the estimated within-class matrix. However, it fails to consider the eigenvectors and the estimated between-class matrix. To improve these two matrices simultaneously, we propose in this paper a new two-stage method for 2-D data, namely a bidirectional LDA (BLDA) in the first stage and the RLDA in the second stage, where both BLDA and RLDA are based on the Fisher criterion that tackles correlation. BLDA performs the LDA under special separable covariance constraints that incorporate the row and column correlations inherent in 2-D data. The main novelty is that we propose a simple but effective statistical test to determine the subspace dimensionality in the first stage. As a result, the first stage reduces the dimensionality substantially while keeping the significant discriminant information in the data. This enables the second stage to perform RLDA in a much lower dimensional subspace, and thus improves the two estimated matrices simultaneously. Experiments on a number of 2-D synthetic and real-world data sets show that BLDA+RLDA outperforms several closely related competitors.

  9. Design of Korean nuclear reliability data-base network using a two-stage Bayesian concept

    International Nuclear Information System (INIS)

    Kim, T.W.; Jeong, K.S.; Chae, S.K.

    1987-01-01

    In an analysis of the probabilistic risk, safety, and reliability of a nuclear power plant, the reliability data base (DB) must be established first. As the importance of the reliability data base has increased, event reporting systems such as the US Nuclear Regulatory Commission's Licensee Event Report and the International Atomic Energy Agency's Incident Reporting System have been developed. In Korea, however, a systematic reliability data base is not yet available, so foreign data bases have been quoted directly in reliability analyses of Korean plants. To develop a reliability data base for Korean plants, the questions of which methodology to use and of the application limits of the selected method must be resolved. After Korea Nuclear Unit-1 (KNU-1) began commercial operation in 1978, six more nuclear power plants entered operation. Of these, only KNU-3 is a Canada Deuterium Uranium pressurized heavy-water reactor; the others are all pressurized water reactors. This paper describes the proposed reliability data-base network (KNRDS) for Korean nuclear power plants in the context of Kaplan's two-stage Bayesian (TSB) procedure. It describes the concept of the TSB procedure used to obtain a Korean-specific plant reliability data base, which is updated by incorporating both the reported generic reliability data and the operating experience of similar plants
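
    The flavor of a two-stage Bayesian estimate can be sketched in a few lines: stage one builds a generic "population variability" prior from other plants' records, and stage two performs a plant-specific conjugate update. The gamma-Poisson simplification and all counts below are invented illustrations; Kaplan's procedure is considerably more general.

      # Stage 1: moment-match a gamma prior to generic (failures, exposure-hours) data.
      generic = [(2, 9.0e4), (5, 3.1e5), (1, 6.0e4), (4, 2.2e5)]   # invented records
      rates = [n / T for n, T in generic]
      mean = sum(rates) / len(rates)
      var = sum((r - mean) ** 2 for r in rates) / (len(rates) - 1)
      alpha0, beta0 = mean**2 / var, mean / var

      # Stage 2: conjugate gamma-Poisson update with the plant-specific evidence.
      n_plant, T_plant = 3, 1.5e5                                  # invented
      alpha1, beta1 = alpha0 + n_plant, beta0 + T_plant

      print(f"generic prior mean failure rate: {alpha0 / beta0:.2e} /h")
      print(f"plant-specific posterior mean  : {alpha1 / beta1:.2e} /h")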

  10. Optimization of Boiling Water Reactor Loading Pattern Using Two-Stage Genetic Algorithm

    International Nuclear Information System (INIS)

    Kobayashi, Yoko; Aiyoshi, Eitaro

    2002-01-01

    A new two-stage optimization method based on genetic algorithms (GAs) using an if-then heuristic rule was developed to generate optimized boiling water reactor (BWR) loading patterns (LPs). In the first stage, the LP is optimized using an improved GA operator. In the second stage, an exposure-dependent control rod pattern (CRP) is sought using a GA with an if-then heuristic rule. The improved GA is based on deterministic operators consisting of crossover, mutation, and selection, and its encoding technique and handling of constraint conditions reflect the peculiar characteristics of the BWR. In addition, strategies such as elitism and self-reproduction are used effectively to improve the search speed. The LP evaluations were performed with a three-dimensional diffusion code coupling neutronic and thermal-hydraulic models. Strong axial heterogeneities and constraints dependent on all three dimensions have always necessitated the use of three-dimensional core simulators for BWRs, so optimization of computational efficiency is required. The proposed algorithm is demonstrated by successfully generating LPs for an actual BWR plant in two phases: one phase is LP optimization alone, applying the Haling technique; the other is LP optimization that considers the CRP during reactor operation. In test calculations, candidate patterns that shuffled fresh and burned fuel assemblies were obtained within a reasonable computation time
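
    The GA machinery named above (crossover, mutation, selection, elitism) is shown in the minimal skeleton below, reduced to a toy permutation problem; in the real application the fitness call would invoke the 3-D core simulator, and the encoding, population size and toy objective here are all invented.

      import random

      rng = random.Random(0)
      N, POP, GENS = 20, 30, 200          # positions, population size, generations

      def fitness(perm):                  # toy stand-in for the core-simulator score
          return -sum(abs(g - i) for i, g in enumerate(perm))

      def crossover(a, b):                # order crossover (OX) on permutations
          i, j = sorted(rng.sample(range(N), 2))
          hole = set(a[i:j])
          rest = [g for g in b if g not in hole]
          return rest[:i] + a[i:j] + rest[i:]

      def mutate(perm):                   # swap mutation
          i, j = rng.sample(range(N), 2)
          perm[i], perm[j] = perm[j], perm[i]

      pop = [rng.sample(range(N), N) for _ in range(POP)]
      for _ in range(GENS):
          pop.sort(key=fitness, reverse=True)
          elite = [p[:] for p in pop[:2]] # elitism: best candidates survive unchanged
          children = []
          while len(children) < POP - len(elite):
              child = crossover(*rng.sample(pop[:10], 2))
              if rng.random() < 0.3:
                  mutate(child)
              children.append(child)
          pop = elite + children
      print("best fitness:", max(map(fitness, pop)))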

  11. Characterization of a low frequency magnetic noise from a two stage pulse tube cryocooler

    International Nuclear Information System (INIS)

    Eshraghi, Mohamad Javad; Sasada, Ichiro; Kim, Jin Mok; Lee, Yong Ho

    2008-01-01

    Magnetic noise from a two-stage pulse tube cryocooler (PT) has been measured by a fundamental-mode orthogonal fluxgate magnetometer and by an LTS SQUID gradiometer. The magnetometer was installed in an aluminum Dewar, 12 cm from the section of the PT containing magnetic regenerative materials. The magnetic noise shows a clear peak at 1.8 Hz, the fundamental frequency of the He gas pumping rate. During the cooling process, the 1.8 Hz magnetic noise peaked when the cold stage temperature was at (or close to) 12 K, resembling the 1.8 Hz temperature variation of the second cold stage. Hence we attribute the main source of this magnetic noise to the temperature dependence of the magnetic susceptibility of regenerative materials such as Er3Ni and HoCu2 used in the second stage. Superconducting magnetic shielding by lead sheets reduced the interfering magnetic noise generated from this part. With this scheme, the noise amplitude measured with the first-order gradiometer DROS mounted in the vicinity of the magnetic regenerator, at the operating point of minimum noise identified from the fluxgate measurements, was less than 500 pT peak to peak, whereas without lead shielding the noise level exceeded the dynamic range of the SQUID instrumentation, which is around ±10 nT. (author)

  12. Heat transfer and pressure measurements and comparison with prediction for the SSME two-stage turbine

    Science.gov (United States)

    Dunn, M. G.; Kim, J.

    1992-01-01

    Time-averaged Stanton number and surface-pressure distributions are reported for the first-stage vane row, the first-stage blade row, and the second-stage vane row of the Rocketdyne Space Shuttle Main Engine (SSME) two-stage fuel-side turbine. Unsteady pressure envelope measurements for the first blade are also reported. These measurements were made at 10 percent, 50 percent, and 90 percent span on both the pressure and suction surfaces of the first-stage components. Additional Stanton number measurements were made on the first-stage blade platform, blade tip, and shroud, and at 50 percent span on the second vane. A shock tube was used as a short-duration source of heated and pressurized air to which the turbine was subjected. Platinum thin-film heat flux gages were used to obtain the heat flux measurements, while miniature silicon-diaphragm flush-mounted pressure transducers were used to obtain the pressure measurements. The first-stage vane Stanton number distributions are compared with predictions obtained using a version of STAN5 and a quasi-3D Navier-Stokes solution. This same quasi-3D N-S code was also used to obtain predictions for the first blade and the second vane.

  13. Two-stage Framework for a Topology-Based Projection and Visualization of Classified Document Collections

    Energy Technology Data Exchange (ETDEWEB)

    Oesterling, Patrick; Scheuermann, Gerik; Teresniak, Sven; Heyer, Gerhard; Koch, Steffen; Ertl, Thomas; Weber, Gunther H.

    2010-07-19

    During the last decades, electronic textual information has become the world's largest and most important information source. People have added a variety of daily newspapers, books, scientific and governmental publications, blogs and private messages to this wellspring of endless information and knowledge. Since neither the existing nor the new information can be read in its entirety, computers are used to extract and visualize meaningful or interesting topics and documents from this huge information clutter. In this paper, we extend, improve and combine existing individual approaches into an overall framework that supports topological analysis of high-dimensional document point clouds given by the well-known tf-idf document-term weighting method. We show that traditional distance-based approaches fail in very high-dimensional spaces, and we describe an improved two-stage method for topology-based projections from the original high-dimensional information space to both two-dimensional (2-D) and three-dimensional (3-D) visualizations. To show the accuracy and usability of this framework, we compare it to recently introduced methods and apply it to complex document and patent collections.
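
    The tf-idf point cloud that this framework takes as input is straightforward to construct; a minimal sketch with invented documents is shown below (the topological analysis and projection stages are beyond a few lines).

      from sklearn.feature_extraction.text import TfidfVectorizer

      docs = [
          "government publishes new energy policy report",
          "scientific publication on renewable energy storage",
          "private blog post about daily newspapers",
      ]
      X = TfidfVectorizer().fit_transform(docs)   # rows: documents, columns: terms
      print(X.shape)                              # a (typically very) high-dimensional point cloud
      print(X.toarray().round(2))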

  14. A Concept of Two-Stage-To-Orbit Reusable Launch Vehicle

    Science.gov (United States)

    Yang, Yong; Wang, Xiaojun; Tang, Yihua

    2002-01-01

    Reusable Launch Vehicles (RLVs) have the capability of delivering a wide range of payloads to earth orbit with greater reliability, lower cost, and more flexibility and operability than any of today's launch vehicles, and they are the goal of future space transportation systems. Past experience with single-stage-to-orbit (SSTO) RLVs, such as NASA's NASP project, which aimed at developing a rocket-based combined-cycle (RBCC) airplane, and X-33, which aimed at developing a rocket RLV, indicates that an SSTO RLV cannot be realized in the next few years with state-of-the-art technologies. This paper presents a concept for an all-rocket two-stage-to-orbit (TSTO) reusable launch vehicle. The TSTO RLV comprises an orbiter and a booster stage, with the orbiter mounted on top of the booster stage. The TSTO RLV takes off vertically. At an altitude of about 50 km the booster stage separates from the orbiter, returns and lands by parachutes and airbags, or lands horizontally by means of its own propulsion system. The orbiter continues its ascent and delivers the payload into LEO. After completing its orbital mission, the orbiter reenters the atmosphere, automatically flies to the ground base and finally lands horizontally on the runway. A TSTO RLV has fewer technological difficulties and less risk than an SSTO, and may be the practical approach to the RLV in the near future.

  15. Evidence that viral RNAs have evolved for efficient, two-stage packaging.

    Science.gov (United States)

    Borodavka, Alexander; Tuma, Roman; Stockley, Peter G

    2012-09-25

    Genome packaging is an essential step in virus replication and a potential drug target. Single-stranded RNA viruses have been thought to encapsidate their genomes by gradual co-assembly with capsid subunits. In contrast, using a single molecule fluorescence assay to monitor RNA conformation and virus assembly in real time, with two viruses from differing structural families, we have discovered that packaging is a two-stage process. Initially, the genomic RNAs undergo rapid and dramatic (approximately 20-30%) collapse of their solution conformations upon addition of cognate coat proteins. The collapse occurs with a substoichiometric ratio of coat protein subunits and is followed by a gradual increase in particle size, consistent with the recruitment of additional subunits to complete a growing capsid. Equivalently sized nonviral RNAs, including high copy potential in vivo competitor mRNAs, do not collapse. They do support particle assembly, however, but yield many aberrant structures in contrast to viral RNAs that make only capsids of the correct size. The collapse is specific to viral RNA fragments, implying that it depends on a series of specific RNA-protein interactions. For bacteriophage MS2, we have shown that collapse is driven by subsequent protein-protein interactions, consistent with the RNA-protein contacts occurring in defined spatial locations. Conformational collapse appears to be a distinct feature of viral RNA that has evolved to facilitate assembly. Aspects of this process mimic those seen in ribosome assembly.

  16. Production of acids and alcohols from syngas in a two-stage continuous fermentation process.

    Science.gov (United States)

    Abubackar, Haris Nalakath; Veiga, María C; Kennes, Christian

    2018-04-01

    A two-stage continuous system with two stirred tank reactors in series was utilized to perform syngas fermentation using Clostridium carboxidivorans. The first bioreactor (bioreactor 1) was maintained at pH 6 to promote acidogenesis and the second one (bioreactor 2) at pH 5 to stimulate solventogenesis. Both reactors were operated in continuous mode by feeding syngas (CO:CO2:H2:N2; 30:10:20:40 vol%) at a constant flow rate while supplying a nutrient medium at flow rates of 8.1, 15, 22 and 30 ml/h. A cell recycling unit was added to bioreactor 2 in order to recycle the cells back to the reactor, maintaining the OD600 around 1 in bioreactor 2 throughout the experimental run. When comparing the flow rates, the best results in terms of solvent production were obtained with a flow rate of 22 ml/h, reaching the highest average outlet concentration of alcohols (1.51 g/L) and the most favorable alcohol/acid ratio of 0.32. Copyright © 2018 Elsevier Ltd. All rights reserved.

  17. Two stage enucleation and deflation of a large unicystic ameloblastoma with mural invasion in mandible.

    Science.gov (United States)

    Sasaki, Ryo; Watanabe, Yorikatsu; Ando, Tomohiro; Akizuki, Tanetaka

    2014-06-01

    The treatment strategy for unicystic ameloblastoma (UA) should be decided by its pathological type, luminal or mural. The luminal type of UA can be treated by enucleation alone, but UA with mural invasion should be treated as aggressively as conventional ameloblastomas. However, it is difficult to diagnose the subtype of UA from an initial biopsy, and there is a possibility that the lesion is an ordinary cyst or a keratocystic odontogenic tumor, so aggressive initial treatment risks overtreatment. Therefore, in this study enucleation of the cyst wall and deflation were performed first; the pathological findings confirmed mural invasion into the cystic wall, leading to a second surgery. The second surgery consisted of enucleation of scar tissue, bone curettage, and deflation, and contributed to reducing the recurrence rate by removing tumor nests in the scar tissue and new bone, enhancing new bone formation, and shrinking the mandibular expansion by fenestration. In this study, a large UA with mural invasion involving the condyle was treated by this two-stage enucleation and deflation in a 20-year-old patient.

  18. Fleet Planning Decision-Making: Two-Stage Optimization with Slot Purchase

    Directory of Open Access Journals (Sweden)

    Lay Eng Teoh

    2016-01-01

    Full Text Available Essentially, strategic fleet planning is vital for airlines to yield a higher profit margin while providing a desired service frequency to meet stochastic demand. In contrast to most studies, which do not consider slot purchase even though it affects the service frequency determination of airlines, this paper proposes a novel approach to solve the fleet planning problem subject to various operational constraints. A two-stage fleet planning model is formulated: the first stage selects the individual operating routes that require slot purchase for network expansion, while the second stage, in the form of a probabilistic dynamic programming model, determines the quantity and type of aircraft (with the corresponding service frequency) to meet the demand profitably. Analysis of an illustrative case study (with 38 international routes) shows that the incorporation of slot purchase in fleet planning helps airlines achieve economic and social sustainability. The developed model is practically viable for airlines not only to provide a better service quality (via a higher service frequency) to meet more demand but also to obtain a higher revenue and profit margin, by making optimal slot-purchase and fleet planning decisions throughout the long-term planning horizon.

  19. Two-Stage Tissue-Expander Breast Reconstruction: A Focus on the Surgical Technique

    Directory of Open Access Journals (Sweden)

    Elisa Bellini

    2017-01-01

    Full Text Available Objective. Breast cancer, the most common malignancy in women, comprises 18% of all female cancers. Mastectomy is an essential intervention to save lives, but it can destroy one’s body image, causing both physical and psychological trauma. Reconstruction is an important step in restoring patient quality of life after the mutilating treatment. Material and Methods. Tissue expanders and implants are now commonly used in breast reconstruction. Autologous reconstruction allows a better aesthetic result; however, many patients prefer implant reconstruction due to the shorter operation time and lack of donor site morbidity. Moreover, this reconstruction strategy is safe and can be performed in patients with multiple health problems. Tissue-expander reconstruction is conventionally performed as a two-stage procedure starting immediately after mammary gland removal. Results. Mastectomy is a destructive but essential intervention for women with breast cancer. Tissue expansion breast reconstruction is a safe, reliable, and efficacious procedure with considerable psychological benefits since it provides a healthy body image. Conclusion. This article focuses on this surgical technique and how to achieve the best reconstruction possible.

  20. Stepwise encapsulation and controlled two-stage release system for cis-Diamminediiodoplatinum

    Directory of Open Access Journals (Sweden)

    Chen Y

    2014-06-01

    Full Text Available Yun Chen,1,* Qian Li,1,2,* Qingsheng Wu1 1Department of Chemistry, Key Laboratory of Yangtze River Water Environment, Ministry of Education, Tongji University, Shanghai; 2Shanghai Institute of Quality Inspection and Technical Research, Shanghai, People's Republic of China *These authors contributed equally to this work Abstract: cis-Diamminediiodoplatinum (cis-DIDP) is a cisplatin-like anticancer drug with higher anticancer activity, but lower stability and price, than cisplatin. In this study, a cis-DIDP carrier system based on micro-sized stearic acid was prepared by an emulsion solvent evaporation method. The maximum drug loading capacity of cis-DIDP-loaded solid lipid nanoparticles was 22.03%, and their encapsulation efficiency was 97.24%. In vitro drug release in phosphate-buffered saline (pH = 7.4) at 37.5°C exhibited a unique two-stage process, which could prove beneficial for patients with tumors and malignancies. MTT (3-[4,5-dimethylthiazol-2-yl]-2,5-diphenyltetrazolium bromide) assay results showed that cis-DIDP released from cis-DIDP-loaded solid lipid nanoparticles had better inhibition activity than cis-DIDP that had not been loaded. Keywords: stearic acid, emulsion solvent evaporation method, drug delivery, cis-DIDP, in vitro

  1. Effluent composition prediction of a two-stage anaerobic digestion process: machine learning and stoichiometry techniques.

    Science.gov (United States)

    Alejo, Luz; Atkinson, John; Guzmán-Fierro, Víctor; Roeckel, Marlene

    2018-05-16

    Computational self-adapting methods (Support Vector Machines, SVM) are compared with an analytical method for predicting the effluent composition of a two-stage anaerobic digestion (AD) process. Experimental data for the AD of poultry manure were used. The analytical method considers protein as the only source of ammonia production in AD after degradation. Total ammonia nitrogen (TAN), total solids (TS), chemical oxygen demand (COD), and total volatile solids (TVS) were measured in the influent and effluent of the process. The TAN concentration in the effluent, the most inhibiting and polluting compound in AD, was predicted. Despite the limited data available, the SVM-based model outperformed the analytical method for TAN prediction, achieving a relative average error of 15.2% against 43% for the analytical method. Moreover, SVM showed higher prediction accuracy than Artificial Neural Networks. This result reveals the promise of SVM for prediction in non-linear and dynamic AD processes.
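
    A minimal sketch of an SVM regression pipeline of the kind compared above is given below; the feature set follows the abstract (influent TAN, TS, COD, TVS), but the data, target relation and hyperparameters are all invented for the demo.

      import numpy as np
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.svm import SVR

      rng = np.random.default_rng(0)
      # Columns: influent TAN, TS, COD, TVS (mg/L), uniformly sampled for the demo.
      X = rng.uniform([500, 2000, 4000, 1500], [3000, 9000, 20000, 8000], size=(40, 4))
      y = 0.4 * X[:, 0] + 0.02 * X[:, 2] + rng.normal(0, 50, 40)   # synthetic effluent TAN

      model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=1000.0, epsilon=10.0))
      model.fit(X[:30], y[:30])
      pred = model.predict(X[30:])
      rel_err = float(np.mean(np.abs(pred - y[30:]) / y[30:])) * 100
      print(f"relative average error on held-out samples: {rel_err:.1f}%")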

  2. A two-stage preventive maintenance optimization model incorporating two-dimensional extended warranty

    International Nuclear Information System (INIS)

    Su, Chun; Wang, Xiaolin

    2016-01-01

    In practice, customers can decide whether to buy an extended warranty or not, at the time of item sale or at the end of the basic warranty. In this paper, by taking into account the moments of customers purchasing two-dimensional extended warranty, the optimization of imperfect preventive maintenance for repairable items is investigated from the manufacturer's perspective. A two-dimensional preventive maintenance strategy is proposed, under which the item is preventively maintained according to a specified age interval or usage interval, whichever occurs first. It is highlighted that when the extended warranty is purchased upon the expiration of the basic warranty, the manufacturer faces a two-stage preventive maintenance optimization problem. Moreover, in the second stage, the possibility of reducing the servicing cost over the extended warranty period is explored by classifying customers on the basis of their usage rates and then providing them with customized preventive maintenance programs. Numerical examples show that offering customized preventive maintenance programs can reduce the manufacturer's warranty cost, while a larger saving in warranty cost comes from encouraging customers to buy the extended warranty at the time of item sale. - Highlights: • A two-dimensional PM strategy is investigated. • Imperfect PM strategy is optimized by considering both two-dimensional BW and EW. • Customers are categorized based on their usage rates throughout the BW period. • Servicing cost of the EW is reduced by offering customized PM programs. • Customers buying the EW at the time of sale is preferred for the manufacturer.

  3. Implications of the two stage clonal expansion model to radiation risk estimation

    International Nuclear Information System (INIS)

    Curtis, S.B.; Hazelton, W.D.; Luebeck, E.G.; Moolgavkar, S.H.

    2003-01-01

    The Two Stage Clonal Expansion Model of carcinogenesis has been applied to the analysis of several cohorts of persons exposed to chronic exposures of high and low LET radiation. The results of these analyses are: (1) the importance of radiation-induced initiation is small and, if present at all, contributes to cancers only late in life and only if exposure begins early in life, (2) radiation-induced promotion dominates and produces the majority of cancers by accelerating proliferation of already-initiated cells, and (3) radiation-induced malignant conversion is important only during and immediately after exposure ceases and tends to dominate only late in life, acting on already initiated and promoted cells. Two populations, the Colorado Plateau miners (high-LET, radon exposed) and the Canadian radiation workers (low-LET, gamma ray exposed) are used as examples to show the time dependence of the hazard function and the relative importance of the three hypothesized processes (initiation, promotion and malignant conversion) for each radiation quality

  4. Two-Stage Dynamic Pricing and Advertising Strategies for Online Video Services

    Directory of Open Access Journals (Sweden)

    Zhi Li

    2017-01-01

    Full Text Available As the demand for online video services increases, the selection of business models has drawn great attention from online providers. Among them, the pay-per-view mode and the advertising mode are two important revenue modes, where a reasonable fee and a suitable volume of ads need to be determined. This paper establishes an analytical framework for studying the optimal dynamic pricing and advertising strategies of online providers; it shows how the strategies are influenced by the length of time videos are available and by the viewers' emotional factor. We create a two-stage strategy of revenue models involving a single-fee mode and a mixed fee-free mode, and find the optimal fee and advertising level of online video services. According to the results, the optimal video price and ad volume vary dynamically over time. The viewers' aversion to advertising has direct effects on both the volume of ads and the number of viewers who select low-quality content. The optimal volume of ads decreases as the ad-aversion coefficient increases, while it increases as the quality of videos increases. The results also indicate that, in the long run, a pure fee mode or a pure free mode is the optimal strategy for online providers.

  5. Compressed gas combined single- and two-stage light-gas gun

    Science.gov (United States)

    Lamberson, L. E.; Boettcher, P. A.

    2018-02-01

    With more than 1 trillion artificial objects smaller than 1 μm in low and geostationary Earth orbit, space assets are subject to the constant threat of space debris impact. These collisions occur at hypervelocity, i.e., speeds greater than 3 km/s. In order to characterize material behavior under this extreme event, as well as to study next-generation materials for space exploration, this paper presents a unique two-stage light-gas gun capable of replicating hypervelocity impacts. While a limited number of facilities of this type exist, they are typically extremely large and can be costly and dangerous to operate. The design presented in this paper is novel in two distinct ways. First, it does not use any form of combustion in the first stage. The projectile is accelerated by a pressure differential using air and inert gases (or purely inert gases), firing a projectile in a nominal range of 1-4 km/s. Second, the design is modular in that the first stage sits on a track sled and can be pulled back and used by itself to study lower-speed impacts without any further modifications, with the first-stage piston as the impactor. The modularity of the instrument allows the investigation of three orders of magnitude of impact velocities, between 10¹ and 10³ m/s, in a single, relatively small, cost-effective instrument.

  6. Two-stage combined treatment of acid mine drainage and municipal wastewater.

    Science.gov (United States)

    Deng, Dongyang; Lin, Lian-Shin

    2013-01-01

    This study examined the feasibility of the combined treatment of field-collected acid mine drainage (AMD, pH = 4.2 ± 0.9, iron = 112 ± 118 mg/L, sulfate = 1,846 ± 594 mg/L) and municipal wastewater (MWW, avg. chemical oxygen demand (COD) = 234-333 mg/L) using a two-stage process. The process consisted of batch mixing of the two wastes to condition the mixture solutions, followed by anaerobic biological treatment. Mixing under a range of AMD/MWW ratios resulted in phosphate removal of 9 to ~100%, mixture pH of 6.2-7.9, and COD/sulfate concentration ratios of 0.05-5.4. The biological treatment consistently removed COD and sulfate by >80% from the mixture solutions for COD/sulfate ratios of 0.6-5.4. Alkalinity was produced in the biological treatment, causing increased pH and further removal of metals from the solutions. Scanning electron microscopy of the produced sludge with energy-dispersive analysis suggested chemical precipitation and associated adsorption and co-precipitation as the mechanisms for metal removal (Fe: >99%, Al: ~100%, Mn: 75 to ~100%, Ca: 52-81%, Mg: 13-76%, and Na: 56-76%). The study showed promising results for the treatment method and demonstrated the potential for developing innovative technologies for the combined management of the two wastes in mining regions.

  7. HOUSEHOLD FOOD DEMAND IN INDONESIA: A TWO-STAGE BUDGETING APPROACH

    Directory of Open Access Journals (Sweden)

    Agus Widarjono

    2016-05-01

    Full Text Available A two-stage budgeting approach was applied to analyze the food demand in urban areas separated by geographical areas and classified by income groups. The demographically augmented Quadratic Almost Ideal Demand System (QUAIDS) was employed to estimate the demand elasticity. Data from the National Social and Economic Survey of Households (SUSENAS) in 2011 were used. The demand system is a censored model because the data contains zero expenditures and is estimated by employing the consistent two-step estimation procedure to solve biased estimation. The results show that price and income elasticities become less elastic from poor households to rich households. Demand by urban households in Java is more responsive to price but less responsive to income than urban households outside of Java. Simulation policies indicate that an increase in food prices would have more adverse impacts than a decrease in income levels. Poor families would suffer more than rich families from rising food prices and/or decreasing incomes. More importantly, urban households on Java are more vulnerable to an economic crisis, and would respond by reducing their food consumption. Economic policies to stabilize food prices are better than income policies, such as the cash transfer, to maintain the well-being of the population in Indonesia.

  8. A two-stage storage routing model for green roof runoff detention.

    Science.gov (United States)

    Vesuviano, Gianni; Sonnenwald, Fred; Stovin, Virginia

    2014-01-01

    Green roofs have been adopted in urban drainage systems to control the total quantity and volumetric flow rate of runoff. Modern green roof designs are multi-layered, their main components being vegetation, substrate and, in almost all cases, a separate drainage layer. Most current hydrological models of green roofs combine the modelling of the separate layers into a single process; these models have limited predictive capability for roofs not sharing the same design. An adaptable, generic, two-stage model for a system consisting of a granular substrate over a hard plastic 'egg box'-style drainage layer and fibrous protection mat is presented. The substrate and drainage layer/protection mat are modelled separately by previously verified sub-models. Controlled storm events are applied to a green roof system in a rainfall simulator. The time-series modelled runoff is compared to the monitored runoff for each storm event. The modelled runoff profiles are accurate (mean R_t^2 = 0.971), but further characterization of the substrate component is required for the model to be generically applicable to other roof configurations with different substrate.
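
    A minimal sketch of the two-stage storage-routing idea, assuming each layer behaves as a nonlinear reservoir (dS/dt = inflow − k·S^n); the coefficients, exponents, and synthetic storm below are illustrative stand-ins, not the paper's calibrated sub-models.

    ```python
    import numpy as np

    # Route a rainfall record through two nonlinear reservoirs in series:
    # substrate first, then drainage layer. Explicit Euler integration of
    # dS/dt = inflow - k * S**n.
    def route(inflow, k, n, dt=60.0):
        storage, outflow = 0.0, []
        for q_in in inflow:
            q_out = k * storage ** n
            storage = max(0.0, storage + (q_in - q_out) * dt)
            outflow.append(q_out)
        return np.array(outflow)

    # Synthetic 2 h record at 1 min steps: a 30 min block storm (mm/s).
    rain = np.zeros(120)
    rain[10:40] = 0.5 / 60.0

    substrate_out = route(rain, k=0.002, n=1.5)    # stage 1: substrate
    runoff = route(substrate_out, k=0.008, n=1.2)  # stage 2: drainage layer
    print(f"peak attenuation: {runoff.max() / rain.max():.2f}")
    ```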

  9. Two-stage collaborative global optimization design model of the CHPG microgrid

    Science.gov (United States)

    Liao, Qingfen; Xu, Yeyan; Tang, Fei; Peng, Sicheng; Yang, Zheng

    2017-06-01

    With the continuous development of technology and falling investment costs, the proportion of renewable energy in the power grid keeps rising owing to its clean and environmentally friendly characteristics, which may require larger-capacity energy storage devices and thus increase cost. A two-stage collaborative global optimization design model of the combined-heat-power-and-gas (abbreviated as CHPG) microgrid is proposed in this paper to minimize the cost by using virtual storage without extending the existing storage system. P2G technology is used as virtual multi-energy storage in CHPG, which can coordinate the operation of the electric energy network and the natural gas network at the same time. Demand response also acts as a form of virtual storage, including economic guidance for the DGs and heat pumps on the demand side and priority scheduling of controllable loads. The two kinds of storage coordinate to smooth the high-frequency and low-frequency fluctuations of renewable energy, respectively, and simultaneously achieve a lower-cost operation scheme. Finally, the feasibility and superiority of the proposed design model are demonstrated in a simulation of a CHPG microgrid.

  10. A two-stage biological gas to liquid transfer process to convert carbon dioxide into bioplastic

    KAUST Repository

    Al Rowaihi, Israa

    2018-03-06

    The fermentation of carbon dioxide (CO2) with hydrogen (H2) uses available low-cost gases to synthesize acetic acid. Here, we present a two-stage biological process that allows the gas-to-liquid transfer (Bio-GTL) of CO2 into the biopolymer polyhydroxybutyrate (PHB). Using the same medium in both stages, first, acetic acid is produced (3.2 g L−1) by Acetobacterium woodii from 5.2 L of a CO2:H2 gas mixture (15:85 v/v) under elevated pressure (≥2.0 bar) to increase H2 solubility in water. Second, acetic acid is converted to PHB (3 g L−1 acetate into 0.5 g L−1 PHB) by Ralstonia eutropha H16. The efficiencies and space-time yields were evaluated, and our data show the conversion of CO2 into PHB with a 33.3% microbial cell content (the percentage ratio of PHB concentration to cell concentration) after 217 h. Collectively, our results provide a resourceful platform for future optimization and commercialization of a Bio-GTL for PHB production.

  11. The Effect of Effluent Recirculation in a Semi-Continuous Two-Stage Anaerobic Digestion System

    Directory of Open Access Journals (Sweden)

    Karthik Rajendran

    2013-06-01

    Full Text Available The effect of recirculation in increasing the organic loading rate (OLR) and decreasing the hydraulic retention time (HRT) in a semi-continuous two-stage anaerobic digestion system using a continuous stirred tank reactor (CSTR) and an upflow anaerobic sludge bed (UASB) was evaluated. Two parallel processes were in operation for 100 days, one with recirculation (closed system) and the other without recirculation (open system). For this purpose, two structurally different carbohydrate-based substrates were used: starch and cotton. The digestion of starch and cotton in the closed system resulted in production of 91% and 80% of the theoretical methane yield during the first 60 days. In contrast, in the open system the methane yield decreased to 82% and 56% of the theoretical value, for starch and cotton, respectively. The OLR could successfully be increased to 4 gVS/L/day for cotton and 10 gVS/L/day for starch. It is concluded that the recirculation supports the microorganisms for effective hydrolysis of the polymeric carbohydrates in the CSTR and preserves the nutrients in the system at higher OLRs, thereby improving the overall performance and stability of the process.

  12. Multifunctional Solar Systems Based On Two-Stage Regeneration Absorbent Solution

    Directory of Open Access Journals (Sweden)

    Doroshenko A.V.

    2015-04-01

    Full Text Available Concepts are developed for multifunctional solar systems for air dehumidification, heat supply, cooling, and air conditioning, based on an open absorption cycle with direct regeneration of the absorbent. The solar systems rely on preliminary dehumidification of the air stream followed by evaporative cooling, using evaporative coolers of both direct and indirect types. The principle of two-stage regeneration of the absorbent is used in these systems and forms the basis of both liquid and gas-liquid solar collectors. The main design solutions are developed for a new generation of gas-liquid solar collectors. An analysis of the heat losses in the gas-liquid solar collectors due to convection and radiation is made, and the optimal gas and liquid flow rates, as well as the basic dimensions and configuration of the working channel of the solar collector, are identified. The heat and mass transfer devices belonging to the evaporative cooling system are based on the interaction between a liquid film and the gas stream flowing over it, with a multichannel structure of polymeric materials used to create the packing. Evaporative water and air coolers of both direct and indirect types are used for cooling in the solar systems. A preliminary analysis of the capabilities of multifunctional solar absorption systems for media cooling and air conditioning is made on the basis of the authors' experimental data. The designed solar systems feature low power consumption and environmental friendliness.

  13. Armature formation in a railgun using a two-stage light-gas gun injector

    International Nuclear Information System (INIS)

    Hawke, R.S.; Susoeff, A.R.; Asay, J.R.; Hall, C.A.; Konrad, C.H.; Hickman, R.J.; Sauve, J.L.

    1989-01-01

    During the past decade several research groups have tried to achieve reliable acceleration of projectiles to velocities in excess of 8 km/s by using a railgun. All attempts have met with difficulties. However, in the past four years researchers have come to agree on the nature and causes of the difficulties. The consensus is that the hot plasma armature - used to commutate across the rails and to accelerate the projectile - causes ablation of the barrel wall; this ablation ultimately results in parasitic secondary arc formation through armature separation and/or restrike. The subsequent deprivation of current to the propulsion armature results in a limit to the achievable projectile velocity. Methods of mitigating the process are under study. One method uses a two-stage light-gas gun as a preaccelerator/injector to the railgun. The gas gun serves a double purpose: It quickly accelerates the projectile to a high velocity, and it fills the barrel behind the propulsive armature with insulating gas. While this approach is expected to improve railgun performance, it also requires development of techniques to form the propulsive armature behind the projectile in the high-velocity, high-pressure gas stream. This paper briefly summarizes the problems encountered in attempts to achieve hypervelocities with a railgun. Included is a description of the phenomenology and details of joint Sandia National Laboratories, Albuquerque/Lawrence Livermore National Laboratory (SNLA/LLNL) work at SNLA on a method for forming the needed plasma armature.

  14. A Two-Stage Framework for 3D Face Reconstruction from RGBD Images.

    Science.gov (United States)

    Wang, Kangkan; Wang, Xianwang; Pan, Zhigeng; Liu, Kai

    2014-08-01

    This paper proposes a new approach for 3D face reconstruction with RGBD images from an inexpensive commodity sensor. The challenges we face are: 1) substantial random noise and corruption are present in low-resolution depth maps; and 2) there is a high degree of variability in pose and face expression. We develop a novel two-stage algorithm that effectively maps low-quality depth maps to realistic face models. Each stage is targeted toward a certain type of noise. The first stage extracts sparse errors from depth patches through data-driven local sparse coding, while the second stage smooths noise on the boundaries between patches and reconstructs the global shape by combining local shapes using our template-based surface refinement. Our approach does not require any markers or user interaction. We perform quantitative and qualitative evaluations on both synthetic and real test sets. Experimental results show that the proposed approach is able to produce high-resolution 3D face models with high accuracy, even if inputs are of low quality and have large variations in viewpoint and face expression.

  15. TWO-STAGE REVISION HIP REPLACEMENT IN PATIENTS WITH SEVERE ACETABULUM DEFECT (CASE REPORT)

    Directory of Open Access Journals (Sweden)

    V. V. Pavlov

    2017-01-01

    Full Text Available Favorable short-term results of arthroplasty are observed in 80–90% of cases; however, over a longer follow-up period the percentage of positive outcomes gradually decreases. The need for revision of the prosthesis or its components increases in proportion to the time elapsed since surgery. In addition, such revision is accompanied by a need to substitute the bone defect of the acetabulum. As a solution the authors propose to replace pelvic defects in two stages. During the first stage the defect was filled with bone allograft with platelet-rich fibrin (allografting with the use of PRF technology). After remodeling of the allograft, during the second stage the revision surgery is performed by implanting standard prostheses. The authors present a clinical case of a female patient with aseptic loosening of the acetabular component of the prosthesis in the right hip joint, with impaired hip function of stage 2 and right limb shortening of 2 cm. Treatment results confirm the efficiency and rationality of the proposed bone grafting option. The authors conclude that bone allograft in combination with the PRF technology is an alternative to the implantation of massive metal implants in the acetabulum, while reducing the risk of implant-associated infection and of metallosis in surrounding tissues and expanding further revision options.

  16. Two stages of directed forgetting: Electrophysiological evidence from a short-term memory task.

    Science.gov (United States)

    Gao, Heming; Cao, Bihua; Qi, Mingming; Wang, Jing; Zhang, Qi; Li, Fuhong

    2016-06-01

    In this study, a short-term memory test was used to investigate the temporal course and neural mechanism of directed forgetting under different memory loads. Within each trial, two memory items with high or low load were presented sequentially, followed by a cue indicating whether the presented items should be remembered. After an interval, subjects were asked to respond to the probe stimuli. The ERPs locked to the cues showed that (a) the effect of cue type was initially observed during the P2 (160-240 ms) time window, with more positive ERPs for remembering relative to forgetting cues; (b) load effects were observed during the N2-P3 (250-500 ms) time window, with more positive ERPs for the high-load than low-load condition; (c) the cue effect was also observed during the N2-P3 time window, with more negative ERPs for forgetting versus remembering cues. These results demonstrated that directed forgetting involves two stages: task-relevance identification and information discarding. The cue effects during the N2 epoch supported the view that directed forgetting is an active process. © 2016 Society for Psychophysiological Research.

  17. Computational Modelling of Large Scale Phage Production Using a Two-Stage Batch Process

    Directory of Open Access Journals (Sweden)

    Konrad Krysiak-Baltyn

    2018-04-01

    Full Text Available Cost effective and scalable methods for phage production are required to meet an increasing demand for phage, as an alternative to antibiotics. Computational models can assist the optimization of such production processes. A model is developed here that can simulate the dynamics of phage population growth and production in a two-stage, self-cycling process. The model incorporates variable infection parameters as a function of bacterial growth rate and employs ordinary differential equations, allowing application to a setup with multiple reactors. The model provides simple cost estimates as a function of key operational parameters including substrate concentration, feed volume and cycling times. For the phage and bacteria pairing examined, costs and productivity varied by three orders of magnitude, with the lowest cost found to be most sensitive to the influent substrate concentration and low level setting in the first vessel. An example case study of phage production is also presented, showing how parameter values affect the production costs and estimating production times. The approach presented is flexible and can be used to optimize phage production at laboratory or factory scale by minimizing costs or maximizing productivity.
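
    A stripped-down version of such a model can be written as ordinary differential equations for substrate, uninfected hosts, infected hosts, and free phage. The sketch below is a generic batch phage-host model with invented parameter values, not the paper's two-stage self-cycling process or its cost layer.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    # Batch phage-host dynamics. S: substrate (g/mL), X: uninfected and
    # I: infected bacteria (cells/mL), P: free phage (pfu/mL).
    mu_max, Ks, Y = 0.9, 2e-4, 5e11     # 1/h, g/mL, cells per g substrate
    k_ads, tau, burst = 1e-9, 0.5, 100  # mL/h, latent period (h), burst size

    def rhs(t, y):
        S, X, I, P = y
        mu = mu_max * S / (Ks + S)            # Monod growth rate
        infection = k_ads * X * P             # mass-action adsorption
        return [-mu * X / Y,                  # substrate consumption
                mu * X - infection,           # uninfected hosts
                infection - I / tau,          # infected hosts lyse after ~tau
                burst * I / tau - infection]  # phage release minus adsorption

    sol = solve_ivp(rhs, (0.0, 12.0), [5e-3, 1e6, 0.0, 1e4], max_step=0.01)
    print(f"final phage titer ~ {sol.y[3, -1]:.3g} pfu/mL")
    ```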

  18. Improving neuromodulation technique for refractory voiding dysfunctions: two-stage implant.

    Science.gov (United States)

    Janknegt, R A; Weil, E H; Eerdmans, P H

    1997-03-01

    Neuromodulation is a new technique that uses electrical stimulation of the sacral nerves for patients with refractory urinary urge/frequency or urge-incontinence, and some forms of urinary retention. The limiting factor for receiving an implant is often a failure of the percutaneous nerve evaluation (PNE) test. Published reports mention a success rate of only about 50% for the PNE across all patients, even though the micturition diaries and urodynamic parameters are similar. We wanted to investigate whether PNE results improved by using a permanent electrode as a PNE test, which would show that improvement of the PNE technique is feasible. In 10 patients in whom the original PNE had failed to improve the micturition diary parameters by more than 50%, a permanent electrode was surgically implanted and connected to an external stimulator. In those cases where the patients improved according to their micturition diary by more than 50% during a period of 4 days, the external stimulator was replaced by a permanent subcutaneous neurostimulator. Eight of the 10 patients had a good to very good result (60% to 90% improvement) during the testing period and received their implant 5 to 14 days after the first stage. The good results of the two-stage implant technique we used indicate that the development of better PNE electrodes may lead to an improvement of the testing technique and better selection between nonresponders and technical failures.

  19. Performance analysis of a potassium-steam two stage vapour cycle

    International Nuclear Information System (INIS)

    Mitachi, Kohshi; Saito, Takeshi

    1983-01-01

    It is an important subject to raise the thermal efficiency of thermal power plants. In present thermal power plants using the steam cycle, the plant thermal efficiency has already reached 41 to 42%, with a steam temperature of 839 K and a steam pressure of 24.2 MPa; that is, the thermal efficiency of the steam cycle is approaching its limit. In this study, an analysis was made of the performance of a metal vapour/steam two-stage Rankine cycle obtained by combining a metal vapour topping cycle with a present-day steam cycle. Three different combinations - a high temperature potassium regenerative cycle with a low temperature steam regenerative cycle, a potassium regenerative cycle with a steam reheat and regenerative cycle, and a potassium bleed cycle with a steam reheat and regenerative cycle - were systematically analyzed for the overall thermal efficiency, the output ratio and the flow rate ratio, when the inlet temperature of the potassium turbine, the temperature of the potassium condenser, and other parameters were varied. Though the overall thermal efficiency is improved by lowering the condensing temperature of the potassium vapour, this is limited by construction constraints because the specific volume of potassium in the low pressure section increases greatly. In the combination of a potassium vapour regenerative cycle with a steam regenerative cycle, the overall thermal efficiency can reach 58.5%, and 60.2% if a steam reheat and regenerative cycle is employed. If a cycle that heats steam with vapour bled from the potassium cycle is adopted, an overall thermal efficiency of 63.3% is expected. (Wakatsuki, Y.)
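
    The headline figures follow from the standard topping/bottoming relation: when the steam cycle is driven by the heat rejected by the potassium cycle, the overall efficiency is eta = eta_K + (1 − eta_K)·eta_steam. A one-line check with illustrative stage efficiencies (assumed values, not taken from the paper):

    ```python
    # Combined efficiency of a topping + bottoming vapour cycle pair when
    # the bottoming cycle absorbs the heat rejected by the topping cycle.
    def combined_efficiency(eta_top, eta_bottom):
        return eta_top + (1.0 - eta_top) * eta_bottom

    # Assumed stage efficiencies; the result lands near the reported 58.5%.
    print(f"{combined_efficiency(0.25, 0.45):.1%}")  # -> 58.8%
    ```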

  20. Comparison of Microalgae Cultivation in Photobioreactor, Open Raceway Pond, and a Two-Stage Hybrid System

    Energy Technology Data Exchange (ETDEWEB)

    Narala, Rakesh R.; Garg, Sourabh; Sharma, Kalpesh K.; Thomas-Hall, Skye R.; Deme, Miklos; Li, Yan; Schenk, Peer M., E-mail: p.schenk@uq.edu.au [Algae Biotechnology Laboratory, School of Agriculture and Food Sciences, The University of Queensland, Brisbane, QLD (Australia)

    2016-08-02

    In the wake of intensive fossil fuel usage and CO2 accumulation in the environment, research is targeted toward sustainable alternative bioenergy that can satisfy the growing need for fuel while leaving a minimal carbon footprint. Oil production from microalgae can potentially be carried out more efficiently, leaving a smaller footprint and without competing for arable land or biodiverse landscapes. However, current algae cultivation systems and lipid induction processes must be significantly improved and are threatened by contamination with other algae or algal grazers. To address this issue, we have developed an efficient two-stage cultivation system using the marine microalga Tetraselmis sp. M8. This hybrid system combines exponential biomass production in positive-pressure airlift-driven bioreactors with a separate synchronized high-lipid induction phase in nutrient-deplete open raceway ponds. A comparison to either the bioreactor or the open raceway pond cultivation system suggests that this process potentially leads to significantly higher productivity of algal lipids. Nutrients are only added to the closed bioreactors, while open raceway ponds have turnovers of only a few days, thus reducing the issue of microalgal grazers.

  1. Modeling two-stage bunch compression with wakefields: Macroscopic properties and microbunching instability

    Directory of Open Access Journals (Sweden)

    R. A. Bosch

    2008-09-01

    Full Text Available In a two-stage compression and acceleration system, where each stage compresses a chirped bunch in a magnetic chicane, wakefields affect high-current bunches. The longitudinal wakes affect the macroscopic energy and current profiles of the compressed bunch and cause microbunching at short wavelengths. For macroscopic wavelengths, impedance formulas and tracking simulations show that the wakefields can be dominated by the resistive impedance of coherent edge radiation. For this case, we calculate the minimum initial bunch length that can be compressed without producing an upright tail in phase space and associated current spike. Formulas are also obtained for the jitter in the bunch arrival time downstream of the compressors that results from the bunch-to-bunch variation of current, energy, and chirp. Microbunching may occur at short wavelengths where the longitudinal space-charge wakes dominate or at longer wavelengths dominated by edge radiation. We model this range of wavelengths with frequency-dependent impedance before and after each stage of compression. The growth of current and energy modulations is described by analytic gain formulas that agree with simulations.

  2. Feasibility of a two-stage biological aerated filter for depth processing of electroplating-wastewater.

    Science.gov (United States)

    Liu, Bo; Yan, Dongdong; Wang, Qi; Li, Song; Yang, Shaogui; Wu, Wenfei

    2009-09-01

    A "two-stage biological aerated filter" (T-SBAF) consisting of two columns in series was developed to treat electroplating-wastewater. Due to the low BOD/CODcr values of electroplating-wastewater, "twice start-up" was employed to reduce the time for adaptation of microorganisms, a process that takes up of 20 days. Under steady-state conditions, the removal of CODcr and NH(4)(+)-N increased first and then decreased while the hydraulic loadings increased from 0.75 to 1.5 m(3) m(-2) h(-1). The air/water ratio had the same influence on the removal of CODcr and NH(4)(+)-N when increasing from 3:1 to 6:1. When the hydraulic loadings and air/water ratio were 1.20 m(3) m(-2) h(-1) and 4:1, the optimal removal of CODcr, NH(4)(+)-N and total-nitrogen (T-N) were 90.13%, 92.51% and 55.46%, respectively. The effluent steadily reached the wastewater reuse standard. Compared to the traditional BAF, the period before backwashing of the T-SBAF could be extended to 10days, and the recovery time was considerably shortened.

  3. Determinan Tingkat Efisiensi Perbankan Syariah Di Indonesia: Two Stages Data Envelopment Analysis

    Directory of Open Access Journals (Sweden)

    Zulfikar Bagus Pambuko

    2016-12-01

    Full Text Available Efficiency is an important indicator for observing banks' ability to withstand the tight rivalry in the banking industry. This study aims to evaluate the efficiency and analyze the determinants of efficiency of Islamic banks in Indonesia over 2010-2013 with a two-stage Data Envelopment Analysis approach. The objects of the study are 11 Islamic banks (BUS). The first phase of testing, using the Data Envelopment Analysis (DEA) method, showed that Islamic banks are inefficient in managing their resources and that small Islamic banks are more efficient than larger ones. The second phase of testing, using a Tobit model, showed that the Capital Adequacy Ratio (CAR), Return on Assets (ROA), Non-Performing Financing (NPF), Financing to Deposit Ratio (FDR), and Net Interest Margin (NIM) have a positive significant effect on the efficiency of Islamic banks, while Good Corporate Governance (GCG) has a negative significant effect. Moreover, the macroeconomic variables, such as GDP growth and inflation, have no significant effect on the efficiency of Islamic banks. This suggests that achieving the optimum level of Islamic banks' efficiency is related only to bank-specific factors, while the volatility of macroeconomic conditions contributes nothing.
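
    For readers unfamiliar with the first stage, an input-oriented CCR DEA score can be computed per bank as a small linear program; the five-DMU data set below is made up for illustration, and the study's second stage (a Tobit regression of the scores on bank ratios) is not reproduced here.

    ```python
    import numpy as np
    from scipy.optimize import linprog

    # Input-oriented CCR DEA: min theta s.t. X @ lam <= theta * x_j0,
    # Y @ lam >= y_j0, lam >= 0. Toy data: 2 inputs, 1 output, 5 DMUs.
    X = np.array([[4.0, 7.0, 8.0, 4.0, 2.0],
                  [3.0, 3.0, 1.0, 2.0, 4.0]])
    Y = np.array([[1.0, 1.0, 1.0, 1.0, 1.0]])

    def ccr_efficiency(X, Y, j0):
        m, n = X.shape
        s = Y.shape[0]
        c = np.r_[1.0, np.zeros(n)]                # minimize theta
        A_in = np.hstack([-X[:, [j0]], X])         # inputs scaled by theta
        A_out = np.hstack([np.zeros((s, 1)), -Y])  # outputs at least y_j0
        res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                      b_ub=np.r_[np.zeros(m), -Y[:, j0]],
                      bounds=[(0, None)] * (n + 1))
        return res.fun

    for j in range(X.shape[1]):
        print(f"DMU {j}: efficiency = {ccr_efficiency(X, Y, j):.3f}")
    ```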

  4. Enhanced acarbose production by Streptomyces M37 using a two-stage fermentation strategy.

    Directory of Open Access Journals (Sweden)

    Fei Ren

    Full Text Available In this work, we investigated the effect of pH on Streptomyces M37 growth and its acarbose biosynthesis ability. We observed that low pH was beneficial for cell growth, whereas high pH favored acarbose synthesis. Moreover, addition of glucose and maltose to the fermentation medium after 72 h of cultivation promoted acarbose production. Based on these results, a two-stage fermentation strategy was developed to improve acarbose production. Accordingly, pH was kept at 7.0 during the first 72 h and switched to 8.0 after that. At the same time, glucose and maltose were fed to increase acarbose accumulation. With this strategy, we achieved an acarbose titer of 6210 mg/L, representing an 85.7% increase over traditional batch fermentation without pH control. Finally, we determined that the increased acarbose production was related to the high activity of glutamate dehydrogenase and glucose 6-phosphate dehydrogenase.

  5. New Grapheme Generation Rules for Two-Stage Modelbased Grapheme-to-Phoneme Conversion

    Directory of Open Access Journals (Sweden)

    Seng Kheang

    2015-01-01

    Full Text Available The precise conversion of arbitrary text into its corresponding phoneme sequence (grapheme-to-phoneme or G2P conversion) is implemented in speech synthesis and recognition, pronunciation learning software, spoken term detection and spoken document retrieval systems. Because the quality of this module plays an important role in the performance of such systems and many problems regarding G2P conversion have been reported, we propose a novel two-stage model-based approach, which is implemented using an existing weighted finite-state transducer-based G2P conversion framework, to improve the performance of the G2P conversion model. The first-stage model is built for automatic conversion of words to phonemes, while the second-stage model utilizes the input graphemes and output phonemes obtained from the first stage to determine the best final output phoneme sequence. Additionally, we designed new grapheme generation rules, which enable extra detail for the vowel and consonant graphemes appearing within a word. When compared with previous approaches, the evaluation results indicate that our approach using rules focusing on the vowel graphemes slightly improved the accuracy of the out-of-vocabulary dataset and consistently increased the accuracy of the in-vocabulary dataset.

  6. Two-Stage Residual Inclusion Estimation in Health Services Research and Health Economics.

    Science.gov (United States)

    Terza, Joseph V

    2018-06-01

    Empirical analyses in health services research and health economics often require implementation of nonlinear models whose regressors include one or more endogenous variables-regressors that are correlated with the unobserved random component of the model. In such cases, implementation of conventional regression methods that ignore endogeneity will likely produce results that are biased and not causally interpretable. Terza et al. (2008) discuss a relatively simple estimation method that avoids endogeneity bias and is applicable in a wide variety of nonlinear regression contexts. They call this method two-stage residual inclusion (2SRI). In the present paper, I offer a 2SRI how-to guide for practitioners and a step-by-step protocol that can be implemented with any of the popular statistical or econometric software packages. We introduce the protocol and its Stata implementation in the context of a real data example. Implementation of 2SRI for a very broad class of nonlinear models is then discussed. Additional examples are given. We analyze cigarette smoking as a determinant of infant birthweight using data from Mullahy (1997). It is hoped that the discussion will serve as a practical guide to implementation of the 2SRI protocol for applied researchers. © Health Research and Educational Trust.
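
    A minimal sketch of the 2SRI recipe on simulated data, written in Python with statsmodels rather than the paper's Stata protocol: stage 1 regresses the endogenous regressor on the instrument and saves the residuals; stage 2 includes those residuals as an extra regressor in the nonlinear outcome model (a probit here). The variable names and data-generating process are invented for illustration.

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 5000
    z = rng.normal(size=n)                    # instrument
    u = rng.normal(size=n)                    # unobserved confounder
    x_endog = 0.8 * z + u + rng.normal(size=n)
    y = (x_endog + u + rng.normal(size=n) > 0).astype(float)

    # Stage 1: linear first stage, keep the residuals.
    stage1 = sm.OLS(x_endog, sm.add_constant(z)).fit()

    # Stage 2: nonlinear outcome model with the residuals included;
    # the residual term soaks up the endogenous variation in x_endog.
    X2 = sm.add_constant(np.column_stack([x_endog, stage1.resid]))
    stage2 = sm.Probit(y, X2).fit(disp=0)
    print(stage2.params)   # [const, x_endog, first-stage residual]
    ```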

  7. A Two-Stage Reconstruction Processor for Human Detection in Compressive Sensing CMOS Radar.

    Science.gov (United States)

    Tsao, Kuei-Chi; Lee, Ling; Chu, Ta-Shun; Huang, Yuan-Hao

    2018-04-05

    Complementary metal-oxide-semiconductor (CMOS) radar has recently gained much research attention because small and low-power CMOS devices are very suitable for deploying sensing nodes in a low-power wireless sensing system. This study focuses on the signal processing of a wireless CMOS impulse radar system that can detect humans and objects in the home-care internet-of-things sensing system. The challenges of low-power CMOS radar systems are the weakness of human signals and the high computational complexity of the target detection algorithm. The compressive sensing-based detection algorithm can relax the computational costs by avoiding the utilization of matched filters and reducing the analog-to-digital converter bandwidth requirement. The orthogonal matching pursuit (OMP) is one of the popular signal reconstruction algorithms for compressive sensing radar; however, the complexity is still very high because the high resolution of human respiration leads to high-dimension signal reconstruction. Thus, this paper proposes a two-stage reconstruction algorithm for compressive sensing radar. The proposed algorithm not only has lower complexity than the OMP algorithm by 75% but also achieves better positioning performance than the OMP algorithm, especially in noisy environments. This study also designed and implemented the algorithm using a Virtex-7 FPGA chip (Xilinx, San Jose, CA, USA). The proposed reconstruction processor can support the 256 × 13 real-time radar image display with a throughput of 28.2 frames per second.
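
    The OMP baseline that the paper improves on is compact enough to sketch in NumPy. This is plain OMP on a simulated sparse recovery problem, not the proposed two-stage processor:

    ```python
    import numpy as np

    def omp(A, y, k):
        """Orthogonal matching pursuit: recover a k-sparse x from y = A @ x."""
        residual, support = y.copy(), []
        for _ in range(k):
            # Greedily pick the column most correlated with the residual.
            j = int(np.argmax(np.abs(A.T @ residual)))
            support.append(j)
            # Re-fit least squares on the support, update the residual.
            coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
            residual = y - A[:, support] @ coef
        x = np.zeros(A.shape[1])
        x[support] = coef
        return x

    rng = np.random.default_rng(1)
    A = rng.normal(size=(64, 256)) / np.sqrt(64)    # sensing matrix
    x_true = np.zeros(256)
    x_true[[10, 50, 200]] = [1.0, -0.7, 0.4]
    x_hat = omp(A, A @ x_true, k=3)
    print(np.nonzero(x_hat)[0])                     # recovered support
    ```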

  8. Two-stage categorization in brand extension evaluation: electrophysiological time course evidence.

    Directory of Open Access Journals (Sweden)

    Qingguo Ma

    Full Text Available A brand name can be considered a mental category. Similarity-based categorization theory has been used to explain how consumers judge a new product as a member of a known brand, a process called brand extension evaluation. This event-related potential study was conducted in two experiments. The study found a two-stage categorization process reflected by the P2 and N400 components in brand extension evaluation. In experiment 1, a prime-probe paradigm was presented in a pair consisting of a brand name and a product name in three conditions, i.e., in-category extension, similar-category extension, and out-of-category extension. Although the task was unrelated to brand extension evaluation, P2 distinguished out-of-category extensions from similar-category and in-category ones, and N400 distinguished similar-category extensions from in-category ones. In experiment 2, a prime-probe paradigm with a related task was used, in which product names included subcategory and major-category product names. The N400 elicited by subcategory products was more significantly negative than that elicited by major-category products, with no salient difference in P2. We speculated that P2 could reflect the early low-level and similarity-based processing in the first stage, whereas N400 could reflect the late analytic and category-based processing in the second stage.

  9. A CURRENT MIRROR BASED TWO STAGE CMOS CASCODE OP-AMP FOR HIGH FREQUENCY APPLICATION

    Directory of Open Access Journals (Sweden)

    RAMKRISHNA KUNDU

    2017-03-01

    Full Text Available This paper presents a low-power, high-slew-rate, high-gain, ultra-wideband two-stage CMOS cascode operational amplifier for radio frequency applications. A current-mirror-based cascoding technique and a pole-zero cancellation technique are used to improve the gain and enhance the unity-gain bandwidth, respectively, which is the novelty of the circuit. In the cascode technique a common-source transistor drives a common-gate transistor. The cascoding is used to enhance the output resistance and hence improve the overall gain of the operational amplifier with less complexity and less power dissipation. To bias the common-gate transistor, a current mirror is used in this paper. The proposed circuit is designed and simulated using Cadence analog and digital system design tools in 45-nanometer CMOS technology. The simulated results of the circuit show a DC gain of 63.62 dB, a unity-gain bandwidth of 2.70 GHz, a slew rate of 1816 V/µs, a phase margin of 59.53º, a power supply of 1.4 V (rail-to-rail ±700 mV), and a power consumption of 0.71 mW. This circuit specification meets the requirements of radio frequency applications.

  10. A New Two-Stage Approach to Short Term Electrical Load Forecasting

    Directory of Open Access Journals (Sweden)

    Dragan Tasić

    2013-04-01

    Full Text Available In the deregulated energy market, the accuracy of load forecasting has a significant effect on the planning and operational decision making of utility companies. Electric load is a random non-stationary process influenced by a number of factors which make it difficult to model. To achieve better forecasting accuracy, a wide variety of models have been proposed. These models are based on different mathematical methods and offer different features. This paper presents a new two-stage approach for short-term electrical load forecasting based on least-squares support vector machines. With the aim of improving forecasting accuracy, one more feature was added to the model feature set: the next-day average load demand. As this feature is unknown one day ahead, in the first stage the next-day average load demand is forecast, and it is then used in the model in the second stage for next-day hourly load forecasting. The effectiveness of the presented model is shown on real data from the ISO New England electricity market. The obtained results confirm the validity and advantage of the proposed approach.
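
    The two-stage idea can be sketched on synthetic data. scikit-learn's SVR stands in for the paper's least-squares SVM (which scikit-learn does not ship), and the load series, features, and hourly shape below are all invented for illustration:

    ```python
    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVR

    rng = np.random.default_rng(0)
    days = 120
    daily_avg = 100 + 10 * np.sin(np.arange(days) / 7) + rng.normal(0, 2, days)

    # Stage 1: forecast a day's average load from the previous 3 averages.
    X1 = np.column_stack([daily_avg[i:i + days - 3] for i in range(3)])
    y1 = daily_avg[3:]
    stage1 = make_pipeline(StandardScaler(), SVR(C=100.0)).fit(X1[:-1], y1[:-1])
    avg_hat = stage1.predict(X1[-1:])[0]       # next-day average forecast

    # Stage 2: hourly load from hour-of-day plus the daily average feature.
    hours = np.arange(24)
    shape = 0.8 + 0.4 * np.sin((hours - 6) * np.pi / 12)
    X2 = np.column_stack([np.tile(hours, days - 4),
                          np.repeat(y1[:-1], 24)])
    y2 = np.concatenate([a * shape + rng.normal(0, 1, 24) for a in y1[:-1]])
    stage2 = make_pipeline(StandardScaler(), SVR(C=100.0)).fit(X2, y2)

    X_next = np.column_stack([hours, np.full(24, avg_hat)])
    print(stage2.predict(X_next))              # next-day hourly forecast
    ```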

  11. Two-Stage Chaos Optimization Search Application in Maximum Power Point Tracking of PV Array

    Directory of Open Access Journals (Sweden)

    Lihua Wang

    2014-01-01

    Full Text Available In order to deliver the maximum available power to the load under conditions of varying solar irradiation and environment temperature, maximum power point tracking (MPPT) technologies have been used widely in PV systems. Among all the MPPT schemes, the chaos method is one of the hot topics in recent years. In this paper, a novel two-stage chaos optimization method is presented which makes the search faster and more effective. In the proposed chaos search, an improved logistic mapping with better ergodicity is used as the first carrier process. After finding the current optimal solution with a certain guarantee, a power-function carrier is used as the secondary carrier process to reduce the search space of the optimized variables and eventually find the maximum power point. Compared with the traditional chaos search method, the proposed method can track changes quickly and accurately and also has better optimization results. The proposed method provides a new efficient way to track the maximum power point of a PV array.
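
    A toy version of the two-stage chaos search is easy to write down: stage 1 explores the voltage range with a logistic-map carrier; stage 2 refines with a carrier confined to a shrinking window around the stage-1 best point. The P-V curve, constants, and window schedule below are all illustrative assumptions:

    ```python
    import numpy as np

    def pv_power(v, v_oc=40.0):
        """Crude diode-style P-V curve, illustrative only."""
        i = 8.0 * (1.0 - np.exp((v - v_oc) / 3.0))
        return max(0.0, v * i)

    lo, hi = 0.0, 40.0
    x, best_v, best_p = 0.345, 0.0, -np.inf

    # Stage 1: logistic map x <- 4x(1-x), ergodic on (0, 1).
    for _ in range(200):
        x = 4.0 * x * (1.0 - x)
        v = lo + (hi - lo) * x
        p = pv_power(v)
        if p > best_p:
            best_v, best_p = v, p

    # Stage 2: second carrier in a window around best_v shrinking as 1/t.
    radius = 2.0
    for t in range(1, 201):
        x = 4.0 * x * (1.0 - x)
        v = float(np.clip(best_v + radius * (2.0 * x - 1.0) / t, lo, hi))
        p = pv_power(v)
        if p > best_p:
            best_v, best_p = v, p

    print(f"V* = {best_v:.2f} V, P* = {best_p:.1f} W")
    ```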

  12. [Comparison research on two-stage sequencing batch MBR and one-stage MBR].

    Science.gov (United States)

    Yuan, Xin-Yan; Shen, Heng-Gen; Sun, Lei; Wang, Lin; Li, Shi-Feng

    2011-01-01

    Aiming at resolving problems in MBR operation, such as low nitrogen and phosphorus removal efficiency and severe membrane fouling, a comparison between a two-stage sequencing batch MBR (TSBMBR) and a one-stage aerobic MBR has been made in this paper. The results indicated that the TSBMBR owns the advantages of an SBR in removing nitrogen and phosphorus, which could make up for the deficiency of the traditional one-stage aerobic MBR in nitrogen and phosphorus removal. During the steady operation period, the average effluent NH4+-N, TN and TP concentrations were 2.83, 12.20 and 0.42 mg/L, which could meet the standard for domestic scenic environment use. From the membrane fouling control point of view, the TSBMBR has lower SMP in the supernatant, a lower specific trans-membrane flux decline rate, and lower membrane fouling resistance than the one-stage aerobic MBR. The sedimentation and gel layer resistance of the TSBMBR was only 6.5% and 33.12% of that of the one-stage aerobic MBR. Besides its high efficiency in removing nitrogen and phosphorus, the TSBMBR could effectively reduce sedimentation and gel layer fouling on the membrane surface. Compared with the one-stage MBR, the TSBMBR could operate with higher trans-membrane flux, a lower membrane fouling rate and better pollutant removal.

  13. A two-stage approach to the depot shunting driver assignment problem with workload balance considerations.

    Science.gov (United States)

    Wang, Jiaxi; Gronalt, Manfred; Sun, Yan

    2017-01-01

    Due to its environmentally sustainable and energy-saving characteristics, railway transportation nowadays plays a fundamental role in delivering passengers and goods. Within the area of transportation planning, the crew (workforce) sizing problem and the crew scheduling problem have received great attention from the railway industry and the scientific community. In this paper, we aim to solve the two problems by proposing a novel two-stage optimization approach in the context of the electric multiple units (EMU) depot shunting driver assignment problem. Given a predefined depot shunting schedule, the first stage of the approach focuses on determining an optimal size of the shunting driver pool. The second stage is formulated as a bi-objective optimization model, in which we jointly consider the objectives of minimizing the total walking distance and maximizing the workload balance. We then combine the normalized normal constraint method with a modified Pareto filter algorithm to obtain Pareto solutions for the bi-objective optimization problem. Furthermore, we conduct a series of numerical experiments to demonstrate the proposed approach. Based on the computational results, the regression analysis yields a driver size predictor and the sensitivity analysis gives some interesting insights that are useful for decision makers.

  14. Comparison of Paired ROC Curves through a Two-Stage Test.

    Science.gov (United States)

    Yu, Wenbao; Park, Eunsik; Chang, Yuan-Chin Ivan

    2015-01-01

    The area under the receiver operating characteristic (ROC) curve (AUC) is a popularly used index when comparing two ROC curves. Statistical tests based on it for analyzing the difference have been well developed. However, this index is less informative when two ROC curves cross and have similar AUCs. In order to detect differences between ROC curves in such situations, a two-stage nonparametric test that uses a shifted area under the ROC curve (sAUC), along with AUCs, is proposed for paired designs. The new procedure is shown, numerically, to be effective in terms of power under a wide range of scenarios; additionally, it outperforms two conventional ROC-type tests, especially when two ROC curves cross each other and have similar AUCs. Larger sAUC implies larger partial AUC at the range of low false-positive rates in this case. Because high specificity is important in many classification tasks, such as medical diagnosis, this is an appealing characteristic. The test also implicitly analyzes the equality of two commonly used binormal ROC curves at every operating point. We also apply the proposed method to synthesized data and two real examples to illustrate its usefulness in practice.
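
    As background, the nonparametric AUC that both stages build on is simply the normalized Mann-Whitney statistic, sketched below on simulated scores; the paper's shifted sAUC, which emphasizes the low false-positive range, is study-specific and not reproduced here.

    ```python
    import numpy as np

    def auc_mann_whitney(neg, pos):
        """P(score_pos > score_neg) + 0.5 * P(tie)."""
        diff = pos[:, None] - neg[None, :]
        return (diff > 0).mean() + 0.5 * (diff == 0).mean()

    rng = np.random.default_rng(0)
    neg = rng.normal(0.0, 1.0, 300)          # non-diseased scores
    pos = rng.normal(1.0, 1.5, 300)          # diseased: shifted, wider spread
    print(f"AUC = {auc_mann_whitney(neg, pos):.3f}")
    ```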

  15. A high-power two stage traveling-wave tube amplifier

    International Nuclear Information System (INIS)

    Shiffler, D.; Nation, J.A.; Schachter, L.; Ivers, J.D.; Kerslick, G.S.

    1991-01-01

    Results are presented on the development of a two stage high-efficiency, high-power 8.76-GHz traveling-wave tube amplifier. The work presented augments previously reported data on a single stage amplifier and presents new data on the operational characteristics of two identical amplifiers operated in series and separated from each other by a sever. Peak powers of 410 MW have been obtained over the complete pulse duration of the device, with a conversion efficiency from the electron beam to microwave energy of 45%. In all operating conditions the severed amplifier showed a ''sideband''-like structure in the frequency spectrum of the microwave radiation. A similar structure was apparent at output powers in excess of 70 MW in the single stage device. The frequencies of the ''sidebands'' are not symmetric with respect to the center frequency. The maximum, single frequency, average output power was 210 MW corresponding to an amplifier efficiency of 24%. Simulation data is also presented that indicates that the short amplifiers used in this work exhibit significant differences in behavior from conventional low-power amplifiers. These include finite length effects on the gain characteristics, which may account for the observed narrow bandwidth of the amplifiers and for the appearance of the sidebands. It is also found that the bunching length for the beam may be a significant fraction of the total amplifier length.

  16. Addition of seaweed and bentonite accelerates the two-stage composting of green waste.

    Science.gov (United States)

    Zhang, Lu; Sun, Xiangyang

    2017-11-01

    Green waste (GW) is an important recyclable resource, and composting is an effective technology for the recycling of organic solid waste, including GW. This study investigated the changes in physical and chemical characteristics during the two-stage composting of GW with or without addition of seaweed (SW, Ulva ohnoi) (at 0, 35, and 55%) and bentonite (BT) (at 0.0, 2.5%, and 4.5%). During the bio-oxidative phase, the combined addition of SW and BT improved the physicochemical conditions, increased the respiration rate and enzyme activities, and decreased ammonia and nitrous oxide emissions. The combination of SW and BT also enhanced the quality of the final compost in terms of water-holding capacity, porosity, particle-size distribution, water soluble organic carbon/organic nitrogen ratio, humification, nutrient content, and phytotoxicity. The best quality compost, which matured in only 21 days, was obtained with 35% SW and 4.5% BT. Copyright © 2017 Elsevier Ltd. All rights reserved.

  17. Chromium (VI) removal from aqueous solutions through powdered activated carbon countercurrent two-stage adsorption.

    Science.gov (United States)

    Wang, Wenqiang

    2018-01-01

    To exploit the adsorption capacity of commercial powdered activated carbon (PAC) and to improve the efficiency of Cr(VI) removal from aqueous solutions, the adsorption of Cr(VI) by commercial PAC in a countercurrent two-stage adsorption (CTA) process was investigated. Different adsorption kinetics models and isotherms were compared, and the pseudo-second-order model and the Langmuir and Freundlich models fit the experimental data well. The Cr(VI) removal efficiency was >80% and was improved by 37% through the CTA process compared with the conventional single-stage adsorption process when the initial Cr(VI) concentration was 50 mg/L with a PAC dose of 1.250 g/L and a pH of 3. A method for calculating the effluent Cr(VI) concentration and the PAC dose was developed for the CTA process, and the validity of the method was confirmed by a deviation of <5%. Copyright © 2017. Published by Elsevier Ltd.
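
    The flavor of such a calculation can be sketched as a pair of stage mass balances with Freundlich equilibrium, solved simultaneously; the isotherm constants below are invented placeholders, not the paper's fitted parameters.

    ```python
    from scipy.optimize import fsolve

    # Countercurrent two-stage adsorption: water passes stage 1 then stage 2;
    # fresh PAC enters stage 2 and, partially loaded, is reused in stage 1.
    C0 = 50.0        # influent Cr(VI), mg/L (from the abstract)
    dose = 1.25      # PAC dose, g/L (from the abstract)
    K, n = 5.0, 2.0  # Freundlich q = K * C**(1/n); placeholder constants

    def q(c):
        return K * max(c, 0.0) ** (1.0 / n)

    def balances(conc):
        C1, C2 = conc
        q2 = q(C2)   # fresh PAC equilibrates with stage-2 water
        q1 = q(C1)   # reused PAC equilibrates with stage-1 water
        return [C0 - C1 - dose * (q1 - q2),   # stage 1 mass balance
                C1 - C2 - dose * q2]          # stage 2 mass balance

    C1, C2 = fsolve(balances, [C0 / 2.0, C0 / 10.0])
    print(f"effluent = {C2:.2f} mg/L, removal = {100 * (1 - C2 / C0):.1f}%")
    ```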

  18. Design and Analysis of a Split Deswirl Vane in a Two-Stage Refrigeration Centrifugal Compressor

    Directory of Open Access Journals (Sweden)

    Jeng-Min Huang

    2014-09-01

    Full Text Available This study numerically investigated the influence of using the second row of a double-row deswirl vane as the inlet guide vane of the second stage on the performance of the first stage in a two-stage refrigeration centrifugal compressor. The working fluid was R134a, and the turbulence model was the Spalart-Allmaras model. The parameters discussed included the cutting position of the deswirl vane, the staggered angle of the two rows of vanes, and the rotation angle of the second row. The results showed that the performance at a staggered angle of 7.5° was better than that at 15° or 22.5°. When the staggered angle was 7.5°, the performance of cutting at 1/3 and 1/2 of the original deswirl vane length differed only slightly from that of the original vane but was clearly better than that of cutting at 2/3. When the staggered angle was 15°, the cutting position influenced the performance only slightly. At a low flow rate prone to surge, when the second row, staggered at 7.5° and cut at half the vane length, was rotated by 10°, the efficiency was reduced by only about 0.6%, and 10% of the swirl remained as preswirl for the second stage, which is generally better than the other designs.